

Artificial Intelligence and the threat to national security


By Joseph S. Nye, Jr.

ASPEN – Humans are a tool-making species, but can we control the tools we make? When Robert Oppenheimer and other physicists developed the first nuclear fission weapon in the 1940s, they worried that their invention might destroy humanity. Thus far, it has not, but controlling nuclear weapons has been a persistent challenge ever since.

Now, many scientists see artificial intelligence—algorithms and software that enable machines to perform tasks that typically require human intelligence—as an equally transformational tool. Like previous general-purpose technologies, AI has enormous potential for good and evil. In cancer research, it can sort through and summarise more studies in a few minutes than a human team could over the course of months. Likewise, it can reliably predict patterns of protein folding that would take human researchers many years to uncover.

But AI also lowers the costs and the barriers to entry for misfits, terrorists, and other bad actors who might wish to cause harm. As a recent RAND study warned, “the marginal cost to resurrect a dangerous virus similar to smallpox can be as little as $100,000, while developing a complex vaccine can be over $1 billion.”

Moreover, some experts worry that advanced AI will be so much smarter than humans that it will control us, rather than the other way around. Estimates of how long it will take to develop such superintelligent machines—known as artificial general intelligence—vary from a few years to a few decades. But whatever the case, the growing risks from today’s narrow AI already demand greater attention.

For 40 years, the Aspen Strategy Group (consisting of former government officials, academics, businesspeople, and journalists) has met each summer to focus on a major national-security problem. Past sessions have dealt with subjects such as nuclear weapons, cyber-attacks, and the rise of China. This year, we focused on AI’s implications for national security, examining the benefits as well as the risks.

Among the benefits are a greater ability to sort through enormous amounts of intelligence data, strengthen early-warning systems, improve complicated logistical systems, and inspect computer code to improve cybersecurity. But there are also big risks, such as advances in autonomous weapons, accidental errors in programming algorithms, and adversarial AIs that can weaken cybersecurity.

China has been making massive investments in the broader AI arms race, and it also boasts some structural advantages. The three key resources for AI are data to train the models; smart engineers to develop algorithms; and computing power to run them. China has few legal or privacy limits on access to data (though ideology constrains some datasets), and it is well supplied with bright young engineers. The area where it is most behind the United States is in the advanced microchips that produce the computing power for AI.

American export controls limit China’s access to these frontier chips, as well as to the costly Dutch lithography machines that make them. The consensus among the Aspen experts was that China is a year or two behind the US; but the situation remains volatile. Although Presidents Joe Biden and Xi Jinping agreed to hold bilateral discussions on AI when they met last fall, there was little optimism in Aspen about the prospects for AI arms control.

Autonomous weapons pose a particularly serious threat. After more than a decade of diplomacy at the United Nations, countries have failed to agree on a ban on lethal autonomous weapons. International humanitarian law requires that militaries discriminate between armed combatants and civilians, and the Pentagon has long required that a human be in the decision-making loop before a weapon is fired. But in some contexts, such as defending against incoming missiles, there is no time for human intervention.

Since the context matters, humans must tightly define (in the code) what weapons can and cannot do. In other words, there should be a human “on the loop” rather than “in the loop.” This is not just some speculative question. In the Ukraine war, the Russians jam Ukrainian forces’ signals, compelling the Ukrainians to programme their devices for autonomous final decision-making about when to fire.

One of the most frightening dangers of AI is its application to biological warfare or terrorism. When countries agreed to ban biological weapons in 1972, the common belief was that such devices were not useful, owing to the risk of “blowback” on one’s own side. But with synthetic biology, it may be possible to develop a weapon that destroys one group but not another. Or a terrorist with access to a laboratory may simply want to kill as many people as possible, as the Aum Shinrikyo doomsday cult did in Japan in 1995. (While they used sarin, which is non-transmissible, their modern equivalent could use AI to develop a contagious virus.)

In the case of nuclear technology, countries agreed, in 1968, on a non-proliferation treaty that now has 191 members. The International Atomic Energy Agency regularly inspects domestic energy programmes to confirm that they are being used solely for peaceful purposes. And despite intense Cold War competition, the leading countries in nuclear technology agreed, in 1978, to practise restraint in the export of the most sensitive facilities and technical knowledge. Such a precedent suggests some paths for AI, though there are obvious differences between the two technologies.

It is a truism that technology moves faster than policy or diplomacy, especially when it is driven by intense market competition in the private sector. If there was one major conclusion of this year’s Aspen Strategy Group meeting, it was that governments need to pick up their pace.

(Joseph S. Nye, Jr., Co-Chair of the Aspen Strategy Group, is a former dean of the Harvard Kennedy School, a former US assistant secretary of defense, and the author, most recently, of A Life in the American Century, Polity Press, 2024.)

Copyright: Project Syndicate, 2024. www.project-syndicate.org
