The End of Mutual Assured Destruction?


The rapid development of artificial intelligence in recent years has led many analysts to suggest that it will upend international politics and the military balance of power. Some have gone so far as to claim, in the words of the technologists Dan Hendrycks, Eric Schmidt, and Alexandr Wang, that advanced AI systems could “establish one state’s complete dominance and control, leaving the fate of rivals subject to its will.”

AI is no doubt a transformative technology, one that will strengthen the economic, political, and military foundations of state power. But the winner of the AI race will not necessarily enjoy unchallenged dominance over its major competitors. The power of nuclear weapons, the most significant invention of the last century, remains a major impediment to the bulldozing change brought by AI. As long as systems of nuclear deterrence remain in place, the economic and military advantages produced by AI will not allow states to fully impose their political preferences on one another. Consider that the economy of the United States is almost 15 times as large as that of Russia, and almost 1,000 times as large as that of North Korea, yet Washington struggles to get Moscow or Pyongyang to do what it wants, in large part because of their nuclear arsenals.

Some analysts have suggested that AI advances could challenge this dynamic. To undermine nuclear deterrence, AI would need to knock down its central pillar: a state’s capacity to respond to a nuclear attack with a devastating nuclear strike of its own, what is known as second-strike capability. AI technology could plausibly make it easier for a state to destroy a rival’s entire nuclear arsenal in one “splendid first strike” by pinpointing the locations of nuclear submarines and mobile launchers. It could also prevent a rival from launching a retaliatory strike by disabling command-and-control networks. And it could strengthen missile defenses such that a rival could no longer credibly threaten retaliation. If AI could in this way help a state escape the prospect of mutual assured destruction, the technology would make that state unrivaled in its capacity to threaten and coerce adversaries—an outcome in line with increasingly popular visions of AI-enabled dominance.

But undermining the nuclear balance of power will not be easy. Emerging technologies still face very real constraints in the nuclear domain. Even the most sophisticated AI-powered targeting and sensor systems may struggle to locate a mobile nuclear launcher hidden under a bridge, isolate the signatures of a nuclear-armed submarine from the background noise of the ocean, and orchestrate the simultaneous destruction of hundreds of targets on land, in the air, and at sea—with zero room for error. And competitors will respond to their adversaries’ use of new technology with moves of their own to defend their systems, as they have at every turn since the dawn of the atomic age.
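
How unforgiving is “zero room for error”? The compounding arithmetic can be sketched in a few lines of Python. Both numbers below are hypothetical assumptions chosen only to illustrate the logic, not estimates of any real arsenal:

```python
# Illustrative sketch: the compounding arithmetic of a "splendid first strike."
# Both numbers are hypothetical assumptions, not estimates of any real arsenal.

p_per_target = 0.98   # assumed probability of destroying any single target
n_targets = 300       # assumed number of launchers, submarines, and command posts

# Every target must be destroyed at once; a single survivor can retaliate.
p_complete_success = p_per_target ** n_targets

print(f"Chance of destroying all {n_targets} targets: {p_complete_success:.2%}")
# With these assumptions: roughly 0.23%, meaning a 99.77% chance that
# at least one retaliatory weapon survives the strike.
```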

Yet even if it does not challenge nuclear deterrence, AI may encourage mistrust and dangerous actions among nuclear-armed states. Many of the steps that governments could take to protect and toughen their second-strike capabilities risk alarming rivals, potentially spurring expensive and dangerous arms races. It also remains possible that AI systems could cross a crucial threshold and exhibit extremely rapid improvements in capabilities. Were that to happen, their advantages to the country that possesses them could become more pronounced and difficult for rivals to contend with. Policymakers should monitor for such a scenario and facilitate regular communication between AI and nuclear experts. At the same time, they should take steps to reduce the probability of accidents and escalation, including assessing nuclear systems for AI-related vulnerabilities and maintaining channels of communication between nuclear powers. Such steps will help ensure that nuclear stability—and not just nuclear deterrence—endures in the age of AI.

FIRST STRIKE

Nuclear deterrence depends, most fundamentally, on states’ possessing the ability to retaliate after absorbing a nuclear attack: as long as two nuclear powers credibly maintain a second-strike capability that can inflict unacceptable damage on their adversary, a first strike is suicidal. This understanding has for decades sustained a relatively stable equilibrium. But second-strike capabilities are not invulnerable. States can eliminate delivery platforms, such as road-mobile missile launchers and nuclear submarines, provided that they can find them. The difficulty of finding and disabling these platforms is one of the central obstacles to launching a splendid first strike. The sheer size of China, Russia, the United States, the Atlantic Ocean, and the Pacific Ocean—the most important domains for nuclear competition today—makes such a strike hard to accomplish.

The emergence of powerful AI systems, however, could solve that problem. Because such systems can process and analyze vast amounts of data, a military equipped with them could better target the nuclear assets of its rivals. Consider ground-launched mobile missiles, one of the platforms that underpin Russian and Chinese second-strike capabilities. These missiles, which are carried on vehicles that can hide under camouflage netting, bridges, or tunnels and drive from one concealed location to another, are probably the most difficult element of Russian and Chinese nuclear forces to eliminate. (Silo-based ballistic missiles, by contrast, are much more vulnerable to attack.) The improved speed and scale of AI-empowered intelligence processing may make it easier to conduct operations against these vehicles. AI systems can scour and integrate huge amounts of data from satellites, reconnaissance aircraft, signals intelligence intercepts, stealth drones, ground-based sensors, and human intelligence to more effectively find and track mobile nuclear forces.
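
The logic of such fusion can be shown with a toy model. In the sketch below, the search grid, sensor behavior, and noise levels are entirely hypothetical; the point is how combining independent, individually ambiguous reports concentrates probability on a hidden target:

```python
# Illustrative sketch of multisource sensor fusion over a search grid.
# Grid size, sensor noise, and the launcher's location are all hypothetical.
import numpy as np

rng = np.random.default_rng(1)
grid = 100                           # assumed 100 x 100 search grid
true_cell = (60, 42)                 # the launcher's hidden location

xs, ys = np.meshgrid(np.arange(grid), np.arange(grid), indexing="ij")
dist2 = (xs - true_cell[0]) ** 2 + (ys - true_cell[1]) ** 2

posterior = np.full((grid, grid), 1.0 / grid**2)   # start fully uncertain

for _ in range(5):                   # five independent, noisy sensor reports
    # Each report is a broad likelihood bump near the truth, plus clutter.
    likelihood = np.exp(-dist2 / (2 * 15.0**2)) + rng.uniform(0.0, 0.2, (grid, grid))
    posterior *= likelihood          # Bayes: multiply in each piece of evidence
    posterior /= posterior.sum()

best = np.unravel_index(posterior.argmax(), posterior.shape)
print(f"Most probable cell after fusion: {best}; actual location: {true_cell}")
# No single report pins the launcher down, but combining them concentrates
# probability near the true cell: the logic of integrating satellite,
# signals, drone, ground-sensor, and human intelligence.
```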

When it comes to the sea, the potential convergence of AI with sensing technologies might make the oceans “transparent,” allowing governments to track ballistic missile submarines in real time. That is a particular concern for the United States, which keeps a much higher percentage of its warheads on submarines than Russia or China does. AI could make it easier to track submarines by automating pattern recognition from multiple types of sensors across massive ocean areas and over long durations. It could also help a state hack into the systems its adversaries use to track their own weapons.

Yet even with the assistance of AI, states will not be absolutely sure that a splendid first strike can knock out a rival’s capacity to retaliate. On land, for instance, China and Russia could respond to improvements in U.S. tracking systems with their own countermeasures. They could invest in antisatellite weapons and jamming capabilities. They could adopt old-fashioned low-tech solutions, such as covering roads with netting or constructing decoys, to increase the number of targets an attacker would need to strike. They could order their launchers to emit fewer signals, making it harder for the United States to track them. They could modify the launchers to move faster, widening the target area U.S. strikes would have to hit. They could even use their own AI systems to inject false information into channels monitored by the U.S. intelligence community.

In the maritime domain, too, AI is unlikely to make the sea fully transparent. Any system will struggle to continuously identify, track, and monitor multiple targets over long ranges and amid ocean background noise, especially as submarines get quieter and oceans noisier. Submarines remain extraordinarily difficult to detect when submerged at depth and operating at low speeds, because of how sound moves underwater, shifting ocean conditions, and the inherent noisiness of the marine environment. At sea, false alarms are frequent and reliable contact is rare. And, as on land, major powers can tip the scales in their favor through various countermeasures: they can jam signals, manipulate sensor data, use undersea sensors and uncrewed vehicles to detect adversary assets, and operate their own submarines in protected bastions close to their home shores. Detection will thus remain a matter of probability, even with the introduction of AI—and states are unlikely to risk a splendid first strike on anything less than a safe bet.
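
The base-rate arithmetic behind those false alarms is easy to illustrate. In the sketch below, the detection rate, false-alarm rate, and number of search cells are all hypothetical assumptions:

```python
# Illustrative sketch of the ocean-surveillance base-rate problem.
# Every number is a hypothetical assumption, not a real sensor specification.

n_cells = 1_000_000    # assumed search-grid cells; the submarine occupies one
p_detect = 0.99        # assumed chance the sensor flags the occupied cell
p_false_alarm = 0.001  # assumed per-cell false-alarm rate (0.1%)

# Expected alarms per sweep: one true contact buried among false ones.
expected_false_alarms = (n_cells - 1) * p_false_alarm

# Probability that any given alarm is the actual submarine (Bayes' rule).
p_real = p_detect / (p_detect + expected_false_alarms)

print(f"Expected false alarms per sweep: {expected_false_alarms:,.0f}")
print(f"Chance a given alarm is real:    {p_real:.3%}")
# With these assumptions, roughly 1,000 false contacts accompany each true
# one, so any single alarm is only about 0.1% likely to be the submarine.
```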

COMMAND AND CONTROL

Beyond making it easier to find and destroy an adversary’s nuclear weapons, AI could plausibly threaten the nuclear command-and-control systems that would be needed to launch a retaliatory strike. Command-and-control systems are responsible for detecting attacks, reporting them to the relevant authority, and transmitting retaliation orders to nuclear forces. These systems must be able to identify a wide array of missiles; assess damage on the ground; send short messages over thousands of miles, including deep underwater; and protect the leaders responsible for nuclear decision-making. States seeking to disable a command-and-control system must proceed with caution, as overt moves to knock out a rival’s system may be seen as the prelude to a nuclear attack, potentially justifying preemptive retaliation. Like most elements of a splendid first strike, disarming a command-and-control system requires, in the words of the scholar James Acton, destroying “as much of each component as possible with as little warning as possible.”

Many parts of a nuclear command-and-control system are already vulnerable to attack and will become increasingly so with new technological breakthroughs. Nuclear and conventional strikes can already destroy fixed equipment, such as radars and high-frequency transmitters. With improvements in AI, meanwhile, it may become easier to track mobile command posts on land as well as more precisely target airborne ones. Satellites that provide early warning of incoming nuclear attacks may become increasingly vulnerable to AI-enhanced antisatellite weapons. And sophisticated cyber-operations supercharged by AI might allow states to penetrate a rival’s command-and-control networks, disable early warning systems, and disrupt the transmission of orders—a risk that may grow as states modernize their systems, moving away from analog and hardwired capabilities toward digital ones.

But taking out an entire command-and-control system in one fell swoop will never be straightforward, even with advanced technology. States have gone to great lengths to make their systems resilient, incorporating redundancy and elaborate countermeasures. On land, some command bunkers in China, Russia, and the United States are buried at least 700 meters underground in hard rock, deep enough to survive even a direct hit from a large nuclear weapon. In space, every major nuclear power has either sent hundreds or thousands of satellites into orbit or is close to doing so, making it difficult to destroy all of a country’s early warning and communications systems. In the air, the curvature of the earth limits the distance at which surveillance radar can track airborne command posts. And in cyberspace, launching a disarming strike will require persistent access to an adversary’s systems, risking detection and possible retaliation. States will also expend enormous effort trying to prevent such intrusions, most likely with the aid of their own AI defenses, while operating multiple redundant networks. As a result of these challenges, it seems unlikely that AI will credibly threaten the elimination of command-and-control systems.

DEFENSE, DEFENSE

A final fear is that AI could weaken nuclear deterrence by improving missile defenses, thereby minimizing the possibility of a successful second strike and making first strikes more tempting. Establishing strong defenses against nuclear missile attacks has always been an exceptionally challenging task. Russia and China have few such capabilities, and U.S. defense systems are error-prone, spread thin, and unable to fully prevent an attack from a major nuclear power. These systems must do the nearly impossible: detect a launch, track hundreds of missiles traveling through space at 20 times the speed of sound, estimate their future trajectories, and destroy them with interceptors—all in less than 30 minutes, the rough flight time for most land-based missiles traveling between the United States and Russia or China. National security officials have likened this process to trying to shoot a bullet with another bullet. And the defenses must do all this economically and at scale, so that an attacker cannot overwhelm them with many cheaper missiles.
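
The economics of saturation can be made concrete with a simple calculation. The kill probability, salvo size, and interceptor allocation below are illustrative assumptions, not parameters of any real system:

```python
# Illustrative sketch of why salvos can overwhelm missile defenses.
# Every number is a hypothetical assumption, not a real system parameter.

p_kill = 0.8            # assumed single-interceptor kill probability
shots_per_warhead = 2   # assumed interceptors fired at each incoming warhead
n_warheads = 200        # assumed size of the attacking salvo

# A warhead leaks through only if every interceptor fired at it misses.
p_leak = (1 - p_kill) ** shots_per_warhead
expected_leakers = n_warheads * p_leak

print(f"Per-warhead leak probability:      {p_leak:.0%}")
print(f"Expected warheads leaking through: {expected_leakers:.0f} of {n_warheads}")
# Under these assumptions, 4% of warheads leak: eight nuclear detonations.
# Driving leakage lower means firing more interceptors per warhead, raising
# the defender's costs, while the attacker can add cheap decoys that
# consume interceptors too.
```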

AI could improve some of these capabilities. Machine-learning algorithms might, for example, rapidly analyze data from multiple sensors to distinguish actual warheads from sophisticated decoys and speed up decision-making once an adversary launches its missiles. Software advances can make it easier to predict a missile’s trajectory. In addition, through advances in materials science, AI might help produce lighter interceptors that are cheaper to build and more maneuverable in flight.

But none of these developments will take place overnight: defensive architectures take years to develop, and U.S. adversaries will not stand by and watch. Attackers maintain significant advantages: they can launch from unexpected directions, overwhelm defenses with coordinated salvo attacks using large numbers of decoys, or directly target key defense systems. In the highly unlikely event that missile defense becomes insurmountable, states could resort to more creative delivery methods, such as smuggling and pre-positioning small nuclear devices in enemy territory. An AI-hardened system of missile defense would depend on machine-learning algorithms trained on large, reliable data sets covering decoy measures and missile systems—data that U.S. adversaries have every incentive to obscure. Indeed, U.S. rivals could try to confuse AI algorithms by manipulating missile tests. In short, even AI-enhanced defenses face physical and economic constraints that sophisticated nuclear powers can exploit.
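
That data problem can be illustrated with a toy classifier, sketched below using the scikit-learn library. It is trained on synthetic “missile test” data and then evaluated after a simulated adversary shifts its decoy signatures; every feature and number is fabricated for illustration:

```python
# Illustrative sketch: a warhead-vs-decoy classifier degraded by shifted data.
# All features and values are fabricated; nothing reflects real signatures.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_each = 1000   # objects per class

def simulate(decoy_shift=0.0):
    # Two assumed fused sensor features per object (arbitrary units),
    # e.g., infrared intensity and deceleration profile.
    warheads = rng.normal([1.0, 1.0], 0.3, size=(n_each, 2))
    decoys = rng.normal([decoy_shift, decoy_shift], 0.3, size=(n_each, 2))
    X = np.vstack([warheads, decoys])
    y = np.array([1] * n_each + [0] * n_each)   # 1 = warhead, 0 = decoy
    return X, y

X_train, y_train = simulate()          # "observed missile tests"
clf = LogisticRegression().fit(X_train, y_train)

X_same, y_same = simulate()            # adversary behaves as during training
X_shift, y_shift = simulate(0.8)       # adversary alters its decoy signatures

print(f"Accuracy, familiar decoys: {clf.score(X_same, y_same):.1%}")
print(f"Accuracy, altered decoys:  {clf.score(X_shift, y_shift):.1%}")
# Accuracy falls from near-perfect to little better than chance once decoys
# stop resembling the training data, which is why manipulated missile tests
# threaten any AI-aided discrimination system.
```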

NOTHING FOR GRANTED

Even in the face of AI-driven technological change, nuclear deterrence should remain strong. This does not mean, however, that AI poses no risks to global nuclear stability. Even if predictions of AI power overestimate the technology’s actual capabilities, states may perceive greater threats and take potentially destabilizing actions. If a state believes that an AI-powered rival can more easily discover its missiles, for example, it may decide to build more warheads, house more of its weapons on harder-to-detect mobile vehicles, direct those vehicles to move faster, and delegate more launch authority to vehicle drivers to reduce the need for long-distance communication that could be intercepted. If states believe their command-and-control systems are vulnerable to AI-enabled attacks, they may wish to reduce the number of steps required to launch a counterattack. These are all moves that increase the probability of escalation and accidental launches, especially with respect to states with fewer resources and less experience in managing nuclear weapons, such as India and Pakistan.

Beyond such destabilizing uncertainty, it remains possible that countries will develop significantly more powerful AI systems that could threaten methods of nuclear deterrence in ways that cannot yet be anticipated, especially if—as some experts suggest is possible—AI R&D becomes fully automated, spurring a sudden takeoff in capabilities. National security experts should not dismiss this eventuality, and they should continuously monitor for evidence of rapid takeoffs in AI capabilities among rivals.


But even the most powerful intelligence is not magic, and states seeking to use AI to disarm their adversaries will confront real physical, practical, and institutional limits. A state empowered by transformative AI systems will still have to solve an enormous number of extremely difficult problems before it can contemplate decapitating a nuclear competitor: it will need to integrate advanced AI into large-scale military bureaucracies, no easy task, and figure out how to test these systems in advance—a tall order given the need both to avoid triggering a preemptive response and to rehearse thousands of steps with little room for error. No matter how intelligent they are, powerful AI systems will not be able to evade the laws of physics. And no tool or capability guarantees that a state can impose its preferences on another: the United States, after all, possessed overwhelming military and economic power over the Taliban in Afghanistan yet suffered an unambiguous defeat after two decades of war.

None of this should encourage complacency. Even if nuclear deterrence persists, AI will still transform national security in many ways, including by helping states develop new autonomous weapons and by strengthening offensive and defensive cyber-capabilities. These developments will have significant consequences, even if they do not give any state the ability to impose its will on nuclear-armed adversaries. AI may also democratize access to dangerous capabilities, lowering the barriers for nonstate actors and individuals to build and deploy devastating biological, chemical, and radiological weapons. These are just a few of the problems that policymakers will have to grapple with as AI continues to improve.

In the nuclear domain, U.S. officials should not wait to see what advances AI brings. They should ensure that policy processes include AI experts alongside nuclear ones, encouraging a dialogue that has thus far been lacking in the national security community. They should conduct rigorous reviews of nuclear systems to check for vulnerabilities that could be exploited by advanced AI, especially in cyberspace, and deepen the ranks of AI experts within the U.S. government. They should carefully calibrate any statements about the need to race to artificial general intelligence, or the importance of being the first to develop a decisive wonder weapon, lest they exacerbate risky and costly nuclear competition. And through arms control dialogues, they should work to strengthen the significant ethical, political, and legal constraints on a state’s ability to launch a splendid first strike or to take other escalatory steps. The system of nuclear deterrence that has been in place since the end of World War II cannot be taken for granted. As political rivalries and the growing competition for AI supremacy heat up between the world’s great powers, it is more important than ever to maintain channels of communication and pathways to reduce the risk of inadvertent escalation and calamity.
