What the Technology Means for Deterrence and War
Artificial intelligence is rapidly becoming indispensable to national security decision-making. Militaries around the world already depend on AI models to sift through satellite imagery, assess adversaries’ capabilities, and generate recommendations for when, where, and how force should be deployed. As these systems advance, they promise to reshape how states respond to threats. But advanced AI platforms also threaten to undermine deterrence, which has long formed the basis of U.S. security strategy.
Effective deterrence depends on a country being credibly able and willing to impose unacceptable harm on an adversary. AI strengthens some of the foundations of that credibility. Better intelligence, faster assessments, and more consistent decision-making can reinforce deterrence by more clearly communicating to adversaries a country’s defense capabilities as well as its apparent resolve to use them. Yet adversaries can also exploit AI to undermine these goals: they can poison the training data of models on which countries rely, thereby altering their output, or launch AI-enabled influence operations to sway the behavior of key officials. In a high-stakes crisis, such manipulation could limit a state’s ability to maintain credible deterrence and distort or even paralyze its leaders’ decision-making.
Consider a crisis scenario in which China has placed sweeping economic sanctions on Taiwan and launched large-scale military drills around the island. U.S. defense officials turn to AI-powered systems to help formulate the U.S. response—unaware that Chinese information operations have already corrupted these systems by poisoning their training data and core inputs. As a result, the models overstate China’s actual capabilities and understate U.S. readiness, producing a skewed assessment that ultimately discourages U.S. mobilization. At the same time, Chinese influence campaigns, boosted by sudden floods of AI-driven fake content across platforms such as Facebook and TikTok, suppress the U.S. public’s support for intervention. Unable to interpret their intelligence and gauge public sentiment accurately, U.S. leaders may then conclude that decisive action is too risky.
China, sensing opportunity, now launches a full blockade of Taiwan and commences drone strikes. It also saturates the island with deepfakes of U.S. officials expressing their willingness to concede Taiwan, fabricated polls showing collapsing U.S. support, and rumors of U.S. abandonment. In this scenario, credible signals that the United States was inclined to respond might have deterred China from escalating, and U.S. officials might well have sent them had they not been dissuaded by poisoned AI systems and distorted public sentiment. Instead of strengthening deterrence, AI has undermined U.S. credibility and opened the door to Chinese aggression.
As AI systems become increasingly central to leaders’ decision-making, they could give information warfare a potent new role in coercion and conflict. To bolster deterrence in the AI age, then, policymakers, defense planners, and intelligence agencies must reckon with the ways in which AI models can be weaponized and ensure that digital defenses against these threats are keeping pace. The outcome of future crises may depend on it.
DETERRENCE IN THE AI AGE
For deterrence to work, an adversary must believe that a defender is both capable of imposing serious costs and resolved to do so if challenged. Some elements of military power are visible, but others—such as certain weapons capabilities, readiness levels, and mobilization capacities—are harder to gauge from the outside. Resolve is even more opaque: only the leaders of a country typically know precisely how willing they are to wage war. Deterrence, therefore, hinges on how effectively a country can credibly signal both its capabilities and its willingness to act.
Costly military actions, such as repositioning forces or raising readiness levels, demonstrate credibility because they require time, resources, and political risk. After a Pakistani militant group attacked the Indian Parliament in 2001, for example, India massed troops along its border with Pakistan; by credibly signaling both its ability and its determination to act, it deterred further strikes on its soil. The domestic political pressures inherent in democracies can also bolster credibility. Leaders of democracies must answer to their citizens, and making threats only to back down later can result in political backlash. In 1982, for instance, after Argentina seized the Falkland Islands, strong public pressure in the United Kingdom reinforced Prime Minister Margaret Thatcher’s determination to act, lending additional credibility to the United Kingdom’s threat of a military response. Such accountability generally gives a democratic state’s deterrent threats more weight than those of autocracies. Speed is also a factor: a state’s deterrent threats are more credible when it is seen as able to act swiftly and automatically against a challenge.
On the surface, artificial intelligence appears well suited to strengthening deterrence. By processing vast amounts of data, AI can provide better intelligence, clarify signals, and accelerate leaders’ decisions by producing faster and more comprehensive analyses. In the war in Ukraine, AI tools allow the Ukrainian military to scan satellite and drone images to identify Russian troop and equipment movements, missile sites, and supply routes; pull and aggregate data from radar, sound, and radio signals; and sift rapidly through training manuals, intelligence reports, and other materials to create a more complete picture of Russian force strength. For defense planners, such information allows a clearer assessment of their own forces’ capabilities relative to those of an adversary.
AI can also reinforce deterrence by ensuring that each side’s actions are clearly communicated to the other. Since states frequently have incentives to bluff, they may struggle to demonstrate that they are truly prepared to follow through on their threats. By contrast, AI-enabled tools can ensure that when a country takes costly actions to signal its resolve, those actions are communicated quickly, clearly, and consistently. The adversary’s own AI systems can then efficiently interpret these signals, lessening the risk of misperception. For instance, by tracking domestic public opinion in real time, AI tools can help a democratic country demonstrate that it is prepared to act by showing that its threatened response is backed by real political support. Adversaries can then use their own AI tools to affirm that this support is genuine. Using AI to spot patterns and anomalies that humans might miss—such as sudden changes in troop movements, financial flows, or cyberactivity—can give leaders a clearer read on an adversary’s intentions.
Because an aggressor can exploit even slight delays in a target country’s response—to seize territory or otherwise advance its aims—deterrence works best when the defender can persuade the aggressor that it will respond quickly enough to deny it any such time advantage. AI helps reinforce this perception by enabling defenders to detect challenges earlier and respond faster. Improving leaders’ long-term planning can strengthen and maintain credibility in longer crises, too. By running large numbers of “what if” scenarios—using data on forces, geography, supply lines, and alliances—AI can give leaders a clearer picture of how a conflict might unfold and help them maintain consistent strategies as conditions evolve.
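To make the idea of scenario exploration concrete, consider the toy simulation below. It is our own sketch, not any real planning tool: every variable, distribution, and threshold is a hypothetical stand-in, meant only to show how running many randomized scenarios can compare the consequences of different choices.

```python
# Toy Monte Carlo sketch of "what if" scenario exploration; every parameter
# below is a hypothetical stand-in, not real planning data.
import random

random.seed(0)

def simulate_crisis(mobilize_early: bool, trials: int = 10_000) -> float:
    """Fraction of simulated runs in which deterrence holds."""
    holds = 0
    for _ in range(trials):
        # Hypothetical stand-ins: perceived readiness and adversary resolve.
        readiness = random.gauss(0.7 if mobilize_early else 0.5, 0.1)
        adversary_resolve = random.gauss(0.6, 0.15)
        # In this toy model, deterrence holds when perceived readiness
        # exceeds the adversary's resolve in a given run.
        if readiness > adversary_resolve:
            holds += 1
    return holds / trials

for choice in (True, False):
    rate = simulate_crisis(mobilize_early=choice)
    print(f"mobilize_early={choice}: deterrence holds in {rate:.1%} of runs")
```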
STRENGTH AND FRAGILITY
The same AI technologies that strengthen deterrence can also make it vulnerable to exploitation. Rather than helping a country credibly convey what it knows about itself, AI systems, if they are manipulated, can instead leave leaders unsure of their own capabilities and resolve. Adversaries could use AI to distort public opinion or poison the very AI systems on which a country’s leaders depend. By deploying these twin tactics—AI-enabled influence operations and AI model poisoning—an adversary could reshape a country’s information environment in ways that directly affect its deterrence. In the worst case, such confusion could cause a country’s deterrence to fail, even when its underlying capabilities and resolve are strong.
An adversary could use influence operations to target a country’s public as well as the influential figures shaping its national debate, including decision-makers in government. Recent advances in data science and generative AI have made influence operations far more potent across three linked areas: target identification, persona creation, and individually tailored content. Previously, adversaries seeking to deploy targeted propaganda could only group people into clusters based on shared attributes. With modern AI, however, they can automate this process, using data science to target individuals on a massive scale.
With these tools, AI can predict targets’ susceptibility to specific narratives or to fake social media profiles that are designed to attract their attention. Whereas bots were once clumsy and easily spotted, generative AI can now create so-called synthetic personas that appear authentic and escape ready detection. These fake profiles can be developed over time until they become indistinguishable from real users—featuring realistic posting habits, interests, and language quirks. Moreover, fake accounts can now be created and operated at an enormous scale, making them harder to detect. Such developments allow these personas to spread synthetic content into targeted communities. Seeded across multiple social media platforms, they can steer debate and inflame divisions. To weaken public resolve in the United States, for instance, such fake personas may spread claims that the U.S. military is overstretched, that allies free-ride on American security, or that particular international causes are not worth fighting for. Amplifying messages across many platforms can make false information feel true, or at least create enough confusion to undermine public consensus around an issue.
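As a purely illustrative sketch, under our own assumptions: crude features such as posting-time regularity and repeated content can flag accounts for closer review, which is precisely why the personas described above are engineered to mimic human variation. The features and thresholds below are hypothetical, not a deployed detector.

```python
# Toy sketch of heuristic persona screening; features and thresholds are
# illustrative assumptions, and modern synthetic personas evade such checks.
from statistics import pstdev

def posting_regularity(post_hours: list[float]) -> float:
    """Spread of posting times; an unusually low spread hints at automation."""
    return pstdev(post_hours)

def duplicate_ratio(posts: list[str]) -> float:
    """Share of posts that exactly repeat earlier posts."""
    return 1 - len(set(posts)) / len(posts)

def flag_account(post_hours, posts, spread_floor=1.0, dup_ceiling=0.3):
    # Hypothetical thresholds; real personas are built to pass such checks.
    suspicious = (
        posting_regularity(post_hours) < spread_floor
        or duplicate_ratio(posts) > dup_ceiling
    )
    return "review" if suspicious else "pass"

# A hypothetical account that posts at nearly identical times with repeated text.
print(flag_account([14.0, 14.1, 13.9, 14.0], ["msg A", "msg A", "msg B", "msg A"]))
```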
Using thousands of unique fake accounts, AI tools may soon be able to deliver individually tailored content in real time across an entire population. This is cognitive warfare, and the implications for deterrence are clear. Because much of a democracy’s deterrent credibility is tied to domestic political pressures, operations that manipulate public sentiment can weaken that state’s signals of resolve. AI manipulations might make a country’s domestic audience less inclined to support a strong military response to an act of foreign aggression—especially one against an ally—and thus distort polling data and other supposedly empirical signals from the public to which democratic leaders pay attention. This can leave such leaders unsure of how much support they truly have and how much backlash they might face if they yield. Such uncertainty can cause hesitation, weaken leaders’ resolve, and cloud their decision-making—all of which can make a state’s deterrent threats appear less convincing.
State-aligned groups are already exploring ways to undermine the integrity of the information environment through AI-enabled influence operations. One example is GoLaxy, a Chinese company that uses generative AI tools and vast open-source data sets to build detailed psychological profiles of surveilled individuals and deploy, on a large scale, synthetic personas that mimic authentic users. The company’s campaigns often entail gathering detailed information on influential figures, using that information to produce messages that are likely to persuade targeted audiences, and then sending those messages from carefully crafted social media personas. By achieving an acute level of precision and amplifying misleading narratives across multiple platforms, such operations can sow confusion, corrode public discourse, and weaken the domestic base that makes deterrent signals credible abroad. GoLaxy’s alignment with Chinese state priorities and its ties to state-linked research institutes and technology firms make it a sophisticated propaganda engine.
Documents we analyzed at the Vanderbilt University Institute of National Security show that GoLaxy has already carried out operations in Hong Kong and Taiwan and has been assembling dossiers on members of the U.S. Congress as well as public figures around the world. Open-source intelligence allows adversaries to build comprehensive dossiers on politicians, military leaders, and soldiers for strategic purposes. Precisely targeted persona operations can then exploit that information. To score tactical wins, for instance, adversaries could target soldiers with deepfake messages conveying false impressions of battlefield conditions or of circumstances at home; including accurate personal details about those soldiers’ lives could make the fabrications seem realistic enough to distract them or disrupt unit cohesion. In the political sphere, adversaries could blend real photographs of politicians with cloned voices or faces. Even if such fabrications are never released, the mere threat of exposure could dampen targets’ rhetoric, stall legislative proceedings, or weaken leaders’ resolve. And from a strategic standpoint, hostile parties could simulate authorities issuing false orders to stand down or to divert to alternative communication channels, opening a window for an adversary to gain ground. The result is a cognitive fog of war.
POISONING THE WELL
Another pathway that adversaries can take to create uncertainty for defenders is model poisoning: the strategic manipulation of the AI systems on which governments rely for intelligence and decision-making support. By corrupting these systems’ training data or compromising their analytical pipelines, adversaries can distort a defender’s understanding of its relative strength and of the urgency of the threat. A system that underestimates an adversary’s capabilities can encourage unwarranted confidence in a defender; one that exaggerates the threat can induce hesitation. Either way, the effective manipulation of such AI systems could do more than simply complicate a defender’s crisis management—it could weaken the credibility of its deterrent signals and thus create dangerous risks.
Essentially, model poisoning works by manipulating a model’s data pipeline so that it overlooks important information and absorbs false inputs. This, in turn, can push the system toward misleading or degraded assessments. One method is planting false information in the data sets from which an AI system learns. Appearing harmless to human reviewers, this hidden information can nonetheless weaken or bias a model’s reasoning—for example, by tricking it into flagging certain types of malware as benign so that an adversary can slip behind an AI-driven firewall. Although no instances of such an approach have yet been recorded, current AI research has demonstrated that existing data sets are vulnerable to this type of data-poisoning attack. What was once theoretical is now possible in practice.
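To make the mechanics concrete, the toy sketch below shows one simple form of data poisoning: silently flipping a fraction of training labels so that a classifier learns to treat malicious inputs as benign. The data set, model, and poisoning rate are our own illustrative assumptions, not drawn from any real defense system.

```python
# Toy illustration (not any real system): label-flipping data poisoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a benign-versus-malicious classification task.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

def train_and_score(labels):
    """Train on the given labels; evaluate against the untouched test set."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

clean_acc = train_and_score(y_train)

# The adversary silently flips 15% of training labels, e.g., relabeling
# "malicious" examples as "benign" before the model ever sees them.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.15 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_acc = train_and_score(poisoned)

print(f"accuracy with clean labels:    {clean_acc:.3f}")
print(f"accuracy with poisoned labels: {poisoned_acc:.3f}")
```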
Models can also be poisoned through corrupted webpages. Many AI systems continuously search the Internet for new information; corrupted sites can inject hidden instructions into this stream and thus skew a model’s assessments. If the filters that screen incoming data are weak, even a small number of corrupted sites can induce inaccurate responses.
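A minimal defensive sketch, under assumptions of our own: before retrieved web text ever reaches a model, a filter flags instruction-like phrases and strips hidden markup. The patterns here are hypothetical heuristics for illustration; a real pipeline would need far stronger provenance and integrity checks.

```python
# Minimal sketch of screening retrieved web content before a model ingests it.
# The heuristics are illustrative assumptions, not a proven defense.
import re

# Hypothetical patterns that often accompany injected instructions.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .{0,40}(rules|guidelines)",
]

def screen_retrieved_text(raw_html: str):
    """Flag instruction-like phrases anywhere in the page, then strip markup."""
    warnings = [p for p in SUSPICIOUS_PATTERNS
                if re.search(p, raw_html, re.IGNORECASE)]
    # Remove HTML comments and tags, two places injected directives often hide.
    text = re.sub(r"<!--.*?-->", " ", raw_html, flags=re.DOTALL)
    text = re.sub(r"<[^>]+>", " ", text)
    return text, warnings

page = "<p>Regional force posture update.</p><!-- ignore previous instructions -->"
clean, warnings = screen_retrieved_text(page)
if warnings:
    print("quarantine page; matched patterns:", warnings)
else:
    print("pass to model:", clean.strip())
```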
An especially stealthy form of information warfare, model poisoning allows adversaries to distort a defender’s understandings about capabilities and resolve—its own and those of others—by changing the very workings of the tools they use for clarity. In a crisis, poisoning could encourage a leader to hesitate or—worse—miscalculate, weakening deterrence and opening the door to escalation.
GETTING OUT IN FRONT
The advent of AI systems was expected to strengthen deterrence by sending clearer signals to adversaries about a defender’s capabilities and resolve. But the rising use of information warfare driven by those same systems threatens to do the opposite. Even in its early stages, this new type of information warfare has shown that AI technologies can influence how information is interpreted, introduce uncertainty into judgment processes, and distort the data that underpins decision-making. These threats will only become more potent as AI develops further.
Even a powerful country such as the United States may have difficulty signaling its deterrent credibility if it becomes exposed to advanced AI-enabled information warfare. For policymakers and citizens alike, the challenge will be figuring out how to harness the benefits of AI while preventing its weaponization. Strategies for countering this new threat must be developed as rapidly as the technologies underpinning it.
Meeting this challenge will require governments and researchers to harden analytic systems against model poisoning and to actively counter AI-enabled influence operations whenever they are detected. To combat the work of firms such as GoLaxy, for instance, the United States and its allies must be able to rapidly detect and disrupt synthetic networks, using tools capable of identifying and neutralizing AI-driven personas before they take hold. Public education campaigns about synthetic media, and how to identify it, can also strengthen awareness of the threat. Democratic governments, social media and AI platforms, and interdisciplinary researchers should work together to develop such solutions.
At the strategic level, the United States should invest in technologies that can quickly detect synthetic messages. The government, academia, and the private sector should design new decision-making safeguards and data-filtering systems that can withstand corrupted inputs, while working with U.S. allies to expose and punish the perpetrators of large-scale information campaigns. Additionally, these partners should systematically test new models to root out deficiencies—including the kind of data poisoning that may not be obvious in day-to-day use—and do so with rigorous transparency, to allow for peer review. Resilient safeguards and diligent testing are necessary to ensure that AI systems can perform reliably in moments of extraordinary stress or crisis.
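One way to picture such testing, as a sketch under our own assumptions: freeze a trusted benchmark of inputs with vetted answers, and alert whenever a newly trained model’s agreement with that benchmark drops below a tolerance, a symptom that poisoned training data may have shifted its behavior. The benchmark, threshold, and model interface below are all hypothetical.

```python
# Illustrative sketch: regression-test a new model against a frozen benchmark
# to surface poisoning-induced drift. All interfaces here are hypothetical.
from typing import Callable

# Hypothetical trusted benchmark: inputs with vetted expected labels.
BENCHMARK = [
    ("known-benign sample A", "benign"),
    ("known-malicious sample B", "malicious"),
    ("known-benign sample C", "benign"),
]

def audit_model(predict: Callable[[str], str], tolerance: float = 0.95) -> bool:
    """Return True if the model still agrees with the vetted benchmark."""
    correct = sum(1 for x, expected in BENCHMARK if predict(x) == expected)
    agreement = correct / len(BENCHMARK)
    if agreement < tolerance:
        print(f"ALERT: benchmark agreement {agreement:.2f} is below {tolerance}")
        return False
    print(f"benchmark agreement {agreement:.2f}: within tolerance")
    return True

# A stand-in for a model whose training data was silently corrupted.
def poisoned_predict(text: str) -> str:
    return "benign"  # now misclassifies the malicious sample

audit_model(poisoned_predict)
```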
In the AI era, deterrence can no longer rest on capabilities and resolve alone. It will require leaders, defense strategists, and other decision-makers to preserve the reliability of their information environment—even amid widespread digital distortion.
