1 Introduction

China and the United States are both utilizing artificial intelligence (AI) and associated technologies to transform their militaries and explore new patterns of warfare. What does this transformation mean for bilateral military stability? Across the competition–confrontation spectrum of military interactions, AI's influence follows a two-fold pattern: both its strengths and its weaknesses can give rise to risks and challenges if not well managed. We need to go beyond the high hopes that the intelligence revolution will lead to a disruptive transformation in military affairs and make warfare increasingly fast, precise, clean, and even moral; we need to debunk the myths surrounding this transformation and bridge the gap between technological and strategic considerations.

Both countries and their militaries seek to use algorithms to speed up the process of collecting, integrating, and transmitting data, which can free up human labor for higher-level tasks such as data analysis. Some current and future projects even attempt to develop machines capable of decision-making in the turbulent and complex environment of military competition and confrontation. Whether the two countries' efforts amount to an "AI arms race" remains an open and definitional question. What is clear is that the two militaries are receiving strong political and financial support, albeit constrained by economic conditions in both countries.

Like any technology and the military capabilities it enables, AI could cause security relations to go awry if its development is not well managed. There are concerns that the development of AI in military affairs can harm stability between major military powers like the U.S. and China (Bidwell and MacDonald 2018). In all fields of AI utilization, the following concerns surround the current and future roles of AI: accountability, safety and security, reliability, explicability, adaptability, human control, and responsibility. AI, along with its associated technologies, has the potential to contribute to strategic and operational instability as a result of both its strengths and its weaknesses.

The militaries of both China and the U.S. are still in the early stages of the ongoing transformation in military affairs, exploring where the future of warfare may lead and what AI-enabled modernization will mean for bilateral military stability. Some preliminary consensus is forming. The general guidelines of technological governance cover the legal, ethical, and operational aspects of militarized conflicts, but very few of these guidelines are formalized and institutionalized. The two countries and militaries have not come together to settle their differences in definitions and perceptions. More troubling still, they do not talk to each other as much as they should. A mismatch in perceptions of the strategic and operational impact of AI-enabled capabilities can evolve into a challenge not only for the two countries but for the larger international community.

Finding a common denominator regarding the future development of AI is not just a mission for stakeholders within national borders, but also a shared enterprise among nations. As the China Academy of Information and Communication Technology (CAICT) and various other Chinese institutes or initiatives have suggested, the competition mindset, be it an arms race or technological marathon analogy, should be avoided. Global cooperation on rules, regulations, standards, and laws should be deepened and transcend political values and interests.

There are military observers, thinkers, and decision-makers in both countries who have warned about the potential negative impact of AI. In the Chinese military, when it comes to the relationship between humans and machines, a human-oriented mindset still trumps technological centralism and determinism. This idea, deeply rooted in traditional Chinese military thought and Marxist philosophy, emphasizes human agency and has not been completely rejected even by the instrumentalist military operations specialists who are eager to see machines come "alive". The U.S. government has released principles for how it intends to govern AI in defense applications in the Defense Innovation Board's "Ethical Principles for Artificial Intelligence" (Department of Defense 2020).

The following sections first review the relevance of AI to military transformation (Section 2) and then discuss the practices and promises of AI utilization in the military context (Section 3). Section 4 debunks some myths about the expected roles of AI in its current form, and Section 5 is devoted to particular risks and challenges. It is worth noting that AI's potentially destabilizing impact stems not only from its strengths but also from its weaknesses. Section 6 addresses the different concerns behind the risks, especially the gap between a focus on military effectiveness and a human-centered standard of risk control. The last section discusses what needs to, and can, be done to better manage the impact of AI in military affairs.

2 AI’s (expected) relevance in military transformation

The idea of AI dates back to the 1940s. Its development over the past six decades has been an evolutionary process full of ups and downs. This nonlinear progress is the very feature we should pay attention to when assessing AI's current and future influence on military applications.

2.1 Narrow but promising AI

In 1948, Alan Turing first described thinking machines; in his 1950 paper he proposed what became known as the "Turing test" (Turing 1950). Later, in 1956, Marvin Minsky, John McCarthy, Claude Shannon, and other scholars held a conference on artificial intelligence at Dartmouth College, known as the "Dartmouth Conference", formally establishing the concept of artificial intelligence and the goals of its development. Since then, research on AI has developed in propositional reasoning, knowledge representation, natural language processing, machine perception, and machine learning, through both high tides and low ebbs. In the 2010s, due to the convergence of big data analytics, improved machine learning, and enhanced computational power, a period of explosive AI development began that continues today.

All current AI systems fall into the "Narrow AI" category, with machine learning (ML) as the most prevalent approach. ML is centered on algorithms that address specific problems in relatively simple environments and scenarios. It is far less capable than "General AI" (AGI), which, if ever realized, would be able to carry out a broad range of tasks with human-level intelligence. Still, Narrow AI has given rise to substantive advances and developments in military affairs.

The military applications of AI cut across different levels, from strategic and operational processes to tactical and technological performance. This combination of uses, in the context of military transformation, is leading toward new patterns of militarized conflict. This can be understood in the Chinese study of military science as an evolution of war from "informatized warfare" (xinxihua zhanzheng) toward "intelligentized warfare" (zhinenghua zhanzheng) (Ministry of Defense of People's Republic of China 2019). U.S. military thinkers do not use this term per se, but their vision of future warfare captures the same prospect of technology-enabled advances in capabilities, doctrines, and patterns.

Some areas of military affairs have already been partially or fully transformed by prominent aspects of AI-driven warfare. Military and security practitioners and observers share three general perspectives regarding the value of AI in creating a new warfare ecosystem. Taken together, these perspectives point toward a new pattern and spectrum of general situational awareness. A variety of approaches to and models of situational awareness have been developed (Stanton et al. 2017). It is generally accepted that situational awareness consists of three steps: perceiving, understanding, and predicting (Endsley 1995). It involves not only information processing but also robust decision-making.
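To make the three-step model concrete, here is a minimal, schematic Python sketch (our illustration; the Track fields, the 250 m/s threshold, and the function names are hypothetical, not drawn from Endsley or the sources above) chaining perception, comprehension, and projection into one awareness pipeline:

```python
from dataclasses import dataclass

@dataclass
class Track:
    kind: str      # e.g., "aircraft"
    speed: float   # meters per second
    closing: bool  # moving toward own position?

def perceive(readings):
    # Level 1: turn raw sensor readings into recognized elements.
    return [Track(r["kind"], r["speed"], r["closing"]) for r in readings]

def comprehend(tracks):
    # Level 2: interpret what the elements mean taken together.
    threats = [t for t in tracks if t.closing and t.speed > 250]
    return "threat" if threats else "routine"

def project(assessment):
    # Level 3: project the situation into the near future to support a decision.
    return "prepare response" if assessment == "threat" else "continue monitoring"

readings = [{"kind": "aircraft", "speed": 300.0, "closing": True}]
print(project(comprehend(perceive(readings))))  # -> prepare response
```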

2.2 Three conceptualizations of enhanced general awareness

The first conceptualization of enhanced situational awareness involves the basic functional application of data and information processing. AI offers the possibility of speeding up information gathering and interpretation, freeing human agents for higher-level missions. For example, the Pentagon's Project Maven seeks to use algorithms to more rapidly interpret images from drone surveillance feeds. From image recognition to the processing of publicly available or classified information databases, AI processing functions could help militaries decipher information more accurately and quickly, which then contributes to faster and/or better decision-making.

“Intelligentized” warfare thus will be an upgraded or evolved version of information warfare. No longer will it be merely a release or accumulation of energy in the physical world; as warfare evolves, it will become a rapid amplification and convergence of nonlinear factors that is both self-sustaining and self-focusing, involving multiple military operational nodes and platforms. In their ultimate form, intelligentized combat systems will have acquired the ability to self-adapt, self-learn, self-confront, self-repair, and self-evolve, moving from efficient and effective information processing systems to an evolvable ecosystem of military operations.

The second perspective centers on human–machine interactions in different scenarios and missions, especially those involving algorithms that help respond in cases where human coordinators cannot directly guide the machines. According to the U.S. Air Force's vision, for example, all fifth-generation jet fighters will be equipped with drones as wingmen for future operations in an "anti-access/area denial" environment (Harper 2019). In their future form, the machines within such interactive systems will be supported by on-board algorithms, enabling them to operate "autonomously" without any supervision or assistance.

In the broad fields of strategic and operational intelligence and command and control, foes' intentions are expected to be reduced to calculable and predictable inputs. On the output side, attack, response, and policy targets will be optimized, and mission planning and execution will be streamlined.

It is expected that unmanned combat will become a basic form of warfare as the integration of AI and related technologies pushes autonomy to an advanced level. Multi-domain and cross-domain operations will expand from mission planning, physical integration, and loose coordination to heterogeneous fusion, integrated data networks, tactical mutual control, and cross-domain integrated offense and defense. The development of decentralization and weak-centralization will be better integrated with traditional, centralized military systems, with greater compatibility contributing to the decline of a purely human-dominated mode of command and control (C2).

A third perspective of AI-enhanced situational awareness involves a higher-level and idealized transformation—a paradigm shift of warfare. The idea of "disruptive" and revolutionary transformation that arose with the development of AI and associated technologies is shared by many military thinkers and planners across national borders (e.g. Work and Brimley 2014; Yuan 2017; Lin 2017; Jin 2017; Li 2017). This view holds that the speed and complexity of conflicts are increasing, not only quantitatively but also qualitatively, in terms of a shortened observe-orient-decide-act (OODA) loop, the increasing velocity of military platforms and munitions, and the expanded network that connects all the nodes. From decision-making to the functioning of the kill chain, individual platforms are becoming more deeply embedded within a complex network, or even a network of networks. New operational elements, represented by AI cloud clusters, would reconstruct the battlefield ecosystem, thus changing the mechanisms through which militaries win wars. The role of virtual space in combat systems will become increasingly vital and gradually integrated with physical space. Biochemical conditions and human processing capabilities are becoming insufficient to meet the emerging needs of such future militarized conflicts.

Within these three categories, AI is expected to provide better opportunities for a more robust spectrum of situational awareness. Situational awareness can be defined as the “perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley 1988). The full spectrum of enhanced awareness could potentially cut across levels of tactics, operations and strategy, from knowing to understanding, and from understanding to deciding.

3 U.S. and Chinese militaries meet AI

To a large extent, the application of AI in military affairs lies at the heart of the current phase of the revolution in military affairs (RMA) within the U.S. military. The U.S. started employing AI to augment its military capabilities decades ago. Since the 1980s, the U.S. National Aeronautics and Space Administration (NASA) and the Defense Advanced Research Projects Agency (DARPA) have been funding projects related to neural network computing to realize fast computation, communication, and storage. In performing complex functions such as image recognition and integrated sensory processing, the new AI-enabled systems are far more efficient than the traditional systems of early information warfare. At present, the U.S. military is developing deep neural network technology, which is expected to simulate the information processing mechanisms of human brain neurons.

The U.S. military's use of AI cuts across different layers, from strategic reorientation toward great power competition, through operational processes in different theaters, all the way down to the tactical and technological functions of weapon systems. Preparing to return to competition and potential conflict with opponents of roughly equal military capability and capacity, Washington is saying farewell to its post-Cold War and post-9/11-era focus. The U.S. has launched modernization and readiness-enhancing programs for strategic and conventional forces in both defensive and offensive realms.

To this end, the U.S. is running a series of transformation experiments, adhering to an overall philosophy of "flexible force deployment": the cross-service "joint all-domain operations," based on the Army's earlier idea of "multi-domain operations"; the Marine Corps' transition to a flexible and miniaturized force structure; the Navy's "distributed maritime operations" and new emphasis on sea control; the Air Force's "penetrating counter air"; future-oriented explorations like "mosaic warfare"; and so on (DoD 2016; U.S. Air Force 2015; U.S. Joint Chiefs of Staff 2018a, b; U.S. Army 2018). All of these initiatives emphasize greater jointness and deeper integration, not only within and across services but also between and among allies. AI is one of the (if not "the") most crucial enabling technologies for achieving decentralized and flexible force deployment and operations and for strengthening networks across new platforms.

The Pentagon’s unclassified budget for AI development has grown dramatically in recent years, from $600 million in FY 2016 to $2.5 billion in FY 2021, supporting over 600 active projects (Office of the Under Secretary of Defense 2020). The Pentagon has initiated a series of big data technology projects that use AI algorithms to acquire and process information from various sources of data, including text, sound, image, and video. If successful, these projects will improve decision-makers’ situational awareness as well as their ability to judge threats and determine courses of action.

At present, the Pentagon's AI research activities are at the discretion of the research agencies of the various services, as well as DARPA and the Intelligence Advanced Research Projects Activity (IARPA). The Pentagon's Joint Artificial Intelligence Center (JAIC) is in charge of coordination across different AI programs. For instance, JAIC oversees the National Mission Initiatives (NMIs), which use artificial intelligence to solve the most pressing operational challenges. In April 2017, then-U.S. Deputy Secretary of Defense Robert Work issued a memo announcing the establishment of the Algorithmic Warfare Cross-Functional Team (AWCFT), or Project Maven (Deputy Secretary of Defense 2017). The team uses AI algorithms to develop software for fast data gathering and processing and to provide high-quality and timely intelligence, including efficient target detection, classification, and early-warning calculations. The team also promotes research on advanced algorithms to aid military decision-making.

China has been engaging in an AI-enabled military modernization process that shares many features of the structural, doctrinal, and technological transformations of the U.S. military. China is still behind the U.S., especially in terms of basic research on algorithms, software frameworks, and manufacturing of high-end chips. But China has some particular advantages, such as fewer obstacles to data collection and usage, which facilitate easier algorithm training.

China’s 2017 “Next Generation AI Development Plan” describes AI as a “strategic technology” that has become a “new focus of international competition” (China State Council 2017). According to the document, China will seek to develop a core AI industry worth over 150 billion RMB by 2020 and to “firmly seize the strategic initiative” and reach “world-leading levels” of AI investment by 2030. China uses a “military-civil fusion” approach—which, as the name suggests, integrates civilian and military resources—to develop capabilities, such as autonomous C2 systems, predictive operational planning, and a better fusion of intelligence, surveillance, and reconnaissance (ISR).

Broadly speaking, the People's Liberation Army (PLA) envisions operationalizing AI and related technologies, including cloud computing, big data analytics, and unmanned systems. During a 2020 press briefing, Senior Colonel Ren Guoqiang acknowledged that, although PLA forces are working toward informatization, the PLA is still a long way from establishing robust informatized warfare capabilities (Ministry of Defense 2020).

The Chinese military has been primarily focusing on the functional and instrumental enabling effects of AI. Scenarios involving these capabilities include information management and processing, intelligence assessment and prediction, autonomous offense and defense systems, cyber-electromagnetic warfare, logistics and support, training and simulation, etc. The PLA, which is engaged in a process of moving from the "realization of mechanization" to informatization, has deemed AI and associated technologies vital to the improvement of speed, accuracy, and efficiency in all missions, from logistics and intelligence to battle planning and strategic decision-making (Ibid).

For both countries and militaries, the amount and complexity of data are growing beyond the capability of humans to assess on the battlefield. With the pace and reach of modern militarized conflicts increasing in a multi-dimensional fashion, complex and widely distributed battlefield information sensors have spread across the land, sea, air, outer space, and electromagnetic domains.

To meet new challenges and seize new opportunities, both militaries are attempting to pursue new C2 architectures that cut across different domains. For instance, the U.S. military has been pursuing a coherent and consistent architecture of Joint All-Domain Command and Control (JADC2) (Congressional Research Service 2020). This type of system requires open architectures that enable rapid development and integration of new applications and seamless interoperability between forces. It is essential that this architecture not only be available across services but also be used uniformly across the entire theater of operations. Rapid information-sharing and the ability to leverage various resources are critical features of these systems if they are to be feasible and practical. AI is indispensable in helping to understand which forces are available and best suited to perform specific tasks. When AI-enabled systems span a large geographic area, they will be able to assist commanders in assessing the availability and suitability of resources, units, and assets. AI can alert commanders quickly and automatically when the list of mission targets and the force pool need to be adjusted to optimize asset deployment and use.
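As a toy illustration of such availability-and-suitability assessment (a sketch under our own assumptions; the asset names, attributes, and 0.7/0.3 weighting are hypothetical, not any real C2 system), a decision aid might rank candidate assets for a task roughly as follows:

```python
# Hypothetical assets and task; fields and weights are illustrative choices.
assets = [
    {"name": "UAV-1",     "available": True,  "range_km": 800,  "sensors": {"eo", "ir"}},
    {"name": "Frigate-2", "available": True,  "range_km": 3000, "sensors": {"radar"}},
    {"name": "UAV-3",     "available": False, "range_km": 900,  "sensors": {"eo", "sigint"}},
]
task = {"distance_km": 600, "needed_sensors": {"ir"}}

def suitability(asset, task):
    # Unavailable or out-of-range assets score zero.
    if not asset["available"] or asset["range_km"] < task["distance_km"]:
        return 0.0
    # Fraction of required sensor types the asset carries.
    coverage = len(asset["sensors"] & task["needed_sensors"]) / len(task["needed_sensors"])
    # Remaining range margin, normalized to [0, 1).
    reserve = 1 - task["distance_km"] / asset["range_km"]
    return 0.7 * coverage + 0.3 * reserve

for a in sorted(assets, key=lambda a: suitability(a, task), reverse=True):
    print(f'{a["name"]}: score {suitability(a, task):.2f}')
```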

Another major area of AI utilization (and associated technologies) in both countries is military unmanned aerial vehicles (UAVs). AI, coupled with other new technologies such as cloud computing, has promoted the development of UAVs. On August 30, 2018, the U.S. Department of Defense released the Unmanned Systems Roadmap FY 2017–2042, which further integrates unmanned systems into existing traditional systems and identifies major areas of focus (Department of Defense 2017).

The major concerns and goals of military UAV development in both countries involve three aspects. The first is enhancing the interoperability of unmanned systems and their integration into systemic confrontations. The second is strengthening autonomous combat capabilities, especially the efficiency, effectiveness, and adaptability of unmanned systems in the uncertain battlefield environments of the future. The third is solidifying cyber defense, information protection, and electromagnetic defense.

C2 and UAVs are just two broad categories of effort (each comprising many specific projects) in the modernization and transformation taking place in both countries, which will make them increasingly reliant on algorithms to coordinate intelligence, command, logistics, and weapon systems. Deep neural networks perform well on image, audio, and text data. Computer vision and speech recognition are becoming increasingly important in information processing and intelligence management at both the tactical and strategic levels. Militaries in both countries believe that a more capable command, control, communication, computers, intelligence, surveillance, and reconnaissance (C4ISR) system may, in the future, not only facilitate but also oversee planning and decision-making (Work 2015; Wang 2015; Chai 2019; Gong et al. 2016).

4 Myths of “disruptive” transformation

No technological breakthrough is easy to realize, and no technological transition takes place linearly or smoothly. The emergent processes in the existing sociotechnical configuration are deeply intertwined, creating structural and institutional costs for change. Evolutionary advances often take place in non-linear, uncertain, and indeterministic ways, as the process of AI's development has shown. In an environment characterized by technological challenges and organized complexity, with different actors interacting in highly complicated ways, evolutionary change is hard to actualize (Weaver 1948; Liu 2018a).

4.1 Unmet expectations for awareness

Two lines of argument help explain the processes of technological evolution. The first understands evolution as a process in which technological regimes create inertia for established technologies (Nelson and Winter 1982). The second identifies socio-technological transitions as processes in which "new combinations" of elements are created that lead to new trajectories of change (Schumpeter 1934). Either way, evolution is characterized by uncertainties and costs, and by more failures than successes. When it comes to AI's development, especially in the highly unpredictable, uncertain, complex, and confrontational scenarios of military conflict, many "expected" achievements and images of intelligentized military transformation are far from reality, both now and for the foreseeable future.

There is no doubt that AI has great advantages over humans in tasks such as searching, computing, storing, and optimizing. When more targets and more complex missions are present, however, it is very challenging for AI to form the correct or effective general situational awareness expected in the aforementioned transformational efforts (Jia and Wang 2020). Unlike human cognition, AI at present is a solely computer-centered and statistics-based tool. It remains a daunting task to introduce the human cognitive model into AI so that it can reason instead of compute, decide instead of match, and memorize instead of store.

As mentioned, AI's most significant potential contribution to future patterns of warfare is the creation of a new model of general situational awareness. But what current AI can offer, for the foreseeable future, is much less than that. Situational awareness is composed of two parts. One is the mechanical part, the formalization of symbols. The other is the organic part, which involves the intentional acts of understanding, explaining, and thinking. The mechanical part can be optimized computationally, but the organic part is only possible when cognitive processing is present.

In essence, the computer in the current age of AI is a tool for realizing informal intentionality by means of formalization; it reflects physical rules and psychological traces through mathematics and physics. There is no place for "common sense" in AI, yet common sense is the very essence of cognition. Human intelligence, with its organic framework of cognition, shows its value especially in highly competitive and complex environments such as militarized conflicts. This cognition goes far beyond what a machine can do today: it is not only physical, but also psychological, physiological, and ethical, and it is characterized by a topology of time and space. Without such compound cognition, what AI has is nothing but statistical inputs and probabilistic outputs.

Given this foundational feature, mainstream AI relies on a strong assumption of rational choice, which implies that the individuals or groups with decision-making power exhibit high levels of behavioral homogeneity. In the context of war and conflict, however, this assumption neglects vital differences between warring parties, as well as "anomalies" in interpretations of fluid environments and unstable processes.

To solve this fundamental problem, many researchers and thinkers have, after years of development, been trying to deconstruct and rebuild mainstream intelligence science by bringing in the heterogeneity of individual behaviors. Here, the bottleneck of human–machine integration is not simple interaction between human and machine, but the fusion of cognition and computation.

Between human and machine, as shown in Table 1, the former is relatively strong at identifying the dynamic features of a situation, while the latter is stronger at identifying static features. The human is stronger at understanding without knowing everything, while the machine is potentially closer to all-knowing but does not understand well. The two are simply ontologically different.

Table 1 Human–machine relative strength and weakness in awareness

4.2 Relative (dis-)advantages

Technically speaking, the limitations of AI stem from a weak theoretical foundation and fragile algorithmic robustness, due to the lack of effective approaches to simulating brain cognition. The causes lie in the following issues: we have so far achieved no fundamental breakthroughs in understanding the cognitive mechanisms of the human mind; mainstream AI is still limited to classical logic and statistical thinking, using knowledge acquired from large amounts of data; support from massive training data and high-performance computing platforms is indispensable; big data is never "big" enough, as a truly intelligent machine under the current model would need an infinite amount of data, whereas the human brain requires neither infinite data nor tremendous energy to operate; AI's performance relies on parameter optimization, and its models lack explicability; and small perturbations in the data and parameters of a neural network can cause great deviations, a problem that has yet to be satisfyingly addressed.

Some of these issues are related to the fundamental path and nature of current AI. Without a completely new path, bottlenecks cannot be solved. But some other issues are considered to be more technically controllable and manageable, and thus have raised particular concerns, such as the questions of adaptability, perturbations, and ethics. When related technical, operational, and policy challenges are met appropriately, it is believed that AI in this current form—for instance, characterized by deep learning—can still vastly improve effectiveness and efficiency on the battlefield.

Due to these limitations, in both a scientific-philosophical and a technological sense, AI is generally not suitable for general-purpose applications. In addition, it is computationally intensive to train, requiring experienced people to adjust parameters (that is, to set the framing and the hyperparameters) to increase efficiency.

AI is at once strong and weak, enabling and limiting, promising and self-defeating. There is a clear border within which AI can and will dramatically shift the way major powers fight one another. There are also myths which, if neglected, can lead to misplaced and empty expectations of AI's impact. From traditional machine learning algorithms to the rapid development of deep neural networks, one of the fundamental drivers of AI's development has been the breakthrough in computing power. For instance, graphics processing units (GPUs), with extremely strong computing power, have largely replaced central processing units (CPUs) in such workloads, enabling complex computations to run more smoothly. The current algorithm family may be running out of potential. Without breakthroughs in key technologies, such as efficient chips and advanced quantum computers, the future development of AI will be fundamentally limited. But that does not mean the potential for AI utilization has already been exhausted.

It is through the lens of this balanced perspective that we should see the challenges that AI and associated technologies might have already created in great power interactions, especially in the highly complicated military arena. The challenges to stability and risks of loss of control exist in terms of both the strengths and weaknesses of AI.

5 Risks of instability

The AI-enabled evolution in military affairs in major countries like the U.S. and China is a product of a push and pull across different political, organizational, and technological layers of national security. As these efforts accumulate operational momentum, some of the military and security logics beneath bilateral security relations may go awry if not well managed.

As previously mentioned, current AI systems are largely limited to performing the tasks for which they were specifically designed and trained. But there are specific attempts by the U.S. to go beyond this limitation. Lifelong Learning Machines (L2M), a four-year DARPA program launched in 2017, for instance, is intended to develop novel machine-learning approaches that enable systems to learn continuously during execution and apply previously learned information to novel situations the way biological systems do. The program has two technical areas: the first focuses on the development of continual learning mechanisms operating in a unified system; the second aims to translate biological learning mechanisms into algorithms. Such an effort is a double-edged sword when it comes to the impact of AI on military and strategic stability.

Indeed, AI can present some stabilizing effects, for instance, through the enhancement of crisis and battlefield simulations. AI-enabled war games now involve complex multirole interactions with variables and parameters that can be adjusted to explore how dynamic interactions of various factors, such as weapons and allies, can influence the development of a complex strategic environment. This use of evolutionary learning can help stabilize strategic relations and mutual deterrence by demonstrating to decision-makers the consequences of certain behaviors and actions.
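A minimal sketch of that kind of parameter adjustment (purely illustrative and under our own assumptions; the two-parameter escalation model below stands in for a far richer real war game) sweeps both sides' postures and reports the average escalation level each combination produces:

```python
import itertools
import random

def run_episode(aggr_a, aggr_b, rounds=10, seed=0):
    # Crude escalation ladder: 0 = calm, higher = worse.
    rng = random.Random(seed)
    level = 0
    for _ in range(rounds):
        for aggr in (aggr_a, aggr_b):
            # Escalation grows more likely when tensions are already high.
            p = min(1.0, aggr + 0.05 * level)
            if rng.random() < p:
                level += 1
    return level

# Sweep each side's posture to see how parameter choices shape outcomes.
for a, b in itertools.product((0.1, 0.3), repeat=2):
    levels = [run_episode(a, b, seed=s) for s in range(1000)]
    print(f"posture A={a}, B={b}: mean escalation {sum(levels) / len(levels):.2f}")
```

Even in this toy form, the sweep shows decision-makers how small shifts in posture compound over repeated interactions, which is the stabilizing demonstration effect described above.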

In more areas, however, AI may cause disturbances that damage operational and strategic stability between major powers, as a new pattern of warfare, and preparations for it, are on the way. Disturbance caused by AI can come from two directions: first, AI is quite capable; second, it is still largely impotent. It is worth noting that the following discussion is not about military effectiveness, but about the impact of technological evolution on the way militaries and countries interact. Therefore, the following lists of disturbing factors do not necessarily exhaust the characteristics of current AI-related doctrines or of future intelligentized warfare.

5.1 Disruption through strengths

For both major and minor powers, there is still no answer to the question of how AI-enabled weapon systems might alter different scenarios. Without settling this question first, the introduction of AI-enabled weapons can destabilize tense security environments or cause unnecessary losses and casualties. The operational and strategic enabling effects of AI-based military transformations and preparations, if not regulated and managed systematically, can harm traditional stability and create a dilemma for mutual deterrence between major military powers in six ways, regarding: strategic interaction; nuclear deterrence; the nuclear-conventional nexus; cybersecurity; operational decision-making; and ethical misjudgment.

First, AI's enabling capability, coupled with the new pursuits of current military transformations, can shake the interactive foundation of strategic stability. Basic teachings of international relations, strategic studies, and game theory hold that a key reason cooperation works under certain circumstances but less so in others is how cooperation evolves over the long run. Cooperation via iterated interactions may evolve well when the shadow of the future is "long" enough, so that future payoffs are not discounted too much and a single defection is not so devastating as to end the game. When competitions and interactions are closer to a one-shot game, one can never be sure of an adversary's intentions, hidden even behind friendly words and gestures. In a basic setting of nuclear deterrence and stability, for instance, when there is little hope for a successful first strike, deterrence works better because the possibility of a second strike iterates the game. This makes cooperation (i.e., mutual deterrence) more plausible.
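This logic can be stated precisely with a standard textbook result (our illustration, not drawn from the article's sources): in an infinitely repeated prisoner's dilemma with payoffs T > R > P > S and discount factor δ (the "shadow of the future"), a grim-trigger strategy sustains cooperation only when the stream of cooperative payoffs outweighs the one-shot gain from defection:

```latex
% Cooperation is sustainable under grim trigger iff cooperating forever
% beats defecting once and being punished thereafter:
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\qquad\Longleftrightarrow\qquad
\delta \;\ge\; \frac{T-R}{T-P}.
```

On this reading, capabilities that raise the temptation payoff of a first strike, or that shorten the expected number of rounds of interaction, push the effective δ below the cooperation threshold.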

In this way, AI-enabled systems, by enhancing the precision and functional efficiency of both strategic and conventional weapons, can lower the value of future payoffs and decrease the potential rounds of interaction. With a shorter shadow of the future, the prospects for cooperation dwindle. For both China and the United States, there is legitimate concern over the deployment of AI-enabled weapon systems in disputed territories, low-to-medium intensity conflict zones, and densely populated urban areas. With military transformations designed to make wars faster, more precisely lethal, and more distant from human fighters and commanders in both the physical and digital worlds, the very interactive requirements of strategic and operational stability are not necessarily met (Qi 2021).

The second, and an especially daunting, challenge that current applications of AI technology can give rise to is a direct threat to nuclear stability. Advances in AI give leading nuclear powers such as the United States greater opportunities to limit the deterrent capabilities of other nuclear powers such as China. Rapidly improving capabilities in ISR data collection and analysis, control of autonomous sensor networks, and autonomous target recognition can enable a technologically superior country not only to track but also to target a smaller power's nuclear assets. Consequently, the pursuit of such capabilities by stronger nuclear powers could undermine a weaker country's nuclear deterrent. Moreover, concerns that a stronger nuclear power has such abilities could lead to an AI arms race and greater instability. One major point of concern in the AI-nuclear nexus is that nuclear powers do not need real AI capabilities, only perceived ones, to destabilize the strategic balance (Saalman 2019; Mahnken et al. 2019).

The third challenge is that AI may contribute to the blurring of lines between conventional and strategic operations. This fuzziness is all the more likely to occur given the current military doctrinal transformation, led by the United States, that focuses on enhancing redundancy, flexibility, and resilience in the face of unpredictable challenges in volatile geopolitical environments. For instance, in a strategic crisis, AI technologies could enable conventional weapons to neutralize nuclear assets, including relatively well-protected targets such as hardened intercontinental ballistic missile silos (Homes 2016). In the eyes of the attacker, this type of attack neutralizes potential retaliatory capabilities and achieves unilateral deterrence in future strategic interactions. It may, in turn, substantially increase the impetus for the state that has been attacked to use its nuclear assets before they are disarmed.

The fourth challenge is the problem of information security. Although all the major powers have invested heavily in cyber defense, the combination of AI with strategic and political factors makes it difficult for major powers such as China and the U.S. to advance meaningful cyber arms control and risk reduction. Cybersecurity dilemmas arise because defensive measures to disrupt, deny, or deter may look like preparations for an attack, or like an attack itself (Buchanan 2017). U.S. Cyber Command has adopted a doctrine of "persistent engagement" and "defending forward" that essentially blurs the line between offense and defense. Strengthened by AI technologies, the cyber domain increasingly becomes a "use it or lose it" arena. Cyber actions to disrupt an opponent's awareness capabilities, with or without corresponding kinetic attacks, can prompt parties in confrontations or crises to escalate rapidly or even launch preemptive attacks (Schneider 2020; Kreps and Schneider 2019).

The fifth challenge, if AI meets the expectations and promises of military transformation, is that rapid decision-supporting and even decision-making functions can become highly destabilizing. AI's advantage in speed can be detrimental if it unnecessarily accelerates the escalation of conflicts from crisis to war, or even from conventional war to nuclear confrontation. Furthermore, improvements in ISR capabilities can narrow the window for diplomatic mediation and reduce the time available for crisis management. At the operational level, as some command and control tasks, especially data- and information-related ones, reach beyond the ceiling of human biochemical conditions and capabilities, the functioning and processes of military systems will become increasingly reliant on AI. Even if the human is kept in the decision loop, the machine will endogenously determine what the human sees, thinks, believes, and decides in military confrontations. Human control, at least in part, is doomed to be neutralized.

Beneath the aforementioned strategic and operational challenges lies a sixth: the danger of ethical misjudgment. If an AI system fails to meet human moral standards, will automation spell disaster? In combat, there have been cases, including U.S. special forces units in Afghanistan, in which troops were spotted by local shepherds, chose not to harm them, and consequently failed in their mission. If AI-assisted decision-making is unconstrained and purely focused on efficiency-seeking, suggestions for the indiscriminate elimination of civilians in such engagements may arise. In this way, AI algorithms may be bound to make mistakes, resulting in unnecessary casualties caused by mixed signals or miscommunication.

All in all, no matter to what extent AI can bring about a fundamental change in and even of warfare, the large-scale use of intelligentized and unmanned equipment enhances the flexibility and precision of operations. At the same time, however, the negative impact of AI systems is also obvious. Moreover, being capable and enabling is not the sole mechanism through which AI can give rise to unwanted consequences.

5.2 Disruption through impotence

As AI is still at a nascent stage of development and application, its use brings with it high risks of mistakes and unintended consequences, due not only to its high capability to change warfare but also to its lack of power to meet expectations. All of the following drawbacks and limits of AI in the military environment create opportunities and space for critical mistakes that decrease operational and/or strategic stability between major powers.

The first challenge stemming from AI's impotence results from the inherent limitations of narrow AI, which relies heavily on high-quality data for training. When training data is limited, artificial neural networks are prone to overfitting and poor generalization. A model is overfitted when it is too attuned and specific to the original dataset on which it was trained. Because it fits the training set exactly, it loses applicability to other data and the capability to generalize across different problems.
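A minimal numerical sketch of this failure mode (our illustration; the linear process, sample size, and polynomial degrees are arbitrary assumptions): with only eight noisy training points, a degree-7 polynomial nearly interpolates the training set yet typically generalizes far worse than a simpler model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A simple underlying process (y = x) observed with noise; only 8 training samples.
x_train = rng.uniform(-1, 1, 8)
y_train = x_train + rng.normal(0, 0.2, 8)
x_test = rng.uniform(-1, 1, 200)
y_test = x_test + rng.normal(0, 0.2, 200)

for degree in (1, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-7 model drives its training error to nearly zero by fitting the noise itself, which is exactly the loss of generalization described above.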

This naturally limits the reliability of AI in a turbulent, militarized, competitive, and conflictual environment. An AI system, as a component of military transformation, is based on established rules, experiences, and knowledge. It cannot independently innovate to face an unpredictable future. Once an opponent's behavior deviates from past patterns, the system will not know from whom or what to learn. As a result, it will be unable to make the corresponding supportive decisions or judgments in time, significantly increasing the risk of failure.

In addition, systems trained on specific data sets are susceptible to adversarial attacks, biases, and data manipulation. In unpredictable military conflict environments, AI systems can find it difficult to assess and differentiate between distinctive strategic and operational maneuvers, such as a bluff versus a shock-and-awe operation. This shortcoming could lead AI systems to behave unexpectedly in a crisis, underreacting or overreacting, and thereby disrupting deterrence or escalating tensions, respectively.

Furthermore, disrupted and attacked AI systems not only produce errors but also create exploitable loopholes that disturb mutual stability. For example, by modifying image pixels, one can trick Google's image recognition system into misidentifying subjects; wearing specially designed glasses can cause most face recognition systems to misjudge; and adding specific words to a sentence or intentionally misspelling words can interfere with the natural language understanding systems used for text analysis.
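The mechanism behind such pixel-level tricks can be sketched in a few lines (a toy example under our own assumptions, in the spirit of fast-gradient-sign attacks, not code for any deployed system): for a linear classifier, nudging every feature slightly against the weight vector lowers the score by the step size times the sum of the weights' magnitudes, enough to flip the label while each individual change stays small.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=100)   # weights of a toy linear classifier
x = rng.normal(size=100)   # an input to misclassify
if w @ x < 0:
    x = -x                 # ensure the clean label starts positive

# Smallest uniform step that drives the score just below zero: moving every
# feature by eps against sign(w) lowers the score by eps * sum(|w|)
# (the core idea of fast-gradient-sign attacks).
eps = (w @ x + 1.0) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

print(f"clean score: {w @ x:+.2f}")            # positive -> label 1
print(f"adversarial score: {w @ x_adv:+.2f}")  # -1.00 -> label 0
print(f"per-feature change: {eps:.3f}")        # small relative to feature scale ~1
```

Because the per-feature change is tiny while its effect accumulates across all features, the altered input looks essentially unchanged to a human observer, which is what makes such loopholes hard to detect.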

At the tactical and operational level, if weapon systems are paralyzed, they may lose combat ability or even turn on one's own personnel and assets. Take the F-35 as an example. Arguably the first completely data-driven fighter plane in service, and already on the way to becoming more autonomous, the F-35 has ten million lines of code and relies heavily on software for all its sub-systems. Such an advanced platform has exposed a large number of problems in the past few years that are highly correlated with its software algorithms (U.S. Government Accountability Office 2014). At the strategic level, when an enemy uses a loophole to modify the algorithm, it can cause the system to miscalculate through deception or disguise. If a third party were to use such tactics, it could, theoretically, cause substantive misunderstanding and misjudgment between major powers.

The second challenge is that the lack of explainability creates uncertainties in human–machine interactions. Most algorithms are still at the "black box" stage, meaning that it is very difficult to fully understand why an AI system has made a particular choice, especially if it behaves unexpectedly. Even if an AI system makes clear choices after successfully adapting to new environments, it remains difficult to trust the system given the inability to fully examine and understand how it reached its decisions. In military competition, when systemic confrontation relies heavily on AI-enabled systems, the problem of distrust can defeat the original expectations of efficiency enhancement and hinder the clear and timely processing of information and interactions between actors, not only within each country's command structure but also between the two.

When such difficulties occur in the arena of situational awareness, the robust communication and processing of signals, and the subsequent smooth bargaining between countries and militaries, especially during crises, may be compromised. While in some cases AI-enabled automation can aid situational awareness by eliminating excessive workloads, the complexity and mode errors that many automated systems bring (that is, when people mistakenly believe the system is in one mode when it is actually in another) can disrupt situational awareness by pushing people out of the loop. Humans outside the loop may not be a problem when the automated system is working well. But when automation fails, no one will be there to detect the problem, correctly interpret the information, and intervene in a timely manner.

The third challenge, similar to the last risk associated with AI's strengths, lies in the ethical risks an AI system may incur: the danger of violating the standards of morality in war. In modern warfare, belligerents are constrained by certain international laws, regulations, and cultures. The intelligent combat systems discussed above, however, do not possess human-like cognition, and it is difficult to constrain them with moral and ethical principles. The main source of AI's self-evolution is learning from the experiences and lessons of past operations. Without manual intervention or control, machines cannot tell which lessons should be followed; they only learn from mistakes and successes.

All in all, beneath the promises of AI making future warfare more precise, efficient, and clean, the technology itself creates a variety of sources of potential instability between major military powers like the U.S. and China. Some of the risks are associated with the promises per se: the technology is too mighty. Others are products of the myths of AI mania: its enabling power is limited and constrained, and its possible failures lead to expected and unexpected risks.

In addition, in the U.S. military, the technologically driven pursuit of effectiveness and efficiency is coupled with strategic and geopolitical anxiety over great power challenges (The White House 2017; Department of Defense 2018; Work and Grant 2019; Edelman and McNamara 2017; Montgomery 2014; Dougherty 2019). This combination makes the AI mania intense, both politically and financially, and gives rise to challenges for sound management and governance.

6 Different concerns of effectiveness and risks

Many of the concerns regarding the potential risks of military AI are shared by thinkers and observers from different countries or backgrounds. To a large extent, the aforementioned concerns can find their roots in the Clausewitzian tradition of “the fog of war”, frictions, and uncertainties. Take, for example, the issue and challenge of programming AI systems for contingency and reliability.

Current AI systems are trained for particular tasks. In the domain of military competition and confrontation, however, the environment can change rapidly and in a non-linear fashion. If the context for a given AI system changes, the system may not be able to adapt. The environment will become even more unpredictable with the inclusion of more AI-enabled automated and autonomous systems, moving beyond the ability of any system to comprehend reliably and correctly. Accidents and mistakes happen, and they may happen more frequently than we expect as we move into man–machine loops that further complicate planning and operations.

These sources of concern and moral positions have different political-philosophical orientations. The underlying concerns can be described in terms of principles such as accountability, safety, security, reliability, explicability, human control, and responsibility. Beneath these terms and conceptualizations lies an inner "tug of war" between different philosophies and perceptions. Broadly speaking, a distinction can be made between technological determinism and an emphasis on human agency.

The technologically determinist voices embrace, and have strong expectations of, the changes AI could bring about in warfare. Along this line, in both the U.S. and China, there are those who believe or hope the new technology can give their country and military the upper hand in potential future confrontations (Chen 2018; Zhu 2017; Guo et al. 2016; Qiu 2018; Li 2018, 2019; Shen et al. 2014). Thus, to a certain extent, when they deal with the question of risk management, the question is turned into one of military effectiveness, that is, using more advanced technology to repair imperfect systems. This approach, which is largely efficiency- and effect-oriented, does not, however, address the aforementioned challenges arising from AI and associated technologies.

Arguments focusing on human agency, on the other hand, take a human-centered perspective. Their concerns originate from questions of what humanity means, how to ensure that humans prevail over the ascendant machine, and who should pull the trigger in military domains, and how (Liu 2018b; Dong 2018; Zhao and Li 2017; Zhao 2018; Zhao and Xiao 2019; Gu et al. 2019; Mu 2018; Long and Xu 2020). To manage the relevant risks, a balance has to be struck between these two perspectives.

Having a human in the loop is the most commonly cited principle and standard when it comes to the management and governance of AI in military affairs. It means that a person has complete control over when to start or stop any action performed by an intelligent system. But some hold different views.

Among the arguments advocating less human intervention is the human-on-the-loop approach (Verney et al. 2021). It gives humans oversight of an AI-enabled automated system but pushes human control further away from the center of the automated decision-making process. AI would not need human pre-approval before taking action. This argument is based on the understanding that the military needs to react so quickly to incoming threats that over-intervention by a human in decision-making would do nothing but weaken effectiveness.

Indeed, in military affairs, and especially in the current transformation, managing and governing the impact of AI and its associated risks can run counter to the drive to increase military effectiveness. Take, for example, missile defense after hypersonic missiles came into service. Hypersonic reentry vehicles can travel at more than ten times the speed of sound and, more importantly, can maneuver to evade existing surveillance systems. From the defensive perspective, making ISR systems autonomous and integrating sensor data from all relevant sources can improve the efficiency and effectiveness of any reaction. One could argue that such automation in defensive systems would do little harm to inter-state military stability. But these supporting platforms, without meaningful human intervention, can still create false alarms, while their informational and actional inputs into the chains of command and operations can push inter-state interactions into a dangerous spot of self-reinforcing escalation.

Of course, AI is still far from sufficiently capable of fulfilling advanced informational roles and command missions. This relates to a fundamental challenge for the realization of intelligentized warfare. It is also about data, in sheer size and volume. As mentioned above, the foundational functions of AI center on the optimization of existing data and the collection and processing of new data. With the improvement of AI systems, such as machine learning, image and language recognition, and simulation, the amount of data produced per unit of time will increase. War requires highly accurate and timely data, so large numbers of sensors are needed to acquire real-time battlefield data.

At present, however, data transmission is often stove-piped, coming from non-compatible sensors. Threats sensed by one node of an ISR network cannot (yet) be engaged precisely and expeditiously from other nodes, either across distances or across domains. This continues to create a significant impediment to the realization of human-on-the-loop-type designs. Considerable effort and resources have been invested in related explorations, for instance the aforementioned JADC2. The efforts to develop such frameworks and systems are driven by fear of the potential dangers of ignoring or missing an incoming attack.

In short, countries and militaries need to face the challenges and risks of AI's enabling potential as well as its incompetence. This does not mean that we should reverse the process of making machines "smart." The potential risks caused by AI-enabled systems can and should be solved, but not by cutting the connection between human agency and machine, in effect stepping backward. Instead, agency is the very thing we need. Basic values shared by peoples from different backgrounds, areas, and countries should not simply be written down as rules; they should also be learned by the machines. Enabling machines to learn and understand human values is the ultimate way to fulfill the promise of technological advancement while reducing the probability of system failure.

Technologically speaking, AI can become a reliable tool, partner, or even peer, but only when the machines are made to really understand human values rather than being programmed to behave as if they follow ethical principles. Politically speaking, a constant compromise will have to be made between efficiency and effectiveness on the one hand, and stability and human control on the other, to mediate the risks and challenges that arise from AI’s interference and even dominance.

7 Conclusion: managing the risks

Does the "human-in-the-loop" setting solve the problem of risks and instability? Yes, but only partially. Until artificial general intelligence is born, if ever, there is no such thing as a fully independently functioning AI system. AI systems function in different environments with human input into their specific design and operation, and with varying levels of autonomy. For now, a widely shared idea holds that AI should at all times be used as a decision aid, enhancing the efficiency and precision of human decision-making. Human judgment remains essential and necessary for the execution of military missions.

But it is too optimistic to assume that a human in the loop automatically ensures stability. These systems can give humans faulty or biased information in a crisis. AI systems are not yet sufficiently robust and remain relatively vulnerable to attack and disruption, as discussed earlier. The idea that there are natural limits to AI cannot be discounted; at the same time, the potential of AI's enabling role has not been exhausted. AI-enabled autonomous systems could still take actions that lead to unintended escalation, such as crossing into another country's territory, maneuvering too close to another country's military vessels, or even firing without human authorization. Such incidents could exacerbate a crisis and add challenges to conflict resolution.

Looking ahead, although industry, academia, and think tanks in both countries started dialogue long ago, and these voices are indeed heard by the two governments, more direct exchange of ideas between the political and military decision-makers remains crucial. In the future, the two countries should share rather than compete over the list of crucial ideas, definitions, and operational implications regarding the utilization of AI.

Countries like China and the U.S. should establish systematic confidence-building measures and develop a shared understanding of what a future AI-enabled military transformation might entail, as well as its strategic impacts. Although it may be difficult for the two countries to agree on specific questions, such as how to tailor defense tools for AI systems that span multiple military domains, the two sides can still work together to find common ground and jointly explore the incorporation of AI to strengthen strategic stability.

In this regard, efforts to frame military AI utilization as a national and strategic technological competition contribute nothing but greater distrust to future dialogue and coordination on managing strategic stability. The two countries should hold dialogues examining how existing international law can constrain the use of AI for military purposes and the implications of private sector development of dual-use technology. They should also address the risks that the weaponization of technology poses to nuclear stability and develop practical measures for technological management. Moreover, the two sides should establish a systematic dialogue mechanism to exchange views on emerging concerns, such as fail-safe mechanisms and how to reduce the risk of crises and conflict escalation due to AI-driven cyberattacks, especially on strategic assets.

There are also some beneficial, long-term steps the two countries could take, which are currently low in feasibility. China and the U.S. should share their respective AI strategies, doctrines, or other relevant documents to increase transparency and enhance mutual understanding. The two countries can also work together to make AI more stabilizing through enhancing global arms control, by improving monitoring and verification capabilities. New norms and necessary restrictions against deployment and use of AI-enabled weaponized systems should be considered for now and the future. The two should set limitations on the deployment of AI weapon systems in sensitive areas and exercise restraint in employing AI in strategic command and control systems, particularly with respect to nuclear weapons. Furthermore, they should formulate bilateral or multilateral agreements that prohibit attacks on nuclear C4ISR systems. Finally, they should work to prevent the use of autonomous weapons against other countries’ strategic assets, including missile submarines, intercontinental ballistic missiles, and second-strike countermeasure systems.

The future is not now, yet. AI is still at a nascent stage of development. But harm is possible, either now or in the very near future. In the Chinese language, the word for "crisis" (weiji) literally and philosophically combines "danger" (wei) and "opportunity" (ji). Technology has never been an apolitical and amoral force, and it is not one today. AI is a technology, not a tool on its own. The influence of, and the opportunities brought about by, AI depend on how it is developed and applied in particular scenarios. Its impact, and the policy responses to it, are embedded in both technological and political-strategic logics. How AI shapes future warfare and stability is not solely determined by its functional capacity. To develop technologies to fight, we need to fight the negative effects of technologies at the same time.