Abstract
The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about responsible AI should (i) concern each step in the MDMP, and (ii) take ethical considerations and enhanced performance in military operations into account. A characterization of the debate on responsible AI in the military, considering both machine and human weaknesses and strengths, is provided in this paper. We present inroads into the improvement of the MDMP, and thus military operations, through the use of AI for decision support, taking each quadrant of this characterization into account.
Introduction
While in many private and public sector domains AI solutions are becoming an essential tool driving change and development, progress in the use of AI for military purposes has been hindered by a number of important ethical questions for which answers have been lacking. These questions primarily concern autonomous military platforms and typically center on the use of lethal autonomous weapon systems (LAWS) and the potential risk of nuclear escalation. A recent literature review on data science and AI in military decision-making found that most of the studies examining these topics originate in the social sciences. As a result, the debate about the use of AI for military purposes, although of high strategic importance, appears to be limited in scope and perspective. Additionally, the use of data science at the operational and strategic level seems to be largely under-examined in the current literature (Meerveld & Lindelauf, 2022). In this paper, we argue that the ethical discussion on the use of AI in military operations should shift its focus away from so-called ‘killer robots’ and fully autonomous AI applications toward solutions that remain subject to (meaningful) human control. As argued by various researchers [e.g., Tóth et al. (2022)], the use of LAWS is generally considered to be illegal and immoral, despite potentially decreasing risks to military personnel. There is also consensus among policy makers that AI cannot fully replace human decision-making. However, it is necessary to examine both the opportunities and risks of military AI in a broader context and to explore how AI technology can be controlled, supervised and potentially assimilated into force structure and doctrine (Johnson, 2020a, b), either strengthening or complicating deterrence (Johnson, 2019, 2020a, b).
In line with the consequentialist approach towards the ethics of military AI, we argue that discussions of the responsibility of AI-based decision support techniques should take military effectiveness and the entire decision-making chain in military operations into account. For example, certain types of military AI robots subject to human control and judgment may be permissible for self-defense purposes, human-AI teaming could lead to faster and more appropriate decision-making under pressure and uncertainty, and AI systems could be broadly used for adaptive training of military personnel, thereby helping to mitigate decision-making biases [e.g., by detecting drowsiness or fatigue from neurometric signals in the brain (Weelden et al., 2022)]. In Fig. 1 we visualize the current debate on responsible AI in a military context and its focal points (i.e., the lower right quadrant, Machine Weakness (MW), and the endpoint of the MDMP). In what follows, we first elaborate on the military decision-making process (MDMP) that in large part precedes lethal target engagement on a battlefield. Next, we present some examples of the potential use of AI solutions in the MDMP together with their benefits, and then address the issue of the (ir)responsibility of military AI.
AI in support of the military decision-making process (MDMP)
Military decision-making consists of an iterative logical planning method to select the best course of action for a given battlefield situation. It can be conducted at levels ranging from tactical to strategic. Each step in this process lends itself to automation. This holds not only for the MDMP itself, but also for related processes such as the intelligence cycle and the targeting cycle. As argued in Ekelhof (2018), instead of focusing on target engagement as an endpoint, the process should be examined in its entirety. To illustrate this point, we visualized the preferred scope with the blue circle in Fig. 1. Below, we first briefly describe the MDMP. Subsequently, we explore the potential advantages of AI in decision-making and provide some examples of how AI can specifically support the MDMP at several different (sub-)steps.
The MDMP and its challenges
The US Army defines seven steps in the MDMP: (1) receipt of mission, (2) mission analysis, (3) course of action (COA) development, (4) COA analysis, (5) COA comparison, (6) COA approval, and (7) orders production, dissemination, and transition (Reese, 2015). The level of detail of the MDMP depends on the available time and resources, as well as other factors. Each step in the MDMP has numerous sub-steps that generate intermediate products. Examples include intelligence products developed during the intelligence preparation of the battlefield (IPB) that are used to indicate COAs and decision points for commanders, or geospatial products from terrain analyses that can include recommendations on battle positions and optimal avenues of approach. The intelligence cycle, per NATO standard consisting of four steps (Direct, Collect, Process, and Disseminate) (Davies & Gustafson, 2013), is a separate but related sub-process by which these intelligence products are created. Other examples of sub-processes in the MDMP are the targeting cycle, as explained by Ekelhof (2018), and the continuous lessons-learned process that incorporates best practices and lessons learned into military doctrine (Weber & Aha, 2003), which ultimately forms important input for, for example, the COA development phase.

The MDMP and its related processes entail many labor-intensive, handcrafted products. This has two important consequences. First, due to the complexity of the information space, the MDMP is highly susceptible to cognitive biases. These can be both conscious and unconscious and may result in suboptimal performance. An example of a cognitive bias is groupthink, a problem typically encountered during the analysis and assessment phase of the intelligence cycle (Parker, 2020).
Another example is the anchoring bias, which occurs when decisions are based on initial evidence (the anchor) (Heuer, 1999), as exemplified by a group of aviators who must determine the optimal location of battle positions after having received an initial list of good locations during helicopter mission planning. Even though intuitive decision-making in the MDMP may be effective, it is well known that both intuition and uncertainty can lead to erroneous decision outcomes (Van Den Bosch & Bronkhorst, 2018). Because human cognitive mechanisms are ill-equipped to convert a high volume of data into valuable knowledge (Cotton, 2005), the susceptibility to cognitive biases increases with the exponential growth of data volume (Heuer, 1999). The challenge of information overload is expected to only increase, since modern military operations increasingly rely on open-source data (Ekelhof, 2018). Second, labor-intensive processes tend to be time-consuming. The contemporary digitized environment produces a proliferation of data sources in different formats (i.e., numerical, text, sound, and image), and intelligence requires their fusion and interpretation (Van Den Bosch & Bronkhorst, 2018). In most military situations, it is of great importance to design efficient and streamlined planning processes, avoiding labor-intensive sub-steps where possible, to ensure that no time is lost (Hanska, 2020). After all, the aim is to outpace the opponent’s OODA loop (i.e., Observe, Orient, Decide, Act) (Osinga, 2007), and AI-based automation can be an important driver of such efficiency gains. In addition, time pressure can further increase the chance of cognitive bias (e.g., Roskes et al., 2011; Eidelman & Crandall, 2012). In sum, human decision-making mechanisms appear deficient in many military circumstances, given a limited capacity to process all potentially relevant data and a limited amount of time.
The value of AI lies in its capacity to support human decision-making and thereby optimize the overall outcome (Lindelauf et al., 2022). In the next section, we address the opportunities offered by AI in more detail by presenting examples of automation of (sub-)elements in the MDMP.
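To make the COA comparison step (step 5 of the MDMP) concrete, the kind of decision support discussed here can be sketched as a weighted decision matrix. This is a minimal, hypothetical illustration: the criteria names, weights, and scores below are invented for the example and are not doctrinal values.

```python
# Hypothetical weighted decision matrix for COA comparison (MDMP step 5).
# Criteria, weights, and scores are illustrative only, not doctrinal values.

def compare_coas(coas, weights):
    """Rank courses of action by weighted score (higher is better)."""
    ranked = []
    for name, scores in coas.items():
        total = sum(weights[c] * s for c, s in scores.items())
        ranked.append((name, total))
    return sorted(ranked, key=lambda x: x[1], reverse=True)

# Staff-assigned weights (must reflect the commander's evaluation criteria).
weights = {"speed": 0.4, "force_protection": 0.35, "simplicity": 0.25}

# Scores per COA on a notional 1-10 scale.
coas = {
    "COA-1 (envelopment)": {"speed": 7, "force_protection": 5, "simplicity": 4},
    "COA-2 (frontal)":     {"speed": 4, "force_protection": 6, "simplicity": 8},
}

print(compare_coas(coas, weights))  # COA-2 ranks first (5.7 vs 5.55)
```

The automation opportunity lies not in the trivial arithmetic but in feeding such a matrix with machine-generated scores (e.g., from terrain or logistics models) and re-running it as the situation changes.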
The added value of AI for military decision-making
Given the limitations of human decision-making, the advantage of (partial) automation with AI can be found both in the temporal dimension and in decision quality. A NATO Research Task Group, for instance, examined the need for automation in every step of the intelligence cycle (NATO Science & Technology Organization, 2020) and found that AI helps to automate manual tasks, identify patterns in complex datasets, and accelerate the decision-making process in general. Since the collection of more information and perspectives results in less biased intelligence products (Richey, 2015), using computing power to increase the amount of data that can be processed and analyzed may reduce cognitive bias. Confirmation bias, for instance, can be countered through the automated analysis of competing hypotheses (Dhami et al., 2019). Other advantages of machines over humans are that they allow for scalable simulations, conduct logical reasoning, and have transferable knowledge and an expandable memory space (Suresh & Guttag, 2021; Silver et al., 2016).

An important aspect of the current debate about the use of AI for decision-making concerns the potential dangers of providing AI systems with too much autonomy, leading to unforeseen consequences. Part of the solution is to provide sufficient information to the leadership about how the AI systems have been designed, what their decisions are based on (explainability), which tasks are suitable for automation, and how to deal with technical errors (Lever & Schneider, 2021). Tasks not suitable for automation, i.e., those in which humans outperform machines, are typically tasks of high complexity (Blair et al., 2021). The debate on responsible AI should therefore also take human strengths (HS quadrant) into account. In practice, AI systems cannot work in isolation but need to team up with human decision-makers. Next to the acknowledgment of bounded rationality in humans and ‘human weakness’ (viz. the lower left quadrant in Fig. 1; HW), it is also important to consider that AI cannot be completely free of bias, for two reasons. First, all AI systems based on machine learning have a so-called inductive bias: the set of implicit or explicit assumptions required for making predictions about unseen data. Second, the output of machine learning systems is based on past data collected in human decision-making events (machine weakness, MW; viz. the lower right quadrant in Fig. 1). Uncovering the second type of bias may yield insights into past human performance and may ultimately improve the overall process.
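The automated analysis of competing hypotheses (ACH) mentioned above can be sketched in a few lines. ACH counters confirmation bias by ranking hypotheses on how little evidence *contradicts* them, rather than on how much appears to confirm them. The hypotheses and evidence ratings below are invented for illustration.

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) ranking.
# Ratings per evidence item: +1 consistent, -1 inconsistent, 0 neutral.
# ACH favours the hypothesis with the FEWEST inconsistencies, countering
# confirmation bias, which favours the most "confirmed" hypothesis.

def ach_rank(matrix):
    """matrix: {hypothesis: [ratings per evidence item]} -> least-refuted first."""
    inconsistency = {h: sum(1 for r in ratings if r < 0)
                     for h, ratings in matrix.items()}
    return sorted(inconsistency.items(), key=lambda kv: kv[1])

# Notional intelligence question with three competing hypotheses
# scored against four evidence items.
matrix = {
    "H1: enemy defends in place": [+1, -1, -1, 0],
    "H2: enemy withdraws":        [+1, +1, 0, -1],
    "H3: enemy counterattacks":   [-1, -1, +1, -1],
}

print(ach_rank(matrix))  # H2 is least refuted (1 inconsistency)
```

An automated version of this bookkeeping scales to far more hypotheses and evidence items than an analyst can track by hand, which is precisely where the bias-reduction benefit lies.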
Examples of AI in the MDMP
It is important to examine the risks of AI and strategies for their mitigation. Such mitigation, however, is of little use without simultaneously examining the corresponding opportunities (MS quadrant in Fig. 1). In this section, therefore, we present some examples of AI applications in the MDMP. In doing so, we provide an impetus for expanding the debate on responsible AI by taking every quadrant in Fig. 1 into account.

An example of machine strength is the use of AI to aid the intelligence analyst in the generation of geospatial information products for tactical terrain analysis. This is an essential sub-step of the MDMP, since military land operations depend heavily on terrain. AI-supported terrain analysis enables the optimization of possible COAs for a military commander, and additionally allows for an optimized analysis of the most likely enemy course of action (De Reus et al., 2021). Another example is the use of autonomous technologies to aid in target system analysis (TSA), a process that normally takes months (Ekelhof, 2018). TSA consists of the analysis of an enemy’s system in order to identify and prioritize specific targets (and their components), with the goal of optimizing the resources used to neutralize the opponent’s most vulnerable assets (Jux, 2021). Examples of AI use in TSA include automated entity recognition in satellite footage to improve the information position necessary to conduct TSA, and AI-supported prediction of enemy troop locations, buildup, and dynamics based on information gathered in the imagery analysis phase. Ekelhof (2018) also provides examples of autonomous technologies currently in use for weaponeering (i.e., the assessment of which weapon should be used for the selected targets and related military objectives) and collateral damage estimation (CDE), both sub-steps of the targeting process. Another illustrative example of the added value of AI for the MDMP is in wargaming, an important part of the COA analysis phase.
In wargames, AI can, for instance, help participants understand the possible perspectives, perceptions, and calculations of adversaries (Davis & Bracken, 2021). Yet another example is the possibility of a 3D view of a given COA, enabling swift examination of terrain characteristics (e.g., potential sightlines) to enhance decision-making (Kase et al., 2022). AI-enabled cognitive systems can also collect and assess information about the attentional state of human decision-makers, using sensor technologies and neuroimaging data to detect mind wandering or cognitive overload (Weelden et al., 2022). Algorithms from other domains may also offer value for the MDMP, such as the weather-routing optimization algorithm for ships (Lin et al., 2013), the team formation optimization tool used in sports (Beal et al., 2019), or the many applications of deep learning in natural language processing (NLP) (Otter et al., 2020); NLP applications that summarize texts (such as Quillbot and Wordtune) could decrease time to decision in the MDMP. Finally, digital twin technology (using AI) has already demonstrated its value in a military context and holds promise for future applications, e.g., enabling maintenance personnel to predict future engine failures on airplanes (Mendi et al., 2021). In the future, live monitoring of all physical assets relevant to military operations, such as (hostile) military facilities, platforms, and (national) critical infrastructure, might be possible.
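A toy version of the AI-supported terrain analysis discussed above can be sketched as a least-cost-path search over a mobility cost grid, where layers such as slope and vegetation have been combined into a single cost per cell. The grid values below are invented for illustration; real geospatial products would derive them from terrain data.

```python
import heapq

# Illustrative terrain-analysis sketch: find the least-cost avenue of
# approach over a notional mobility cost grid using Dijkstra's algorithm.
# Grid values (low = good going) are invented for this example.

def least_cost_path(cost, start, goal):
    """Return (path, total_cost) from start to goal over a 2D cost grid."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Notional combined mobility cost (e.g., slope + vegetation overlay).
cost = [
    [1, 1, 5, 9],
    [9, 1, 5, 9],
    [9, 1, 1, 1],
    [9, 9, 9, 1],
]
path, total = least_cost_path(cost, (0, 0), (3, 3))
print(path, total)  # the corridor of 1s, total cost 7
```

The same search, run over grids derived from actual elevation and land-cover layers, is the core of the avenue-of-approach recommendations mentioned in the terrain analysis example.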
Conclusion
The debate on responsible AI in a military context should not have a predominant focus on ethical issues regarding LAWS. By providing a characterization of this debate into four quadrants, i.e., human–machine versus strength-weakness, we argued that the use of AI in the entire decision-making chain in military operations is feasible and necessary. We described the MDMP and its challenges resulting from the labor-intensive and handcrafted products it involves. The susceptibility to cognitive biases and the time-consuming character of those labor-intensive processes present limitations to human decision-making. We conclude that the value of AI can, therefore, be found in the capacity to support this decision-making to optimize its outcome. Ignoring the capabilities of AI to alleviate the limitations of human cognitive performance in military operations, thereby potentially increasing risks for military personnel and civilians, would be irresponsible and unethical.
References
Altmann, J., & Sauer, F. (2017). Autonomous weapon systems and strategic stability. Survival, 59(5), 117–142.
Beal, R., Norman, T. J., & Ramchurn, S. D. (2019). Artificial intelligence for team sports: A survey. The Knowledge Engineering Review, 34, e28.
Blair, D., Chapa, J., Cuomo, S., & Hurst, J. (2021). Humans and hardware: an exploration of blended tactical workflows using John Boyd’s OODA loop. In R. Johnson, M. Kitzen, & T. Sweijs (Eds.), The conduct of war in the 21st century : Kinetic, connected and synthetic (pp. 93–115). Taylor & Francis Group.
Cotton, A. J. (2005). Information technology-information overload for strategic leaders. Army War College.
Davies, P. H., & Gustafson, K. (2013). The intelligence cycle is dead, long live the intelligence cycle: rethinking intelligence fundamentals for a new intelligence doctrine. In M. Phythian (Ed.), Understanding the intelligence cycle (pp. 70–89). Routledge.
Davis, P. K., & Bracken, P. (2021). Artificial intelligence for wargaming and modeling. The Journal of Defense Modeling and Simulation, 15485129211073126.
De Reus, N., Kerbusch, P., Schadd, M., & Ab de Vos, M. (2021). Geospatial analysis for Machine Learning in Tactical Decision Support. STO-MP-MSG-184. NATO.
Dhami, M. K., Belton, I. K., & Mandel, D. R. (2019). The “analysis of competing hypotheses” in intelligence analysis. Applied Cognitive Psychology, 33(6), 1080–1090.
Eidelman, S., & Crandall, C. S. (2012). Bias in favor of the status quo. Social and Personality Psychology Compass, 6(3), 270–281.
Ekelhof, M. A. (2018). Lifting the fog of targeting. Naval War College Review, 71(3), 61–95.
Hanska, J. (2020). War of time: Managing time and temporality in operational art. Palgrave Macmillan.
Heuer, R. J. (1999). Psychology of intelligence analysis. Center for the Study of Intelligence.
Horowitz, M. C., Scharre, P., & Velez-Green, A. (2019). A stable nuclear future? The impact of autonomous systems and artificial intelligence. arXiv preprint, arXiv:1912.05291.
Johnson, J. (2019). The AI-cyber nexus: Implications for military escalation, deterrence and strategic stability. Journal of Cyber Policy, 4(3), 442–460. https://doi.org/10.1080/23738871.2019.1701693
Johnson, J. (2020a). Delegating strategic decision-making to machines: Dr. Strangelove Redux? Journal of Strategic Studies. https://doi.org/10.1080/01402390.2020.1759038
Johnson, J. (2020b). Deterrence in the age of artificial intelligence & autonomy: A paradigm shift in nuclear deterrence theory and practice? Defense & Security Analysis, 36(4), 422–448.
Jux, A. (2021). Targeting. In M. Willis, A. Haider, D. C. Teletin, & D. Wagner (Eds.), A Comprehensive approach to countering unmanned aircraft systems (pp. 147–166). Joint Air Power Competence Centre.
Kase, S. E., Hung, C. P., Krayzman, T., Hare, J. Z., Rinderspacher, B. C., & Su, S. M. (2022). The future of collaborative human-artificial intelligence decision-making for mission planning. Frontiers in Psychology, 1246.
Lever, M., & Schneider, S. (2021). Decision augmentation and automation with artificial intelligence: Threat or opportunity for managers? Business Horizons, 64(5), 711–724. https://doi.org/10.1016/j.bushor.2021.02.026
Lin, Y.-H., Fang, M.-C., & Yeung, R. W. (2013). The optimization of ship weather-routing algorithm based on the composite influence of multi-dynamic elements. Applied Ocean Research, 43, 184–194.
Lindelauf, R., Monsuur, H., & Voskuijl, M. (2022). Military helicopter flight mission planning using data science and operations research. In NL ARMS, Netherlands Annual Review of Military Studies. Leiden University Press.
Meerveld, H., & Lindelauf, R. (2022). Data science in military decision-making: A literature review. Retrieved from SSRN https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4217447
Mendi, A. F., Erol, T., & Doğan, D. (2021). Digital twin in the military field. IEEE Internet Computing, 26(5), 33–40.
NATO Science and Technology Organization. (2020). Automation in the intelligence cycle. Retrieved 21 October, 2022, from NATO https://www.sto.nato.int/Lists/STONewsArchive/displaynewsitem.aspx?ID=552
Osinga, F. P. (2007). Science, strategy and war: The strategic theory of John Boyd. Routledge.
Otter, D. W., Medina, J. R., & Kalita, J. K. (2020). A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2), 604–624.
Parker, C. G. (2020). The UK National Security Council and misuse of intelligence by policy makers: Reducing the risk? Intelligence and National Security, 35(7), 990–1006.
Reese, P. P. (2015). Military decisionmaking process: Lessons and best practices. Center for Army Lessons Learned.
Richey, M. K. (2015). From crowds to crystal balls: Hybrid analytic methods for anticipatory intelligence. American Intelligence Journal, 32(1), 146–151.
Roff, H. M. (2014). The strategic robot problem: Lethal autonomous weapons in war. Journal of Military Ethics, 13(3), 211–227.
Roff, H. M., & Danks, D. (2018). “Trust but Verify”: The difficulty of trusting autonomous weapons systems. Journal of Military Ethics, 17(1), 2–20.
Roskes, M., Sligte, D., Shalvi, S., & De Dreu, C. K. (2011). The right side? Under time pressure, approach motivation leads to right-oriented bias. Psychological Science, 22(11), 1403–1407.
Sharkey, N. (2010). Saying ‘no!’ to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383.
Silver, D., Huang, A., Maddison, C., Guez, A., Sifre, L., Van Den Driessche, G., & Dieleman, S. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529(7587), 484–489.
Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Equity and access in algorithms, mechanisms, and optimization (pp. 1–9).
Tóth, Z., Caruana, R., Gruber, T., & Loebbecke, C. (2022). The dawn of the AI robots: Towards a new framework of AI robot accountability. Journal of Business Ethics, 178(4), 895–916.
Van Den Bosch, K., & Bronkhorst, A. (2018). Human-AI cooperation to benefit military decision making. NATO.
Weber, R. O., & Aha, D. W. (2003). Intelligent delivery of military lessons learned. Decision Support Systems, 34(3), 287–304.
Weelden, E. V., Alimardani, M., Wiltshire, T. J., & Louwerse, M. M. (2022). Aviation and neurophysiology: A systematic review. Applied Ergonomics, 105, 103838. https://doi.org/10.1016/j.apergo.2022.103838
Meerveld, H.W., Lindelauf, R.H.A., Postma, E.O. et al. The irresponsibility of not using AI in the military. Ethics Inf Technol 25, 14 (2023). https://doi.org/10.1007/s10676-023-09683-0