Abstract
Defence agencies across the globe identify artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This chapter provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.
Notes
- 3. Roberts et al. (2020).
- 16. It should be noted that the High-Level Expert Group's principles also include provisions for human control, but, given the Group's focus on trustworthy AI, these are more flexible. For example, they allow that less human oversight may be exercised so long as more extensive testing and stricter governance are in place.
References
Acalvio autonomous deception. (2019). Acalvio. https://www.acalvio.com/
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709. https://doi.org/10.1017/S1816383112000768
BehavioSec: Continuous authentication through behavioral biometrics. (2019). BehavioSec. https://www.behaviosec.com/
Boardman, M., & Butcher, F. (2019). An exploration of maintaining human control in AI-enabled systems and the challenges of achieving it. STO-MP-IST-178.
Boulanin, V., Carlsson, M. P., Goussac, N., & Davidson, D. (2020). Limits on autonomy in weapon systems: Identifying practical elements of human control. Stockholm International Peace Research Institute and the International Committee of the Red Cross. https://www.sipri.org/publications/2020/other-publications/limits-autonomy-weapon-systems-identifying-practical-elements-human-control-0
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., & Dafoe, A., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. ArXiv:1802.07228 [Cs], February. http://arxiv.org/abs/1802.07228
Brunstetter, D., & Braun, M. (2013). From jus ad bellum to jus ad vim: Recalibrating our understanding of the moral use of force. Ethics & International Affairs, 27(01), 87–106. https://doi.org/10.1017/S0892679412000792
DarkLight offers first of its kind artificial intelligence to enhance cybersecurity defenses. (2017). Business Wire. https://www.businesswire.com/news/home/20170726005117/en/DarkLight-Offers-Kind-Artificial-Intelligence-Enhance-Cybersecurity
DeepLocker: How AI can power a stealthy new breed of malware. (2018). Security Intelligence (blog). https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/
Department for Digital, Culture, Media & Sport. (2018). Data Ethics Framework. https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework
DIB. (2020a). AI principles: Recommendations on the ethical use of Artificial Intelligence by the Department of Defense. https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF
DIB. (2020b). AI principles: Recommendations on the ethical use of Artificial Intelligence by the Department of Defense - supporting document. Defence Innovation Board [DIB]. https://media.defense.gov/2019/Oct/31/2002204459/-1/-1/0/DIB_AI_PRINCIPLES_SUPPORTING_DOCUMENT.PDF
Docherty, B. (2014). Shaking the foundations: The human rights implications of killer robots. Human Rights Watch. https://www.hrw.org/report/2014/05/12/shaking-foundations/human-rights-implications-killer-robots
Ekelhof, M. (2019). Moving beyond semantics on Autonomous weapons: Meaningful human control in operation. Global Policy, 10(3), 343–348. https://doi.org/10.1111/1758-5899.12665
Enemark, C. (2011). Drones over Pakistan: Secrecy, ethics, and counterinsurgency. Asian Security, 7(3), 218–237. https://doi.org/10.1080/14799855.2011.615082
Floridi, L. (2008). The method of levels of abstraction. Minds and Machines, 18(3), 303–329. https://doi.org/10.1007/s11023-008-9113-7
Floridi, L. (2016a). Mature information societies—A matter of expectations. Philosophy & Technology, 29(1), 1–4. https://doi.org/10.1007/s13347-016-0214-6
Floridi, L. (2016b). Faultless responsibility: On the nature and allocation of moral responsibility for distributed moral actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160112. https://doi.org/10.1098/rsta.2016.0112
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31(1), 1–8. https://doi.org/10.1007/s13347-018-0303-9
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Floridi, L., & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Fraga-Lamas, P., Fernández-Caramés, T. M., Suárez-Albela, M., Castedo, L., & González-López, M. (2016). A review on internet of things for defense and public safety. Sensors (Basel, Switzerland), 16(10). https://doi.org/10.3390/s16101644
Gavaghan, C., Knott, A., Maclaurin, J., Zerilli, J., & Liddicoat, J. (2019). Government use of artificial intelligence in New Zealand: Final report on phase 1 of the Law Foundation's Artificial Intelligence and Law in New Zealand project. New Zealand Law Foundation. https://www.cs.otago.ac.nz/research/ai/AI-Law/NZLF%20report.pdf
International Telecommunication Union. (2017). Minimum requirements related to technical performance for IMT-2020 radio interface(s). https://www.itu.int/pub/R-REP-M.2410-2017
Japanese Society for Artificial Intelligence [JSAI]. (2017). Ethical Guidelines. http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389–399. https://doi.org/10.1038/s42256-019-0088-2
Johnson, A. M., & Axinn, S. (2013). The morality of autonomous robots. Journal of Military Ethics, 12(2), 129–141. https://doi.org/10.1080/15027570.2013.818399
King, T. M., Arbon, J., Santiago, D., Adamo, D., Chin, W., & Shanmugam, R. (2019). AI for testing today and tomorrow: Industry perspectives. In 2019 IEEE international conference on Artificial Intelligence Testing (AITest) (pp. 81–88). IEEE. https://doi.org/10.1109/AITest.2019.000-3
Kott, A., Swami, A., & West, B. J. (2017). The internet of battle things. ArXiv:1712.08980 [Cs], December. http://arxiv.org/abs/1712.08980
Lysaght, R. J., Harris, R., & Kelly, W. (1988). Artificial intelligence for command and control. Analytics Inc., Willow Grove, PA. https://apps.dtic.mil/docs/citations/ADA229342
McMahan, J. (2013). Foreword. In R. Jenkins, M. Robillard, & B. J. Strawser (Eds.), Who should die? The ethics of killing in war (pp. ix–xiv). Oxford University Press.
Mirsky, Y., Mahler, T., Shelef, I., & Elovici, Y. (2019). CT-GAN: Malicious tampering of 3D medical imagery using deep learning. ResearchGate. https://www.researchgate.net/publication/330357848_CT-GAN_Malicious_Tampering_of_3D_Medical_Imagery_using_Deep_Learning/figures?lo=1
Mökander, J., & Floridi, L. (2021). Ethics-based auditing to develop trustworthy AI. Minds and Machines, 1–5. https://doi.org/10.1007/s11023-021-09557-8
NATO. (2020). NATO 2030: United for a new era. Brussels. https://www.nato.int/nato_static_fl2014/assets/pdf/2020/12/pdf/201201-Reflection-Group-Final-Report-Uni.pdf
O’Connell, M. E. (2014). The American way of bombing: How legal and ethical norms change. In M. Evangelista & H. Shue (Eds.), The American way of bombing: Changing ethical and legal norms, from flying fortresses to drones. Cornell University Press.
Rigaki, M., & Elragal, A. (2017). Adversarial deep learning against intrusion detection classifiers (p. 14). Luleå tekniska universitet, Datavetenskap.
Roberts, H., Cowls, J., Morley, J., Taddeo, M., Wang, V., & Floridi, L. (2020). The Chinese approach to artificial intelligence: An analysis of policy, ethics, and regulation. AI & Society, 36. https://doi.org/10.1007/s00146-020-00992-2
Schubert, J., Brynielsson, J., Nilsson, M., & Svenmarck, P. (2018). Artificial intelligence for decision support in command and control systems (p. 15).
Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity. Ethics and Information Technology, 21(2), 75–87. https://doi.org/10.1007/s10676-018-9494-0
Sharkey, N. (2010). Saying “no!” to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383. https://doi.org/10.1080/15027570.2010.537903
Sharkey, N. (2012a). Killing made easy: From joysticks to politics. In P. Lin, K. Abney, & G. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 111–128). MIT Press.
Sharkey, N. E. (2012b). The evitability of autonomous robot warfare. International Review of the Red Cross, 94(886), 787–799. https://doi.org/10.1017/S1816383112000732
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77. https://doi.org/10.1111/j.1468-5930.2007.00346.x
Sparrow, R. (2016). Robots and respect: Assessing the case against autonomous weapon systems. Ethics & International Affairs, 30(1), 93–116. https://doi.org/10.1017/S0892679415000647
Taddeo, M. (2012a). Information warfare: A philosophical perspective. Philosophy and Technology, 25(1), 105–120.
Taddeo, M. (2012b). An analysis for a just cyber warfare. In Fourth international conference of cyber conflict. NATO CCD COE and IEEE Publication.
Taddeo, M. (2013). Cyber security and individual rights, striking the right balance. Philosophy & Technology, 26(4), 353–356. https://doi.org/10.1007/s13347-013-0140-9
Taddeo, M. (2014a). Just information warfare. Topoi, 1–12. https://doi.org/10.1007/s11245-014-9245-8
Taddeo, M. (2014b). The struggle between liberties and authorities in the information age. Science and Engineering Ethics, 1–14. https://doi.org/10.1007/s11948-014-9586-0
Taddeo, M. (2017a). The limits of deterrence theory in cyberspace. Philosophy & Technology, 31. https://doi.org/10.1007/s13347-017-0290-2
Taddeo, M. (2017b). Trusting digital technologies correctly. Minds and Machines, 27(4), 565–568. https://doi.org/10.1007/s11023-017-9450-5
Taddeo, M. (2019a). The challenges of cyber deterrence. In C. Öhman & D. Watson (Eds.), The 2018 yearbook of the digital ethics lab (pp. 85–103). Springer. https://doi.org/10.1007/978-3-030-17152-0_7
Taddeo, M. (2019b). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29(2), 187–191. https://doi.org/10.1007/s11023-019-09504-8
Taddeo, M., & Floridi, L. (2018). Regulate artificial intelligence to avert cyber arms race. Nature, 556(7701), 296–298. https://doi.org/10.1038/d41586-018-04602-6
Taddeo, M., McCutcheon, T., & Floridi, L. (2019). Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1(12), 557–560. https://doi.org/10.1038/s42256-019-0109-1
Tamburrini, G. (2016). On banning autonomous weapons systems: From deontological to wide consequentialist reasons. In N. Bhuta, S. Beck, R. Geiß, H.-Y. Liu, & C. Kreß (Eds.), Autonomous weapons systems: Law, ethics, policy (pp. 122–142). Cambridge University Press.
The UK and International Humanitarian Law 2018. (n.d.) Accessed 1 Nov 2020. https://www.gov.uk/government/publications/international-humanitarian-law-and-the-uk-government/uk-and-international-humanitarian-law-2018
US Army. (2017). Robotic and autonomous systems strategy. https://www.tradoc.army.mil/Portals/14/Documents/RAS_Strategy.pdf
Yang, G.-Z., Bellingham, J., Dupont, P. E., Fischer, P., Floridi, L., Full, R., Jacobstein, N., et al. (2018). The grand challenges of science robotics. Science Robotics, 3(14), eaar7650. https://doi.org/10.1126/scirobotics.aar7650
Zhuge, J., Holz, T., Han, X., Song, C., & Zou, W. (2007). Collecting autonomous spreading malware using high-interaction honeypots. In S. Qing, H. Imai, & G. Wang (Eds.), Information and communications security (Lecture Notes in Computer Science) (pp. 438–451). Springer.
Acknowledgement
We are very grateful to Isaac Taylor for his work and comments on an early version of this chapter and to Rebecca Hogg and the participants of the 2020 Dstl AI Fest for their questions and comments, for they enabled us to improve several aspects of our analysis. We are responsible for any remaining mistakes.
Funding
Mariarosaria Taddeo and Alexander Blanchard’s work on this chapter has been funded by the Dstl Ethics Fellowship held at the Alan Turing Institute. The research underpinning this work was funded by the UK Defence Chief Scientific Advisor’s Science and Technology Portfolio, through the Dstl Autonomy Programme. This chapter is an overview of UK Ministry of Defence (MOD) sponsored research and is released for informational purposes only. The contents of this paper should not be interpreted as representing the views of the UK MOD, nor should it be assumed that they reflect any current or future UK MOD policy. The information contained in this chapter cannot supersede any statutory or contractual requirements or liabilities and is offered without prejudice or commitment.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Taddeo, M., McNeish, D., Blanchard, A., Edgar, E. (2022). Ethical Principles for Artificial Intelligence in National Defence. In: Mökander, J., Ziosi, M. (eds) The 2021 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-09846-8_16
DOI: https://doi.org/10.1007/978-3-031-09846-8_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09845-1
Online ISBN: 978-3-031-09846-8
eBook Packages: Religion and Philosophy; Philosophy and Religion (R0)