Ethical Principles for Artificial Intelligence in National Defence

Chapter in The 2021 Yearbook of the Digital Ethics Lab

Part of the book series: Digital Ethics Lab Yearbook (DELY)

Abstract

Defence agencies across the globe identify artificial intelligence (AI) as a key technology for maintaining an edge over adversaries. As a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Unfortunately, they remain unmatched by efforts to define ethical frameworks to guide the use of AI in the defence domain. This chapter provides one such framework. It identifies five principles (justified and overridable uses; just and transparent systems and processes; human moral responsibility; meaningful human control; reliable AI systems) and related recommendations to foster ethically sound uses of AI for national defence purposes.

Notes

1. https://www.gov.uk/government/publications/future-force-concept-jcn-117
2. https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
3. Roberts et al. (2020).
4. https://www.csa.gov.sg/~/media/csa/documents/publications/singaporecybersecuritystrategy.pdf
5. https://www.nisc.go.jp/eng/pdf/cs-senryaku2018-en.pdf
6. https://www.business.gov.au/news/budget-2019-20
7. www.aitesting.org
8. https://assets.kpmg/content/dam/kpmg/xx/pdf/2018/04/next-major-defense-challenge.pdf
9. https://www.nato.int/docu/review/articles/2019/02/12/natos-role-in-cyberspace/index.html
10. https://www.un.org/en/sections/un-charter/un-charter-full-text/
11. https://www.loc.gov/rr/frd/Military_Law/pdf/ASubjScd-27-1_1975.pdf
12. https://www.icrc.org/en/doc/resources/documents/misc/57jm93.htm
13. https://www.roke.co.uk/products/startle
14. https://breakingdefense.com/2019/03/atlas-killer-robot-no-virtual-crewman-yes/
15. https://www.oecd.org/going-digital/ai/principles/
16. It should be noted that the High-Level Expert Group's principles also include provisions for human control, but given the Group's focus on trustworthy AI, these are more flexible: for example, less human oversight may be exercised so long as more extensive testing and stricter governance are in place.

Acknowledgement

We are very grateful to Isaac Taylor for his work and comments on an early version of this chapter, and to Rebecca Hogg and the participants of the 2020 Dstl AI Fest for their questions and comments, which enabled us to improve several aspects of our analysis. We are responsible for any remaining mistakes.

Funding

Mariarosaria Taddeo and Alexander Blanchard’s work on this chapter has been funded by the Dstl Ethics Fellowship held at the Alan Turing Institute. The research underpinning this work was funded by the UK Defence Chief Scientific Advisor’s Science and Technology Portfolio, through the Dstl Autonomy Programme. This chapter is an overview of UK Ministry of Defence (MOD) sponsored research and is released for informational purposes only. The contents of this chapter should not be interpreted as representing the views of the UK MOD, nor should they be assumed to reflect any current or future UK MOD policy. The information contained in this chapter cannot supersede any statutory or contractual requirements or liabilities and is offered without prejudice or commitment.

Author information

Corresponding author

Correspondence to Mariarosaria Taddeo.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Taddeo, M., McNeish, D., Blanchard, A., Edgar, E. (2022). Ethical Principles for Artificial Intelligence in National Defence. In: Mökander, J., Ziosi, M. (eds) The 2021 Yearbook of the Digital Ethics Lab. Digital Ethics Lab Yearbook. Springer, Cham. https://doi.org/10.1007/978-3-031-09846-8_16
