Regulating Artificial General Intelligence (AGI)

  • Chapter
Law and Artificial Intelligence

Part of the book series: Information Technology and Law Series ((ITLS,volume 35))

Abstract

This chapter discusses whether ongoing EU policymaking on AI is relevant for Artificial General Intelligence (AGI) and what it would mean to regulate it in the future. AGI is typically contrasted with narrow Artificial Intelligence (AI), which excels only within a specific context. Although many researchers are working on AGI, it is uncertain whether developing it is feasible. If achieved, AGI could have cognitive capabilities similar to or beyond those of humans and may be able to perform a broad range of tasks. There are concerns that such AGI could undergo recursive cycles of self-improvement, potentially leading to superintelligence. With such capabilities, superintelligent AGI could become a significant power factor in society. However, dystopian superintelligence scenarios are highly controversial and uncertain, so regulating existing narrow AI should be a priority.


Notes

  1. Bygrave 2020.
  2. Ng and Leung 2020, p. 64.
  3. Garg et al. 2021.
  4. Floridi and Chiriatti 2020.
  5. Silver et al. 2017.
  6. Everitt et al. 2018.
  7. Floridi and Chiriatti 2020.
  8. Everitt 2018; Everitt et al. 2018; Goertzel and Pennachin 2007; Heaven 2020; Huang 2017; Yampolskiy and Fox 2013.
  9. E.g. Bostrom 2014.
  10. Bostrom 2014; Chalmers 2010; Yudkowsky 2008.
  11. This dystopian scenario is extensively elaborated in Bostrom 2014.
  12. United States Executive Office of the President 2016, p. 8.
  13. Collingridge 1980, p. 11.
  14. United States Executive Office of the President 2016, p. 8.
  15. Ibid.
  16. Ibid.
  17. Ibid.
  18. European Commission 2021, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 2021/0106 (COD). Hereinafter referred to as the proposed AIA.
  19. Armstrong et al. 2016; Bostrom 2014; Everitt 2018; Goertzel 2015; Naudé and Dimitri 2020; Yudkowsky 2008.
  20. Ibid.
  21. Everitt 2018; Everitt et al. 2018; Goertzel and Pennachin 2007; Heaven 2020; Huang 2017; Yampolskiy and Fox 2013.
  22. Everitt et al. 2018.
  23. ‘Cognitive’, Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/cognitive. Accessed 29 June 2021.
  24. The question was central to Turing 1950. However, as noted by Bringsjord and Govindarajulu 2020, Descartes discussed an earlier version of the Turing test in 1637: ‘If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that they were not real men.’
  25. Lutz and Tamò 2015 suggest that one should use other verbs to describe the ‘thinking’ of robots, such as ‘sense-process-weigh-act’.
  26. See this in detail in Searle 1997.
  27. Bringsjord and Govindarajulu 2020. Ng and Leung 2020 do not offer a conclusion on whether AGI can achieve consciousness.
  28. See Floridi 2005 for a test to distinguish between conscious (human) and conscious-less agents.
  29. Signorelli 2018. Conscious machines also raise the question of whether they are worthy of rights protection, which is not considered here. See Gellers 2021.
  30. Legg and Hutter 2007; Everitt et al. 2018, p. 3.
  31. Bostrom 2014.
  32. Turing 1950.
  33. Bostrom 2014, p. 27; Turing 1950, p. 456.
  34. Huang 2017.
  35. Weinbaum and Veitas 2017.
  36. Darwish et al. 2020.
  37. Weinbaum and Veitas 2017.
  38. Bostrom 2014, p. 35; Koene and Deca 2013.
  39. Turchin 2019.
  40. Ibid., p. 51.
  41. Silver et al. 2018.
  42. Floridi and Chiriatti 2020, p. 684: ‘In the same way [that] Google “reads” our queries without[,] of course[,] understanding them, and offers relevant answers […] GPT-3 writes a text continuing the sequence of our words (the prompt), without any understanding.’
  43. Floridi 2019.
  44. Ibid.
  45. Signorelli 2018.
  46. Chalmers 2010; Moravec 1976 (unpublished manuscript cited in Bostrom 2014, p. 28). An example of an evolutionary approach to AGI is in Weinbaum and Veitas 2017.
  47. Bostrom 2014; Sotala 2017; Yudkowsky 2008.
  48. Becker and Gottschlich 2021.
  49. Chalmers 2010; Good 1966.
  50. Bostrom 2014; Sotala 2017; Yudkowsky 2008.
  51. Bostrom 2014.
  52. Etzioni 2016 argues against an existential threat based on the argument that experts believe it will take more than 25 years to develop AGI; for an opposing view, see Dafoe and Russell 2016.
  53. Müller and Bostrom 2014.
  54. Etzioni 2016.
  55. Grace et al. 2018, p. 729 state that ‘Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans’.
  56. European Parliament 2017, para 51.
  57. Bostrom 2014.
  58. Makridakis 2017.
  59. Ibid.; Sotala 2017; Yudkowsky 2008.
  60. Ibid.
  61. Bostrom 2014, p. 70; Goertzel 2015.
  62. Goertzel 2015.
  63. Ibid.
  64. Good 1966, p. 33.
  65. Armstrong et al. 2016.
  66. Ibid.
  67. Bostrom 2014; Galanos 2019; Liu et al. 2018.
  68. United States Executive Office of the President 2016, p. 8.
  69. Russell 2016, p. 58.
  70. Everitt 2018, p. 4.
  71. For an overview of AGI safety issues, see Everitt et al. 2018.
  72. Soares et al. 2015.
  73. Everitt et al. 2018.
  74. Goertzel 2015.
  75. Yudkowsky 2008.
  76. Liu et al. 2018, p. 8.
  77. Goertzel 2015, p. 55 notes, ‘Bostrom and Yudkowsky … worry about what happens when a very powerful and intelligent reward-maximiser is paired with a goal system that gives rewards for achieving foolish goals[, such as] tiling the universe with paperclips’.
  78. Wiener 1960 states, ‘if we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire’.
  79. Everitt 2018, p. 204.
  80. Bostrom 2014, p. 107 notes, ‘more or less any level of intelligence could in principle be combined with more or less any final goal’.
  81. Goertzel 2015, p. 64.
  82. As mentioned above, AGI could use its intelligence to improve its code quickly, with accelerating enhancement capabilities; see Bostrom 2014; Chalmers 2010; Goertzel 2015.
  83. Guihot et al. 2017, p. 32.
  84. The proposed AIA, Article 3(1).
  85. Autonomous weapons are excluded from the scope of the proposed AIA but fit within its AI definition.
  86. The proposed AIA, Recital 14.
  87. The proposed AIA, Title III, Chapters 2 and 3.
  88. The proposed AIA, Annex III, Section 6(b).
  89. The proposed AIA, Article 5(1)(b).
  90. Armstrong et al. 2016; Bostrom 2014; Everitt 2018; Goertzel 2015.
  91. Bostrom 2014.
  92. Lo et al. 2019.
  93. Ibid., p. 78.
  94. Neff and Nagy 2016.
  95. As mentioned above, this is not the case, as the proposed AIA directly regulates only specific types of narrow AI.
  96. Everitt 2018, p. 204.
  97. Armstrong et al. 2016; Bostrom 2014; Everitt 2018; Goertzel 2015.
  98. Bostrom 2014.
  99. Ibid., p. 253.
  100. Armstrong et al. 2016.
  101. Everitt 2018; Goertzel 2015.
  102. Weinbaum and Veitas 2017.
  103. Goertzel 2015, p. 85.
  104. Everitt 2018, p. 204.
  105. Yudkowsky 2008, p. 334 argues that ‘[l]egislation could (for example) require researchers to publicly report their [f]riendliness strategies or penalise researchers whose AIs cause damage’.
  106. The proposed AIA, Article 10(3).
  107. Naudé and Dimitri 2020.
  108. Armstrong et al. 2016.
  109. Ibid.

References

  • Armstrong S, Bostrom N, Shulman C (2016) Racing to the precipice: a model of artificial intelligence development. AI and Society 31:201–206

  • Becker K, Gottschlich J (2021) AI Programmer: autonomously creating software programs using genetic algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference. https://doi.org/10.1145/3449726.3463125

  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford

  • Bringsjord S, Govindarajulu N (2020) Artificial intelligence. In: The Stanford Encyclopedia of Philosophy (Summer 2020 edition). https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ Accessed 6 July 2021

  • Bygrave L (2020) Machine learning, cognitive sovereignty and data protection rights with respect to automated decisions. University of Oslo Faculty of Law

  • Chalmers D (2010) The singularity: a philosophical analysis. Journal of Consciousness Studies 17:7–65

  • Collingridge D (1980) The social control of technology. Frances Pinter, London

  • Dafoe A, Russell S (2016) Yes, we are worried about the existential risk of artificial intelligence. https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/ Accessed 5 July 2021

  • Darwish A, Hassanien AE, Das S (2020) A survey of swarm and evolutionary computing approaches for deep learning. Artificial Intelligence Review 53(3):1767–1812

  • Etzioni O (2016) No, the experts don’t think superintelligent AI is a threat to humanity. https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ Accessed 6 July 2021

  • European Commission (2021) Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 2021/0106 (COD)

  • European Parliament (2017) Civil Law Rules on Robotics: European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103 (INL)

  • Everitt T (2018) Towards safe artificial general intelligence. Australian National University, Canberra

  • Everitt T, Lea G, Hutter M (2018) AGI safety literature review. International Joint Conference on Artificial Intelligence

  • Floridi L (2005) Consciousness, agents and the knowledge game. Minds and Machines 15:415–444

  • Floridi L (2019) Should we be afraid of AI? Aeon, 9 May 2016

  • Floridi L, Chiriatti M (2020) GPT-3: its nature, scope, limits and consequences. Minds and Machines 30:681–694

  • Galanos V (2019) Exploring expanding expertise: artificial intelligence as an existential threat and the role of prestigious commentators 2014–2018. Technology Analysis and Strategic Management 31:421–432

  • Garg S, Sinha S, Kar A, Mani M (2021) A review of machine learning applications in human resource management. International Journal of Productivity and Performance Management. https://doi.org/10.1108/IJPPM-08-2020-0427

  • Gellers J (2021) Rights for robots: artificial intelligence, animal and environmental law. Taylor and Francis, Abingdon

  • Goertzel B (2015) Superintelligence: fears, promises and potentials. Journal of Evolution and Technology 25:55–87

  • Goertzel B, Pennachin C (2007) Artificial general intelligence. Springer, Berlin

  • Good I (1966) Speculations concerning the first ultraintelligent machine. Advances in Computers 6:31–88

  • Grace K, Salvatier J, Dafoe A, Zhang B, Evans O (2018) When will AI exceed human performance? Evidence from AI experts. Journal of Artificial Intelligence Research 62:729–754

  • Guihot M, Matthew A, Suzor N (2017) Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment and Technology Law 20:385–456

  • Heaven WD (2020) Artificial general intelligence: are we close, and does it even make sense to try? https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai Accessed 6 July 2021

  • Huang T-J (2017) Imitating the brain with neurocomputer: a “new” way towards artificial general intelligence. International Journal of Automation and Computing 14(5):520–531

  • Koene R, Deca D (2013) Whole brain emulation seeks to implement a mind and its general intelligence through system identification. Journal of Artificial General Intelligence 4:1–9

  • Legg S, Hutter M (2007) Universal intelligence: a definition of machine intelligence. Minds and Machines 17:391–444

  • Liu H-Y, Lauta K, Maas M (2018) Governing boring apocalypses: a new typology of existential vulnerabilities and exposures for existential risk research. Futures 102:6–19

  • Lo Y, Woo C, Ng K (2019) The necessary roadblock to artificial general intelligence: corrigibility. AI Matters 5:77–84

  • Lutz C, Tamò A (2015) Robocode-ethicists: privacy-friendly robots, an ethical responsibility of engineers? 2015 ACM SIGCOMM Workshop on Ethics in Networked Systems Research, London

  • Makridakis S (2017) The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures 90:46–60

  • Müller V, Bostrom N (2014) Future progress in artificial intelligence: a poll among experts. AI Matters 1:9–11

  • Naudé W, Dimitri N (2020) The race for an artificial general intelligence: implications for public policy. AI and Society 35:367–379

  • Neff G, Nagy P (2016) Automation, algorithms, and politics: talking to bots: symbiotic agency and the case of Tay. International Journal of Communication 10:4915–4931

  • Ng G, Leung W (2020) Strong artificial intelligence and consciousness. Journal of Artificial Intelligence and Consciousness 7:63–72

  • Russell S (2016) Should one fear supersmart robots? Scientific American 314:58–59

  • Searle J (1997) The mystery of consciousness. New York Review of Books, New York

  • Signorelli C (2018) Can computers become conscious and overcome humans? Frontiers in Robotics and AI 5:121

  • Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, […] Bolton A (2017) Mastering the game of Go without human knowledge. Nature 550:354–359

  • Silver D, Hubert T, Schrittwieser J, Antonoglou I, Lai M, Guez A, […] Graepel T (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362(6419):1140–1144

  • Soares N, Fallenstein B, Armstrong S, Yudkowsky E (2015) Corrigibility. https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10124/10136 Accessed 6 July 2021

  • Sotala K (2017) How feasible is the rapid development of artificial superintelligence? Physica Scripta 92(11):113001

  • Turchin A (2019) Assessing the future plausibility of catastrophically dangerous AI. Futures 107:45–58

  • Turing AM (1950) Computing machinery and intelligence. Mind 59:433–460

  • United States Executive Office of the President (2016) Preparing for the future of artificial intelligence. Technical report. National Science and Technology Council, Washington D.C., October 2016

  • Weinbaum D, Veitas V (2017) Open ended intelligence: the individuation of intelligent agents. Journal of Experimental and Theoretical Artificial Intelligence 29:371–396

  • Wiener N (1960) Some moral and technical consequences of automation. Science 131:1355–1358

  • Yampolskiy R, Fox J (2013) Safety engineering for artificial general intelligence. Topoi 32:217–226

  • Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. In: Bostrom N, Cirkovic M (eds) Global catastrophic risks. Oxford University Press, Oxford

Acknowledgements

The research presented in this chapter was partly financed by the Vulnerability in the Robot Society (VIROS) project, which is funded by the Norwegian Research Council (project number 247947). I thank the editors, anonymous reviewers and all members of the VIROS project, especially Lee Bygrave, Rebecca Schmidt, Live Sunniva Hjort and Tereza Duchoňová, for their comments on an earlier version of this chapter. All errors are the sole responsibility of the author.

Author information

Correspondence to Tobias Mahler.



Copyright information

© 2022 T.M.C. Asser Press and the authors

About this chapter


Cite this chapter

Mahler, T. (2022). Regulating Artificial General Intelligence (AGI). In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_26

  • DOI: https://doi.org/10.1007/978-94-6265-523-2_26

  • Publisher Name: T.M.C. Asser Press, The Hague

  • Print ISBN: 978-94-6265-522-5

  • Online ISBN: 978-94-6265-523-2

  • eBook Packages: Law and Criminology (R0)
