Abstract
This chapter discusses whether ongoing EU policymaking on AI is relevant for Artificial General Intelligence (AGI) and what it would mean to regulate AGI in the future. AGI is typically contrasted with narrow Artificial Intelligence (AI), which excels only within a specific given context. Although many researchers are working on AGI, it remains uncertain whether developing it is feasible. If achieved, AGI could have cognitive capabilities similar to or beyond those of humans and might be able to perform a broad range of tasks. There are concerns that such AGI could undergo recursive cycles of self-improvement, potentially leading to superintelligence. With such capabilities, superintelligent AGI could become a significant power factor in society. However, dystopian superintelligence scenarios are highly controversial and uncertain, so regulating existing narrow AI should be a priority.
Notes
- 1.
Bygrave 2020.
- 2.
Ng and Leung 2020, p. 64.
- 3.
Garg et al. 2021.
- 4.
Floridi and Chiriatti 2020.
- 5.
Silver et al. 2017.
- 6.
Everitt et al. 2018.
- 7.
Floridi and Chiriatti 2020.
- 8.
- 9.
E.g. Bostrom 2014.
- 10.
- 11.
This dystopic scenario is extensively elaborated in Bostrom 2014.
- 12.
United States Executive Office of the President 2016, p. 8.
- 13.
Collingridge 1980, p. 11.
- 14.
United States Executive Office of the President 2016, p. 8.
- 15.
Ibid.
- 16.
Ibid.
- 17.
Ibid.
- 18.
European Commission 2021, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 2021/0106 (COD). Hereinafter referred to as the proposed AIA.
- 19.
- 20.
Ibid.
- 21.
- 22.
Everitt et al. 2018.
- 23.
‘Cognitive’, Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/cognitive. Accessed on 29 June 2021.
- 24.
The question was central to Turing 1950. However, as noted by Bringsjord and Govindarajulu 2020, Descartes discussed an earlier version of the Turing test in 1637: ‘If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men.’
- 25.
Lutz and Tamò 2015 suggest that one should use other verbs to describe the ‘thinking’ of robots, such as ‘sense-process-weigh-act’.
- 26.
See this in detail in Searle 1997.
- 27.
- 28.
See Floridi 2005 for a test to distinguish between conscious (human) and conscious-less agents.
- 29.
- 30.
- 31.
Bostrom 2014.
- 32.
Turing 1950.
- 33.
- 34.
Huang 2017.
- 35.
Weinbaum and Veitas 2017.
- 36.
Darwish et al. 2020.
- 37.
Weinbaum and Veitas 2017.
- 38.
- 39.
Turchin 2019.
- 40.
Ibid., p. 51.
- 41.
Silver et al. 2018.
- 42.
Floridi and Chiriatti 2020, p. 684: ‘In the same way [that] Google “reads” our queries without[,] of course[,] understanding them, and offers relevant answers […] GPT-3 writes a text continuing the sequence of our words (the prompt), without any understanding.’
- 43.
Floridi 2019.
- 44.
Ibid.
- 45.
Signorelli 2018.
- 46.
- 47.
- 48.
Becker and Gottschlich 2021.
- 49.
- 50.
- 51.
Bostrom 2014.
- 52.
- 53.
Müller and Bostrom 2014.
- 54.
Etzioni 2016.
- 55.
Grace et al. 2018, p. 729 state that ‘Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans’.
- 56.
European Parliament 2017, para 51.
- 57.
Bostrom 2014.
- 58.
Makridakis 2017.
- 59.
- 60.
Ibid.
- 61.
- 62.
Goertzel 2015.
- 63.
Ibid.
- 64.
Good 1966, p. 33.
- 65.
Armstrong et al. 2016.
- 66.
Ibid.
- 67.
- 68.
United States Executive Office of the President 2016, p. 8.
- 69.
Russell 2016, p. 58.
- 70.
Everitt 2018, p. 4.
- 71.
For an overview of AGI safety issues, see Everitt et al. 2018.
- 72.
Soares et al. 2015.
- 73.
Everitt et al. 2018.
- 74.
Goertzel 2015.
- 75.
Yudkowsky 2008.
- 76.
Liu et al. 2018, p. 8.
- 77.
Goertzel 2015, p. 55 notes, ‘Bostrom and Yudkowsky … worry about what happens when a very powerful and intelligent reward-maximiser is paired with a goal system that gives rewards for achieving foolish goals[, such as] tiling the universe with paperclips’.
- 78.
Wiener 1960 states, ‘if we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire’.
- 79.
Everitt 2018, p. 204.
- 80.
Bostrom 2014, p. 107 notes, ‘more or less any level of intelligence could in principle be combined with more or less any final goal’.
- 81.
Goertzel 2015, p. 64.
- 82.
- 83.
Guihot et al. 2017, p. 32.
- 84.
The proposed AIA, Article 3(1).
- 85.
Autonomous weapons are excluded from the scope of the proposed AIA but fit within its AI definition.
- 86.
The proposed AIA, Recital 14.
- 87.
The proposed AIA, Title III, Chapters 2 and 3.
- 88.
The proposed AIA, Annex III, Section 6(b).
- 89.
The proposed AIA, Article 5(1)(b).
- 90.
- 91.
Bostrom 2014.
- 92.
Lo et al. 2019.
- 93.
Ibid., p. 78.
- 94.
Neff and Nagy 2016.
- 95.
As mentioned above, this is not the case as the proposed AIA directly regulates only specific types of narrow AI.
- 96.
Everitt 2018, p. 204.
- 97.
- 98.
Bostrom 2014.
- 99.
Ibid., p. 253.
- 100.
Armstrong et al. 2016.
- 101.
- 102.
Weinbaum and Veitas 2017.
- 103.
Goertzel 2015, p. 85.
- 104.
Everitt 2018, p. 204.
- 105.
Yudkowsky 2008, p. 334 argues that ‘[l]egislation could (for example) require researchers to publicly report their [f]riendliness strategies or penalise researchers whose AIs cause damage’.
- 106.
The proposed AIA, Article 10(3).
- 107.
Naudé and Dimitri 2020.
- 108.
Armstrong et al. 2016.
- 109.
Ibid.
References
Armstrong S, Bostrom N, Shulman C (2016) Racing to the precipice: a model of artificial intelligence development. AI and Society 31:201–206
Becker K, Gottschlich J (2021) AI Programmer: autonomously creating software programs using genetic algorithms. In: Proceedings of the Genetic and Evolutionary Computation Conference. https://doi.org/10.1145/3449726.3463125
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Bringsjord S, Govindarajulu N (2020) Artificial intelligence. https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ Accessed 6 July 2021
Bygrave L (2020) Machine learning, cognitive sovereignty and data protection rights with respect to automated decisions. University of Oslo Faculty of Law
Chalmers D (2010) The singularity: a philosophical analysis. Journal of Consciousness Studies 17:7–65
Collingridge D (1980) The social control of technology. Frances Pinter, London
Dafoe A, Russell S (2016) Yes, we are worried about the existential risk of artificial intelligence. https://www.technologyreview.com/2016/11/02/156285/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/ Accessed 5 July 2021
Darwish A, Hassanien A E, Das S (2020) A survey of swarm and evolutionary computing approaches for deep learning. Artificial Intelligence Review, 53(3):1767–1812
Etzioni O (2016) No, the experts don’t think superintelligent AI is a threat to humanity. https://www.technologyreview.com/2016/09/20/70131/no-the-experts-dont-think-superintelligent-ai-is-a-threat-to-humanity/ Accessed 6 July 2021
European Commission (2021) Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, 2021/0106 (COD)
European Parliament (2017) Civil Law Rules on Robotics: European Parliament Resolution of 16 February 2017 with Recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103 (INL)
Everitt T (2018) Towards safe artificial general intelligence. Australian National University, Canberra
Everitt T, Lea G, Hutter M (2018) AGI safety literature review. International Joint Conference on Artificial Intelligence
Floridi L (2005) Consciousness, agents and the knowledge game. Minds and Machines 15:415–444
Floridi L (2019) Should we be afraid of AI? Aeon Magazine 9 May 2016
Floridi L, Chiriatti M (2020) GPT-3: Its nature, scope, limits and consequences. Minds and Machines 30:681–694
Galanos V (2019) Exploring expanding expertise: artificial intelligence as an existential threat and the role of prestigious commentators 2014–2018. Technology Analysis and Strategic Management 31:421–432
Garg S, Sinha S, Kar A, Mani M (2021) A review of machine learning applications in human resource management. International Journal of Productivity and Performance Management doi: https://doi.org/10.1108/IJPPM-08-2020-0427
Gellers J (2021) Rights for robots: artificial intelligence, animal and environmental law. Taylor and Francis, Abingdon
Goertzel B (2015) Superintelligence: fears, promises and potentials. Journal of Evolution and Technology 25:55–87
Goertzel B, Pennachin C (2007) Artificial general intelligence. Springer, Berlin
Good I (1966) Speculations concerning the first ultraintelligent machine. Advances in Computers 6:31–88
Grace K, Salvatier J, Dafoe A, Zhang B, Evans O (2018) When will AI exceed human performance? Evidence from AI experts. The Journal of Artificial Intelligence Research 62:729–754
Guihot M, Matthew A, Suzor N (2017) Nudging robots: innovative solutions to regulate artificial intelligence. Vanderbilt Journal of Entertainment and Technology Law 20:385–456
Heaven W D (2020) Artificial general intelligence: Are we close, and does it even make sense to try? https://www.technologyreview.com/2020/10/15/1010461/artificial-general-intelligence-robots-ai-agi-deepmind-google-openai Accessed 6 July 2021
Huang T-J (2017) Imitating the brain with neurocomputer: a “new” way towards artificial general intelligence. International Journal of Automation and Computing 14(5):520–531
Koene R, Deca D (2013) Whole brain emulation seeks to implement a mind and its general intelligence through system identification. Journal of Artificial General Intelligence 4:1–9
Legg S, Hutter M (2007) Universal intelligence: a definition of machine intelligence. Minds and Machines 17:391–444
Liu H-Y, Lauta K, Maas M (2018) Governing boring apocalypses: a new typology of existential vulnerabilities and exposures for existential risk research. Futures 102:6–19
Lo Y, Woo C, Ng K (2019) The necessary roadblock to artificial general intelligence: corrigibility. AI Matters 5:77–84
Lutz C, Tamò A (2015) Robocode-ethicists: privacy-friendly robots, an ethical responsibility of engineers? 2015 ACM SIGCOMM Workshop on Ethics in Networked Systems Research, London
Makridakis S (2017) The forthcoming artificial intelligence (AI) revolution: its impact on society and firms. Futures 90:46–60
Müller V, Bostrom N (2014) Future progress in artificial intelligence: a poll among experts. AI Matters 1:9–11
Naudé W, Dimitri N (2020) The race for an artificial general intelligence: implications for public policy. AI and Society 35:367–379
Neff G, Nagy P (2016) Automation, algorithms, and politics: talking to bots: symbiotic agency and the case of Tay. International Journal of Communication 10:4915–4931
Ng G, Leung W (2020) Strong artificial intelligence and consciousness. Journal of Artificial Intelligence and Consciousness 7:63–72
Russell S (2016) Should one fear supersmart robots? Scientific American 314:58–59
Searle J (1997) The mystery of consciousness. New York Review of Books, New York
Signorelli C (2018) Can computers become conscious and overcome humans? Front Robot AI 5:121
Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, […] Bolton A (2017) Mastering the game of Go without human knowledge. Nature 550:354–359
Silver D, Hubert T, Schrittwieser J, Antonoglou I, Lai M, Guez A, […] Graepel T (2018) A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362(6419):1140–1144
Soares N, Fallenstein B, Armstrong S, Yudkowsky E (2015) Corrigibility. https://www.aaai.org/ocs/index.php/WS/AAAIW15/paper/viewFile/10124/10136 Accessed 6 July 2021
Sotala K (2017) How feasible is the rapid development of artificial superintelligence? Physica Scripta 92:113001:1–14
Turchin A (2019) Assessing the future plausibility of catastrophically dangerous AI. Futures 107:45–58
Turing A M (1950) Computing machinery and intelligence. Mind 59:433–460
United States Executive Office of the President (2016) Preparing for the future of artificial intelligence. Technical report. National Science and Technology Council, Washington D.C., October 2016
Weinbaum D, Veitas V (2017) Open ended intelligence: the individuation of intelligent agents. Journal of Experimental and Theoretical Artificial Intelligence 29:371–396
Wiener N (1960) Some moral and technical consequences of automation. Science 131:1355–1358
Yampolskiy R, Fox J (2013) Safety engineering for artificial general intelligence. Topoi 32:217–226
Yudkowsky E (2008) Artificial intelligence as a positive and negative factor in global risk. In: Bostrom N, Cirkovic M (eds) Global catastrophic risks. Oxford University Press, Oxford
Acknowledgements
The research presented in this chapter was partly financed by the Vulnerability in the Robot Society (VIROS) project, which is funded by the Norwegian Research Council (project number 247947). I thank the editors, anonymous reviewers and all members of the VIROS Project, especially Lee Bygrave, Rebecca Schmidt, Live Sunniva Hjort and Tereza Duchoňová, for their comments on an earlier version of this chapter. All errors are the sole responsibility of the author.
Copyright information
© 2022 T.M.C. Asser Press and the authors
About this chapter
Cite this chapter
Mahler, T. (2022). Regulating Artificial General Intelligence (AGI). In: Custers, B., Fosch-Villaronga, E. (eds) Law and Artificial Intelligence. Information Technology and Law Series, vol 35. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-523-2_26
Print ISBN: 978-94-6265-522-5
Online ISBN: 978-94-6265-523-2