Abstract
Artificial Intelligence (AI) has proven to be one of the most influential scientific fields in today’s business world, since its technological breakthroughs play an ever-increasing role in various sectors of modern life and transactions. Nonetheless, concerns are raised about the possible adverse effects AI systems may have on individuals and society, given that various incidents of human rights violations occurring during, and because of, the operation of so-called autonomous AI systems have already been observed. This ‘negative’ aspect of AI systems is attributed to the so-called “black box problem” or “black-box effect”, an inherent limitation of AI that challenges its further evolution and public acceptance and has sparked a lively debate in the scientific community about potential tools for counteracting it. The present paper aims to shed light on the “new” legal and ethical challenges that AI poses for modern societies. First, the paper introduces the concept of AI “opacity” and examines certain reasons for it. Subsequently, it presents several incidents of human rights violations that have taken place due to AI systems in various sectors, including the job market, banking, (private) insurance, justice, transactions, art, and transportation. The paper concludes with some of the most important recommended guiding principles for counteracting the black box effect of AI and meeting the new legal and ethical challenges AI raises.
Notes
- 1.
See more about AI applications in our lives: European Commission, White Paper on AI—A European approach to excellence and trust, COM (2020) 65 final, p. 1; European Commission, Communication on Building Trust in Human-Centric Artificial Intelligence, COM (2019) 168 final, p. 1; Kemper and Kolkman (2019), p. 2082, who speak of an ‘algorithmic life’.
- 2.
- 3.
European Commission, Annexes, SWD (2021) 84 final, p. 35.
- 4.
- 5.
- 6.
Felzmann et al. (2019), p. 1.
- 7.
Cf. Licht and Licht (2020), p. 918.
- 8.
McCarthy and Hayes (1969).
- 9.
Wulf and Seizov (2020), p. 619.
- 10.
- 11.
- 12.
- 13.
- 14.
- 15.
- 16.
- 17.
See also European Commission, Annexes, SWD (2021) 84 final, p. 34.
- 18.
Hacker et al. (2020), pp. 429–430.
- 19.
- 20.
Burrell (2016), p. 1.
- 21.
Smith (2016).
- 22.
- 23.
- 24.
- 25.
European Commission, COM (2020) 65 final, p. 11.
- 26.
- 27.
Hacker et al. (2020), p. 430.
- 28.
- 29.
- 30.
- 31.
- 32.
See European Commission, Statement on Artificial Intelligence, Robotics and ‘Autonomous Systems’ (2018).
- 33.
See UNI Global Union, Top 10 Principles for ethical artificial intelligence.
- 34.
- 35.
See Dastin (2018).
- 36.
Council Directive 2000/43/EC about implementing the principle of equal treatment between persons irrespective of racial or ethnic origin; the Council Directive 2000/78/EC about establishing a general framework for equal treatment in employment and occupation; Council Directive 2006/54/EC on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation.
- 37.
Art. 8 of the Council Directive 2000/43/EC; Art. 10 of the Council Directive 2000/78/EC; Art. 19 of the Council Directive 2006/54/EC.
- 38.
European Commission, COM (2022) 496 final.
- 39.
Art. 4.
- 40.
C-177/88 of 8 November 1990 (Dekker); C-180/95 of 22 April 1997 (Draehmpaehl).
- 41.
European Commission, COM (2021) 206 final, Art. 6 par. 2 and Annex III, n. 4.
- 42.
European Commission, COM (2021) 206 final, Art. 16–29.
- 43.
Wulf and Seizov (2020), pp. 637–638.
- 44.
For instance, in Spain the “Central de Información de Riesgos” (CIR); in Greece the “Teiresias” registry; in the UK, the “behavioural” account data.
- 45.
Langenbucher (2020), pp. 1–2.
- 46.
- 47.
Art. 3 par. 1h.
- 48.
Art. 3 par. 1.
- 49.
See Art. 9 of the Directive 2004/113/EC.
- 50.
See Art. 4 COM (2022) 496 final.
- 51.
See previous footnote n. 41. The ECJ reached a similar conclusion three decades ago in the Danfoss case (C-109/88 of 17 October 1989), where it held that, in the case of a pay system that totally lacks transparency, it is not the claimant but rather the defendant, as the user of the opaque system, who must bear the burden of proving that no discrimination has taken place.
- 52.
Langenbucher (2020), passim.
- 53.
See Annex III, Art. 5b.
- 54.
- 55.
European Insurance and Occupational Pensions Authority (2021), p. 9.
- 56.
Langenbucher (2020), pp. 1–2.
- 57.
- 58.
- 59.
The adverse selection problem refers generally to a situation in which sellers have information that buyers do not have, or vice versa, about some aspect of product quality. In the health insurance field, it manifests itself through healthy people choosing managed care and less healthy people choosing more generous plans. To counter adverse selection, insurance companies reduce their exposure to large claims by limiting coverage or raising premiums (see the numerical sketch below). See further Furubotn and Richter (2005), p. 222; Veljanovski (2007), pp. 40, 117.
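Since the note describes adverse selection only verbally, a minimal numerical sketch may help. The figures below (members’ expected claim costs and a 1.25x willingness-to-pay threshold) are hypothetical assumptions chosen for illustration, not data from the chapter or its sources.

```python
# Minimal sketch of an adverse-selection premium spiral under hypothetical figures.

def pool_premium(expected_costs):
    """Break-even premium when every member must be charged the pool average."""
    return sum(expected_costs) / len(expected_costs)

# Hypothetical expected annual claim cost per member (each member's private information).
members = [200, 400, 600, 800, 1000]
WILLINGNESS = 1.25  # assumption: a member pays at most 1.25x their own expected cost

pool = list(members)
while pool:
    premium = pool_premium(pool)
    stayers = [cost for cost in pool if WILLINGNESS * cost >= premium]
    if len(stayers) == len(pool):
        break  # nobody else exits, so the market has stabilised
    pool = stayers
    print(f"premium {premium:.0f} -> low-risk members exit, pool shrinks to {len(pool)}")

if pool:
    print(f"equilibrium premium {pool_premium(pool):.0f}, covering {len(pool)} high-risk members")
else:
    print("the market unravels completely")
```

Under these assumptions the premium climbs from 600 to 900 as the healthiest members exit, leaving only the two highest-risk members insured, which is exactly the dynamic insurers respond to by limiting coverage or raising premiums.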
- 60.
See Articles 5 and 6 of the Regulation of the European Parliament and of the Council 2016/679 (General Data Protection Regulation).
- 61.
The same concern is also raised in the case of AI systems in the medical sector, such as IBM’s famous AI system ‘Watson’, which was accused of recommending incorrect medical treatments; see Ross and Swetlitz (2018).
- 62.
Art. 3 par. 3h.
- 63.
Art. 5 par. 1 and Recital Nr. 15, 18 and 20.
- 64.
Art. 8 of Directive 2000/43/EC and Art. 9 of Directive 2004/113/EC.
- 65.
- 66.
- 67.
The most prevalent one is the risk assessment system COMPAS (Correctional Offender Management Profiling for Alternative Sanctions).
- 68.
Eaglin (2021), p. 364.
- 69.
European Commission, COM (2021) 710 final, p. 11.
- 70.
Angwin et al. (2016).
- 71.
Already established in Wisconsin criminal justice jurisprudence in State v. Skaff, 447 N.W.2d 84, 85 (Wis. Ct. App. 1989) regarding the accuracy of sentencing (building on the earlier case Gardner v. Florida, 430 U.S. 349, 351–52 (1977)), and in State v. Gallion, 678 N.W.2d 197, 209 (Wis. 2004) regarding individualized sentencing.
- 72.
Loomis v. Wisconsin, 881 N.W.2d 749 (Wisc. 2016).
- 73.
- 74.
Malenchik v. State, 928 N.E.2d 564 (Ind. 2010).
- 75.
Rhodes v. State, 896 N.E.2d 1193 (Ind. Ct. App. 2008).
- 76.
People v. Younglove, No. 341901, 2019 WL 846117 (Mich. Ct. App. Feb. 21, 2019).
- 77.
See also Washington (2018).
- 78.
- 79.
See also Freeman (2016), p. 99.
- 80.
Id., p. 104.
- 81.
Director of Public Prosecutions for Western Australia v. Mangolamara (2007) 169 A. Crim. R. 379; [2007] WASC 71, [165].
- 82.
See Annex III, Art. 8a.
- 83.
Both Alexa and Siri are voice-controlled digital (virtual) assistant programs based on AI that accept voice commands to create to-do lists, order items online, set reminders and answer questions (via internet searches); see further https://en.wikipedia.org/wiki/Amazon_Alexa, https://en.wikipedia.org/wiki/Siri.
- 84.
UNESCO (2019), p. 16.
- 85.
Id.
- 86.
See, inter alia, the provisions of Art. 26 ‘Advertising on Online Platforms’ and Art. 27 ‘Recommender System Transparency’.
- 87.
See further examples in Iglesias et al. (2019), pp. 12 ff.
- 88.
Yanisky-Ravid (2017), p. 676.
- 89.
See Kennedy (2019).
- 90.
European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI)), Recital D.
- 91.
Id., pp. 13–15.
- 92.
- 93.
Turing (1950).
- 94.
See Varadi (2023).
- 95.
- 96.
See O’Kane (2018).
- 97.
European Commission, COM (2022) 495 final.
- 98.
European Commission, COM (2021) 202 final.
- 99.
- 100.
See Asaro (2006), p. 10.
- 101.
Id.
- 102.
Papadouli (2022), p. 34.
- 103.
- 104.
- 105.
- 106.
See Rissland (1988).
- 107.
See Spyropoulos et al. (2022).
- 108.
Id.
- 109.
- 110.
See Rissland (1990).
- 111.
- 112.
Adadi and Berrada (2018), p. 52138.
- 113.
Id., p. 52142. See also European Parliament, Civil Law Rules on Robotics (2018/C 252/25), Ethical Principles, Recital Nr. 12.
- 114.
Papadouli (2022), p. 28.
- 115.
See also Lipton (2017), pp. 7 ff.
- 116.
See European Commission, COM (2021) 206 final, Recital Nr. 38, 39.
- 117.
See Art. 13 par. 3 COM (2021) 206 final.
- 118.
Papadouli (2022), p. 33.
- 119.
Cf. Floridi et al. (2018), pp. 697–698.
- 120.
Lipton (2017), pp. 15 ff.
- 121.
See Woodstra (2020), p. 3.
- 122.
Felzmann et al. (2019), p. 4.
- 123.
See European Commission, COM (2019) 168 final, p. 4 (Reference 13); Opinion of the European Economic and Social Committee, COM (2020) 65 final, Recital Nr. 2.3.
References
Adadi A, Berrada M (2018) Peeking inside the black-box: a survey of explainable artificial intelligence (XAI). IEEE Access 6:52138–52160
Allen C, Varner G, Zinser J (2000) Prolegomena to any future artificial moral agent. J Exp Theor Artif Intell 12:251–261
Anderson M, Anderson SL (2007) Machine ethics: creating an ethical intelligent agent. AI Mag 4:15–26
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias: there’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica (May 23, 2016). Available online at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Asaro P (2006) What should we want from a robot ethic? Int Rev Inf Ethics 6:9–16
Bathaee Y (2018) The artificial intelligence black box and the failure of intent and causation. Harv Law Rev 31:890–934
Burrell J (2016) How the machine “thinks”: understanding opacity in machine learning algorithms. Big Data Soc 2016:1–12
Burt A (2019) The AI transparency paradox. Harvard Business Review. Accessed 02 Sept 2023
Carabantes M (2020) Black-box artificial intelligence: an epistemological and critical analysis. AI Soc 35:309–317
Dastin J (2018) Amazon scraps secret AI recruiting tool that showed bias against women. Available online at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Denicola R (2016) Ex machina: copyright protection for computer-generated works. Rutgers Univ Law Rev 69:251–287
Diakopoulos N (2014) Algorithmic accountability reporting: on the investigation of black boxes. Columbia Journalism School, Tow Center for Digital Journalism. Available online at https://academiccommons.columbia.edu/doi/10.7916/D8ZK5TW2
Eaglin J (2021) Population-based sentencing. Cornell Law Rev 106:353–408
European Commission (2018) Statement on artificial intelligence, robotics and autonomous systems. Available online at http://earthnet.ntua.gr/wp-content/uploads/2018/07/Artificial-inteligence-robotics-and-autonomous-systems.pdf
European Commission (2020) White Paper on AI, COM (2020) 65 final
European Commission (2021) Impact assessment accompanying the proposal for a regulation of the European Parliament and of the Council Laying down harmonized rules on Artificial Intelligence and Amending Certain Union Legislative Acts SWD (2021) 84 final
European Economic and Social Committee (2020) Opinion on “White paper on artificial intelligence — A European approach to excellence and trust”, COM (2020) 65 final (2020/C 364/12)
European Insurance and Occupational Pensions Authority (2021) Artificial intelligence governance principles: towards ethical and trustworthy artificial intelligence in the European Insurance Sector. Available online at https://www.eiopa.europa.eu/document-library/report/artificial-intelligence-governance-principles-towards-ethical-and_en
European Parliament, Civil Law Rules on Robotics (2018/C 252/25)
European Parliament Resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI))
Felzmann H, Villaronga EF, Lutz C, Tamò-Larrieux A (2019) Transparency you can trust: transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data Soc 2019:1–19
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People: an ethical framework for a good ai society: opportunities, risks, principles, and recommendations. Mind Mach 28:689–707
Freeman K (2016) Algorithmic injustice: how the Wisconsin Supreme Court failed to protect due process rights in State v. Loomis. NC J Law Technol 75:76–105
Furubotn E, Richter R (2005) Institutions and economic theory, 2nd edn. University of Michigan Press, Ann Arbor
Goodman B, Flaxman S (2017) European Union regulations on algorithmic decision making and a “right to explanation”. AI Mag 2017:50–57
Hacker P, Krestel R, Grundmann S, Naumann F (2020) Explainable AI under contract and tort law: legal incentives and technical challenges. Artif Intell Law 28:415–439
Hagendorff T (2020) The ethics of AI ethics: an evaluation of guidelines. Mind Mach 30:99–120
Han-Wei L, Ching-Fu L, Yu-Jie C (2019) Beyond State v. Loomis: artificial intelligence, government algorithmization, and accountability. Int J Law Inf Technol 27:122–141
Iglesias M, Shamuilia S, Anderberg A (2019) Intellectual property and artificial intelligence. Publications Office of the European Union, Luxembourg
Kancevičienė N (2019) Insurance, smart information systems and ethics. ORBIT J 2(2) Available online at https://doi.org/10.29297/orbit.v2i2.106
Karnow C (1996) Liability for distributed artificial intelligences. Berkeley Technol Law J 11:147–204
Kemper J, Kolkman D (2019) Transparent to whom? No algorithmic accountability without a critical audience. Inf Commun Soc 22:2081–2096
Kennedy J (2019) How AI completed Schubert’s Unfinished Symphony No 8. Available online at https://www.siliconrepublic.com/machines/unfinished-symphony-no-8-ai-huawei
Kotsiantis SB (2013) Decision trees: a recent overview. Artif Intell Rev 39:261–283
Langenbucher K (2020) Responsible A.I. credit scoring: a legal framework. Eur Bus Law Rev 31:1–5
Lepri B, Oliver N, Letouze E, Pentland A, Vinck P (2018) Fair, transparent and accountable algorithmic decision-making processes. Philos Technol 31:611–627
Licht K, Licht J (2020) Artificial intelligence, transparency, and public decision-making. AI Soc 35:915–926
Lipton Z (2017) The mythos of model interpretability. ACM Queue 2017:1–27
McCarthy J, Hayes P (1969) Some philosophical problems from the standpoint of artificial intelligence. Available online http://jmc.stanford.edu/articles/mcchay69/mcchay69.pdf
O’Kane S (2018) Uber reportedly thinks its self-driving car killed someone because it decided not to swerve. Available online at https://www.theverge.com/2018/5/7/17327682/uber-self-driving-car-decision-kill-swerve
Papadouli V (2022) Transparency in artificial intelligence: a legal perspective. J Ethics Leg Technol 4:25–40
Rissland E (1988) Artificial intelligence and legal reasoning, a discussion of the field and Gardner’s Book. AI Mag 9(3):45–55
Rissland E (1990) Artificial intelligence and law: stepping stone to a model of legal reasoning. Yale Law J 99:1957–1981
Ross C, Swetlitz I (2018) IBM’s Watson supercomputer recommended ‘unsafe and incorrect’ cancer treatments, internal documents show. Available online at https://www.statnews.com/2018/07/25/ibm-watson-recommended-unsafe-incorrect-treatments
Russell S, Norvig P (2020) Artificial intelligence: a modern approach, 4th edn. Pearson, Hoboken
Schmelzer R (2020) Towards a more transparent AI. Forbes (May 23, 2020). Available online at https://www.forbes.com/sites/cognitiveworld/2020/05/23/towards-a-more-transparent-ai/?sh=29a5f52d3d93
Smith M (2016) In Wisconsin, a backlash against using data to foretell defendants’ futures, N.Y. Times (June 22, 2016). Available online at https://www.nytimes.com/2016/06/23/us/backlash-in-wisconsin-against-using-data-to-foretell-defendants-futures.html
Spyropoulos A, Kornilakis A, Makris G, Bratsas C, Tsiantos V, Antoniou I (2022) Semantic representation of the intersection of Criminal Law and Civil Tort. Data 7(12):1–15
Thelisson E, Padh K, Celis LE (2017) Regulatory mechanisms and algorithms towards trust in AI/ML. Available online at https://www.researchgate.net/profile/Eva-Thelisson/publication/318913104_Regulatory_Mechanisms_and_Algorithms_towards_Trust_in_AIML/links/5984d9ef458515605844ef66/Regulatory-Mechanisms-and-Algorithms-towards-Trust-in-AI-ML.pdf
Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460
UNI Global Union, Top 10 Principles for ethical artificial intelligence. Available online at http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
UNESCO (2019) Preliminary study on the ethics of AI. Available online at https://unesdoc.unesco.org/ark:/48223/pf0000367823. Accessed 02 Sept 2023
Van Lent M, Fisher W, Mancuso M (2004) An explainable artificial intelligence system for small-unit tactical behavior. IAAI Emerg Appl 2004:900–907
Varadi P (2023) Can ChatGPT write a book? Available online at https://medium.com/@umfpeti/can-chat-gpt-write-a-book-3cca3d00e4e4; https://www.theatlantic.com/books/archive/2023/02/chatgpt-ai-technology-writing-poetry/673035/
Veljanovski C (2007) Economic principles of Law. Cambridge University Press, Cambridge
Vigdor N (2019) Apple Card investigated after gender discrimination complaints. Available online at https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html
Washington A (2018) How to argue with an algorithm: Lessons from the COMPAS-Propublica Debate. Colo Technol Law J 17(2018):133–159
Wettig S, Zehendner E (2004) A legal analysis of human and electronic agents. Artif Intell Law 12:111–135
Woodstra F (2020) What does transparent AI mean? AI Policy Exchange 2020. Available online https://aipolicyexchange.org/2020/05/09/what-does-transparent-ai-mean/
Wulf A, Seizov O (2020) Artificial intelligence and transparency: a blueprint for improving the regulation of AI applications in the EU. Eur Bus Law Rev 31:611–640
Yanisky-Ravid S (2017) Generating Rembrandt: artificial intelligence, copyright, and accountability in the 3A era - the human-like authors are already here - a new model. Mich State Law Rev 2017:659–726
Zednik C (2019) Solving the black box problem. Available online https://arxiv.org/ftp/arxiv/papers/1903/1903.04361.pdf
Zerilli J, Knott A, Maclaurin J, Gavaghan C (2018) Transparency in algorithmic and human decision-making: is there a double standard? Philos Technol 32:1–24