
Artificial Intelligence’s Black Box: Posing New Ethical and Legal Challenges on Modern Societies

  • Chapter in: Artificial Intelligence and Normative Challenges

Part of the book series: Law, Governance and Technology Series (LGTS, volume 59)


Abstract

Artificial Intelligence has proven to be one of the most influential scientific fields in today’s business world, since its technological breakthroughs play an ever-increasing role in various sectors of modern life and transactions. Nonetheless, concerns have been raised about the possible adverse effects these systems may have on individuals and society, given that various incidents of human rights violations during, and due to, the operation of so-called autonomous AI systems have already been observed. This ‘negative’ aspect of AI systems is attributed to the so-called “black box problem” or “black-box effect”, an inherent limitation of AI that challenges its further evolution and public acceptance and has sparked a lively debate in the scientific community about potential tools for counteracting it. The present paper aims to shed light on the “new” legal and ethical challenges that AI poses for modern societies. First, the paper introduces the concept of AI “opacity” and examines certain reasons for it. Subsequently, it presents several incidents of human rights violations that have taken place due to AI systems in various sectors, including the job market, the banking sector, (private) insurance, justice, transactions, art, and transportation. The paper concludes with some of the most important recommended guiding principles for counteracting the black-box effect of AI and addressing the new legal and ethical challenges posed by AI.



Notes

  1. See more about AI applications in our lives: European Commission, White Paper on AI—A European approach to excellence and trust, COM (2020) 65 final, p. 1; European Commission, Communication—Building trust in Human-Centric Artificial Intelligence, COM (2019) 168 final, p. 1; Kemper and Kolkman (2019), p. 2082, who speak of an ‘algorithmic life’.

  2. Denicola (2016), pp. 254–255; Yanisky-Ravid (2017), p. 676.

  3. European Commission, Annexes, SWD (2021) 84 final, p. 35.

  4. Russell and Norvig (2020), p. 39; Wettig and Zehendner (2004), pp. 111–135.

  5. Karnow (1996), pp. 155 ff.; Yanisky-Ravid (2017), pp. 674–675; Adadi and Berrada (2018), p. 52144.

  6. Felzmann et al. (2019), p. 1.

  7. Cf. Licht and Licht (2020), p. 918.

  8. McCarthy and Hayes (1969).

  9. Wulf and Seizov (2020), p. 619.

  10. Burrell (2016), p. 1; Adadi and Berrada (2018), p. 52141; Zednik (2019); European Commission, COM (2020) 65 final, p. 12; European Commission, Annexes, SWD (2021) 84 final, p. 34.

  11. Schmelzer (2020). See also Lepri et al. (2018), pp. 611 ff.; Thelisson et al. (2017).

  12. Burrell (2016), p. 5; Carabantes (2020), p. 310. See also European Commission, Annexes, SWD (2021) 84 final, p. 34.

  13. Lipton (2017), p. 13; Carabantes (2020), p. 310.

  14. Russell and Norvig (2020), p. 727; Bathaee (2018), pp. 901–902.

  15. Hacker et al. (2020), pp. 433–435; Wulf and Seizov (2020), pp. 617–618.

  16. Russell and Norvig (2020); Bathaee (2018), pp. 901–902.

  17. See also European Commission, Annexes, SWD (2021) 84 final, p. 34.

  18. Hacker et al. (2020), pp. 429–430.

  19. Russell and Norvig (2020); Kotsiantis (2013).

  20. Burrell (2016), p. 1.

  21. Smith (2016).

  22. Diakopoulos (2014), p. 1; Burrell (2016), p. 3; Carabantes (2020), p. 311.

  23. Goodman and Flaxman (2017), p. 54; Opinion of the European Economic and Social Committee on “White paper on Artificial Intelligence — A European approach to excellence and trust”, COM (2020) 65 final (364/12), Recital 2.6.

  24. Zerilli et al. (2018); Kemper and Kolkman (2019), p. 2082.

  25. European Commission, COM (2020) 65 final, p. 11.

  26. Burrell (2016), p. 2; Carabantes (2020), pp. 312–313.

  27. Hacker et al. (2020), p. 430.

  28. Burrell (2016), p. 4; Carabantes (2020), pp. 312–313.

  29. Burrell (2016), pp. 4–5; Carabantes (2020), p. 313.

  30. Carabantes (2020), p. 316. Cf. Hacker et al. (2020), p. 435.

  31. Kemper and Kolkman (2019), p. 2082; Carabantes (2020), p. 310; Lipton (2017), p. 2.

  32. See European Commission, Statement on Artificial Intelligence, Robotics and ‘Autonomous Systems’ (2018).

  33. See UNI Global Union, Top 10 Principles for Ethical Artificial Intelligence.

  34. European Economic and Social Committee, COM (2020) 65 final (364/12), Recital 2.24; European Commission, Annexes, SWD (2021) 84 final, pp. 36–37.

  35. See Dastin (2018).

  36. Council Directive 2000/43/EC implementing the principle of equal treatment between persons irrespective of racial or ethnic origin; Council Directive 2000/78/EC establishing a general framework for equal treatment in employment and occupation; Directive 2006/54/EC on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation.

  37. Art. 8 of Council Directive 2000/43/EC; Art. 10 of Council Directive 2000/78/EC; Art. 19 of Directive 2006/54/EC.

  38. European Commission, COM (2022) 496 final.

  39. Art. 4.

  40. C-177/88 of 8 November 1990 (Dekker); C-180/95 of 22 April 1997 (Draehmpaehl).

  41. European Commission, COM (2021) 206 final, Art. 6 par. 2 and Annex III, n. 4.

  42. European Commission, COM (2021) 206 final, Art. 16–29.

  43. Wulf and Seizov (2020), pp. 637–638.

  44. For instance, in Spain the “Central de Información de Riesgos” (CIR); in Greece the “Teiresias” registry; in the UK the “behavioural” account data.

  45. Langenbucher (2020), pp. 1–2.

  46. Burt (2019), p. 2; see Vigdor (2019).

  47. Art. 3 par. 1h.

  48. Art. 3 par. 1.

  49. See Art. 9 of Directive 2004/113/EC.

  50. See Art. 4 COM (2022) 496 final.

  51. See footnote n. 41 above. The ECJ reached a similar conclusion three decades ago in the Danfoss case (C-109/88 of 17 October 1989), where it held that, where a pay system totally lacks transparency, it is not the claimant but the defendant, i.e. the user of the opaque system, who bears the burden of proving that no discrimination has taken place.

  52. Langenbucher (2020), passim.

  53. See Annex III, Art. 5b.

  54. European Insurance and Occupational Pensions Authority (2021); Kancevičienė (2019).

  55. European Insurance and Occupational Pensions Authority (2021), p. 9.

  56. Langenbucher (2020), pp. 1–2.

  57. See https://www2.deloitte.com/content/dam/Deloitte/xe/Documents/financial-services/Artificial-Intelligence-in-Insurance.pdf.

  58. Carabantes (2020), p. 314. Cf. Langenbucher (2020), pp. 1–2.

  59. The adverse selection problem refers generally to a situation in which sellers have information that buyers do not have, or vice versa, about some aspect of product quality. In the health insurance field, this manifests itself in healthy people choosing managed care and less healthy people choosing more generous plans. To fight adverse selection, insurance companies reduce their exposure to large claims by limiting coverage or raising premiums. See for more Furubotn and Richter (2005), p. 222; Veljanovski (2007), pp. 40, 117. A rough numerical illustration follows.
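     To illustrate with purely hypothetical numbers (an editorial sketch, not drawn from the cited sources): suppose half of an insurance pool expects $1,000 in annual claims and the other half expects $9,000, so the break-even pooled premium is 0.5 × 1,000 + 0.5 × 9,000 = $5,000. If the healthier half finds that premium unattractive and exits, the expected claim per remaining insured rises to $9,000, and the insurer must raise premiums or limit coverage, which is precisely the dynamic described above.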

  60. See Articles 5 and 6 of Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation).

  61. The same concern is also raised in the case of AI systems in the medical sector, such as IBM’s famous AI system ‘Watson’, which was accused of suggesting incorrect medical treatments; see Ross and Swetlitz (2018).

  62. Art. 3 par. 3h.

  63. Art. 5 par. 1 and Recital Nr. 15, 18 and 20.

  64. Art. 8 of Directive 2000/43/EC and Art. 9 of Directive 2004/113/EC.

  65. Zerilli et al. (2018); European Commission, White Paper on AI, COM (2020) 65 final, p. 14; European Commission, Proposal for an Artificial Intelligence Act, COM (2021) 206 final, Recital Nr. 40.

  66. European Commission, COM (2021) 710 final, 3.2, pp. 12–13; Freeman (2016), pp. 76–77.

  67. The most prevalent one is the risk-assessment system COMPAS (Correctional Offender Management Profiling for Alternative Sanctions).

  68. Eaglin (2021), p. 364.

  69. European Commission, COM (2021) 710 final, p. 11.

  70. Angwin et al. (2016).

  71. Already established in Wisconsin criminal justice jurisprudence in State v. Skaff, 447 N.W.2d 84, 85 (Wis. Ct. App. 1989) regarding the accuracy of sentencing (building on the earlier case Gardner v. Florida, 430 U.S. 349, 351–52 (1977)), and in State v. Gallion, 678 N.W.2d 197, 209 (Wis. 2004) regarding individualized sentencing.

  72. Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016).

  73. Loomis v. Wisconsin, 881 N.W.2d 749 (Wis. 2016). See also Han-Wei et al. (2019).

  74. Malenchik v. State, 928 N.E.2d 564 (Ind. 2010).

  75. Rhodes v. State, 896 N.E.2d 1193 (Ind. Ct. App. 2008).

  76. People v. Younglove, No. 341901, 2019 WL 846117 (Mich. Ct. App. Feb. 21, 2019).

  77. See also Washington (2018).

  78. Freeman (2016), p. 93; Washington (2018), pp. 134, 146–147.

  79. See also Freeman (2016), p. 99.

  80. Id., p. 104.

  81. Director of Public Prosecutions for Western Australia v. Mangolamara (2007) 169 A. Crim. R. 379; [2007] WASC 71, [165].

  82. See Annex III, Art. 8a.

  83. Both Alexa and Siri are voice-controlled digital or virtual assistant programs based on AI that accept voice commands to create to-do lists, order items online, set reminders and answer questions (via internet searches); see further https://en.wikipedia.org/wiki/Amazon_Alexa and https://en.wikipedia.org/wiki/Siri.

  84. UNESCO (2019), p. 16.

  85. Id.

  86. See, inter alia, the provisions of Art. 26 ‘Advertising on Online Platforms’ and Art. 27 ‘Recommender System Transparency’.

  87. See more examples in Iglesias et al. (2019), pp. 12 ff.

  88. Yanisky-Ravid (2017), p. 676.

  89. See Kennedy (2019).

  90. European Parliament resolution of 20 October 2020 on intellectual property rights for the development of artificial intelligence technologies (2020/2015(INI)), Recital D.

  91. Id., pp. 13–15.

  92. See https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents/artificial-intelligence-and-intellectual-property-copyright-and-patents#copyright.

  93. Turing (1950).

  94. See Varadi (2023).

  95. Self-driving cars have already been involved in car accidents in America; see for more O’Kane (2018); Wulf and Seizov (2020), p. 631.

  96. See O’Kane (2018).

  97. European Commission, COM (2022) 495 final.

  98. European Commission, COM (2021) 202 final.

  100. 100.

    See Assaro (2006), p. 10.

  101. 101.

    Id.

  102. 102.

    Papadouli (2022), p. 34.

  103. 103.

    See Allen et al. (2000), p. 251; Anderson and Anderson (2007), p. 15.

  104. 104.

    Carabantes (2020), p. 316; Papadouli (2022), pp. 34 ff.

  105. 105.

    Papadouli (2022), p. 35. Cf. also Assaro (2006), p. 11; Anderson and Anderson (2007), p. 15.

  106. 106.

    See Rissland (1988).

  107. 107.

    See Spyropoulos et al. (2022).

  108. 108.

    Id.

  109. 109.

    See Floridi et al. (2018); European Commission, Com (2019), 168 final, p. 2; European Commission, COM (2020), 65 final, p. 3.

  110. 110.

    See Rissland (1990).

  111. 111.

    Van Lent et al. (2004), p. 900. See also Adadi and Berrada (2018), p. 52139; Zednik (2019), p. 2; Licht and Licht (2020), p. 919; Carabantes (2020), p. 314; Wulf and Seizov (2020), pp. 621–622.

  112. 112.

    Adadi and Berrada (2018), p. 52138.

  113. 113.

    Id., p. 52142. See also European Parliament, Civil Law Rule on Robotics, (2018/C 252/25), Ethical Principles, Recital Nr. 12.

  114. 114.

    Papadouli (2022), p. 28.

  115. 115.

    See also Lipton (2017), pp. 7 ff.

  116. 116.

    See European Commission, COM (2021) 206 final, Recital Nr. 38, 39.

  117. 117.

    See Art. 13 par. 3 COM (2021) 206 final.

  118. 118.

    Papadouli (2022), p. 33.

  119. 119.

    Cf. Floridi et al. (2018), pp. 697–698.

  120. 120.

    Lipton (2017), pp. 15 ff.

  121. 121.

    See Woodstra (2020), p. 3.

  122. 122.

    Felzmann et al. (2019), p. 4.

  123. 123.

    See European Commission, COM (2019), 168 final, p. 4 (Reference 13)·Opinion of the European Economic and Social Committee, COM (2020) 65 final, Recital Nr. 2.3.


Author information

Correspondence to Vasiliki Papadouli.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Papadouli, V. (2023). Artificial Intelligence’s Black Box: Posing New Ethical and Legal Challenges on Modern Societies. In: Kornilakis, A., Nouskalis, G., Pergantis, V., Tzimas, T. (eds) Artificial Intelligence and Normative Challenges. Law, Governance and Technology Series, vol 59. Springer, Cham. https://doi.org/10.1007/978-3-031-41081-9_4


  • DOI: https://doi.org/10.1007/978-3-031-41081-9_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41080-2

  • Online ISBN: 978-3-031-41081-9

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
