
Unboxing the Black Box of Artificial Intelligence: Algorithmic Transparency and/or a Right to Functional Explainability

  • Conference paper
EU Internet Law in the Digital Single Market

Abstract

The major challenge is how to safeguard the right to informational self-determination and prevent harms to individuals caused by algorithmic activities and algorithm-driven outcomes. In the light of the growing impact of artificial intelligence on both society and individuals, the issue of ‘traceability’ of automated decision-making processes arises. In a broader perspective, the transparency obligation is articulated as a need to face ‘the opacity of the algorithm’, an issue that not only end users but also the designers of a system cannot deal with in a sufficient way. The demand to look inside the ‘black box’ of machine learning algorithms, into the obscure rationale by which an algorithm classifies new inputs or predicts unknown correlations and variables, while simultaneously protecting commercial secrets and intellectual property rights, has provoked a significant debate among industry, data protection advocates, academics and policymakers. This chapter discusses the application of the transparency and explainability principles to artificial intelligence, considering the nature, the inherent obstacles and the risks associated with such application in the light of the new regulatory framework, and provides a pragmatic explanation as to why and how the black box will not become another Pandora’s Box.

Notes

  1.

    See Norwegian Data Protection Authority (2018), p. 5.

  2.

    As underlined by the Commission Nationale de l’Informatique et des Libertés (‘CNIL’) ‘algorithms are not only opaque to their end users, the designers themselves are also steadily losing the ability to understand the logic behind the results produced’. For more information on the ‘engineering principles’, i.e. intelligibility, accountability and human intervention, see CNIL (2017), p. 51.

  3.

    See Mitrou (2019), p. 54, as well as Wischmeyer (2020), p. 76 for a comprehensive list of policy papers, national planning strategies, expert recommendations, and stakeholder initiatives on AI transparency.

  4.

    Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).

  5.

    See Articles 13(2)(f) and 14(2)(g) respectively.

  6.

    Supra n2.

  7.

    There is a lot of discussion and no consensus on the definition of artificial intelligence. One of the most commonly used definitions is the one provided by the EU High-Level Expert Group on Artificial Intelligence (2018), i.e. ‘systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions’. Pursuant to the EU High-Level Expert Group on Artificial Intelligence, ‘AI as a scientific discipline includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)’. For more information on AI definition and taxonomy see also Samoili et al. (2020).

  8.

    See Communications from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions (2018).

  9.

    Supra n1, p. 6.

  10.

    Big Data is deemed to be the evolution of existing data analytics technologies. There is no clear-cut difference between big data and traditional analytics; however, the following three characteristics are usually associated with the big data trend: Volume, i.e. data is usually generated in large volumes; Velocity, i.e. data is generated at a high-speed pace; and Variety, i.e. the data generated usually takes various formats or types. See European Commission (2017).

  11.

    See Bathaee (2018), p. 891.

  12.

    See Castelvecchi (2016), p. 20.

  13.

    This term has been commonly used to describe the data-monitoring systems in planes. See Pasquale (2015), p. 320.

  14.

    See Balkin (2017), pp. 1217 and 1219.

  15.

    See Council of Europe (2017).

  16.

    Supra n2, p. 16.

  17.

    For an analysis of the transparency principle, see Article 29 Working Party Guidelines on transparency under Regulation 2016/679, adopted on 29 November 2017, as last revised and adopted on 11 April 2018.

  18.

    Other synonyms or related terms to ‘explainability’ used by scholars are ‘explicability’, ‘intelligibility’, ‘comprehensibility’, ‘understandability’ and ‘foreseeability’.

  19.

    See Ferretti et al. (2018), p. 322.

  20.

    See Wachter et al. (2017), pp. 76 and 78.

  21.

    It seems that criticism was most intense prior to the issuance of the Article 29 Working Party Guidelines (WP251) on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, adopted on 3 October 2017, as last revised and adopted on 6 February 2018.

  22.

    For a detailed analysis of the legislative process see Wachter et al. (2017), p. 81.

  23.

    See Klimas and Vaičiukaitė (2008), p. 73.

  24.

    See Baratta (2014).

  25.

    See Wachter and Mittelstadt (2019), p. 499. The authors state further ‘this situation is not accidental. In standing jurisprudence, the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent’.

  26.

    See Brkan (2019), p. 13 and Wischmeyer (2020), p. 83 for respective case-law.

  27.

    See Kaminski (2019), p. 194.

  28.

    See Selbst and Powles (2017), p. 235.

  29.

    Kaminski (2019), p. 195.

  30.

    WP29 Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 make explicit references to Recital 71 and its purpose. In particular, Recital 71 appears within the Guidelines three times. On page 19: ‘Recital 71 says that such processing should be “subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”’ On page 27: ‘Recital 71 highlights that in any case suitable safeguards should also include: … specific information to the data subject and the right … to obtain an explanation of the decision reached after such assessment and to challenge the decision. The controller must provide a simple way for the data subject to exercise these rights. This emphasises the need for transparency about the processing. The data subject will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis’. And on page 31: ‘Article 22 (3) and Recital 71 also specify that even in the cases referred to in 22(2)(a) and (c) the processing should be subject to suitable safeguards’.

  31.

    Kaminski (2019), p. 204.

  32.

    Pursuant to the Guidelines, safeguards related to Art. 22 GDPR and Recital 71 include indicatively: ‘Regular quality assurance checks of the systems to make sure that individuals are being treated fairly and not discriminated against, whether on the basis of special categories of personal data or otherwise; algorithmic auditing—testing the algorithms used and developed by machine learning systems to prove that they are actually performing as intended, and not producing discriminatory, erroneous or unjustified results; for independent ‘third party’ auditing auditors shall be provided with all necessary information about how the algorithm or machine learning system works; obtaining contractual assurances for third party algorithms that auditing and testing has been carried out and the algorithm is compliant with agreed standards; specific measures for data minimisation to incorporate clear retention periods for profiles and for any personal data used when creating or applying the profiles; using anonymisation or pseudonymisation techniques in the context of profiling; ways to allow the data subject to express his or her point of view and contest the decision; and a mechanism for human intervention in defined cases, for example providing a link to an appeals process at the point the automated decision is delivered to the data subject, with agreed timescales for the review and a named contact point for any queries’.

  33.

    See ICO (2017), p. 54.

  34.

    See CNIL (2017), p. 51.

  35.

    See European Parliament (2020).

  36.

    See Hamon et al. (2020), p. 25.

  37.

    Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique.

  38.

    Unsupervised learning forms one of the three main categories of machine learning, along with supervised and reinforcement learning. It is a type of machine learning that looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision (see the illustrative sketch appended after these notes).

  39.

    See Lipton (2018), p. 13.

  40.

    An increasing number of AI-based systems possess in-built feedback loops, which allow them to constantly adjust the weight of their variables depending on the effects of their algorithms on the users (a toy sketch of such a loop appears after these notes). See Wischmeyer (2020), p. 82.

  41.

    See Donovan et al. (2018), p. 7.

  42.

    Supra n40, p. 81.

  43.

    See Rai (2020).

  44.

    Recital 63 GDPR explicitly states ‘That right should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software’.

  45.

    See Veale et al. (2018).

  46.

    Wischmeyer refers to ‘the legitimate interests of the system operators in AI secrecy which make individual access to information rights in this field a delicate balancing act’. See Wischmeyer (2020), p. 79.

  47.

    See Burrell (2016), p. 3.

  48.

    Supra n3, p. 82.

  49.

    See Ananny and Crawford (2016), p. 7.

  50.

    See Mitrou (2019), p. 72.

  51.

    See Wachter et al. (2018). The authors introduced the concept of ‘counterfactual explanations’, according to which only those factors of an AI-based decision-making process need to be disclosed that would have to change in order to arrive at a different outcome (a toy sketch of this idea appears after these notes).

  52.

    Recital 58 GDPR acknowledges that the principle of transparency is ‘of particular relevance in situations where the proliferation of actors and the technological complexity of practice makes it difficult for the data subject to know and understand whether, by whom and for what purpose personal data relating to him are being collected, such as in the case of online advertising’. The Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (2018), on the other hand, state that ‘It has to be noted that complexity is no excuse for failing to provide information to the data subject’.

  53.

    The Council of Europe (2019) makes use of a similar definition, i.e. ‘effective transparency’, in the sense that the underlying goal of increasing transparency need not always go hand in hand with a demand for ‘more data’.

  54.

    See Kamarinou et al. (2017), pp. 23f.

  55.

    See New and Castro (2018), p. 4.

  56.

    See European Parliament, European Parliamentary Research Service (2019a) EU guidelines on ethics in artificial intelligence: Context and implementation.

  57.

    Algorithmic auditing has been heavily supported in theory. The Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 provide for safeguards, including quality assurance checks, algorithmic auditing and independent third-party auditing (a toy illustration of such an audit check appears after these notes).

  58.

    This requirement derives from the perspective that individuals do not necessarily possess—nor do they need to develop—the competence to evaluate complex matters on their own. To this end, they can and should rely on experts and public institutions, especially the courts, to which they can bring their complaints and which can exercise their more far-reaching investigatory powers.

  59.

    In other words, different groups of stakeholders, including citizens, consumers, the media, parliaments, regulatory agencies, and courts are entitled to access to different degrees of information.

  60.

    New and Castro (2018), p. 18. Wischmeyer, indicatively, suggests that regulators have to perform a balancing test regarding AI secrecy and AI transparency, considering the following factors: (1) the criticality of the sector in which the system is employed; (2) the relative importance of the system operator for this sector; (3) the quality and weight of the individual and collective rights which are affected by the deployment of the system; (4) the kind of data concerned; (5) how closely the technology is integrated into individual decision-making processes; and (6), finally, whether and how it is possible to protect the legitimate interest of the system operators in AI secrecy through organisational, technical or legal means. For more information see supra n2, Wischmeyer (2020), p. 94.

  61.

    Kaminski (2019), p. 210.

  62.

    See indicatively, Norwegian Data Protection Authority (2018), p. 27 and European Parliament, European Parliamentary Research Service (2019c) Understanding algorithmic decision-making: Opportunities and Challenges, p. 1.

  63.

    E.g. implementation of privacy-by-design and privacy-by-default principles for any personal data-related projects, designation of data protection officers with the required technical skills to ensure compliance with the privacy legal and regulatory framework, etc.

  64.

    ‘High risk’ data processing activities are indicatively listed in the GDPR (Art. 35 and Recitals 75 and 91 GDPR), further clarified by Article 29 Working Party Guidelines and complemented by the national data protection authorities.

  65.

    Mitrou (2019), p. 61.

  66.

    AI applications are deemed to meet several of the criteria set out above (supra n63), since they (a) involve novel technologies, (b) process personal data on a large scale, and (c) produce legal effects on data subjects based on a systematic and extensive evaluation of personal aspects. The ICO, in its report on big data, artificial intelligence, machine learning, and data protection (ICO 2017), notes firmly that ‘potential privacy risks’ have already been identified with ‘the use of inferred data and predictive analytics’.

  67.

    See European Parliament, European Parliamentary Research Service (2019c) Understanding algorithmic decision-making: Opportunities and Challenges, p. 73.

  68.

    European Parliament, European Parliamentary Research Service (2019b) EU guidelines on ethics in artificial intelligence: Context and implementation, p. 3.
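
As an illustration of the kind of algorithmic auditing referred to in notes 32 and 57, the following minimal Python sketch compares the rate of favourable automated decisions across two groups and raises a flag when the gap is large. The data, the group labels and the 0.8 threshold are hypothetical choices made purely for the example; they are not a method prescribed by the Guidelines or by this chapter.

import numpy as np

# Hypothetical decisions of an automated system (1 = favourable outcome)
# together with a group label for each individual (both invented).
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "A",
                  "B", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Favourable-outcome rate: A = {rate_a:.2f}, B = {rate_b:.2f}, ratio = {ratio:.2f}")
if ratio < 0.8:   # illustrative threshold only, not a legal standard
    print("Audit flag: outcome rates differ markedly between the groups; "
          "the system and its training data warrant closer inspection.")

A real audit would combine such statistical checks with documentation review, testing against agreed standards and access to information about how the algorithm or machine learning system works, as the Guidelines describe.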
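
To make the notion of unsupervised learning in note 38 concrete, the following minimal sketch clusters unlabelled records with k-means; the synthetic data and the use of scikit-learn are assumptions made only for the illustration.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))   # 300 unlabelled records with 4 numeric features (synthetic)

# The algorithm receives no labels; it partitions the records on its own.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(model.labels_[:10])        # cluster assignment per record: the discovered grouping
print(model.cluster_centers_)    # one centroid per cluster, learned without supervision

The ‘previously undetected patterns’ of the note are simply the cluster assignments and centroids the algorithm derives without any human-provided labels.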
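
As a toy illustration of the in-built feedback loops described in note 40, the sketch below nudges a ranking model's weights after every simulated user interaction, so that the system's own past outputs influence what it shows next. The features, learning rate and simulated user behaviour are all invented for the example.

import numpy as np

rng = np.random.default_rng(2)
weights = np.zeros(3)        # ranking model with 3 item features
learning_rate = 0.1

for step in range(1000):
    item = rng.normal(size=3)                   # candidate item that could be shown
    score = weights @ item                      # current model score
    shown = score > 0 or rng.random() < 0.1     # mostly show what already scores well
    if not shown:
        continue
    clicked = rng.random() < 1 / (1 + np.exp(-item[0]))   # simulated user prefers feature 0
    # Feedback loop: reinforce the features of items the user engaged with,
    # slightly discount the features of items the user ignored.
    weights += learning_rate * ((1.0 if clicked else -0.1) * item)

print(np.round(weights, 2))   # the weight on feature 0 drifts upward over time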
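
The counterfactual explanations of note 51 can be illustrated with the following toy sketch: rather than opening the model, it searches for the smallest single-feature change that would have led to a different (hypothetical) automated decision. The model, features and grid search are assumptions made for the example; the methods proposed in the literature formulate this as an optimisation problem.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical historical decisions: two numeric features and a binary outcome.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([-0.5, 0.4])            # the individual's data
original = model.predict([applicant])[0]     # the decision actually reached

best = None
for i in range(applicant.size):              # try changing one feature at a time
    for delta in np.linspace(-3.0, 3.0, 121):
        candidate = applicant.copy()
        candidate[i] += delta
        if model.predict([candidate])[0] != original:    # the decision flips
            if best is None or abs(delta) < abs(best[1]):
                best = (i, delta)

if best is not None:
    feature, change = best
    print(f"Decision reached: {original}. Counterfactual: changing feature "
          f"{feature} by {change:+.2f} would have led to a different outcome.")

Only the reported change needs to be disclosed to the individual, which is why counterfactuals are discussed as a way of providing an explanation without revealing the model itself.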

Author information

Correspondence to Lilian Mitrou.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Vorras, A., Mitrou, L. (2021). Unboxing the Black Box of Artificial Intelligence: Algorithmic Transparency and/or a Right to Functional Explainability. In: Synodinou, TE., Jougleux, P., Markou, C., Prastitou-Merdi, T. (eds) EU Internet Law in the Digital Single Market. Springer, Cham. https://doi.org/10.1007/978-3-030-69583-5_10

  • DOI: https://doi.org/10.1007/978-3-030-69583-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69582-8

  • Online ISBN: 978-3-030-69583-5

  • eBook Packages: Law and Criminology, Law and Criminology (R0)
