Abstract
The central challenge is how to safeguard the right to informational self-determination and to prevent harm to individuals caused by algorithmic activities and algorithm-driven outcomes. In light of the growing impact of artificial intelligence on both society and individuals, the question of the ‘traceability’ of automated decision-making processes arises. From a broader perspective, the transparency obligation is articulated as a response to ‘the opacity of the algorithm’, a problem that neither end users nor the designers of a system can adequately address. How to look inside the ‘black box’ of machine learning algorithms, that is, into the obscure rationale by which an algorithm classifies new inputs or predicts unknown correlations and variables, while simultaneously protecting trade secrets and intellectual property rights, has provoked a significant debate among industry, data protection advocates, academics and policymakers. This chapter discusses the application of the transparency and explainability principles to artificial intelligence, considering the nature of, the inherent obstacles to and the risks associated with such application in the light of the new regulatory framework, and provides a pragmatic explanation of why and how the black box will not become another Pandora’s box.
Notes
- 1.
See Norwegian Data Protection Authority (2018), p. 5.
- 2.
As underlined by the Commission Nationale de l’Informatique et des Libertés (‘CNIL’) ‘algorithms are not only opaque to their end users, the designers themselves are also steadily losing the ability to understand the logic behind the results produced’. For more information on the ‘engineering principles’, i.e. intelligibility, accountability and human intervention, see CNIL (2017), p. 51.
- 3.
- 4.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
- 5.
See Articles 13(2)(f) and 14(2)(g) respectively.
- 6.
Supra n2.
- 7.
There is a lot of discussion and no consensus on the definition of artificial intelligence. One of the most commonly used definitions is the one provided by the EU High-Level Expert Group on Artificial Intelligence (2018), i.e. ‘systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions’. Pursuant to the EU High-Level Expert Group on Artificial Intelligence, ‘AI as a scientific discipline includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems)’. For more information on AI definition and taxonomy see also Samoili et al. (2020).
- 8.
See Communications from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions (2018).
- 9.
Supra n1, p. 6.
- 10.
Big data is deemed to be the evolution of existing data analytics technologies; there is no sharp distinction between big data and traditional analytics. However, three characteristics are usually associated with the big data trend: Volume, i.e. data is usually produced in large volumes; Velocity, i.e. data is generated at high speed; and Variety, i.e. data usually takes various formats or types. See European Commission (2017).
- 11.
See Bathaee (2018), p. 891.
- 12.
See Castelvecchi (2016), p. 20.
- 13.
This term has been commonly used to describe the data-monitoring systems in planes. See Pasquale (2015), p. 320.
- 14.
See Balkin (2017), pp. 1217 and 1219.
- 15.
See Council of Europe (2017).
- 16.
Supra n2, p. 16.
- 17.
For an analysis of transparency principle, see Article 29 Working Party Guidelines on transparency under Regulation 2016/679, adopted on 29 November 2017 as last revised and adopted on 11 April 2018.
- 18.
Other synonyms or related terms to ‘explainability’ used by scholars are ‘explicability’, ‘intelligibility’, ‘comprehensibility’, ‘understandability’ and ‘foreseeability’.
- 19.
See Ferretti et al. (2018), p. 322.
- 20.
See Wachter et al. (2017), pp. 76 and 78.
- 21.
It seems that criticism was most intense prior to the issuance of the Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP251), adopted on 3 October 2017, as last revised and adopted on 6 February 2018.
- 22.
For a detailed analysis of the legislative process see Wachter et al. (2017), p. 81.
- 23.
See Klimas and Vaičiukaitė (2008), p. 73.
- 24.
See Baratta (2014).
- 25.
See Wachter and Mittelstadt (2019), p. 499. The authors state further ‘this situation is not accidental. In standing jurisprudence, the European Court of Justice (ECJ; Bavarian Lager, YS. and M. and S., and Nowak) and the Advocate General (AG; YS. and M. and S. and Nowak) have consistently restricted the remit of data protection law to assessing the legitimacy of input personal data undergoing processing, and to rectify, block, or erase it. Critically, the ECJ has likewise made clear that data protection law is not intended to ensure the accuracy of decisions and decision-making processes involving personal data, or to make these processes fully transparent’.
- 26.
- 27.
See Kaminski (2019), p. 194.
- 28.
See Selbst and Powles (2017), p. 235.
- 29.
Kaminski (2019), p. 195.
- 30.
WP29 Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 make explicit references to Recital 71 and its purpose. In particular, Recital 71 appears within the Guidelines three times. On page 19: ‘Recital 71 says that such processing should be “subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”’ On page 27: ‘Recital 71 highlights that in any case suitable safeguards should also include: … specific information to the data subject and the right … to obtain an explanation of the decision reached after such assessment and to challenge the decision. The controller must provide a simple way for the data subject to exercise these rights. This emphasises the need for transparency about the processing. The data subject will only be able to challenge a decision or express their view if they fully understand how it has been made and on what basis’. And on page 31: ‘Article 22 (3) and Recital 71 also specify that even in the cases referred to in 22(2)(a) and (c) the processing should be subject to suitable safeguards’.
- 31.
Kaminski (2019), p. 204.
- 32.
Pursuant to the Guidelines, safeguards related to Art. 22 GDPR and Recital 71 include indicatively: ‘Regular quality assurance checks of the systems to make sure that individuals are being treated fairly and not discriminated against, whether on the basis of special categories of personal data or otherwise; algorithmic auditing—testing the algorithms used and developed by machine learning systems to prove that they are actually performing as intended, and not producing discriminatory, erroneous or unjustified results; for independent ‘third party’ auditing auditors shall be provided with all necessary information about how the algorithm or machine learning system works; obtaining contractual assurances for third party algorithms that auditing and testing has been carried out and the algorithm is compliant with agreed standards; specific measures for data minimisation to incorporate clear retention periods for profiles and for any personal data used when creating or applying the profiles; using anonymisation or pseudonymisation techniques in the context of profiling; ways to allow the data subject to express his or her point of view and contest the decision; and a mechanism for human intervention in defined cases, for example providing a link to an appeals process at the point the automated decision is delivered to the data subject, with agreed timescales for the review and a named contact point for any queries’.
- 33.
See ICO (2017), p. 54.
- 34.
See CNIL (2017), p. 51.
- 35.
See European Parliament (2020).
- 36.
See Hamon et al. (2020), p. 25.
- 37.
Loi n° 2016-1321 du 7 octobre 2016 pour une République numérique.
- 38.
Unsupervised learning forms one of the three main categories of machine learning, along with supervised and reinforcement learning. It is a type of machine learning that looks for previously undetected patterns in a data set with no pre-existing labels and with a minimum of human supervision.
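For illustration only, a minimal sketch of one unsupervised technique, k-means clustering, written in plain Python; the point set, the number of clusters and the seed are illustrative assumptions, not drawn from this chapter. The algorithm groups unlabelled points into clusters with no human-provided labels, which is the sense in which it finds ‘previously undetected patterns’.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Toy k-means: no labels are ever supplied, only raw points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: each centroid moves to the mean of its members.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = (sum(p[0] for p in cl) / len(cl),
                                sum(p[1] for p in cl) / len(cl))
    return centroids, clusters

# Two visually separated blobs; the algorithm recovers them unaided.
data = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0),
        (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
centroids, clusters = kmeans(data, k=2)
```

The sketch also hints at why such systems are opaque in the sense discussed in the text: the resulting grouping emerges from the data and the iteration, not from any rule a designer wrote down in advance.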
- 39.
See Lipton (2018), p. 13.
- 40.
An increasing number of AI-based systems possess in-built feedback loops, which allow them to constantly adjust the weight of their variables depending on the effects of their algorithms on the users. See Wischmeyer (2020), p. 82.
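A toy sketch of such a feedback loop, with a hypothetical update rule and made-up numbers: a single feature weight is nudged after every simulated user reaction, so the system’s behaviour drifts with the effects it itself produces on users.

```python
def update_weight(weight, engaged, rate=0.1):
    """Nudge the feature weight up after engagement, down otherwise."""
    return weight + rate if engaged else weight - rate

weight = 1.0
for engaged in [True, True, False, True]:  # simulated user reactions
    weight = update_weight(weight, engaged)
```

After the four simulated reactions the weight has moved from its initial value, illustrating how the model at any later moment depends on its own earlier outputs.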
- 41.
See Donovan et al. (2018), p. 7.
- 42.
Supra n40, p. 81.
- 43.
See Rai (2020).
- 44.
Recital 63 GDPR explicitly states ‘That right should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software’.
- 45.
See Veale et al. (2018).
- 46.
Wischmeyer refers to ‘the legitimate interests of the system operators in AI secrecy which make individual access to information rights in this field a delicate balancing act’. See Wischmeyer (2020), p. 79.
- 47.
See Burrell (2016), p. 3.
- 48.
Supra n3, p. 82.
- 49.
See Ananny and Crawford (2016), p. 7.
- 50.
See Mitrou (2019), p. 72.
- 51.
See Wachter et al. (2018). The authors introduced the concept of ‘counterfactual explanations’, according to which only those factors of an AI-based decision-making process that would need to change to arrive at a different outcome must be disclosed.
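A minimal sketch of the idea, using a hypothetical toy credit model with made-up thresholds (the `approve` function, the income and debt figures and the search step are all assumptions for illustration): instead of opening the model, the explanation reports the smallest feature change that would flip the decision.

```python
def approve(income, debt):
    """Toy model standing in for an opaque decision system."""
    return income - 2 * debt >= 50

def counterfactual(income, debt, step=1, max_delta=100):
    """Search for the nearest income increase that flips a refusal."""
    for delta in range(0, max_delta + 1, step):
        if approve(income + delta, debt):
            return delta
    return None

# Refused applicant: income 60, debt 10, so the score 40 misses the bar.
delta = counterfactual(60, 10)
# The explanation: had the income been `delta` higher, approval follows.
```

Nothing about the model’s internals is revealed; only the decisive change of input is, which is why counterfactuals are discussed as a way to reconcile explanation with trade secrecy.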
- 52.
Recital 58 GDPR acknowledges that the principle of transparency is ‘of particular relevance in situations where the proliferation of actors and the technological complexity of practice makes it difficult for the data subject to know and understand whether, by whom and for what purpose personal data relating to him are being collected, such as in the case of online advertising’. The Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, on the other hand, state that ‘It has to be noted that complexity is no excuse for failing to provide information to the data subject’ (2018).
- 53.
The Council of Europe (2019) makes use of a similar notion, ‘effective transparency’, meaning that the underlying goal of increasing transparency need not always go hand in hand with a demand for ‘more data’.
- 54.
See Kamarinou et al. (2017), pp. 23f.
- 55.
See New and Castro (2018), p. 4.
- 56.
See European Parliament, European Parliamentary Research Service (2019b) EU guidelines on ethics in artificial intelligence: Context and implementation.
- 57.
Algorithmic auditing has been strongly supported in theory. The Article 29 Working Party Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 provide for safeguards, including quality assurance checks, algorithmic auditing and independent third-party auditing.
- 58.
This requirement derives from the perspective that individuals do not necessarily possess—nor do they need to develop—the competence to evaluate complex matters on their own. To this end, they can and should rely on experts and public institutions, especially the courts, to which they can bring their complaints and which can exercise their more far-reaching investigatory powers.
- 59.
In other words, different groups of stakeholders, including citizens, consumers, the media, parliaments, regulatory agencies, and courts are entitled to access to different degrees of information.
- 60.
New and Castro (2018), p. 18. Wischmeyer, indicatively, suggests that regulators have to perform a balancing test regarding AI secrecy and AI transparency considering the following factors: (1) The criticality of the sector in which the system is employed; (2) the relative importance of the system operator for this sector; (3) the quality and weight of the individual and collective rights which are affected by the deployment of the system; (4) the kind of data concerned; (5) how closely the technology is integrated into individual decision-making processes; (6) and, finally, whether and how it is possible to protect the legitimate interest of the system operators in AI secrecy through organisational, technical or legal means. For more information supra n2, Wischmeyer (2020) p. 94.
- 61.
Kaminski (2019), p. 210.
- 62.
- 63.
E.g. implementation of privacy by design and by default principles for any personal data related projects, designation of data protection officers with the required technical skills to ensure compliance with the privacy legal and regulatory framework etc.
- 64.
‘High risk’ data processing activities are indicatively listed in the GDPR (Art. 35 and Recitals 75 and 91 GDPR), further clarified by Article 29 Working Party Guidelines and complemented by the national data protection authorities.
- 65.
Mitrou (2019), p. 61.
- 66.
AI applications are deemed to meet several of the criteria set above (supra n63), since they (a) involve novel technologies, (b) process personal data on a large scale, and (c) produce legal effects on data subjects based on a systematic and extensive evaluation of personal aspects. The ICO, in its report on big data, artificial intelligence, machine learning and data protection (ICO 2017), firmly notes that ‘potential privacy risks’ have already been identified with ‘the use of inferred data and predictive analytics’.
- 67.
See European Parliament, European Parliamentary Research Service (2019c) Understanding algorithmic decision-making: Opportunities and Challenges, p. 73.
- 68.
European Parliament, European Parliamentary Research Service (2019b) EU guidelines on ethics in artificial intelligence: Context and implementation, p. 3.
References
Ananny M, Crawford K (2016) Seeing without knowing: limitations of the transparency ideal and its application to algorithmic accountability. New Media Soc 20(3):973–989
Article 29 Data Protection Working Party (2018) Guidelines on automated individual decision making and profiling for the purposes of regulation 2016/679 (wp251rev.01)
Article 29 Working Party Guidelines on transparency under Regulation 2016/679, adopted on 29 November 2017 as last revised and adopted on 11 April 2018
Balkin J (2017) The three laws of robotics in the age of big data. Ohio State Law J 78:1217–1241. https://ssrn.com/abstract=2890965. Accessed 30 Mar 2020
Baratta R (2014) Complexity of EU law in the domestic implementing process. In the 19th Quality of Legislations Seminar “EU Legislative Drafting: Views from those applying EU law in the Member States” Brussels, 3 July 2014. https://ec.europa.eu/dgs/legal_service/seminars/20140703_baratta_speech.pdf. Accessed 30 Mar 2020
Bathaee Y (2018) The artificial intelligence black box and the failure of intent and causation. Harv J Law Technol 31:2
Brkan M (2019) Do algorithms rule the world? Algorithmic decision-making in the framework of the GDPR and beyond. Int J Law Info Technol 1:13–20
Burrell J (2016) How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc 2016:1–12. https://doi.org/10.1177/2053951715622512
Castelvecchi D (2016) Can we open the Black Box of AI? Nature 538(7623):20–23
Commission Nationale de l’Informatique et des Libertés (CNIL) (2017) How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence. https://www.cnil.fr/sites/default/files/atoms/files/cnil_rapport_ai_gb_web.pdf. Accessed 30 Mar 2020
Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions (2018) Artificial Intelligence for Europe, SWD(2018) 137 final
Council of Europe (2017) Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications, Prepared by the Committee of Experts on Internet Intermediaries (MSI-NET). https://rm.coe.int/study-hr-dimension-of-automated-data-processing-incl-algorithms/168075b94a. Accessed 30 Mar 2020
Council of Europe (2019) Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108), Report on Artificial Intelligence, Artificial Intelligence and Data Protection: Challenges and Possible Remedies. https://rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6. Accessed 30 Mar 2020
Donovan J, Matthews J, Caplan R, Hanson L (2018) Algorithmic accountability: a primer. https://datasociety.net/output/algorithmic-accountability-a-primer/. Accessed 30 Mar 2020
EU High-Level Expert Group on Artificial Intelligence (2018) A definition of AI: main capabilities and scientific disciplines. EU Commission. https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines Accessed 30 Mar 2020
European Commission, Digital Transformation Monitor (2017) Big data: a complex and evolving regulatory framework. https://ec.europa.eu/growth/tools-databases/dem/monitor/sites/default/files/DTM_Big%20Data%20v1_0.pdf. Accessed 30 Mar 2020
European Parliament, European Parliamentary Research Service (2019a) Governance framework for algorithmic accountability and transparency. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf. Accessed 30 Mar 2020
European Parliament, European Parliamentary Research Service (2019b) EU guidelines on ethics in artificial intelligence: context and implementation. https://www.europarl.europa.eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf. Accessed 30 Mar 2020
European Parliament, European Parliamentary Research Service (2019c) Understanding algorithmic decision-making: opportunities and challenges. https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf. Accessed 30 Mar 2020
European Parliament Resolution of 12 February 2020 on automated decision-making processes: ensuring consumer protection and free movement of goods and services. https://www.europarl.europa.eu/doceo/document/TA-9-2020-0032_EN.html. Accessed 30 Mar 2020
Ferretti A, Schneider M, Blasimme A (2018) Machine learning in medicine: opening the new data protection Black Box. Eur Data Prot Law Rev 4:320–332. https://doi.org/10.21552/edpl/2018/3/10
Hamon R, Junklewitz H, Sanchez I (2020) Robustness and explainability of artificial intelligence - from technical to policy solutions, Publications Office of the European Union, Luxembourg. https://doi.org/10.2760/57493
Information Commissioner Office (ICO) (2017) Big data, artificial intelligence, machine learning and data protection
Kamarinou D, Millard C, Singh J (2017) Machine learning with personal data. In: Leenes R, van Brakel R, Gutwirth S, De Hert P (eds) Data protection and privacy: the age of intelligent machines. Hart, Oxford
Kaminski M (2019) The right to explanation, explained. Berkeley Technol Law J 34:189–218. https://doi.org/10.15779/Z38TD9N83H
Klimas T, Vaičiukaitė J (2008) The law of recitals in European Community legislation. ILSA J Int Comp Law 15(1):65–93
Lipton ZC (2018) The mythos of model interpretability. In machine learning, the concept of interpretability is both important and slippery. ACMQueue 16(3). https://queue.acm.org/detail.cfm?id=3241340. Accessed 30 Mar 2020
Mitrou L (2019) Data protection, artificial intelligence and cognitive services. Is the General Data Protection Regulation (GDPR) “Artificial Intelligence-proof”? https://ssrn.com/abstract=3386914 or https://doi.org/10.2139/ssrn.3386914. Accessed 30 Mar 2020
New J, Castro D (2018) How policymakers can foster algorithmic accountability. Information Technology and Innovation Foundation, Washington DC
Norwegian Data Protection Authority (2018) Artificial intelligence and privacy. https://www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf. Accessed 30 Mar 2020
Pasquale F (2015) The Black Box Society: the secret algorithms that control money and information. Harvard University Press, Cambridge, MA
Rai A (2020) Explainable AI: from black box to glass box. J Acad Mark Sci 48:137–141
Samoili S, López Cobo M, Gómez E, De Prato G, Martínez-Plumed F, Delipetrev B (2020) AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial intelligence, EUR 30117 EN, Publications Office of the European Union, Luxembourg
Selbst A, Powles J (2017) Meaningful information and the right to explanation. Int Data Privacy Law 7:233–242
Veale M, Binns R, Edwards L (2018) Algorithms that remember: model inversion attacks and data protection law. Philos Trans R Soc A Math Phys Eng Sci 376(2133). https://doi.org/10.1098/rsta.2018.0083
Wachter S, Mittelstadt B (2019) A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Columbia Bus Law Rev 2019(2):494–620
Wachter S, Mittelstadt B, Floridi L (2017) Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int Data Privacy Law 7:76–99
Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the Black Box: automated decisions and the GDPR. Harv J Law Technol 31:841–887
Wischmeyer T (2020) Artificial intelligence and transparency: opening the Black Box. In: Wischmeyer T, Rademacher T (eds) Regulating artificial intelligence. Springer, Cham, pp 75–101
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Vorras, A., Mitrou, L. (2021). Unboxing the Black Box of Artificial Intelligence: Algorithmic Transparency and/or a Right to Functional Explainability. In: Synodinou, TE., Jougleux, P., Markou, C., Prastitou-Merdi, T. (eds) EU Internet Law in the Digital Single Market. Springer, Cham. https://doi.org/10.1007/978-3-030-69583-5_10
Print ISBN: 978-3-030-69582-8
Online ISBN: 978-3-030-69583-5