Extended Abstract

The European Commission has identified Artificial Intelligence (AI) as the “most strategic technology of the 21st century” [7]. AI is already part of our everyday life through many successful real-world applications and, according to Accenture [16], the economic impact of the automation of knowledge work, robots and self-driving vehicles could reach between €6.5 and €12 trillion annually by 2025. People are used to buzzwords such as smart watch, smart phone, smart home, smart car, smart city, etc. In practice, we are surrounded by smart gadgets, i.e., devices connected to the Internet and endowed with some level of autonomy and intelligence thanks to AI systems. The cohabitation of humans and smart gadgets leads society to demand a new generation of explainable AI systems, i.e., AI systems ready to explain their automatic decisions naturally, as humans do.

Thus, the research field of explainable AI is flourishing and attracting more and more attention, not only regarding technical but also ethical and legal issues [8]. The ACM Code of Ethics [1] highlighted explanation as a basic principle in the search for “Algorithmic Transparency and Accountability”. In addition, Floridi et al. defined the concept of “explicability” in reference to both “intelligibility” and “explainability”, and hence captured the need for transparency and accountability in an ethical framework for AI [10]. Moreover, the new European General Data Protection Regulation (GDPR) [14] refers to the “right to explanation”: European citizens have the right to ask for explanations of decisions affecting them, no matter who (or what AI system) makes such decisions.

The term eXplainable Artificial Intelligence (XAI) was coined by the USA Defense Advanced Research Projects Agency (DARPA) [11]. Starting from the premise that “even though current AI systems offer many benefits in many applications, their effectiveness is limited by a lack of explanation ability when interacting with humans”, DARPA challenged the research community (both academia and industry) to design new self-explanatory AI systems, running its XAI program from 2017 to 2021.

In Europe, there is not yet any initiative comparable to the DARPA challenge on XAI. However, the European Commission has already pointed out the convenience of launching a pilot on XAI [7]. In June 2018, the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE), a novel initiative to create a network of excellence in AI among the most well-recognized universities and R+D centres, emphasized in its European vision for AI the need to search for transparent, explainable, fair and socially compatible intelligent systems. Moreover, the AI4EU H2020 project, funded by call ICT-26 2018 (grant 825619), aims (1) to mobilize the entire European AI community to make the promises of AI real for the European society and economy; and (2) to create a leading collaborative AI European platform to nurture economic growth. Explainable human-centered AI is highlighted as one of its five key research areas and is present in 5 of the 8 experimental pilots to be developed.

In the rest of this manuscript we briefly review a selection of Zadeh’s outstanding contributions which are likely to have a direct impact on the research field of XAI. The paradigm of Computing with Words (CWW) is especially relevant because humans are used to explanations in natural language (NL).

From Prof. Zadeh’s seminal ideas on fuzzy sets and systems [21], many key concepts such as linguistic variables and linguistic rules have emerged in the field of Fuzzy Logic (FL). Accordingly, FL has many successful applications [19]. In addition, as described in [4], about 30% of publications in XAI come from authors well recognized in the field of FL. This is mainly due to the commitment of the fuzzy community to produce interpretable fuzzy systems [3]. Indeed, interpretability is deeply rooted in the fundamentals of FL. However, it is worth noting that interpretability is not guaranteed merely by applying FL. In practice, producing interpretable fuzzy systems is a matter of careful design [17].
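To illustrate the kind of constructs at stake, the following minimal sketch shows a linguistic variable and the firing of a linguistic rule antecedent in plain Python; the variable name, term set and partition boundaries are illustrative assumptions, not taken from the cited works:

```python
# A minimal sketch of a linguistic variable (all names and boundaries
# are illustrative assumptions).

def triangular(a, b, c):
    """Return a triangular membership function with support (a, c) and core {b}."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Linguistic variable "temperature" with three linguistic terms.
temperature = {
    "cold": triangular(-10.0, 0.0, 15.0),
    "warm": triangular(5.0, 20.0, 30.0),
    "hot": triangular(25.0, 35.0, 50.0),
}

# A linguistic rule such as "IF temperature IS hot THEN fan IS fast"
# fires to the degree that its antecedent is satisfied.
x = 28.0
firing_strength = temperature["hot"](x)  # degree in [0, 1]
print(f"temperature={x} is 'hot' to degree {firing_strength:.2f}")
```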

In XAI, interpretability is a key issue, but understandability and comprehensibility, which are not so deeply considered by the FL community, also play a prominent role. Nowadays, a new generation of intelligent systems is expected to provide users with natural explanations, i.e., explanations that are easy to understand no matter the user’s background. Since humans naturally think and compute with words, explanations in NL are likely to be perceived as natural explanations. Prof. Zadeh was the first to talk about CWW [22] as an extension of fuzzy sets and systems. Later, Prof. Kacprzyk gave some hints about how to implement CWW [12]. Moreover, he highlighted the need to connect CWW with the paradigm of NL Generation (NLG) [9], a well-known area within the Computational Linguistics and AI research fields. The connection between FL and NLG has been further researched by other authors [2, 15].
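As a rough illustration of how CWW and NLG can be chained, the sketch below maps a numeric input to its best-fitting word and then realizes a simple NL explanation through a template; the labels, membership functions and template are illustrative assumptions, whereas real NLG systems rely on far richer realization machinery:

```python
# A minimal sketch of the CWW pipeline: numeric input -> best-fitting
# word -> template-based NL explanation (labels and shapes assumed).

def trapezoidal(a, b, c, d):
    """Trapezoidal membership function with core [b, c] and support (a, d)."""
    def mu(x):
        if x <= a or x >= d:
            return 0.0
        if x < b:
            return (x - a) / (b - a)
        if x <= c:
            return 1.0
        return (d - x) / (d - c)
    return mu

quality = {
    "poor": trapezoidal(0, 0, 3, 5),
    "acceptable": trapezoidal(3, 5, 6, 8),
    "excellent": trapezoidal(6, 8, 10, 10),
}

def words_for(value, variable):
    """Fuzzification step of CWW: rank words by membership degree."""
    return sorted(((mu(value), word) for word, mu in variable.items()), reverse=True)

def explain(value):
    """Tiny NLG realization: turn the best-fitting word into a sentence."""
    degree, word = words_for(value, quality)[0]
    return f"The service quality is {word} (to degree {degree:.2f})."

print(explain(7.2))  # "The service quality is excellent (to degree 0.60)."
```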

In addition, Prof. Zadeh pioneered a new generation of more natural intelligent systems, ready to compute with perceptions and to perform approximate reasoning as humans naturally do. Thus, the Computational Theory of Perceptions (CTP) [20, 23] was first introduced by Zadeh and later applied by Trivino and Sugeno to automatically generate linguistic descriptions of complex phenomena [18]. The CTP has been successfully applied, for example, to explain energy consumption at home [5] or to automatically generate linguistic descriptions associated with the USA census data [6].
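For instance, a linguistic description in the spirit of CTP can be obtained by evaluating a Zadeh-style protoform such as “Q of the days, consumption was A”. The following sketch computes the truth degree of “most days, consumption at home was low”; the data, labels and quantifier shape are illustrative assumptions, not taken from [5]:

```python
# A minimal sketch of evaluating the protoform "Q of the days,
# consumption was A" (data, labels and quantifier assumed).

def mu_low(kwh):
    """Membership of a daily consumption value in the fuzzy set 'low'."""
    if kwh <= 5.0:
        return 1.0
    if kwh >= 10.0:
        return 0.0
    return (10.0 - kwh) / 5.0

def mu_most(proportion):
    """Fuzzy quantifier 'most' over a proportion in [0, 1]."""
    if proportion <= 0.3:
        return 0.0
    if proportion >= 0.8:
        return 1.0
    return (proportion - 0.3) / 0.5

daily_kwh = [4.2, 3.8, 6.5, 4.9, 5.5, 12.0, 4.0]

# Zadeh-style evaluation: the truth of the summary is the quantifier
# applied to the average membership of the data in 'low'.
avg_membership = sum(mu_low(x) for x in daily_kwh) / len(daily_kwh)
truth = mu_most(avg_membership)
print(f'"Most days, consumption at home was low" is true to degree {truth:.2f}')
```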

In addition, Prof. Zadeh coined the concept of cointension [24]. The semantic-cointension approach [13] has already been applied to assess the interpretability of fuzzy systems. Likewise, it can be considered when evaluating the understandability of explanations in XAI. In short, two different concepts are taken as cointensive when they refer to almost the same entities. Accordingly, an explanation in NL is deemed comprehensible only when the explicit semantics embedded in it is cointensive with the implicit semantics inferred by the user when reading and processing the given explanation.
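One simple way to operationalize this idea is to measure cointension as the similarity between two fuzzy sets: the explicit semantics of a word as designed into the system, and the implicit semantics the user attaches to it. The sketch below uses a fuzzy Jaccard index over a sampled universe; both membership functions are illustrative assumptions:

```python
# A minimal sketch of cointension as fuzzy-set similarity between the
# designer's and the user's meaning of a word (both meanings assumed).

def designer_high(x):
    """'high' as defined by the system designer (explicit semantics)."""
    return max(0.0, min(1.0, (x - 60.0) / 20.0))

def user_high(x):
    """'high' as understood by the user (implicit semantics)."""
    return max(0.0, min(1.0, (x - 70.0) / 20.0))

def jaccard_similarity(mu1, mu2, universe):
    """Fuzzy Jaccard index: |A ∩ B| / |A ∪ B| over a sampled universe."""
    inter = sum(min(mu1(x), mu2(x)) for x in universe)
    union = sum(max(mu1(x), mu2(x)) for x in universe)
    return inter / union if union > 0 else 1.0

universe = list(range(0, 101))
degree = jaccard_similarity(designer_high, user_high, universe)
print(f"Cointension degree between the two meanings of 'high': {degree:.2f}")
# A degree close to 1 suggests the explanation will be understood as intended.
```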

To sum up, Prof. Zadeh made many highly valuable contributions to the FL field and beyond. Many of them were pioneering ideas and/or challenging proposals with great potential to be fully developed later by other researchers. Nowadays, XAI is a prominent and fruitful research field where many of Zadeh’s contributions can become crucial if they are carefully considered and thoroughly developed. For example, two major open challenges for XAI are: (1) how to build conversational agents able to provide humans with semantically grounded, persuasive and trustworthy interactive explanations; and (2) how to measure the effectiveness and naturalness of automatically generated explanations. CWW, as well as fuzzy measures and Z-numbers [25] introduced by Zadeh, are likely to contribute to successfully addressing both challenges and achieving valuable results.
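As a final illustration, recall that a Z-number is a pair (A, B), where A is a fuzzy restriction on the value of a variable and B is a fuzzy restriction on the reliability of A. The following sketch shows one possible representation as a data structure; the variable, linguistic terms and membership functions are illustrative assumptions:

```python
# A minimal sketch of a Z-number as a data structure: the pair (A, B)
# from [25], with illustrative (assumed) terms and membership functions.

from dataclasses import dataclass
from typing import Callable

Membership = Callable[[float], float]

@dataclass
class ZNumber:
    variable: str
    value_restriction: str   # A, e.g. "about 30 minutes"
    reliability: str         # B, e.g. "very likely"
    mu_a: Membership         # membership function for A (over values)
    mu_b: Membership         # membership function for B (over probabilities)

# "(travel time, about 30 minutes, very likely)"
z = ZNumber(
    variable="travel time",
    value_restriction="about 30 minutes",
    reliability="very likely",
    mu_a=lambda t: max(0.0, 1.0 - abs(t - 30.0) / 10.0),
    mu_b=lambda p: max(0.0, min(1.0, (p - 0.7) / 0.2)),
)
print(f"{z.variable} is {z.value_restriction}, {z.reliability}")
```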