
1 Introduction

The number of artificial intelligence applications available on the business and consumer market has increased over the last years (Das et al. 2015). In some areas, more and more tasks have even been taken over by intelligent algorithms. The future impact of AI is expected to become even more pervasive and encompassing. One such example is a lifelong personal assistant (Gil and Selman 2019) that supports and tutors humans. Such systems will strongly affect social life and influence human decisions. Users’ trust in and reliance on such systems to make correct (or ‘good’) suggestions or decisions is a prerequisite for these AI systems to achieve their full functionality (Mohseni et al. 2018).

AI systems can provide explanations together with their decisions and suggestions, or interact with users when questions about those decisions and suggestions arise: in human-computer interaction – or rather human-AI interaction – explainability provides transparency and contributes to trust (Miller 2019). Even though trust itself is influenced by a variety of other aspects, e.g., human, robotic, and environmental factors (Schaefer et al. 2016), we focus here on aspects of explainability when interacting with artificial intelligence systems and on how explainability can yield transparency. Moreover, the need for explainability of AI systems’ decisions and behaviors has grown in general (Gunning 2017), and explainability is seen as a toolset for understanding the underlying technicalities and models (Ribeiro et al. 2016; Štrumbelj and Kononenko 2014).

For more adaptive, continuously learning AI systems that closely collaborate with human end-users and that may change their behavior over time, transparency and understanding of the AI system’s behavior are indispensable, e.g., to increase user acceptance. Exactly how to achieve this transparency and explainability is still an open question, and ongoing research shows the complexity of the topic (Miller 2019; Mohseni et al. 2018). For example, users may differ in the level of detail they wish to see, or may react more seamlessly to the system’s behavior once their understanding is higher.

In this paper, we discuss different levels of transparency from the perspective of both human end-users and AI systems. In the next section, we present the different dimensions of transparency from a human and an AI perspective. We then address potential roles and relationships during human-AI interaction, followed by aspects of situational awareness and time. Overall, we highlight the complexity of providing an appropriate level of explanation, and thus transparency, in a specific situation.

2 Facets of Transparency

The existing body of research on transparency and explainability of AI focuses on different aspects of transparency, see Sect. 3. In this paper we use a three-faceted model of transparency based on the situation awareness model of Endsley (1995) and the agent transparency work of Chen et al. (2014).

As shown in Fig. 1, we identify three key facets of transparency. The first facet is transparency about the behavior and the underlying intentions of the system. The second facet concerns the decision-making mechanism of the system, including an understanding of the underlying algorithm and the variables it integrates. The third facet adds an understanding of the system’s potential limitations, including an estimate of the probability of errors in a given situation.

Fig. 1. Facets of transparency

When determining the level of transparency in a given situation, characteristics of the system as well as of the user have to be taken into account: the system can provide explanations actively or on demand, and it can also interact in specific ways that the user may interpret as social cues. The user, in turn, brings certain preferences, prior experience with systems, and expectations. Each facet of transparency is achieved through the interaction of both system and user.
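As a purely illustrative, minimal sketch (all names are our own assumptions, not part of any existing system), the three facets and the choice between active and on-demand explanation delivery could be represented as follows:

# Illustrative sketch: facets of transparency and a simple delivery decision.
from dataclasses import dataclass
from enum import Enum, auto

class Facet(Enum):
    BEHAVIOR_AND_INTENTIONS = auto()   # what the system does and why
    DECISION_MAKING = auto()           # underlying algorithm and integrated variables
    LIMITATIONS = auto()               # likelihood of errors in the given situation

@dataclass
class UserProfile:
    prefers_proactive_explanations: bool
    prior_experience: int              # e.g., number of previous sessions

def delivery_mode(facet: Facet, user: UserProfile) -> str:
    """Decide whether to push an explanation or wait for a user request."""
    # Assumption: limitations are critical enough to always communicate proactively.
    if facet is Facet.LIMITATIONS or user.prefers_proactive_explanations:
        return "proactive"
    return "on-demand"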

The adequacy of an explanation, however, can hardly be determined without taking the personal characteristics of the user into account. Depending on the user’s general technical knowledge, time of usage, and situation awareness, the quality and quantity of explanation required to reach a certain level of transparency may vary. This affects possible relationships during interaction (Sect. 4) and is influenced by specific situations (Sect. 5).

3 Aspects of AI Explainability

AI functionalities are nowadays often enabled by machine learning models that have been trained with large data sets and that may continue to learn when interacting with users, changing their behavior over time. It has been argued that certain models, e.g., decision trees, intrinsically entail explanations in their decisions and are thus easier to interpret, although even decision trees can become too complex for humans to perceive and understand (Štrumbelj and Kononenko 2014; Došilovic et al. 2018). Complex machine learning models are difficult to interpret, and several approaches for explainability have been discussed (Ribeiro et al. 2016; Samek et al. 2017).
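To illustrate this point, the following sketch (assuming scikit-learn is available) prints the learned rules of a small decision tree; every root-to-leaf path acts as an explanation of a decision, yet the textual dump already grows quickly with depth and feature count:

# Sketch: rendering a decision tree's learned rules as human-readable text.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# The rule dump is an "explanation" of every decision path, but even at
# modest depth it becomes hard for non-expert users to read.
print(export_text(tree, feature_names=list(iris.feature_names)))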

A distinction can be made between explainability primarily seen as a method for analyzing the results of trained machine learning models and explainability seen as a method for making such results transparent to end-users. Analyzing an AI system with respect to all of these aspects is recommended (Mohseni et al. 2018). In this paper, we focus on the aspects directly related to interaction with end-users who are neither experts in the technical details nor the developers of the AI system.

Explainability of AI systems has several different aspects:

  • The system can use different channels to communicate explanations, such as text, speech, graphical visualizations, or auditory signals.

  • Measures to evaluate explainability for non-expert users range from user mental models and task performance to user satisfaction and trust (Mohseni et al. 2018).

  • The main purpose of providing explainability of a model also varies, e.g., the goal might be to support trust, causality, transferability, informativeness, or ethical reasons (Lipton 2016).

  • Finally, the exact content used to communicate explanations can be distinguished. This may depend on context and situation, user-specific preferences, or technical likelihoods. For instance, a user might prefer a short but easy-to-understand explanation over an elaborate but difficult-to-comprehend one. The meta level can also vary, e.g., a system may communicate its decision making, its technical aspects, its limitations, or options for alternative decisions, cf. (Miller 2019).

The different dimensions that have to be considered for transparent human-AI interaction are shown in Table 1. To adequately address all dimensions in a specific situation, an AI system thus requires options to select which information to provide in an explanation, at which depth of detail, and when to provide it. End-users might have a higher need for detailed explanations when confronted with unexpected AI decisions than with routine decisions. Further relevant aspects are presented in the next sections.

Table 1. Aspects of explainability in AI systems for end-user interaction
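To make the combination of these dimensions more concrete, the following sketch selects a depth of detail, a channel, and a timing for an explanation; the decision rules and field names are illustrative assumptions, not a fixed scheme from the literature:

# Illustrative sketch: planning an explanation along the dimensions of Table 1.
from dataclasses import dataclass

@dataclass
class ExplanationRequest:
    decision_is_unexpected: bool   # deviates from what the user would predict
    user_is_novice: bool
    hands_and_eyes_busy: bool      # e.g., the user is currently driving

def plan_explanation(req: ExplanationRequest) -> dict:
    depth = "detailed" if (req.decision_is_unexpected or req.user_is_novice) else "brief"
    channel = "speech" if req.hands_and_eyes_busy else "text"
    timing = "immediately" if req.decision_is_unexpected else "on-demand"
    return {"depth": depth, "channel": channel, "timing": timing}

# Example: an unexpected decision for an experienced user who is currently driving.
print(plan_explanation(ExplanationRequest(True, False, True)))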

4 Relations Between Humans and AI During Interaction

Fitts (1951) characterized human-machine interaction by describing the relative strengths and limitations of humans and computers, sometimes referred to as the “men are better at” and “machines are better at” (MABA-MABA) lists. Since this classification covers the full range between “only human” and “only machine”, a description of different levels of automation (LOA) became necessary, e.g., (Sheridan and Verplank 1978; Parasuraman et al. 2000), see Table 2. Despite the wide body of research on LOA over the last 60 years, the question of how the human decision-making process could be implemented in autonomous systems has not yet been answered. As systems with integrated machine learning algorithms are developed that are able to learn and change their behavior over time, the situation becomes even more complex. For example, a certain limitation of a system (e.g., in sensor fusion) might initially lead to the presentation of the full set of decision alternatives, but over time the system might move to the next higher level of automation, where only one alternative is suggested. A different facet of transparency (see Sect. 2) might then be needed to ensure suitable interaction after a certain time of usage.

Table 2. Levels of automation of decision and action selection (Parasuraman et al. 2000, p. 287)
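The example above could be sketched as follows; only a small subset of the levels in Table 2 is modeled, and the confidence thresholds are assumptions for illustration:

# Sketch: raising the level of automation as a known limitation is resolved.
from enum import IntEnum

class LOA(IntEnum):
    OFFER_ALL_ALTERNATIVES = 2   # present the full set of decision alternatives
    SUGGEST_ONE = 4              # suggest a single alternative
    EXECUTE_WITH_APPROVAL = 5    # execute the suggestion if the human approves

def select_loa(sensor_fusion_confidence: float) -> LOA:
    """Raise the automation level only when the limitation shrinks over time."""
    if sensor_fusion_confidence < 0.6:
        return LOA.OFFER_ALL_ALTERNATIVES
    if sensor_fusion_confidence < 0.9:
        return LOA.SUGGEST_ONE
    return LOA.EXECUTE_WITH_APPROVAL

# A change in LOA is itself something the system should make transparent (Sect. 2).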

When interacting with an intelligent system, yet another aspect comes into play: the attribution of roles, such as the AI system being a tutor or a personal assistant. Further research will have to clarify whether different roles of the intelligent system have implications for the recommended level of automation, action selection, and transparency. Karapanos et al. (2009) have shown that human expectations towards a product change over time. This may also apply to a personal intelligent assistant, for instance, and the way a human perceives and interacts with an intelligent system may shift over time as the user gains experience with the system.

5 Situational Awareness and Context

As argued before, the personal characteristics of the user as well as the characteristics of the system have an impact on the recommended type of explanation and on the interaction quality. Additionally, the context in which the interaction takes place is expected to have a significant influence on the interaction in general and on the need for explanation and transparency in particular. The situation awareness of the user and the time of usage are key factors influencing the need for transparency and explanation in order to create trust.

According to Endsley (1995), situation awareness encompasses the perception of the situation, the comprehension of the situation, and the anticipation of a future state. In this paper, the term situation awareness is used to refer to the characteristics of the situation as well as to the possible consequences of the decision making. The relationship between situation awareness and the need for transparency and explanation, however, is not linear.

The situation characteristics further impact the level of trust a user places in the AI system or its explanation. Studies have shown that explanations can increase trust and that a lack of explanation can decrease it, e.g., (Holliday et al. 2016). Trust aspects become more relevant in severe situations, particularly when situational awareness is rather low (Wagner and Robinette 2015). On the one hand, humans may still trust and rely on systems that make poor decisions (Wagner and Robinette 2015). Ideally, in these situations of overtrust, a system would be able to recognize its own limitations and make them transparent. On the other hand, humans also tend to disbelieve explanations given by an already untrusted system (Miller 2019).
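One way such a limitation disclosure could look is sketched below; it assumes that a scalar confidence estimate and a severity flag are available, which is itself a non-trivial assumption:

# Sketch: appending a limitation disclosure to counter overtrust.
def explanation_with_limits(explanation: str, confidence: float, severe_situation: bool) -> str:
    """Append a limitation disclosure when confidence is low or stakes are high."""
    if confidence < 0.7 or severe_situation:
        explanation += (
            f" Note: my confidence in this recommendation is {confidence:.0%};"
            " please verify before acting on it."
        )
    return explanation

print(explanation_with_limits("Take route A to avoid congestion.", 0.55, severe_situation=False))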

6 Summary and Outlook

An intelligent system that aims at making its behavior, decisions, and suggestions transparent to human users in a specific situation has to take into account various facets and dimensions, as described above. In this paper, we highlighted the various topics that lead to the complexity of such an endeavor.

Further research is needed in the form of long-term studies that show how the interaction between learning systems and users may change over time and thus vary with regard to transparency. In this respect, the impact of trust, and how trust changes when supported by transparency, is also an open topic.

Furthermore, transparency is not only complex and costly in terms of effort and time; how much of it is needed in a specific situation is particularly influenced by the consequences of the interaction. Routine situations may not rely on transparency, while severe situations heavily depend on it. Transparency could also be offered after the interaction has taken place, e.g., the situation and the mechanisms underlying the system’s decisions could be presented to the user after a critical situation. Adequate ways of doing so, however, still need to be studied.
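A minimal sketch of such post-hoc transparency, assuming decisions and their rationale are logged during interaction (all field names are illustrative), could look as follows:

# Sketch: logging decisions and replaying the rationale after a critical situation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    timestamp: float
    decision: str
    rationale: str
    critical: bool = False

@dataclass
class DecisionLog:
    records: List[DecisionRecord] = field(default_factory=list)

    def add(self, record: DecisionRecord) -> None:
        self.records.append(record)

    def debrief(self) -> List[str]:
        """Return explanations for the critical decisions, for review afterwards."""
        return [f"At t={r.timestamp}: {r.decision} because {r.rationale}"
                for r in self.records if r.critical]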

Personality traits could also be of interest for situation-adequate human-AI interaction: users with a high need for cognition might have a higher need for explanations, or technically averse users may need additional explanations. In severe situations, however, this might not be as relevant.