
1 Introduction

Emotions are central to many human processes (e.g. perception, understanding), and may enhance the effectiveness of some systems [22]. Emotions are composed of behavioral, expressive, physiological, and subjective reactions (feelings) [4]. A single instrument may measure only one of these components [4]. Therefore, many technological instruments have been proposed, e.g. some that seek to recognize emotions through computer vision or physiological sensors, and others that require users to input their feelings.

Ubiquitous computing is technology that “disappears”, with the goal of designing computers that fit the human environment [26]. One example of this type of technology is the Tangible User Interface (TUI). TUIs allow users to manipulate digital information and physically interact with it [11]. TUIs take advantage of users’ knowledge of how the physical world works [13], which may make them especially suitable for users without much knowledge of the digital world.

There are several scenarios in which systems benefit from acquiring information about users’ emotions, but in which users have low digital skills and may therefore have difficulty expressing these emotions. For example, a training center that introduces older and underprivileged adults to computing has trouble gathering their opinions and feelings about the course. TUIs may be less intimidating for these users, taking advantage of their knowledge of the physical world and blending into the environment.

The goal of this work is to study which types of interfaces currently exist or have been proposed for dealing with user emotions. This will allow us to understand whether populations such as the one mentioned above are well served by these interfaces. We aim to understand the characteristics of interfaces dealing with emotions, and to contribute an overview of the important elements and considerations when designing an interface for users to report their feelings. To achieve this goal, we conducted a systematic literature review (SLR). An SLR is a means of identifying, evaluating and interpreting all available research relevant to a particular research question, topic area, or phenomenon of interest [15]. This technique is useful for reviewing existing evidence about a technology and identifying gaps in current research.

This paper is organized as follows. In Sect. 2, we define relevant terms for our literature review. Section 3 describes our methodology, including the research questions, search strategy, selection criteria and data extraction. Section 4 summarizes the results, and finally, in Sect. 5 we present the discussion, conclusions, and directions for future research.

2 User Interfaces: A Brief Introduction

This section presents a brief overview of the concepts of interaction styles and types of user interfaces.

The concept of interaction may be understood as a metaphor of translation between two languages, while an interaction style is defined as a dialogue between computer and user [5]. Interaction style may also be defined as the way that a user can communicate or interact with a computer system [3]. Many different interaction styles have been proposed [3, 5], e.g. natural language (speech or typed human language recognition), form-fills and spreadsheets, WIMP (windows, icons, menus, pointers), point-and-click, three dimensional interfaces (virtual reality).

A user interface is the representation of a system with which a user can interact [12]. To the best of our knowledge, there is no single agreed-upon taxonomy covering every possible type of user interface. Command-line interfaces (CLI) are interfaces in which the user types in commands [14]. Graphical user interfaces (GUI) represent information through images on a display [12, 14]. Natural user interfaces (NUI) allow users to interact using, e.g., body language, gestures, or facial expressions [14, 27]. Organic user interfaces (OUI) are interfaces that may change their form or shape [8, 16]. A tangible user interface (TUI) is an interface in which a person uses a physical object in order to interact with digital information [10].

3 Literature Review Methodology

In general, an SLR can be divided into three phases [15, 19]. Although, due to space constraints, we do not report each phase in full, our work followed all of them:

1. Planning the review: Define a protocol that specifies the plan that the SLR will follow to identify, assess, and collate evidence.

2. Conducting the review: Execute the planned protocol.

3. Reporting the review: Write up the results of the review and disseminate the results to potentially interested parties.

3.1 Need for a Systematic Literature Review

Recently, there have been several proposals of user interfaces and interaction styles to report, register and share human emotions. This SLR aims to identify which technologies are being used, who the target users are, and how the technology has been evaluated. We aim to identify trends in this area, under-served populations of users, and avenues of future research.

3.2 Research Questions

The goal of this review is to find out how software technologies support the self-report of emotional information. However, this question is too generic, so it was subdivided into several questions that focus on specific aspects of the evaluation.

To define our research questions we followed the Population, Intervention, Comparison, Outcome and Context (PICOC) structure [15] (Table 1). This structure helps capture the attributes that should be considered when defining research questions in an SLR. This review does not aim to compare interventions, so the comparison attribute is not applicable.

Table 1. Research questions as structured by the PICOC criteria

A set of research questions was defined, related to understanding the types of interfaces, interaction styles, and evaluation methodologies of novel interfaces and technologies for self-reporting emotions. Hence, our SLR aims to answer the following research questions:

Figure a. Research questions

3.3 Search Strategy

Based on these questions, we identified the keywords to be used to search for the primary studies. The initial set of keywords was: emotion/s, mood/s, affect/s, share, interaction, self-report. With these keywords, the search string was built using boolean AND and OR operators, resulting in the following search string:

Figure b. Search string
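As an illustrative sketch only, such a query can be assembled programmatically from the keyword groups; the grouping of terms into OR-sets shown below is an assumption, not necessarily the exact string submitted to the digital libraries (shown in Figure b).

```python
# Illustrative only: assembles a boolean search string from keyword groups.
# The grouping of terms is an assumption, not necessarily the exact query used.
emotion_terms = ["emotion", "emotions", "mood", "moods", "affect", "affects"]
report_terms = ["share", "interaction", "self-report"]

def or_group(terms):
    """Join terms with OR and wrap the group in parentheses."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

search_string = " AND ".join(or_group(g) for g in [emotion_terms, report_terms])
print(search_string)
# ("emotion" OR "emotions" OR ... OR "affects") AND ("share" OR "interaction" OR "self-report")
```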

The search for primary studies was done on the following digital libraries: ACM Digital Library, IEEE Xplore Digital Library, ScienceDirect and Springer Link. These libraries were chosen because they are among the most relevant sources of scientific articles in several computer science areas [19]. Table 2 presents the number of papers that the search on each of the digital libraries produced.

Table 2. Number of papers retrieved from each digital library

We removed duplicates automatically (and then re-checked manually), finding 56 duplicated papers that were excluded. After this step, there were 271 papers in our corpus.
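As a minimal sketch of the automatic step (the record field names doi and title are assumptions, not the actual export format of the libraries), duplicates can be filtered as follows:

```python
# Minimal deduplication sketch; the field names ("doi", "title") are assumptions.
# Records without a DOI fall back to a normalized title as their key.
def dedupe(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or rec.get("title", "").strip().lower()
        if key and key in seen:
            continue  # duplicate found earlier: skip it
        seen.add(key)
        unique.append(rec)
    return unique

# Example: the second record is dropped because it shares the same DOI.
papers = [{"doi": "10.1000/x", "title": "A"}, {"doi": "10.1000/x", "title": "A."}]
print(len(dedupe(papers)))  # 1
```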

3.4 Selection Criteria

Once the potentially relevant primary studies had been selected, we evaluated them to decide whether they should be included in the review. For this, the following inclusion and exclusion criteria were defined:

1. Inclusion Criteria:

    (a) The paper is in English.

    (b) The paper is a peer-reviewed article obtained from a journal, conference or workshop.

    (c) The paper was published on or before May 2015.

    (d) The paper is focused on technologies for reporting/registering/communicating human emotions.

    (e) The paper reasonably presents the technology and its validation.

    (f) The paper presents the measurement of subjective emotions as its main purpose.

2. Exclusion Criteria:

    (a) The paper is not available online.

    (b) The paper is a survey or SLR.

    (c) The paper does not include a validation of the technology.

    (d) The paper includes human-robot/agent interaction.

    (e) The paper does not include an objective measurement of emotions.

Four researchers individually read the titles and abstracts of the 271 selected papers, and applied the criteria to accept or reject each paper. Papers that all four researchers agreed to accept or reject (as well as papers with only one acceptance) were automatically included or excluded, respectively. A fifth researcher decided on papers with 2 or 3 acceptances (25 papers in total). After this step, 18 of the 271 papers remained.
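This screening decision rule can be summarized by the following sketch; the vote encoding is illustrative, and the actual screening was performed manually.

```python
# Sketch of the screening decision rule applied to titles and abstracts.
# "acceptances" is the number of researchers (out of four) who accepted the paper.
def screening_decision(acceptances: int) -> str:
    if acceptances == 4:
        return "include"                    # unanimous acceptance
    if acceptances <= 1:
        return "exclude"                    # unanimous rejection or a single acceptance
    return "refer to fifth researcher"      # 2 or 3 acceptances
```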

3.5 Data Extraction

For this step, three researchers read the 18 selected papers, focusing on answering the research questions introduced in Sect. 3.2. The extracted information was compiled into an ad-hoc Excel template. During this detailed reading and analysis, the application of the exclusion criteria was refined in some cases; thus, 5 papers were excluded and 13 papers remained for the data analysis step. These papers are presented in Table 3.
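The following sketch illustrates the kind of record compiled per paper; the field names are assumptions and do not necessarily match the columns of the actual template.

```python
from dataclasses import dataclass

# Illustrative extraction record; field names are assumptions and do not
# necessarily match the columns of the ad-hoc Excel template.
@dataclass
class ExtractionRecord:
    paper_id: str
    interface_type: str       # e.g. "GUI", "TUI"                  (Q1)
    interaction_style: str    # e.g. "WIMP", "point-and-click"     (Q2)
    validation_method: str    # how self-reported emotions were validated (Q3)
    allows_sharing: bool      # whether emotions can be shared     (Q4)
    evaluation_method: str    # how the technology was evaluated   (Q5)
    reported_benefits: str    # benefits of registering emotions   (Q6)
```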

Table 3. Primary studies selected

4 Results

This section presents the results produced by our SLR. Table 4 shows the per-year distribution of selected studies, separated by publication type. 85 % of the reviewed papers were published between 2009 and 2015. The following sections present the results, structured as answers to the research questions.

Table 4. Summary of studies by publication type and by publication year

4.1 Results on Interaction Styles and Types of Interfaces

Of the analyzed papers, 70 % presented a GUI, with WIMP, point-and-click, menu and Q&A interaction styles (see Table 5); only 30 % presented TUIs. In 70 % of the reviewed studies the target user is generic (there is no specific target population), and only 15 % target users with specific characteristics, such as patients, caregivers or workers.

Table 5. Classification for research questions Q1 and Q2
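As a small sketch continuing the hypothetical ExtractionRecord from Sect. 3.5, percentages such as those above can be tallied directly from the extracted records (the records themselves are not reproduced here).

```python
from collections import Counter

# Tally the share of each interface type from a list of ExtractionRecord objects.
# "records" is assumed to hold the 13 extracted records; values are illustrative.
def interface_shares(records):
    counts = Counter(r.interface_type for r in records)
    total = len(records)
    return {kind: round(100 * n / total) for kind, n in counts.items()}
```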

4.2 Validation of Registered Emotions

Of the analyzed studies, 80 % used additional mechanisms to validate the self-reported emotions (Table 6). These validations included, for example, the measurement of signals such as facial expressions, gestures, and heart rate.

Table 6. Classification of research question Q3

4.3 Sharing Emotions

Emotion sharing is supported by 65 % of the reviewed proposals, which allow sharing with several types of users (Table 7). Of the interfaces that support emotion sharing, 60 % are GUIs and 40 % are TUIs.

Table 7. Classification of research question Q4

4.4 Methodologies of Evaluation

This SLR studied evaluation methods to understand which are commonly used for this type of interface (Table 8). 85 % of the studies include an evaluation of the proposed technology, and most used mixed-methods approaches. The chosen participants were students, users with particular characteristics, or, in some cases, any available user. Only two studies conducted the evaluation with users in a real context of use, e.g. users living with a mental illness such as depression. Regarding study length, only 55 % of the studies with an evaluation specified its duration. The average number of participants was 30 (\(min=10\), \(max=59\)).

Table 8. Classification of research question Q5

4.5 Benefits of Registering Emotions

We identified the benefits of using technology to register emotions (Table 9). 50 % of the studies report an improvement with respect to their goals, which were supporting self-reflection, encouraging people to reflect on their emotions, or improving emotion identification by participants. 45 % show evidence that technologies to report emotions facilitate tasks such as user experience studies and cultural research. We found no particular evidence of differences in this respect between articles published in conferences and in journals.

Table 9. Classification of research question Q6

4.6 Discussion

The most common interaction style was WIMP, with a GUI interface. We did not find a well-defined interaction style for TUI interfaces, which may have several explanations: TUIs are newer and not as well established as GUIs, and there are fewer research projects that study TUIs in multiple real contexts. This may open an interesting field of research, that tries to uncover the interaction styles that are relevant for new interfaces, considering their particular characteristics.

It is interesting to note that 80 % of the analyzed interfaces implemented a second method to validate the emotions users reported. It is noteworthy that researchers recognize that, due to the drawbacks all instruments inherently have, emotions should ideally be validated through both objective and subjective methods.

Over 60 % of the reviewed studies allowed emotion sharing. This opens up another aspect that needs further research: privacy. How do users feel about sharing something as deeply personal as an emotion?

Registering emotions was considered to have several benefits, e.g. allowing users to self-reflect on emotional states. Delivering appropriate instances of self-reflection may benefit users, especially in contexts such as systems related to mental health, or for users at a higher risk for depression.

One challenge that remains open is conducting evaluations of these systems with real users in real contexts of use. Naturally, this is difficult, as is any research with real users; however, evaluations should begin to incorporate real users in order to truly understand the impact of self-reporting emotions.

5 Conclusions and Future Work

This work presented a SLR regarding interfaces for emotional self-report. We analyzed several dimensions of the interfaces, e.g. used technology, target user, evaluation process and benefits. The main contribution of this research is to present a rigorous and formal SLR that characterizes research in the area of user interfaces for self-reporting emotions.

In general, researchers have identified that it is important to share emotions with other users. However, our results show that most self-report interfaces for emotions are GUIs. This may mean that some categories of users (older adults, children who do not yet know how to read) may be left out of these technologies, which suggests the importance of studying these users to propose technologies with new interaction styles specific to them.

We found a small number of relevant papers, which is a motivation to continue expanding our literature review. For example, we can consider other digital libraries (e.g. Scopus, Wiley Online) to widen the scope of our literature review and take into account a greater number of primary studies. It would be especially interesting to explore clinical-focused journals to expand the scope of our review. The small number of papers is also a signal that this area of research requires more studies (especially involving users in real contexts) and interfaces (with new interaction styles).