Abstract
Emotional information is complex for humans and computers alike to manage, so it is difficult for users to express emotional information through technology. Two main approaches are used to gather this type of information: objective (e.g. through sensors or facial recognition) and subjective (reports by users themselves). Subjective methods are less intrusive and may be more accurate, although users may fail to report their emotions or not be entirely truthful about them. The goal of this study is to identify trends in the area of interfaces for the self-report of human emotions, under-served populations of users, and avenues of future research. A systematic literature review was conducted on six search engines, resulting in a set of 863 papers, which were filtered in a systematic way until we established a corpus of 40 papers. We studied the technologies used for emotional self-report as well as the issues regarding these technologies, such as privacy, interaction mechanisms, and how they are evaluated.
1 Introduction
Computers may benefit from knowing about users’ situations while they are interacting, to adapt more accurately to their needs (Gross 2009). Emotions, in particular, are central to several human processes and may enhance system effectiveness (Picard 2003). For example, informal caregivers who care for an ill family member tend to have high levels of burden and depression (Papastavrou et al. 2016). Recording their emotional information may be beneficial in three ways: (1) providing the caregiver with self-knowledge and self-reflection, (2) providing healthcare teams with information for early intervention, and (3) providing family members with information to support the caregiver, by knowing how the caregiver is managing their role. However, although recording emotional information may benefit them and provide actionable information to healthcare teams and family members, caregivers may have low digital skills and feel that inputting this information is an additional burden or chore.
For these types of users, two main approaches are usually used to gather emotional information: objective (e.g. through sensors that record physiological data or facial recognition) and subjective (reports by users themselves). Several approaches have been proposed to objectively recognize emotions, e.g. by analyzing sound (Chi et al. 2011), or facial expressions, eye gaze and head movement (Zhao et al. 2012). Techniques that use physiological sensors may be obtrusive and prone to noise, while observational techniques such as facial recognition are less obtrusive but may require infrastructure (e.g. cameras).
Subjective methods are less intrusive, simple to implement, portable, and may be more accurate, though they also have possible drawbacks, e.g. users may fail to report their emotions or not be entirely truthful about them. Despite this, psychologists rely on this method to know how a patient feels (Barrett 2004). However, inputting emotional information into a system is a complex task, since emotions are nuanced, multi-faceted and have varying degrees of intensity. Recently, there have been several proposals of user interfaces and interaction styles to report, register and share human emotions. The goal of this work is to study the types of interaction mechanisms and interfaces that have been used to self-report emotional information, in order to understand the challenges and trends in this area, under-served populations of users, avenues of future research, and provide insights or lessons learned about how to best approach the design of interfaces for self-report of emotional information.
To achieve our goal, we conducted a systematic literature review (SLR), which is a way of identifying, evaluating and interpreting available research on a particular topic (Kitchenham and Charters 2007). The main goal of this review is to learn which technologies are used for self-report of emotional information. We aim to expand the knowledge gained from a previous, preliminary SLR (Fuentes et al. 2015a), by adding research questions, conducting a broader search of papers and providing in-depth analysis of results.
This paper is organized as follows. Section 2 summarizes our research area, defining relevant terms for our literature review. Section 3 describes our methodology, including the research questions, search strategy, selection criteria, and data extraction. Section 4 presents the results. Then, we discuss our findings and present our conclusions.
2 Background
In this section, we review the area of human-centered computing, and existing types of interfaces, to introduce the key concepts of our literature review.
Human-centered computing is a research field which aims at “bridging the existing gaps between the various disciplines involved with the design and implementation of computing systems that support human’s activities”, i.e., human sciences and computer science (Sebe 2010). Human-centered computing is a reimagining of classical human–computer interaction (HCI), in which some researchers consider that understanding people (with their concerns and activities) should be the first consideration in technology design (Bannon 2011).
2.1 Human–computer interfaces
A user interface is the representation of a system with which a user can interact (Jacko 2012). Classical types of interfaces are command-line interfaces (CLI) in which the user types in commands (Jain et al. 2011), and graphical user interfaces (GUI), which use image-based interaction (Jain et al. 2011; Jacko 2012). Natural user interfaces (NUI) allow users to interact using body language and gestures (Wigdor and Wixon 2011). Organic user interfaces (OUI) are interfaces that can change their shape (Lahey et al. 2011).
Ubiquitous computing is technology that “disappears”, with the goal of designing computers that fit the human environment (Weiser 1995). Ubiquitous computing has high embeddedness and mobility (Lyytinen and Yoo 2002). Pervasive computing follows the same principle and is highly embedded, but with low mobility (Lyytinen and Yoo 2002). Ubiquitous and pervasive technologies may be represented by different types of interfaces. One example is tangible user interfaces (TUIs). TUIs allow users to manipulate digital information and physically interact with it (Ishii 2008). TUIs take advantage of users’ knowledge of how the physical world works (Jacob et al. 2008), which may make them especially suitable for users without much knowledge of the digital world.
2.2 Emotions and computing
Emotions are composed of behavioral, expressive, physiological, and subjective reactions, or feelings (Desmet 2005). Emotions are central to many human processes (e.g. perception, understanding), and may enhance the effectiveness of some systems (Picard 2003). Emotions may be monitored through several techniques: by using sensors to measure neuro-physiological signals, by observing gestures, facial expressions and voice, and by asking users to self-report their own emotions (Lopatovska and Arapakis 2011).
Affective computing is the research area that focuses on computing’s relationship to emotions—how it influences them and is influenced by them (Picard and Picard 1997). Polzin and Waibel (2000) posited that it is important for computers to be aware of their users’ emotions, since humans have emotional experiences when interacting with their computers. Otherwise, even if the system can synthesize emotions, interaction becomes one-sided because the user’s emotions are ignored (Polzin and Waibel 2000).
There are several scenarios in which computing systems benefit from knowing information about users’ emotions. For example, persuasive technology aims to explore how to design technology that induces behavioral change in its users (Fogg 1998), so persuasive systems, explicitly focused on inducing cognitive or emotional changes (Torning and Oinas-Kukkonen 2009), require information about their users’ emotions. Some cases, e.g. users with cognitive difficulties, low digital skills, or children, may have difficulty using typical computer systems to input their emotions. New types of interfaces, such as ubiquitous computing technologies, and tangible user interfaces, may be less intimidating and provide a way to blend into the environment and make it easier for users to provide emotional information.
3 Systematic literature review methodology
3.1 Literature review methodology
We followed Kitchenham and Charters’ SLR methodology (Kitchenham and Charters 2007). We defined our research questions using the population, intervention, comparison, outcome and context (PICOC) structure (Kitchenham and Charters 2007), shown in Fig. 1. It is relevant to note that we do not aim to compare interventions, rather, we are focused on providing an overview of the research, challenges, and solutions that have been proposed in this research area.
The research questions guiding this research are the following:

- What emotional information is captured in technologies for self-reporting emotional information?
- What types of technologies are used to self-report emotional information?
- How do technologies for self-report of emotional information handle privacy?
- How are technologies for self-report of emotional information evaluated?
3.2 Search strategy
In a previous work (Fuentes et al. 2015a), we conducted a preliminary systematic literature review to identify which technologies are being used to report, register and share human emotions. We used the following search string, over titles and/or abstracts, for the 2005–2015 year range:
Then, we reviewed 327 papers and ended up with a corpus of 13 papers (4%) that fit the inclusion/exclusion criteria for the study. The low success rate of useful papers led us to rewrite our search string. We decided, based on the papers found (which included words such as emotional, sharing, affective, and interface, and names such as AffectButton (Read and Belpaeme 2013), Emotion Caster (Lin et al. 2009) and Mood Squeezer (Gallacher et al. 2015)), to include more word endings in our new search. We filtered papers from the last 10 years (2006–2016) and, when available, filtered results only to “Computer Science”. The resulting search string was the following one:
The “self-report*” keyword generated errors in several search engines, since the “-” character in some cases caused results to be omitted. For this reason, we conducted the search with “self*” and then filtered out results that did not contain “shar*”, “interact*” or “self-report*”. We also filtered out results that did not have the keywords in the title. Table 1 lists the results from the previous search, as well as the results from the new search string, and the filtered results.
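The post-search filtering step described above can be sketched as a small script. This is an illustrative reconstruction, not the authors' actual tooling, and the record titles below are invented examples:

```python
# Sketch of the keyword filter: after searching with the broad "self*" term,
# keep only records whose title contains at least one required keyword stem
# ("shar*", "interact*" or "self-report*").
REQUIRED_STEMS = ("shar", "interact", "self-report")

def keep_record(title: str) -> bool:
    """Keep a record if its title contains any required keyword stem."""
    t = title.lower()
    return any(stem in t for stem in REQUIRED_STEMS)

records = [
    "Sharing emotions through tangible interfaces",   # kept: matches "shar"
    "A survey of facial expression recognition",      # dropped: no stem matches
    "Self-reporting affect in mobile diaries",        # kept: matches "self-report"
]
filtered = [r for r in records if keep_record(r)]
```

In practice each search engine exports its own metadata format, so the real filtering would first normalize records to a common schema before applying a check like this.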
We then removed duplicates and the papers that had been previously reviewed. Then, two researchers (PR and IR) independently read the title and abstract for all papers, and applied the inclusion and exclusion criteria (see Table 2). A third researcher (MM) decided on discrepancies. The resulting papers were read entirely by the main researcher (CF), who decided whether to finally include the paper or not. After the previous phases were completed, we had 23 papers for this literature review. We then expanded the search by performing a snowball on the selected papers. We manually reviewed each reference from these papers and filtered them to remove duplicates and papers outside of our year range. From this phase, we added 17 additional papers, which were then reviewed with the same aforementioned process. Figure 2 presents the flow diagram for this process using PRISMA notation (Stovold et al. 2014), including the previous SLR (SLR1) and the expansion presented in this paper (SLR2, plus the snowball process).
4 Results
This section presents the results obtained in our study. First, we discuss general information about our corpus of papers, which are listed in Table 3, along with the name of the interface (if available) that each paper presents. It is relevant to note that some interfaces are presented in several papers (e.g. AffectButton is presented in four papers). Then, we answer each of our four research questions.
The distribution of papers per year and per country (of author affiliation) are shown in Figs. 3 and 4 respectively. We can see that the selected papers are more or less evenly distributed across the years of the study and across the continents, covering more of North America and Europe but with some participation of papers from South America, Asia and Oceania. The following sections present the results, structured as answers to the research questions.
4.1 What emotional information is captured in technologies for its self-report?
There are several models of human emotions used in the reviewed papers, listed in Table 4. All of these models are standardized descriptions of emotions to be used in any context, except for LPN (Lee et al. 2007) that was proposed for emotions associated with movement. The most common standard approach is Russell’s circumplex model of affect (Russell 1980), possibly because it defines a large set of adjectives to categorize emotions, which allows flexibility and covering a large range of emotions. Several papers do not use a standard method to categorize emotions, instead using previous studies to define the emotions that interest the researchers (e.g. Mood Squeezer (Gallacher et al. 2015)), while others ask simple questions (e.g. “How happy do you feel?” (Conner and Reid 2012), “How are you feeling right now?” (Killingsworth and Gilbert 2010)).
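Russell's circumplex model, mentioned above, places emotions on a two-dimensional valence–arousal plane, which makes it straightforward to map a self-report to an adjective. The sketch below illustrates this idea; the adjective set and coordinates are illustrative choices, not Russell's published values:

```python
import math

# Illustrative sketch of self-report against a circumplex-style model: an
# emotion is a point in the valence-arousal plane, and a report (two slider
# values, say) is classified by its nearest labeled adjective.
ADJECTIVES = {
    "happy":   (0.8, 0.4),
    "excited": (0.6, 0.8),
    "calm":    (0.6, -0.6),
    "sad":     (-0.7, -0.4),
    "tense":   (-0.5, 0.8),
    "bored":   (-0.4, -0.7),
}

def classify(valence: float, arousal: float) -> str:
    """Return the adjective closest (Euclidean distance) to the reported point."""
    return min(ADJECTIVES, key=lambda a: math.dist((valence, arousal), ADJECTIVES[a]))
```

A richer implementation would use the full adjective set and angular positions from the circumplex literature, but the nearest-neighbor lookup captures why the model allows flexible coverage of many emotions from just two reported dimensions.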
Twenty-one of the reviewed papers presented general interfaces aimed at any type of user. Fourteen of these papers (66.6%) used a standard emotion model and 7 used a domain-specific model. Ten papers presented interfaces aimed at a specific type of user (e.g. young people, museum visitors). In this case, 5 (50%) used a standardized model of emotions and 5 (50%) used a domain-specific model. We expected domain-specific systems to use domain-specific emotion models and general systems to use general models; however, the difference is small, and we believe this may show that general-purpose models of emotions really do capture a wide enough range of emotions to be useful to both general and domain-specific systems.
4.2 What types of technologies are used to self-report emotional information?
We identified systems that report emotions through several types of interfaces; most use graphical interfaces (GUI), and others use natural (NUI), tangible (TUI) or web-based interfaces (WEB)—in combination with GUIs or as stand-alone systems. For this analysis, we group papers that present the same interface or system, so we discuss the 32 interfaces listed in Table 4. Table 4 presents the type of interfaces used in the system (GUI, NUI, WEB and/or TUI), the user the system was designed for, the approach used to self-report emotions and whether the system allows for emotions to be shared (and who they are shared with and how).
We wanted to learn how researchers had designed these systems and interfaces. Out of the 32 reviewed studies, 7 discussed how the interface was designed. The methods that were used were iterative design, user-centered design, collaboration with an artist, inspiration from a stress ball, prototype evaluation, ethnography, and participatory design. The remaining studies base their design decisions on the results of interaction analysis, device characteristics, simplicity, the possibility of reduced cognitive load, etc.
Tangible interfaces (TUIs) were built for 9 of the reviewed systems. We were especially interested in this type of interface, as they may be more suitable for specific types of users (e.g. users with low digital skills), easier to use (requiring less cognitive involvement) and more entertaining (e.g. for children). The reviewed TUIs had novel interaction mechanisms: emotions were registered by shaking, squeezing, pushing or pulling sliders, hugging, kissing, slapping, caressing, holding, or using gestures. We did not find an explicit link between the use of TUIs and targeting specific types of users or domains. It is interesting to note that in most cases (62.5%), the target users of the interface are generic or nonspecific.
4.3 How do technologies for self-report of emotional information handle privacy?
Self-reporting emotional information is not an easy task, because people need to recognize their own emotions and feel secure and comfortable in recording them. Sharing this information is also complex, as emotional information tends to be sensitive and private. Out of the 32 distinct interfaces identified in our study, 16 allow users to share emotions. In our previous study, we identified that sharing emotions is complex, as the information may be too personal. We wanted to learn how these systems handle the problem of sharing, awareness, and privacy, so we studied who the emotions are shared with, through which mechanism, and whether the authors recognize the problem of privacy and how they manage it.
Who are the emotions shared with (recipients)? The reviewed systems allow users to share emotional information with specific groups of people, e.g. family, friends, or others (social network contacts, clinics, therapist, other study members). TUIs tend to allow sharing only with specific groups of people, while GUIs generally allow sharing with a broader audience.
Which technology/mechanisms were used to share emotional information? Emotional information was shared through several media, e.g. Twitter, Instant Messaging (IM), other social networking systems (SNS), text messaging (SMS) or multimedia messaging (MMS).
How is privacy managed? Registering emotions can provide users with awareness of their own moods (Sánchez et al. 2006), and sharing emotions may promote conversation and improve working environments (Gallacher et al. 2015). However, in several scenarios in which systems benefit from self-report of emotions, e.g. therapy, safeguarding privacy is especially critical. For example, users must be prevented from sharing emotional information with unintended recipients, which may affect usability (Matthews et al. 2008), preventing users from actually using (and benefitting from) systems.
Out of the 16 studies that allowed sharing emotions, 9 did not mention privacy at all, not discussing mechanisms to preserve it nor allowing users to protect their data. For example, in one study “the participants admitted that they felt self-conscious or embarrassed squeezing the balls in front of other unfamiliar colleagues” (Gallacher et al. 2015). Developers of interfaces for expressing emotions should take into account that the act of expressing an emotion may be private, and design the interaction mechanism accordingly.
The most commonly proposed mechanism to preserve privacy was allowing users to decide how to share their information (e.g. private messaging, social networks) and with whom. Although it is important to note that digital environments may be more secure than more traditional methods such as paper, as digital devices may be password-protected or locked, the risk of wide distribution of sensitive information in case of a mistake or security vulnerability is naturally much higher with digital information. Some interfaces provide visualizations of emotions that are viewable by everyone (Angelini et al. 2015). In these cases, the design may hide the meaning of the visualization (e.g. by allowing the user to attach the emotional meaning by him/herself to each displayed color).
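The most commonly proposed mechanism above, letting users decide with whom to share, amounts to recipient scoping. A minimal sketch of how a self-report system might enforce it is shown below; the audience levels and relationship categories are illustrative assumptions, not taken from any reviewed system:

```python
# Illustrative recipient-scoping sketch: each emotion report carries an
# audience level, and the system releases it only to viewers whose
# relationship to the reporter falls within that level.
AUDIENCE_LEVELS = {"private": 0, "close": 1, "contacts": 2, "public": 3}
RELATIONSHIP_LEVEL = {"self": 0, "family": 1, "therapist": 1, "friend": 2, "stranger": 3}

def can_view(audience: str, relationship: str) -> bool:
    """A viewer may see a report if their relationship level fits the audience."""
    return RELATIONSHIP_LEVEL[relationship] <= AUDIENCE_LEVELS[audience]
```

For instance, a report scoped to "close" would be visible to a family member or therapist but not to a friend or stranger; a "private" report is visible only to the reporter. Public displays that hide meaning behind user-assigned colors (as in Angelini et al. 2015) are a complementary strategy that obscures content rather than restricting recipients.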
4.4 How are technologies for self-report of emotional information evaluated?
Technologies for self-report of emotional information are frequently evaluated (88% of studies present some form of evaluation; some present new methods of evaluation for interfaces that had been previously assessed). The data regarding evaluation of systems is presented in Table 5. This table presents the total number of participants of the study, the duration as reported (either total duration, or the amount of time each participant had to spend doing the required task), whether the evaluation was qualitative or quantitative (or both), and the evaluation task or instruments used. When information was not reported, it was omitted from our table.
Most studies use mixed-methods (38%) or quantitative (38%) approaches, while 13% use only qualitative methods. The chosen participants were in many cases students (15 out of 40 studies), while others used participants such as conference attendees, family members, friends, employees and older adults. Participant numbers range from 6 to 2550 (average: 119). Naturally, qualitative studies have a lower number of participants (ranging from 6 to 36, average: 15), since these methodologies tend to maximize in-depth analysis and are therefore time-consuming (e.g. video recordings, observations, interviews).
The time invested in evaluation ranged from 12 min per participant to over 11 months (average: 7.8 weeks). 34% of studies did not specify how long the evaluation process was. Although long-term studies are ideal for uncovering complex adoption dynamics, there is a natural tendency toward shorter evaluation periods, which allow researchers to study a larger number of users and conduct simpler evaluation protocols.
5 Discussion
Systematic literature reviews are a methodology that allows reviewing a large body of literature in a repeatable way. We reviewed the past 11 years of literature concerning systems for self-reporting emotional information. Since the search was conducted in March 2016, it is likely that some 2016 papers on this topic were not included. A limitation of an SLR is that it cannot discover papers in databases that were not searched or that do not use the selected keywords. To diminish these limitations, we also used a snowball methodology to increase our corpus. 42% of selected papers were found through this method, which shows that it is an efficient way to expand a literature review, contributing a high number of useful papers. Although we believe we have reviewed a large portion of the existing literature on our topic, from a broad and diverse set of sources, there may still be papers that were not examined. To further expand this review, besides the possibility of incorporating additional search engines, keywords, and a larger year range, we believe one of our exclusion criteria (“no objective measurement of emotions”) could be refined and clarified, increasing the number of accepted papers. We found this criterion specifically to be subjective and therefore difficult for the researchers to agree on, so wording it differently would possibly increase paper acceptance.
The most common interaction style of the proposed interfaces was WIMP (windows, icons, menus, pointers), with a GUI interface. We did not find a well-defined scenario of use nor interaction style for TUI interfaces, which may be because TUIs are more recent and not as well studied, and because the design methodology and target user strongly define the interaction style, with researchers trying to find novel ways to interact with technology (e.g. squeezing, kissing, gestures). There are several factors that may affect the self-report of emotions using technology: (1) the interface characteristics, as shape, design and interaction style may impact motor and cognitive processes (Lottridge and Chignell 2009b), (2) the users’ own cognitive abilities and digital skills, (3) interest, enjoyment, and motivation to use an interface, and (4) the user’s context. TUIs have the possibility of reducing some of these factors, generating lower stress levels for users and helping users identify emotions through object manipulation, as Isbister et al. state: “We felt that ‘playing’ with objects would be more fun for users than filling out a survey or thinking aloud, and might thus lead to more relaxed and creative responses” (Isbister et al. 2007). This may mean that, especially for users with low digital skills or undergoing a stressful time, this type of interface may provide benefits beyond only self-reporting emotions by giving the users “the opportunity to do something fun” (Gallacher et al. 2015), as well as promoting engagement and being more intuitive.
Self-reflection is an important aspect of registering emotions, especially for systems related to contexts such as mental health or therapy. This may provide benefits to users, and further study needs to be done on how to provide users with these instances of self-reflection to induce behavioral change. Another important aspect of registering emotions is allowing users to share them with others. We found that although most interfaces allow sharing emotions, they do not strongly consider the privacy implications of emotion sharing, or even of emotion visualization, as some interfaces display the users’ emotions. We can classify the proposed recipients of the information into three categories: those closest to the user (e.g. close friends, family, therapists), specific users (e.g. acquaintances, business partners, social network contacts), and any user (information that is shared publicly, e.g. posted to a social network). Few systems are designed for sharing emotional information with anyone (24%), and none of the papers in this category expressed a concern with the privacy of the shared information. However, the fact that most systems are designed for sharing with closer, selected recipients somewhat reflects the sensitive nature of the shared information. The degree of privacy of emotions may also be related to cultural aspects, and we believe this is an underexplored facet as well, as some cultures may be more extroverted and value awareness and self-reflection over privacy, while others may be reticent to record their emotions digitally at all.
Regarding the evaluation of interfaces for self-reporting emotions, we found that an equal number of interfaces were evaluated through quantitative and mixed-methods approaches. We believe qualitative evaluation methods are important to disentangle the complex variables involved in users’ perceptions of these interfaces—e.g. benefits, usability, adoption, long- and short-term persuasiveness. Only one paper considered the digital skills of the users as a relevant characteristic; we believe that for novel interfaces, considering users’ familiarity with technology is important to understand their perception of the interface.
6 Conclusions
We conducted a formal, systematic literature review aimed at understanding challenges, directions, and lessons learned in the design of interfaces for emotional self-report. We found it was a challenge to find research focused on self-report of emotions; most research seems to focus on automatically detecting user emotions (through physiological signals or observation), so this research area is small, but nevertheless important, as self-report is the method used by mental health professionals and can provide important input to computer applications. The reduced number of papers is a signal that this area of research requires more studies (especially involving users in real contexts) and interfaces (with new interaction styles).
Previous research has identified the importance of sharing emotions with other users. Our results show that most interfaces for self-reporting emotions are GUIs. This may exclude some categories of users from these technologies—e.g. users with low digital skills or cognitive impairments—which suggests the importance of studying these users and designing technologies with interaction styles they can use intuitively and easily. One challenge that is transversal to self-reporting and automatic interfaces alike is privacy, as emotional information is sensitive and users may not want to share it. More research on how to manage privacy and protect information is needed.
References
Angelini L, Caon M, Lalanne D et al (2015) Towards an anthropomorphic lamp for affective interaction. In: proceedings of the ninth international conference on tangible, embedded, and embodied interaction. ACM, New York, pp 661–666
Bannon L (2011) Reimagining HCI: toward a more human-centered perspective. Interactions 18:50–57. doi:10.1145/1978822.1978833
Bardzell S, Bardzell J, Pace T (2009) Understanding affective interaction: Emotion, engagement, and internet videos. In: 2009 3rd international conference on affective computing and intelligent interaction and workshops (ACII 2009), pp 1–8. doi:10.1109/ACII.2009.5349551
Barrett LF (2004) Feelings or words? understanding the content in self-report ratings of experienced emotion. J Pers Soc Psychol 87:266–281
Bialoskorski LSS, Westerink JHDM, Broek EL (2009) Mood swings: an affective interactive art system. In: Nijholt A, Reidsma D, Hondorp H (eds) Intelligent technologies for interactive entertainment. Springer, Berlin, pp 181–186
Broekens J, Brinkman WP (2009) AffectButton: towards a standard for dynamic affective user feedback. In: 2009 3rd international conference on affective computing and intelligent interaction and workshops, pp 1–8
Broekens J, Brinkman W-P (2013) AffectButton: a method for reliable and valid affective self-report. Int J Hum Comput Stud 71:641–667. doi:10.1016/j.ijhcs.2013.02.003
Broekens J, Pronker A, Neuteboom M (2010) Real time labeling of affect in music using the Affectbutton. In: Proceedings of the 3rd international workshop on affective interaction in natural environments. ACM, New York, pp 21–26
Caon M, Khaled OA, Mugellini E et al (2013) Ubiquitous interaction for computer mediated communication of emotions. In: 2013 humaine association conference on affective computing and intelligent interaction (ACII). IEEE, pp 717–718
Chambel EOM (2013) Accessing movies based on emotional impact. Multimed Syst 19:559–576. doi:10.1007/s00530-013-0303-7
Chen L, Chen G-C, Xu C-Z et al (2008) EmoPlayer: a media player for video clips with affective annotations. Interact Comput 20:17–28. doi:10.1016/j.intcom.2007.06.003
Chi T-S, Yeh L-Y, Hsu C-C (2011) Robust emotion recognition by spectro-temporal modulation statistic features. J Ambient Intell Humaniz Comput 3:47–60. doi:10.1007/s12652-011-0088-5
Choi S, Yamasaki T, Aizawa K (2015) An interactive system based on yes–no questions for affective image retrieval. In: Proceedings of the 1st international workshop on affect and sentiment in multimedia. ACM, New York, pp 45–50
Conner TS, Reid KA (2012) Effects of intensive mobile happiness reporting in daily life. Soc Psychol Pers Sci 3:315–323. doi:10.1177/1948550611419677
Desmet P (2005) Measuring emotion: development and application of an instrument to measure emotional responses to products. In: Blythe M, Overbeeke K, Monk A, Wright P (eds) Funology. Springer, The Netherlands, pp 111–123
Diener H, Oertel K (2006) Experimental approach to affective interaction in games. In: Proceedings of the first international conference on technologies for e-learning and digital entertainment. Springer, Berlin, pp 507–518
Doryab A, Frost M, Faurholt-Jepsen M et al (2015) Impact factor analysis: combining prediction with parameter ranking to reveal the impact of behavior on health outcome. Pers Ubiquit Comput 19:355–365
Ekman P, Friesen WV (1986) A new pan-cultural facial expression of emotion. Motiv Emot 10:159–168
Fogg BJ (1998) Persuasive computers: perspectives and research directions. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 225–232
Frost M, Doryab A, Faurholt-Jepsen M, et al (2013) Supporting disease insight through data analysis: refinements of the monarca self-assessment system. In: Proceedings of the 2013 ACM international joint conference on pervasive and ubiquitous computing. ACM, New York, pp 133–142
Fuentes C, Gerea C, Herskovic V et al (2015a) User interfaces for self-reporting emotions: a systematic literature review. In: Ubiquitous computing and ambient intelligence. Sensing, processing, and using environmental information—9th international conference, UCAmI 2015. Springer, pp 321–333
Fuentes C, Rodríguez I, Herskovic V (2015b) EmoBall: a study on a tangible interface to self-report emotional information considering digital competences. In: Lecture notes in computer science, vol 9456. Springer, pp 189–200
Gallacher S, O’Connor J, Bird J et al (2015) Mood squeezer: lightening up the workplace through playful and lightweight interactions. In: Proceedings of the 18th ACM conference on computer supported cooperative work and social computing. ACM, pp 891–902
Gross T (2009) Towards a new human-centred computing methodology for cooperative ambient intelligence. J Ambient Intell Humaniz Comput 1:31–42. doi:10.1007/s12652-009-0004-4
Hastings J, Brass A, Caine C et al (2014) Evaluating the emotion ontology through use in the self-reporting of emotional responses at an academic conference. J Biomed Semant 5:1–17. doi:10.1186/2041-1480-5-38
Isbister K, Höök K, Sharp M, Laaksolahti J (2006) The sensual evaluation instrument: developing an affective evaluation tool. In: Proceedings of the SIGCHI conference on human factors in computing systems, pp 1163–1172
Isbister K, Höök K, Laaksolahti J, Sharp M (2007) The sensual evaluation instrument: developing a trans-cultural self-report measure of affect. Int J Hum Comput Stud 65:315–328
Ishii H (2008) The tangible user interface and its evolution. Commun ACM 51:32–36
Jacko JA (2012) Human computer interaction handbook: fundamentals, evolving technologies, and emerging applications. CRC Press, Boca Raton
Jacob RJK, Girouard A, Hirshfield LM et al (2008) Reality-based interaction: a framework for post-WIMP interfaces. In: Proceedings of the SIGCHI conference on human factors in computing systems (CHI’08). ACM, pp 201–210
Jain J, Lund A, Wixon D (2011) The future of natural user interfaces. In: Extended abstracts on human factors in computing systems (CHI EA’11), pp 211–214
Killingsworth MA, Gilbert DT (2010) A wandering mind is an unhappy mind. Science 330:932. doi:10.1126/science.1192439
Kitchenham B, Charters S (2007) Guidelines for performing systematic literature reviews in software engineering. Technical Report EBSE 2007-01, Version 2.3. Keele University, Keele, UK and Durham University, Durham, UK
Laaksolahti J, Isbister K, Höök K (2009) Using the sensual evaluation instrument. Digit Creat 20:165–175. doi:10.1080/14626260903083603
Lahey B, Girouard A, Burleson W, Vertegaal R (2011) PaperPhone: understanding the use of bend gestures in mobile devices with flexible electronic paper displays. In: Proceedings of the 2011 annual conference on human factors in computing systems, pp 1303–1312
Lang PJ (1980) Behavioral treatment and bio-behavioral assessment: computer applications. In: Sidowski JB, Johnson JH, Williams TA (eds) Technology in mental health care delivery systems. Ablex, Norwood, NJ, USA, pp 119–137
Laurans G, Desmet P, Hekkert P (2009) The emotion slider: a self-report device for the continuous measurement of emotion. In: Proceedings of the 3rd international conference on affective computing and intelligent interaction and workshops (ACII 2009), pp 1–6
Lee J-H, Park J-Y, Nam T-J (2007) Emotional interaction through physical movement. In: Human–computer interaction. HCI intelligent multimodal interaction environments. Springer, pp 401–410
Lin C-L, Gau P-S, Lai K-J et al (2009) Emotion caster: tangible emotion sharing device and multimedia display platform for intuitive interactions. In: Proceedings of the 13th international symposium on consumer electronics (ISCE’09). IEEE, pp 988–989
Lopatovska I, Arapakis I (2011) Theories, methods and current research on emotions in library and information science, information retrieval and human–computer interaction. Inf Process Manag 47:575–592
Lottridge D, Chignell M (2009a) Emotional majority agreement: a psychometric property of affective self-report instruments. In: Science and technology for humanity (TIC-STH), 2009 IEEE Toronto international conference, pp 795–800
Lottridge D, Chignell M (2009b) Emotrace: tracing emotions through human-system interaction. In: Proceedings of the human factors and ergonomics society annual meeting, vol 53, pp 1541–1545. doi:10.1177/154193120905301916
Lyytinen K, Yoo Y (2002) Ubiquitous computing. Commun ACM 45:63–96
Matthews M, Doherty G, Sharry J, Fitzpatrick C (2008) Mobile phone mood charting for adolescents. Br J Guid Couns 36:113–129. doi:10.1080/03069880801926400
Mody RN, Willis KS, Kerstein R (2009) WiMo: location-based emotion tagging. In: Proceedings of the 8th international conference on mobile and ubiquitous multimedia. ACM, New York, pp 14:1–14:4
Mora S, Rivera-Pelayo V, Müller L (2011) Supporting mood awareness in collaborative settings. In: 2011 7th International Conference on collaborative computing: networking, applications and worksharing (CollaborateCom), pp 268–277
Morris M, Kathawala Q, Leen T et al (2010) Mobile therapy: case study evaluations of a cell phone application for emotional self-awareness. J Med Internet Res 12:e10
Nagel F, Kopiez R, Grewe O, Altenmüller E (2007) EMuJoy: software for continuous measurement of perceived emotions in music. Behav Res Methods 39:283–290
Negru S (2010) A conceptual architecture of an arduino-based social-emotional interactive system. In: Proceedings of the 2010 IEEE 6th international conference on intelligent computer communication and processing. IEEE Computer Society, Washington, pp 93–98
Neyem A, Aracena C, Collazos CA, Alarcón R (2007) Designing emotional awareness devices: what one sees is what one feels. Ingeniare Revista Chilena de Ingeniería 15:227–235
Niforatos E, Karapanos E (2015) EmoSnaps: a mobile application for emotion recall from facial expressions. Pers Ubiquit Comput 19:425–444
Papastavrou E, Charalambous A, Tsangari H (2016) Exploring the other side of cancer care: the informal caregiver. Eur J Oncol Nurs 13:128–136. doi:10.1016/j.ejon.2009.02.003
Pfister H-R, Wollstädter S, Peter C (2011) Affective responses to system messages in human–computer-interaction: effects of modality and message type. Interact Comput 23:372–383
Picard RW (2003) Affective computing: challenges. Int J Hum Comput Stud 59:55–64
Picard RW, Picard R (1997) Affective computing. MIT Press, Cambridge
Plutchik R (1980) Emotion: a psychoevolutionary synthesis. HarperCollins College Division, New York
Polzin TS, Waibel A (2000) Emotion-sensitive human–computer interfaces. In: ISCA tutorial and research workshop (ITRW) on speech and emotion
Read R, Belpaeme T (2013) Using the AffectButton to measure affect in child and adult–robot interaction. In: 2013 8th ACM/IEEE international conference on human–robot interaction (HRI). IEEE, pp 211–212
Reid SC, Kauer SD, Dudgeon P et al (2008) A mobile phone program to track young people’s experiences of mood, stress and coping. Soc Psychiatry Psychiatr Epidemiol 44:501–507. doi:10.1007/s00127-008-0455-5
Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39:1161–1178
Russell JA, Mehrabian A (1977) Evidence for a three-factor theory of emotions. J Res Pers 11:273–294
Sánchez JA, Kirschning I, Palacio JC, Ostróvskaya Y (2005) Towards mood-oriented interfaces for synchronous interaction. In: Proceedings of the 2005 Latin American conference on human–computer interaction (CLIHC’05). ACM, pp 1–7
Sánchez JA, Hernández NP, Penagos JC, Ostróvskaya Y (2006) Conveying mood and emotion in instant messaging by using a two-dimensional model for affective states. In: Proceedings of VII Brazilian symposium on human factors in computing systems. ACM, New York, pp 66–72
Scherer KR (2005) What are emotions? And how can they be measured? Soc Sci Inform 44:695–729
Schubert E, Ferguson S, Farrar N et al (2013) The six emotion-face clock as a tool for continuously rating discrete emotional responses to music. In: From sounds to music and emotions. Springer, pp 1–18
Sebe N (2010) Human-centered computing. In: Nakashima H, Aghajan H, Augusto JC (eds) Handbook of ambient intelligence and smart environments. Springer US, Boston, pp 349–370
Stovold E, Beecher D, Foxlee R, Noel-Storr A (2014) Study flow diagrams in Cochrane systematic review updates: an adapted PRISMA flow diagram. Syst Rev 3:54. doi:10.1186/2046-4053-3-54
Torning K, Oinas-Kukkonen H (2009) Persuasive system design: state of the art and future directions. In: Proceedings of the 4th international conference on persuasive technology. ACM, New York, pp 30:1–30:8
Watson D, Tellegen A (1985) Toward a consensual structure of mood. Psychol Bull 98:219
Weiser M (1995) The computer for the 21st century: specialized elements of hardware and software, connected by wires, radio waves and infrared, will be so ubiquitous that no one will notice their presence. In: Baecker RM, Grudin J, Buxton WAS, Greenberg S (eds) Readings in human-computer interaction, 2nd edn. Morgan Kaufmann Publishers Inc., San Francisco, pp 933–940
Wigdor D, Wixon D (2011) Brave NUI world: designing natural user interfaces for touch and gesture. Morgan Kaufmann, San Francisco
Yu S-H, Wang L-S, Chu H-H et al (2011) A mobile mediation tool for improving interaction between depressed individuals and caregivers. Pers Ubiquit Comput 15:695–706
Zhao Y, Wang X, Goubran M et al (2012) Human emotion and cognition recognition from body language of the head using soft computing techniques. J Ambient Intell Humaniz Comput 4:121–140. doi:10.1007/s12652-012-0107-1
Acknowledgements
This project was partially funded by CONICYT Chile PhD scholarship (CONICYT-PCHA/Doctorado Nacional/2013-21130661, 2014-63140077), CONICIT and MICIT Costa Rica PhD scholarship and Universidad de Costa Rica, and Fondecyt (Chile) Project No. 1150365.
Fuentes, C., Herskovic, V., Rodríguez, I. et al. A systematic literature review about technologies for self-reporting emotional information. J Ambient Intell Human Comput 8, 593–606 (2017). https://doi.org/10.1007/s12652-016-0430-z