1 Introduction

“The viewers were notably scared. Watching with widening eyes and tensed faces, some of them even clung to their seats.” Taken from a film critique, this excerpt describes the notable reaction of viewers to a scary scene in a film. The dramatic description depicts a film that was made with the intention of eliciting fear in its viewers. Indeed, films are known to elicit not one, but a mixture of emotions in their audience and are therefore referred to in the literature as emotion machines [36]. The emotions a film elicits were found to be a compelling factor in box office success [33]. It is no wonder, then, that many are considering utilizing emotions for recommendations [32, 38, 45]. How to integrate emotions into a recommender system is still an open question, and within it lies a more fundamental one: How does one explore items, and in particular films, according to their elicited emotions?

Films, books, and music belong to a family of products termed experience goods [28], whose value is hard to assess before consumption. Viewing and browsing through a collection of such items by the emotions they are known to evoke may be of interest to researchers [4], product managers, and companies. The public audience might also be interested in having an additional, new entry point for browsing through these media collections. Emotional information, such as joy, sadness, etc., relates to the consumption experience of these goods and is thus an important factor to consider when a person chooses to interact with them [43].

Given that the emotional information of the experience with items can be represented [25], we ask: how can we enable users to explore films according to their emotional experience? Many solutions exist for browsing items by their features. However, as the main reason for going to the movies is to have an emotional experience [36], emotions cannot be treated simply as additional features. The emotional experiences of movies are complex, consisting of a mixture of often conflicting emotions. This high dimensionality of the emotional experience, combined with the large number of movies, creates a high-dimensional big-data space. A simple solution might be to divide this space into categories with an emotional meaning, such as genres. However, it has been shown that genres do not offer a clear representation of the emotional experience [25] and that most movies belong to several genres. Hence, to enable users to explore films according to their emotional experience, we take advantage of users’ natural visual and pattern-recognition abilities [41] and envision a visual emotional space. In such a space, films offering similar experiences are closer to each other, while films offering contradictory experiences are further apart. Exploration within this emotional space can be studied differently for varying audiences or needs: from novice users to experts, and from recommender systems to general curiosity, different demands invite different design choices and solutions. While previous works have explored how to visualize large document spaces (according to text) [10, 13] as well as large image or video collections [2, 19], this is the first exploration of how to search and browse through an emotional space, visualizing a large set of items according to their emotions.

Focusing on the case of researchers and domain experts, we take a user-centered design approach to design and implement a research tool that enables film experts and researchers to explore films in an emotional space. Examples vary: communication researchers looking into the affective dimensions of genres or directors; film educators gaining a deeper understanding of the emotional dimensions shared by film directors; and producers understanding the emotional impact of their films in retrospect or exploring possible emotional avenues for new movies. The primary design goal of the interactive solution is therefore exploratory, and the framework was designed as a research tool for further understanding the emotional space of films while revealing additional layers of information upon zooming in.

The first step towards implementing an emotion-based film browsing system is being able to define the emotions of a film. In our previous work [6, 25], we followed Plutchik’s theory of emotions [30], which defines eight discrete basic emotions: anger, fear, sadness, disgust, surprise, anticipation, trust, and joy. Using the NRC lexicon [24], we created a unique emotional signature for a film from the emotional words in the aggregated text of its online user-generated reviews. We also obtained a large collection of films and their corresponding reviews from the IMDb database and compiled the emotional signatures of the films from their reviews [25].

In the current work, we propose a visualization of the emotional signatures in order to enable the exploration of the emotional space. We use glyphs of different colors and intensities to account for the various emotional values. The glyphs are mapped to a two-dimensional Euclidean representation, the result of a t-SNE dimension-reduction algorithm. The visualization is the basis for an emotion-based framework enabling users to view, search, and interact with films according to their elicited emotions, as depicted by the emotional signatures. The solution we present (the system can be explored at http://emotionmovies.info/) is interactive, allowing users to browse through and interact with a visualized emotional space. This is, as far as we know, the first attempt to visualize the spectrum of elicited emotions for experience goods, and a first insight into how users interact and search in an emotional space. While the presented visualization is for films, our design process is generalizable and allows viewing and browsing of multiple items according to their elicited emotions, whether they are films, songs, or any other experience goods. To validate the utility of our solution, we performed a qualitative evaluation in which eighteen participants, including film experts, students, and enthusiasts, explored the movie space. The results demonstrate that users were easily able to understand the visual metaphors used in our solution while finding interesting and insightful ways to explore films according to their emotions.

The paper is structured as follows. First, in Section 2, we provide background discussing related works. Next, Section 3 describes the method for the design of the interactive tool including the requirements, the data collection, the way we created the emotional signatures of the films, and the visual encoding for the visual solution. Section 4 describes the details of the system, while Section 5 describes an evaluation performed with 18 participants. Finally, in Section 6 we describe our insights from the design process and the evaluation, and provide concluding remarks in Section 7.

2 Related work

We start by reviewing several emotion theories, and specifically Plutchik’s theory of emotions that our system is designed to support. Next, we review various techniques used to visualize and allow users to browse through movies and other media collections. Finally, we review visualization techniques and systems designed to support emotions.

2.1 Theories of emotion

The study of emotion is very complex, as even a uniform definition of emotion is something most scientists do not agree on, attested by the more than 90 definitions of “emotion” that have been proposed [31]. In everyday life, we conceive of an emotion as a feeling or an inner state, often triggered by a stimulus. However, the internal experience is personal and may be confusing, as we may experience several emotions at the same time [31]. Many scientists have studied emotions, and several emotion theories have emerged trying to better understand and explain their dynamics. Among these theories, there are two main directions: dimensional and discrete. Dimensional emotion theories claim that emotions can be modeled along several continuous dimensions. Barrett and Russell proposed such a two-dimensional model of emotion, with valence (pleasure vs. displeasure) and arousal (high vs. low) as the main scales; a core affect, they claim, can be placed along this two-dimensional space [35]. Discrete theories, on the other hand, hold that all humans share a set of cross-cultural basic emotions. One prominent discrete theory of emotion was introduced by Ekman [11]. Ekman suggests universal emotions, yet clarifies that an emotion does not exist as a single affective state but rather comprises a family of related emotional states. These include Enjoyment, Sadness, Anger, Fear, Disgust, and Surprise.

Building on this theory, Plutchik created his wheel of emotions to illustrate the basic emotions as well as the relations between them (see Fig. 2). The wheel consists of eight basic or primary emotions (Ekman’s six basic emotions plus two others: Trust and Anticipation [30]), with each basic emotion having different intensities. For example, Ecstasy is a stronger intensity of Joy, while Serenity is a lower intensity of Joy. Mixed emotions are combinations of two basic ones; for example, Joy and Trust combine into Love. In addition, each emotion has a polar opposite, and opposing emotions are placed opposite each other on the wheel – Joy vs. Sadness, Anger vs. Fear, Trust vs. Disgust, and Surprise vs. Anticipation [31].

Although various studies aiming to visualize emotions use both discrete and dimensional models, the discrete model is more often used [26]. In the current study, we use Plutchik’s theory of emotions, creating a visualization that analyzes emotions in films according to Plutchik’s eight basic emotions.

2.2 Visualizing and browsing through movies and video collections

The role of emotions in films has been extensively researched throughout the years. In a recent book on the subject, Tan [36] describes a variety of ways in which emotions play an essential role in the consumption of films and shows that one of the major incentives for watching films is the emotional experience they offer. Aurier and Guintcheva [5] demonstrate that the success of a film depends on how emotional narratives are perceived and understood by the audience, and how that reflects onto the audience.

Still, most movie websites and movie media visualization and browsing systems do not support viewing media items according to their emotions. Sites such as IMDb provide information and ways to search for actors, directors, characters, genres, film ratings by users, etc. Others, like Netflix and YouTube, also allow accessing and watching movies through on-demand video, providing online search and recommendations options. However, most existing websites do not support access to information conveyed in an emotional form [16].

Exploratory browsing can be defined as: “The behavior when the user is uncertain about her or his targets and needs to discover areas of interest (exploratory), in which she or he can explore in detail and possibly find some acceptable items (browsing)” [9]. Movies and videos are among the most significant sources of entertainment and are increasingly accessible via different channels such as the Internet, social media, and TV. Given these richer environments and the fast-increasing number of available videos, novel methods are needed to help users browse movies and videos [14, 43]. Many researchers have developed systems for browsing movies, some including information on the emotions or sentiments in movie data. FilmFinder is one of the first tools designed to search films according to multiple filter facets, allowing users to browse the filtered results and view a single film’s details [2]. MovieClouds is an interactive web system designed to explore, browse, and visualize movies based on subtitles and audio, where most of the semantics is expressed, with a focus on the emotional dimensions represented in the films or felt by the viewers [8]. Tanin and Shneiderman developed a system [37] demonstrating a user-interface architecture for efficient browsing of large online data: with 10,000 film entries drawn from the IMDb website, users can select in a panel the attributes they want to include in their query. ColorsInMotion [22] is an interactive visual browsing system for videos based mainly on colors, whose purpose is to refine video processing and visualizations based on systematic evaluations, explore views for different contexts, and examine new video properties. It allows the user to search, browse, view, and compare videos; videos are characterized by their dominant or average colors, and users can, for example, search for videos with a specific color.
SceneSkim [29] is an interface tool supporting search for a particular scene that matches a query and browsing for specific clips within a movie, using synchronized captions, scripts (a written description of the film that includes the dialogue, actions, and settings for each scene), and plot summaries. Since searching and browsing within movies is a time-consuming task, SceneSkim locates a specific scene by, for example, searching for a particular word and returning the scenes in which the word appears, or, if needed, the total screen time associated with the word.

In summary, these systems provide various ways to visualize and browse through movie and video collections. However, as we have seen, very few systems include the affective aspects of movies. Since movies have a high power to affect us emotionally [36], visualization techniques incorporating emotions could help manage the complexity of movie information by providing another important facet to be explored [16]. In this paper, we suggest using emotions as the primary way to view a movie space and design an exploration system for domain experts. This investigation can help future systems decide how to incorporate emotions either as an auxiliary method or as the main entry point for viewing, recommending, or exploring movie data.

2.3 Visualizing emotions

The visualization of sentiments and emotions in opinions analyzed from text has become a well-known subject of research over the past years, and there is a growing multidisciplinary interest in the visualization of sentiment and emotions. Kucher et al. [20] present an extensive survey of peer-reviewed papers in information visualization, visual analytics, and other disciplines, analyzing various sentiment and emotion visualization systems and techniques.

Visualization of emotional analysis is used in various domains such as sports [17, 40, 42], storytelling [23] and event-detection [26, 34]. Most systems focus on visualizing positive vs. negative sentiment that can more easily be extracted from text. Visualizing more specific emotions involves using an existing emotional model to process the underlying data. While few systems use the dimensional model (e.g., [40], which shows Tweets on a two-dimensional polar space), most systems use a categorical emotional model as the basis for their visualizations.

Many systems use a radial solution to convey multidimensional categorical emotional information. EmpaTweet [34] employed a radar chart showing the six basic emotions of Ekman’s model. Similarly, EmotionWatch [17] used a radar chart of 20 dimensions to show users’ emotions toward an event, based on an analysis of Twitter posts about that event. Using Plutchik’s emotional model as the underlying model, several systems employed a radar chart to show the relations between emotions across several topics (e.g., [21]). Yu and Wang [42] show emotions extracted from fans’ tweets during the World Cup: the eight basic Plutchik emotions were mapped to their corresponding colors in the wheel and shown in a multiple-line graph. PEARL [44] visualizes an individual’s emotional pattern over time, extracted from social media data. Its timeline-based solution shows the eight Plutchik emotions as streams that change over time, enabling individuals to browse and interpret their own emotional text style and how it evolves.

Common to these systems is that they all focus on analyzing a single event, person, or item (often through time, or by comparing a few selected items). In the current work, we focus on allowing users to view and browse an emotion space, browsing through a large set of items while showing both the overall topology of emotions and the details of a single item.

3 Method

Our method follows Munzner’s nested model for visualization analysis and design [27]. The top level is the characterization of the problem and the domain. Our domain deals with elicited emotions from experience goods, in our case, films, as extracted from the publicly available user-generated content of online opinions. We performed user interviews and describe the elicited requirements in Section 3.1. The second level of Munzner’s model has to do with the mapping of data into abstract operations and data types. We collected data from online reviews, analyzed this data and transformed it to meet the users’ needs for exploration, calculating an emotional signature for each film. This is described in Sections 3.1 and 3.3. Finally, we describe the visual encoding and interaction design (the third level of the model), which enable the users to conduct the mentioned tasks, in Section 3.4.

3.1 Design requirements

In designing the application, we have taken a user-centered design (UCD) approach [1]. Our target users are first and foremost people who deal with film analysis, i.e., film researchers and film lecturers, as well as people from the film industry who wish to investigate meaning and emotions in films in the context of their work practice. To better understand the needs of this group of users, we conducted interviews with three experts: the head of the film studies department at a local college, a film instructor at a highly regarded film school, and a university communication researcher (and lecturer) specifically studying communication and emotions in films. These experts were also consulted throughout the design process, commenting on different design considerations. We asked participants about their needs and current practices with regard to investigating films: How important are films’ elicited emotions? Do they study films in terms of their emotions, and if so, how do they currently conduct their research and investigations? What kind of abilities would they like to have when looking at emotions in films? Interviews lasted about an hour and were recorded, transcribed, and later analyzed.

All three interviewees attested that emotions are a critical factor when studying films and that they frequently refer to films’ emotional attributes. They said that they currently rely mostly on their own or others’ subjective assessments of a film’s perceived emotions, and that they do not know of any tool that provides the audience’s perceived emotions of a film. They said that such a tool could be very beneficial and valuable, as it would enable them to explore new ways of thinking about certain films and find new connections between films, actors, and directors.

The above translates into two different levels of requirements. The first is on a per-film basis, as the interviewees were interested in movies’ emotional features. The second, however, calls for a broader scope, as it requires enabling (1) the exploration of emotional connections between films and (2) the comparison of the emotions of several films. Specifically, when the interviewees talked about the emotional connection between films, they referred to abstract notions such as “finding connections”, “exploring”, or “connection between directors”. We were thus faced with finding a way to allow filtering, comparisons, and searches for (somewhat abstract) patterns on multiple scales.

Thus, the high-level requirement for the system has been defined as creating an interactive visualization environment in which expert users are able to effectively explore movies according to their basic emotions. To enable exploration on a per-film as well as on a planar level, additional filters should be made available from the readily available meta-data. Yet the focal point of view should remain the emotional landscape, and the primary challenge is to create it in a manner that enables seamless orientation within it. To this end, based on the interviews, we defined the following low-level requirements:

  • Enable the view of high-level distribution (topology) of movies according to their emotions

  • Enable the detection of “emotion clusters” – areas of movies showing different types of emotions

  • Search and filter according to different emotional patterns, as well as additional available meta-data for the films

  • Enable recognition of a single movie’s basic emotional pattern

  • Enable comparison between multiple movies’ emotional patterns

For these user tasks, the following underlying assumptions are made: (a) each movie can be described by a unique emotional signature depicting the audience’s overall affective mindset elicited by the movie [25]; (b) each emotion value reflects the strength of that emotion in comparison to the other emotions of the movie; and (c) each emotion value also reflects the strength of that emotion in comparison to the same emotion in other movies.

3.2 Data collection

We downloaded review information for 9666 films released between 1972 and July 2016 from the IMDb website [25]. In addition to the reviews, each movie included meta-data such as the movie’s genres (a movie may be tagged with multiple genres), ratings, cast, and director. In the current corpus, we included only movies released between 2003 and 2014 that had at least 30 reviews, in order to guarantee review volume and the validity of temporal information (it is difficult to obtain release-weekend reviews for movies released before 1998, and recent movies may not have reached their full impact). After cleaning the data, we were left with 2937 movies and 708,618 reviews.

3.3 Data transformation: Creating emotional signatures

We briefly describe here how we created an emotional signature for each movie given its reviews; for a full description, see [6, 25]. In order to distinguish movies according to their emotional footprint, each review of a given movie was annotated for Plutchik’s eight basic emotions using the NRC lexicon created by Mohammad & Turney [24]. The lexicon was created by crowdsourcing and classifies over 14,000 words according to the eight basic Plutchik emotions. The use of the NRC lexicon for emotion detection in text is based on the premise that the emotion expressed in a text is the aggregate of the emotions of the words comprising it. This approach has been shown to work well in different domains [18, 23]. To attain an emotional signature for a movie, we first aggregated all emotional words in each of the eight Plutchik categories from all reviews of that movie. Then, for each emotion, we counted the occurrences of the emotional words that the lexicon associates with that emotion out of all words in the reviews (Fig. 4, top-middle, shows an example of the summed occurrences of the emotional words taken from all reviews of a single movie). Next, we transformed the raw word counts into representative values that can be visualized and interpreted: raw emotion word counts were converted to relative values by dividing, for each movie, the number of words of a specific emotion by the total number of emotion words (Fig. 4, top-right, shows the relative values of the continued example). Notice that these values do not sum to 100% because an emotion word can belong to two or more categories. For example, “abuse” is categorized under Anger, Disgust, Fear, and Sadness; similarly, the word “optimism” is categorized under all four positive emotions (Joy, Anticipation, Surprise, and Trust).
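This counting step can be sketched in Python. The lexicon below is a tiny hypothetical stand-in for the NRC lexicon (the real lexicon classifies over 14,000 words), and the function name is ours, introduced only for illustration:

```python
from collections import Counter

# Toy stand-in for the NRC lexicon: each word maps to one or more of
# Plutchik's eight basic emotions (a word may belong to several categories).
NRC_SAMPLE = {
    "abuse": {"anger", "disgust", "fear", "sadness"},
    "optimism": {"joy", "anticipation", "surprise", "trust"},
    "terrifying": {"fear"},
    "delight": {"joy"},
}

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def relative_emotion_values(review_words):
    """Count emotion-word occurrences per category and divide by the
    total number of emotion words. Values need not sum to 1, since a
    single word can belong to multiple categories."""
    counts = Counter()
    total_emotion_words = 0
    for word in review_words:
        categories = NRC_SAMPLE.get(word.lower())
        if categories:
            total_emotion_words += 1
            for emotion in categories:
                counts[emotion] += 1
    if total_emotion_words == 0:
        return {e: 0.0 for e in EMOTIONS}
    return {e: counts[e] / total_emotion_words for e in EMOTIONS}

# Aggregated words from all reviews of a hypothetical movie:
words = ["terrifying", "abuse", "delight", "the", "plot", "terrifying"]
values = relative_emotion_values(words)
print(values["fear"])  # 3 of the 4 emotion words carry fear -> 0.75
```

Note that "abuse" contributes to four categories at once, which is exactly why the relative values can exceed 100% in total.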

Unfortunately, the extracted emotions do not share the same range and statistical properties, such as mean, median, and variance, as shown in Fig. 3, which depicts the overall distribution of the relative emotional values. As these differences can highly skew and bias the visual encoding of the movies, we applied a counter-balancing transformation: to achieve comparability across emotions and movies, we applied z-score normalization (Fig. 4, bottom-left, shows the z-score values). From these normalized distributions we extracted the percentile of each emotion in each movie as its final representative value (see Fig. 4, bottom-center). The resulting vector, describing each of Plutchik’s eight emotions for every movie, is termed the emotional signature of the movie.
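A minimal sketch of this normalization step, assuming the percentile is obtained from the standard-normal CDF of the z-scores (one plausible reading of the procedure; an empirical percentile rank would be an alternative):

```python
import numpy as np
from scipy.stats import norm

def emotional_signatures(relative_values):
    """relative_values: (n_movies, 8) array of relative emotion values.
    Per emotion (column): z-score normalize, then map each z-score to a
    percentile in [0, 1] via the standard-normal CDF."""
    z = (relative_values - relative_values.mean(axis=0)) / relative_values.std(axis=0)
    return norm.cdf(z)

# Three hypothetical movies, one emotion column for brevity:
x = np.array([[0.10], [0.20], [0.30]])
sig = emotional_signatures(x)
print(sig[1, 0])  # the movie at the column mean lands at percentile 0.5
```

With this convention, a signature value of 0.5 for an emotion means the movie is exactly average for that emotion across the corpus, which matches the default glyph threshold described later.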

3.4 Visual encoding

The visual design aims to represent the users’ information needs in a comprehensive, effective, and consumable manner. Our solution visualizes the films as glyphs on a landscape, allowing exploration at the planar scale and on a per-film basis. The planar visualization allows for complex and serendipitous explorations of the films and their relationships, while interaction allows the user to explore a single film. First, we created a glyph for each movie revealing its emotional signature. Second, we positioned each movie on a plane using dimension reduction. The layout and transformation are exemplified using the film Star Wars – Episode III, as shown in Fig. 4. Finally, we added a toolbox of interactions for exploration.

3.4.1 Glyph design

As stated in the introduction, the visualization aims to encode emotions using Plutchik’s basic emotions, which poses an important constraint on our glyph design. There are exactly eight emotions in this theory, and therefore no requirement to scale up to a higher or varying number of categories. Plutchik, in his original work, proposed a radial layout of these eight emotions, as shown in Fig. 2. This radial design places similar emotions closer together than dissimilar ones, so that similar emotions are adjacent and contrasting emotions are positioned opposite one another. Instead of the flower-like design, we simplified the image to a 3-by-3 grid with nine cells; leaving the center cell empty, the remaining cells encode all eight emotions of a movie. The layout is shown in Fig. 4, top left. We regard this visual encoding as a simplification rather than a significant variation of the original layout, as it keeps the adjacency of similar emotions. Each emotion in the grid has been assigned a color hue using Color Brewer’s Set1 qualitative color palette with eight distinct colors [15]. We used this color scheme, instead of the original shown in Fig. 2, because of its perceptual correctness and because we wanted to apply additional information, namely the emotional signature, to the glyph. The transformation to the emotional signatures is shown in Fig. 4, presenting the four steps of the computation, left-to-right and top-down, on the initial layout described in Section 3.3.

The strength of each emotion has been encoded as color intensity using linear interpolation. We allow the user to set a lower threshold on the strength of the emotions, leaving values below the threshold empty. We set the initial threshold to 0.5, which shows only emotion values above the average. The result of this color coding is shown in the last step of Fig. 4, in the bottom-right corner: Anticipation, Joy, Trust, and Surprise are left empty, while Anger, Fear, Disgust, and Sadness are shown in their corresponding color intensity and hue.
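The thresholded intensity mapping can be sketched as follows. The grid cell assignment here is illustrative only (the exact arrangement is the one shown in Fig. 4), and the interpolation from the threshold to full intensity is our reading of "linear interpolation" in this context:

```python
# Illustrative 3-by-3 grid: eight emotions around an empty center cell.
GRID = [["joy", "trust", "fear"],
        ["anticipation", None, "surprise"],
        ["anger", "disgust", "sadness"]]

def cell_intensity(signature, emotion, threshold=0.5):
    """Map a signature percentile in [0, 1] to a color intensity in
    [0, 1]. Values at or below the threshold render as an empty cell
    (returned as None); values above it are linearly rescaled so that
    the threshold maps to 0 and 1.0 maps to full intensity."""
    v = signature[emotion]
    if v <= threshold:
        return None
    return (v - threshold) / (1.0 - threshold)

sig = {"fear": 0.75, "joy": 0.40}
print(cell_intensity(sig, "fear"))  # 0.5 (halfway above the threshold)
print(cell_intensity(sig, "joy"))   # None (below threshold, empty cell)
```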

3.4.2 Glyph positioning

We represent each movie’s emotional signature on a plane to provide a topological overview of movies as a function of their emotional signatures. We applied t-Distributed Stochastic Neighbor Embedding (t-SNE), one of many dimension-reduction techniques, which is well suited for the visualization of large high-dimensional datasets. Specifically, t-SNE models each high-dimensional item as a 2D point on a plane in such a way that, with high probability, similar items are mapped close together and dissimilar items far apart [12, 21].

We applied t-SNE to the emotional signatures calculated for the obtained IMDb dataset of films, so that similarity is indicated by proximity on the plane. We chose t-SNE over other dimension-reduction algorithms since it managed to disperse the normalized emotional signatures in a manner that created distinct emotional areas, evident in the coloring and corresponding to different genre compositions.
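With scikit-learn (one common t-SNE implementation; the paper does not state which library was used), the embedding step amounts to the following. The random signatures stand in for the ~3000 real eight-dimensional signatures:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
signatures = rng.random((200, 8))  # stand-in for the real signature matrix

# Reduce the 8-dimensional signatures to 2D glyph positions.
tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=0)
coords = tsne.fit_transform(signatures)  # shape (200, 2)
```

The `perplexity` parameter (roughly, the effective number of neighbors each point considers) would need tuning for the real dataset; the value here is scikit-learn's default scale.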

3.4.3 Interaction design

To browse through items on a 2D plane, interaction is needed. We support basic zoom and pan functions for browsing the movie map, as well as basic search and filter functionality. The following interaction components are supported:

  • Zoom + Pan – the user can zoom and pan the map to explore the 2D plane

  • Tooltip – upon hovering over a single movie, tooltip information including the emotional values of that movie is provided

  • Select – when selecting a movie, information about that movie is provided.

  • Search – supports the ability to search for a specific movie according to several options, such as movie name, actor names, genre, and year

  • Emotion Filtering – The movie map can be filtered according to various combinations of emotions. For example, users can choose to filter movies with a high level of joy and surprise.

  • Attribute filtering – The view can be filtered according to various movie attributes (as they appear in the film’s metadata), such as director, movie ratings, movie name, actors, and genres.

4 System description

Figure 1 shows an overview of the Movie emotion map application. The central panel (D) consists of the movie map view, which includes around 3000 movies laid out according to their emotional signature distribution. On the left side, the search panel (B) enables searching for a single movie (or multiple movies sharing part of a name). Searching for a movie will highlight it, showing its title in red font and rendering its movie glyph in bold. Below the search panel is the filter panel (C), which enables filtering according to various parameters. On the right side, the movie panel (E) provides a visual representation, using a radar graph, of single or multiple movies, including the eight emotional category values, available on hover. The movie information panel (F) provides additional information on a selected movie, including the movie’s name, participants, and a link to the movie’s page on IMDb (Figs. 2, 3 and 4).

Fig. 1

Movie emotion map: an overview of 3000 movies laid out in a 2D grid according to their emotional signature. Panel descriptions: (a) emotional glyph legend (b) search panel to enable searching a movie (c) filter panel to enable filtering according to various criteria (d) main panel showing all movies according to their emotional distribution (e) Radar panel enabling to view the emotional signature of a single movie and compare signatures of multiple movies (f) movie information panel showing general information on a selected movie

Fig. 2

Plutchik’s wheel of emotion

Fig. 3

Distributions of relative emotion values across movies showing large differences. To account for these differences, z-score normalization was applied for comparability. Y-axis depicts relative word counts between 0 and 50%

Fig. 4

Schematic representation of the method, from glyph layout and color coding of emotions (top left), via data transformation (word count, relative count, z-score, percent), to the final visual mapping (bottom right). The values are actual values of the movie Star Wars - Episode III

Fig. 5

Radar panel enabling the comparison of multiple movies according to their emotional signature

Fig. 6

Quentin Tarantino’s movies. Upper panel: Tarantino’s films are denoted in red on the map. They are clustered around the same area, implying a similar emotional footprint. Lower panel: (a)-(e) denote the signatures of the above indicated Tarantino films in a glyph format. All films’ emotional signatures show high levels of Anger, Fear, and Sadness. Additionally, the Kill Bill films’ signatures in (d) and (e) show high levels of Surprise, while Django Unchained (b) and Kill Bill Vol. 2 (e) both share high levels of Anticipation

Fig. 7

Meryl Streep’s movies placed on the emotional map. The disparity of the films, as seen on the map, shows her versatility, playing in a wide emotional variety of movies

Fig. 8

Filter examples: (a) displays the films labeled with the Horror genre by their producers. From their location on the emotional map, it is clear that the vast majority of these films have high values of negative emotions; (b) shows movies that have high levels of Surprise in their emotional signatures. Two distinct clusters emerge, corresponding to the areas in which Horror films appear (upper right corner of the map) and Comedies (lower left corner of the map)

The filter panel provides various filter options to reduce the number of movies shown in the main panel. At the top, the user can choose a combination of emotions by which to filter. For example, the current combination will filter movies with a Joy value higher than 70% and a Surprise value higher than 50%. In addition, users can filter movies according to the movie’s rating (choosing highly rated movies by selecting stars on a 1-to-10 scale), release date (by selecting from and to release dates), genre (from a list of given genres), or participants (choosing a specific actor, director, or screenwriter).
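A threshold filter of this kind can be sketched in a few lines. The following is our own illustrative sketch, not the system’s actual implementation; the function name, field names, and sample data are hypothetical:

```python
# Hypothetical sketch of the emotion filter: keep only movies whose emotion
# values (in percent, 0-100) exceed every user-selected threshold, and that
# pass the optional rating and release-date filters.
def filter_movies(movies, emotion_thresholds, min_rating=None,
                  year_from=None, year_to=None):
    """movies: list of dicts with 'emotions', 'rating', and 'year' fields."""
    result = []
    for m in movies:
        if any(m["emotions"].get(e, 0) <= t
               for e, t in emotion_thresholds.items()):
            continue  # fails at least one emotion threshold
        if min_rating is not None and m["rating"] < min_rating:
            continue
        if year_from is not None and m["year"] < year_from:
            continue
        if year_to is not None and m["year"] > year_to:
            continue
        result.append(m)
    return result

# Toy data standing in for the real movie records:
movies = [
    {"title": "A", "emotions": {"Joy": 80, "Surprise": 60},
     "rating": 7.1, "year": 2005},
    {"title": "B", "emotions": {"Joy": 40, "Surprise": 90},
     "rating": 8.0, "year": 2010},
]
# Joy > 70% and Surprise > 50%, as in the example combination above:
hits = filter_movies(movies, {"Joy": 70, "Surprise": 50})
print([m["title"] for m in hits])  # -> ['A']
```

Combining the emotion thresholds with the metadata filters (rating, release date) mirrors how the panel composes its criteria: a movie is shown only if it satisfies all active filters simultaneously.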

The central panel shows all the movies, each represented by a single glyph. The central panel supports zoom and pan operations for easy navigation. It also supports tooltips on hover (providing the movie’s basic emotion information) and selection of a movie. Selection highlights the movie and provides its details in the movie panel. The emotion glyph legend (Fig. 1(a)) provides a constant reminder of the glyph’s visual meaning.

The radar panel supports a detailed view of the emotional signature of a single movie, as well as comparison of the emotional signatures of multiple movies. For example, in Fig. 5, we compare the emotional values of three movies: Batman Begins, Toy Story 3, and The Rise of the Zombies. Looking at the figure, we can see that The Rise of the Zombies (in green), being a horror movie, has strong values of fear, disgust, and sadness, while conversely Toy Story 3 (in orange), being an animation movie, shows mostly opposite emotions, with strong joy, anticipation, trust, and surprise values.

5 Evaluation

The purpose of the user evaluation was to examine the effectiveness, efficiency, and user satisfaction associated with browsing and exploring movies according to emotions. The purpose of the visualization was to allow a seamless and intuitive method of exploration in an emotional space. Hence, we wanted to explore what types of interactions and insights users encounter when they use the system, as well as to determine common interaction patterns. Thus, we performed a qualitative evaluation with target users. Next, we describe the evaluation methodology, followed by a summary of the insights gained from the evaluation. Finally, we describe a few use cases taken from the evaluation, exemplifying insights gained from the use of the system.

5.1 Evaluation methodology

Our evaluation is placed at the data and operation abstraction level of Munzner’s nested model [27], testing target users and collecting anecdotal evidence of utility. To that end, we conducted a qualitative evaluation. We used the think-aloud protocol and semi-structured interviews to examine the utility of the system for potential users. The study participants included three expert users (two film instructors and one film researcher), eight film students, and seven self-declared film enthusiasts. Altogether, there were 18 participants, ranging in age from 23 to 51.

We conducted each session separately. Most sessions (other than those of two of the experts, which were held remotely using Skype) took place in a quiet room, in which participants were presented with the system on a 24″ display. Participants were first briefed about the system and how it works. Both the way the movies are presented and the way to interact with the system were explained. Participants were then asked to “play around” with the system for as long as they wanted. After that, participants were given several tasks to complete with the system (e.g., “find five movies that are very joyful”, “find movies that are close in their emotions to the movie Pirates of the Caribbean”). We used the think-aloud protocol, in which participants are encouraged to think aloud while using the interface and performing specific tasks [39]. After completing all tasks, we conducted semi-structured interviews with the participants to elicit and understand their opinions regarding different aspects of the system. All user comments from the think-aloud sessions and the interviews were recorded and transcribed for later analysis.

We analyzed the think-aloud and interview transcriptions using the thematic analysis procedure [7]. At the start of the analysis, we familiarized ourselves with the contents of all interviews, actively reading through them several times. After making sure that we were familiar with all the data, we started to generate initial codes, first at the technical level and later at the interpretative level. Finally, we analyzed the resulting codes to understand the emerging themes.

5.2 Findings

All participants liked how the movies were presented and enjoyed the interaction with the system. In particular, the film students and film instructors said that this is a great tool that enables them to view and investigate movies in a way they could not before, which could be useful in their studies. All participants liked the new way the system allowed them to examine and browse through films, indicating that they could easily navigate and explore films according to their emotions, as well as validate and analyze hypotheses they had regarding specific films, actors, or directors. Next, we present the evaluation findings according to the main themes that emerged from the analysis.

5.2.1 Validity of film emotions

Most participants thought that the locations and emotion values of specific movies that they knew were correctly reflected in the emotional landscape. For example, one of the film instructors identified the movie “I Saw the Devil” as a known violent movie that triggers strong emotions in viewers. He commented:

“this is a director that does not feel sorry for his films nor his viewers. This is a cruel movie that caused me a lot of anger, and is indeed in its rightful place, at the edge of the red zone. It’s a good movie but it doesn’t make you feel good. I see that it got 100% anger and 99% fear, and whoever analyzed this was not wrong”.

5.2.2 Clustering

All but one of the participants identified clusters of colors reflecting emotion areas. All of these identified at least two clusters: a yellow-and-orange one in the bottom left part of the map, which they understood to represent a joy-related area, and a red-and-green area on the right of the map, which participants identified as an anger and fear area. As one participant said:

“I understand the partition of the map. In the red area, there are movies with more violence and murder, the yellow area includes happy movies and in the middle are optimistic movies”.

Five participants also identified another bright cluster area in the middle of the map, and three other participants identified another, mostly pink area in the top right corner, which included movies with the disgust emotion. One participant did not identify distinct clusters but recognized a scale, or gradual change, of emotions, from happy movies on the bottom left side of the map to scary ones on the top right.

5.2.3 Glyph understanding

All participants managed to recognize and understand the Glyphs:

“The Glyph is very clear to me; I can know the dominant emotions for each movie. The legend helps me know the colors of the emotions, even though I already remember the colors by now”.

However, one problem some participants had was seeing details in the high-level view, mainly because of overlapping items:

“I must zoom in to view the Glyph better since some Glyphs overlap others”.

5.2.4 Colors and intensity

All participants recognized and knew how to identify each emotion color, and all said that they could recognize a color shift across the map, both from left to right and from top to bottom. Regarding the change in color intensity, nine participants immediately noticed that darker colors mean a higher emotional value and brighter colors indicate a lower emotion value (but still above fifty percent). As one participant said:

“That is very interesting. I can see movies with stronger colors and some with less. So if I want to watch a movie with just one emotion, I assume this emotion is dominant, I can zoom in to the stronger area [...] It is very comfortable”.

For the other participants, this mapping was less clear, and it took them more time to understand it. However, all participants understood it by the end of the session.

Most participants detected the brighter area in the middle of the map, recognizing that it included movies without dominant emotions. However, two participants wrongly thought that these were movies with a low rating and thus possibly not good movies to watch.

Interestingly, one participant said they would prefer to choose movies from the edges of the map rather than the center:

“If I am looking for good movies, I will look at the edges, because these are probably movies that have stronger emotions”

5.2.5 Understanding emotions

When we asked participants whether they could find movies with high Joy using only the map, all of them easily managed to find such movies. The same happened when asking for Anger, Fear, and Anticipation. For the other emotions, it was more difficult for participants to locate movies with a high value. Many of the participants did not understand the meaning of the Trust emotion. They could relate to the other seven emotions and reported feeling them when watching a movie; however, they did not know what the Trust emotion means in the context of a movie, so they could not say whether a movie’s Trust value was correct or not.

5.2.6 Interactions

Legend - All users found the emotion legend very useful and said that it helped them identify the emotions when looking at a specific glyph. However, eight participants thought that the legend was clickable, and commented that it would be nice if clicking an emotion on the legend filtered the map according to the selected color.

Filters - After some time, usually only after receiving specific emotion-related tasks, most participants managed to understand and use the emotion filter widget and adjust the emotion levels to filter movies according to their emotions:

“wow, I did not notice it works like that, now it is very easy for me to find movies according to specific emotions”.

However, participants said that they might also want to view movies around or under a specific value, and not only above it, as is currently implemented with the emotion slider.

5.2.7 Summary

All of the participants had very positive overall impressions of the system. Participants were very surprised to be able to search for a movie according to its emotions and stated that this could be an advantageous feature to embed within movie search engines. This was reflected in the participants’ comments:

“The system is very friendly. I never thought it is possible to search a movie according to emotions. Until now I only search by genre or actor, now I can search by emotions together with the genre or any standard filters. I only think it is missing a picture of the movie.”

The film instructors were especially excited about the system. They stated that the tool enables them to find clusters of movies according to their emotions, to find movies that may be emotionally similar to a given movie, and to compare movies emotionally. All of these are useful new capabilities in their work that they did not have before. As one of the film instructors commented:

“This can be very useful for finding films with a similar emotional characteristic, which I can then assess the reason for with my knowledge of the films, and highlight in my class”

The film researcher stated that the system can be a great research tool and can be very useful for examining and validating assumptions. For example, to see if certain films, according to a genre, actor, director, or any other criteria, adhere to their expected emotional structure.

5.3 Example of usage

At the start of the evaluation, participants freely explored the movie space in various ways. In this section, we provide several examples of usage employed by the participants, exemplifying the unique affordances of the system.

The first example comes from one of the film studies students. The student said that he was very familiar with Quentin Tarantino’s movies and hypothesized that they would be very similar to each other from an emotional standpoint, all containing much violence and evoking feelings of anger, fear, and probably disgust. To test his hypothesis, the student searched for movies with Quentin Tarantino as the director. The results of the search can be seen in Fig. 6, which also shows an enlarged glyph view for each of the movies. The figure clearly shows that all of Tarantino’s movies have a very similar emotional signature, being very close to each other on the map and showing high levels of anger (red), sadness (pink), disgust (purple), and fear (green).

Next, the student wanted to look in a different direction, and thus searched for movies with the actress Meryl Streep. The distribution of the emotional signatures of these films is displayed in Fig. 7. Meryl Streep is “particularly known for her versatility”, as claimed on Streep’s Wikipedia page and demonstrated by the rather large variance between her films’ emotional signatures and their placement on the emotional map. A few films (such as Julie and Julia and Mamma Mia!) are in the general Joy area. Still, she also stars in films with high levels of Anger and Fear (such as The Manchurian Candidate) and films that combine high levels of Joy, Anger, and Disgust (such as Into the Woods).

Another participant explored the movie map by filtering movies according to the different genres. Since a movie can have multiple genre labels (genres were labeled according to the IMDb website), the distribution of genres according to emotions was not always indicative. For example, drama movies were spread all around the map, as they often coincide with other genres. Still, it was clear that most comedy, family, and animation movies were in the bottom left part of the view, while thriller and action movies were more toward the top right. Figure 8a shows the main view filtered to show only horror movies. Since most horror movies are emotionally distinct, they form a clear cluster. The outliers at the bottom include movies that also belong to other genres (such as comedy), for example “Scooby-Doo 2: Monsters Unleashed”.

Finally, another student said that she wanted to search for movies with a high level of surprise. She used the filter panel and filtered movies with a 70% level of Surprise (see Fig. 8b). The results show two clear clusters. It was evident from the location and color of the glyphs that the upper part contains sadness- and anger-related movies that are surprising, while the bottom left part contains joyful movies with an element of surprise. Looking at the joyful cluster, she said that she knows the movie “The Prestige”, which indeed can be defined by its surprise features. Browsing nearby movies, she was able to find similar movies that are both surprising and more joyful, such as “Slumdog Millionaire”, “The Illusionist”, and “The Hangover”. She was initially surprised to see the movie “Megamind” nearby, but then immediately said that although it belongs to another genre (Animation and Family), it is emotionally similar to the previously found films.

6 Discussion

Our main goal was to visualize an emotional space and allow seamless exploration within it. The evaluation and usage examples showed that users were able to explore and find interesting and meaningful patterns according to the emotions of films. Still, the evaluation also raised some important ideas and problems that are worth further discussion. We discuss here some of the design decisions made when creating our solution, as well as possible alternatives and ideas for future work.

Glyph design

We designed the Glyph as a simplification of the original layout used by Plutchik. In our design, the categorical attribute of emotions is encoded as color hue, and the value attribute (the emotion value) is encoded using color intensity. This Glyph design is somewhat similar to the flower Glyph suggested in the Better Life Index [3]. However, in the Better Life Index, the value was mapped to the length of the petal rather than to intensity. We used intensity because items (movies) are conceptually the same, and thus we wanted each item on the map to have the same size.

For the encoding of the emotions, we could also have used a radial layout, as was done in the Better Life Index as well as in other visualizations depicting emotions (e.g., [40]). We would probably have followed this layout choice if the number of emotion categories were not constant. The fact that there are precisely eight emotions enables a simpler representation in the 3-by-3 grid design. Because we applied color hue and intensity simultaneously, we had to change the original color palette to a perceptually corrected one, as proposed by ColorBrewer [22]. There is always a dilemma in the choice of colors, as colors have semantic meanings attached to them. We aimed to choose a neutral color scheme as close as possible to the one used in Plutchik’s original wheel.

Finally, an important design decision was to set a cutoff value below which emotional values are not shown (currently 50%). From our experimentation, without a cutoff value the map would be too cluttered with colors, and it would be more difficult to distinguish between areas. A question remains whether a different cutoff value would be better, and whether it would be advisable to leave control of the cutoff value to the user.
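The encoding described above (hue per emotion category, intensity per value, with a 50% cutoff) can be sketched as follows. This is a minimal illustration of the idea, not the system’s actual code: the RGB hues, the background color, and the helper name are our own hypothetical choices standing in for the perceptually corrected palette the system uses.

```python
# Illustrative sketch: map an 8-emotion signature to the cells of a
# 3x3 glyph grid. Hues are hypothetical stand-ins, loosely following
# Plutchik's wheel; the real system uses a ColorBrewer-corrected palette.
EMOTION_HUES = {  # base RGB per emotion (illustrative values only)
    "Joy": (255, 200, 0), "Trust": (120, 200, 80),
    "Fear": (0, 140, 70), "Surprise": (60, 180, 200),
    "Sadness": (70, 110, 200), "Disgust": (150, 80, 180),
    "Anger": (220, 40, 40), "Anticipation": (255, 130, 0),
}
CUTOFF = 50  # emotion values below 50% are not shown

def glyph_cells(signature, background=(230, 230, 230)):
    """Return 9 RGB cells: 8 emotions plus a neutral center cell.
    Intensity interpolates from the background color (at the cutoff)
    to the full hue (at 100%); values below the cutoff stay neutral."""
    cells = []
    for emotion, base in EMOTION_HUES.items():
        value = signature.get(emotion, 0)
        if value < CUTOFF:
            cells.append(background)  # below cutoff: cell stays neutral
            continue
        t = (value - CUTOFF) / (100 - CUTOFF)  # 0..1 within [50, 100]
        cells.append(tuple(round(bg + t * (c - bg))
                           for c, bg in zip(base, background)))
    cells.insert(4, background)  # neutral center of the 3x3 grid
    return cells

# A movie with dominant Anger (100%) and Fear (99%), as in the
# instructor's quote above; all other emotions fall below the cutoff:
cells = glyph_cells({"Anger": 100, "Fear": 99})
```

Note how the cutoff acts on each cell independently: a movie without any emotion above 50% renders as an entirely neutral glyph, which is what produces the brighter area in the middle of the map that participants noticed.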

Glyph positioning

Glyph positions were computed by projecting the movies’ emotional signatures onto a 2D plane using t-SNE. Many other dimension-reduction techniques exist, such as MDS, PCA, and SOM; a comparison of these methods is beyond our scope, but a review can be found in [21]. We chose t-SNE because of its superior performance on the data at hand. We believe there is no single correct choice here; instead, designers should try out multiple methods and pick the one that best suits the data. However, all the mentioned methods are sensitive to differences in the distribution and range of the multidimensional attributes. In our case, the transformation process described earlier was a prerequisite for achieving desirable results.
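A minimal sketch of this pipeline, using scikit-learn, would z-score each emotion dimension before projecting to 2D. The random data and parameter values below are illustrative assumptions, not those of the actual map:

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative data: rows are movies, columns the 8 emotion values.
# (Random stand-in for the ~3000 real emotional signatures.)
rng = np.random.default_rng(0)
signatures = rng.random((300, 8))

# z-score each emotion column so that differences in distribution and
# range across emotions do not dominate the distance computation
z = (signatures - signatures.mean(axis=0)) / signatures.std(axis=0)

# project to 2D; perplexity is a tunable neighborhood-size parameter
positions = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(z)
print(positions.shape)  # -> (300, 2)
```

Swapping `TSNE` for `sklearn.decomposition.PCA` or `sklearn.manifold.MDS` in this sketch is one way to carry out the method comparison suggested above, since all three expose the same `fit_transform` interface.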

Interaction

The emotion filtering was useful for participants when searching for movies with distinct emotional characteristics. However, the emotion filtering could be improved by also allowing upper bounds on the filters (to find movies with a low level of a certain emotion). Another suggestion raised by participants was to allow viewing the top 10%, 5%, or 1% of movies for a certain emotion. This could be implemented, for example, by clicking on the emotion legend, as suggested by a few of the participants. Another important addition to the user interface would be additional layers of labels to support semantic zooming, so that more labels become visible when zooming in. All users liked the multiple interactions that combined metadata features (i.e., director, actors, genres) with emotion filters.

Theories of emotion

In this study, we used Plutchik’s emotional model as the basis for our visualization. This dictated our design choices and determined the shape of the Glyph. Thus, our method only supports this underlying model. However, while Plutchik’s model is highly influential and widely used, other emotional models do exist. Following these models might produce other layouts and visualizations. It is unclear whether other models would be more or less intuitive and clear to users, and what the advantages or disadvantages of using other models for the visualization of multiple items might be. For example, most participants in the evaluation did not relate to the Trust and Anticipation values when it came to describing and browsing through movies. Future work will examine other emotional models and other domain areas.

Evaluation methodology

In the current study, we focused the evaluation on the data and operation abstraction level of Munzner’s nested model [27]. This was done because it was important for us to first show the need for and benefit of the system. Thus, we performed testing with target users, collecting anecdotal evidence of utility. However, other evaluation methodologies are also possible. At the encoding and interaction design level, informal or formal laboratory studies could examine participants’ perceptions of and interactions with the system. To increase the external validity of our solution, a field study documenting real use of the system could further substantiate its utility. Future work is needed to examine the solution from these perspectives.

7 Conclusions

In this work, we set out to visualize thousands of films by their elicited emotions, as extracted from user-generated content in the form of online reviews. The purpose of our research was to find how experience goods can be researched and understood through the emotions they elicit, and how to facilitate the use of basic emotions as the looking glass through which films are explored. Our choices of tools and coloring design yielded a mosaic that forms an emotional map with distinguishable areas of different emotions and gradual transitions between them. It was set as the primary panel to assert the centrality of emotions in the exploration system. Secondary panels give detailed information upon zoom-in operations, along with filtering options by emotions or by the available metadata of the films. Thus, one film expert asked to evaluate the system found value in looking for the emotional area in which a specific director’s films are concentrated; another found the set of films within a specific genre with a prominent emotion. It is our hope that the system will indeed become an aiding tool in the research of films as emotion machines.

This was a first attempt to visualize experience goods by their elicited emotions. While we set out to create a research tool, other directions also exist. Can this visualization be modified to depict other experience goods, such as books, songs, or restaurants? How can it be used within the context of recommender systems? The system can also be extended with additional layers of information that would enable aligning emotions with success signals, such as ratings and income. Finally, the usage of the system can be analyzed in a larger deployment, examining what kinds of movies and emotional patterns users look for according to sex, age, and other demographic information.