
1 Introduction

The last decade has seen an increased interest in player-facing post-game visualizations [5, 15, 21, 29, 43, 44, 50]. Recent work has, however, revealed that existing visualizations often do not provide enough causal information for players to make connections between their actions and the outcomes they experienced [23]. Process visualizations, which present a human process as a sequence of actions taken [3, 30, 37, 41, 42, 45, 46], appear well suited to preserving causal information. Player-facing, post-play process visualizations are, however, rare and typically included as a secondary feature alongside a visualization of another type [1, 24, 32]. As such, they are rarely the focus of research, and we know little about how players extract meaning from them, knowledge that is necessary to ensure that we design and implement them in appropriate and effective ways.

In this paper, we take the first steps to address this gap by examining how players make sense of post-play process visualizations of others' gameplay in the context of an educational game. In particular, our research question is: "What interpretation techniques do players use to make sense of process visualizations of others' gameplay?" We chose to have participants interpret other players' data due to the significant role that reviewing others' gameplay plays in learning how to play games [36]. To answer this question, we conducted a qualitative study with 13 players of the game Parallel [49], prompting them to make sense of other players' gameplay through a process visualization. Results revealed six interpretation techniques that players leverage to extract meaning from game data. We also identified two general sense-making methods for post-play process visualizations: the induction method and the framing method. Based on these results, we present and discuss four general design implications that should be considered in future design and development.

2 Related Work

To date, most player-facing gameplay visualizations feature either aggregate [5, 15] or spatio-temporal data [1, 24, 44]. Aggregate visualization techniques use visual elements such as percentages, graphs, and charts [5, 15, 29]. However, such visualizations do not preserve granular strategic information [20], making it difficult for players to determine where they may have made a mistake. Spatio-temporal visualizations, in contrast, present granular, action-by-action data superimposed atop a game map [1, 22, 24, 44]. However, when scaled, spatio-temporal visualizations often remove granular data and instead focus on movement over time [44]. While informative, these visualizations lose much of the causal information.

These drawbacks created a space that process visualizations began to fill. In other domains, reviewing processes has been valuable for optimizing human workflows [40, 41] and for learning [27, 28]. In games, graph-based process visualizations are used for game analytics and user experience research [2, 3, 9, 16, 17, 18, 20, 30, 45]. Player-facing process visualizations, in contrast, are typically designed as timelines depicting the ordering of actions taken over the course of a game [1, 22, 24, 32]. However, these timelines are often secondary features attached to another visualization system [1, 24]. As such, they are often not the focus of research. This results in a lack of knowledge about how players extract insights from process visualizations, knowledge that is necessary for informed design.

Understanding how users make sense of data is pivotal for the design of player-facing visualizations. While sense-making has been studied extensively in InfoVis [10, 34, 35, 47], such work is sparse in the context of games. Previous work investigated this question for spatio-temporal visualizations [22] and demonstrated that making meaning from visualized game data warrants its own investigation. However, to the best of our knowledge, no one has specifically investigated this question in the context of process visualizations.

3 Methodology

Parallel is a puzzle game designed to teach parallel programming concepts [19, 39, 49]. We chose to conduct our research with an educational game because retrospective visualizations are already common in digital learning contexts [4, 43, 50]. We chose Parallel for this study as it is complex enough for players to demonstrate various approaches to solving problems, yet simple enough for players to become comfortable with gameplay quickly.

Using the visualization tool Glyph [30], we generated a process visualization (see Fig. 1) based on 15 key strategic Parallel gameplay actions. Each node in Glyph’s network graph represented a different in-game action and a link between two nodes indicated that at least one player in the community transitioned between those two actions. Individual player trajectories within this visualization can be highlighted as seen in Fig. 1.
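To make the structure of such a graph concrete, the sketch below shows one way a community transition graph could be assembled from logged action sequences. This is not Glyph's actual implementation [30]; the action labels and the `player_logs` structure are hypothetical placeholders.

```python
# Hedged sketch: build a community transition graph from per-player action
# sequences. Nodes are in-game actions; a directed edge means at least one
# player transitioned between the two actions. Not Glyph's actual code.
import networkx as nx

# Hypothetical logs: each player's ordered list of key strategic actions.
player_logs = {
    "P8": ["start", "place_semaphore", "link", "run_test", "test_passed"],
    "P0": ["start", "place_signal", "link", "run_submission", "sub_failed"],
}

G = nx.DiGraph()
for player, actions in player_logs.items():
    for src, dst in zip(actions, actions[1:]):
        if G.has_edge(src, dst):
            G[src][dst]["players"].add(player)
        else:
            # Record which players used this transition so that an individual
            # trajectory can later be highlighted, as in Fig. 1.
            G.add_edge(src, dst, players={player})

# Edges belonging to one player's trajectory (e.g., player 8's highlight).
highlight = [(u, v) for u, v, d in G.edges(data=True) if "P8" in d["players"]]
print(highlight)
```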

Fig. 1. A screenshot of the process visualization, with player 8's sequence highlighted.

Thirteen undergraduate computer science students were recruited from universities in the United States, as they represent the target population for Parallel [49]. Gameplay took 30 to 60 min. Players then signed up for data-driven retrospective interviews [8] conducted over Zoom.

3.1 Interview Protocol

Prompt Design. Based on previous work [22, 34, 38], we recognized two types of interpretation techniques: interaction techniques used to extract information from the visualization and cognitive techniques used to make sense out of that information. To ensure that we elicited techniques in both categories, we developed two prompts:

  1. Could you describe this player's actions using the visualization? ("interaction prompt")

  2. Can you say why you think the player played the way they did? ("cognitive prompt")

Procedure. A slide deck was prepared and presented to each participant during the interview. One researcher led the interview, screen-sharing the slide deck, while two others remained silent and recorded, in text, what the participant said.

The first slide contained a visualization of the participant's own data. On this slide, the lead researcher gave the participant basic instructions on how to read the visualization. The next slide contained a visualization of another participant who played similarly to the interviewee. The last slide contained a visualization of another participant who played differently than the interviewee. While displaying the second and third slides, the lead researcher asked the prompts described above. Interviews lasted about 30 min and participants received a $50 gift card.

3.2 Data Analysis

Interview data was analyzed using a two-step, iterative thematic analysis protocol [13, 33]. The first step of the analysis identified the specific interpretation techniques that players used. Two researchers, separately, performed open coding on the interview responses. The unit of analysis was a player’s response to a prompt. They then met and discussed their initial codes to generate a code book of six interpretation techniques. The researchers then performed an inter-rater reliability check using Cohen’s Kappa [7] on 30% [6] of the data. The codes achieved an IRR score of .87, indicating very strong agreement [25].
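For concreteness, a reliability check of this kind can be computed with an off-the-shelf implementation of Cohen's kappa. The sketch below assumes two coders' labels for the same set of prompt responses; the labels shown are placeholders, not our actual annotations.

```python
# Hedged sketch of the inter-rater reliability check: Cohen's kappa over the
# two coders' labels on the 30% reliability subset. Labels are placeholders.
from sklearn.metrics import cohen_kappa_score

coder_a = ["reading", "pattern", "comparison", "inference", "reading", "pattern"]
coder_b = ["reading", "pattern", "comparison", "inference", "pattern", "pattern"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```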

The second step identified the overall process of making sense of the data. The two researchers separately analyzed each prompt response and marked which of the six interpretation techniques were used and in what order. The researchers then reconvened and discussed their findings. They identified two methods for engaging the interpretation techniques and performed a second inter-rater reliability check, again on 30% of the data set [6]. The method codes achieved an IRR score of .74, indicating strong agreement [25]. One researcher then labeled the remainder of the data set with the method codes.

4 Results

The six interpretation techniques are shown in Table 1.

Table 1. The six interpretation techniques, identified from the analysis of players' interactions with the community visualizations, with brief definitions.

Reading the Visualization to Collect Information. Participants would try to collect information from the visualized data as a precursor to making connections between data points. For example, P9 reads the trajectory of another player, stating "They do start, test passed, sub failed, they place the semaphore, then maybe they toggle it, they place the signal, they link, maybe they move it around." Notably, reading the visualization would often encompass a read-through of the entire sequence, suggesting that participants were engaging this technique to gain a holistic overview of the data. This is illustrated by P0, who said "it looks like they placed and moved semaphores and placed signals, linked some together, ran a submission, stopped the submission, placed another semaphore, maybe moved it again, toggled it, and then maybe toggled a different one, ran it, a test passed, and then the submission passed." However, participants did not always read the visualization; many jumped straight to identifying patterns.

Identifying Patterns to Inform Inferences. Participants would make general statements about the characteristics of the data. We saw two types of patterns that participants would identify:

  • Sequential Pattern: Refers to the participant identifying patterns in the ordering of actions. For example, P3 noticed that "[the other player] repeats that process of toggling then placing then linking". Recognition of these patterns is facilitated by the sequential nature of the visualization; such patterns would likely be harder to recognize otherwise.

  • Frequency Pattern: Refers to the participant identifying a pattern regarding the number of actions taken. For example, P0 said: “They move a limited number of times, but they ran the submission a lot because it looks like they stopped it a lot."

We observed that pattern recognition would lead to inferences regarding the players who generated the data. For example, P8 described “They seem to jump back and forth a lot. They were probably thinking through a lot of their placement and movement."

Making a Comparison to Guide Pattern Identification. Comparison did not always occur, but when it did, players would typically compare patterns in peers’ gameplay to patterns in their own. For example, P0 said “They use the stop submission button, that’s interesting, I don’t think I used it at all." Often, participants would use comparison as a way to guide the identification of additional patterns. This is well illustrated by P1 “Once they laid down a solution they would test it and see if it failed or not. Whereas I don’t remember doing as much testing." Here, the participant has identified a pattern in which the subject would lay down a solution then test it. They compare this to their own gameplay, in which they did not test as much. Such comparison can help them identify more patterns (what else did the other player do differently?) and begin to generate a more formal inference.

Making an Inference to Understand the Other Player. As discussed above, inferences were informed by identified patterns within the data, which were sometimes guided by comparison. We observed inferences to be focused primarily on the player, which differs from previous work [22]. We observed two types of inferences:

  • Approach or Strategy: Refers to the participant making an inference about the subject's plan or its execution. For example, P10 said "They saw that the test passed so their aim was to try and generalize the solution." Here, P10 infers a strategic decision that the player made (trying to generalize their solution) as a way of explaining an observed pattern in the data (that the player did not immediately submit after their test passed and instead took other actions).

  • Understanding or Expertise: Refers to the participant making an inference about what the subject knows about the task or subject matter. For example, P11 said "I would say they probably came in with a good idea about how they were going to do the level before they started playing [since] they're very calculated, they rarely jump back and forth between states." Here, the participant has developed an image in their mind regarding the expertise of the other player (that they had a good idea of what they were going to do) that can be used to explain an observed pattern in their data (that they rarely go back to previously visited states).

4.1 Sense-Making Methods

The interpretation techniques connect to one another to form a process for making sense of the data. We refer to this process as a sense-making method. We identified two general sense-making methods for post-play process visualizations of games, shown in Fig. 2 and described in detail below:

Fig. 2. The sense-making methods we observed, in terms of the ordering of interpretation techniques.

Induction Method: This method represents an approach in which players began their sense-making process by reading the visualization. They would then identify patterns in the data, and use comparison, if necessary, to generate an understanding of gameplay events. This would culminate in an inference about the other player. An example of this method is demonstrated by P7: first, they read the visualization, stating “Ran a test and it passed then worked to place the items in one sequence, and then the test failed, and then in another they stopped it again." They follow this with recognition of a frequency pattern, stating “It looks like they placed a lot". Finally, they offer an explanation, stating “they probably deleted [the signals and semaphores] instead of moving them."

Framing Method: When participants used this method, they first made inferences about the other player, often based on visually apparent details, e.g., length of the trajectory. They would then switch to collecting information, first reading the visualization, then using one or both pattern identification techniques and comparison, to generate hypotheses that justified and supported their initial inference. An example of this method is demonstrated by P12: they begin with an inference of the other player’s strategy (or lack thereof), stating “I would think that this player kind of did stuff at random, I’m not sure if there was a process that they used." They follow this by reading the visualization to collect information, stating “It seems like [they’re] going from start and then placing a semaphore [then] going from test passed to stopping submission and moving a signal". They follow this with identification of a sequential pattern (or lack thereof), stating “It doesn’t look like this graph had a lot of iterative processes. It’s a little jumbled up."

5 Discussion and Implications

Our results make it apparent that inferences facilitate players' ability to extract actionable insights from data. This finding is similar to what has been discussed in InfoVis work regarding mental models of data [26, 47, 48]. Unlike in InfoVis work, however, the inferences here inform a mental model of the individual who produced the data rather than of the data itself, similar to what was seen by Kleinman et al. in their study of spatio-temporal post-play visualization [22]. Further, in this work we see the presence of a sense-making method that begins with an inference and then collects data to reinforce it. This may have been encouraged by the nature of the visualization, from which surface-level information, such as the length of a trajectory, could be quickly extracted and used to reach a preemptive conclusion.

This suggests that process visualizations, which present data in a holistic manner, may encourage players to make assumptions about the data up front. However, there is a very real possibility that these up-front assumptions can lead to inaccurate inferences. Thus, process visualizations for post-play analysis should consider incorporating design elements that can inform players' up-front assumptions and guide them towards correct initial inferences. One way to accomplish this could be grouping or labeling actions inside the visualization to indicate what they mean.

Further, while previous work discusses users adjusting frames and hypotheses [22, 26], participants in our study who used the framing method did not make adjustments. In fact, they rarely seemed to uncover information that they recognized as contradictory to their inference. Based on our results, we hypothesize two reasons for this. The first is related to the participants' familiarity with the game. In our study, participants had no prior experience with Parallel. As a result, they likely lacked the domain knowledge necessary to recognize gameplay strategies in the data. Thus, process visualizations may aid players best if they are not displayed until the player has become more familiar with the game.

The second reason is related to the abstraction of the data. The presentation of the gameplay data as a trajectory of actions may have been too abstract. Including game state information, which is recognized as important to understanding context [22], in the process visualization could have helped players better understand what they were observing. Thus, retrospective process visualizations should consider incorporating game state information to ensure that players are able to correctly interpret the context behind each action. This implication, along with the previous one, can help ensure that the player is equipped to correct misunderstandings about the data.

Additionally, the inclusion of comparison, as shown in the results, is not discussed in the previous work by Kleinman et al. [22], where players were not shown their own data. This suggests that the inclusion of a player's own data is likely to spark comparison between themselves and others. Comparison between self and others has been explored in the domain of personal informatics, though usually within the context of a user understanding their own data through the comparison [11, 31]. Here, the comparison was used to understand the other player, as finding the differences in how the other player behaved compared to oneself gave participants an anchor point from which to begin understanding the rest of the other player's experience.

This suggests that process visualizations can leverage comparison to help players more quickly identify connected patterns and reach inferences. Thus, process visualizations in post-play contexts should consider highlighting how the player’s own data compares to and differs from the data of the subject of analysis. This does, however, raise questions about the potential risks of prompting comparison among players, as previous work has demonstrated that players who under-perform can become discouraged when prompted to compare themselves to high-performing players [12]. Thus, process visualizations may wish to only permit comparison against other players with similar skill levels or quality of performance.
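As one illustration of this implication, the sketch below (building on the hypothetical transition graph from Sect. 3) partitions transitions into those shared between the viewer and the analyzed player and those unique to each, so a visualization could style them differently. The function and the grouping are assumptions for illustration, not an existing feature of Glyph or Parallel.

```python
# Hedged sketch: partition transitions into shared vs. player-specific sets so
# a post-play visualization could highlight self-vs-other differences.
# Assumes the hypothetical graph G from the earlier sketch.
def compare_trajectories(G, self_id, other_id):
    self_edges = {(u, v) for u, v, d in G.edges(data=True) if self_id in d["players"]}
    other_edges = {(u, v) for u, v, d in G.edges(data=True) if other_id in d["players"]}
    return {
        "shared": self_edges & other_edges,      # transitions both players made
        "only_self": self_edges - other_edges,   # "you did this, they did not"
        "only_other": other_edges - self_edges,  # "they did this, you did not"
    }
```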

6 Limitations

We acknowledge that this study was performed with a small sample (n = 13). However, we did observe saturation in the data after 7 participants, and we argue that this sample size aligns with those seen in similar work [14, 22]. We additionally recognize that we only examined a single game, only the analysis of others' data, and that the nature of the visualization itself likely influenced our results. As such, more work is needed to establish the generalizability of the findings. With this in mind, we present this work as a first step towards understanding how players make sense of process visualizations of others' data during post-play analysis.

7 Conclusion

In this work, we take a first step towards understanding how players make sense of process visualizations during post-play analysis. Through a 13-participant qualitative user study, we identified six interpretation techniques that players used to make sense of process visualizations and two methods for sense-making. We discuss the implications of these findings on the use of player-facing process visualizations in post-play analysis.