
1 Introduction

Education is going through unprecedented changes across the globe. In 2020, 165 countries entirely closed their primary, secondary, and higher education institutions in an attempt to stop the spread of the ongoing coronavirus pandemic (UNESCO, 2021). Even though schools were closed, many countries’ educational systems made significant efforts to provide continuity of learning through distance education and online teaching. However, as over half of the students never worked together during the school closures (Parkin et al., 2020), collaborative learning opportunities fell far short of their potential.

Learning analytics (LA), as a field, has a significant role in facilitating collaborative learning in the classroom, whether remote or not. LA can be used to provide educators, administrators, parents, students and other educational stakeholders with actionable insights about students’ collaborative learning processes. LA has been evolving significantly as a research field, contributing to our understanding of how collaboration occurs and can be supported in digital learning environments. However, real-world adoption and impact of learning analytics research remain scarce and far below their potential (Ferguson & Clow, 2017; Dawson et al., 2019; Alwahaby et al., 2021). In part, this is due to the limited amount of research focusing on the adoption and use of collaboration analytics solutions by key stakeholders in real-world settings (e.g., Zhou et al., 2021a, b).

This chapter presents the findings of our investigations into educators’ expectations and perceptions of collaboration analytics generated from student interactions in an online collaborative learning platform. More specifically, we first present a new method of evaluating the process of collaboration from students’ online interaction data, drawing on collaborative cognitive load theory and social network analysis (Kent & Cukurova, 2020). Then, we present the results of our fieldwork investigating educators’ requirements, insights, and iterative suggestions for the visualisations of these analytics. Specifically, two research questions are of interest:

  1. To what extent do tertiary-level educators evaluate collaborative processes in digital learning environments, and what value do they see in descriptive metrics for doing so?

  2. What is the added value, for educators, of the collaboration process analytics provided by CLaP compared to more traditional participation metrics?

2 Background and Previous Work

For educators to provide adequate support to their collaborating students, they need to understand students’ patterns of behaviour within the collaboration process (Van Leeuwen & Rummel, 2020). Unlike the outcome of collaboration, whose impact can be measured through pre- and post-test analyses, an understanding of the process of collaboration is not as direct. It involves the consideration of both “cognitive and social (interaction) aspects of the collaborative process” (Kaendler et al., 2015; Greiffenhagen, 2012). Over time, orchestration dashboards have been devised to gather, analyse and interpret students’ digital traces in a bid to understand their collaborative learning (Van Leeuwen et al., 2019; Van Leeuwen & Rummel, 2020). These dashboards serve as visual representations, informing educators about their learners’ work to help them track progress (Verbert et al., 2014). They also offer useful insights for teachers to track and stimulate the communication mechanisms among learners that contribute to learning (Van Leeuwen et al., 2019), and to recognise students and/or groups who need particular support (Molenaar & Knoop-van Campen, 2017).

As presented by Van Leeuwen et al. (2019), collaborative learning dashboards can be categorised into mirroring, alerting and advising dashboards. Mirroring dashboards offer information about learners to support the monitoring of collaborative activity but leave all subsequent detection and interpretation of relevant information to the teacher (Van Leeuwen & Rummel, 2020). Alerting dashboards, on the other hand, flag groups classified as needing the teacher’s support, and advising dashboards additionally provide an interpretation and advice on top of the information presented to the teacher (Van Leeuwen et al., 2019). Recent LAK and CSCL publications have good examples of all three categories of collaborative learning dashboards (e.g., Schwarz et al. (2018); Voyiatzaki and Avouris (2014); Casamayor et al. (2009); Martinez-Maldonado et al. (2015); Gerard and Linn (2016); Segal et al. (2017)).

Although LA dashboards have the potential to enable educators to reflect and gain insights on their students’ collaboration, Van Leeuwen et al. (2017) revealed that how educators identify and interpret the information presented on these dashboards remains largely uninvestigated. McCoy and Shih (2016) report that one contributing factor to this difficulty is the perception of teachers as mere users of LA technologies rather than as co-creators of the data and visualisations. Additionally, some educators are not well equipped with the data literacy skills needed to make sense of LA and their visualisations (McCoy & Shih, 2016). Moreover, it is important to note that most collaborative learning dashboards fulfil the mirroring function. That is, the interactions are visualised only in a descriptive manner, without any meaningful interpretation of what they might mean for collaborative learning as a whole or what the teacher should do next. When teachers use such a dashboard, they have to take an interpretative stance by themselves and make connections between observed events and their pedagogical aims (Van Es & Sherin, 2002). Since taking an interpretative approach is not what teachers instinctively do or are routinely trained to do (Van Es & Sherin, 2008), there is an urgent need to investigate educators’ understanding of collaborative learning visualisations and analytics, to (i) help them adapt their pedagogy to the observed collaborative learning analytics and (ii) better adapt the design of LA to the needs and requirements of educators.

2.1 Teacher Evaluations of Collaboration Analytics

In the design of effective collaborative LA, most available research highlights the significance of robust technical approaches (Rosé et al., 2019) and the use of learning sciences principles (Luckin & Cukurova, 2019). A range of other factors is frequently overlooked, such as teachers’ preferences, the purpose and context of use of the collaboration analytics, or the social context in which the analytics will be used. Understanding educators’ perceptions, their needs for collaborative learning support and their views of collaborative learning visualisations is crucial for the successful adoption and wider impact of learning analytics. Recently, there has been scrutiny of the limitations of modern LA systems (Prestigiacomo et al., 2020). This scrutiny is a result of the challenges that students (Jivet et al., 2018; Matcha et al., 2019) and teachers (Mangaroska & Giannakos, 2018) experience in understanding and acting upon data to enhance learning. This examination is important because the effectiveness of collaboration analytics visualisations is highly dependent on the application of insights to achieve the desired goal. Yet, very little work has been done to address teachers’ needs and understanding of collaboration analytics in real-world teaching contexts.

According to Gibson and Martinez-Maldonado (2017), teachers frequently extract “irrelevant” interpretations from collaboration analytics. As a result, they find it challenging to apply insights from the visualisations to improve learning. This research indicates that stakeholders (such as students and teachers) should be involved in the design process of collaboration analytics to support their needs. If teachers are excluded from the design process, it is likely that the generated analytics will not meet their needs and understanding. Thus far, few studies specifically focus on engaging teachers in the design of LA or undertake substantial teacher evaluation studies in collaborative learning contexts. Chen and Zhu (2019), Holstein et al. (2017), and Holstein et al. (2018) are some relevant emerging examples, yet they do not specifically focus on collaboration analytics.

For instance, Prestigiacomo et al. (2020) suggested a human-centred design strategy based on the concept of social translucence that can be used to design effective learning analytics. According to Prestigiacomo et al. (2020), the visibility principle advocates for making relevant information about a specific task available. The awareness principle builds on visibility to enable the interpretation of an observed situation and so facilitate evidence-based decision-making. Finally, the accountability principle is the dimension of social regulation that holds individuals accountable for the data they share with others. To put this into practice, Prestigiacomo et al. (2020) involved six high school teachers in implementing the three principles of social translucence. Their analysis showed that, under visibility, teachers want the following information to be available: students’ (reading and writing) behaviour, collaboration, affect, engagement, orchestration, learning modalities, feedback and assessment. More specifically on collaboration, the teachers wanted to know how well the students work collectively and to understand which individuals participate in the group work.

More specifically in the context of collaborative learning, Martinez-Maldonado (2019) studied teachers’ preferences in collaborative LA visualisations. Focusing on user experience, this study confirms that teachers prefer graphical explanations, in the form of tracking visualisations, to text-based explanations. On the other hand, the teachers found that text-based explanations were easier for students to follow when linking ideas. In essence, the choice of analytics could be influenced by the specific pedagogy implemented. Moreover, Martinez-Maldonado’s (2019) findings show that teachers want the flexibility to configure the data that collaborative learning visualisations display. Teachers are generally under pressure to keep up to date with all activities and have to continuously decide which group or student receives their attention at any given moment (Greiffenhagen, 2012). Given the dynamic nature of collaborative learning, such flexibility improves the chances of adoption of collaboration analytics visualisations. Similar points were raised by Holstein et al. (2018) in individual learning settings. The authors report that the value of teacher-facing visualisations may depend on the extent to which teachers are involved in the design decisions. In collaborative learning contexts, Van Leeuwen et al. (2014) also affirm that teachers’ beliefs about what accounts for effective collaboration can significantly affect their use of collaboration analytics visualisations.

Swidan et al. (2019) investigated how teachers comprehended the progression of multiple groups through collaborative visualisations. The authors found that incorrect solutions, explanations or challenges, technical problems, confusion, off-topic discourse, idleness and correct solutions are some of the situations that teachers can detect using collaboration analytics visualisations. It is interesting to note how teachers’ experience can also affect how they interpret visualisations of collaborative learning. Teachers with more years of expertise tend to respond based on their own reading of the situation, regardless of what is presented in the visualisations, whereas novice teachers respond to the learners in a sequential pattern, as the dashboard informs them. However, as argued by Van Leeuwen et al. (2019), the mechanisms by which educators discover and perceive important data on collaboration visualisations remain understudied. The authors explain that the pattern or sequence in which teachers navigate through the visualisations affects their understanding of the data and, consequently, the decisions they make to support students learning in groups (Van Leeuwen et al., 2019). Therefore, they argue that in addition to providing data-rich visualisations, tools should offer built-in guidance to enhance teachers’ use of collaboration analytics visualisations. In their meta-analysis of 26 papers on collaboration orchestration tools, Van Leeuwen and Rummel (2019) emphasised the need to investigate further how teachers engage and interact with collaboration analytics and their visualisations.

To create collaboration analytics and visualisations that are meaningful for educators’ practice, it is expedient to (a) “understand the teachers’ needs;” (b) “understand the particular context of usage;” and (c) “understand how the design of the analytics can be aligned with their pedagogical intentions” (Martinez-Maldonado, 2019). Before exploring teachers’ needs and understandings with regard to collaboration process analytics, we present in the next section the specific analytics built and evaluated in this study.

3 Specific Collaboration Analytics Investigated by the Study

In this study, we focused on collaboration analytics inspired by collaborative cognitive load theory (CCLT; Kirschner et al., 2018). In our previous work, we suggested a new method of measuring the process of collaboration using social network analysis to evaluate the balance between interactivity gains and coordination costs of learner communities (Kent & Cukurova, 2020). The following section presents a brief overview of this approach in support of the chapter’s interpretations. For more detailed explanations of the approach, as well as of the connection between CCLT and the social network analysis metrics used in the analysis, please refer to Kent and Cukurova (2020).

Kirschner et al. (2018) argue in CCLT that, because individual working memory (WM) is limited, tasks with a high cognitive load create the need to combine multiple WMs to work collectively on an assignment. The combination of multiple working memories makes it easier for students to perform tasks in groups. In this case, the available working memory capacity may increase without necessarily increasing the cognitive load of the tasks. So rather than an individual focusing their limited working memory on solving a problem, more than one learner’s working memory can be combined to solve the same problem. In such a scenario, collaboration becomes useful because it reduces extraneous cognitive load. Therefore, the goal of collaboration is, to a certain extent, to provide just enough collective WM to overcome cognitive overload. On the other hand, when the collective WM is significantly higher than the cognitive load, the task loses its complexity; participants then become less engaged since their cognitive resources become redundant. Moreover, groups in collaborative settings require significant effort to be coordinated and organised. This also negatively impacts the outcome of the collaborative process, since learners have to put in more effort to get things going. If the learners have prior experience of working together, these costs may be lower. Thus, collaborative learning becomes an act of striking the right balance between the WM gains from interactions (interactivity gains) and the costs of coordinating the challenges associated with operating with (and in the context of) others (coordination costs).

3.1 Analytics of Collaboration as a Process (CLaP)

Leveraging CCLT, in our recent work we suggested a new method of measuring the process of collaboration irrespective of its particular indirect outcome evaluations (e.g., grades, group project outcomes) (Kent & Cukurova, 2020). In this work, we use social network analysis to examine the relationship between the interactivity gains (IG) and coordination costs (CC) in group learning to make sense of learners’ collaboration process.

Interactivity Gains (IGs)

IGs are the cognitive wealth and benefit resulting from interactions with co-learners. Through social influence, interactions are known to boost collaborative learning and collective performance (Bernstein et al., 2018).

Coordination Costs (CCs)

CCs are the cognitive resources participants need to take part effectively in the collaboration process, manage their interdependencies and ideas, and complete the task collaboratively. CCs affect collaborative learning: the more resources needed to coordinate the collaboration, the lower the effectiveness of the collaboration (Nokes-Malach et al., 2012).

To examine the process of collaboration, we analysed the collaboration process of two communities using the CC and IG. The CC is proxied by the variance in node degree, and the IG is proxied by reciprocity. The students were drawn from two different postgraduate cohorts, pseudonymised as Community 1 and Community 2, with 42 and 32 students, respectively.
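To make these proxies concrete, the sketch below illustrates how they could be computed from a directed interaction network with the networkx library. This is our illustration rather than the implementation of Kent and Cukurova (2020): the interaction log format is assumed, and the CC is read here as the coefficient of variation of node degree, one common way of expressing degree variance.

# Illustrative sketch only (assumed log format: (source, target, week) tuples).
import networkx as nx
import numpy as np

def clap_proxies(interactions, week):
    """Return (IG, CC) for all interactions up to and including `week`:
    IG proxied by network reciprocity, CC by the coefficient of
    variation of node degree."""
    G = nx.DiGraph()
    G.add_edges_from((s, t) for s, t, w in interactions if w <= week and s != t)
    ig = nx.reciprocity(G)                        # share of reciprocated ties
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    cc = degrees.std() / degrees.mean()           # spread of participation
    return ig, cc

# Toy usage with hypothetical learners A, B, C:
logs = [("A", "B", 1), ("B", "A", 1), ("A", "C", 2), ("C", "B", 2)]
print(clap_proxies(logs, week=2))  # IG = 0.5, CC is roughly 0.18 for this toy network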

An online discussion tool was used for data collection, and the collaboration activities were recorded for 7 weeks. The tasks for the two communities were the same and involved building a collective concept map via online discussions. At the end of the 7 weeks, Community 1, which had 42 students, had generated 7600 interactions, 253 posts and 24 cross-references in total. Community 2, with 32 participants, accumulated 15,515 interactions, 408 posts and 22 cross-references among their posts. Table 8.1 shows the breakdown.

Table 8.1 Top-level breakdown of the collaboration

Collaboration as a Process (CLaP) Analytics

The interactions among the students are categorised into three dimensions (a simple mapping sketch follows the list):

Contribution interactions. Interactions related to the creation, updating and deletion of posts.

Consumption interactions. Interactions related to viewing posts, viewing a map of posts, viewing attachments, searching and refreshing sub-posts (Kent & Rechavi, 2020).

Organisational interactions. Interactions involving the connection of non-connected posts, voting on and “un-voting” posts, and following and unfollowing posts/learners.
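As an illustration of how raw platform events might be bucketed into these three dimensions, the sketch below uses a simple lookup table. The event labels are hypothetical placeholders, since the chapter does not list the platform’s actual event names.

# Hypothetical event names; only the three target dimensions come from the chapter.
INTERACTION_DIMENSIONS = {
    "create_post": "contribution",
    "update_post": "contribution",
    "delete_post": "contribution",
    "view_post": "consumption",
    "view_map": "consumption",
    "view_attachment": "consumption",
    "search": "consumption",
    "refresh_subposts": "consumption",
    "connect_posts": "organisational",
    "vote": "organisational",
    "unvote": "organisational",
    "follow": "organisational",
    "unfollow": "organisational",
}

def categorise(event_type: str) -> str:
    """Map a raw log event to one of the three CLaP interaction dimensions."""
    return INTERACTION_DIMENSIONS.get(event_type, "unknown")

print(categorise("view_post"))  # -> consumption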

Figure 8.1 shows that the two communities exhibited different patterns of growing interactions. Community 1 started with fewer interactions than Community 2, which started strongly and continued to grow throughout the 7 weeks.

Fig. 8.1

Evolving interaction networks of Community 1 (top) and Community 2 (bottom). Consumption type interactions are orange, contributions are blue, and organisational interactions are green

As can be seen in Fig. 8.2, in Community 2 the number of interactions of all three types increases from week 1 until the end of week 7. On the other hand, reciprocity (the proxy for IG) and the coefficient of degree variation (the proxy for CC) present different degrees and directions of change. For instance, it seems that the higher the CC, the lower the IG, and vice versa. Despite the sharp increase in the IG from weeks 6 to 7 for Community 1, the CCs did not show a correspondingly sharp decrease. This can be attributed to the fact that the tasks introduced at that point were grade-related, which could compel all students to participate in the collaboration. The impact of learning design decisions, including assessment, on the observed learning analytics has been well established in the literature (e.g., Zhou et al., 2021a). Based on this insight, we assumed that the CLaP analysis could also be used by instructors to understand possible ways of adjusting the learning activities to enhance collaboration. In this study, we examine the value, if any, of the CLaP visualisations for educators and test such assumptions.

Fig. 8.2

(Left) Community 2’s CLaP components (reciprocity, coefficient of degree variation, and their relation), normalised to a 0–1 scale; (right) Community 2’s consumption, contribution and organisation interactions, normalised to the size of the community and a 0–1 scale.
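The normalisation mentioned in the caption can be read, under our assumptions, as dividing interaction counts by community size and then min-max scaling each weekly series to the 0–1 range; the short sketch below illustrates this reading (it is not necessarily the authors’ exact procedure).

import numpy as np

def minmax(series):
    """Min-max scale a weekly series to the 0-1 range."""
    s = np.asarray(series, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Community 2's weekly contribution counts (as plotted in Fig. 8.4) and its size.
weekly_posts = [184, 305, 415, 493, 558, 596, 614]
community_size = 32

per_learner = np.asarray(weekly_posts, dtype=float) / community_size  # scale to community size
scaled = minmax(per_learner)                                          # then to a 0-1 range
print(scaled.round(2))  # -> [0.   0.28 0.54 0.72 0.87 0.96 1.  ]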

4 Methodology

To investigate the needs and understanding of educators from different backgrounds and levels of experience with regard to visualisations of collaboration process analytics, we asked faculty members at five different institutes to respond to a survey and invited them to two workshops in which the CLaP analytics were introduced and discussed in focus groups. The details of the workshops and survey are discussed below, and the questions used can be found in the Appendices.

4.1 Participants

In total, 19 participants were recruited through convenience sampling. The sample included both experienced online teachers and those with few or no years of experience. This diversity allowed us to build a good picture of both novice and experienced instructors. The participants were all based in departments of psychology, education, and computer science and taught a variety of subjects related to these fields. All participants had some research interest in educational technology and/or learning analytics; however, they were not actively researching in this area. As we highlight below, most of them also did not have any particular experience of teaching in online settings. Although 19 participants is not a large sample, compared to previous research (e.g., Prestigiacomo et al., 2020) it was a sufficient sample size for in-depth teacher evaluations.

4.2 Data Collection Phases

Workshop Part 1

In the first phase, we organised a workshop for the participants to introduce them to the CLaP analysis and the visualisations of IG and CC. The workshop was interactive, as the attendees were encouraged to react and respond to the subject of discussion in real time. The discussion included the theoretical considerations behind the CLaP analytics (e.g., how working memory and long-term memory relate to the collaboration process). We also explained the IGs and CCs in relation to the proxies we derived from the social network analysis. Finally, we showed some of the visualisations to the participants so that they could gain a better understanding of the metrics (posts, views, reciprocity, and the variance of node degree) we used to analyse the process of collaboration.

Survey

After the first phase, we asked participants to fill out a survey to express their understanding of student collaboration in online asynchronous learning settings, as well as their needs in interpreting the visualisations. We grouped the survey questions into two categories, as summarised below. The survey involved multiple-choice questions, Likert scales, open- and closed-ended questions, and multiple-selection questions. All survey questions used in this study can be found via the link provided in Appendix 8.1.

  1. Previous experience in online collaboration – Survey Section 1, all seven questions

  2. Collaboration visualisations

    2.1. Charts showing students’ participation – Survey Section 2, questions 1, 2, and 3

    2.2. Charts showing students’ interaction and coordination – Survey Section 2, questions 4, 5, 6, and 7

    2.3. Comparing the participation charts to the interaction and coordination charts – Survey Section 2, question 8

    2.4. Views about online collaboration charts – Survey Section 2, questions 9, 10, and 11

Workshop Part 2. Open discussion

After the participants completed the surveys, in the second half of the workshop we invited them to an open-ended discussion on the collaboration visualisations. This phase aimed to generate in-depth probes of the participants’ views of the collaboration charts, as well as of any aspects the survey did not cover but the participants considered significant. The workshop was run online with all participants attending together. The discussion was facilitated by an experienced researcher who invited contributions to the pre-set questions in Appendix 8.2 (derived from the survey items) and followed up with explorative probes inviting explanations of answers. The visualisations were introduced with a presentation by the expert; each phase of the workshop took 90 min and was recorded for qualitative analysis of the discussion transcript.

4.3 Data Analysis

The survey data were analysed using SPSS 24.0: all data were entered into the software and descriptive statistics were applied. The qualitative data analysis was conducted using Braun and Clarke’s six phases of thematic analysis (Braun & Clarke, 2006). Transcribing the workshop recordings was the first step. An independent researcher then developed the first thematic codes of critical moments. Afterwards, codes and data were shared with other researchers to be discussed and revised, to ensure that the emerging themes and quotes covered all of the collected data and could be audited.

5 Results

5.1 Participants Experiences in Online Teaching and Their Confidence in Reading Basic Visualisations

Based on previous research (Swidan et al., 2019; Martinez-Maldonado, 2019) and our assumption that participants’ knowledge of collaborative learning and their confidence in reading basic visualisations might both moderate their understanding of the process visualisations we studied, we next summarise their answers to these two questions.

Figures 8.3a, b summarise the participants’ experience in online teaching as well as their confidence in reading visualisations. Six of the participants did not have any experience in online teaching, 5 had 1–3 years of experience, only 1 had 3–5 years, and 7 had over 5 years of experience (M = 2.9 years, SD = 2.5 years). To complement that, 7 of the 19 participants had no experience in designing online collaboration, 8 had 1–3 years of experience, none had 3–5 years, and only 4 had over 5 years of experience (M = 2.1 years, SD = 2.2 years). On a scale of 1 (Not at all) to 7 (I am an expert), 5 participants indicated having level 5 knowledge of collaborative learning (CL), while only two indicated the same level (5) of confidence in reading basic visualisations. Although none indicated level 7 CL knowledge, five participants indicated level 7 confidence in interpreting basic visualisations. Of the five participants who reported level 5 CL knowledge, only two reported level 5 confidence in reading basic visualisations. Although only one participant reported level 1 CL knowledge, this single participant was among the four participants who declared level 6 confidence in reading basic visualisations.

Fig. 8.3

(a) Participants’ years of experience in online teaching and designing online collaboration. (b) Participants’ self-declared knowledge of collaborative learning and confidence in reading basic visualisations

5.2 Evaluation of Collaboration in Online Classes

The survey results showed that 12 of the 19 participants do not evaluate online collaboration as part of their teaching practice. Only one participant evaluates the collaboration competence of every learner individually. Three participants evaluate collaboration competence at the group level, while another three assess both the collaboration of individuals and groups. In total, then, four participants evaluated the collaboration competence of individuals and six evaluated the collective collaboration competence of groups. Overall, even though most of the participants used online collaboration to engage their students in active learning, most of them did not evaluate collaboration at all.

Participants’ Pedagogical Purposes for Using Online Collaboration (Table 8.2)

Fifteen of the 19 participants use online collaboration to engage their students in active learning. Of these fifteen participants, ten use online collaboration for knowledge building, nine use it to build their students’ collaborative skills and another eight for communication. Only two participants said they use online collaboration for all six pedagogical purposes included as options in the question, and only three participants (P6, P7, and P8) listed other purposes. P6 said that they use online collaboration to support “master’s projects remotely”, and P7 uses it for “Community building – developing a sense of community among the students as well as partially developing a sense of ownership.”

Table 8.2 Participants’ pedagogical purpose of using online collaboration

Criteria Participants Use to Evaluate Online Collaboration (Table 8.3)

Of all the criteria teachers might use to evaluate online collaboration, “Quality of each student’s posts evaluated from a subject domain perspective” received the highest response; 11 of the 19 participants chose it. Of these 11 participants, eight also evaluate collaboration by checking the “quality of each student’s posts evaluated from a dialogic perspective”. Interestingly, three of the seven participants with experience in teaching online also evaluate online collaboration based on the quality of each student’s posts from the subject domain perspective. Notably, only P14 said they evaluate collaboration based on the “Number of posts for the whole group”. Of the five participants (P3, P13, P14, P17, and P2) who evaluated the “Number of posts each student replies to others”, four (P13, P14, P17, and P2) have more than 5 years’ experience in online teaching.

Table 8.3 Survey results of criteria the practitioners use to evaluate online collaboration

5.3 Educator Interpretations of Students’ Online Collaboration

Interpretations of Student Collaboration Based on the Descriptive Statistics of Student Interactions (Table 8.4)

First, participants were shown basic descriptive metrics and asked to interpret them and determine which of two cohorts of students was more collaborative. The descriptive metrics included the total number of interactions, the total number of posts, the number of students, and the total number of cross-references.

Table 8.4 High-level descriptive metrics of the two cohorts

Although 17 out of the 19 participants expressed that Community 2 was more collaborative, four of them were sceptical about their decisions. P3 said that “the number of interactions and cross referencing indicated Community 2 is more collaborative than Community 1”. This participant also suggested that more data such as the “quality of interaction” could confirm the decision to go for Community 1. Although P9 preferred to have more data to determine that Community 2 was more collaborative than Community 1, they used the “total number of cross references” as the yardstick for the choice of the more collaborative community. Among the 15 participants who argued that Community 2 was more collaborative, five (P6, P7, P12, P19, and P1) made their decision based only on the total number of interactions and cross-references.

In addition to the two factors mentioned above, P6 argued that familiarity among the students and the pedagogy/course context can impact their online collaboration. More specifically, the participant said, “I think maybe students knew better each other in spring, maybe there is something different in the context”. In addition, P7 asserted that having access to the quality of interactions, instead of just the total number of interactions and cross-references, can give more insight into which community collaborated better. P11, who based the choice between Community 1 and Community 2 on the “total number of interactions”, had a similar opinion: “The number of posts might not be an accurate metric of collaborative learning, because there are other, more accurate metrics (quality over quantity)”. The four participants (P5, P13, P14, and P2) who have more than 5 years of experience designing online collaboration had slightly different reasons for their choice of Community 2 over Community 1. P5 only mentioned that “Community 2 looks better from the outside”. P13 and P2 said Community 2 was more interactive because it had more interactions, posts, and cross-references. For P14, the choice of Community 2 was determined only by the number of cross-references.

Besides the raw metrics, P15, P17, P19, P1 and P2 also based their decisions on the ratio of students to posts/interactions/cross-references. Although Community 2 had fewer students than Community 1, its posts, interactions and cross-references were higher than those of Community 1.

Next, Fig. 8.4, showing the number of posts per week for each community, was shown to the participants. Participants were asked to read the visualisation, determine which community was more collaborative, and provide reasons to support their choice. Additionally, they were asked if their choice remained the same throughout the 7 weeks.

Fig. 8.4

Line graphs of the number of contributions (left – Community 1; right – Community 2)

Of the 19 participants who responded to this question, 15 asserted that Community 2 was more collaborative than Community 1. P17 observed that there was an increase in Community 1’s number of contributions during Week 5. On the other hand, P12 said that Community 1 was more active in Week 3. Though most participants came to their decision by looking at the number of posts, P9 took it one step further by looking at the ratio of students to posts. Two participants (P3 and P5) said that they could not determine which community was more collaborative using the available data (contributions by post).

According to P14 and P19, Community 2 was preferred because it yielded more contributions, which increased consistently over the weeks. P19 noted some ambiguities in certain weeks: “Overall contributions in community 2 are higher and consistently increasing. However, different periods reveal different trends. Weeks 2 and 5 appear to show a much greater increase in collaboration in community 1. Rate of increase per week would be a useful metric.”

On the other hand, P2, a participant with over 5 years of experience in online collaboration, expressed that “Community 2 is more collaborative because there are more posts every week than community 1. I’m not sure of the trend since the rise of community 1 is much bigger than 2.”

Next, participants were shown Fig. 8.5, which displays the number of posts (yellow line) and views (purple line) per week for each community. The participants were invited to examine the visualisations to determine which community was more collaborative. Further, they were encouraged to provide reasons for their choice as well as indicate whether their choice remained the same for all 7 weeks.

Fig. 8.5

Line graphs of contributions and views (left – Community 1; right – Community 2)

In this case, only 12 participants (P5, P7, P8, P9, P12, P13, P15, P16, P18, P19, P1 and P2) chose Community 2 as more collaborative than Community 1. Only nine of these 12 participants (P7, P9, P12, P15, P16, P17, P18, P19, and P1) explained the reason behind their choice of Community 2. It is worth noting that P3, P14 and P17, who have at least 1–3 years of experience in designing online collaboration, found it quite challenging to decide which of the communities was more collaborative. According to P3, the data (the number of posts and views) were insufficient to draw conclusions, while P14 and P17 expressed that they were not sure which community was more collaborative.

According to P7, the choice of Community 2 was heavily predicated on the steady increase in views and postings over the weeks. Further, the participant noticed a sharp increase for Community 1 in Week 6. The participant explained that this might be because it was toward the end of the course, and everyone was trying to address everything they might have missed. On the other hand, the participant observed that the interaction in Community 2 was lower during this week (6) and said that this may be because they had steady interaction before that week and would not have so many backlogs to address. “They may be more interested in self-learning or self-review of course-related stuff,” the participant added. According to P12, although Community 2 had more posts and views than Community 1, the communities can be said to have almost the same level of collaboration if the ratio of views per post is considered. On account of this, the participant asserted that the number of posts is not a sufficient metric to measure students’ collaboration. According to P19, “The overall number of posts is much higher for community 2 so I would say they are more collaborative. However, both week 2 and 5 show community 1 collaborations increasing more rapidly.”

Interpretations of Student Collaboration Based on the Collaboration Process Analytics

In this section, we first presented the graphs in Fig. 8.6 showing the reciprocity of learners’ interactions per week for each community. The reciprocity of the learners’ social network (i.e., to what extent they respond to each other) was measured to represent their interactivity gains (IG). The participants were asked which community is more collaborative and why. Additionally, they were asked if their choice would remain the same for all 7 weeks.

Fig. 8.6

Line graphs of Interactivity Gain (left – Community 1; right – Community 2)

Ten participants (P3, P5, P8, P11, P12, P13, P16, P18, P19 and P2) chose Community 2 to be more collaborative than Community 1 based on the IG. Overall, they chose Community 2 because in most weeks the normalised value of the IG was higher than in Community 1. Additionally, P5, P7, and P8 explained that the steadiness and consistency of the IG of Community 2 indicated that there is better collaboration. For P18, Community 2 looked more collaborative in Weeks 2, 3 and 4. However, there were no meaningful differences in the other weeks. P2, on the other hand, used the average IG over the weeks to decide the most collaborative community. Here is what the participant observed: “Community 2 is more collaborative because the average gain of community 1 is around 0.48 while the average of community 2 is around 0.54. I am not sure of the future situation according to the bigger rise of community 1.”

According to P11, the choice of Community 2 was also based on the decrease in the IG of Community 1 in the middle of the course (Week 3 and 4). Although P12, P13, P16, and P19 chose Community 2, they explained that the experience was not the same throughout the entire period. P16 stated that Community 1 was more collaborative in Week 6 & 7 and for P19, it was more collaborative in Weeks 5 to 6. “Overall community 2 is higher, but not for all of the weeks,” said P12. For P13, Community 1 showed a higher level of collaboration only during Week 5.

While not being specific about which was the more collaborative community, P7 expressed that Community 2 had a steady IG, while Community 1 still had a higher IG despite the drop in weeks 2 and 3. P15 was similarly unclear about which was the more collaborative community but explained that while Community 1 had the highest value of IG (presumably in Week 7), Community 2 had a steady value. P17 had a hard time interpreting the visualisation in the short amount of time available.

We next presented Fig. 8.7, depicting the mutuality of learners’ interactions (blue line) and the cognitive cost for learners to coordinate their actions (green line) per week for each community. The latter value is derived by examining the degree heterogeneity within learners’ social networks to represent their CCs. The participants examined the visualisation to decide which group, Community 1 or Community 2, was more collaborative, and why. As with previous questions, they were also asked whether their choice remained the same for all 7 weeks.

Fig. 8.7

Line graphs of Interactivity Gains and Coordination Cost (left – Community 1; right – Community 2)

P19 offered further insight into the graphs: “The results for community 2 seem to indicate that as mutual interactivity increases, cognitive cost decreases. This indicates a greater ability to collaborate. Community 1 does not follow this pattern in week 1 and 2, where mutual interactivity also correlates with higher cognitive costs and vice versa”.

P16 argued that “Community 2 was more collaborative”, and explained: “due to an increasing trend in interactivity gain and a decrease trend in coordination cost throughout”. It is quite interesting to note that this participant has long experience in neither online teaching nor online collaboration, yet seemed comfortable interpreting the graphs. In contrast, P13 noted that they did not understand how CC is calculated despite their extensive experience in both online teaching and online collaboration. P5 (also with the same experience in online collaboration) chose Community 2 and expressed that the “Community 2 process grows more steadily, which I believe is better than trying to catch up in the last week.”

Similar to their previous responses to the descriptive metric graphs, P17 also emphasised the difficulty of interpreting the visualisation in a short amount of time (within the workshop presentation period). P6, who did not give a definite conclusion, suggested that the two communities differed in the relation between the IG and the CC in Week 7. According to P18, Community 1 was more collaborative because its interactions were more heterogeneous. Other comments came from P1 (“Community 2 as they were consistent in their collaboration”) and P2 (“Community 2 is more collaborative because its ratio of gain and cost is higher than community 1. I think it will remain.”).

Next came the graphs in Fig. 8.8, highlighting the difference between the learners’ interactivity gains and their CCs per week for each community. The participants were again asked the same questions as above: to decide which community was more collaborative and to provide reason(s) for their decision.

Fig. 8.8

Line graphs of Interactivity Gain, Coordination Cost and the highlighted difference between the two measures (left – Community 1; right – Community 2)

Eight participants (P3, P5, P6, P8, P10, P11, P16 and P2) chose Community 2 as more collaborative. P3, P6, P11 and P16 provided explanations that related to the highlighted differences. These participants indicated that Community 2 was more collaborative since the difference between the CC and IG was lower in Community 2. They explained that the closer the lines (CC and IG) are to each other, the greater the chance of collaboration. Notably, only P6 and P16 of these four participants commented on the difference between Week 6 and Week 7. According to P6, Community 1 was more efficient than Community 2 from Week 6 through Week 7. P16, however, came to a different conclusion, suggesting that the two communities had the same level of collaboration in Weeks 6 and 7.

P2 concluded that Community 2 was more collaborative because “…its average ratio of gain to cost is smaller than community 1, and the ratio becomes smaller and smaller”. P4 and P15 found this visualisation intriguing and argued that it serves as a better “representation of the collaboration” than the previous charts with descriptive metrics. P19 was not able to make sense of these graphs, expressing “I am not convinced of how this ‘difference’ is useful in determining which community is more collaborative.” Finally, P18 noted the changes across the weeks, arguing that “At weeks 1, 2 and 6, community 2 looks more collaborative, other weeks community 1 looks more collaborative.”

When the participants were probed about whether they spotted a change in Community 1’s behaviour across the weeks, 17 participants (P3, P5, P6, P7, P8, P9, P10, P11, P12, P13, P15, P16, P17, P18, P19, P1 & P2) observed a change in Community 1’s behaviour after Week 5. The explanations given by P5, P6, P7, P8, P9 and P11 concerned exam, end-of-term, or grade-related activities; they were all aware of the peculiarities of students’ activities near assignment periods. P7 speculated that the changes in students’ collaboration could be because they needed to complete a final project and therefore had to make up for time they may have lost earlier. Additionally, P6 indicated that since the students were likely to have got to know each other better and shared a common purpose, that could also influence their interaction rate during those 2 weeks.

According to P13, P16, and P2, the changes might stem from the introduction of a new intervention/instructional design. Additionally, P16 mentioned the possibility that assessments could have been assigned at those points. Most participants focused on course-related activities, but P17 suggested that the instructor could have provided incentives to encourage students to interact before these weeks. For P19, “Week 5 shows a sudden increase in interactivity gain. Hard to say why without knowing more details- Perhaps due to an intervention (for example a particular assignment or debate) or given the time, maybe this is when students become more familiar with the online community?” P1 also agreed that better familiarity among the students could have led to the surge in Week 5.

Differences in Interpretations of Student Collaboration Based on the Descriptive Metrics and Collaboration Process Analytics Visualisations

Next, the participants were invited to comment on the type of insights they gained from the participation charts with descriptive metrics and the insights gained from the IG and CC charts.

The responses showed that half of the participants (for example, P13, who has over 5 years’ experience in online teaching and online collaboration) had difficulty making sense of the CC and IG charts. As a result, these participants could not interpret collaboration based on these graphs. The other half, however, indicated that the process analytics visualisations provided better metrics (than the number of posts and views) for determining the quality of collaboration. For instance, P19 expressed that the process graphs showed how collaboration relates to cognition: “I believe the second two charts show more granular information to understand the collaborative process more deeply and its effects (or gains) on cognition as a consequence.”

Similarly, P3, P5, P8, P11 and P17 indicated that the process graphs provided a better perspective on the collaboration process. Unlike the descriptive metrics, they give an understanding of whether the students are only posting randomly or also responding to one another (reciprocity as IG). Even though P17 acknowledged that the IG and CC charts provided more information about the collaboration, the participant requested that examples and training be provided to educators, to reduce the difficulty of explaining the concepts and to avoid out-of-context interpretations. P10 was satisfied with the descriptive metrics for making the necessary inferences about collaboration, whereas P7 and P1 were unsure whether the process charts offered better insights than the others. P1 explained that, regardless of the charts, familiarity among the students would influence their rate of collaboration, so other contextual information should be provided in addition to the descriptive and/or process graphs.

Finally, we asked the participants what other metrics educators would appreciate seeing to interpret students’ collaboration process. In addition to the number of social interactions among the students, the participants reported that they would like metrics about the quality of the (interaction) contents. While the number of social interactions among the students could be counted as an indication of collaboration, it might not be an accurate indicator of the quality of the collaboration. In essence, many participants wanted the quality of the content to be taken into consideration as well as the amount of interaction. As P11 argued, “From my point of view, the number of social interaction might not be an accurate indicator as students might replies to each other with a short sentence like ‘that is a great idea’ especially if they have been given marks according to their interaction, therefore content quality should be associated with the number of social interaction.”

In addition to quality metrics, the participants wanted to know the level of agreement or disagreement amongst the collaborating students, as well as the number of arguments initiated. Similarly, the level of off-task discussion and the specific tasks that lead to collaboration were considered equally important measures for interpreting collaboration. P15 said that it would be interesting if the data graphs were linked to specific tasks, to know the nature of the tasks. P19 wanted to know the exact type of collaboration activity students engage in. Another metric considered important was the number of contributions in a specific discussion topic thread rather than accumulated measures. Finally, the participants wanted data about the specific individual students who participated in the collaboration, including their existing knowledge of the topic discussed, their previous experience in collaboration and their familiarity with each other.

To gain a general sense of how useful the graphs discussed above would be to the participants in their practice, we asked whether they would be open to seeing such graphs for the lessons they teach online. Eleven participants (P7, P9, P10, P11, P12, P13, P14, P15, P16, P17, & P1) were interested in the descriptive metrics graphs because they deemed them useful for preparing feedback to students. Interestingly, eight (P7, P9, P10, P11, P12, P13, P15, & P16) were also interested in having access to the same graphs to improve their online teaching design, and seven (P9, P10, P11, P12, P13, P16, & P1) were also interested in them for evaluating the collaboration process.

Of these 11 participants, six (P7, P11, P12, P14, P15, & P16) were interested in the process analytics graphs with IGs and CCs for preparing feedback on the collaboration process. Moreover, five (P11, P12, P15, P16, and P17) of the 11 participants were interested in them for improving online teaching design, and ten were interested in accessing them to measure the students’ collaborative efforts. Most practitioners thought that the graphs could help them provide feedback and assess the process of collaboration. In general, only three (P11, P12, & P16) of all the participants wanted access to all the graphs for providing feedback, assessment and improvement of the collaboration process. Notably, P4 was the only participant who did not find any of the graphs useful.

Finally, inspired by the accountability dimension of the social translucence principles (Prestigiacomo et al., 2020), we investigated to what extent practitioners agreed on who should have access to the graphs. Most participants (16) agreed that the individual learner and the course management team should have access to the graphs. A significant number of participants (9) also stated that individuals should have access to other groups’ graphs. Except for P10, no participant considered it necessary to share the data with the public. P17 indicated that learners and instructors should have access to the data alongside training and explanations of the concepts.

6 Discussion

This section examines the research questions in light of the participants’ interpretations of the various collaboration analytics graphs presented above.

6.1 The Extent Tertiary-Level Educators Evaluate Their Students’ Online Collaboration and the Value of Descriptive Metrics

The results above show that most of our tertiary-level educators (12 of 19 participants) do not evaluate their students’ online collaboration activities as part of their teaching practice at all. Those who do evaluate it prefer to do so for students collectively as a group rather than for individual students’ collaborative actions. This is quite interesting to observe, since all of the participants said that they use collaborative activities in their teaching. One of the workshop participants said: “I speak for myself, and probably for a lot of teachers, these kinds of things are very implicitly measured, and when they do, they usually follow heuristics. I mean a very implicit approach, like… okay … are students posting messages here? Or is the forum empty? Do they share messages and materials? To what extent are they getting on the task? To what extent they are dividing the task – you know – like you do this, you do this, and I do that, and we just assemble the different pieces. For most of us, usually, the assessment is very implicit and very heuristic, even when it is online.”

While most of the participants use collaborative learning to engage their students in active learning, some use it for knowledge building, and a few others use it for skill development. Those who said that they evaluate students’ online collaboration reported that they look at the quality of students’ posts from the domain knowledge and dialogic perspectives. Although most learning management platforms provide little more than the number of posts and views for evaluating online discussions, none of the educators emphasised in their free responses that the number of posts and views are very valuable sources for gauging how much their students are collaborating. Nevertheless, the educators’ interpretations of the graphs of the total number of posts and views suggested that they are conversant with these metrics. Using the total number of posts and views for the two communities, educators can make some sense of what is going on in their students’ online collaboration activities. These metrics give teachers some understanding of participation levels, but less of the interactive and dialogic levels. Aside from the total number of posts and views, the educators cited the consistent and steady increase of the metrics exhibited by Community 2 as a reason for their decisions. Although educators were familiar and comfortable with interpreting these descriptive metrics, they did not consider them very insightful for interpreting students’ online collaboration.

We found that participants’ experiences with online teaching, online collaborations, and their general knowledge of graphs had little influence on their interpretation of collaboration from descriptive metrics. However, this is likely to be due to the participants’ high level of digital and data literacy in general, as our sample was drawn from university-level educators. Although both novice and experienced teachers made sense of the descriptive metrics and their graphs, they expressed that the number of posts and views in online discussions is not a good indicator of collaboration. Contrary to what is commonly found in human-computer interaction research (Dix et al., 2004), participants did not state that they needed time to get familiar with these kinds of charts, which might be considered an advantage.

6.2 The Added Value of CLaP Analysis (IGs and CCs)

Our results show that the educators were neither aware of nor familiar with the theoretical considerations of collaborative cognitive load theory, namely interactivity costs and coordination costs. Furthermore, the number of participants who interpreted the data from the CLaP analysis accurately was lower than for the previous descriptive metrics. However, participants extracted more detailed information about the process of collaboration from the graphs of CCs and IGs. These graphs provided practitioners with a better understanding of the learners’ cognitive processes during collaboration. One of our workshop participants said: “From my own teaching experience, I used to evaluate my students upon their posts number and the quality of the posts themselves. But now after reading the paper and being involved in this interaction, I have changed my mind as my eyes have been opened… I think the posts number is not enough by itself, and we need to collect a lot of metrics together to evaluate the collaboration process… for me, as a teacher, to see this graph is really useful as opposed to the basic analytics as just posts number.”

Educators appreciated being able to perceive the cognitive costs the students invested and the benefits they were able to gain during this process. It is interesting to note that they “decoded” that the higher the CCs, the lower the IG scores the learners would be able to attain, and vice versa. Only in weeks 6 and 7 did the CC not decrease substantially, while the IG line showed a steep increase. This peculiarity was due to the introduction of exam-related activities during the six-week period in our dataset. Having encountered situations like this before, the participants were able to explain that the teacher must have introduced some exam- or grade-related activities to motivate the students and enhance their collaboration. The educators specifically reflected on the temporal dimension of the analysis: “Looking at these two graphs, I get the impression that both groups ended up more or less at the same level, but the process is very different. I mean community two was collaborating in a more steady fashion, while community 1 is trying to catch up more on the last day. In my experience when they try to catch up at the last day, the product may be good, but given that the process is not good, learning tends to be not so good.”

These kinds of interpretations and opportunities to reflect on their students’ collaborative patterns of behaviour can potentially be valuable for teachers seeking to improve their practice. Learning analytics graphs typically capture and visualise traces of learning events to facilitate understanding and contemplation (Verbert et al., 2013). That is, beyond recognising what happened to learners during the collaboration process, the graphs should initiate reflection that helps instructors make sense of how the different activities or interventions they introduced influenced the students’ collaboration. Although the participants did not have any prior knowledge of the CLaP analysis, the results indicated that it has the potential to assist educators in understanding online collaboration. Such a theory-based understanding of collaboration may help instructors draw on learning theories when making changes aimed at improving students’ interactions. While some of these strategies might yield positive outcomes, others might have a negative or null effect on the process. For example, when reflecting on being able to follow the CC angle of learning, one of the workshop participants said: “If my goal for students was to develop collaboration skills (rather than domain knowledge), I might appreciate coordination costs because students can then reflect on how they collaborated and what was hindering/helping”. Another participant tried to connect CC to the pedagogical aim: “I am struck by these two graphs which actually shows that the cost is much higher in both communities, and I ask myself why are students continuing to interact… if my goal is for students to learn from the interaction – that is striking.”

It is worth noting that participants with more years of experience sometimes ignored metrics that they were unfamiliar with. This is similar to what Swidan et al. (2019) observed when educators engaged with a dashboard: less experienced participants chose to follow the instructions on the dashboard, while more experienced participants followed their own preferences. This is perhaps an example of the “Illusion of Validity” (Kahneman, 2011), where they trusted their own judgment more than what the dashboard indicated. Alternatively, it might be due to experts’ tendency to ‘automate’ their decision making and not deliberate on details. These results further highlight the need for human-centred approaches (Buckingham Shum et al., 2019) to address the needs of educators.

To our surprise, most practitioners found it difficult to understand the changes in Community 1 during the last two weeks when we asked them to observe these changes alongside other interpretations, whereas this was not the case when the interpretations were made separately. This is most likely due to the density of information on the collaboration process graphs (Lim et al., 2019). Indeed, previous research indicates that some learning analytics visualisations in collaborative learning settings might increase teachers’ cognitive load (van Leeuwen, 2015) and may be perceived as “extra workload” (Chounta & Avouris, 2016). Therefore, we note the importance of simplifying the graphs to enhance educators’ cognitive ease when interpreting the results. Recent research into data storytelling approaches in learning analytics contexts suggests potential ways to accomplish this (Martinez-Maldonado et al., 2020). We argue that collaboration analytics visualisations may be more beneficial if they focus on fewer points/features for educators to interpret at a time (Echeverria et al., 2018). In CLaP visualisations, this might be achieved by clearly explaining to teachers the indexes used to calculate the IG and CC values, presenting IG and CC values in separate graphs, and showing the weekly progress of the CLaP values for each cohort rather than a graph based on accumulated values. However, these assumptions should be studied experimentally, and potential solutions for supporting educators’ interpretations of analytics visualisations should come from co-design and participatory design sessions with educators.
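As a purely illustrative sketch of this suggestion, the following Python snippet plots the two metrics in separate panels and as weekly rather than accumulated values. The IG and CC numbers are entirely hypothetical (not data from our study), and the snippet assumes matplotlib is available; it is not the CLaP implementation, only one possible way to reduce the visual density discussed above.

```python
# Illustrative only: hypothetical weekly IG (information gain) and CC
# (coordination cost) values for one community; not data from the study.
import matplotlib.pyplot as plt

weeks = list(range(1, 8))
ig_weekly = [0.10, 0.15, 0.22, 0.18, 0.25, 0.40, 0.45]   # hypothetical values
cc_weekly = [0.30, 0.28, 0.26, 0.24, 0.22, 0.21, 0.20]   # hypothetical values

fig, (ax_ig, ax_cc) = plt.subplots(2, 1, sharex=True, figsize=(6, 5))

# Separate panels keep each metric visually simple (one line per panel),
# and weekly values are shown instead of accumulated totals.
ax_ig.plot(weeks, ig_weekly, marker="o")
ax_ig.set_ylabel("IG (weekly)")
ax_ig.set_title("Weekly values rather than accumulated totals")

ax_cc.plot(weeks, cc_weekly, marker="o", color="tab:red")
ax_cc.set_ylabel("CC (weekly)")
ax_cc.set_xlabel("Week")

fig.tight_layout()
plt.show()
```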

7 Conclusions

The process of collaboration is not as straightforward to evaluate and support as its learning outcomes (e.g., students’ academic grades, group project outcomes). In contrast to the latter, where pre- and post-test results can be examined to comprehend the impact of collaboration, the former requires exploring cognitive processes not just individually but also collectively. For more details on the operationalisation of collaborative cognitive load theory with social network analysis and metrics, readers are referred to Kent and Cukurova (2020). The purpose of this chapter, however, was to study educators’ needs concerning such analytics, as well as their interpretations of them.
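For readers unfamiliar with that line of work, the short sketch below illustrates only a generic preliminary step that such an operationalisation might involve: representing who-replied-to-whom forum data as a directed network and computing simple descriptive metrics with networkx. The data and metric choices are hypothetical, and the actual IG and CC calculations are those described in Kent and Cukurova (2020), not reproduced here.

```python
# A generic, illustrative first step only: build a reply network from
# hypothetical forum data and compute simple SNA metrics. This is NOT the
# CLaP operationalisation of IG and CC (see Kent & Cukurova, 2020).
import networkx as nx

# Hypothetical who-replied-to-whom records: (author_of_reply, author_replied_to)
replies = [
    ("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
    ("alice", "carol"), ("dave", "bob"), ("carol", "bob"),
]

graph = nx.DiGraph()
graph.add_edges_from(replies)

# Simple descriptive network metrics that such analyses often start from.
in_centrality = nx.in_degree_centrality(graph)    # how much a student is replied to
out_centrality = nx.out_degree_centrality(graph)  # how much a student replies
density = nx.density(graph)                       # overall interconnectedness

print("In-degree centrality:", in_centrality)
print("Out-degree centrality:", out_centrality)
print("Network density:", round(density, 3))
```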

The visualisations of cognitive processes must be easily understood by instructors and learners so that they can reflect on the collaboration process, glean insights from it, and intervene accordingly. Here, we investigated tertiary-level educators’ needs and their interpretations of the online collaboration process. More specifically, we looked at the extent to which they evaluate students’ online collaboration in their teaching, how much they value descriptive metrics on students’ contributions to and views of online discussions, and the extent to which the visualisations of IGs and CCs can help them gain better insights into the process of collaboration.

Overall, the participants found the descriptive metrics of the total number of posts and views useful for “broad interpretations” and a “superficial understanding”. However, the visualisations of CCs and IGs appeared to strengthen their understanding of the communities’ collaborative and cognitive processes. Such an understanding was also connected to more detailed and timely interventions for learning, which would not be possible based only on the number of posts and views. On the other hand, educators still considered all of the graphs studied here limited in various ways for assessing online collaboration, as well as too complex to be quickly adopted in practice.

7.1 Limitations and Future Research

All our participants were recruited from departments that are in some way associated with learning analytics, including educational psychology, learning sciences, and computer science. Although the participants did not frequently use learning analytics in their own teaching, it is important to acknowledge that they all had some research interest in this space and wanted to explore it further in the future. This background is likely to skew some of the results presented here. In addition, since this research was conducted at the height of the COVID-19 pandemic, it would be interesting to know whether the findings would remain the same pre- or post-COVID. With most academic activities having migrated online, educators needed a good understanding of their learners’ online activities. It was surprising to find that although all of the educators use collaborative activities in their teaching, most do not evaluate their students’ online collaboration at all, which might affect their ability to provide helpful feedback. Furthermore, we recognized that the participants were at a disadvantage in not having detailed information about the specific activities learners undertook to generate the metrics they were presented with. On account of this, we consider it important to conduct similar research with participants who have a good understanding of the tasks the learners completed, and to investigate their interpretations of the CLaP analytics.