
1 Introduction

Technology-Enhanced Learning is now widely deployed for both distance and face-to-face learning, which makes it possible to collect and analyze large amounts of traces of learning activities in order to help learners. By analyzing the collected data, we can understand how users learn with technology, develop new learning tools and offer a personalized learning path to each learner. Among the possibilities offered by learning analytics, Learning Analytics Dashboards (LADs) are a particularly popular approach to support learning. In recent years, several reviews have shown the growing interest in LADs [4, 11, 17, 18, 19]. However, they have also revealed the need for further research to design LADs that are better grounded in learning sciences and learning theories, in particular because of the lack of visualizations specific to learning and teaching activities. Sahin and Ifenthaler [17, p. 18] identified the need to involve stakeholders in defining “which metrics are important” to them. Recent work by Ahmad et al. [2, p. 66] likewise concluded “that it is necessary to keep students and their opinions in the loop”. In the case of students’ LADs, previous works [16, 21] concluded that students need adapted LADs with personalized displays. To meet this need, co-design tools have been developed to involve students in the design of LADs [5, 7, 8, 14, 15]. This involvement is all the more important as, even though the indicators and visualizations used in LADs can help students, e.g. by making them aware of their progress [11], they can also have negative consequences, e.g. peer comparison in some contexts [20, 21]. The importance of involving students in the design of LADs having been established, the next section focuses on the content of these LADs, in particular on the indicators and visualizations that compose them.

1.1 Previous Works

Schwendimann [19, p. 37] defines a Learning Dashboard as “a single display that aggregates different indicators about learner(s), learning process(es) and/or learning context(s) into one or multiple visualizations”. Several studies have considered existing indicators and their associated visualizations for LADs. Definitions of indicators vary according to the context. Glahn et al. [6, p. 2] provide a fairly generic one: “Indicators are mechanisms to provide simplified information that are valuable to a task. With some background knowledge we can understand the meaning of an indicator without the need of knowing about the details of the underlying process or mechanism”. In a review on LA indicators, Morais Canellas [13, p. 107] defines a learning analytics indicator as “a calculated (quantitative or qualitative) measure [computability property] linked to a behaviour or an activity instrumented by the [traceability property] of one or more learners, visible to a user [visibility property] and which can be used to calculate other indicators”. LADs are composed of several kinds of indicators, and even though some general definitions exist, Ifenthaler [9, p. 168] noted that “standards for indicators, visualizations, and design guidelines that would make learning analytics pedagogically effective are lacking”. The literature [3, 7, 8, 12] describes various indicators used in students’ LADs, but to the best of our knowledge, there is no consensus on a single exhaustive list of indicators for students’ LADs, or even on a single way to categorize them. Ifenthaler and Yau [10] worked on indicators that support success in predicting learning outcomes; although some of them can be used in LADs, they focused on identifying at-risk students and did not attempt to cover all the expectations placed on students’ LADs. Depending on the study, LAD indicators are classified into different categories. Following the aforementioned definition from [19], we can consider three categories: indicators about learner(s), learning process(es) and/or learning context(s). According to Jivet et al. [12], indicators used in LADs can be of two kinds: learning behaviour indicators, which provide information at the “learning process level”, and content progress indicators, which provide information at the “task level”. A study from Gartner [1] on LA indicators classified them into four types of analysis, based on the nature of the analysis performed and increasing both in value for stakeholders and in difficulty to compute: descriptive (what happened?), diagnostic (why did it happen?), predictive (what will happen?) and prescriptive (what should be done?).

Some previous works producing LADs for students involved them in different phases, from conception to prototyping. Hilliger et al. [8, p. 118] identified that “the design of any dashboard should anticipate that its use could have a different effect depending on the context and the targeted user”. With this approach, they defined indicators “relevant at the moment of choosing courses” [p. 127] for a dashboard with a specific objective. This observation highlights the implicit need to consider indicators that are relevant at a given moment in time. For this reason, Gras et al. [7] developed an interactive dashboard that can be customized by first-year university students, on the premise that “one size may not fit all” when it comes to students’ LADs.

Overall, these studies define indicators and visualizations for students’ LADs with specific objectives and adapt them to a learning context. However, there is a lack of work on the generalizability of LAD indicators (1) to different students in the same context or (2) to different learning contexts.

1.2 Objectives

In this paper, we investigate the need for LAD adaptation through a different approach from previous works. Namely, we seek to understand the needs for student LADs, not for an artificially imposed goal, but for a goal chosen by each student themselves. The underlying assumption is that students may be more at ease proposing their ideal indicators when they do so in a real context (a given course they are registered in) and for a goal that they deem relevant in the first place. Then, using data from the numerous co-design sessions organized with university students, we tried to identify which indicators (and, to a lesser extent, which visualizations) are spontaneously wished for by students depending on several elements of context, such as their discipline, the study duration (short or long), their level (undergraduate or graduate), and the moment in the semester (at the beginning of a course, during an ongoing course, or towards the end of it, right before the final exam). More precisely, we attempted to answer the following research questions:

  • RQ1: Is there a set of indicators for students’ LADs that covers their expectations?

  • RQ2: Are there shared expectations of students for their LAD? That is, are there indicators frequently wished for by a majority of students who share the same objective for their LAD?

  • RQ3: Are there different expectations of students for their LAD, depending on their learning context (study duration, level) and/or the moment in the semester, for a given LAD objective?

2 Material

To answer our research questions, we organised LAD co-design sessions using a co-design tool (the PADDLE or ePADDLE method [14]), either online or face-to-face, with students in the different contexts presented in Table 1. In this paper we use data from a total of 108 groups of 2 to 4 students (N = 386 students overall). Each group was asked to design a LAD for a specific objective among 6 possible ones (monitoring, planning, communication, evaluation, evolution, remediation), which they chose themselves at the beginning of the session. A co-design session lasted an average of 91 min (SD = 25 min). For each LAD, we collected the list of indicators defined by the students, each with a name, a description and one or more visualization(s), together with a drawing of the final dashboard (see Fig. 1 for some examples of such dashboards). The LADs produced by the students contain an average of 7.04 indicators or data items (SD = 2.17).

Table 1. The 7 different contexts of students participating in co-design sessions

3 RQ1: Set of Indicators for Students’ LAD

3.1 Method

From the raw data, for each indicator defined by a group of students, we inferred a generic title so that the content of the different LADs could be compared. For each of the 108 LADs, we then listed the indicators wished for by the students and the associated visualization(s) they described or drew.
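As a rough illustration of this counting step (not the actual study pipeline), the tally behind Fig. 2 could be reproduced with a few lines of Python, assuming the coded data are available as one list of generic indicator titles per LAD; the variable names and example entries below are hypothetical:

```python
from collections import Counter

# Hypothetical coded data: one list of generic indicator titles per LAD group
# (the real study has 108 such lists); the titles and entries here are made up.
coded_lads = [
    ["grade", "timetable", "peer comparison"],
    ["grade", "remaining work", "weather"],
    ["timetable", "grade", "personal agenda"],
]

# Count in how many LADs each generic title appears (presence per group, not multiplicity).
counts = Counter(title for lad in coded_lads for title in set(lad))

# Percentage of groups wishing for each indicator, sorted as in Fig. 2.
n_groups = len(coded_lads)
for title, count in counts.most_common():
    print(f"{title}: {100 * count / n_groups:.0f}% of groups")
```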

Fig. 1. Examples of LAD drawings produced by students

3.2 Results and Discussion

Overall, we listed 761 data and indicators with their visualization(s). After coding them with generic titles, we obtained 54 indicators divided into 12 thematics and using 24 different visualizations. Some thematics (monitoring, planning and communication) overlap with the objectives of the LADs. It is worth noting that not all of the 54 indicators wished for by students match the definitions of a LAD indicator from [12] or [13], as students sometimes wish for information that is not directly linked to their learning but that they consider useful for planning their learning sessions (such as the weather or their personal agenda). Figure 2 shows all 54 indicators and data sorted by the percentage of groups that listed them. We can observe that no single indicator appears in all LADs (the most requested indicator, the grade, is asked for by 60% of students) and that only a limited number of data and indicators are desired by the majority of students.

We can also see that the needs expressed are not limited to indicators: students also ask for data, whether related to learning or not. We chose to keep both indicators and data in our analysis because the LADs designed by the students form coherent wholes. This is in line with Jivet’s findings: “different tools should complement dashboards and be seamlessly integrated in the learning environment and the instructional design” [11, p. 93]. In our study, students may have defined learning environments rather than LADs.

Fig. 2. Indicators and data sorted by the number of groups that selected them

4 RQ2: Prevalence of Most Frequent Learning Indicators by Objectives

4.1 Method

Using the list of 54 indicators identified for RQ1, we looked for the 5 most wanted data and indicators per objective (monitoring, planning, communication, evaluation, evolution, remediation) and also analyzed the associated visualizations chosen.
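As an illustrative sketch only (the column names and example rows below are hypothetical, not the study’s actual data), this per-objective top-5 selection could be expressed with pandas as follows:

```python
import pandas as pd

# Hypothetical long-format table: one row per (LAD group, wished indicator) pair.
df = pd.DataFrame({
    "group_id":  [1, 1, 2, 2, 3, 3],
    "objective": ["planning", "planning", "monitoring", "monitoring", "planning", "planning"],
    "indicator": ["timetable", "grade", "grade", "remaining work", "grade", "timetable"],
})

# Share of groups, per objective, that wished for each indicator.
groups_per_objective = df.groupby("objective")["group_id"].nunique()
share = (
    df.drop_duplicates(["group_id", "indicator"])
      .groupby(["objective", "indicator"]).size()
      .div(groups_per_objective, level="objective")
      .mul(100)
)

# Keep the 5 most wanted data/indicators per objective (ties would need extra handling, as in Table 2).
top5 = share.sort_values(ascending=False).groupby(level="objective").head(5)
print(top5.round(0))
```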

4.2 Results and Discussion

We obtained 17 items, presented in Table 2. Almost all data and indicators listed in Table 2 were wished for by groups from different kinds of study programs, except those marked with *. We can observe that some data and indicators are shared across different objectives and different study programs, but none is wished for by all groups. For the 5 most wanted data and indicators, students expressed different visualization needs for each data item or indicator (between 5 and 17 different visualizations).

Table 2. Compilation of the 5 most wanted data and indicators, including ties, for each objective (monitoring [MON], planning [PLA], communication [COM], evaluation [EVA], evolution [EVO], remediation [REM]), presented by % of groups concerned

By comparing the desired data and indicators according to the LADs’ objectives, we identified some items that are shared by several learning contexts, but none of them holds for all situations. Even though the grade seems to be wanted by a majority of groups, this data can be represented by different visualizations. This indicator appears in only 35% of LADs with a planning objective, perhaps because the planning objective groups together several sub-objectives, such as planning one’s work for the semester or planning one’s revisions. The peer comparison indicator, which is often proposed in LADs, varies greatly, ranging from 20% to 72% of groups depending on the objective. This result confirms the need to adapt LADs to the learning context and target, as identified in previous works. To go further, we should explore whether, for the same learning context (same student cohort, same study program, same year, same LAD objective), there are shared expectations of students for their LAD.

5 RQ3: Links Between Indicators and Need Profiles

5.1 Method

To identify a possible link between indicators and need profiles, we looked at the variation in expressed needs and selected the 5 data and indicators that varied the most according to:

  • the study year: first, second and fifth year,

  • the moment when the co-design session took place: at the beginning, the middle or the end of the semester.

To complete this approach, additional analyses were conducted using SAS software (SAS v9.4, SAS Institute Inc., Cary, NC, USA). Categorical variables are presented as absolute numbers and percentages. Chi-square tests or Fisher’s exact tests were used, as appropriate, to compare proportions of a categorical variable. When a statistically significant result was found, Cramér’s V was used to estimate the strength of the association between the variables. A two-tailed type I error rate of 0.05 was considered for statistical significance.
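The analyses were run in SAS; purely as a minimal, equivalent sketch (in Python with scipy, and with an invented contingency table rather than the study data), the chi-square test and Cramér’s V computation look like this:

```python
import numpy as np
from scipy import stats

# Hypothetical 2x3 contingency table: rows = groups wishing / not wishing for a thematic,
# columns = study year (1st, 2nd, 5th). Counts are invented for illustration only.
table = np.array([[ 7, 13, 15],
                  [13,  7,  6]])

# Chi-square test of independence (Fisher's exact test would replace it when
# expected counts are too small; scipy.stats.fisher_exact covers the 2x2 case).
chi2, p, dof, expected = stats.chi2_contingency(table)

# Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1))): strength of the association.
n = table.sum()
v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))

print(f"chi2 = {chi2:.2f}, p = {p:.4f}, Cramér's V = {v:.2f}")
```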

5.2 Results and Discussion

By exploring the variations in the needs expressed by students, we identified different wishes according to the learning context variables.

Variation Between Study Years. The indicators which varied the most according to the study year are presented in Table 3. We identified statistically significant links between several thematics and the study year:

  • \(1^{st}\) year students are less interested in indicators and data of the information thematic (\(p=0.03<0.05\), \(V=0.22\)), with 0% of \(1^{st}\) year students wanting this kind of indicator vs. 23% of \(2^{nd}\) and \(5^{th}\) year students (medium association).

  • \(1^{st}\) year students are also less interested in planning data than the others (\(\chi ^2=7.35\), \(p=0.02<0.05\), \(V=0.26\)), with 35% for \(1^{st}\) year vs. 65% and 71% for \(2^{nd}\) and \(5^{th}\) year respectively (medium association).

  • \(1^{st}\) year students are markedly more interested in data and indicators about projects (\(p<.0001\), \(V=0.62\)), with 50% of \(1^{st}\) year groups vs. less than 2% for the others (strong association).

  • finally, data about personal life mainly interested \(2^{nd}\) year students (\(p=0.0057<0.05\), \(V=0.31\)), with 33% vs. 5% for \(1^{st}\) year and 10% for \(5^{th}\) year students (medium association).

Table 3. The 5 data and indicators which varied the most according to the study year presented by % of groups concerned

\(1^{st}\) year students seem to expect less information aimed at implementing a learning strategy (coefficient, formative assessment). On the other hand, they are more interested in data and indicators about project management. In our study, \(1^{st}\) year students came from two academic programs, while \(2^{nd}\) and \(5^{th}\) year students came from another one, which could bias the results. The majority of first year students follow an IT curriculum with a project-based pedagogical approach, which can explain their high interest in this kind of indicator. All \(2^{nd}\) year students are pharmacy students who have just passed the very intense \(1^{st}\) year of health studies; this could explain the importance of personal life for them, as they hope to find some leisure time. To refine this first result, data from several academic programs would be needed for each sample. The type of study (duration, thematic, pedagogical approach) probably also influences students’ expectations, so the year of study should be coupled with this information to refine the result.

Variation Between Moments in the Semester. The variations in students’ expectations over time are presented in Fig. 3. These results seem globally aligned with what one might naturally expect. At the beginning of the semester, students need to plan the semester with basic information (timetable, evaluation dates, expected working time) and also consider planning their personal life. The demand for planning information decreases in the middle and at the end of the semester. It is replaced by two kinds of indicators: a learning one, the status of knowledge or skills (acquired and/or to be acquired), and a monitoring one, the state of play and/or the remaining work, probably in order to be ready for the exams. We have often seen LADs adapted to the learning context and/or adaptable by the students, but to our knowledge, no adaptation has been provided by the system according to the moment in time. To go further and refine our results, adaptation over time by the system should be explored as an additional adaptation possibility.

Fig. 3. Variation over time (% of groups who requested these indicators, broken down by moment)

6 Conclusion

As previous work has identified, LADs for students need to be adapted to the learning context and by the students themselves. We have pursued this line of work by investigating whether there are shared data or indicators between LADs with different objectives and whether it is possible to identify adaptation needs according to different variables linked to the learning context. Our results seem to indicate that some data and indicators are more often desired by students, but in all cases they remain specific to the objective of the LAD. Students’ needs also seem to change over time, depending on the moment in the semester (beginning, middle, end): they would like useful planning information at the beginning of the semester, whereas at the end of it they seem to prefer indicators assessing the knowledge and skills acquired and the progress on the remaining work. This result opens new possibilities for adapting LADs according to time. Our next steps will be to define from these results an adaptive LAD model for students and to experiment with real LADs with students.