Abstract
Learning Analytics Dashboards (LADs) are becoming a key element in enabling learners to monitor their learning, plan it and actually learn. However, LADs are sometimes not completely adapted to students, who are rarely involved in their design. Moreover, even when they are, the implemented LADs are often the same for all students, whereas previous works have shown the value of adapted LADs. Here we investigate which adaptations are requested by students, and attempt to identify which data and visualizations are suitable depending on the student’s profile. More specifically, we consider dynamic profiles, as students’ expectations can vary over the course duration. Through LAD co-design sessions, both online and on-site, we collected needs from N = 386 university students from different disciplines and degree levels, split into 108 groups (2 to 4 students each). After a manual annotation, we identified a total of 54 types of data and indicators, grouped into 12 themes. Our first analysis confirmed some previous results, particularly regarding peer comparisons, which do not fulfill every student’s needs, and revealed other expectations depending on the student’s learning context or the academic period. Future work will benefit from these results to define a model of adapted LADs.
1 Introduction
Technology-Enhanced Learning is nowadays widely developed for both distance and face-to-face learning, which allows the collection and analysis of large amounts of learning activity traces in order to help learners. By analyzing the data collected, we can understand how users learn with technology, develop new learning tools and offer a unique learning path to each learner. Among the possibilities offered by learning analytics, Learning Analytics Dashboards (LADs) are a particularly popular approach to support learning. In recent years, several reviews have shown the growing interest in LADs [4, 11, 17,18,19]. However, they also revealed the need for further research to design LADs with a better grounding in learning sciences and learning theories, in particular due to the lack of specific visualizations for the activities of learning and teaching. Sahin and Ifenthaler [17, p. 18] identified the need to involve stakeholders to define “which metrics are important” to them. Recent work by Ahmad et al. [2, p. 66] also concluded “that it is necessary to keep students and their opinions in the loop”. In the case of students’ LADs, previous works [16, 21] concluded that students needed adapted LADs with personalized displays. To meet this need, co-design tools have been developed to involve students in the design of LADs [5, 7, 8, 14, 15]. This approach is all the more important as, even if the indicators and visualizations used in LADs can be helpful for students, e.g. to make them aware of their progress [11], they can also have negative consequences, e.g. peer comparison in some contexts [20, 21]. The importance of involving students in the design of LADs having been established, the next part focuses on the content of these LADs, in particular on the indicators and visualizations that compose them.
1.1 Previous Works
Schwendimann [19, p. 37] defines a Learning Dashboard as “a single display that aggregates different indicators about learner(s), learning process(es) and/or learning context(s) into one or multiple visualizations”. Several studies considered existing indicators and their associated visualizations for LADs. Indicators’ definitions vary according to the context. Glahn et al. [6, p. 2] provide one that is quite generic: “Indicators are mechanisms to provide simplified information that are valuable to a task. With some background knowledge we can understand the meaning of an indicator without the need of knowing about the details of the underlying process or mechanism”. In a review on LA indicators, Morais Canellas [13, p. 107] defines a learning analytics indicator as “a calculated (quantitative or qualitative) measure [computability property] linked to a behaviour or an activity instrumented by the [traceability property] of one or more learners, visible to a user [visibility property] and which can be used to calculate other indicators”. LADs are composed of several kinds of indicators and, even if some general definitions exist, Ifenthaler [9, p. 168] notes that “standards for indicators, visualizations, and design guidelines that would make learning analytics pedagogically effective are lacking”. In the literature [3, 7, 8, 12], we can find studies describing various indicators used in students’ LADs, but to the best of our knowledge, there is no consensus on a single exhaustive list of indicators for students’ LADs, or even on a single way to categorize them. Ifenthaler and Yau [10] worked on indicators to support success in predicting learning outcomes; although some of them can be used in LADs, they focused on identifying students at risk and did not attempt to cover all the expectations of students’ LADs. Depending on the studies, LADs’ indicators can be classified in different categories. We can use the aforementioned definition from [19] to consider three categories: indicators about learner(s), learning process(es) and/or learning context(s). According to Jivet et al. [12], indicators used in LADs can be of two kinds: learning behaviour indicators, which provide information at the “learning process level”, and content progress indicators, which provide information at the “task level”. A study from Gartner [1] on LA indicators classified them into four types of analysis, based on the nature of the analysis performed, which increase both in value for stakeholders and in difficulty to compute: descriptive (what happened?), diagnostic (why did it happen?), predictive (what will happen?) and prescriptive (what should be done?).
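To make these categorization schemes more concrete, the sketch below (ours, not taken from the cited works) expresses them as simple Python enumerations; the names are purely illustrative.

```python
from enum import Enum

# Illustrative encoding (ours) of the indicator categorization schemes discussed above.

class SchwendimannCategory(Enum):   # derived from the definition in [19]
    LEARNER = "about learner(s)"
    LEARNING_PROCESS = "about learning process(es)"
    LEARNING_CONTEXT = "about learning context(s)"

class JivetKind(Enum):              # from [12]
    LEARNING_BEHAVIOUR = "learning process level"
    CONTENT_PROGRESS = "task level"

class GartnerAnalysisType(Enum):    # from [1]
    DESCRIPTIVE = "what happened?"
    DIAGNOSTIC = "why did it happen?"
    PREDICTIVE = "what will happen?"
    PRESCRIPTIVE = "what should be done?"
```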
Some previous works producing LADs for students involved them in different phases, from conception to prototyping. Hilliger et al. [8, p. 118] identified that “the design of any dashboard should anticipate that its use could have a different effect depending on the context and the targeted user”. With this approach, they defined indicators “relevant at the moment of choosing courses” [p. 127] for a dashboard with a specific objective. This observation highlights the implicit need to consider indicators that are relevant at a given moment in time. Along the same lines, Gras et al. [7] developed an interactive dashboard that first-year university students can customize, implying that “one size may not fit all” when it comes to students’ LADs.
Overall, we see that these studies define indicators and visualizations for students’ LADs with specific objectives and an adaptation to a given learning context. However, there is a lack of work on the generalizability of LAD indicators (1) to different students in the same context or (2) to different learning contexts.
1.2 Objectives
In this paper, we investigate the need for LAD adaptation through a different approach from previous works. Namely, we seek to understand the needs for student LADs, not for an artificially imposed goal, but for a goal chosen by each student themselves. The underlying assumption is that students may be more at ease proposing their ideal indicators when they do so in a real context (a given course they are registered to) and for a goal that they deem relevant in the first place. Then, using data from numerous co-design sessions organized with university students, we tried to identify which indicators (and, to a lesser extent, which visualizations) are spontaneously wished for by students, depending on several elements of context such as their discipline, the study duration (short or long), their level (undergraduate or graduate), and the moment in the semester (at the beginning of a course, during an ongoing course, or towards the end of it, right before the final exam). More precisely, we attempted to answer the following research questions:
- RQ1: Is there a set of indicators for students’ LADs that covers their expectations?
- RQ2: Are there shared expectations of students for their LAD, i.e. are there indicators desired by a majority of students who have the same objective for their LAD?
- RQ3: Are there different expectations of students for their LAD, depending on their learning context (study duration, level) and/or the moment in the semester, according to the LAD’s objective?
2 Material
To answer our research questions, we organised LAD co-design sessions using a co-design tool (the PADDLE or ePADDLE method [14]), online or face-to-face, with students in the different contexts presented in Table 1. In this paper, we use data from a total of 108 groups of 2 to 4 students (N = 386 students overall). Each group was asked to design a LAD for a specific objective among 6 possible ones (monitoring, planning, communication, evaluation, evolution, remediation), which they chose themselves at the beginning of the session. A co-design session lasted 91 min on average (SD = 25 min). For each LAD, we collected the list of indicators defined by the students, with a name, a description, one or more visualization(s), and a drawing of the final dashboard (see Fig. 1 for examples of such dashboards). The LADs produced by the students contain an average of 7.04 indicators or data items (SD = 2.17).
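Each co-designed LAD thus yields a structured record. As a minimal sketch, with hypothetical field names rather than the actual data schema used in the study, such a record could be represented as follows:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of one indicator wished for by a group;
# names and fields are illustrative, not the authors' actual schema.
@dataclass
class WishedIndicator:
    name: str                 # label given by the students
    description: str          # free-text description
    visualizations: List[str] # e.g. ["bar chart", "gauge"]

@dataclass
class CoDesignedLAD:
    group_id: str
    objective: str            # one of: monitoring, planning, communication,
                              # evaluation, evolution, remediation
    study_year: int           # e.g. 1, 2 or 5
    moment_in_semester: str   # "beginning", "middle" or "end"
    indicators: List[WishedIndicator] = field(default_factory=list)
```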
3 RQ1: Set of Indicators for Students’ LAD
3.1 Method
From the raw data, for each indicator defined by a group of students, we inferred a generic title in order to be able to compare the content of the different LADs. For each of the 108 LADs, we listed the indicators wished for by students and the associated desired visualization(s) they described or drew.
3.2 Results and Discussion
Overall, we listed 761 data items and indicators with their visualization(s). After coding with generic titles, we obtained 54 indicators grouped into 12 themes and using 24 different visualizations. Some themes (monitoring, planning and communication) overlap with the objectives of the LADs. It is worth noting that not all of the 54 indicators wished for by students match the definitions of a LAD indicator from [12] or [13], as students sometimes wish for information that is not directly linked to their learning but which they consider useful in order to plan their learning sessions (such as the weather or their personal agenda). Figure 2 shows all 54 indicators and data items sorted by the percentage of groups that listed them. We can observe that no single indicator appears in all LADs (the most requested indicator, the grade, is asked for by 60% of students) and that only a limited number of data items and indicators are desired by a majority of students.
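As an illustration of this aggregation step, the following sketch computes the percentage of groups requesting each generic indicator from a toy annotation table; the column names and data are purely hypothetical, not the study’s actual dataset.

```python
import pandas as pd

# Toy annotated table: each row links one group to one generic indicator title.
annotations = pd.DataFrame({
    "group_id":      ["g01", "g01", "g02", "g02", "g03"],
    "generic_title": ["grade", "timetable", "grade", "peer comparison", "grade"],
})

n_groups = annotations["group_id"].nunique()

# Percentage of groups that wished for each indicator at least once,
# sorted from most to least requested (as in Fig. 2).
prevalence = (
    annotations.drop_duplicates(["group_id", "generic_title"])
               .groupby("generic_title")["group_id"].nunique()
               .div(n_groups).mul(100)
               .sort_values(ascending=False)
)
print(prevalence)
```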
We can see that the needs expressed are not limited to indicators: students also ask for data, whether related to learning or not. We chose to keep both indicators and data in our analysis because the LADs designed by the students are coherent sets. This is in line with Jivet’s findings: “different tools should complement dashboards and be seamlessly integrated in the learning environment and the instructional design” [11, p. 93]. In our study, students may have defined learning environments rather than LADs.
4 RQ2: Prevalence of Most Frequent Learning Indicators by Objectives
4.1 Method
Using the list of 54 indicators identified for RQ1, we looked for the 5 most wanted data items and indicators per objective (monitoring, planning, communication, evaluation, evolution, remediation) and also analyzed the associated visualizations chosen.
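A minimal sketch of this aggregation, assuming a hypothetical annotation table linking groups, objectives and generic indicator titles, could look as follows:

```python
import pandas as pd

# Toy data with hypothetical columns; one row per (group, indicator) pair.
rows = pd.DataFrame({
    "group_id":      ["g01", "g01", "g02", "g03", "g03"],
    "objective":     ["planning", "planning", "monitoring", "monitoring", "monitoring"],
    "generic_title": ["timetable", "grade", "grade", "grade", "peer comparison"],
})

# Count, per objective, how many groups wished for each indicator,
# then keep the 5 most wanted indicators per objective.
counts = (
    rows.drop_duplicates(["group_id", "generic_title"])
        .groupby(["objective", "generic_title"]).size()
        .rename("n_groups")
        .reset_index()
)
top5_per_objective = (
    counts.sort_values(["objective", "n_groups"], ascending=[True, False])
          .groupby("objective").head(5)
)
print(top5_per_objective)
```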
4.2 Results and Discussion
We obtained the 17 items presented in Table 2. Almost all data items and indicators listed in Table 2 were wished for by groups from different kinds of study programs, except those marked with *. We can observe that some data items and indicators are shared across objectives and study programs, but none is wished for by all groups. For the 5 most wanted data items and indicators, students expressed different visualization needs for each one (between 5 and 17 different visualizations).
By comparing the desired data and indicators according to the LADs’ objectives, we identified some items that are shared across several learning contexts, but none of them holds for all situations. Even if the grade seems to be wanted by a majority of groups, this data item can be represented by different visualizations. It is suitable for only 35% of LADs with a planning objective, perhaps because the planning objective groups together several sub-objectives, such as planning one’s work for the semester or planning one’s revisions. The peer comparison indicator, which is often proposed in LADs, varies greatly, from 20% to 72% of groups depending on the objective. This result confirms the need to adapt LADs to the learning context and target, as identified in previous works. To go further, we should explore, for the same learning context (same student cohort, same study program, same year, same LAD objective), whether there are shared expectations of students for their LAD.
5 RQ3: Links Between Indicators and Need Profiles
5.1 Method
To identify a possible link between indicators and need profiles, we looked at the variation in expressed needs and selected the 5 data items and indicators which varied the most according to:

- the study year: first, second and fifth year;
- the moment when the co-design session took place: at the beginning, the middle or the end of the semester.
To complete this approach, additional analyses were conducted using SAS software (SAS v9.4, SAS Institute Inc., Cary, NC, USA). Categorical variables are presented as absolute numbers and percentages. Chi-square tests or Fisher’s exact tests, as appropriate, were used to compare proportions of a categorical variable. When a statistically significant result was found, Cramér’s V was used to estimate the strength of association between the variables. A two-tailed type I error rate of 0.05 was considered for statistical significance.
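Although the analyses were run with SAS, an equivalent computation can be sketched in Python with SciPy; the contingency table below is purely hypothetical and only illustrates the test and the Cramér’s V computation, not the study’s actual counts.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x3 contingency table: groups that wished (or not) for a theme,
# split by study year (1st, 2nd, 5th).
table = np.array([[ 7,  8,  9],    # wished for the theme
                  [13, 10,  7]])   # did not wish for it

chi2, p, dof, expected = chi2_contingency(table)

# Fall back to Fisher's exact test when expected counts are too small
# (SciPy's fisher_exact only handles 2x2 tables).
if (expected < 5).any() and table.shape == (2, 2):
    _, p = fisher_exact(table)

# Cramér's V: strength of association between the theme and the study year.
n = table.sum()
k = min(table.shape) - 1
cramers_v = np.sqrt(chi2 / (n * k))
print(f"chi2={chi2:.2f}, p={p:.4f}, V={cramers_v:.2f}")
```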
5.2 Results and Discussion
By exploring the variations in the needs expressed by students, we identified different wishes according to learning context variables.
Variation Between Study Years. The indicators which varied most according to the study year are presented in Table 3. We identified statistically significant links between several themes and the study year:

- \(1^{st}\) year students are less interested in indicators and data of the information theme (\(p=0.03<0.05\), \(V=0.22\)): 0% wanted this kind of indicator vs. 23% of \(2^{nd}\) and \(5^{th}\) year students (medium association).
- \(1^{st}\) year students are also less interested in planning data than the others (\(\chi ^2=7.35\), \(p=0.02<0.05\), \(V=0.26\)): 35% for \(1^{st}\) year vs. 65% and 71% for \(2^{nd}\) and \(5^{th}\) year (medium association).
- \(1^{st}\) year students are markedly more interested in data and indicators about projects (\(p<.0001\), \(V=0.62\)): 50% of \(1^{st}\) year vs. >2% for the others (strong association).
- Finally, data about personal life interested mainly \(2^{nd}\) year students (\(p=0.0057<0.05\), \(V=0.31\)): 33% vs. 5% for \(1^{st}\) and 10% for \(5^{th}\) year (medium association).
\(1^{st}\) year students seem to expect less information aimed at implementing a learning strategy (coefficient, formative assessment). On the other hand, they are more interested in data and indicators about project management. In our study, \(1^{st}\) year students came from two academic programs, while \(2^{nd}\) and \(5^{th}\) year students came from another one, which could bias the results. The majority of first year students are in an IT course with a project-based pedagogical approach, which can explain their high interest in this kind of indicator. All \(2^{nd}\) year students are pharmacy students who have just passed the \(1^{st}\) year of health studies, which is a very intense year; this could explain the importance of personal life for them and their hope to find some leisure time. To refine this first result, data from several academic programs would be needed for each sample. The type of study (duration, discipline, pedagogical approach) probably also influences students’ expectations, and the year of study should be coupled with this information to refine this result.
Variation Between Moments in the Semester. The results concerning the variations in students’ expectations over time are presented in Fig. 3. These results seem globally aligned with what one might naturally expect to observe. At the beginning, students need to plan the semester with basic information (timetable, evaluation dates, expected working time) and consider planning their personal life. Planning information decreases in the middle and at the end of the semester. It is replaced by two kinds of indicators: a learning one, with the status of knowledge or skills (acquired and/or to be acquired), and a monitoring one, with the state of play and/or remaining work, probably to be ready for the exams. We have often seen LADs adapted to the learning context and/or adaptable by the students, but to our knowledge, no adaptation has been provided by the system according to time. To refine our results, adaptation over time by the system should be explored as an additional adaptation possibility.
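As a purely hypothetical illustration of what such system-side adaptation over time could look like, a simple rule mapping the period of the semester to default indicator themes, following the trend observed in Fig. 3, might be sketched as follows (this is not an implemented system):

```python
from typing import List

# Hypothetical default indicator themes per period of the semester,
# mirroring the trend of Fig. 3; names are illustrative only.
DEFAULT_THEMES = {
    "beginning": ["timetable", "evaluation dates", "expected working time",
                  "personal agenda"],
    "middle":    ["knowledge/skills acquired", "state of play", "remaining work"],
    "end":       ["knowledge/skills acquired", "remaining work", "grades"],
}

def suggested_themes(moment_in_semester: str) -> List[str]:
    """Return the default themes a time-adaptive LAD could display for a given period."""
    return DEFAULT_THEMES.get(moment_in_semester, [])

print(suggested_themes("beginning"))
```

Such a rule could serve as a default display only, with students remaining free to customize the proposed indicators.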
6 Conclusion
As previous work has identified, LADs for students need to be adapted to the learning context and by the students themselves. We pursued this line of work by investigating whether there are shared data or indicators between LADs with different objectives and whether it is possible to identify adaptation needs according to different variables linked to the learning context. Our results seem to indicate that some data and indicators are more often desired by students but, in all cases, they remain specific to the objective of the LAD. The needs of students also seem to change over time, depending on the moment in the semester (beginning, middle and end). Students would like useful information for planning at the beginning of the semester, while at the end of the semester they seem to prefer indicators to assess the knowledge and skills acquired and the progress on the remaining work. This result opens new possibilities to adapt LADs according to time. Our next steps will be to define an adaptive LAD model for students from these results and to experiment with real LADs with students.
References
Gartner Says Advanced Analytics is a Top Business Priority. https://www.gartner.com/en/newsroom/press-releases/2014-10-21-gartner-says-advanced-analytics-is-a-top-business-priority
Ahmad, A., et al.: What indicators can I serve you with? An evaluation of a research-driven learning analytics indicator repository, pp. 58–68 (2022)
Arriaran Olalde, I.N., Ipiña Larrañaga, N.: Indicators needed to design a student dashboard from lecturers’ perspectives: a qualitative study. In: 5th International Conference on Higher Education Advances (HEAd 2019). Universitat Politècnica València (2019)
Bodily, R., Verbert, K.: Review of research on student-facing learning analytics dashboards and educational recommender systems. IEEE Trans. Learn. Technol. 10(4), 405–418 (2017)
Dabbebi, I., Gilliot, J.M., Iksal, S.: User centered approach for learning analytics dashboard generation. In: CSEDU 2019, vol. 2, pp. 260–267 (2019)
Glahn, C., Specht, M., Koper, R.: Smart indicators to support the learning interaction cycle. Int. J. Cont. Eng. Educ. Lifelong Learn. 18, 98–117 (2007)
Gras, B., Brun, A., Boyer, A.: For and by student dashboards design to address dropout. In: Companion Proceedings 10th International Conference on Learning Analytics & Knowledge (LAK20), Workshop on Addressing Drop-Out Rates in Higher Education (ADORE 2020). Frankfurt, Germany (2020). https://hal.inria.fr/hal-02974682
Hilliger, I., et al.: For learners, with learners: identifying indicators for an academic advising dashboard for students. In: Alario-Hoyos, C., Rodríguez-Triana, M.J., Scheffel, M., Arnedillo-Sánchez, I., Dennerlein, S.M. (eds.) EC-TEL 2020. LNCS, vol. 12315, pp. 117–130. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-57717-9_9
Ifenthaler, D.: Learning analytics for school and system management. In: OECD Digital Education Outlook 2021: Pushing the Frontiers with Artificial Intelligence, Blockchain and Robots. OECD Publishing (2021)
Ifenthaler, D., Yau, J.Y.K.: Reflections on different learning analytics indicators for supporting study success. Int. J. Learn. Anal. Artif. Intell. Educ. (iJAI) 2(2), 4–23 (2020)
Jivet, I., Scheffel, M., Drachsler, H., Specht, M.: Awareness is not enough: pitfalls of learning analytics dashboards in the educational practice. In: Lavoué, É., Drachsler, H., Verbert, K., Broisin, J., Pérez-Sanagustín, M. (eds.) EC-TEL 2017. LNCS, vol. 10474, pp. 82–96. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66610-5_7
Jivet, I., Wong, J., Scheffel, M., Valle Torre, M., Specht, M., Drachsler, H.: Quantum of choice: how learners’ feedback monitoring decisions, goals and self-regulated learning skills are related. In: LAK21: 11th International Learning Analytics and Knowledge Conference, pp. 416–427. ACM (2021)
Morais Canellas, C.: Métamodèle d’analytique des apprentissages avec le numérique. Ph.D. thesis, Sorbonne université (2021)
Oliver-Quelennec, K., Bouchet, F., Carron, T., Pinçon, C.: Can a learning analytics dashboard participative design approach be transposed to an online-only context? In: 18th International Conference Cognition and Exploratory Learning in Digital Age, p. 9 (2021)
Prieto Alvarez, C., Martinez-Maldonado, R., Anderson, T.: Co-designing learning analytics tools with learners, pp. 93–110. Taylor & Francis (2018)
Roberts, L.D., Howell, J.A., Seaman, K.: Give me a customizable dashboard: personalized learning analytics dashboards in higher education. Technol. Knowl. Learn. 22(3), 317–333 (2017)
Sahin, M., Ifenthaler, D.: Visualizations and dashboards for learning analytics: a systematic literature review. In: Sahin, M., Ifenthaler, D. (eds.) Visualizations and Dashboards for Learning Analytics. Advances in Analytics for Learning and Teaching, pp. 3–22. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81222-5_1
Scheneider, T., Lemos, R.: Use of learning analytics interactive dashboards in serious games: a review of the literature. Int. J. Innov. Educ. Res. 8(3), 150–174 (2020)
Schwendimann, B.A., et al.: Perceiving learning at a glance: a systematic literature review of learning dashboard research. IEEE Trans. Learn. Technol. 10(1), 30–41 (2017)
Tan, J.P.L., Koh, E., Jonathan, C.R., Yang, S.: Learner dashboards a double-edged sword? Students’ sense-making of a collaborative critical reading and learning analytics environment for fostering 21st century literacies. J. Learn. Anal. 4(1), 117–140 (2017)
Teasley, S.D.: Student facing dashboards: one size fits all? Technol. Knowl. Learn. 22(3), 377–384 (2017)