Abstract
The provision of personalized and timely feedback can become challenging when shifting from face-to-face to online learning. Feedback is not only about providing support to students, but also about identifying when and which students need what kind of support. Usually, educators carry out such activities manually. However, the manual identification, personalization and provision of feedback can become unmanageable, especially in large-scale environments. Previous works proposed data-driven tools to automate feedback provision with the active involvement of human agents in its design. Nevertheless, to the best of our knowledge, these tools do not guide instructors through the process of feedback design and the sense-making of data-driven information. This paper presents e-FeeD4Mi, a web-based tool developed to support instructors in the design and automatic enactment of feedback in multiple virtual learning environments. We developed e-FeeD4Mi following a Design-Based Research approach and evaluated its adoption potential in two studies.
1 Introduction
Feedback is one of the most important features in learning, positively influencing both the feedback provider and the feedback receiver [2, 3, 9]. Hattie & Timperley (2007, p. 8) [3] define feedback as “the information provided by an agent (e.g., teacher, peer, book, etc.) regarding aspects of one’s performance or understanding”. Effective feedback interventions rely on timeliness and personalization as two core aspects to keep students engaged and to benefit the learning process [3, 6, 9]. Thus, feedback is not only about providing support, but also about identifying when and which students need what kind of support.
Usually, instructors (see Note 1) are responsible for all these feedback-related tasks, which require additional effort and can become time-consuming. Moreover, the manual identification, personalization, and provision of feedback can become unmanageable when the learning situation scales up (e.g., many activities, many students). To this end, several tools have been developed to automate the detection of students who need support and to deliver feedback reactions in online environments. For instance, previous works, such as those by Kochmar et al. (2020) [4] and Lafifi et al. (2020) [5], suggested intelligent tutoring systems as an alternative to human tutoring, tracking students in real time and providing timely, personalized data-driven feedback.
However, the literature reports that many data-driven tools do not consider the course context (e.g., the difficulty of the activities, the relations among course components) [7, 15]. Considering the course context could be achieved by involving instructors in the design of feedback strategies [14, 18]. To that end, many researchers propose conceptual and technological tools that actively involve course instructors in fine-tuning metrics, permitting them to detect learners who need further support and to provide feedback accordingly.
For instance, Pardo (2018) [10] proposed a data-driven feedback model in which the feedback providers (e.g., instructors, peers) make the associations between Learning Analytics (LA) and the course context. The author implemented this model in a digital tool, OnTask [11], which enables instructors to select different student cohorts through data-driven metrics and to deliver personalized feedback via email messages. Similarly, Liu et al. (2017) [7] presented an LA tool named Student Relationship Engagement System (SRES) to promote teacher agency, permitting teachers to decide which features of learners’ activity are informative and to provide personalized teacher-led feedback. Also, Reza et al. (2021) [13] developed a framework in which course instructors create if-then rules to provide feedback in the form of recommendations to MOOC learners, based on their course engagement and behavior.
However, to the best of our knowledge, these tools do not guide instructors in the design of feedback (e.g., with feedback suggestions based on the learning design or on expected problems). Indeed, as Mangaroska & Giannakos (2019) [8] reported, course instructors often need further guidance in their sense-making and use of data-driven information to produce actionable feedback (i.e., feedback grounded in the course design and pedagogical theories, and informed by learners’ actions). Another significant limitation of existing LA-informed feedback tools is that the connections needed between learning design and learning analytics are limited to specific Learning Management Systems (LMSs) and do not consider analytics from third-party general-purpose tools (e.g., Google Docs, Slack) frequently used in technology-enhanced learning situations. This technological shortcoming reduces the applicability of existing research proposals.
To address the above-mentioned limitations (i.e., lack of human involvement in the provision of personalized feedback, lack of guidance during the feedback design process, and lack of feedback tools connecting LMSs and external tools), we propose e-FeeD4Mi, a web-based tool developed by the authors to support the design and automatic enactment of feedback in multiple virtual learning environments. Thus, the overarching research question guiding this study is:
“To what extent does e-FeeD4Mi support instructors in the design and enactment of tailored data-driven feedback?”
2 e-FeeD4Mi Overview
e-FeeD4Mi is a web-based tool that guides instructors through a five-dimension process to design and automate personalized data-driven feedback in learning management systems (e.g., Canvas, Moodle) and external tools (e.g., Slack, Google Docs). The tool includes catalogues of potential problems, indicators and reactions, together with recommendations for configuring the most appropriate feedback decisions. e-FeeD4Mi is based on a conceptual framework [16, 17] that comprises the aforementioned process, catalogues and recommendations. Implementing this framework in a digital tool enables the configuration of computer-interpretable feedback designs and the automation of the whole feedback procedure (i.e., student identification and feedback provision) during course runtime. The five-dimension process involves:
1. Import the learning design. e-FeeD4Mi automatically retrieves learning designs, including the title, modules, types of configured activities (e.g., quizzes, discussion forums, peer reviews) and their temporal sequence, from mainstream learning management systems. Instructors only need to provide the LMS type (e.g., Moodle), the location of the course (i.e., its URL) and their authentication bearer token for external integration (i.e., credentials).
2. Identify inherent features of the learning design. This step prompts instructors to reflect on the critical points of the learning design where students may experience learning issues requiring instructor feedback. To this end, e-FeeD4Mi provides instructors with a set of tools (i.e., visual labels and colors) to tag the resources and activities of the learning design (see Fig. 1 - top). For instance, instructors can tag the difficulty of quizzes, the connections between resources, course milestones, etc.
3. Select potential student problems. In this phase, building on the reflection from the previous phase, instructors select from a list of student problems (obtained from the literature and from evaluation studies [16]) those that apply to their course in general or to specific activities of the learning design (see Fig. 1 - middle).
4–5. Configure indicators and reactions for the selected problems (Fig. 1d). For each selected problem, e-FeeD4Mi recommends a set of indicators that can potentially identify students experiencing it (see Fig. 1 - bottom). Instructors can choose between indicators monitored within the learning resources (e.g., a low score in peer reviews) or self-reported problems. Similarly, e-FeeD4Mi recommends a set of useful feedback reactions for each configured problem, following the classification by Hattie & Timperley (2007) [3]: task-related (e.g., predefined messages, badges), process-related (e.g., learning design modifications, student mentoring), and self-regulation (e.g., enabling learner statistics) feedback.
Finally, instructors may deploy their feedback design by clicking the ‘deploy’ button. This automatic deployment inserts an LTI tool page in the course VLE (using the same instructor credentials as for importing the learning design). The LTI standard (see Note 2) spares students from authenticating again in this tool and distinguishes between instructors and students, so that different interfaces can be provided according to the user’s role. In the instructor interface, instructors can monitor and manage the configured feedback strategies (e.g., the number of students identified with a problem, manual feedback reactions). In the student interface, learners can report the problems that were configured as self-reported, and they are notified of the feedback reactions applied to them.
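For illustration, a minimal sketch of the role-based interface selection that an LTI launch enables. It inspects the comma-separated `roles` launch parameter defined by the LTI 1.1 specification, whose values may be plain role names ("Instructor", "Learner") or full URNs; the function name and the returned view identifiers are our own assumptions, not e-FeeD4Mi code:

```python
def interface_for(launch_params: dict) -> str:
    """Choose which UI to serve from the LTI 1.1 'roles' launch parameter.

    The parameter is a comma-separated list, e.g.
    "urn:lti:role:ims/lis/Instructor" or simply "Learner".
    """
    roles = launch_params.get("roles", "")
    if any("instructor" in role.strip().lower() for role in roles.split(",")):
        return "instructor_dashboard"  # monitor and manage feedback strategies
    return "student_view"              # report problems, see applied reactions
```

Defaulting to the student view when no instructor role is present is a deliberately conservative choice, so that an unrecognized or missing role never exposes the management interface.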
The adapter-based architecture of e-FeeD4Mi enables connecting the tool to multiple VLEs and external tools through pre-established contracts. These adapters permit the automatic retrieval of learning designs, the tracking of learners’ behavior, and the delivery of feedback, all aiming to reduce the installation workload and to foster the tool’s adoption.
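Such a pre-established contract could be sketched as an abstract interface that every platform adapter must fulfil. The interface, method names and registry below are hypothetical, intended only to convey the design idea, with a stubbed Moodle adapter as a placeholder:

```python
from abc import ABC, abstractmethod

# Registry mapping platform identifiers to adapter classes.
ADAPTERS = {}

def register(platform: str):
    """Class decorator adding an adapter implementation to the registry."""
    def deco(cls):
        ADAPTERS[platform] = cls
        return cls
    return deco

class VLEAdapter(ABC):
    """Contract each platform adapter must fulfil (hypothetical names)."""

    @abstractmethod
    def fetch_learning_design(self, course_url: str, token: str) -> dict:
        """Retrieve modules, activities and their temporal sequence."""

    @abstractmethod
    def fetch_student_events(self, course_url: str, token: str) -> list:
        """Retrieve learner activity traces for indicator evaluation."""

    @abstractmethod
    def deliver_reaction(self, student_id: str, reaction: dict) -> None:
        """Push a feedback reaction (e.g., message, badge) to the platform."""

@register("moodle")
class MoodleAdapter(VLEAdapter):
    """Stubbed placeholder; a real adapter would call the platform's API."""
    def fetch_learning_design(self, course_url, token):
        return {"platform": "moodle", "modules": []}
    def fetch_student_events(self, course_url, token):
        return []
    def deliver_reaction(self, student_id, reaction):
        return None
```

With this shape, supporting a new VLE or external tool amounts to registering one more class that honors the same contract, without touching the feedback engine.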
3 Preliminary Results
The development of e-FeeD4Mi followed a Design-Based Research (DBR) approach [1]. DBR tackles actual problems through iterative cycles carried out in close collaboration between researchers and practitioners [1]. Accordingly, we conducted two inquiry cycles for tool development, involving stakeholders in the evaluation of the e-FeeD4Mi tool. The first evaluation took place in a 3-hour workshop with MOOC experts (N=11), who designed and implemented feedback strategies for given learning designs with e-FeeD4Mi. The second evaluation targeted instructors with previous experience delivering online courses (N=6), who designed and implemented feedback strategies for their own courses.
As stated in the Introduction, the underlying goal of e-FeeD4Mi is to support instructors in the design and enactment of tailored data-driven feedback. The authors already performed an evaluation of this support [16]. Nonetheless, we also considered it relevant to measure the tool’s adoption potential, i.e., whether it could be used recurrently in real contexts. To this end, we used the Net Promoter Score [12] together with open-ended questions in both evaluations. The Net Promoter Score is calculated as the percentage of tool promoters (i.e., participants selecting 9 or 10 in the likelihood-to-recommend item) minus the percentage of tool detractors (i.e., participants selecting 0 to 6).
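The Net Promoter Score computation described above can be expressed directly (a straightforward sketch; the function name is ours):

```python
def net_promoter_score(ratings) -> int:
    """NPS: percentage of promoters (ratings 9-10) minus percentage of
    detractors (ratings 0-6), rounded to the nearest integer.
    Ratings of 7-8 (passives) count toward the total only."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))
```

The paper's figures are consistent with this formula: one promoter and three detractors among eleven respondents give 100 × (1 − 3) / 11 ≈ −18, and four promoters with no detractors among six respondents give 100 × 4 / 6 ≈ 67.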
The score obtained in the first evaluation (which involved a tool version prior to the one presented in this article) was -18. This negative score, combined with participants’ qualitative self-reported perceptions, revealed usability problems that resulted in a single promoter and three detractors. Most of the improvements pointed out by participants informed the next version of the tool (e.g., “I think there should be an adaptive connection between the module type, potential problems and proper solution (feedback)”).
In the second evaluation, after applying most of the usability improvements, the obtained score was 67 (4 promoters, 0 detractors). For comparison, Reichheld (2003) [12] evaluated 400 enterprise tools with the same instrument and reported a median score of 16. The high score, together with the fact that the evaluation was carried out with real instructors, suggests that e-FeeD4Mi can potentially be adopted in instructors’ regular practice. Nevertheless, instructors also reported some usability issues and suggested improvements that will help us enhance the next version of the tool. For instance, regarding cognitive load, some participants proposed predefined feedback templates to reduce the time needed to use e-FeeD4Mi, as well as additional options to be used as indicators and reactions.
4 Conclusions and Future Work
This paper presents e-FeeD4Mi, a web-based tool developed by the authors to support the design and automatic enactment of feedback strategies in multiple virtual learning environments. Following the DBR approach, we conducted two iterative cycles involving course stakeholders in the design of data-driven feedback and explored the tool’s potential for adoption.
The results obtained in the most recent evaluation of e-FeeD4Mi show the potential of the tool. However, the evaluations have several limitations, mainly related to the small number of participants and the short time spent using the tool. As future work, we plan to use e-FeeD4Mi for designing and providing feedback in a real course, enabling us to study its impact throughout the whole life-cycle of an online course. This evaluation will help us understand, for example, the orchestration workload of feedback strategies during course enactment and learners’ perceptions of the tool.
Notes
1. For simplicity, we refer to instructors as any person involved in the design and provision of feedback, including instructional designers, teachers and teaching assistants.
2. IMS Global. Learning Tools Interoperability (LTI): https://www.imsglobal.org/activity/learning-tools-interoperability, last access: June 2022.
References
Amiel, T., Reeves, T.C.: Design-based research and educational technology: rethinking technology and the research agenda. J. Educ. Technol. Soc. 11(4), 29–40 (2008)
Al-Bashir, M.M., Kabir, M.R., Rahman, I.: The value and effectiveness of feedback in improving students’ learning and professionalizing teaching in higher education. J. Educ. Pract. 7(16), 38–41 (2016)
Hattie, J., Timperley, H.: The power of feedback. Rev. Educ. Res. 77(1), 81–112 (2007)
Kochmar, E., Vu, D.D., Belfer, R., Gupta, V., Serban, I.V., Pineau, J.: Automated personalized feedback improves learning gains in an intelligent tutoring system. In: Bittencourt, I.I., Cukurova, M., Muldner, K., Luckin, R., Millán, E. (eds.) AIED 2020. LNCS (LNAI), vol. 12164, pp. 140–146. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-52240-7_26
Lafifi, Y., Boudria, A., Lafifi, A., Cheratia, M.: Intelligent tutoring of learners in e-learning systems and massive open online courses (MOOC). In: Who Runs The World: Data, pp. 176–192. Istanbul University Press (2020)
Leibold, N., Schwarz, L.M.: The art of giving online feedback. J. Effective Teach. 15(1), 34–46 (2015)
Liu, D.Y.-T., Bartimote-Aufflick, K., Pardo, A., Bridgeman, A.J.: Data-driven personalization of student learning support in higher education. In: Peña-Ayala, A. (ed.) Learning Analytics: Fundaments, Applications, and Trends. SSDC, vol. 94, pp. 143–169. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-52977-6_5
Mangaroska, K., Giannakos, M.: Learning analytics for learning design: a systematic literature review of analytics-driven design to enhance learning. IEEE Trans. Learn. Technol. 12(4), 516–534 (2019)
Molloy, E.K., Boud, D.: Feedback models for learning, teaching and performance. In: Spector, J.M., Merrill, M.D., Elen, J., Bishop, M.J. (eds.) Handbook of Research on Educational Communications and Technology, pp. 413–424. Springer, New York (2014). https://doi.org/10.1007/978-1-4614-3185-5_33
Pardo, A.: A feedback model for data-rich learning experiences. Assess. Eval. High. Educ. 43(3), 428–438 (2018)
Pardo, A., et al.: OnTask: delivering data-informed, personalized learning support actions. J. Learn. Anal. 5(3), 235–249 (2018)
Reichheld, F.F.: The one number you need to grow. Harv. Bus. Rev. 81(12), 46–55 (2003)
Reza, M., Kim, J., Bhattacharjee, A., Rafferty, A.N., Williams, J.J.: The MOOClet framework: unifying experimentation, dynamic improvement, and personalization in online courses. In: Proceedings of the 8th ACM Conference on Learning @ Scale, pp. 15–26 (2021)
Rodríguez-Triana, M.J., Prieto, L.P., Martínez-Monés, A., Asensio-Pérez, J.I., Dimitriadis, Y.: The teacher in the loop: customizing multimodal learning analytics for blended learning. In: Proceedings of the 8th International Conference on Learning Analytics and Knowledge, pp. 417–426 (2018)
Shibani, A., Knight, S., Shum, S.B.: Contextualizable learning analytics design: a generic model and writing analytics evaluations. In: ACM International Conference Proceeding Series, pp. 210–219 (2019)
Topali, P., et al.: Identifying learner problems framed within MOOC learning designs. In: 29th International Conference on Computers in Education, pp. 297–302 (2021)
Topali, P., et al.: Supporting instructors in the design of actionable feedback for MOOCs. In: 2022 IEEE Global Engineering Education Conference, pp. 1881–1888 (2022)
Wiley, K.J., Dimitriadis, Y., Bradford, A., Linn, M.C.: From theory to action: developing and evaluating learning analytics for learning design. In: Proceedings of the 10th International Conference on Learning Analytics and Knowledge, pp. 569–578 (2020)
Acknowledgements
This research is partially funded by the Spanish State Research Agency (AEI) together with the European Regional Development Fund, under project grant PID2020-112584RB-C32; and by the European Social Fund and Regional Council of Education of Castile and Leon (E-47-2018-0108488).
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Ortega-Arranz, A., Topali, P., Asensio-Pérez, J.I., Villagrá-Sobrino, S.L., Martínez-Monés, A., Dimitriadis, Y. (2022). e-FeeD4Mi: Automating Tailored LA-Informed Feedback in Virtual Learning Environments. In: Hilliger, I., Muñoz-Merino, P.J., De Laet, T., Ortega-Arranz, A., Farrell, T. (eds) Educating for a New Future: Making Sense of Technology-Enhanced Learning Adoption. EC-TEL 2022. Lecture Notes in Computer Science, vol 13450. Springer, Cham. https://doi.org/10.1007/978-3-031-16290-9_39
Print ISBN: 978-3-031-16289-3. Online ISBN: 978-3-031-16290-9.