1 Introduction

Feedback is one of the most important features in learning, positively influencing both the feedback provider and the feedback receiver [2, 3, 9]. Hattie & Timperley (2007, p. 8) [3] define feedback as “the information provided by an agent (e.g., teacher, peer, book, etc.) regarding aspects of one’s performance or understanding”. Effective feedback interventions involve timeliness and personalization as two core aspects to keep students engaged and to benefit the learning process [3, 6, 9]. Thus, feedback is not only about providing support, but also about identifying when students need support, which students need it, and what kind of support they need.

Usually, instructors are responsible for performing all these feedback-related tasks, which require additional effort and can become time-consuming. However, the manual identification, personalization, and provision of feedback can become unmanageable when scaling up the learning situation (e.g., many activities, many students). To this end, several tools have been developed to automate the detection of students who need support and to deliver feedback reactions in online environments. For instance, previous works, such as those by Kochmar et al. (2020) [4] and Lafifi et al. (2020) [5], suggested the use of intelligent tutoring systems as an alternative to human tutoring to track students in real time and provide timely, personalized, data-driven feedback.

However, the literature reports that many of these data-driven tools do not consider the course context (e.g., the difficulty of the activities, the relations among course components) [7, 15]. Such consideration of the course context could be achieved by involving instructors in the design of feedback strategies [14, 18]. To that end, several researchers have proposed conceptual and technological tools that actively involve course instructors in fine-tuning the metrics, permitting them to detect learners who may need further support and to provide feedback accordingly.

For instance, Pardo (2018) [10] proposed a data-driven feedback model in which the feedback providers (e.g., instructors, peers) make the associations between the Learning Analytics (LA) and the course context. The author implemented this model in a digital tool, OnTask [11], enabling instructors to select different student cohorts by choosing data-driven metrics and to deliver personalized feedback through email messages. Similarly, Liu et al. (2017) [7] presented an LA tool named Student Relationship Engagement System (SRES) to promote teacher agency, allowing teachers to decide which features of learners’ activity are informative and to provide personalized teacher-led feedback. Also, Reza et al. (2021) [13] developed a framework in which course instructors create if-then rules to provide feedback in the form of recommendations to MOOC learners, based on their course engagement and behavior.

However, to the best of our knowledge, these tools do not guide instructors in the design of feedback (e.g., feedback suggestions based on the learning design or on the expected problems). Indeed, as Mangaroska & Giannakos (2019) [8] reported, course instructors often need further guidance in making sense of and using data-driven information to produce actionable feedback (i.e., feedback grounded in the course design and pedagogical theories, and informed by learners’ actions). Another significant limitation of existing LA-informed feedback tools is that the connections they establish between learning design and learning analytics are limited to specific Learning Management Systems (LMSs) and do not consider analytics from third-party general-purpose tools (e.g., Google Docs, Slack) frequently used in technology-enhanced learning situations. This technological shortcoming reduces the applicability of existing research proposals.

To address the above-mentioned limitations (i.e., lack of human involvement in the provision of personalized feedback, lack of guidance during the feedback design process, and lack of feedback tools connecting LMSs and external tools), we propose e-FeeD4Mi, a web-based tool developed by the authors to support the design and automatic enactment of feedback in multiple virtual learning environments. Thus, the overarching research question guiding this study is:

  • “To what extent does e-FeeD4Mi support instructors in the design and enactment of tailored data-driven feedback?”.

Fig. 1. e-FeeD4Mi interfaces: (top) annotate learning design page; (middle) identify potential problems page, selecting a content-understanding issue; (bottom) feedback overview page where indicators and reactions for each problem can be configured.

2 e-FeeD4Mi Overview

e-FeeD4Mi is a web-based tool that guides instructors through a five-dimension process to design and automate personalized data-driven feedback in learning management systems (e.g., Canvas, Moodle) and external tools (e.g., Slack, Google Docs). The tool includes a set of catalogues of potential problems, indicators, and reactions, together with recommendations for configuring the most appropriate feedback decisions for students. e-FeeD4Mi is based on a conceptual framework [16, 17] that comprises the aforementioned process, catalogues, and recommendations. Thus, its implementation in a digital tool enables the configuration of computer-interpretable feedback designs and the automation of the whole feedback procedure (i.e., student identification and feedback provision) during course runtime. The five-dimension process involves:

  1. Import the learning design. e-FeeD4Mi automatically retrieves learning designs, including the title, modules, types of configured activities (e.g., quizzes, discussion forums, peer reviews) and their temporal sequence, from mainstream learning management systems. Instructors only need to provide the LMS type (e.g., Moodle), the location of the course (i.e., its URL) and their authentication credentials (i.e., a bearer token for external integration); a retrieval sketch is given after this list.

  2. Identify inherent features of the learning design. This step aims at reflecting on the critical points of the learning design where students may experience learning issues that require instructor feedback. To this end, e-FeeD4Mi provides instructors with a set of tools (i.e., visual labels and colors) to tag the resources and activities of the learning design (see Fig. 1 - top). For instance, instructors can tag the difficulty of the quizzes, the connections between resources, course milestones, etc.

  3. Select potential student problems. In this phase, and considering the reflection from the previous phase, instructors select from a list of student problems (obtained from the literature and from evaluation studies [16]) those that apply to their course in general, or to concrete activities of the learning design (see Fig. 1 - middle).

  4–5. Configure indicators and reactions for the selected problems. For each selected problem, e-FeeD4Mi recommends a set of indicators that can potentially identify students experiencing such a problem (see Fig. 1 - bottom). Instructors can choose between indicators monitored within the learning resources (e.g., a low score in peer reviews) and self-reported problems. Similarly, e-FeeD4Mi recommends a set of useful feedback reactions for each configured problem, following the classification by Hattie & Timperley (2007) [3]: task-related (e.g., predefined messages, badges), process-related (e.g., learning design modifications, student mentoring), and self-regulation (e.g., enabling learner statistics) feedback. A sketch of such a configuration is given after this list.
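To make step 1 concrete, the snippet below shows how a learning design could be retrieved from a Moodle instance through its standard web-service function core_course_get_contents. The URL, token, and course id are placeholders, and the sketch only illustrates the kind of call an adapter could make; it is not e-FeeD4Mi’s actual implementation.

```python
import requests

# Illustrative sketch (step 1): pull a course's modules and their order from
# Moodle via the standard web-service function core_course_get_contents.
# MOODLE_URL, TOKEN and COURSE_ID are placeholders, not real credentials.
MOODLE_URL = "https://lms.example.org"
TOKEN = "<instructor-web-service-token>"
COURSE_ID = 42

response = requests.get(
    f"{MOODLE_URL}/webservice/rest/server.php",
    params={
        "wstoken": TOKEN,
        "wsfunction": "core_course_get_contents",
        "courseid": COURSE_ID,
        "moodlewsrestformat": "json",
    },
    timeout=30,
)
response.raise_for_status()

# Each course section lists its modules (quizzes, forums, assignments, ...)
# in their temporal sequence within the course.
for section in response.json():
    for module in section.get("modules", []):
        print(f"{section['name']} -> {module['modname']}: {module['name']}")
```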

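As an illustration of the computer-interpretable feedback designs produced by steps 3–5, the following sketch pairs one potential problem with its indicators and reactions. All field names are hypothetical and chosen for readability; they are not e-FeeD4Mi’s actual schema.

```python
# Hypothetical example of a feedback-design entry (illustrative field names,
# not e-FeeD4Mi's actual schema): one problem, its indicators, its reactions.
feedback_design = {
    "problem": "content-understanding",
    "scope": "activity",                      # whole course or a concrete activity
    "activity": "quiz-week-2",
    "indicators": [
        {"type": "monitored", "metric": "quiz_score", "operator": "<", "threshold": 0.5},
        {"type": "self-reported"},            # learners can also flag the problem themselves
    ],
    "reactions": [
        {"category": "task", "action": "predefined_message",
         "text": "Revisit the week 2 materials before retrying the quiz."},
        {"category": "self-regulation", "action": "enable_learner_statistics"},
    ],
}
```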
Finally, instructors may deploy their feedback design by clicking the ‘deploy’ button. This automatic deployment inserts an LTI tool page into the course VLE (using the same instructor credentials as for importing the learning design). The LTI standard avoids the need for students to authenticate again in this tool and distinguishes between instructors and students, so that different interfaces can be provided according to the user’s role. In the instructor interface, instructors can monitor and manage the configured feedback strategies (e.g., number of students identified with a problem, manual feedback reactions). In the student interface, learners can report those problems that were configured as self-reported, and they are notified of the different feedback reactions applied.
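As a minimal sketch of how such role-dependent interfaces can be served, the code below routes an already-verified LTI 1.3 launch by inspecting the standard roles claim. The render_* functions are hypothetical placeholders, not e-FeeD4Mi’s actual code.

```python
# Minimal sketch of role-based routing after an LTI 1.3 launch. The `claims`
# dict is assumed to come from an already-verified launch JWT; the render_*
# functions are hypothetical placeholders for the tool's two interfaces.
ROLES_CLAIM = "https://purl.imsglobal.org/spec/lti/claim/roles"
INSTRUCTOR_ROLE = "http://purl.imsglobal.org/vocab/lis/v2/membership#Instructor"

def render_instructor_dashboard(claims: dict) -> str:
    # Placeholder: monitor and manage the configured feedback strategies.
    return "instructor-dashboard"

def render_learner_view(claims: dict) -> str:
    # Placeholder: self-report problems and see the feedback reactions applied.
    return "learner-view"

def route_launch(claims: dict) -> str:
    """Serve a different interface depending on the launching user's role."""
    roles = claims.get(ROLES_CLAIM, [])
    if INSTRUCTOR_ROLE in roles:
        return render_instructor_dashboard(claims)
    return render_learner_view(claims)
```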

The adapter-based architecture of e-FeeD4Mi enables the connection of the tool with multiple VLEs and external tools through pre-established contracts. Such adapters permit the automatic retrieval of learning designs, the tracking of learners’ behavior, and the delivery of feedback, all aiming to decrease the workload associated with the tool installation and to foster its adoption.
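The adapter contract could look roughly like the following abstract interface; the method names and signatures are assumptions made for illustration, not e-FeeD4Mi’s actual contract.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a per-platform adapter contract (method names and
# signatures are assumptions, not e-FeeD4Mi's actual interface).
class PlatformAdapter(ABC):
    @abstractmethod
    def import_learning_design(self, course_url: str, token: str) -> dict:
        """Retrieve title, modules, activity types and their temporal sequence."""

    @abstractmethod
    def get_learner_events(self, course_id: str, since: str) -> list:
        """Track learners' behavior (e.g., quiz attempts, forum posts)."""

    @abstractmethod
    def deliver_reaction(self, learner_id: str, reaction: dict) -> None:
        """Apply a feedback reaction (e.g., send a message, award a badge)."""

# A concrete MoodleAdapter, CanvasAdapter, or SlackAdapter would implement
# this contract on top of the corresponding platform API.
```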

3 Preliminary Results

The development of e-FeeD4Mi followed a Design-based Research (DBR) approach [1]. DBR aims to tackle actual problems through iterative cycles, in close collaboration between researchers and practitioners [1]. Accordingly, we employed two cycles of inquiry for the tool development, involving stakeholders in the evaluation of aspects related to the e-FeeD4Mi tool. The first evaluation took place in a 3-hour workshop with MOOC experts (N=11), who designed and implemented feedback strategies for given learning designs with e-FeeD4Mi. The second evaluation targeted instructors with previous experience delivering online courses (N=6). In this evaluation, the instructors designed and implemented feedback strategies for their own courses.

As stated in the Introduction, the underlying goal of e-FeeD4Mi is to support instructors in the design and enactment of tailored data-driven feedback. In this regard, the authors have already performed an evaluation to understand the support of e-FeeD4Mi towards such an aim [16]. Nonetheless, we also considered it relevant to measure its potential for adoption, i.e., to understand whether it can be used recurrently in real contexts. To measure e-FeeD4Mi’s potential adoption, we used the Net Promoter Score [12] together with some open-ended questions in both evaluations. The Net Promoter Score is calculated as the percentage of tool promoters (i.e., participants selecting 9 or 10 in the likelihood-to-recommend item) minus the percentage of tool detractors (i.e., participants selecting 0 to 6).
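For illustration, a minimal computation of the score from the 0–10 likelihood-to-recommend ratings could look as follows (the function name is ours; the comments reproduce the figures reported below).

```python
def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# First evaluation (N=11): 1 promoter and 3 detractors yield roughly -18.
# Second evaluation (N=6): 4 promoters and 0 detractors yield roughly 67.
```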

The score obtained in the first evaluation (which involved a tool version prior to the one presented in this article) was -18. This negative score, together with some qualitative self-reported perceptions collected from participants, revealed some usability problems that resulted in a single promoter and three detractors. Most of the improvements pointed out by participants served to enhance the next version of the tool (e.g., “I think there should be an adaptive connection between the module type, potential problems and proper solution (feedback)”).

In the second evaluation, after applying most of the usability improvements, the obtained score was 67 (4 promoters, 0 detractors). For comparison, in Reichheld (2003) [12], 400 enterprise tools were evaluated using the same instrument, obtaining a median score of 16. Therefore, the high score obtained, together with the fact that the e-FeeD4Mi evaluation was carried out with real instructors, suggests that e-FeeD4Mi can potentially be adopted in instructors’ regular practice. Nevertheless, instructors also reported some usability issues and suggested potential improvements, which will help us enhance the next version of the tool. For instance, regarding cognitive load, some participants proposed the use of predefined feedback templates that could reduce the time required to use e-FeeD4Mi, as well as more options to be used as indicators and reactions.

4 Conclusions and Future Work

This paper presents e-FeeD4Mi, a web-based tool developed by the authors to support the design and automatic enactment of feedback strategies in multiple virtual learning environments. Following the DBR approach, we conducted two iterative cycles involving course stakeholders in the design of data-driven feedback, exploring the participants’ potential adoption of the tool.

The results obtained in the most recent evaluation of e-FeeD4Mi show the potential of the tool. However, the evaluations performed came along with several limitations, mainly related to the small number of participants and the short time using the tool. As future work, we plan to use e-FeeD4Mi for designing and providing feedback in a real course, thus enabling us to study its impact during the whole life-cycle of an online course. This evaluation will help us understand, for example, the orchestration workload of the feedback strategies during course enactment and the learners’ perceptions of the tool.