
1 Introduction

New learning approaches and the rapid technological development of recent years offer exciting opportunities for education. Digitalization provides access to educational technologies such as Learning Management Systems (LMS) for a growing number of people [1], and the Covid-19 pandemic has fueled the need for effective, assisted e-learning applications even further. These factors contribute to the recent increase in LMS usage and suggest high demand in the future. Conventional LMS are software tools that help manage the entire education process, including preparing, conducting, and post-processing classes [2]. As such, they offer an entry point for learners and instructors where learning material can be stored, edited, and processed at any time. Some LMS include additional functionalities, such as platforms for communication between peers and instructors or social media components. While these conventional LMS offer a good basis for assisted e-learning, adding Artificial Intelligence (AI) functionalities such as personalized and adaptive assistance can take the whole education process to the next level.

The influence of AI on our lives is believed to grow continually in various ways [3], and we already observe an increasing empowerment of education through AI [4, 5]. Following Chaudhry and Kazim (2021), we define AI in the context of this paper as a computer system that can achieve a particular task through capabilities and intelligent behaviour once considered unique to humans [3]. The continuously growing demand for adaptive and personalized education [4] cannot realistically be fulfilled without AI support. Integrating AI supported functionalities into LMS can make their usage even more attractive by offering a broad variety of benefits for all users involved in the education process, including learners, instructors, and training managers.

Prior to implementing assisted learning functions, a number of requirements on organizational, methodological, didactical, content-related, and technical levels need to be fulfilled. AI supported e-learning functions involve the processing of usage observations to optimize learning and teaching behavior as well as e-learning content. The analysis can be purely machine-driven, for example the use of data for adaptivity (e.g., in educational recommender systems), the generation of learning paths, or the automatic and dynamic adjustment of exercise difficulty. For this purpose, Learning Analytics (LA) data is commonly used. LA is the collection, aggregation, and analysis of data generated by learners, usually within a specific environment such as an LMS. Learners and teachers can interpret the data themselves, typically by inspecting aggregated data visualizations in learning analytics dashboards, which, for instance, show which tasks have been solved or how long individual tasks took to complete. Processed data can also indicate learning progress, weaknesses, or learning needs. Assistive AI functionalities depend on observation data as a necessary basis for their algorithmic decisions. Processed information from LA, e.g., about a learner’s progress over multiple courses in an LMS, can be used to further improve AI methods. Additionally, AI approaches from other domains, for instance chatbots acting as virtual learning assistants, may lead to increased user satisfaction and, thus, to higher user motivation for e-learning.

The primary research question of this article is (RQ1): Which requirements need to be met in order to successfully implement AI supported functions in e-learning environments? When implementing different types of e-learning and assistance systems for different course environments, we often face similar challenges. Our analysis draws on experience from various implementations in professional training and formal classes with typically 8 to 15 participants. While the application of such functions in small settings is challenging – e.g., because data is only sparsely available – it represents a large proportion of the real-world scenarios for which we deploy assistive functions. Although we assume that our experiences are applicable to several other learning settings, we rely on the experiences from smaller courses.

We present a systematic requirements analysis as a guide for the initial steps when implementing an AI supported LMS for small course environments. Additionally, we report on the specific user requirements from a study for which we designed a demonstrator software for a small-scale course in higher education. In this study, we accompanied five identical course runs on mathematical topics, each lasting approximately four weeks. We used the first course to gather requirements. During the fourth course, we field-tested our demonstrator with some of the AI functionalities for the first time over a period of one week and used the feedback to adjust the demonstrator, e.g., the control of feedback frequency or the granularity of data collection. In the fifth course, all four assistive functionalities were used and the demonstrator was evaluated in conclusion.

We were interested in which specific AI-enabled features would best support our users and which are generally accepted. This led to the second, exploratory research question (RQ2): Which AI supported functionalities do our users require? To answer this question, we went through a requirements engineering process as defined by Nuseibeh and Easterbrook (2000) [6]. Based on the user requirements, we developed a software demonstrator that offers a unique interplay of AI functionalities by using a middleware as a communication mediator and a web-based portal app as the entry point for users, which adds up to an innovative system approach. For the evaluation we mainly focused on the possible benefits of the AI assistance. This led to the third research question (RQ3): Do the implemented AI supported functionalities offer a benefit for the users?

The remainder of this paper is structured as follows. Section 2 discusses the set of core requirements that need to be fulfilled when implementing adaptive functionalities in digital learning environments. Subsequently, the key AI functions we offered to our learners are presented, followed by some general usage statistics and the results of a qualitative evaluation. The paper concludes with our lessons learned and an outlook.

2 Requirements

The first research question of this article concerns how to implement AI supported functionalities for heterogeneous e-learning system landscapes and what the challenges are. In our application context we observed various non-obvious challenges while implementing different types of e-learning and assistance systems. The systems include Learning Management Systems (LMS, e.g., Moodle) and plugins, web portals, adaptive serious games, dashboards, recommender systems and chatbots.

We first report on the general requirements that should be met in order to implement AI functionalities successfully. Then we describe the requirements engineering process tailored to the study, which focuses on our users’ requirements.

2.1 Organizational Requirements

The introduction of an AI supported LMS requires the involvement of different stakeholders. Above all, the responsible organizations must ensure that the application of the LMS is in accordance with applicable laws (in the EU, e.g., the GDPR, data processing contracts, etc.), that all regulatory requirements are met (e.g., naming responsible persons such as data owners, anonymization, etc.), and that the IT and data security concepts are state of the art. In this context, Flanagan and Ogata (2017) discussed the increasing need for data and privacy protection throughout the entire Learning Analytics (LA) workflow [7]. They find that the privacy of key stakeholders, such as learners, teachers, and administrators, needs to be protected while still maintaining the usefulness of user interaction data. Renz and Meinel (2018) addressed the requirement to use pseudonymization for the GDPR-compliant collection of xAPI learning records and argue for the use of an appropriate middleware [8]. Moreover, the introduction of LA components needs to be adequately supported. The organization must ensure that the following role holders have sufficient resources, even after the initial launch of LA [9]:

  • Operators and engineers are responsible for deploying the services, monitoring the components’ technical operability, and responding immediately if something does not run as expected.

  • Data analysts ensure the fulfillment of the objective of LA components through regular evaluations and adjust data and algorithms if necessary.

  • Domain experts keep data, content and media updated according to the individual context, evaluate high-level usage as well as the need for optimizations and prepare new content.

  • Supporters for learning analytics users introduce the LA functions, motivate their usage, answer questions, help with the use of the system, and explain how it works.

Depending on the size of the learning setting and the number of people involved, the roles can be taken over by people who are already involved. For instance, instructors in smaller courses communicate with their learners on a regular basis. They are ideal supporters and, in most cases, can also act as domain experts for specific course topics. Technical staff, such as data analysts and operators, can take care of multiple LA instances at the same time. However, it is very important to train the instructors and raise awareness of any LA particularities beforehand.

2.2 Methodological Requirements

The “appropriateness” of LA functionalities is of essential interest for the implementing institutions. However, what needs to be realized, and how can it be evaluated? Most LA applications in learning environments aim to optimize learning by making it more efficient and more effective through data analysis. In the context of learning recommender systems, for instance, ‘efficiency’ describes the way a personal goal is achieved: in a small-scale course setting, higher efficiency can optimize the process and save effort and time on the way to the course goal. ‘Effectiveness’, in turn, can directly affect the results achieved, e.g., a better mark in the exam or longer-lasting knowledge [10]. Thus, the actual task of an LA function is of essential importance for design choices, for the development of an appropriate methodology, and for selecting an optimal evaluation framework [11]. There are numerous approaches and attempts to measure the intelligence of a learning system. Rerhaye et al. (2021) propose conducting a combination of qualitative and quantitative evaluation methods [12]. This is needed not only to gain deep qualitative insights into the usage of LA functions, but also to support the findings with reliable quantifiable values. The user interface, user satisfaction, and user experience are also of enormous importance to ensure acceptance of an LA supported system.

2.3 Didactical Requirements

In our experience, one of the users’ biggest fears about the implementation of an AI supported LMS is that in-person classroom teaching could be replaced entirely by online learning. An ideal implementation concept, however, should only consider an AI supported LMS for recurring and well-specified tasks, which would normally involve several members of the institutional staff. When organizers know how humans accomplish a certain task, such as analyzing learning groups at the beginning of a course or recommending appropriate learning materials, they can consider having an LA component take on that task. We strongly believe that following a didactical concept instead of blindly replacing all classroom sessions not only improves the learners’ and teachers’ overall acceptance of a system, but also results in better learning outcomes. The next step in a didactical concept is to decide what to do with the information that LA provides. How can we use the results from a learning group analysis in the most beneficial way? How can we improve user motivation, foster self-responsibility in learning, help with useful reflections, and keep the learner in an active role? What degree of freedom in an individual learning path can increase efficiency? As many LA applications aim at automating didactic activities, e.g., the selection of learning material, it is necessary to decide on a robust didactic concept as a foundation [3]. As such, the didactic concept should be evaluated as thoroughly as the analytics functionalities themselves.

2.4 Content Requirements

When well-structured data on content and usage is available, learning analytics can offer added value for various users of learning systems. However, it is not sufficient to only describe the content. Learning Analytics (LA) is based on digital data and hence content must be available in a digital and compelling format. Even if many learning records are collected, the best LA approaches are of little use if the content itself is not annotated, as is typically the case for plain PDF documents. To support LA functionalities in a meaningful way, learning content must be digitally edited and enriched with metadata. Digitally edited content means that it is machine-readable and that user interactions can be tracked. Ideally, the learning content is organized into learning units that can be linked together, e.g., combining single multiple-choice tasks into a quiz. The minimum metadata of interest covers the users’ activities and their achieved results. For LA, additional metadata should be provided, e.g., targeted learning time, difficulty level, or knowledge type. For interoperability purposes, we encourage the use of standards such as IEEE Learning Object Metadata (LOM) or IMS Common Cartridge. The metadata needed for LA functionalities depends on the respective application’s purpose and the overall didactic concept. From our experience, the implementation and maintenance of metadata standards for one’s own content involves considerable effort, which, in the best case, is automated or already carried out during the creation of the individual content.
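To make the kind of annotation meant here concrete, the following minimal sketch shows how a single learning unit could be described; the field names and values are illustrative assumptions rather than an excerpt from our content model, although they map naturally onto IEEE LOM elements such as typical learning time and difficulty.

```python
# Illustrative metadata record for one learning unit (field names are
# hypothetical; in practice they would be mapped onto IEEE LOM or
# IMS Common Cartridge elements).
learning_unit = {
    "id": "unit-042",
    "title": "Solving linear equations",
    "format": "interactive-quiz",        # machine-readable, trackable format
    "typical_learning_time_min": 20,     # targeted learning time
    "difficulty": 0.6,                   # normalized 0..1
    "knowledge_type": "procedural",
    "prerequisites": ["unit-040", "unit-041"],
    "part_of": "quiz-basic-algebra",     # units can be linked into larger units
}

def is_ready(unit, completed_unit_ids):
    """A unit becomes recommendable once all its prerequisite units are completed."""
    return all(p in completed_unit_ids for p in unit["prerequisites"])
```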

2.5 Technical Requirements

Finally, a successful implementation and integration of learning analytics into corporate learning environments – especially in multi-institutional environments with distributed services – requires the use of widely accepted interoperability standards [9]. To address typical technical challenges such as IT security and network limitations (e.g., CORS) while still adhering to the given data-privacy and data-protection regulations, we identified and implemented multiple core technologies and protocols that adhere to established specifications. Notable standards include the learning record specifications Experience API (xAPI) and Caliper, which can be persisted in distributed Learning Record Stores [13] or user-controlled data wallets [14], the LTI (Learning Tools Interoperability) and cmi5 (computer managed instruction, 5th attempt) launch specifications, as well as standards for the exchange of content metadata, such as Common Cartridge or LOM. However, not every service in a complex educational ecosystem follows the same standard, and many direct links between individual adaptive services are difficult to maintain. Therefore, we recommend a middleware architecture for service orchestration [15]. This middleware can either be standards-agnostic and let the communicating services agree on a particular form of communication, or act as a standards translator, e.g., between LTI and cmi5 [16].
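As a small illustration of the xAPI specification mentioned above, the following sketch builds a learning record with a pseudonymized actor and posts it to a Learning Record Store. The endpoint, credentials, and salt are placeholders, and the salted-hash pseudonymization is our simplifying assumption, not necessarily the scheme proposed in [8].

```python
import hashlib
import requests

LRS_ENDPOINT = "https://lrs.example.org/xapi"   # placeholder endpoint
AUTH = ("lrs-client", "secret")                 # placeholder credentials
SALT = "per-installation-secret"                # placeholder salt

def pseudonymize(user_id: str) -> str:
    """Replace the real user id with a salted hash (GDPR-motivated pseudonym)."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()

statement = {
    "actor": {
        "objectType": "Agent",
        "account": {"homePage": "https://lms.example.org", "name": pseudonymize("jane.doe")},
    },
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered", "display": {"en-US": "answered"}},
    "object": {"id": "https://lms.example.org/units/unit-042/task-3"},
    "result": {"success": True, "score": {"scaled": 0.8}, "duration": "PT45S"},
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3", "Content-Type": "application/json"},
)
response.raise_for_status()
```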

A specific challenge arises when decentralized storage or replication of learning record data becomes necessary. We observed a typical corporate requirement: the operation of multiple, decentralized xAPI Learning Record Store (LRS) instances. Each subsidiary can have its own data handling constraints, resulting in the need for individual stores. This motivates LRS replication strategies, control of the data flow, and the operation of dashboards under customer sovereignty.
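A minimal sketch of such a replication strategy, under our assumptions: a periodic job pulls statements stored since the last checkpoint from a subsidiary LRS and forwards them to a central store. Endpoints and credentials are placeholders, and a production setup would additionally follow the paged "more" links and de-duplicate by statement id.

```python
import requests

SOURCE = {"url": "https://lrs-subsidiary.example.org/xapi", "auth": ("reader", "secret")}
TARGET = {"url": "https://lrs-central.example.org/xapi", "auth": ("writer", "secret")}
HEADERS = {"X-Experience-API-Version": "1.0.3"}

def replicate(since_iso: str) -> str:
    """Copy statements stored after `since_iso` from SOURCE to TARGET.

    Returns the 'stored' timestamp of the newest copied statement so the
    caller can persist it as the next replication checkpoint.
    """
    resp = requests.get(
        f"{SOURCE['url']}/statements",
        params={"since": since_iso},
        auth=SOURCE["auth"],
        headers=HEADERS,
    )
    resp.raise_for_status()
    statements = resp.json().get("statements", [])
    if statements:
        # The xAPI statements resource accepts a list of statements.
        requests.post(
            f"{TARGET['url']}/statements",
            json=statements,
            auth=TARGET["auth"],
            headers={**HEADERS, "Content-Type": "application/json"},
        ).raise_for_status()
        since_iso = max(s["stored"] for s in statements)
    return since_iso
```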

3 Requirements Engineering Tailored to Our Users’ Needs

While multiple studies have shown that users can benefit from AI supported LMS [17, 18], the question remains: which specific functionalities in an LMS do our users require? Thus, before we decided on functionalities for a learning management system, we gathered the requirements of different stakeholders. The focus lies on the learners, as they are the main users of the learning management system (LMS). Among others, we included teachers, authors who create the content for the LMS, and superiors who deal with education on an organizational level. Due to the pandemic, some requirements elicitation methods, such as observations, had to be excluded. Instead, we gathered the requirements in online workshops with one stakeholder group at a time. We prepared discussion rooms where users could work alone, in pairs, or as a whole group, depending on the topics and worksheets. A size of three to five participants per workshop worked well for us, giving every individual the opportunity to engage while still leaving enough room for discussion and brainstorming. We derived 89 user stories, which we converted into technical requirements. In consultation with stakeholders and software engineers, we prioritized the requirements and decided which AI functionalities could best support the users’ needs. Under strict observance of the requirements mentioned above, such as ethical and regulatory requirements as well as standards and norms, we focused on four AI supported functionalities for the key user groups, learners and teachers.

4 AI Supported Functionalities for Learners and Teachers

During the requirements engineering process, users and stakeholders asked for support in varying ways. Some requirements relate to functionalities that are not related to AI and were therefore not within the scope of this study. The remaining requirements were sorted, validated, and prioritized with stakeholders and software engineers. In summary, most of the requirements were related to learner support, e.g., adaptivity, exercises with feedback, learning recommendations, gamification elements, display of one’s own learning deficits and strengths, and around-the-clock support for questions. The second most frequently mentioned requirements concerned the instructors. The system should provide overviews of the students’ learning process, e.g., a presentation of the current knowledge and learning status as well as the most frequent errors during the course. Based on these requirements, we decided to implement four core functionalities that address several of them: a learning recommendation system, a chatbot, adaptive tasks, and a learning analytics dashboard for instructors. Figure 1 illustrates the general system design with the various services interconnected by a middleware.

Fig. 1. General system overview with various services interconnected by a middleware.

4.1 Learning Recommender System

The learning recommender system provides learning recommendations based on observed interactions, self-assessments, and practice successes [10]. Learning recommendations can refer to learning units that have not been worked on yet or that indicate an increased learning need. The system can collect and use the following data:

  • When and how often a learner opens a specific learning content.

  • How long a learner has left a specific learning content page open.

  • How learners assessed their knowledge about the content.

  • How well the learners perform at learning exercises or assessments.

  • The learner’s prior knowledge of a learning content, e.g., estimated from how many underlying learning units have been completed.

  • Whether the learning content is relevant for an upcoming face-to-face event.

  • How long ago the learners learned the content, accounting for forgetting over time.

  • How well other learners from the same course learned the content.

Learning records are collected throughout the whole learning process for each student individually and stored in a pseudonymized way. This data is not only the basis for the learning recommender system; it also offers instructors an overview of each student’s study progress and learning needs.
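The recommendation algorithm itself is not prescribed here, so the following sketch only illustrates how the signals listed above could be combined into a simple "learning need" score used to rank learning units; all weights and the forgetting model are illustrative assumptions, not the study's implementation.

```python
from dataclasses import dataclass

@dataclass
class UnitSignals:
    """Observation data per learner and learning unit (values normalized to 0..1)."""
    opened: bool                 # has the unit been opened at all?
    self_assessment: float       # learner's own rating of mastery
    exercise_score: float        # average score in related exercises
    prerequisites_done: float    # share of underlying units completed
    relevant_soon: bool          # needed for an upcoming face-to-face event?
    days_since_last_use: int     # for modelling forgetting
    peer_score: float            # average score of course peers

def learning_need(s: UnitSignals) -> float:
    """Illustrative weighted score; higher means 'recommend this unit sooner'."""
    forgetting = min(s.days_since_last_use / 30.0, 1.0)   # simple linear decay, capped
    need = (
        0.25 * (0.0 if s.opened else 1.0)        # unseen content
        + 0.20 * (1.0 - s.self_assessment)       # low self-assessed mastery
        + 0.20 * (1.0 - s.exercise_score)        # weak exercise results
        + 0.15 * forgetting                      # content fading over time
        + 0.10 * max(s.peer_score - s.exercise_score, 0.0)  # behind the peer group
        + 0.10 * (1.0 if s.relevant_soon else 0.0)
    )
    return need * s.prerequisites_done           # only recommend reachable units

# Ranking: sort candidate units by descending learning_need and present the top ones.
```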

4.2 Chatbot

A chatbot system can help answer frequently asked questions that have been implemented into the system. The chatbot recognizes the user’s question and its intent and offers the answer that fits best.

In our educational environment, the chatbot lowers system barriers as a central focal point for answering content-related and organizational questions. Users benefit from getting answers right away and around the clock, and the chatbot relieves the instructors’ workload, since they no longer have to answer frequently asked questions again and again.

Currently, the chatbot supports more than 250 topics with frequently asked questions such as “How does multiplication work?”, “What are prime numbers?”, “How do I expand parentheses?”, “How does the Pythagorean theorem work?”, or “What are the exam requirements?”. For this purpose, a glossary was connected, which enables the chatbot to recognize common terms and define them on request.

The chatbot is optimized by manually reviewing the questions asked, e.g., after the end of a course. On this basis, the developers and editors extend the system with answers to previously unknown questions and train it to better recognize posed questions and assign them to the appropriate answer.
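The underlying natural-language component is not detailed here; the following deliberately simple sketch illustrates the general idea of intent recognition for FAQ-style questions using a bag-of-words overlap, with hypothetical intents and answers.

```python
import re

# Hypothetical FAQ intents: each intent maps a few trigger phrases to a canned answer.
INTENTS = {
    "prime_numbers": {
        "examples": ["what are prime numbers", "definition of prime number"],
        "answer": "A prime number is a natural number greater than 1 with exactly two divisors.",
    },
    "exam_requirements": {
        "examples": ["what are the exam requirements", "what do i need for the exam"],
        "answer": "The exam covers all units of the course; see the course page for details.",
    },
}

def tokens(text: str) -> set:
    return set(re.findall(r"[a-zäöüß]+", text.lower()))

def answer(question: str, threshold: float = 0.4) -> str:
    """Return the answer of the best-matching intent, or a fallback message."""
    q = tokens(question)
    best_score, best_intent = 0.0, None
    for intent in INTENTS.values():
        for example in intent["examples"]:
            e = tokens(example)
            score = len(q & e) / max(len(q | e), 1)   # Jaccard overlap
            if score > best_score:
                best_score, best_intent = score, intent
    if best_intent and best_score >= threshold:
        return best_intent["answer"]
    return "Sorry, I do not know that yet; the question is logged for manual review."
```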

4.3 Adaptive Tasks

The adaptive and gamified tasks are primarily intended for training purposes, e.g., repetition according to the spaced repetition method [19]. In our course setting they are an optional element because of their adaptive nature: in a mandatory setting, where students would have to complete a given set of tasks in a given sequence, any adaptation would be obsolete. To enable quick, casual, and successful learning, suitable task types and motivational incentives are needed. For learning objectives in the domain of natural sciences and general knowledge, classic task types such as multiple choice, hot spot, and free text are suitable. Classic gamification methods such as leaderboards, achievements, and playing against each other in quiz duels provide additional motivational incentives in line with the ideas of immersive didactics [20].

For the study, the software was extended by an adaptivity framework [21] and suitable content was prepared. The latter contains tasks from mathematics, notably equations. The AI component adapts the exercise tasks, i.e., within a quiz the difficulty of the tasks is individually adjusted to the user. Conceptually, the adaptivity approach follows the four-phase adaptivity cycle [22]. The adaptivity framework covers the analysis phase and the generation of user models. The output of the framework feeds into the selection phase and includes a new difficulty level, normalized to 0–100% [23]. The computations are personalized for individual users because the adaptivity framework primarily uses user-specific xAPI tracking data. The game itself produces a performance score as a linear weighted sum of correctness score, completion time, and base difficulty category. The adaptivity framework retrieves the most recent xAPI statements (with performance score results) and computes new difficulty levels based on a windowed harmonic sum approach. The effect is a typical dynamic difficulty adjustment, so-called rubber-banding, where the difficulty level depends on the user’s performance (Fig. 2).

Fig. 2. Dynamic difficulty adjustment for a quiz game; (left) number and type of multiple-choice questions are adjusted as well as the time budget; (right) visualization of the various metrics.
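The exact formulas of the adaptivity framework are documented in [21, 23]; the sketch below only illustrates the general mechanism described above: a performance score formed as a linear weighted sum of correctness, completion time, and base difficulty, and a new difficulty level derived from a harmonically weighted window of recent scores. The concrete weights, the window size, and the time normalization are assumptions for illustration.

```python
def performance_score(correct_ratio: float, time_used_s: float,
                      time_budget_s: float, base_difficulty: float) -> float:
    """Linear weighted sum of correctness, completion time, and base difficulty.

    Inputs other than the raw times are normalized to 0..1; the weights and the
    inversion of completion time (faster = better) are illustrative assumptions.
    """
    speed = max(0.0, 1.0 - time_used_s / time_budget_s)
    return 0.6 * correct_ratio + 0.2 * speed + 0.2 * base_difficulty

def next_difficulty(recent_scores: list, window: int = 5) -> float:
    """Windowed, harmonically weighted average of the most recent performance
    scores, mapped to a new difficulty level in 0..100%."""
    window_scores = recent_scores[-window:][::-1]   # most recent first
    if not window_scores:
        return 50.0                                  # neutral default difficulty
    weights = [1.0 / (k + 1) for k in range(len(window_scores))]  # 1, 1/2, 1/3, ...
    weighted = sum(w * s for w, s in zip(weights, window_scores)) / sum(weights)
    return round(100.0 * weighted, 1)

# Example of rubber-banding: improving recent scores push the next difficulty up.
history = [0.4, 0.55, 0.7, 0.8, 0.85]
print(next_difficulty(history))   # roughly 74.5
```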

4.4 Learning Analytics Dashboard for Instructors

In learning analytics, the observed interaction data is evaluated with the goal of optimizing learning. The analysis can be carried out solely by the computer, for instance by an adaptivity system, or by the users themselves by looking at selected, visualized data aggregations in so-called learning analytics dashboards. Based on the requirements analysis with the customer, dashboards were designed primarily for the instructor level. A dashboard was integrated that presents overview statistics about the course, for example the average performance in exercises or the average usage time. This helps instructors identify explanation needs and optimize the course or curriculum. Figure 3 depicts how this approach was realized. Technically, the learning analytics dashboard was implemented using the open-source software Learning Locker, an xAPI Learning Record Store (LRS) with reporting and dashboard functionalities. The various visualization components (widgets) were developed iteratively, considering the available input data and the customer’s requirements. The dashboard was embedded in an LTI wrapper for integration into the superordinate assistance system.

Fig. 3. Example screenshot of the realized software demonstrator, here the learning analytics view for the instructors.
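Learning Locker’s own query and aggregation interface is not reproduced here; as a rough sketch under the assumption that statements have already been exported from the LRS, a dashboard widget’s course overview could be aggregated as follows.

```python
from collections import defaultdict
from statistics import mean

def course_overview(statements: list) -> dict:
    """Aggregate exported xAPI statements into per-unit average scores and
    attempt counts - the kind of figures an instructor-facing widget displays.

    Assumes each statement carries object.id and result.score.scaled, as in
    the statements produced by the exercises described above.
    """
    scores = defaultdict(list)
    for st in statements:
        scaled = st.get("result", {}).get("score", {}).get("scaled")
        if scaled is not None:
            scores[st["object"]["id"]].append(scaled)
    return {
        unit: {"attempts": len(vals), "avg_score": round(mean(vals), 2)}
        for unit, vals in scores.items()
    }
```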

Our technical solution allowed us to record implicit and explicit user feedback. Implicit feedback includes the number of page views or the duration of a page visit. Explicit feedback includes the answers given to the exercises or the self-assessment, on a scale from 1 to 5, of how well the user thinks the learning material was learned. This feedback is used to optimize the AI, but it also provides insights for the evaluation. During the fifth course the AI supported LMS was used 167 times, with users staying online between three and a half and eight minutes on average. In total, 4,814 interactions were recorded, about 29 per session; 97.6% of the time the LMS was used on a laptop or desktop PC, and only 2.4% via smartphone. We also observed a decline in access rates throughout the week: the 51 visits on Monday declined to 16 visits on Friday. The highest access rates occurred before noon.

Regarding the AI functionalities, we observed the following access rates:

  • 56 visits for adaptive tasks.

  • 50 visits for learning recommendations.

  • 35 visits for the chatbot.

  • 34 visits for the individual learning indicators.

5 Evaluation

The goal of the evaluation focuses on the third research question (RQ3): Do the AI supported functionalities offer a benefit for the users? As suggested in our previous work [12], we decided on a mixed-methods approach with quantitative and qualitative evaluation methods. During a course with mathematical learning content at a higher education institution, the participants first used an LMS without AI supported functionalities for a week and our AI supported LMS the week after. We used an online survey tool and conducted the surveys on the last day of usage of each LMS. Both LMS were evaluated with the same set of questionnaires, and we added questions specific to each AI supported functionality, e.g., asking for suggestions for improvement. The set of questionnaires contained the User Experience Questionnaire [24], the Technology Acceptance Model [25], a self-developed questionnaire for learning media, the affinity for technology interaction (ATI) scale [26], as well as tailored questions about the usage of the LMS in the course and demographic questions. On a voluntary basis, the participants could create a subject code, allowing the survey answers to be coupled with the individual use of the AI supported LMS while guaranteeing anonymity. In the last week of the course, we additionally conducted semi-structured interviews with four learners and two instructors, which took place in person at the institution.

Due to the low number of participants who filled out the online questionnaires at both times of measurement, the data from the quantitative questionnaires were not sufficient for a statistical analysis. Therefore, we mostly rely on the answers to the open questions and the interviews for the evaluation results. For the qualitative analysis we used a structured qualitative approach [27]. We report on the open-answer questions from the online questionnaire and interpret them using the users’ statements from the interviews.

5.1 Evaluation Results

The evaluation indicates an added value of AI supported assistive e-learning in a small course setting for learners and instructors. For the evaluation we distinguished between the learning recommender system itself and the integrated learning progress indicator. Six of the nine learners who answered the open question about the added value of the learning progress indicator found that it offered added value. According to the interviews, this was due to its good clarity in comparison to the alternative system and the clear overview of which exercises had already been solved. Displaying learning deficits helped the learners adjust. Two learners did not find the indicator meaningful enough, and one person stated that she/he did not use this functionality. Seven learners answered the question about the learning recommender system. One person did not find the learning recommender system helpful, as the teachers determine the order of the learning content. The other six learners found the learning recommender system helpful. They liked the clarity about which content should be worked on next and reported a high motivation to reach a fully processed learning progress indicator. In combination with the self-assessment, the functionality was rated as helpful.

Opinions differed regarding the chatbot. Some users found the chatbot very helpful and used it frequently. Others found the chatbot obsolete and preferred to continue using Google. Only a few users tried the adaptive tasks. Here, the gamification elements along with the variation and the adaptivity of the difficulty level were praised.

All users – the learners as well as the instructors – emphasized during the interviews that they do not wish for any e-learning system to replace face-to-face teaching.

6 Discussion and Lessons Learned

A field test of the AI supported demonstrator at a higher education institution confirmed a clear benefit of the implemented functionalities, but also revealed a clear need for further development and change. One limitation of the study was the small course sample. Having more participants would enable a statistical analysis of the evaluation questionnaires and would greatly benefit the AI components, which work better with more input data. Therefore, we believe that having more users would lead to higher user acceptance with respect to RQ3. While the application of an AI supported LMS in small settings is challenging – e.g., because only little data is available – such settings represent a large proportion of the real-world scenarios for which we deploy AI supported functions.

In order to implement an AI supported LMS successfully, many factors must be considered. A didactical concept is the most important prerequisite. Defining the purpose of the AI supported LMS and deciding beforehand how to integrate the system into the course is indispensable. AI supported learning management systems should not replace face-to-face teaching but support the learning process, including preparing and post-processing lessons, in an expedient way. In our experience, communicating this supporting purpose and making clear that the system will not replace face-to-face teaching helps with one of the biggest challenges in implementing any new system: gaining user acceptance. The users’ fears and reservations concerning an AI supported LMS should be addressed, e.g., by explaining how data privacy is handled.

Learning content must be available digitally and should be appealing for students, e.g., by integrating quizzes, animations, interactive graphics, videos, or even games. Additionally, the learning content must contain metadata and enable the standardized collection and storage of user data, e.g., using xAPI. For ergonomic usability, the AI supported functionalities should be accessible to users without detours. In this respect, meeting the users’ requirements should always take priority over the technological solution. For some stakeholders, e.g., administrators and instructors, training in handling the AI supported LMS can be beneficial.

7 Conclusion and Further Research

Effective support of individual learners seems possible based on the experiences from this study. Future studies should widen the scope of this study and gather requirements for team training, mobile learning, and learning on the job. Long-term studies that accompany learners and their development over several years could offer insights that help optimize AI supported recommendations and evaluate the learning journey. Implementing an extended range of functions might offer additional benefits for users and should be field-tested and evaluated. From a technical perspective, future studies could focus on methods of data maintenance in order to optimize AI supported functionalities. As usability and user experience (UX) have a tremendous effect on user acceptance and the intention to use an LMS [28], usability studies should be included in all further research on LMS.