
1 Research Motivation and Contribution

The usage of learning tools with automated feedback has been the subject of much research, be it with intelligent tutoring systems, with explorative tools, or with even simpler training systems. All are praised for supporting differentiation according to the knowledge and learning processes of each learner, as well as for providing feedback in arbitrary contexts of work.

However, all of them suffer from a common issue: The quality and relevance of the feedback they can provide is limited to what the designers of the software were able to anticipate. It has thus been natural to consider the teacher as a tutor who coaches learners in the usage of the learning tool and is able to advise at the right time.

In this paper, we contend that learning tools’ automatic feedback can become smart by employing the semi-automatic feedback paradigm, whereby the teacher is able to complement the feedback generated by the learning tool with relevant feedback of her own. To this end, we describe the model of a system that supports the teacher in analyzing the learner’s process and producing feedback that is relevant to the learner’s context of work. This support is a form of integrated learning analytics.

The model is illustrated by an implemented system containing two learning tools and integrated with two different learning management systems. The research presented here reports on learning tools and their evaluation in studies with pre-service teachers in a project called SAiL-M.Footnote 1

In this environment, as in most higher education environments, the learners are considered mostly responsible for their learning process but can be supported by individual feedback relevant to their work.

2 Literature Review

2.1 Smart Learning Environments

The term “smart learning environments” has been introduced recently in the field of learning and teaching to describe “… a third pervasive and significant revolution in instruction” (Dodds and Fletcher 2004). However, similar to the beginnings of the field of intelligent tutoring systems, to date there is no fully agreed understanding of this term. Dodds and Fletcher characterize them through the “… real-time adjustment of instructional content, sequence, scope, difficulty, and style to meet the needs of individuals” (Dodds and Fletcher 2004). On the other hand, smart learning environments are also often understood as an improvement of physical environments with novel technologies to provide a smart, interactive classroom with increased interactivity, personalized learning, efficient classroom management, and better student monitoring (Yesner 2012). Last but not least, smart learning environments are also often related to ambient technologies, describing learning environments that exploit new technologies and approaches, such as ubiquitous and mobile learning, to support people in their daily lives in a proactive yet unobtrusive way (Mikulecký 2012; Buchem and Pérez-Sanagustín 2013).

In the following, we would like to understand smart learning environments as systems that apply novel approaches and methods on the levels of learning design and instruction, learning management and organization, and technology to create a context for learning that provides learners with opportunities for individualized learning and reflection in a motivating way, and that allows teachers to facilitate learning, providing scaffolding and inspiration based on the learner’s needs and a careful observation of her learning activities.

Therefore, we would also like to point out that, in our understanding, approaches toward smart learning environments cannot be restricted to the technological level only.

2.2 Intelligent Tutoring Systems

Sleeman and Brown first coined the term intelligent tutoring systems (ITS) to distinguish such systems from previous computer-aided instruction (CAI) systems (Sleeman 1982); these tutors follow the problem-solving steps of the student through the use of a detailed cognitive model of the domain (Anderson and Pelletier 1991). It is interesting to note that, in the beginning, ITS did not denote a well-understood principle or a commonly agreed approach. Current definitions describe them as learning systems that give feedback and hints on each step (VanLehn 2011). Another perception is that of a model-tracing tutor, where the machine takes the role of a human tutor and follows the inputs of the learner.

While intelligent tutoring systems failed for quite a while to have a real impact on education and training in the world (see Corbett et al. 1997a, b), today there exist a number of success stories, and ITS have been applied successfully in some domains. At present, systems such as ASSISTments (Feng et al. 2009; Worcester Polytechnic Institute 2013), a free Web-based service supporting math classes in US schools, have been used by hundreds of students each year on a regular basis since 2004 and have proven their benefits in several studies (e.g., Heffernan et al. 2012). Also, technologies simplifying the development of intelligent tutoring systems even for non-programmers have been made available (e.g., Koedinger et al. 2003, 2004), allowing for a faster development of such systems for different application areas.

Still, intelligent tutoring systems have not fulfilled all the promises that were made earlier. Some reasons for the lack of penetration of intelligent tutoring systems were already stated over 15 years ago (Corbett et al. 1997a, b), and in principle, they still seem to hold today. Intelligent tutoring systems are still expensive to develop, and in practice, their development requires substantial programming resources, despite the mentioned initiatives to overcome these limitations. For this reason, many initiatives in this area still originate in the fields of computer science and artificial intelligence (AI) in particular, and they often seem to focus on the deployment and improvement of interesting AI algorithms rather than emphasizing the educational perspective and trying to enhance the cost/benefit trade-off regarding educational effectiveness (Corbett et al. 1997a, b). This may also have led to the relatively low reputation of intelligent tutoring systems, although VanLehn (2011) was able to show in a recent study that intelligent tutoring systems are nearly as effective as human tutoring.

Paradigms from computer-aided instruction targeting cost reduction and enhanced objectivity in teaching can still be identified as driving forces for developments in the area of intelligent tutoring systems. As such, the idea of having an intelligent tutoring system replace the teacher is still predominant. While this was criticized early on, and requests were raised for supportive cognitive tools in the service of explicit pedagogical goals, supporting both learners and teachers (e.g., Reusser 1993), this focus still seems to prevail in most ITS applications today.

The approach presented in this article differs from the typical concepts found in intelligent tutoring insofar as it follows more closely the recommendations of Reusser (1993) to focus on a better integration into school routine and to provide smart support for the teacher and tutor as well.

2.3 Assessment and Feedback

Smart learning environments and intelligent tutoring systems both relate to approaches in the field of assessment. Assessment usually has one of the following two purposes:

  • Judgment. In this type of assessment—usually denoted as summative assessment—the goal is to decide whether the learner has passed a course and at what level or grade. It is mostly an assessment of learning and used to measure students’ understanding of a specific topic (Ainsworth and Viegut 2006).

  • Development. Assessment in this context is denoted as formative assessment. Here, assessment results are used by teachers to analyze the students’ concepts and levels of understanding with the intention to adapt teaching according to the students’ needs and to provide adequate feedback, and by learners who can evaluate their advances based on this feedback. This is an assessment for learning, and the results are not to be used to grade students’ work (Ainsworth and Viegut 2006).

Formative assessment and feedback have been identified as important factors in teaching and learning with a high effect size (e.g., Hattie 2008), and adequate feedback plays a crucial role in formative assessment (Bescherer et al. 2009; Brown 2004). Shute defines formative feedback as “… information communicated to the learner that is intended to modify the learner’s thinking or behavior for the purpose of improving learning” (Shute 2008). Brown (2004) states that formative feedback “needs to be detailed, comprehensive, meaningful to the individual, fair, challenging and supportive, which is a tough task for busy academics” (Brown 2004). Adequate feedback is agreed to be an important factor in the support of learners, and it was found to have a potentially high impact on learning, though some studies revealed that inadequate feedback may also obstruct learning (Kulik and Kulik 1988; Anderson et al. 1990). Besides several other aspects, timing was often named as an important factor for delivering effective feedback, and immediate feedback proved more effective than delayed feedback in several studies (Anderson et al. 1990; Butler et al. 2007; Shute 2008; Singh et al. 2011). However, the results are inconsistent, and Hattie (2008) identified only low effect sizes regarding the effect of timing in his meta-studies. A possible explanation is that whether immediate or delayed feedback is beneficial depends on the level (task level versus process level) and the task difficulty (Hattie and Timperley 2007).

Hattie also points out the importance of feedback not only for the learner, but also for the teacher, to synchronize learning and teaching and to make both more effective (Hattie 2008; Chap. 9 The contributions from teaching approaches - I: Feedback). This aspect relates to the aforementioned strong connection of feedback to assessment, highlighting however the information gained on the side of the teacher in feedback processes. The approach presented in this paper takes up this aspect in particular, providing a framework that allows students to receive timely and detailed feedback on specific process steps, while also allowing teachers to “make learning visible” (Hattie 2008) and to access detailed information on the learning processes of classes and individuals.

In technology-enhanced learning, the focus in recent years has clearly been on automating the provision of feedback to learners, leading to solutions strongly connected to the fields of artificial intelligence and intelligent tutoring systems. Lately, a different approach and philosophy has been proposed, where technology takes the role of assisting teachers in providing feedback in an automatic or semi-automatic way, covering all cases where feedback can be provided following a standardized schema (Müller et al. 2006; Bescherer et al. 2011). This disburdens the teacher from managing assessments and feedback in these standard cases, allowing her to focus on interesting cases and situations where very specific feedback is required. Corresponding approaches lead to completely different system architectures, where the teacher plays an important role, and assessment and feedback are provided in a hybrid approach and a semi-automatic way. The work presented in this chapter closely follows this approach.

2.4 Learning Analytics and Teaching Analytics

Computer-based learning tools provide the benefit that all sorts of digitally available learning data can be collected and analyzed. Corresponding approaches fall into the field of learning analytics. There have been a number of proposals for defining learning analytics, which pursue somewhat different objectives and only partially overlap (Siemens 2011). We follow Rebholz et al. (2012) and relate learning analytics to approaches and technologies targeted at analytical reasoning facilitated by visual interfaces employed for teaching or learning. Objectives are the detection of interesting aspects and patterns in learner and learning data, building hypotheses based on these detected structures, confirming such hypotheses, drawing conclusions, and possibly communicating the results of this analytical process (Rebholz et al. 2012).

Teaching Analytics is sometimes used to denote specific approaches in the field of learning analytics. However, the term is often used inconsistently. For instance, it is used on the one hand to describe a subfield of learning analytics, focusing on the design, development, evaluation, and education of visual analytics methods and tools for teachers in primary, secondary, and tertiary educational settings (NEXT-TELL 2013). On the other hand, Vatrapu et al. (2011) use the term Teaching Analytics in the context of an extended view and model, targeting a collaborative analysis of learning data by teaching experts, visual analytics experts, and design-based research experts.

In learning analytics, the focus is often on the prediction of student performance with respect to learning across a variety of courses and academic programs, and on the identification of at-risk students and the design of educational interventions (e.g., Arnold 2010; Zhang and Almeroth 2010; Essa and Ayad 2012). Far less frequently, learning analytics methods are applied to monitor student performance on the level of individual tasks and learning processes. Corresponding approaches require a more deliberate design of adequate interfaces. Such interfaces should support teachers in the effective interactive analysis of learning data and allow them to provide timely, meaningful, actionable, customized, and personalized feedback to students (Vatrapu et al. 2011; Rebholz et al. 2012).

Learning management systems such as the widespread Moodle platform offer elementary forms of user tracking, displaying such information as the start or end of use of learning resources, visits, or participation in pre-built activities such as polls. Moodle’s tracking is, by default, very detailed and could be used to understand a learner’s progress; it can thus be considered to play a learning analytics role. However, most tracking views are tabular in nature and quickly become unmanageable as soon as a large diversity of tracked events appears, e.g., diverse user inputs or feedback sequences. We contend that such generic tracking systems lack the specificity of the learning tools, which would allow a teacher or learner to understand the steps of the solution process.

Besides such generic logging mechanisms in learning management systems, some more sophisticated approaches have appeared in learning analytics. LOCO-Analyst (Jovanovic et al. 2007, 2008) represents a logging infrastructure targeted at providing teachers with detailed information on students’ learning processes based on their interactions with learning objects. The goal is to generate meaningful feedback for educators responsible for updating and revising course material. While the LOCO-Analyst approach has similarities to the logging functionalities proposed and presented in this chapter, it seems to be restricted to tracking accesses to different learning objects and does not allow a fine-grained view of the flow of actions of a single user using dynamic learning tools such as those producing feedback. The FORMID project (Guéraud and Cagnat 2006) instead produced monitoring facilities for e-learning, supporting teachers in following students’ learning activities online in real time. The work presented here differs from the FORMID approach in that such support is not tied to a learning scenario. As a result, logging mechanisms, user interfaces, and analytical views need more flexibility.

Recording learning activities and making the recordings available for further analysis require suitable logging infrastructures. The emerging set of specifications called Tin Can API (http://tincanapi.com/), the open-source tool sets Contextual Attention Metadata (https://sites.google.com/site/camschema/), and Learning Registry (http://www.learningregistry.org/) all aim at capturing learning activities and storing them in central repositories. The major drawback of these solutions is the difficulty of doing statistics on the data without dedicated log viewers offering views that are sufficiently representative of the learner’s activity to quickly understand the solution process and where problems were encountered.

2.5 Security and Anonymity in Learning

A teacher watching over the shoulder of a student who is performing an exercise and receiving feedback about her actions is in a situation of mutual agreement: The student may feel free to explore or even game the machine, but she will certainly not want to completely ignore the presence of the teacher by acting unreasonably, for example.

Learning activities tracked by a logging system and potentially displayed to a teacher create the same situation. Not knowing that they are being watched, and then being told by a teacher that a given action could have been done better, would trigger a flurry of counter-reactions in students’ minds, which may go as far as refusing to use the learning tool. Such privacy issues have been largely ignored in the learning analytics literature as far as we could read.

National regulations in EU countries about the usage of Web sites boil down to prohibiting the storage of personal information without consent (e.g., German Federal Data Protection Act (BDSG) §3, §4) and thus also require consent before any detailed tracking of users.

These two considerations lead us to conclude that:

  • Learners should be warned about being tracked and about teachers being able to view anonymous logs.

  • Learners should be allowed to stop tracking, should they want to play with more freedom.

  • Learners that wish to disclose their work sessions as a personal series of actions should be able to do so. This way, they enable teachers to understand what they have done.

3 Research Design

In the following, we introduce our research design. For this, we first present example scenarios motivating and serving as a guideline for our work. Then, we provide a short overview on the general approach and the research methodology taken.

3.1 Motivating Examples

3.1.1 Scenario I: From Demonstration to Homework

Our learning situation takes place in an undergraduate mathematics program. Mary, the teacher, has been introducing the topics of mappings, injections, and surjections. After the theory and examples of the different types, the lecture included a demonstration of the learning tool Squiggle-M, which explores this topic. The week’s assignment includes a few exercises to be done using the learning tool, such as the exercise displayed in Fig. 8.1.

Fig. 8.1

The Squiggle-M learning tool asking if a relation qualifies as a function

In the afternoon, Philip is doing his homework and follows the steps of the learning tool. The first few exercises are easy with the help of the videos. He then attempts to recognize when the graph of a function is injective and faces challenges. Going back and checking definitions helps him a little, but he only succeeds in the simplest case of an exponential function. He thus wishes to ask his teacher and sees that the problem reports offer this possibility. A dialog opens, similar to that of Fig. 8.2, where he can enter a short description of what he wishes to achieve; a checkbox lets him give the teacher access to his logs so that she can understand what was done.

Fig. 8.2

Squiggle-M displaying a function by its graph and a ladder with a dialog of the student’s question to the teacher being formulated

Mary receives a notification by email. It contains a screenshot and a link to the session that Philip just completed. Looking at the log, Mary can analyze what he did, reproduce it herself, and give short hints on the actions: “To evaluate injectivity, it may be useful to click twice the red bullet and lay it on the x-axis, so that you can see if you can have two points that are mapped to one.” Based on this feedback, Philip is able to employ the right tool and evaluate injectivity and surjectivity for each of the proposed functions.

3.1.2 Scenario II: Encouraging the Tools’ Usage

Jonathan is a university teacher educating future mathematics teachers. He wishes to introduce the proper use of proofs by induction, a topic that is well known to create confusion in young students but remains quite important for many mathematical proofs. Thus, he decides that the usage of a learning tool to train such proofs is desirable. ComIn-M (Rebholz and Zimmermann 2013) is such a learning tool. It runs in the Web browsers of contemporary laptops and desktops, the same browsers through which all students access the university’s learning management system.

Jonathan contacts the editors of the learning tool, who provide him with an online learning activity. There, he can read the instructions for deploying the learning tool within the learning management system: simply uploading a content package creates an online resource from which students can start the learning tool. He will make it visible a bit later.

Following the didactical design pattern Technology on Demand (Bescherer and Spannagel 2009), Jonathan first presents a few situations of proofs by induction and their typical errors in class and then introduces the usage of the tool. To do so, he presents the tool and performs one complete exercise with it. Similarly to Fig. 8.3, Jonathan is able to present connections between the words and graphics on the blackboard and the learning tool projection. His approach mostly follows the orchestrationFootnote 2 explain-the-screen (Tabach 2013). At the end of the session, he invites the students to use the tool, demonstrating how it can be started in the learning management system; one of the exercises of this week’s assignment is based on the learning tool.

Fig. 8.3

Explaining the usage of an interactive learning tool

Because the exercises are optional, he cannot be sure that they will be performed. A few days later, as he feared, very few students have actually attempted the requested exercise, as he can see from the log views in Fig. 8.4: Only 4 of his 150 students have attempted it, and, as the assessment results table on the right shows, none have succeeded. In the graphic, red cells represent wrong solutions, light red cells incomplete solutions, while (the missing) blue cells would represent correct solutions.

Fig. 8.4

A summary view to gain overview of the class’s usage of the learning tools

For the following exercise session, a change of plan is therefore communicated: students are asked to bring their laptops to the university. Jonathan’s objective is to let the students go through as many of the ComIn-M exercises as possible in small groups in front of their laptops, while he keeps his eyes open to ensure that they are progressing.

During the help session, students are first briefed on the goal they are to aim at, together with a set of practical and strategic instructions. Most of the rest of the class is spent in the classroom orchestration monitor-and-guide, where the teacher goes to each student providing individualized help on demand, answering typical help requests in just a few minutes, in an attitude similar to the situation depicted in Fig. 8.5. The teacher’s work there generally involves understanding the students’ states, what they have done to reach them (which can be shown or told by the students, e.g., a particular type of problem that keeps being reported by the learning tool), and what understanding led them to these manipulations.

Fig. 8.5

Helping students in presence

The decision to help can follow either a student’s initiative or a teacher’s observation. This observation can be over the shoulder or based on some analytics representations.

3.2 Derived Research Design

The scenarios above illustrate the underlying vision of a research collaboration and project aimed at improving the quality of teaching in the early semesters at university level. The project, titled SAiL-M (Semi-automatic analysis of individual learning processes in mathematics), focused on the domain of mathematics, although the developed concepts can be generalized to other areas.

In this project, a design-based approach in educational research (Bannan-Ritland 2003; Reeves et al. 2005) was followed, starting with the mining and formulation of pedagogical design patterns for activating learning environments for mathematics at university level and the advancement of corresponding scenarios, followed by an adaptation of learning tools that allow for the assessment of learning processes. These tools and learning environments as well as the developed application scenarios were applied and evaluated. Here, the focus was on evaluating the effectiveness of process-oriented feedback with various diagnostic methods. The evaluation also concentrated on the local impact of the implemented methods and applied toolsets (Bannan-Ritland 2003), limited to pre- and posttests and not incorporating control groups.

In this chapter, we report selected results from this collaboration and initiative, which focus on the aspects of detailed logging of learners’ activities, semi-automatic assessment and feedback, and learning analytics solutions to support teachers in providing adequate support to learners. A general model, which has been developed in the context of the SAiL-M project, underlies the developed methods and toolsets. We introduce this model and the corresponding developments in the following section.

4 The System

The scenarios and research design discussed in the previous section led us to a novel model for learning scenarios with enhanced support for learners based on adequate and timely feedback, and a stronger integration of teachers in this process compared to standard intelligent tutoring systems. We describe and discuss this model in some more detail in the following subsection. Furthermore, we present tools and techniques that were developed in a research cooperation in Germany to implement this model.

4.1 Proposed Approach: The SMALA Model of Smart Learner Support

In this section, we describe a conceptual model that places the learning tools within an architecture enabling automatic and semi-automatic feedback.

We call this model the SMALA model. Its purpose is to provide relevant feedback on the learning processes that occur in the learners’ various situations. These situations are, in no particular order:

  • Learning in classroom, in a plenum, where the lecture concepts are fresh in memory.

  • Learning in the lab, in small groups, or individually, where individual assistance can be provided to specific requests.

  • Learning in homework and other rehearsal situations, where the student’s liberty to explore potential avenues is greatest.

Each of these situations implies different forms of feedback and different types of reflection on the individual learning process. At the center lies the learning tool, which represents the domain being learned and offers the necessary manipulatives. The SMALA model complements the automated learning tool with an architecture that supports teachers in providing feedback relevant to the learning process beyond the automated feedback of the learning tool. This complement is ensured by:

  • A deployment within the existing learning tools infrastructure of the school: the Learning management system (LMS, the normal place where learning activities are coordinated).

  • The recording of traces of the actions of the learning tools in a way that allows sequences of actions to be viewed.

  • The display of anonymized traces of the usages of the learning tools to the teachers to obtain an overall impression of their usage.

  • The display of identified traces of the usages when the learners explicitly request feedback.

Figure 8.6 summarizes the architecture of this model. It highlights where analytical processes happen (where the gear icons appear) and the workflow the teacher follows to prepare the learning tool so that it is ready to be used and tracked. This includes obtaining a new activity, which encompasses the planned set of learning tool usages, the deployment of the learning tools in the LMS, and the invitation of the students to that place. This attaches the students’ logs to the right surrounding and makes them browsable by the teacher: individually when the student asks, globally (i.e., anonymously) otherwise.

Fig. 8.6

System architecture corresponding to the SMALA model

In this model, feedback is produced either by the learning tools’ automatic assessment or by the teacher: individually on a student’s request, or globally, for example, in the classroom.
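To make the access rule of this model concrete, the following minimal sketch (in Java, with illustrative names rather than the actual SMALA API) shows the policy just described: a learner always sees her own trace, while a teacher sees a session under its pseudonym only after the learner has explicitly requested disclosure, and anonymously otherwise.

```java
// Minimal sketch of the trace-access rule implied by the SMALA model.
// All class and field names are illustrative, not the actual SMALA API.
public final class TraceAccessPolicy {

    public enum Viewer { TEACHER, LEARNER }

    /** A recorded work session, stored under a pseudonym. */
    public record Session(String pseudonym, boolean disclosureRequested) {}

    /** How a session may be labeled when shown to a given viewer. */
    public static String visibleIdentity(Session session, Viewer viewer, String viewerPseudonym) {
        if (viewer == Viewer.LEARNER && session.pseudonym().equals(viewerPseudonym)) {
            return session.pseudonym();      // learners always see their own trace
        }
        if (viewer == Viewer.TEACHER && session.disclosureRequested()) {
            return session.pseudonym();      // disclosed on the learner's explicit request
        }
        return "anonymous";                  // otherwise only anonymized, aggregate viewing
    }
}
```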

4.2 Tools and Techniques: From Learning Tools to Analysis

In the following, two SMALA-based learning tools are presented in detail: a training tool for proving by mathematical induction and a learning tool for investigating the concept of relations and functions. Both the instructional design and the technological implementation underlying these tools are highlighted. As indicated in the motivating scenarios above, we distinguish between the students’ view on the learning tools and the analytics view for the teachers.

4.2.1 From Learning Design to Learning Tools to Analysis

In order to support learners in introductory math classes at university, we have integrated various Web-based learning tools into the SMALA infrastructure that offer training and advanced investigation opportunities on specific mathematical subject domains. Although the tools are quite different in focus (e.g., discovery-oriented tool versus training tool), they all rely on some common design principles:

Semi-automatic assessment and feedback. Based on the principle of semi-automatic assessment and feedback (Bescherer et al. 2011), all tools provide the learners with immediate feedback on their solutions and, if necessary, enhance the feedback by personal advice from a tutor or teacher. As soon as a learner submits a solution or partial solution, the automatic assessment component analyses the solution for correctness. As part of the analysis, the software checks for typical errors or misconceptions and generates a detailed feedback report based on the findings. If the analysis fails to verify the solution or is not able to detect and classify an erroneous step in the solution, the problem is forwarded to the assigned teacher. Using both the results from the automatic analysis of the learning tool and a recording of all interactions between the learner and the tool, the teacher can reproduce the chosen problem-solving strategy. By doing so, the teacher gets an idea of the individual learning process, and thus, possible misconceptions and errors in reasoning become obvious. In the same way, correct but exceptional solutions are forwarded to the teacher and do not remain unnoticed in the wealth of learner data arising during the tool’s usage. This approach supports our main goal of relieving the teacher from repetitive, non-demanding tasks, but involves him or her in the assessment and feedback process when creative understanding, didactic skills, or the domain expertise of the teacher are required.
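The escalation logic just described can be summarized in a short sketch. It is not the ComIn-M or Squiggle-M implementation; the interfaces and names below are assumptions used only to show the routing between automatic feedback and the teacher.

```java
// Sketch of the semi-automatic feedback loop: automatic assessment first,
// escalation to the teacher only when the analysis cannot verify or classify
// the submission (or when a correct but exceptional solution is detected).
// Interfaces and names are assumptions, not the actual SMALA/ComIn-M API.
public class SemiAutomaticAssessor {

    public enum Outcome { CORRECT, KNOWN_ERROR, UNCLASSIFIED, EXCEPTIONAL }

    interface DomainChecker {                     // tool-specific automatic analysis
        Outcome check(String submittedStep);
        String feedbackFor(String submittedStep);
    }

    interface TeacherQueue {                      // forwards hard cases with the session log
        void forward(String submittedStep, String sessionId);
    }

    private final DomainChecker checker;
    private final TeacherQueue teacherQueue;

    public SemiAutomaticAssessor(DomainChecker checker, TeacherQueue teacherQueue) {
        this.checker = checker;
        this.teacherQueue = teacherQueue;
    }

    /** Returns immediate feedback for standard cases, otherwise escalates to the teacher. */
    public String assess(String submittedStep, String sessionId) {
        return switch (checker.check(submittedStep)) {
            case CORRECT, KNOWN_ERROR -> checker.feedbackFor(submittedStep);
            case UNCLASSIFIED, EXCEPTIONAL -> {
                teacherQueue.forward(submittedStep, sessionId);
                yield "Your work has been forwarded to your teacher for personal feedback.";
            }
        };
    }
}
```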

Step-based tutoring system. According to the classification of tutoring systems suggested by VanLehn (2011), our learning tools are denoted as step-based tutoring systems: Learners can enter all intermediate steps that lead to the final solution, and accordingly, they also get feedback on these individual steps. As opposed to answer-based systems that only assess the final solution of a problem, our toolset analyses and assesses the whole line of problem-solving steps and aims at reducing the amount of reasoning required between individual interactions with the system. By giving feedback and hints on the level of intermediate steps of the problem-solving procedure, the learners are gradually guided toward generating a correct solution.

Tracing of learning processes. All interactions between the learner and the learning tools are recorded as events by the SMALA logging service. Three main requirements are fulfilled by the SMALA logging service: The logging happens transparently (without the learner noticing or feeling disturbed by the event recording), live (as a continuous, real-time stream of events), and by pseudonym. In order to account for data security and privacy issues, we ensured that logging events are not traceable back to the real person who has used the learning tool. However, pseudonyms and session identifiers attached to each interaction event enable the reconstruction of the learning processes of individual learners, given that the pseudonym is associated with the person.
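As an illustration of the event data just described, one interaction event could be modeled as follows; the field names are illustrative, not the actual SMALA event schema.

```java
import java.time.Instant;

// Sketch of one pseudonymous, time-stamped interaction event as described above.
// Field names are illustrative, not the actual SMALA event schema.
public record InteractionEvent(
        String pseudonym,    // stable pseudonym, never the real identity
        String sessionId,    // groups the events of one work session
        Instant timestamp,   // when the interaction happened
        String toolId,       // e.g., "ComIn-M" or "Squiggle-M"
        String eventType,    // e.g., "INPUT", "FEEDBACK_SHOWN", "HINT_REQUEST"
        String payload) {    // tool-specific content such as the entered step
}
```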

Analysis of learning processes and learning group performance. SMALA-enabled learning tools use one common infrastructure for storing and analyzing the interaction events that occur during the learning tools’ usage. Information provided by the learning tools includes, among other things, the individual steps and solutions entered by the learners, the automatic analysis results (including a success score) and feedback generated by the learning tools, the event date and time, and feedback and hint requests. The events are organized in an expandable hierarchy of objects. All learning data are analyzed in real time and displayed to the teachers in a Web-based user interface on demand. Visualizations, textual lists, as well as table-based representations are used to present the analysis results. It is finally up to the teacher to interpret the automatic analysis results and draw consequences for the subsequent instructional design. Combining technological evaluation with human expertise seems most promising to us when realizing formative assessment strategies.

4.2.2 Smart Learner and Teacher Support

By combining these design principles in one system, a technology-based learning infrastructure can be set up that uses as much automatic processing and analysis capabilities as possible, but involves the teacher in the assessment cycle whenever it is necessary to optimally support the learners in their learning processes or to improve and adapt instruction to current needs. This smooth transition between automatic and human activity supports both learners and teachers in a smart way. As opposed to typical ITS systems that only address learners, we would like to emphasize the fact that the SMALA toolset is targeted at both learners and teachers and that both of them shall benefit from technology as much as possible. In the section below, we detail the instruments that allow a smart learner and teacher view.

4.2.3 Teacher Preparation

We expect teachers intending to use learning tools with the SMALA toolset to use a learning management system such as Moodle. This enables them to share Web pages, and the learning tools we consider can be run within Web pages. Using this, together with some LMS integration components, an identity of the student can be obtained and thus a pseudonym can be computed.

After the editors of the SMALA server have been contacted, an activity is created that defines the learning tool, the access rights, and the methods of identification. The teacher’s preparation involves reading the deployment instructions and copying and pasting the necessary code into the learning management system. In our project, integration components have been realized for the Moodle and StudIP learning management systems.
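One plausible way to compute such a pseudonym from the LMS identity is a keyed hash; the sketch below is only an assumption about how this could be done, not the derivation actually used by SMALA. Keying the hash with an activity-specific secret keeps pseudonyms stable within one activity while preventing a simple lookup of user identities.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.HexFormat;

// Sketch of deriving a stable, non-reversible pseudonym from an LMS user id.
// This is an assumption for illustration; SMALA's actual derivation may differ.
public final class Pseudonyms {

    public static String pseudonymFor(String lmsUserId, byte[] activitySecret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(activitySecret, "HmacSHA256"));
        byte[] digest = mac.doFinal(lmsUserId.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest).substring(0, 16); // short, readable pseudonym
    }
}
```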

4.2.4 The Students’ View

Using the example of the ComIn-M and Squiggle-M learning software, the students’ view on these interactive learning tools is described. After briefly describing the idea underlying each tool, we present the main characteristics of the user interface and its usage related to semi-automatic assessment and feedback.

Example 1: ComIn-M—Proving by mathematical induction

Idea. The learning tool ComIn-M is a Web-based exercise sheet for training proofs by mathematical induction of elementary arithmetic relations. Students can choose among different summation formulæ that shall be proven by induction. According to the principle of a step-based tutoring system, all intermediate steps leading to the final solution have to be entered. In the case of mathematical induction, this implies that not only the basic procedural steps of the proving process have to be run through, but also very fine-grained steps like individual term transformations for showing equality of expressions. Whenever the learner gets stuck or wishes to get a confirmation that she is on the right track, she can request feedback on the current state of the solution or request help by retrieving hints for the current step. The main focus of ComIn-M is to foster the procedural knowledge of applying proofs by mathematical induction and to offer learning opportunities in homework or rehearsal situations.

Semi-automatic assessment and feedback. As the learner is working through the ComIn-M exercise sheet, she enters the solution step by step, as she would in a printed workbook. All interactions between the learner and the tool are automatically recorded by the SMALA logging service. Not only the data entered by the learner, but also any feedback or hint requests are stored chronologically and time-stamped, along with the automatic assessment results and feedback messages that are displayed to the learner. A typical feedback message and the related highlighting of the erroneous step in the ComIn-M user interface are shown in Fig. 8.7.

Fig. 8.7

Automatic feedback in the students’ view

In this example, the assessment detected that the submitted solution does not utilize the induction hypothesis stated earlier in the proof. The tool informs the user about this finding and offers an additional hint to help the learner resolve this issue. However, the learner never receives a bottom-out hint revealing the concrete solution to the problem. If ComIn-M’s automatic assessment fails to identify the reason for an erroneous solution step, the tool explicitly recommends reporting the problem to the assigned teacher. It is then the choice of the learner whether she tries again and resolves potential errors on her own, or whether she agrees to send the request and, optionally, adds some personal questions that remained open at the current stage of the learning process. When the learner submits the Contact tutor dialog box from within the ComIn-M exercise sheet (see Fig. 8.8), the teacher is notified by the SMALA infrastructure and automatically obtains all information necessary to reconstruct the learning process of the requesting student (see Sect. 4.3.2). All subsequent direct tutoring between teacher and learner happens asynchronously by email and represents the “human” part of the assessment process incorporated in the SMALA toolset.

Fig. 8.8

Contact tutor dialog box

Example 2: Squiggle-M—Investigating the concept of functions and relations

Idea. The learning software Squiggle-M provides an interactive learning environment for investigating the concepts of functions, relations, and their characteristics. Using different kinds of virtual learning laboratories, the student can interactively define and manipulate relations, explore different graphical representations of functions, and test her knowledge on functions and their characteristics. Particularly, the assignment laboratory and the representation laboratory invite the student to examine self-defined functions and relations. Integrated research questions guide the student through the laboratories and can be used as starting points for using the tool. The main focus of the learning tool Squiggle-M is to offer different ways of gaining an extended understanding of the notion of functions, by providing learners with novel—and possibly revealing—visual representations of mathematical concepts and their relation to each other.

Semi-automatic assessment and feedback. Squiggle-M addresses the creativity and curiosity of the learners by letting them define their own assignments and functions. The learners can use the automatic feedback feature of the tool to have these assignments analyzed and obtain information on the assignment’s properties. Similarly, the learners can enter their own function equations and have them depicted in different visual representations. As a special feature of the learning tool, the transition from one representation to the other is shown using graphical animations. Assessment in these exploratory-oriented tasks primarily aims at supporting the learners in their discovery process by providing them with extended information on the examined function. Additionally, Squiggle-M offers more concrete problems and questions that have to be solved by the users. Typically, these problems are presented as multiple-choice questions or sequences of them. An example of such a situation is depicted in Fig. 8.1.

After solving a problem, the learner can request immediate feedback on her answer and can optionally retrieve a hint for getting further help. As with all SMALA-integrated learning tools, user interactions with Squiggle-M are recorded transparently by the SMALA logging service. Thereby, it is possible to trace a learner’s path through the Squiggle-M laboratories and give individual help or explanations if she gets stuck. Because Squiggle-M relies heavily on visual representations, and because of its technological foundation (a Java applet), steps can be logged with snapshots of the learner’s screen. Thus, the teacher has the same view on the assignment or function as the learner has and can analyze the current situation. The capturing of the current visual representation is also offered as so-called “camera” feature in Squiggle-M: By pressing the camera button, the user can save a snapshot of the current assignment for later use.

4.3 Analytical Processing

In addition to the students’ view on the learning tools, the SMALA service offers an analytics view on the learning tools usage. Analytical processing occurs in different places in the SMALA system architecture: First, it is realized in the learning tools themselves, and second, it is done in the commonly used logging service (see Fig. 8.6).

4.3.1 Analytical Processing in the Learning Tools

Each learning tool has its very own automatic assessment component that analyzes incoming solutions according to domain-specific criteria. Results from this analysis are reported as immediate feedback to the learner. As described in the section above, the feedback is provided as conversational style text and directly addresses the learner (see Personalization Principle by Clark and Mayer 2011). For every detected problem, one feedback message and one or more hints are generated. It is up to the learner to decide how many feedback messages or hints are necessary for her to move on in the problem-solving process. Automatic assessment results are not only reported to the learner, but also recorded as assessment events by the SMALA logging service. Assessment events include information about the analyzed solution, detected problems, erroneous steps in the solution, and feedback messages displayed to the user. By doing so, the whole process of a learner’s activities and related tool assessment results and feedback is stored persistently in a central place. This store of event data is the basis for all further analysis that is performed by the analytics components in the SMALA server.
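Such an assessment event could be represented, for example, as the record sketched below; as before, the names are illustrative rather than the actual SMALA schema.

```java
import java.util.List;

// Sketch of an assessment event: the analyzed solution, the problems detected,
// and the feedback that was shown, bundled so that server-side views can replay
// the tool's own judgement. Names are illustrative, not the SMALA schema.
public record AssessmentEvent(
        String sessionId,
        String analyzedSolution,            // the submitted (partial) solution
        List<String> detectedProblems,      // e.g., "induction hypothesis not used"
        List<Integer> erroneousStepIndices, // which steps were marked as wrong
        List<String> feedbackMessages,      // exactly what the learner was shown
        double successScore) {              // the tool's overall rating of the attempt
}
```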

4.3.2 Analytical Processing in the SMALA Server

All learning data from the SMALA learning tools are collected in real time by the SMALA logging service. Essentially, the service provides two types of analysis views on the data: the views on the individual learning processes and the views on the overall learning activity and group performance. Individual learning processes are most interesting in the case of personal feedback requests. After the learner has agreed to submit a personal feedback request, the SMALA infrastructure automatically sends a notification email to the responsible teacher. As can be seen in Fig. 8.9, the email contains a link to the SMALA Web service.

Fig. 8.9

The mail received by the teacher indicating the help request

By following this link, the teacher is directly presented with a view of the student’s session recording. All user input data and actions as well as automatically generated assessment information are listed in chronological order as a history of interaction events. Figure 8.10 shows an extract from an example recording of a ComIn-M user session.

Fig. 8.10

The log view of an individual learning process

Using both the information on the learner’s line of action and the results from the automatic assessment, the teacher can draw her own conclusions on the individual learning process and give personal advice and feedback to the learner. Even in an asynchronous learning arrangement like this, the teacher can follow the individual steps of the learner and gets additional support from the automatic assessment system that marks and annotates erroneous steps in the process as can be seen in Fig. 8.10.
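As a sketch of such a chronological session view, the events of one session could be rendered as follows, reusing the illustrative InteractionEvent record introduced earlier; the actual SMALA log views are richer, tool-specific renderings.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Sketch of rendering one session recording chronologically for the teacher's
// log view; the actual SMALA views add tool-specific formatting and annotations.
public final class SessionLogView {

    /** One line per event, in chronological order. */
    public static String render(List<InteractionEvent> sessionEvents) {
        return sessionEvents.stream()
                .sorted(Comparator.comparing(InteractionEvent::timestamp))
                .map(e -> e.timestamp() + "  [" + e.eventType() + "]  " + e.payload())
                .collect(Collectors.joining(System.lineSeparator()));
    }
}
```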

4.3.3 Overall Learning Progress and Group Performance

Obtaining timely feedback on the overall learning progress is essential for teachers to take corrective measures and adapt teaching to the current situation. Especially in a university setting, where courses have many participants, it is very difficult to get information on group performance early in the semester. Typically, it is only in the end-of-term examinations that problems and misunderstandings become obvious. In order to address this issue, the learning analytics component of the SMALA infrastructure provides teachers with suitable overview tables and visualizations of the learning events generated by the interactive learning tools. Different views provide insights into the data, from a high-level overview of tracked learning activities to more detailed views such as detected error types. Linking of related data enables the teacher to drill down from the general views to the more detailed views. For example, it is possible to navigate from the overview of all solved exercises to the session list for one selected user, so as to investigate samples and find out why a given error type was common. From the session list, the teacher can further drill down to the chronological display of all events of an individual user session. Moreover, automatic assessment results from the learning tools are aggregated and analyzed in a way that allows the teacher to see which error types are most frequent among individual users or how many errors of a certain type were reported for the learning group in total. An example of an aggregated view is depicted in Fig. 8.11, and a sketch of the underlying aggregation follows the figure.

Fig. 8.11

Frequency of error types
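A minimal sketch of the aggregation behind such a view could look as follows, counting how often each detected error type occurs across the recorded assessment events (using the illustrative AssessmentEvent record from above).

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the aggregation behind a view such as Fig. 8.11: frequency of each
// detected error type across all assessment events of a learning activity.
public final class ErrorTypeSummary {

    public static Map<String, Long> errorFrequencies(List<AssessmentEvent> events) {
        return events.stream()
                .flatMap(e -> e.detectedProblems().stream())
                .collect(Collectors.groupingBy(p -> p, Collectors.counting()));
    }
}
```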

Based on this information, the teacher can identify common problem areas that might be worth discussing in class again or that might be worth reporting to the learning tool developers. What is more, the teacher can use the log views to monitor the impact of didactical measures in class: How effective are repetition sessions? Are students more successful in solving exercises after a repetition? Can in-class tool demonstrations motivate students to do their own explorations in the learning environment at home? The SMALA analytics view aims at supporting teachers in answering these kinds of questions and helping them to continuously reflect on and improve their own teaching.

5 Evaluation and Discussion

In the winter term 2011/2012, SMALA-integrated learning tools such as ComIn-M, Squiggle-M, and MoveIt-M were used and evaluated at the universities of education in Heidelberg, Karlsruhe, and Ludwigsburg (Germany). In order to ensure a smooth integration into the established learning platforms of all three universities, deployment scripts and guidelines were developed for StudIP and Moodle. For every participating course, an individual learning activity was set up in the SMALA environment that is only accessible to members of the registered course within one university. By doing so, data security and compliance with strict access control requirements on the learning data could be ensured.

The tools were used in a learning scenario where the tools’ usage was first demonstrated in class (by the teacher) and then used by the students for doing their homework. According to the didactical design pattern Technology on demand (Bescherer and Spannagel 2009), students were free to choose whether they wanted to use the learning tools to solve their exercises or not. Thus, using the tools was not mandatory in the setup.

Various aspects of implementing the SMALA model in a real-life scenario were considered in the evaluation. First of all, the evaluation should show whether the SMALA toolset can be integrated into the existing everyday technological infrastructure of the participating universities. Applying the aforementioned scripts and guidelines, all teachers were able to set up learning activities within their learning management system that offered the SMALA learning tools to their course participants. Students could then log in to the learning management system (LMS) and start the tools from within the LMS environment. Pseudonyms were automatically generated for every user and were used as credentials for working with the learning tools. As a result, we can state that the LMS integration for StudIP and Moodle is feasible and could be realized successfully. During the evaluation run of the SMALA toolset, the scalability of the SMALA infrastructure reached its limits, but these issues could be resolved. In total, 24,655 events were recorded by the SMALA logging service during the evaluation, generated by 156 users in 965 sessions.

Another important aspect in the evaluation was the extra workload for teachers caused by personal feedback requests. Contrary to what was feared, this did not become a hurdle. We counted a maximum of eight personal feedback requests per tool and course, so the additional workload for analyzing and answering these requests was minimal. According to the teachers, the process recordings and the generated views on these processes were precise enough for them to easily identify mistakes in the solution process and explain to the learner how to resolve them. Therefore, we conclude that the granularity of the process recording and the presentation of the solution steps were adequate for reproducing the learner’s problem-solving strategy.

After the evaluation period, the learners were asked to give feedback on their experience in using the SMALA-enabled learning tools. The evaluation of the learners’ questionnaires showed that many students appreciated the interactive learning tools as additional learning opportunities and that they liked the stepwise assessment and feedback feature of the toolset. However, all students indicated that they preferred pencil and paper to the computer for solving mathematical problems. In the same way, a vast majority of students preferred asking peers for help or face-to-face tutoring in exercise sessions to using the “Contact tutor” feature offered by the learning tools. In the evaluation group from Karlsruhe, some of the participants criticized the lack of bottom-out hints with sample solutions. Based on this feedback, we conclude that the majority of students are still reluctant to embrace new technology in learning environments that traditionally work with pencil and paper, sample solutions, and exercise lessons in the classroom. Thus, motivating and encouraging students to use new technology is a challenging task for teachers. In order to introduce innovative technology-based learning materials successfully, it is essential that the didactical setup is prepared thoroughly and that special care is taken to make the transition as smooth and easy as possible.

Finally, we collected feedback from the four participating teachers in the form of semi-structured interviews. All teachers agreed that the log views on individual learning processes are very helpful when answering personal feedback requests. However, due to the vast amount of recorded data, none of the teachers had actually tried to gain insight into the overall learning progress by following multiple individual sessions. In order to get an overview of the performance of the whole learning group, the teachers generally agreed that automatically created summaries and suitable visualizations of the data are necessary. At the time of the evaluation, only summary views for the learning tool ComIn-M were available, so the need for more tool-specific summary views became obvious. As a concrete requirement, one interviewee requested an overview of all solved exercises per user (and whether the solutions were correct or not). Ideally, aggregated views on common error types detected by the automatic assessment components of the learning tools should be provided as well. Based on these results, we have proposed various interactive visualizations of the learning data (Rebholz et al. 2013) that are prototypically implemented in the current version of the SMALA infrastructure.

6 Conclusion

Letting automatically assessed learning tools be used in and out of classroom learning is an ongoing wave whose effectiveness has been repeatedly evaluated. Few studies, however, have reported more than a changed teacher involvement; in particular, few report how teachers can provide feedback on students’ work outside the classroom. Our approach describes such a possibility in a way that may bring the teacher back closer to the usage of the learning tools. This possibility transforms the learning tools into smart learning environments that can provide feedback in a relevant and context-specific way, using automated analysis or teacher-led analysis.

The role of the teacher in the teaching analytics scenarios as we have described includes the following:

  • Introduce the usage of the learning tools, in connection with other parts of the courses, such as theory presentations, other learning tools, or expected assignments (pattern Technology on Demand, Bescherer and Spannagel (2009)).

  • Make sure the learning tool is easily accessible by students by linking to it appropriately in such an environment as the learning management system.

  • Encourage the usage of the learning tools in the relevant times of the learning process (e.g., by assignments, by organizing in-room training).

  • Employ the analytical views to evaluate the impact on learning, as it can be seen in the learning tools’ usage:

    • Globally, live in classroom, employing only views that do not show individual actions,

    • Globally, and individually anonymously for sampling, when planning subsequent courses,

    • Individually with known identity, when help is being requested.

  • Revisit learning tools’ usage and formulate suggestions to developers or adjust instructions to enhance the quality of the learning tools.

In this chapter, we have described the instructional approach underlying the deployment of learning tools in such a way that they can transparently send logs describing the learning process visible in the learning tool. The feedback production is organized in such a way that predictable feedback can be provided automatically after an automatic analysis of the user’s input, while feedback that needs more expertise is requested from the teacher.

Experiments that we have run show the technical feasibility of smart support for teachers and learners. This support is described in the SMALA model for deploying learning tools within a traditional higher education setting in such a way that the learners are neither requested to log in separately nor to sign a disclosure agreement: Because the learning tools are directly integrated in the learning management system, the learners only need to be logged in there; because the storage of the logs uses pseudonyms that cannot be converted to any personal information, the logs are not considered personal data and are thus not subject to most of the regulations that would require, for example, their removal after a short time and the explicit agreement of the learners. Nonetheless, the normal Web page displayed before launching the learning tool indicates to the learners that their usage will be logged, and some learning tools offer the possibility to switch off logging temporarily. Questionnaires distributed after the usage periods of the learning tools indicated that no significant privacy concerns were expressed by 146 of the 156 students.

On the side of the teacher, the implementation of the logging views for the learning tools described in this paper has proven effective and expressive enough. The log views have been shown to allow teachers to properly understand the individual learning process and to provide feedback effectively; the logs collected, aside from the screenshot taken at the time of sending the request in Squiggle-M, included each of the learners’ attempts and each feedback message. The display of the log has been adjusted for each of the tools so that the actions are almost as expressive as the student’s screen. In the case of ComIn-M, the inputs include mathematical expressions, which are then displayed in the log views. Even though the formulae were stored in OpenMath and displayed in MathML, one of the teachers asserted that the screenshots of the learner’s inputs were quite helpful.

6.1 Open Questions

Rareness of requested feedback: The offer to formulate questions to the teachers was initially met with a fear of being overwhelmed, but no flood happened: very few requests arrived in comparison to the amount of learning tool usage or the number of students. Several hypotheses can be formulated to explain this fact: The first is the learners’ preference to ask in the presence of their peers and teachers or tutors (and indeed, this was the majority answer to this question), and the second is the possibility that simply writing the question helps the learners find the answer and thus stops the question-writing process. A finer-grained analysis would be needed to identify the best strategy for motivating learners to ask questions productively.

Didactical Usage Patterns: The set of didactical configurations in which the analytics views can be employed has not been fully investigated:

  • Clearly, the personal log can be useful for learners themselves to support a reflection on their own learning process, and it has been made available; however, no teacher encouragement to take such actions has been given, and thus it has hardly been used.

  • Peers and other persons in the learners’ circles may take advantage of such log views too: within electronic communication among peers to exchange discoveries or solution paths, or in family circles to discuss and support one’s own child’s learning. The conditions and best practices of doing so, similarly to the ePortfolio approach of assembling receipts that prove learning (Ravet 2009), could be researched.

  • Classrooms may take advantage of some of the logging views. A teacher may want to show a live view while the exercises are being run, so as to show the class’s progress. Is there a risk of an anonymity breach? Should teachers be given a special mode so as to avoid inadvertently showing individual data? Such a view is clearly helpful when encouraging the class to use the learning tool; is it different when displayed live in the classroom while the tools are being used?

Quantity of Logging Information: The amount of logging events, and the information inside each log event, is another uncertain variable. A complete video of the learning tools’ usage is clearly too detailed to give a quick overview and probably too heavy to be processed quickly, but simple scores stating the results are clearly too light to provide an explanation. The approach we described stands in the middle between these two extremes. Events need to be sent sufficiently rarely so that a usage session fits in a few screen pages, but each needs to contain enough information so that one can tell what action the user performed. In a system such as ComIn-M, the learner’s input formula is an effective representation, but representing an input such as the relationship between elements of two sets in Squiggle-M as a formula is probably too compact to be read effectively. Thus far, our only criterion for the informativeness of a logging event’s display has been that it resembles the user’s input or view. What are other criteria for other learning tools? Is the log display of the car simulator described in TinCanAPI’s storyFootnote 3 sufficiently effective to show the simulator usage to a teacher who knows the simulator a bit? How could it be done for a dynamic geometry system?

Privacy: Furthermore, the best practice to ensure a feeling of privacy among the students remains to be defined. For example, we have observed that some tutors progressively started to remember learners’ pseudonyms and to form expectations when drilling down from global views to individual views. This contradicts the purpose of pseudonyms, which are precisely meant to hide the identity of the learners. It may be that more anonymization is required in the log display (so that such remembering cannot happen) and in the URLs of the log views, or that ethical guidelines should be formulated for teachers so that unpleasant surprises, such as the mention in class of a typical error seen in an observed session, cannot occur.

6.2 Vision

To sum up, we present our vision of a smart learning environment: an environment that combines intelligent tutoring enhanced by human expertise and learning analytics enhanced by human analytical skills in one system.

As Fig. 8.12 shows, smart interfaces wrap the diversity of learning tool-specific assessment, logging, and learning analytics components and provide a homogeneous view on the learning content and the recorded learning data, respectively. Ideally, this environment supports the interplay between learners, teachers, and technology in such a way that smart interaction becomes reality.

Fig. 8.12

Smart interaction—Interplay between learners, teachers, and technology