
1 Introduction

Modern classrooms are full of technology, such as digital projectors, interactive whiteboards, and student devices. However, classroom teaching rarely utilizes the potential provided by the available technology. Lessons are often driven by linear slides presented by the teacher through a central projector, leaving little room for interactivity or individualization. While many teaching tools exist within this space (as discussed in Sect. 2.1), there appears to be a lack of solutions and research aiming to utilize the augmented classroom to facilitate individualized learning for the students. Moreover, existing research often leans towards the automation of assessments and quiz-related activities, without direct interaction with a teacher. And while there is an increasing number of presentation tools with reasonable facilities for attendee interaction, they lack forms of content individualization and are primarily designed on the philosophy that “one size fits all”. Even the recent increase in distance learning, supported by video-conference tools such as Zoom and Google Meet, does not appear to address these issues; instead, it presents a more complex landscape where the augmented classroom is partially or completely distributed.

In this paper we focus on individualized content delivery to a live classroom; in particular, we are interested in what can be considered individualized lesson content, and what constitutes a usable and efficient delivery of such content.

Our first step was to systematically survey existing tools and related literature (see Sect. 2). To learn about current orchestration practices, we then conducted an investigation through a questionnaire addressed to 22 teachers from Danish and Norwegian educational institutions, focusing on the adopted tools and approaches. From our literature review we identified a range of approaches to orchestrating individualized content, while the analysis of the questionnaire data revealed that teachers mainly use presentation tools such as PowerPoint alongside other exercise-focused tools. The findings suggested that it could be possible to design and implement a usable and efficient classroom orchestration tool, capable of facilitating the delivery of individualized content in a live classroom setting. Our working hypothesis is that such a tool can be developed using data from existing classroom orchestration and teachers’ experiences with existing tools and techniques. To test this hypothesis, we proceeded by defining requirements, and then designing and iteratively implementing a minimum viable product (MVP) of this new tool. Finally, we performed a task-based comparative experiment, complemented by post-test interviews, and analyzed the resulting data. This mixed-methods approach was designed to capture both subjective experiences about the efficiency of our MVP and objective parameters, such as the amount of work and time required to complete the test tasks.

The rest of the paper is organized as follows: Sect. 2 presents related work and a survey of existing software tools; Sect. 3 presents our findings from the preliminary study about current orchestration practices. Section 4 discusses requirements, design, and implementation of the MVP. The experiment and general discussion of the findings are found in Sects. 5 and 6. Section 7 concludes the paper.

2 Related Work

According to recent research, during the last decade digital classroom environments have reached the point where each student has access to one or more devices connected to a wireless network [3, 4, 5]. Harper and Milman [6] reported in their review of 10 years of literature that, by utilizing this potential, it is possible to achieve more meaningful individualized instruction. They reported positive findings with regard to learner achievement, and that these environments can provide a more enriched learning experience. However, they found mixed results regarding student engagement, partially attributed to the initial motivation of using new software (see [6]).

Adapting to new digital tools is typically a complex process for both learners and teachers, and in this context it becomes important to evaluate the effects of digital presentations against those of traditional oral instruction. Moulton, Türkay and Kosslyn [7], for instance, compared Microsoft’s PowerPoint and the online alternative Prezi against oral instruction. They provisionally concluded that software-aided presentations are more effective than oral presentations for persuading audiences, but found no evidence indicating benefits towards learning outcome, recollection, or understanding of the content. Moreover, Apperson, Laws and Scepansky found in their study [9] that the use of PowerPoint had a positive effect on learners’ engagement and improved teachers’ likeability. However, in another study, Chou, Chang and Lu [10] could not reject the possibility that the positive findings on long-term knowledge acquisition were due to the novelty effect [11].

Other research into increasing engagement and learning through digital tools such as Kahoot! reported positive findings with regard to learner-teacher engagement and capturing learners’ interest. Kahoot!’s quizzes also provide a break from long learning sessions, allowing for reflection and discussion of content. The novelty effect could play a major role with an online tool like Kahoot!, designed to be colorful and playful; however, we found little research focusing on it. Moreover, other studies about the use of Kahoot! have found non-conclusive indications of increased learning [14], and no correlation between perceived student engagement and the resulting assessment grade [13].

Moving from digital tools to the process of running a live classroom, we considered the concept of classroom orchestration, defined as “How a teacher manages, in real-time, multi-layered activities in a multi-constraints context” [18]. However, Roschelle, Dimitriadis, and Hoppe [18] remark on the lack of consensus on this definition; in addition, they describe how aspects of classroom orchestration deserve more attention, particularly with regard to typical teachers’ problems within the domain of orchestration, such as curriculum design, deployment of assessment (formative as well as summative), and the use of time and spatial resources. An important finding in [18] is that classrooms are complex environments, and that the teachers’ role is often to adapt materials to their specific classroom’s configuration. We have observed similar teacher roles in our own research on primary schools in Denmark [2]. According to [18], orchestration is becoming more structured, moving away from ad-hoc solutions invented by individual teachers, and showing instead a “diffusion of innovation” perspective.

The complexities of orchestration are mirrored in the diversity and specificities of learners. This project focuses on the idea of individualized content delivery, which in turn is based on differentiated instruction. According to [19], “differentiation is responsive instruction designed to meet unique individual student needs”, and it enables students to learn in the same environment using the same curriculum, by differentiating the learning tasks, outcomes, and entry points according to the students’ needs (see also [8]). The findings in [19] also point to the importance of appropriate grouping of learners as a central feature of the learning environment; the authors observe that “working with students in small groups is often aligned with differentiated content or products of instruction”, and this alignment extends to text selection (or, more generally in our case, content selection), so that learners are faced with relevant content appropriate to their level of expertise.

While differentiated instruction appears to be the most used label in the literature, both individualized and personalized instruction (or learning) are also found in the same context, and there appears to be mixed consensus on their use. This paper will therefore use individualized learning as an umbrella term.

2.1 Existing Software Tools

There is a variety of tools intended to aid teaching, ranging from pure presentation tools to quiz and assessment tools, as well as classroom management software. The latter has not been included in this analysis, as it typically deals with the planning and orchestration of classes in general, and not with live content delivery.

This evaluation is based on the systematic literature review method, and in particular on the approach discussed in [16], which pertains to the review of technical and software-related papers. The software tools in this evaluation were identified through online searches for a wide range of related terms, as well as through recommendations from a focus group of teachers acting as experts; the final list of software includes: Kahoot!, PowerPoint, Google Slides, Zoho Show, Prezi, Nearpod, Creedoo, Peardeck, SlideDog, Socrative, Quizlet, Quizziz, Mentimeter, Storyline, and Zzish. A set of data points was gathered for each software tool, identifying its presentation options, non-linearity, content and interaction individualization, attendee management, and other relevant features. From the constructed feature matrix (an excerpt is visible in Table 1), it is apparent that many tools have overlapping goals and features, since they address many of the same problems. For example, the interactivity within Google Slides and PowerPoint is limited to the authoring activity, since real-time collaboration and interaction are possible only while editing the presentation and not while it is in active presentation mode; this makes these tools less viable for large-scale individualization and interactivity. Some of the tools in our matrix are primarily quizzing and assessment tools, with limited or no options for content presentation, while others are more traditional slide-based presentation tools.

Overall, the two most significant shortcomings of the identified software are the lack of support for presenting individualized content to participants and for organizing groups of attendees (as in Table 1). No tool appears to support individualisable content, with the sole exception of Zzish, which offers very limited support, enabling teachers to specify some additional content for students depending on how they did in a quiz. Interestingly, some tools do offer interactivity features, mostly in the form of an option to register the individual user’s interactions; however, the interactions themselves are not individualisable. An important aspect of attendee management is grouping, and in our review only Socrative provides a presenter-managed organization of learners’ groups, while Quizlet and Quizziz have an automatic grouping option.

Table 1. Excerpt from the feature matrix with key findings.

3 Preliminary Study: Orchestration Practices

We conducted a preliminary field study on the practices and tools used in the orchestration of augmented classroom live lectures, with a focus on individualization. The resulting questionnaire was sent to Danish and Norwegian teachers, and we collected responses from 22 teachers: 5 working in primary schools, 6 in middle schools, 9 in high schools, and 4 at universities. The main purpose of the questionnaire was to identify what tools teachers use to individualize their lesson content, what content gets individualized, and what factors affect how they individualize it; most questions were in the form of multiple choice and Likert scales, with additional text answers to elaborate further where necessary.

The range of different approaches to orchestrating individualized content identified through the questionnaire shows that teachers mainly use presentation tools, such as PowerPoint, alongside other exercise-focused tools, to piece together a teaching toolset which works well for each individual teacher. While these teachers appear to share much of their pedagogical theories and reasons behind individualization approaches, we could see little consensus on how to put this knowledge into practice. Individual teachers’ choice seems to be the norm, suggesting a lack of an up-to-date, theoretically founded consensus concerning digital educational tools in teachers’ training.

Figure 1 shows at which institutions the respondents are currently teaching; since some respondents were involved in both primary and middle school levels, Fig. 1 shows the resulting 24 data points.

Fig. 1. Educational level of teaching for respondents.

All 22 respondents reported that they use digital tools to aid their teaching; of these, PowerPoint and Kahoot! are the most used for teaching in general, with 19 (86%) and 18 (82%) respondents respectively using them. These were followed by interactive whiteboards with 10 respondents (45%). All university teachers use PowerPoint, complemented by other tools, such as Kahoot!. PowerPoint is also the most used tool for delivering individualized lesson content, with 9 respondents (41%), followed by Quizlet with 5 respondents (23%). A summary of these findings is visible in Fig. 2. In Fig. 2 each value represents usage by one respondent, with multiple unique responses allowed per respondent; university teachers’ responses are highlighted in blue; among university teachers the most used tools were PowerPoint and Google Slides (75%).

Fig. 2. Tools used specifically for delivering individualized lesson content.

The respondents were also asked to indicate what types of content they normally individualize. We found that 19 respondents (86%) individualize questions, exercises, and assignments; furthermore, 10 (45%) individualize informative content, such as concepts and general theory, but 9 of these responses were in both categories. All university teachers reported that they individualize questions, exercises, and assignments, and half of them also the informative content.

In conclusion, our data shows that PowerPoint is the most used tool for teaching, as well as the most used tool to deliver individualized content, followed by Quizlet. These findings formed the basis for the design of our final experiment, discussed in Sect. 5.

3.1 Requirements

Based on the data from the questionnaire and the systematic review of software tools, we established the core requirement for the new tool: being able to deliver individualized content to students in a classroom. Use cases were used to collect essential requirements (see Fig. 3). We further specified the actors, i.e., teacher and student, by constructing personas for the relevant stakeholders (following the approach in [17]), and refined the personas with direct feedback from stakeholders. Table 2 shows the two resulting personas.

Fig. 3. Use-case diagram summarizing requirements.

From the analysis of our data and the personas, a set of functional requirements was defined, along with a set of quality attribute scenarios (QAS), which addressed the non-functional requirements of the software. A quality attribute is a testable property of a system which is used when measuring how well a system delivers its functionality; while there are several categories of quality attributes, the key ones for our new e-learning tool are performance and usability. Performance quality attributes typically measure how long it takes to complete a given task when a particular event occurs. In our case, we address performance by considering the delay times in our distributed e-learning tool: for example, we require that when the teacher moves the presentation forward one slide, this change should be visible to all students in less than one second (i.e., a loose definition of real-time, which fits with the web-based nature of our new tool). Also, when a student is working on an interactive slide (e.g., a short quiz or an exercise to be solved), the submission of her solution to the teacher through our tool should take no more than one second. Considering that the data exchanged by our actors are rather simple, and that a class is usually composed of a small number of users using the software at the same time, meeting the timings specified in the QASs did not pose a challenge for the MVP.
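
To illustrate how such a performance scenario could be checked in practice, the following sketch measures the round trip of a slide-change event over Socket.IO (the framework our prototype uses, see Sect. 4.2); the event names, payload, and server address are illustrative assumptions, not the exact protocol of the prototype:

```typescript
// Sketch of a performance QAS check; event names and URL are assumptions.
import { io } from "socket.io-client";

const socket = io("http://localhost:3000"); // assumed local test deployment

socket.on("connect", () => {
  const sentAt = Date.now();
  // The "teacher" advances the presentation by one slide.
  socket.emit("slide:change", { sessionId: "demo", slideIndex: 2 });

  // The server is assumed to broadcast the updated viewing state back.
  socket.once("session:update", () => {
    const elapsedMs = Date.now() - sentAt;
    console.log(`Slide change propagated in ${elapsedMs} ms`);
    console.assert(elapsedMs < 1000, "QAS violated: propagation took 1 s or more");
    socket.disconnect();
  });
});
```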

The other important quality attribute we considered is usability, defined as the ease with which users interact with the system to achieve a desired task; usability is strongly correlated with the users’ experience of a system, and in particular with the sense of how efficiently it operates. The response measures associated with usability typically depend on the interactions of individual users, which makes them more complex to test and verify (as discussed in Sect. 5, where our main test is presented).

Table 2. Teacher and student personas: definitions and objectives.

Having defined functional and non-functional requirements, we also wanted a user story for our tool. In the scenario, a teacher can create sessions during a lecture, and the students will join these sessions using online devices; the students are then presented with individualized content. This content is organized as a semi-linear presentation, i.e., a linear sequence of slides with each slide potentially consisting of multiple variations, to account for classroom diversity and individual students’ problems or skill levels. Different students will therefore be shown different variations of the same slide, with varying degrees of complexity and support in the examples and explanations. Control over which parts of the content are accessed is either given to the teacher, or optionally to the students themselves. Students can be dynamically grouped in real-time, to simplify the distribution of content and to provide a collaborative experience. The individualized content can offer various degrees of interactivity, ranging from reading a text to simple exercises, eventually allowing students to submit results to the teacher. Teachers are able to observe students’ progress in real-time during in-lecture activities; moreover, in our scenario specific content can also be selected and delivered to the individual student as part of the preparation for a lecture, allowing the teacher to use more of the lecture time for discussion and reflection, in line with the flipped classroom approach.

4 Design and Implementation of Prototype

A typical three-tier architecture was chosen for the new tool, which was designed considering the requirements and the scenario (Fig. 4). The main components are a server, a database, and two specialized clients: a teacher and a student client (a need discussed also in [2]).

The student client provides students with an individualized view into the session created by a teacher, which is handled and synchronized through a server. Any related data, such as lesson content and persistent user data, is stored in the database. The teacher client enables control over which content is shown to each attending student, using a semi-linear presentation structure, and provides tools for managing the classroom session.

Fig. 4. Design-level component diagram of the e-learning tool.

4.1 Presentation Format

In order to facilitate the individualization of lesson content efficiently and flexibly, we need to reconceptualize sequential presentations. Existing presentation formats such as PowerPoint are entirely linear, and every student is presented the same material in the same sequence. Instead, we propose to enrich the structure of a presentation so that it can still be considered an almost-linear sequence, but we allow each slide to potentially consist of multiple versions, to account for learners’ diversity (as depicted in Fig. 5). Some slides of the sequence can be simple slides, while others are allowed to be slide collections, i.e., vertically stacked slides meant to explain the same concept in multiple ways, for instance from more formal to more practical.

With this format a teacher can anticipate the need to explain the same concept at 2 or 3 different levels, or use examples at various degrees of complexity to cover a particular topic. According to our scenario, during class each student uses our client application and gets assigned individual sub-indexes for a slide collection: the result is that students see different content, while the presentation remains on the same overall slide index. In Fig. 5, when the whole class is working on slide 2 (which is in fact a slide collection), some students will see slide 2a, while others will see slide 2b on their client application. Each simple slide inside a slide collection can then be assigned by the teacher, or optionally requested by students themselves, in an attempt to ensure that all students see the content which is most suited to them.

Fig. 5. Presentation format: each column indicates a slide in a presentation. Slides 2 and 5 are slide collections containing simple slides, to allow for individualization.

From a technical point of view, in our format a presentation contains slides of two types: simple slides and slide collections (which can contain other simple slides). Slides are implemented via a composite design pattern, which also allows for easy extensions of the simple slides, such as multiple-choice slides, or slides with embedded animations or interactive simulations. Simplified versions of these interactive slides are in fact implemented in our prototype.
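
To make the format concrete, the following minimal sketch (in TypeScript, the language of our prototype) shows one way the composite structure could be expressed; the type and field names are simplified for illustration and do not mirror the exact prototype code:

```typescript
// Illustrative composite structure for the semi-linear presentation format.
interface Slide {
  id: string;
  // Returns the concrete simple slide to render for a given variation index.
  resolve(variation?: number): SimpleSlide;
}

class SimpleSlide implements Slide {
  constructor(public id: string, public body: string) {}
  resolve(): SimpleSlide {
    return this; // a simple slide is its own content
  }
}

class MultipleChoiceSlide extends SimpleSlide {
  constructor(id: string, body: string, public options: string[]) {
    super(id, body);
  }
}

class SlideCollection implements Slide {
  // Vertically stacked variations of the same concept (e.g. slides 2a, 2b).
  constructor(public id: string, public variations: SimpleSlide[]) {}
  resolve(variation = 0): SimpleSlide {
    return this.variations[Math.min(variation, this.variations.length - 1)];
  }
}

// A presentation is a linear sequence whose entries may be collections (cf. Fig. 5).
const presentation: Slide[] = [
  new SimpleSlide("1", "Introduction to the topic"),
  new SlideCollection("2", [
    new SimpleSlide("2a", "Formal explanation"),
    new MultipleChoiceSlide("2b", "Practical example", ["A", "B", "C"]),
  ]),
];
```

With such a structure, the teacher client could, for instance, call presentation[1].resolve(1) to obtain slide 2b for a particular student while the class as a whole remains on slide index 2.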

4.2 Implementation

The design is implemented as an MVP prototype, capable of running online as a client-server system, and with enough functionalities to allow for testing.

Of the components in Fig. 4, the server is the one providing the majority of the required functionalities. It is responsible for the sessions created by a teacher and acts as a communication relay between all clients connected to a particular session, through communication protocols derived from our requirements. The server keeps a centralized global state of all sessions, including a session identifier, the presentation slides and current viewing state, and a list of attendees. The server manages individualization by keeping track of the specific slide that is presented to each attendee, as well as the dynamic state of the interactive slides (e.g., the answers generated when a learner engages with one of the multiple-choice slides). After every interaction received from a client, the server updates the client’s state.
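
A simplified sketch of the session state kept by the server could look as follows; the field names are illustrative assumptions rather than the exact prototype data model:

```typescript
// Illustrative server-side session state (names are assumptions);
// the Slide type is reused from the sketch in Sect. 4.1.
interface Attendee {
  clientId: string;
  name: string;
  groupId?: string; // optional dynamic grouping
}

interface SessionState {
  sessionId: string;
  slides: Slide[];                  // the semi-linear presentation
  currentSlideIndex: number;        // overall position shared by the whole class
  assignments: Map<string, number>; // clientId -> variation inside a slide collection
  answers: Map<string, string>;     // clientId -> latest answer on an interactive slide
  attendees: Attendee[];
}

// After every interaction from a client, the server updates its state and
// relays the change (e.g. to the teacher's dashboard).
function onStudentAnswer(state: SessionState, clientId: string, answer: string): void {
  state.answers.set(clientId, answer);
  // relayToTeacher(state.sessionId, clientId, answer); // assumed relay call
}
```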

The system also has two types of clients (visible in Fig. 6): the teacher and the student client. Both are implemented to be lightweight, capable of displaying the relevant session data and of managing the receiving and sending of events to the server. Both clients maintain a reference to the server, which is used to publish and subscribe to events to and from the server. A client also maintains a local state to make the system more responsive; however, the local state is overwritten whenever newer data is received from the server. The student client allows its user to view the current slide in a live presentation, and also to interact with the interactive slides. The teacher client is designed to behave similarly to the student client and therefore includes the same functionalities. However, it also keeps information about the session id, the list of connected attendees (including the dynamic state of their assignments and interactions), and an outline of the entire presentation. A dashboard view presents these data to the teacher and allows the orchestration of the flow of the lecture.
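
The publish/subscribe behaviour of the student client could be sketched as follows, assuming the Socket.IO client library; the event names and server address are again illustrative:

```typescript
// Sketch of the student client's event handling (event names are assumptions).
import { io, Socket } from "socket.io-client";

interface StudentView {
  slideId: string;
  body: string;
  interactive: boolean;
}

const socket: Socket = io("https://example-orchestration-tool.app"); // assumed URL

// Local state keeps the UI responsive between server messages...
let localView: StudentView | null = null;

// ...but it is overwritten whenever newer data arrives from the server.
socket.on("session:update", (view: StudentView) => {
  localView = view;
  // In the prototype this would trigger a re-render of the React components.
});

// Publishing an interaction, e.g. submitting an answer on an interactive slide.
function submitAnswer(answer: string): void {
  socket.emit("student:answer", { slideId: localView?.slideId, answer });
}
```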

Fig. 6. Initial GUI mockup for the teacher (on the left) and student client (on the right).

The left part of Fig. 6 shows the teacher client. In clockwise sequence, starting from the top-left screen:

  • the list of all available presentations;

  • the main presentation view with the list of attending students on the right;

  • the bottom-right screen shows the content assignment window, where the students can be assigned to the slides of the current presentation;

  • finally, the bottom-left screen shows the presentation view along with an overview of the presentation: a stacked rectangle indicates a slide collection.

The right part of Fig. 6 shows instead the mockup of the user interface for the student client: the student can log in (marked as 1 in Fig. 6), then she could be shown a simple slide (marked as 2), a multiple-choice slide (marked as 3), or a “text answer” (a simple kind of interactive slide we used in the test, as discussed in Sect. 5).

Figure 7 is a composite image showing five student clients and one teacher client (in the bottom-right corner). The teacher client shows the assignment screen, while the 5 students are all assigned to one of two possible sub-slides, indexed 4a and 4b. The slides 4a and 4b are interactive and Fig. 7 shows that some students have already submitted their answers and received automatic feedback.

Fig. 7. The two kinds of clients in the implemented prototype.

The prototype was developed by one of the authors, following an agile approach with a backlog, sprints, and milestones; GitHub was used as the code repository and for version control. The prototype is composed of a Node.js backend server managing all active sessions and clients. The backend server and clients are written in TypeScript and communicate using a custom communication protocol over the Socket.IO framework. Moreover, both clients are implemented in React. Functional testing was performed periodically during sprints, and the system performed adequately, with all technical tests staying well within their set targets (see the requirements in Sect. 3.1).
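
To give an idea of the custom protocol layered on top of Socket.IO, a sketch of shared event definitions is shown below; the event names and payload shapes are assumptions for illustration and may differ from the protocol actually used in the prototype:

```typescript
// Illustrative shared protocol definitions; on the backend they could be used to
// type the endpoints, e.g. new Server<ClientToServerEvents, ServerToClientEvents>().
export interface ClientToServerEvents {
  "session:join": (sessionId: string, studentName: string) => void;
  "slide:change": (sessionId: string, slideIndex: number) => void;        // teacher only
  "student:assign": (sessionId: string, clientId: string, variation: number) => void;
  "student:answer": (sessionId: string, slideId: string, answer: string) => void;
}

export interface ServerToClientEvents {
  "session:update": (view: { slideId: string; body: string }) => void;
  "answer:received": (clientId: string, slideId: string, answer: string) => void; // to teacher
}
```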

After the final sprint, the final prototype was deployed to a cloud service provider, with a combined teacher and student frontend client, and a backend server managing each session and all necessary data persistence. The prototype ran on virtualized server hardware, called a Droplet, on the cloud computing service DigitalOcean. The server used Ubuntu 18.04.1 and ran on a single virtual CPU core with 1 GB RAM and 25 GB disk space. Moreover, Docker was used to manage the builds during sprints and to run them on the servers. Finally, the database functionality was initially implemented using a database-as-a-service product from MongoDB called Atlas. However, since at this stage persistence was needed only for the actual presentation content, local files were used instead, storing the presentations as JSON files.
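
A minimal sketch of this file-based persistence, assuming Node’s built-in fs module and an illustrative directory layout, could be:

```typescript
// Sketch of JSON file persistence for presentations (paths are illustrative).
import { promises as fs } from "fs";
import * as path from "path";

const PRESENTATION_DIR = "./data/presentations"; // assumed storage location

// Presentations are stored as plain JSON; slide objects (Sect. 4.1) would be
// revived from this data when a session is created.
async function savePresentation(id: string, slides: unknown[]): Promise<void> {
  await fs.mkdir(PRESENTATION_DIR, { recursive: true });
  const file = path.join(PRESENTATION_DIR, `${id}.json`);
  await fs.writeFile(file, JSON.stringify(slides, null, 2), "utf-8");
}

async function loadPresentation(id: string): Promise<unknown[]> {
  const file = path.join(PRESENTATION_DIR, `${id}.json`);
  return JSON.parse(await fs.readFile(file, "utf-8"));
}
```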

5 Experiment

After finishing the development of the initial MVP prototype, an experiment was conducted to assess the usability and performance of our prototype. The test participants were a focus group of 14 university students (from the University of Southern Denmark). The participants were chosen in part because they represent a convenience sample for the authors, given that COVID-19 restrictions were in effect during the development of this project. However, according to our experience with e-learning at the university level, and according to Boelens et al. [12], these students are highly diverse in interests, competencies, and readiness for learning. They are, consequently, an appropriate sample when investigating better and more efficient tools to manage individualized learning.

The experiment was organized as a variation of A/B testing [1], with the participants divided into groups and performing predefined tasks with or without our tool; we added an element of role-play to the test, by asking some participants to act as teachers and others as students. Simple script-like instructions, called “presentation brief for teachers” and “presentation brief for students”, were provided to prime the participants in their roles; they were introduced to the participants during the pre-test meeting. The student-participants were asked to select a specific role among: Strong Student, Slow Student, Lazy Student, and Normal Student. Roles were explained in the brief, so that each student-participant could role-play the chosen role appropriately. Each session was recorded, observational notes were taken by the authors, and all participants were given a follow-up survey focusing on the perceived usability and efficiency of the tool.

5.1 Test Preparation and Setup

To perform this experiment, we needed an actual presentation that could take advantage of our new semi-linear presentation format, so we developed one covering some historical and some technical topics (in order to show the potential of semi-linear presentations in both the humanistic and the technical context).

The systematic survey and the questionnaire were the basis for deciding which parts of the presentation should be individualized, and how this individualization was to be controlled: by either the teacher-participants or the student-participants. We then manually converted the presentation to both our new tool’s format and PowerPoint, with the interactive content delivered through separate documents representing each slide collection, with each student assigned one slide from each of these documents. Student responses were handled through a Zoom session’s chat. Effort was taken to ensure minimal deviation from the original presentation in both formats; this is also why we did not use Quizlet, despite its popularity among teachers, as it did not have suitable functionality for this scenario. The resulting structure of the presentation is visible in Fig. 8, and the tasks were designed to show that with our MVP both teachers and students can be in control of a presentation’s flow at different times.

The experiment was executed in four separate groups: two test groups and two control groups. Each group consisted of one participant acting as the teacher and three acting as students. The “teachers” were selected randomly within each group, and for the second session of each group, one random “student” was asked to repeat the experiment as a “teacher”. The test groups used the new software tool, while the teacher-participants of the control groups (working with the currently used tools) presented the same material and performed the same tasks with their student-participants. All participants were physically present during the experiment, with the exception of two who attended remotely via a Zoom meeting.

Through the two test group sessions and the two control group sessions, data was gathered from three points of view: each session’s duration was recorded, and both the teacher- and student-participants were surveyed upon completion of the session, asking for their perceived efficiency of the system.

Fig. 8. Presentation structure for the experiment.

5.2 Findings

The desired outcome for both groups is to successfully deliver individualized content. Hence, we decided that measuring the time it takes to achieve the desired result would indicate efficiency: a shorter time is regarded as an indication of higher efficiency. However, as it is critically important for a classroom orchestration tool to be adopted by its users, both students’ and teachers’ perceived, subjective opinions of the system’s efficiency were also considered. Data triangulation was therefore used to combine task-completion timing, used as a quantitative efficiency measure, with the qualitative, self-reported post-test surveys.

The first quantitative data we collected is the session duration, visible in Table 3. According to the table, the prototype tool performed measurably more efficiently than the currently used tools (i.e., PowerPoint), with the test groups on average completing their sessions 37.6% faster than the control groups.

Table 3. Session duration measurements and descriptive statistics.

The subjective measurements made by the participants regarding the efficiency of the system were recorded using a 5-point Likert scale ranging from “Very inefficient” (scored as −2) to “Very efficient” (scored as 2). Both teacher- and student-participants were asked to rate various aspects of a session’s efficiency, with a final question on overall efficiency at the end. Students in the test group reportedly considered their session more efficient than those in the control group; for receiving exercises, the test group reported higher efficiency, with a mean 0.33 higher than the control group. With regard to submitting answers, the control group reported a marginal lead on efficiency, with a mean difference of 0.17 in favor of the control group. However, on the final question about overall efficiency, the test group reported a mean 0.17 higher than the control group.

Given the small number of teacher-participants, a quantitative analysis of the data from their post-test interviews would not lead to statistically significant results; nevertheless, we computed four key statistics: efficiency of showing content to students (a), efficiency of assigning content to individual students (b), efficiency of observing student progress (c), and overall efficiency (d). Each of these was rated from −2 “Very inefficient” to 2 “Very efficient”, and a compound statistic was created by averaging them, as shown in the diagram of Fig. 9. The figure also shows a similar statistic for students, based on four perceived efficiency measures: efficiency of getting started with a session (a), efficiency of receiving exercises (b), efficiency of submitting answers (c), and overall lesson efficiency (d). An interesting finding was a clear separation between the two teacher groups with respect to the overall measurement of perceived efficiency (d): both teachers of the test groups reported the same overall efficiency of “Somewhat inefficient”, which resulted in a mean score of −1, a total of 3 points less than the control groups, where both teachers reported the overall lesson as “Very efficient”. This is a negative result for our prototype; therefore, we decided to conduct informal interviews with the participants acting as teachers, in order to gain more qualitative data. We found that the teachers of the control groups thought our tool performed very efficiently in the tests, but voiced concerns as to its scalability, in particular with respect to managing large numbers of answers from the students. The teacher-participants in the test groups also said that the prototype was efficient, but complained about the user experience, the unfamiliarity with the tool, and the lack of visibility into what each student-participant was seeing and doing; they explained that these problems were responsible for the low overall efficiency score they had self-reported. One of them stated that the unfamiliarity with the process of assigning students to groups with the prototype, combined with having little familiarity with either the content or the students, made the experience “overwhelming”.

Fig. 9. Perceived efficiency: test and control groups.

Figure 9 shows the result of compounding all the self-reported data regarding efficiency into a single statistic. While the students in the test group reported better perceived efficiency, for the teachers we found the opposite, with the control group perceiving better efficiency with the current tools.

6 Discussion

From the results in Sect. 5, it is of course not possible to draw a clear conclusion about the efficiency of our MVP. However, the goal of this project is to establish the requirements, design criteria, and feasibility of a digital, live classroom orchestration tool to facilitate the delivery of individualized content. Moreover, the experiment presented in the previous section is only an early test, with a convenience sample of users. It provided insights into the usability of the current design of the prototype, but more experiments will be needed, involving our network of primary and secondary school teachers in Denmark.

Being aware of the limitations of this first experiment, we complemented efficiency with other parameters, such as technical performance, usability for both teachers and students, and perceived efficiency. The performance requirements, such as fast delivery of the presentation to the students and of the assignments’ solutions back to the teachers, were easily achieved by our prototype, mainly because of the limited number of users in our experiments, which in turn resulted in a low load on the system. Benchmark testing performed during development showed that the current architecture of the prototype, based on Node.js, React and Socket.IO, has the potential to scale up to a realistic number of participants (e.g., primary school classrooms in Denmark typically range from 15 to 30 pupils, and university classrooms usually do not exceed 50 students); however, some effort would be needed to streamline the structure of the semi-linear presentations and the storing of individuals’ assignments and interactions.

Considering the limited user interface, the unfamiliarity of the new tool, and the fact that the contents (i.e., the presentation in Fig. 8) were created by the authors and not decided by the test participants, the prototype did satisfy our main usability requirements, as supported by our analysis and observations. However, teacher-participants reported that the user interface was “confusing”, as they struggled to find some of the information they felt they needed during their tasks. They also reported that a major hindrance was not being able to see what each individual student was seeing, dynamically, as the presentation proceeded. This suggests that adding a “group view” to the teacher’s client (similar to the gallery view in Zoom), showing a simplified view of each student’s client, could improve clarity. Moreover, to help teachers manage large numbers of presentations, the teacher client should adopt a dashboard design pattern.

The analysis of the data we collected during the development and testing of the MVP also revealed that teachers have a practice of individualizing content, which involves forms of content differentiation, to adapt it to the students in a class. To assess our MVP, we formulated a practical definition of efficiency of content delivery, measured by both objective timings of the session and subjective, self-reported experiences by both types of users (teachers and students). In the process of defining how a presentation can be individualized by a teacher, we defined a semi-linear presentation format, with simple slides and slide collections; simple slides can also be interactive, for example containing multiple-choice questions. Interviews and our early observations show that the participants of our experiment responded positively to the idea and could work with these semi-linear presentations. We have not yet investigated the editing of the semi-linear presentations, but the data so far collected on our prototype supports the need to be conservative and provide consistent and known user interfaces to teachers; we are therefore considering a variation of existing presentation-authoring tools, to avoid the problems that teachers could face by having to adapt to an unfamiliar interface.

Finally, an interesting discrepancy emerged in our triangulated data about the efficiency in the experiment’s tasks: all participants reported a lower efficiency than what we measured. In previous research with e-learning and Scandinavian teachers, this phenomenon is typically associated with tacit knowledge in skill practices (as in [20] and [1]). That in turn suggests that to improve our prototype further, we should rely on ethnographic methods and long-term observations not only of the experts in orchestration practices (i.e., teachers), but also of the socio-material relations that exist in the educational institutions, among teachers and other stakeholders (such as administrators, pedagogy specialists and technology experts working for schools), because tacit knowledge is embedded in the processes and can require looking at the actors and roles in organizations.

7 Conclusion

In this paper we focus on individualized content delivery to a live classroom; in particular, we are interested in what can be considered individualized lesson content, and what constitutes a usable and efficient delivery of such content. A knowledge gap is identified in the lack of widely adopted, efficient digital tools in this domain, and our working hypothesis is that a new tool for individualized content delivery can be designed and implemented based on data from existing classroom orchestration and teachers’ experiences with existing tools and techniques. We developed a testable minimum viable product and performed a preliminary experiment comparing the identified current approach against our new tool. Most teachers who answered our questionnaire do deliver individualized content, following various criteria as to how and what to differentiate in their contents; moreover, teachers currently adopt a presentation tool as well as an interactive testing/quizzing tool for individualization. Based on these findings, we realized the need for a different kind of presentation structure; we therefore defined a novel semi-linear presentation format that allows for grouping of slides, as well as interactive, quiz-like slides.

Our MVP was tested with encouraging results for an early prototype, and the negative feedback we received was mainly focused on unfamiliarity and the rather crude user interface. However, the MVP is fully functional and fulfils the majority of the technical requirements, and this early evidence supports our belief that the new tool has the potential to help teachers in orchestrating lectures with individualized content in the augmented classroom, including the increasingly relevant blended scenario where part of a class attends remotely.

Future work includes better usability and improved UI, especially for the teacher client. Further tests are needed to explore the computer-supported collaborative learning potential of the tool. We are currently developing more examples of semi-linear presentations, across different school subjects, and planning further tests involving classes in local Danish institutions.