Introduction

The moment of truth for a learner occurs when the acquired knowledge and trained skill set have to be recalled, practised and performed outside of the safe environment of the classroom. Reality does not provide carefully controlled experimental set-ups designed to demonstrate learnt theory; it includes all the side effects that influence the outcome. The guidelines in textbooks, or instructions in the classroom, are replaced with an unsupervised environment where decisions include the selection of tools and methods as well as further attributes such as sequences, colleagues, safety, resources, effectiveness or efficiency. In a real-world setting, failures can have serious consequences; there is no reset button or option to go back in time for another run.

While the existing skill gap between the qualification of graduates from educational institutions and industry expectations has many dimensions, we focus on two factors in this chapter: mass vs. individualised education and the lack of authenticity. First, the classroom is not a homogeneous group of equal learners; instead, we are faced with classes of students from diverse backgrounds and sometimes with dissimilar needs (Wood and Reefke 2010). It is safe to say that each individual learner has unique needs in processing information and acquiring the knowledge needed to make sophisticated decisions in known and, especially, unknown situations. We have to acknowledge this individuality to create an effective and efficient learning process, a condition that is generally not achievable in classrooms or on-the-job training given available resources such as time and staff. Second, learning from a book or video does not necessarily prepare a student for the tasks and challenges they are likely to experience in their professional life. Instead, many students struggle to move from an ‘academic’ understanding of theory to an understanding of practice and of how to use this to frame their actions in a real-life working situation.

In this chapter, we describe the development of an immersive authentic virtual learning space, depict mechanisms to increase engagement and motivation, introduce the indirect guidance of the learner and explain how immediate formative feedback to the learner can be achieved. This project (known as nDiVE) is ongoing; however, preliminary results and a resulting framework for creating future learning units are demonstrated and outlined in this chapter.

Stories as the Canvas for Individualisation

Storytelling is an ancient craft that creates an illusion to immerse the audience in a fictional or non-fictional space. The storyteller orchestrates words, gestures, expressions and other stylistic means (e.g. suspense) to create a rough framework, which each listener fills in a (slightly) different way. The story defines the space and its objectives; the narrative, or path to reach these objectives, should draw on individual experience and knowledge, allowing one to explore the space and use creativity to find solutions (Reiners et al. 2014a, b).

The framework addresses self-guided and motivated learning, focusing on understanding and critical thinking (Friedman 2006). The suggested virtual learning spaces are not intended to replace the traditional classroom but provide an opportunity to evaluate and validate the readiness of the acquired knowledge and skills. The story is told beforehand to ‘provide relevance and meaning to the experience. It provides context’ (Kapp 2012, p. 41). The virtual learning space provides the means to individualise the story: rough milestones on the path are defined and incentives are given to attract learners in a certain direction, to discourage them from going in the wrong direction, or to pass on guidance or options on how to continue. Stories are the ‘original form of teaching’ (Pederson 1995, p. 2), yet we have to include state-of-the-art technology to reach today’s learners and to sculpt and design the narrative to match individual learners (Reiners et al. 2014a, b).

Immersion and Virtual Learning Space

Immersion is ‘the subjective impression that one is participating in a comprehensive, realistic experience’ (Dede 2009, p. 66), where the experience does not depend on how, or with what means, the immersive presence was created. In our research, we further advance perceptual immersion (Biocca and Delaney 1995), i.e. using head-mounted displays such as the Oculus Rift (Oculus 2013) or a CAVE (Cave Automatic Virtual Environment, a space with projections of the environment on all walls) to immerse a user in a virtual environment.

Complicating research, the term ‘immersion’ is regularly used loosely, and even imprecisely, in both common and academic language. In everyday language it may be acceptable to be imprecise, yet this creates a challenge for the researcher trying to capture the right phenomenon in surveys, interviews or experiments. In academic language, the definition has to be distinguished from various other human experiences related to concepts such as flow, engagement, involvement, presence and cognitive absorption (cf. Jennett et al. 2008; Slater 2003). Depending on discipline, context, scenario and experience, it can translate to anything from an affordance of a technological system (e.g. Mestre and Vercher 2011; Slater and Wilbur 1997) to a psychological concept or an experience (e.g. Witmer and Singer 1998; Brown and Cairns 2004).

Palmer (1995) suggests that technology-generated immersion could be considered different from psychological immersion; the latter involves the depth of engagement with a given task, such as found with relatively simple games (e.g. solitaire on a PC). According to Ermi and Mäyrä (2005), to understand what a game is, we need to learn more about the act of playing and the experience of gameplay. The authors observed Finnish game-playing children and their non-playing parents, conceptualising a heuristic gameplay model covering three distinct components: sensory, challenge-based and imaginative immersion (SCI model) (cf. Brown and Cairns 2004). The authors validated their model by analysing Finnish video game players (N = 193) with self-evaluation questionnaires addressing the three immersion dimensions, observing that many of the popular games received high scores in immersion. As the model was developed by studying only the Finnish population, with the questionnaire input and output perhaps validating the model somewhat too directly, more research is needed to assess it. Similar caution applies to a small study by Brown and Cairns (2004), who used Grounded Theory to study seven gamers through open-ended questions to understand and define immersion. They report that players use immersion as a term to describe their involvement with games. Three levels of involvement were found: engagement, engrossment and total immersion. System or human barriers, such as game construction or concentration, control access to each of these levels. They also note that ‘each level of involvement is only possible if the barriers to the level are removed’ (Brown and Cairns 2004, p. 2).

Although more research in this area is still needed, such studies support the view that well-designed games, regardless of the precise definition, have the ability to ‘draw people in’ (Jennett et al. 2008, p. 641), and emerging frameworks and definitions can prove valuable vehicles on the journey to understanding how to achieve similar results with immersive virtual environments and gamification in various contexts, for example education. Gamification is ‘a process of enhancing a service with affordances for gameful experiences in order to support user’s overall value creation’ (Huotari and Hamari 2012, p. 19). The convergence of video games and immersive virtual environments, namely game design mechanics used in immersive virtual environments mediated by a head-mounted display and other controls promoting a more embodied user experience, is taking research in new directions.

Authenticity, Fidelity and Realism

Authenticity is the representation of real-world tasks. When these are recreated in virtual environments, realism (exactness in visualisation) and fidelity (exactness in functionality) become essential, as authentic activities mirror the ‘kind of activities people do in the real world’ (Herrington and Kervin 2007, p. 223). In authentic learning activities, students find and solve problems with the ‘complexity and uncertainty of the real world’ (Herrington 2006, p. 3). Simulations and virtual environments provide spaces for authentic learning experiences to occur while real-life costs and consequences are avoided (Gregory 2011). The authentic simulations discussed here should be realistic, with the relevant activity emerging from the interaction between the participants, the simulator and the context (Rystedt and Sjoblom 2012, p. 785).

The ‘Carrot and Stick’ Framework

Many commonly used educational practices focus on implicitly coercive approaches to encouraging student compliance with educators’ wishes, i.e. applying a ‘stick’ approach. However, the term ‘carrot and stick’ implies a threat of punishment used in conjunction with something tempting, dangled in front of the student in order to motivate them. While some elements may be used as an extrinsic motivator of this sort (e.g. gold stars for young children that can be redeemed for a reward once enough are collected), their use remains somewhat limited, with most practices designed to punish non-compliance (e.g. detention or loss of grades under some circumstances). Effective education may require the application of sticks, but there is increasingly room to include carrots.

With self-directed learning, the focus shifts further away from the use of punishments and coercive acts to ensure compliance, enabling the student to engage in their self-directed study. Self-directed study suggests a design with greater reliance on intrinsic motivation; self-determination theory supports this as an approach to create stronger drives towards learning (Ryan and Deci 2000). Note that a ‘carrot’ is by nature a reward, but we tend to think of it as a purely extrinsic motivator: where you make progress, you are rewarded.

Within a self-guided learning environment, supported in a virtual world or through gamification, both carrots and sticks can easily be applied. Sticks can be as simple as awarding more points for completing a task directly or more quickly, so that failure to comply results in fewer points, or in explicit penalties where points are deducted. An extrinsic carrot may involve rewarding progress towards objectives, e.g. allocating points to specific tasks in a way that also provides feedback on overall progress in the learning scenario. Appropriate design to provide scalable challenges to learners may also include the use of artificially intelligent software agents or bots (i.e. non-player characters) that adapt to learners and continue to push them (Wood and Reiners 2013).

Intrinsic motivator carrots include many of the contemporary gamification mechanisms applied to create and support intrinsic motivation for tasks. The use of multiple lives or save points enables experimentation and trials to take place without negative consequences for the student, encouraging a range of actions to be attempted and allowing curiosity about a particular situation to be satisfied. Leaderboards can provide learners with an overall perspective of success and how they are faring against others, in a way that motivates them to continue as they collect points rather than becoming frustrated when penalised for failing tests (McGonigal 2011). The use of ghost or shadow images (Hebbel-Seeger 2013) or other comparative techniques enables observation and reflection on performance, with the ability to learn from others, from oneself, or from the difference between previous attempts and the attempts of others.

Gamification processes can aid the shift away from extrinsic motivators as carrots towards developing intrinsic motivation. Application of appropriate gamification design can increase the propensity of a student to engage in self-directed learning. In this way, the ‘carrot’ can become a multidimensional or multi-layered approach to developing, encouraging and supporting the students’ willingness to explore and engage in the learning environment. The ultimate focus is to shift the development of motivators away from extrinsic to intrinsic motivators over time while increasingly reducing the use of sticks in the environment (Landers et al. 2015); in this way, gamification can be supportive when designed to run in parallel with authentic learning (Wood et al. 2013).

The Two Faces of Technology

It is simple to equip existing learning spaces with state-of-the-art technology, but it is not simple to use that technology to improve the learning. The short life cycle of modern technology causes continuous changes and adaptations, where the focus risks shifting to integrating the technology while sacrificing the emphasis on the story that should justify the investment. Demonstrating improved learning and teaching quality through the integration of technology is a simple strategy for administrators and managers; yet this neglects the content and the story, which are the key to success (Bates and Sangra 2011). Another aspect is how the story is told by the storyteller and experienced by the audience. Here, technology can become the only means of creating the visible environment and, with it, the needed context. While immersion can be achieved by imagination alone, it can be intensified when multiple sensory impressions are combined (Cummings et al. 2012). For example, a learner can look at photos of an unknown city, yet it is almost impossible to derive the general dynamics, such as the noise or the aromas, from this media format. Technology enables the addition of further dimensions, e.g. using video to add the dynamics or using surround sound to be immersed from all directions.

We use technology in our framework to add further dimensions to the learning space. The selection of technology is based on our objective of authentic skills acquisition. The skill gap implies that theory alone is insufficient for later practice, forcing additional training after graduation. Skills training needs authenticity, so the context and functionality of the learning space should closely match real requirements. A preliminary study to select the most promising technology included traditional media, virtual worlds on traditional 2D monitors and 3D displays (similar to the 3D TVs that have become popular as part of home entertainment systems) as well as emerging technology, e.g. head-mounted displays (HMD) (Reiners et al. 2014a, b). Recent releases of HMDs have focused on providing consumer-ready equipment that promises to eliminate large investments in dedicated rooms such as CAVEs.

Figure 5.1 depicts examples of technology that support interaction with the virtual learning space and also allow records of the activities to be created for later analysis. The interaction is further distinguished between input and output, i.e. triggering the sensory system. Input concerns tracking the body in the real world and mapping these movements onto an avatar. This has traditionally been accomplished with a keyboard and mouse; contemporary controllers use emerging technology (e.g. head movements via sensors in the head-mounted display; walking in a harness; arm movements monitored by camera systems or wearable sensors; facial expressions captured with cameras; voice using microphones and earpieces; or, ultimately but still speculatively, future thought-based controllers). Output directly addresses the sensory system, i.e. vision (using displays, either head-mounted or fixed), audio (e.g. surround sound headsets), touch (e.g. data or haptic gloves) or force (e.g. pressure vests). Further technology is needed to gather data about the learner as well as the environment. Most relevant are data about any change in the environment (e.g. moving objects), sensor data from input and output devices related to the virtual space (e.g. viewing direction and the object in focus) and data about the learner (e.g. EEG readings, video stream). The data are used to document the learning progress, but also to create formative feedback (see Reiners et al. 2014a). The set-up allows synchronisation of protocols from both the real world and the virtual environment; in particular, a change in biometric data in the real world indicates feedback from the virtual environment (Gibson and Jackl 2015). This can be used to support the measurement of immersion and awareness of the virtual space. Further data are collected from the environment by logging every action and event at short time intervals.

Fig. 5.1

Variability of input devices and recording methods of output data: (1) head-mounted display; (2) walking platform; (3) game controller; (4) space tracking systems for hand movements; (5) close-range hand movement tracking; (6) person movement tracking; (7) visual observation of the candidate; (8) EEG recording; (9) EKG recording and (10) interviews and surveys
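To make this data collection concrete, the following sketch shows how such a synchronised log record might be structured; the class names, fields and sampling logic are our own illustration (assuming the 0.25-second interval used in the experiments below), not the project's actual logging format.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LogRecord:
    """One synchronised sample of virtual and real-world data (hypothetical format)."""
    timestamp: float               # seconds since the start of the session
    avatar_position: tuple         # (x, y, z) position of the avatar on the virtual terminal
    view_direction: tuple          # unit vector of the current viewing direction
    object_in_focus: Optional[str] # name of the virtual object currently looked at, if any
    hmd_orientation: tuple         # raw head-mounted-display orientation (yaw, pitch, roll)
    biometric: dict = field(default_factory=dict)  # e.g. {"heart_rate": 72, "eeg": [...]}

class SessionLogger:
    """Collects LogRecords at a fixed interval (0.25 s in the experiments described below)."""

    def __init__(self, interval=0.25):
        self.interval = interval
        self.records = []          # accumulated LogRecord instances
        self._start = time.time()

    def sample(self, avatar_position, view_direction, object_in_focus,
               hmd_orientation, biometric=None):
        """Append one synchronised record; called by the environment every `interval` seconds."""
        self.records.append(LogRecord(
            timestamp=time.time() - self._start,
            avatar_position=avatar_position,
            view_direction=view_direction,
            object_in_focus=object_in_focus,
            hmd_orientation=hmd_orientation,
            biometric=biometric or {},
        ))
```

Keeping virtual-environment state and real-world sensor readings in one record simplifies the later synchronisation of protocols from both worlds.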

A preliminary experiment in Reiners et al. (2015b) examined the intensity of immersion in 3D virtual worlds (Second Life using a traditional 2D monitor rather than the version for head-mounted displays) compared to the use of a head-mounted display. Despite the difference in user control (third person vs. first person), participant comments and ratings unanimously indicated the outcome: the HMD experience supported the ‘feeling of being there’ far more than the 2D display did. In addition, the sensory feedback was experienced more intensely, including minor negative side effects such as dizziness. The experience was also considered ‘more authentic’, as the navigation was more aligned with the body. Note that participants may not consider ‘realistic’ and ‘authentic’ to be different concepts.

Gamification and Game Design

An early and well-known example of game design using storytelling is the game Dungeons and Dragons (Loh 2007), in which the players form a team and undergo an adventure guided by the storyteller, who ensures progress. The game design provides the storyteller with god-like features, enabling them to interfere and direct events at any time while still allowing players the freedom to decide on the narrative, i.e. which path to follow.

The storyteller has the most influence on the learner’s decisions (Bauman and Briggs 1990). However, in self-guided explorative learning spaces, it is either the learner undertaking the journey (Danilicheva et al. 2009) or the environment itself providing the guidance. This is commonly done in modern, so-called open world games, which allow free exploration of the environment. Yet good game design blends the storyline with the chosen narrative of the player, such that a non-player character (NPC) or an event reminds the player of the overall game objectives, so players are ‘nudged’ in the desired direction (Reiners et al. 2014a, b). These reminders may be weak (e.g. an NPC talking about something as the player walks by) or compelling (e.g. all paths but one blocked by obstacles to steer the player in one direction). The same mechanism needs to be applied in learning spaces; the scope of the narrative must be suitably broad for learners to engage a sense of curiosity and develop intrinsic motivation for learning, while being limited enough for the instructor to ensure completion of learning objectives and course outcomes. The environment has to continuously track the player against the expected narrative and ensure that milestones are achieved and key actions are performed. For example, the loading of a container on a vessel requires that (1) it is validated that the container has to be loaded on the vessel; (2) the goods are first deposited in the container; and (3) the container is then transported to the container bridge to be loaded. The player must fulfil these rules to be successful; yet there is free choice, among others, of transport vehicle, path and speed. Figures 5.4 and 5.5 illustrate the resulting narratives of an expert and of an inexperienced learner (see also the Evaluation and Formative Feedback section).
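As an illustration of how such ordering constraints could be checked against a learner's logged actions, the following sketch encodes the three loading prerequisites as a required order of action names; the action identifiers and the checking logic are assumptions for illustration, not the nDiVE implementation.

```python
# Ordering rule for the container-loading example: a container may only be loaded
# once it has been validated for the vessel, the goods have been deposited in it,
# and it has been transported to the container bridge. Action names are illustrative.
REQUIRED_ORDER = ["validate_for_vessel", "deposit_goods", "transport_to_bridge", "load_on_vessel"]

def check_loading_sequence(actions):
    """Return (ok, message) for a learner's logged sequence of action names."""
    positions = []
    for step in REQUIRED_ORDER:
        if step not in actions:
            return False, f"Missing step: {step}"
        positions.append(actions.index(step))
    if positions != sorted(positions):
        return False, "Steps performed out of order"
    return True, "Loading sequence is valid"

# Transport vehicle, path and speed remain free choices; only the key order is checked.
ok, msg = check_loading_sequence(
    ["validate_for_vessel", "drive_straddle_carrier", "deposit_goods",
     "transport_to_bridge", "load_on_vessel"])
print(ok, msg)  # True Loading sequence is valid
```

Intermediate actions that are not part of the rule (here, the choice of vehicle) simply pass through, which preserves the learner's freedom within the narrative.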

Examples of Explorative Authentic Learning Spaces

We conducted experiments to evaluate how a learner explores the virtual learning space based on a general briefing on the objectives. The scenario is called ‘Open Container’ and is set on an active virtual container terminal. The background story is that customs inspected a container and forgot to secure it according to safety guidelines. The learner’s task is to go to the container (its location is provided on a map) without getting injured, killed or simply disturbing the working process. Note that the container terminal is deliberately unrealistic in that it contains an unusually high number of dangerous situations.

The learner wears the Oculus Rift with headphones for a high degree of immersion, while using a PlayStation controller to move and turn the avatar. The avatar’s head and viewing direction are aligned with the movement of the Oculus Rift. The player starts outside the container terminal in a room where the controls are explained and a video about the objective is shown. Figure 5.2 shows the view after leaving this starting location. There are three dominant points of interest: the warehouse on the left, a large building ahead and the container terminal. The given task, as well as an open gate, increases the attraction of the latter. The experiments showed that most participants walked directly through this gate.

Fig. 5.2

View after leaving the starting location: the left image shows the view towards the warehouse and parts of the building; the right image shows the container terminal with the open gate
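As a rough illustration of how the controller input and the head orientation might be combined into avatar movement, the following sketch rotates the gamepad stick vector by the current head yaw so that ‘forward’ follows the gaze; the function, parameters and update logic are hypothetical and merely indicate the general idea.

```python
import math

def update_avatar(position, head_yaw, stick_x, stick_y, speed=1.5, dt=0.016):
    """Move the avatar in the direction of the gamepad stick, relative to the
    current head yaw reported by the head-mounted display (illustrative only)."""
    # Rotate the 2D stick vector by the head yaw so that 'forward' follows the gaze.
    dx = stick_x * math.cos(head_yaw) - stick_y * math.sin(head_yaw)
    dz = stick_x * math.sin(head_yaw) + stick_y * math.cos(head_yaw)
    x, y, z = position
    return (x + dx * speed * dt, y, z + dz * speed * dt)

# One simulation step: head turned 90 degrees, stick pushed fully forward.
print(update_avatar((0.0, 0.0, 0.0), math.pi / 2, 0.0, 1.0))
```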

Figure 5.3 elaborates the path and the methods used to indirectly guide the students. In these experiments, we left open the option to stray from the ideal path so we could investigate the participants’ behaviour; however, we provided visual hints (signs), points of attraction (open areas, gaps), blocks (a container blocking a path) or dynamic actions (moving vehicles, fire) to direct the participants in the desired direction. While the learner is unable to pass through a container, signs are only indicators and do not actively block exploration. Figure 5.3 shows triggers that start an activity as well as places where the learner can die or be injured on the terminal. The triggers in this scenario (1) provide guidance towards the container terminal by voice, (2) as in (1), but indicating that a sign said ‘wrong way’, (3) start the backup of the truck, (4) start the bridge moving towards the learner, (5) ignite the fire, (6) close the container while the learner is inside it and (7) drop the container while the learner is under it. The red areas indicate the locations where a learner can die, e.g. (1) crushed by the truck, (2) burned in the fire, (3) locked in a container, (4) hit by the bridge or (5) hit by a dropping container. The scenario is further explored in Reiners et al. (2014b).

Fig. 5.3

Overview of the container terminal scenario. Yellow markers are triggers (1–7), red areas indicate where the learner can die (1–5), white numbers are milestones that the learner should reach (1–4) and the blue line shows the path an expert walked while solving the task
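Conceptually, such triggers can be modelled as proximity-activated events that fire once when the avatar enters a defined radius; the following sketch is an illustration with made-up trigger names, positions and radii, not the scenario's actual scripting.

```python
import math

class Trigger:
    """Fires an action once when the avatar enters a radius around a point (illustrative)."""

    def __init__(self, name, position, radius, action):
        self.name, self.position, self.radius = name, position, radius
        self.action = action   # callable that starts the scripted event
        self.fired = False

    def update(self, avatar_position):
        if not self.fired and math.dist(avatar_position, self.position) <= self.radius:
            self.fired = True
            self.action()

# Hypothetical triggers roughly corresponding to (3) and (5) above; positions are made up.
triggers = [
    Trigger("truck_backup", (42.0, 0.0, 18.0), 5.0, lambda: print("Truck starts backing up")),
    Trigger("ignite_fire", (75.0, 0.0, 30.0), 6.0, lambda: print("Fire ignites on the terminal")),
]

def on_avatar_moved(avatar_position):
    """Called whenever the avatar position changes, e.g. once per logged sample."""
    for trigger in triggers:
        trigger.update(avatar_position)
```

Firing each trigger only once keeps scripted events, such as the backing truck or the fire, from restarting every time the learner re-enters the area.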

Evaluation and Formative Feedback

The evaluation of these experiments was conducted manually by analysing the log files generated during experimentation. The log file captured the coordinates, the viewing direction and the object in sight at 0.25-s intervals. For the analysis, the data was visualised using a self-developed tool to create heatmaps (colour coding to highlight the range of differences observed) and analysed by categorising the behaviour throughout the experiment. In addition, the data can be used to replay the whole experiment, allowing the experimenter to gain further insight from different perspectives, including the view of the student. This manual step was chosen because of the early stage of the implementation. See below for more details on the next stage of the environment and how gamification is used to motivate replays.
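A minimal sketch of how such a heatmap could be derived from the position log (reusing the hypothetical LogRecord from the earlier sketch): accumulate the time spent per grid cell and apply a logarithmic scale to emphasise areas of interest. The grid size and scaling are assumptions; the self-developed visualisation tool itself is not reproduced here.

```python
import math
from collections import defaultdict

def build_heatmap(records, cell_size=2.0, interval=0.25):
    """Accumulate dwell time per grid cell and apply a log scale to emphasise areas of interest."""
    dwell = defaultdict(float)
    for rec in records:
        x, _, z = rec.avatar_position              # height is ignored on the flat terminal
        cell = (int(x // cell_size), int(z // cell_size))
        dwell[cell] += interval                    # each sample represents 0.25 s of presence
    # Logarithmic intensity, as used for the colour coding in Fig. 5.4.
    return {cell: math.log1p(seconds) for cell, seconds in dwell.items()}
```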

The focus of the experiment was on whether the scenario design enabled participants, without detailed instruction, to acquire the necessary information and complete the desired tasks while understanding the inherent dangers of the scenario. The participants were attending a public event at Curtin University (Festival of Learning; http://www.curtin.edu.au/learningfortomorrow/festival-of-learning.cfm) and had the choice to participate in the container terminal experiment, use another head-mounted display for a roller coaster ride or not participate at all. We noticed a slight preference for the roller coaster; yet no participants were disappointed when only the container terminal was available. The participants remained completely anonymous; no personal information or bio data was recorded. The introduction to the scenario included a short explanation of how to navigate with the PlayStation controller, the overall task of finding the one container, an introduction to the map outside the start location and an outline of the risks on a container terminal. Figure 5.4 shows two heatmaps: the left one is an example of one participant successfully finding the container; the right one shows the overlay of all participants in the experiment. It shows the variety of paths chosen, with a clear indication of the main, and also anticipated, path leading to the container. Of the 52 participants, 36 (69.2%) moved directly towards the gate, while four (7.7%) walked briefly past the gate but realised immediately that they had to go back. Eight (15.4%) chose to explore the house despite the given task; four (7.7%) walked in a completely different direction. Only 20 (38.5%) finished the task by arriving at the specified container; 22 (42.3%) died in the process (the most frequent cause of death was the falling container, as it was overlooked, closely followed by being locked in a container when participants ignored the ‘no entry’ sign). Twenty-eight (53.9%) explored the virtual environment before finding the goal. General feedback was that participants enjoyed the opportunity to explore a space that is not accessible in the real world. Only six (11.5%) had to stop the experiment early because they felt light dizziness.

Fig. 5.4

Heatmaps showing the path of one participant (left) and all participants (right). The intensity is based on a log function (to emphasise areas of interest) over the accumulated time an avatar spent at a position. Note that a brighter colour (yellow or red) implies more time spent by an avatar at this position. For example, on the left side, the red spot halfway to the target indicates that the student reacted to the moving bridge and waited for it to pass to find a way through without getting under the hanging container

This experiment was used to understand how learners approach an unknown virtual environment, i.e. one outside their field of study. We used a container terminal, a scenario most participants know of and whose processes they roughly understand. The short introduction informed them about risks (most of them common knowledge, as they included fire, moving vehicles and other large objects) but provided no special training. All but a few of the participants had not experienced HMDs before, which added to the novelty of the experiment. Observations supported the above-mentioned experiment regarding usability and the immediate ability to navigate in the virtual environment; an experience we did not observe in other environments such as Second Life. We recognised physical reactions in some participants, but these were less intense than in scenarios like roller coaster and racing simulators. Feedback after the experiment suggests that this is related to the real-world grounding of the scenario and the focus on the task as a subliminal distraction. The analysis of the individual heatmaps showed that most participants followed the given attractors to choose their paths and also focused on getting to the final destination. The low success rate, as well as the large number of deadly accidents, demonstrates that container terminals are not safe environments and that training and extensive induction should be undertaken.

We consider immediate formative feedback one of the most important components in a self-explorative environment. While the first stage uses visual feedback for guidance and messages in case of errors, we are also implementing two methods for an automated evaluation of activities, including various gamification mechanisms; see Reiners et al. (2014b) for more details. The first method uses defined milestones to derive feedback from the activities and, most importantly, from the shortfalls. Figures 5.3 and 5.5 show some milestones on the container terminal that the learner should reach according to the scenario. For the example of the container terminal, milestones could be:

Fig. 5.5

Comparison of narratives—expert and learner

  • Walking through the gate (1): The learner enters the actual learning space; the time passed or distance walked could provide an indication of how quickly the learner understood the task (reading maps or instructions) or found the relevant area.

  • Passing the truck (2): Indication that the learner walked in the correct direction, did not follow a path that was indicated as a no-entry zone and did not get hit by the truck. The milestone is reached if the learner turns right before the truck; the alternative of passing quickly behind the truck is considered as unsafe.

  • Past the second container bridge (3): The main path leads past the second container bridge, yet it is important to consider the bridge’s movements as well as the potential risk of walking below a hanging container. The milestone is reached beyond this area.

  • Passing the fire (4): The next milestone is either walking the north-east loop or waiting for the fire to be extinguished and the explosion to subside.

  • Finding the target container (5): Reaching and entering the container that was indicated on the map as the final destination.

Figure 5.5 shows the comparison of the narratives of the expert and the learner. The first step is the alignment of matching milestones, e.g. (e1) and (l2) for entering the container area. Up to this point, the expert and learner followed different narratives, e.g. the learner triggered [T:1] and walked further. This variation allows us to deduce feedback for the learner based on predefined templates associated with triggers and analytics of the collected data as described above. The milestone (l1) is created by trigger [T:1], which is associated with walking in the wrong direction. The immediate feedback is a message (voice or text) about the objective and where to go (guidance); furthermore, the analytics show larger distances than expected, which may indicate exploration of the environment or that the learner is lost. This information, with additional hints, can be passed to the learner (‘You walked in the wrong direction [template] and covered a distance of 250 m instead of 180 m, requiring 30% more time than expected [analytics]. Please go back and use the gate to the container area [guidance]’). The relatively straight walking line shows that the guidance helped the learner get back on track and reach the milestone (e1). A similar strategy is used in the following steps, where ignorance indicates overlooked signs or restricted areas, recognition indicates awareness of danger (not dying) and exploration indicates a longer walking distance than expected. In this example, the learner died by walking into a container that was ready to be closed and shipped. In our framework we focus on milestones and the deduction of what happened in between by using collected data and/or information from triggers and other objects. We do not detail the behaviour or gestures of avatars; see Fardinpour and Reiners (2014) for examples of how an analysis of human behaviour can be further processed.
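The construction of such feedback can be sketched as filling a trigger-associated template with simple analytics from the collected data; the template text, trigger identifier and distance-based calculation below are illustrative only.

```python
FEEDBACK_TEMPLATES = {
    "wrong_direction": ("You walked in the wrong direction and covered a distance of "
                        "{actual:.0f} m instead of {expected:.0f} m, {extra:.0%} more than "
                        "expected. Please go back and use the gate to the container area."),
}

def formative_feedback(trigger_id, walked_distance, expected_distance):
    """Combine a trigger-associated template with analytics derived from the collected data."""
    extra = walked_distance / expected_distance - 1.0
    if trigger_id == "T1":   # trigger [T:1]: walking in the wrong direction
        return FEEDBACK_TEMPLATES["wrong_direction"].format(
            actual=walked_distance, expected=expected_distance, extra=extra)
    return None

print(formative_feedback("T1", 250.0, 180.0))
```

The template supplies the qualitative message and guidance, while the analytics fill in the quantitative shortfall, mirroring the [template], [analytics] and [guidance] parts of the example above.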

The training scenario is gamified in several ways. First, feedback on performance relative to previous experiences and overall outcomes is provided based on records of the sectors defined by milestones and time. Second, the quality of the experience is indicated by an overall score, based on time, the absence of errors and, most importantly, not running into a deadly situation. Accuracy (short distance), time or recognition of danger are rewarded (the carrot), while other situations are punished (the stick) by adding penalty points. Third, additional gamification elements include rewind functions allowing learners to re-inspect a situation, competitive incentives and being in an authentic environment.
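A minimal scoring sketch along these lines is shown below; the weights and penalty values are our own assumptions, not calibrated values from the project.

```python
def scenario_score(total_time, expected_time, errors, died, danger_recognitions):
    """Combine time, errors, deaths and danger recognition into one score (illustrative weights)."""
    score = 100.0
    score -= max(0.0, total_time - expected_time) * 0.1   # being slower than expected costs points
    score -= errors * 5.0                                  # each error is a small stick
    score -= 50.0 if died else 0.0                         # a deadly situation dominates the result
    score += danger_recognitions * 3.0                     # recognising danger earns a carrot
    return max(0.0, score)

print(scenario_score(total_time=420, expected_time=300, errors=2, died=False, danger_recognitions=3))
# 87.0
```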

Conclusion

Virtual training environments have changed significantly over the last decade. Second Life created the critical mass of users, demonstrating that it is time to consider real alternatives to textbooks and the restricted classroom. Virtual worlds expanded the lecture by breathing life into the learning material and created a new understanding of collaborative learning in distance education. However, Second Life is an open space not primarily designed for education, but rather a lowest common denominator that allows any discipline to find tools and processes without being overwhelmed. The lack of educational support became the seed of other environments; SLoodle and Vacademia are just two examples that show how to link the world to subsidiary systems supporting the learning process.

In this chapter, we described the ongoing research project nDiVE (http://www.ndive-project.com), which focuses on an authentic and immersive learning process in the context of Health and Safety within Logistics Infrastructures. We shifted the focus from collaboration (as done in Second Life and similar environments) to individual goal-oriented training units, supported by emerging technology to increase the feeling of being present in the learning space. The emphasis is on authenticity and the unrestricted and unsupervised exploration of the space, while still supporting the learner with hidden guidance and formative feedback. The learner is challenged to find a solution, but has to exercise the same caution as would be necessary in the real world. nDiVE provides a preliminary framework suggesting an orchestration of hardware to experience the space, software to create the context, automated observation and feedback generation, and guidelines on how to create the authentic processes that define the learning experience.

The experiments showed the validity of the ‘carrot and stick’ approach. Scripted events, as well as placed objects, can be used to direct the learner in the right direction and provoke the anticipated behaviour that experts demonstrated in similar contexts. The underlying story with its various narratives offers the learner objectives without restricting the possible pathways to the target. The analytics of the learners’ tracks further supported a deeper understanding for educators in designing learning scenarios and identifying knowledge gaps.

The head-mounted display limits the collaborative learning experience as long as the environment does not allow multiple human-controlled avatars at the same time, as Second Life does. The same restrictions apply to trainers and educators, as observation and feedback are limited to a debriefing afterwards; any other communication during the learning process would reduce the immersion. Thus, we are extending the framework to log activities and changes of the environment and to reproduce the whole learning experience in a 3D cylinder space (a curved 3D display approx. 6 m wide, using glasses for the 3D view). This allows the projection of all activities similar to a movie, with rewind and forward options, as well as the choice of an appropriate perspective such as first person or surveillance camera. The external debriefing adds another layer of evaluation to the learning experience. The enclosed learning space supports the training by allowing the scenario to be redone after receiving formative feedback from the system; yet human trainers are still required to validate the final training outcome or answer in-depth questions.