
3.1 Introduction

3.1.1 The Vanishing Identity of the Traditional Societies

A crucial problem of contemporary traditional societies is the protection and continuity of their cultural identity. The pressure exerted during the last decades by urban societies on rural societies has led many villages of South Eastern Europe to lose a significant part of their identity.

Starting from the premise that traditional crafts are one of the defining features of the identity of rural societies, it follows that their disappearance under the pressure of modernization is particularly damaging to these societies.

3.1.2 A Brief History of Rural Romania

For centuries Romania was a country with a predominantly agricultural economy which, after World War II, was affected by a rapid process of industrialization and collectivization [13], changes which severely undermined the traditional rural society [52].

The communist regime introduced even more dramatic social changes: a new folklore and a new ideology were created, new forms and materials replaced the traditional ones, and the oral culture based on visual narratives, specific to the village mentality, was replaced by a culture of the text built on foreign models. This process continued at an accelerated pace during the last decades, with practically all the ancient traditions being forgotten within the span of two generations.

Vădastra, a village in Olt County, southern Romania, is a representative example that illustrates the crisis of the Romanian rural communities. For this reason it was chosen as the place for an attempt to revitalize part of the traditional customs and technologies, aiming to recreate, with the help of archaeological experiments, some of the ancient eco-technologies.

Initiated a decade ago, a series of research projects,Footnote 1 coordinated by Gheorghiu [18] from the National University of Arts Bucharest, succeeded in developing in Vădastra a small ceramics center inspired by ancient technologies. Aside from the initial scientific objectives, these projects also had strong educational goals, to be achieved by involving the young villagers in the process of recovering their cultural past [47].

The current research project conducted by Professor Gheorghiu at Vădastra—“The Maps of Time—Real communities-Virtual Worlds-Experimented Pasts” (Grant PN II IDEI) continues this trend, while attempting to transfer the educational content and programs resulting from experimental archaeology to the IT area, with a strong focus on e-learning.

3.1.3 The Maps of Time Project

Our research is part of the project “The Maps of Time”, Grant PN-II-ID-PCE-2011-3-0245, in which both traditional and modern e-learning technologies were developed. The latter focus not only on the identification, storage and transmission of data about traditional technologies, but also on the use of visual narratives about the ancient technologies for educational purposes, thus trying to revitalize a local custom dating back to the 18th century: that of painting visual narratives with moral and educational subjects on the façades of various secular and religious buildings (Figs. 3.1, 3.2).

Fig. 3.1

A narrative on the ceramics’ decoration

Fig. 3.2

A visual narrative written on the cultural community center’s façade

Consequently, our interest was focused on a paradigm based on multimedia visual narratives (also called hyper-story), implemented with the help of software agents, and designed to develop the cognitive capacities of young people, while simultaneously bringing an old tradition back to life.

After the successful experiment of revitalizing ceramic technologies in Vădastra,Footnote 2 we continued the process of recovering traditional crafts by focusing on local textile production, a field of craftsmanship famous in the region in the distant past [39]. Many of the old village women still practice the traditional craft, but this knowledge was not transmitted to the young generation, who are completely unaware of it. Therefore this subject was prioritized for our pedagogical activity in Vădastra.

The current chapter is structured as follows: (a) a section on related work, where related learning theories and implementation solutions are discussed, highlighting the differences between the latter and our solution; (b) a section describing the agent-based learning paradigm, including a functional description of the overall learning solution and its components; (c) a section on implementation details focused on the software agents’ functional role; (d) a section discussing the experimentation of different scenarios of application use, presenting the learners’ perspective on the e-learning solution; (e) a section on educational novelty and outcomes analysis; (f) conclusions and future work, as the final section of this chapter.

3.2 Related Work

3.2.1 Theoretical Basis of the Educational Applications’ Design

To deliver an educational message, teachers resort to modern pedagogical theories based on cognitive science and behavioral psychology. The educational content has to be developed with these theories in mind, through a process called instructional design.

According to the instructional design theory, instruction should be organized in increasing order of complexity for optimal learning [40].

Robert Gagné [41] created an instructional theory that is currently used in instructional design across different learning settings. One of his theories defines “curriculum as a sequence of content units arranged in such a way that the learning of each unit may be accomplished as a single act, provided the capabilities described by specified prior units (in the sequence) have already been mastered by the learner” [41]. In our case, this means that the correct learning of the fundamental stages of some of the traditional technologies which formed the identity of the Vădastra village (e.g. the textiles) represents a first stage of the learning process.

In [18] the author stresses that “learning software can be distinguished from pure information systems by the fact that its design is based on some conceptual models of teaching and learning.” For the design of our educational software we considered theories of learning that applied to our paradigm.

3.2.2 Hypermedia Learning Environments

In [49] the author defines multimedia as a “new medium and a new communication technology” and he mentions that the “justification for learning with multimedia is that aspect of learning and instruction which may be designated as enrichment of learning.” Schulmeister also considers multimedia “a second Gutenbergian revolution” [49]. In some implementations, multimedia systems are complementary to the traditional learning style, while in others they are used as standalone systems aiming to completely substitute for the professor’s presence.

A hypermedia environment contains all kinds of media combinations: text, graphics, sound, and video. “Hypermedia is the integration of a computer and multimedia to produce interactive, nonlinear hyper environments” [49]. A hypermedia system resembles the human way of thinking, based on connections [18]. Some authors consider that it contains “the non-linear chaining of information which in a strict sense must exist in at least one continuous and one discrete medium” [53]. Schulmeister cites [36] as the source that defined the term hyper-learning systems as “the combination of multimedia or hypermedia and learning; not a single device or process, but a universe of new technologies that both process and enhance intelligence.” Schulmeister also states that a multimedia system must be “reactive, proactive and reciprocal interactive”. It is important to remember that multimedia by itself cannot produce learning outcomes, as Clark emphasizes in [9]; these are the result of the underlying instructional design.

Reciprocal interaction is implemented in modern computerized systems, like virtual reality, in which “learner and system may reciprocally adapt to each other” [49]. We conclude that a multimedia system is a highly interactive system. In such complex informational systems different modalities and techniques are used to inter-connect the constitutive media, e.g. system-user interactivity, narrative structures, artificial intelligence or software agents, which ensure the “navigability, adaptivity, reactivity” of these systems [49].

Regarding modern cognitive theories and learning and teaching paradigms, multimedia learning systems support an autonomous, self-paced learning style and a constructivist learning paradigm, which is a form of active learning, or “learning by doing”. According to this model, learners develop their knowledge in a participative way, gaining practical experience in real situations or in environments resembling real ones [14]. Active learning is typically implemented with environments featuring some form of immersion. Hypermedia systems also support non-linear learning and try to recreate the spontaneity of the learning process. The learner is presented with a multitude of learning opportunities and subjects, from which he can select the desired ones based on individual preferences, level and educational necessity [14].

In our research project we have attempted to implement a modern paradigm of learning traditional technologies within their genuine historical contexts, based on the local tradition of visual narratives, an educational strategy tailored for children between the ages of 8 and 12 years old. We surmised that the transformation of simple visual narratives, describing the technical processes, into digital stories would produce an efficient tool of mobile and contextual learning for children.

Since the stories are “narratives of true or fictional events that intend to capture and involve learners actively” and have “a topography, and spatial and temporal dimensions” [43], they were ideal for our contextual approach. A special type of narrative is the hyper-story, based on multimedia content and able to develop “cognitive structures that determine tempo-spatial relationship and laterality in early age children” [43]. Through this description Sánchez and Lumbreras emphasize the importance of spatial and temporal features in the process of learning, or, in other words, the importance of the receptor's situation, an aspect which we took into consideration in our approach. By exploring the environment children learn in a process similar to play; this is why our paradigm could also be described by the term “learning-by-playing”.

3.2.3 Mobile Augmented Reality

As a personal learning style, mobile learning is defined as “the acquisition or modification of any knowledge and skill through using mobile technology, anywhere, anytime and results in the modification of behavior” [15]. Mobile learning also supports the “situated or context-based learning” model [1], which stresses that learning has to take place in “true learning contexts”, and contain “authentic activities and assessment” [12].

In order to implement a mobile-learning system with simple or complex hypermedia narrations, we considered Augmented Reality (AR) technology on mobile devices.

Augmented Reality is a computer technology which allows the dynamic overlay of multiple computer-generated content layers on a live camera stream, while tracking the user’s movements and view changes. AR is seen as supporting a new immersive visualization paradigm and a natural human-computer interaction. It is for this reason that AR technology is adopted in educational projects, to mediate an enhanced visual and cognitive perception, and to support several new learning styles: mobile, situated in context, ubiquitous and nomadic, social. The content may comprise texts, 2D images and graphics, 3D graphics and animations, audio and video files superimposed on the surrounding reality, captured by means of a video camera or optical systems (HMD or glasses). These augmentations have to be integrated naturally and in real time into the video stream representing the real scene, making the user perceive a new, cognitively augmented scene. This is achieved by means of complex and advanced techniques, such as real-time tracking and 3D registration. The technical definitions of the AR processes can be found in the reference paper of Azuma [2].

Some authors [40] consider that video information does not represent an augmentation of reality. When these augmentations are visualized outside the AR process, they are referred to as “actions” attached to AR, but when integrated in the real video stream using video-in-video techniques they are part of the AR process. The use of the video augmentations has great pedagogical value, and opens the way to integration between AR and multimedia systems, leading to a new form of hypermedia-learning system, the generic Mixed Reality (MR) system [11].

The AR/MR systems allow the augmentation of the user’s perception of the real world, as well as the use of existing visual and spatial abilities and an enhancement of the users’ interaction capacities [19]. Compared to MR systems, AR makes it possible to build context-sensitive applications. When the context is geographic, AR applications act as true information browsers. These kinds of AR applications present a non-linear navigation metaphor for content visualization and make use of remote data services. The delivery of augmented data is determined by certain “triggers”. The similarity with web applications is obvious, except that the data request is triggered not by the user, but by a geographic context. From this architecture we conclude that AR applications are strongly dependent on online connections, a feature which can affect their usability for learning purposes.

Consequently, our approach is to develop an AR learning tool as a native application for Android, i.e. to use the integrated hardware and software capabilities to store both the application and the content. The native application approach has the disadvantage of being restricted to specific mobile platforms.

The present AR technology allows a seamless integration with social networks by means of service mash-ups. In our approach we combined software agents with social network mash-ups.

Having thus described the AR processes’ capacity to augment a user’s understanding of reality in its context, we must now question whether, for an educational application, a simple visualization in an AR view is sufficient even if it is ideally implemented (as defined by Azuma [2]: 3D content overlaid in real time on the live video image). We consider that this is sufficient only for achieving a first cognitive level but not for a learning purpose.

That is why we proceeded to investigate technologies to be associated with AR in order to develop an educational application. Furthermore, since it targets a group of primary and secondary school children, this kind of application needs to include elements that engage this category of learners and offer a certain level of application control. Analyzing modern software engineering technologies, we considered software agents, which are discussed later in this chapter.

3.2.4 Agent-Based Approaches

The learning process is a dynamic one, traditionally mediated by the teacher. Modern models of e-learning are focused on a learner-centered approach and are pedagogically mediated by technology. Conrad and Donaldson [10] define several methods for creating engaging courses in e-learning environments.

Software agents as well as AI (Artificial Intelligence) are used in educational software to create intelligent applications, adaptive to the context of their use, to support this learning paradigm and also to capture the learner’s interest.

Narrative-based educational applications use software agents in the form of animated humanoid characters, designed to render the lesson more engaging, to compensate for the absence of the teacher, or to help students in different situations occurring during the educational process. The agents guide the learner through the learning process according to the pedagogic strategy.

In multimedia systems, agents serve as “personal metaphor guide (guides, agents, tutors) or historical personages” [49]; they make the connection between the learner and the educational system. An example is the one implemented by Sánchez and Lumbreras [44] and Sánchez et al. [43].

Most implementations of software agents are complex applications with elements of artificial intelligence, e.g. digital storytelling, collaborative and gaming applications. Software agents have been extensively used in Virtual Reality-based educational applications with visual interfaces.

In [4] the authors perform a brief review of software agent implementations in AR environments, as these have only recently been integrated within AR environments. An early AR application with humanoid agents is the ALIVE system [37]. In [8] a live video avatar of a real person is placed into a Mixed Reality setting and interacts with a digital storytelling system through body gestures and language commands. In [3], interaction techniques with virtual humans in Mixed Reality environments are experimented with; the virtual humans play the role of a collaborative game partner and of an assistant for prototyping machines. For [7] the agent is a virtual playmate assisting children in a natural storytelling play with real objects.

In [19] the authors review implementations of Mixed Reality applications which use the technologies of software agents for the control of the actions/tasks and also as a user interface metaphor.

Archeoguide [17] is an example of an outdoors AR application that uses X3D data format and runtime to describe the content and a dynamic runtime behavior, which contains elements of AI logic written in JavaScript. An X3D scene consists of three different layers: background video, 3D reconstructions and the user interface [17].

AMIRE is an authoring language developed within an EU IST Program, completed in 2004, dedicated to the efficient creation and modification of Mixed Reality (MR) applications, authoring metaphors, and generic design recommendations and procedures [20].

As we can see, state-of-the-art approaches exist for different educational purposes and learning settings. Of these, only [17] and [34] use software agents in mobile AR settings, similar to our approach. In [17] a proprietary AI solution was developed for an intelligent and adaptive touristic guide. In [19] open technologies were used for an educational application using visual agents which assist the learners. In our approach we used non-visual software agents as AI components in order to support a narrative learning application with both pre-determined and non-linear learning paths. For the implementation we chose both open source and commercial SDKs.

3.3 Description of the Agent-Based Learning Paradigm

3.3.1 Motivations for Present Work

We believe that e-learning can be a solution for the support of the preservation and continuity of many cultural traits of rural societies, such as textile technologies, ceramics technologies, as well as various other crafts.

In our case of vocational learning, the successful integration of new e-learning technologies is due to the application of a blended learning strategy, i.e. a combination of traditional classroom teaching (or workshops) and e-learning-specific technologies, such as interactive videoconferencing, virtual learning environments or mobile learning. This strategy is described by Schlosser and Burmeister [48] as using the “best of both worlds”.

To kick-start this pedagogical project of revitalizing the traditional textile technology, the local school was equipped with a Sony Bravia monitor, a CMU-BR100 video camera and an Internet connection for Skype. The first video conferences were conducted in the summer of 2012, with the courses delivered by the staff of the Textile Department of the National University of Arts Bucharest and attended by both village teachers and students. In the beginning, face-to-face hands-on lessons (consisting of presentations of techniques and individual work with groups of village students) were conducted using replicas of Roman looms.

In the subsequent stage the project set the basis for the development of a community of learning, which advanced the blended-learning approach by creating a platform for experimentation with the technologies, as well as a virtual and interactive e-learning system.

To support a new paradigm for informal learning, as a precursor to further vocational training, an original software application was designed for Android mobile devices (smartphones and Tablet PCs), integrating at least a GPS receiver and a rear video camera.

The application assists children as they progress through each lesson in a mobile real/virtual environment based on the Augmented Reality technology and the concept of geo-referenced Points of Interest (Figs. 3.3, 3.4). Thus, the application is dependent on a well-defined geographic context.

Fig. 3.3

The application and the list of POIs

Fig. 3.4

The mobile AR application

The educational content is structured in the form of hypermedia narrations (as individual lessons), consisting of walkthroughs within inter-connected multimedia content. Each lesson uses 3D reconstructions of the traditional objects and of the original environments in which these objects were used. The children select a specific digital narration according to their learning objectives. The whole application integrates several specialized functional modules, including a framework for software agents. The application was developed following the Android application model.

3.3.2 General Presentation of the E-Learning Solution

As outlined in Sect. 3.2.2, our paradigm transforms simple visual narratives describing the technical processes into digital hyper-stories [43, 44], an educational strategy tailored for children between the ages of 8 and 12 years old. By exploring the geographic and historic environment, children learn in a playful manner, searching for Points of Interest (POIs) that represent different learning stages; this is why our paradigm could also be described by the term “learning-by-playing”.

By choosing to implement our solution on mobile devices (smartphones and tablets), we promoted the mobile learning paradigm.

The programmatic power of software agent technologies was leveraged in order to implement a multi-agent system. One of these agents is the “learning agent”, whose main task is to mediate the learning process. The agent-based learning solution is described in the following sections.

The learning paradigm experimented with in our project consists of a hypermedia learning system in the context of an Augmented Reality-based environment. The software application provides a logical suite of stages for learning the traditional technologies (textiles, glassware), while leaving the children the freedom to choose a learning path, i.e. a specific lesson. By using our application, the children can see all the stages of a technology, both as a pure technique and as a technique in a cultural and historical context.

This educational scenario is addressed to children and contains elements to attract and persuade them to travel through a complete lesson. If this does not happen, i.e. a lesson is not completed, this status is persistently stored, so that at the next use the application can recommend resuming the lesson from where it left off.

For our paradigm we therefore needed to implement scenarios based on multimedia sequences and an application providing intelligent, adaptive behavior. This required methods for monitoring the learning process, in order to provide feedback on the application usage and a behavior adapted to the context of use. Instead of traditional AI programming, we chose to use software agents, due to their ability to incorporate both logic and communication mechanisms, which makes them adequate for designing very specialized functions.

3.3.3 Basic Concepts of Software Agents

In computer science, software agents are intelligent software modules which can act on behalf of the user, or of another software module, by means of agency. This represents “the authority to decide which, if any, action is appropriate” [38]. In the domain of Human-Computer Interaction (HCI), the agents are referred to as interface agents or user agents [35].

Software agents also represent a modern programming paradigm [50], Agent-Oriented Programming (AOP), different from the procedural or object-oriented ones, but close to the concepts of methods and functions.

There are several definitions of the software agents: “persistent software entity dedicated to a specific purpose” [51]; “self-contained program capable of controlling its own decision making and acting, based on its perception of its environment” [55]; “effective for organizing and programming software applications in general, starting from those programs that involve aspects related to reactivity, asynchronous interactions, concurrency, up to those involving different degrees of autonomy and intelligence” [45].

According to [55] a software agent is defined as a hardware-software entity having the following essential attributes:

  • Autonomy: agents can operate without the direct intervention of humans or others, and have control over actions and internal states;

  • Social ability: agents can interact with other agents, as well as with their users;

  • Reactivity: agents can perceive their environment and respond in a timely manner to changes;

  • Pro-activity: agents can take the initiative and display a behavior based on an objective.

This autonomy allows agents to accomplish tasks which are defined in terms of “behaviors”. Because of their autonomy, software agents can be instructed through high-level descriptions, so that the user is freed from complex low-level operations. With this programming model, difficult programming problems such as concurrency, security or context-sensitivity are solved [45], and predictive agents can also be developed.

There are several open source agent-based technologies on mobile devices, such as JADE [21], JaCa [22], Jadex Android [23], Agent Factory Micro Edition [24], 3APL [25].

In [45] JaCa-Android is presented as an integration of two existing agent programming technologies: Jason and CArtAgO. Jason is an agent programming language based on the AgentSpeak language, the most widely employed agent-based language, and also a platform [26] developed in Java. CArtAgO is a framework for programming and running agent-based applications [27]. We took this platform into consideration for our agent-based implementation because it is a high-level framework and provides an Android version.

Jason’s architecture is based on the BDI (Belief-Desire-Intention) computational model, which defines the behaviour of individual agents and offers a model for agent reasoning. The agents can be reactive (i.e. react to events) or pro-active (try to achieve a desired status). On the environmental side, CArtAgO uses the notion of artifact as an abstraction to define the structure and behaviour of environments, and the notion of workspace as a logical container of agents and artifacts. Artifacts represent the environment resources and tools that agents may dynamically instantiate, share and use [45]. The advantage of JaCa-Android is that it provides a high-level development framework based on abstractions that facilitate the development of agent-based mobile applications on a mobile platform [45].
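
For readers less familiar with the BDI terminology, the following minimal Java sketch (an illustration only, not Jason/AgentSpeak code, whose syntax is different) shows how a reasoning cycle matches an incoming event against a plan library, checks a plan’s context condition against the belief base and then executes the selected plan as an intention.

  import java.util.ArrayDeque;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Queue;
  import java.util.Set;

  // Minimal illustration of a BDI-style reasoning cycle (not Jason/AgentSpeak syntax):
  // the agent holds beliefs, reacts to events and selects an applicable plan to execute.
  public class BdiSketch {

      interface Plan {
          boolean isRelevant(String event);          // does the plan handle this event?
          boolean isApplicable(Set<String> beliefs); // is its context condition satisfied?
          void execute(Set<String> beliefs);         // the "intention": a sequence of actions
      }

      static class ShowPoisPlan implements Plan {
          public boolean isRelevant(String event) { return event.equals("user_entered_area"); }
          public boolean isApplicable(Set<String> beliefs) { return beliefs.contains("technology_selected"); }
          public void execute(Set<String> beliefs) {
              System.out.println("Displaying the three learning-level POIs");
              beliefs.add("pois_displayed");
          }
      }

      public static void main(String[] args) {
          Set<String> beliefs = new HashSet<>(List.of("technology_selected"));
          Queue<String> events = new ArrayDeque<>(List.of("user_entered_area"));
          List<Plan> planLibrary = List.of(new ShowPoisPlan());

          // One pass of the reasoning cycle: take an event, find a relevant and
          // applicable plan, and turn it into an intention by executing it.
          while (!events.isEmpty()) {
              String event = events.poll();
              for (Plan plan : planLibrary) {
                  if (plan.isRelevant(event) && plan.isApplicable(beliefs)) {
                      plan.execute(beliefs);
                      break;
                  }
              }
          }
      }
  }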

The advantage of the programmatic power of software agents is counterbalanced by the disadvantage of their complexity and steep learning curve. A high-level framework allows a software developer to concentrate on the design of the application functionalities, contributes to a lower development time and is less error prone. Nevertheless, there still remains the issue of understanding which types of software agents and which development framework are suitable for a given application, as well as the need to allocate time for evaluating the existing frameworks, since several are offered by the open-source community, at different stages of development.

3.3.4 Components of the Agent-Based Learning Paradigm

The paradigm we propose has the following structure:

  1. An informal learning-by-playing and mobile paradigm focused on skills and technology;

  2. A situation in context and information visualization using Augmented Reality technology;

  3. A digital narrative using hypermedia (i.e. linked multimedia content);

  4. Sensitivity to the context and usage, with the help of software agents, i.e. dynamic adaptation of the didactic content and of the application interface.

In the present chapter we will discuss only the learning of a single craft technology, that of traditional textiles. It is well known that textile production, like ceramics, was a conservative technology [5], remaining almost unchanged for millennia. Consequently, the methods of weaving documented in Vădastra village date back at least to Roman times. This rationale, combined with the fact that Vădastra, like many other villages in the region, was built on top of a Roman villa rustica, prompted us to design the lessons on textiles with historical references, thus allowing the children to experience a history lesson alongside the technological one.

As a result, the process of learning proposed by us is based on the concept of a Hypermedia Digital Storytelling Scenario (HDSS). A HDSS could be defined by the following formula:

HDSS = the dynamic content (the 3D architectural reconstructions + the reconstructed 3D technological objects) + the story of the technology + the performance of the experimentalist (Fig. 3.5).

Fig. 3.5

Hypermedia digital storytelling graph (UML diagram)
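
As a purely illustrative reading of this formula, the short Java sketch below (hypothetical class and field names, not part of the actual application) groups the three ingredients of an HDSS into a single data structure.

  import java.util.List;

  // Illustrative only: hypothetical types grouping the ingredients of an HDSS
  // (dynamic 3D content + the story of the technology + the recorded performance).
  public class HdssSketch {

      record DynamicContent(List<String> architecturalModels, List<String> objectModels) {}
      record HyperStory(DynamicContent content, String technologyStory, String performanceVideo) {}

      public static void main(String[] args) {
          DynamicContent content = new DynamicContent(
                  List.of("villa_rustica.obj"),                 // 3D architectural reconstruction
                  List.of("vertical_loom.obj", "spindle.obj")); // reconstructed 3D objects
          HyperStory textiles = new HyperStory(content,
                  "The story of Roman weaving technology",
                  "weaving_reenactment.mp4");                   // the experimentalist's performance
          System.out.println(textiles);
      }
  }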

The educational application designed for this purpose, named MapsofTime-LearningTool, offers the following learning levels, presented in order of their gradually increasing cognitive complexity.

  1. The first stage begins with the Contexts Digital Storytelling, or, in other words, the visual story of the place, which is a narrative about the Romans and their technologies, positioned via GPS within the local geography and presented in the form of video films with the re-enactments produced by the team of experimentalists of Vădastra. For this purpose a couple of horizontal and vertical Roman looms [6, 54] were reconstructed. To increase the veracity of the technological performance, the performers were dressed in Roman costumes and acted in a reconstructed villa rustica’s workshop and outdoors (Figs. 3.6, 3.7). Thus children were able to visualize the textile technology in its ancient context, a historic information layer enhanced by the performance of the contemporary technologists.

    Fig. 3.6

    3D reconstruction of a Roman villa rustica (Contexts Digital Storytelling)

    Fig. 3.7

    Artist showing weaving techniques (Contexts Digital Storytelling)

  2. The second stage is the Objects Digital Storytelling, in which the 3D reconstructions of the objects for weaving,Footnote 3 furniture (Figs. 3.8, 3.9) and interior design details were included. Children can play with the animated virtual reconstructions, understanding the meaning and function of the displayed objects.

    Fig. 3.8

    3D reconstructions of the objects (Objects Digital Storytelling)

    Fig. 3.9

    3D reconstruction of a Roman vertical loom (Objects Digital Storytelling)

  3. The third stage, the Technology Digital Storytelling, was designed for those who acquired a high level of technical knowledge, since it presented in detail the processual stages (chaînes-opératoires) of the technology studied (Fig. 3.10). The children truly interested in learning the technology can access this level of the hyper-story, made of video films describing the technical stages,Footnote 4 with images captured from both the observer’s and the performer’s perspectives. To contextualize these operations, some images had links to the first stage of the hyper-story.

    Fig. 3.10

    Textile technological gestures (Technology Digital Storytelling)

  4. The practical part of the hyper-story, the Craft Approach, took the form of workshops organized for the village children,Footnote 5 where they worked with the reconstructed Roman looms (Fig. 3.11). This was a good opportunity to develop their skills and discover real talents. At this stage too, children can access the first stage of the hyper-story.

    Fig. 3.11

    Hands-on lessons with children about weaving techniques using a vertical loom (Craft Approach)

In the hyper-story we designed, the child has limited decisional control; s/he can choose a story, for example the visual story of the place, but cannot intervene in the (pre-determined) order of the technological stages. On the other hand, the child has the freedom to combine the stages of learning in a real-augmented environment with the practical experimentation of the technology.

3.3.5 Description of the Software Agents

In order to support the concept of HDSS, we implemented the following categories of software agents (Fig. 3.12):

Fig. 3.12

The agents and a diagram of their inter-communication

  1. User-Profile Agent (UP-A)

The UP-A agent manages the user profile by name, associating it with the completion status of the current lesson in relation to the chosen technology (textiles, glass, ceramics). This agent delivers the welcome message and also transmits information to other agents (the learning agent, the user interface agent).

  2. AR Context Agent (AR-A)

This agent can be considered an information-filtering agent, making the application context sensitive. To support this, a geographic area was predefined in order to contextualize the learning solution. The area corresponds roughly to an archaeological location in Vădastra village. In this area, for each of the taught technologies, 3 geographic points of interest (Geo-POIs) were defined by their coordinates, representing the three learning levels (according to their description in Sect. 3.3.4).

The geographic context is given by the GPS, compass and accelerometer sensors of the mobile devices. The AR functionality and logic were embedded in the AR-A agent: it receives and interprets the information from the sensors, decides first whether the user is in the pre-defined area and, only in this case and according to the chosen technology, displays the three POIs. For each selected POI, the AR software overlays augmented information over the video stream and ensures the 3D registration and tracking processes.

The AR-A agent also manages the hypermedia links inside of each learning level and communicates with LE-A, the learning agent.

  3. User Interface Agent (UI-A)

The interface displays the POIs to which the narrative lessons are attached and allows a non-linear (at choice) navigation among different learning levels related to the technology in context (e.g. textile technology). The geographic points of interest (Geo-POIs) are displayed either on a Google map or in a list (Fig. 3.3), and have attached colored labels with information.

The UI-A agent adapts the interface according to the user’s profile, making use of the messages received from UP-A. For example, when a particular user returns to the application, the POIs representing the learning levels already completed by that user are displayed with a different background color.

  4. Learning Agent (LE-A)

This agent supervises the learning process: it receives messages from the AR-A agent regarding the completion status of a particular learning level, and transmits specific messages to the UP-A agent. When the child returns to the application, he is free to choose a learning level, but he also receives status information or a recommendation, produced through the collaboration of the AR-A, LE-A, UP-A and UI-A agents.

The application usage information monitored by LE-A can be used for further investigations, e.g. lesson completion rates.

  5. Social Agent (SL-A)

This agent implements the connections with social networks. The learners can share their experience by sending an image and text by email, to a Facebook or Twitter account. The SL-A agent integrates the underlying service mash-up, by implementing an OAuth authorization service [28].

The control of one learning level comprising several digital stories, i.e. the linking of the multimedia components in a learning scenario, occurs in a linear manner, in order to direct users through the thread of a story. The scenario follows the template:

  • The real environment is assigned the 3D reconstructions;

  • The 3D reconstructions are followed by presentation of 3D objects;

  • The 3D objects are followed by video presentations.

In the AR view of the application, the navigation through an individual learning level is achieved by means of interactions with the virtual objects, i.e. attached events which serve to advance to the next element of the story.
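
A minimal sketch of this linear template, assuming a simple tap-to-advance interaction (illustrative Java, not the actual AREL scenario code), could look as follows:

  // Illustrative sketch of the linear scenario template: each learning level advances
  // through its story elements in a fixed order when the user interacts with the
  // currently displayed augmentation.
  public class StoryWalkthrough {

      enum Stage { REAL_ENVIRONMENT, RECONSTRUCTION_3D, OBJECTS_3D, VIDEO, COMPLETED }

      private Stage current = Stage.REAL_ENVIRONMENT;

      // Called when the user taps the augmentation attached to the current stage.
      public Stage onAugmentationTapped() {
          switch (current) {
              case REAL_ENVIRONMENT -> current = Stage.RECONSTRUCTION_3D; // overlay the 3D reconstruction
              case RECONSTRUCTION_3D -> current = Stage.OBJECTS_3D;       // show the reconstructed 3D objects
              case OBJECTS_3D -> current = Stage.VIDEO;                   // play the video presentation
              case VIDEO -> current = Stage.COMPLETED;                    // level finished: notify LE-A
              case COMPLETED -> { /* nothing more to advance */ }
          }
          return current;
      }

      public boolean isCompleted() { return current == Stage.COMPLETED; }

      public static void main(String[] args) {
          StoryWalkthrough level = new StoryWalkthrough();
          while (!level.isCompleted()) {
              System.out.println("Advanced to: " + level.onAugmentationTapped());
          }
      }
  }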

3.4 Implementation Details

3.4.1 Software Integration Challenges

Our learning application is designed to combine several software technologies: AR, multimedia content, 3D content, software agents on the Android mobile platform and social media.

Software agent technology was chosen as a solution to provide specialized software components in order to manage the learning application as a whole. The existing AR platforms can manage the AR processes and the social media integration, but cannot provide support for managing a complex custom application with AI behavior.

From the software implementation point of view we had two choices, according to [45, 46]: either to port agent-based software technologies to mobile devices, or to use frameworks already developed for mobile devices. To integrate a software agent framework we needed to select an AR development framework, not an AR platform acting as a content management platform.

3.4.2 The Android Platform

The AR application was designed for Android smartphones and tablet PCs. The integration of different functional modules was possible by developing a native (device-specific) application on the Android platform. Android offers an open source software stack for mobile devices which encompasses the operating system (Linux-based), native libraries, a Java runtime (virtual machine), the Android application framework (SDK) and user applications. This open architecture allows for modularization and facilitates the development of complex applications.

The Android SDK is an object-oriented Java-based framework, consisting of a high-level set of classes, providing abstractions for developing mobile applications for Android-enabled mobile devices. The main components of the framework are:

  • Activity Manager: controls the life cycle of all Android applications;

  • Fragment Manager: manages user interface elements (e.g. a list of menu items);

  • Services: a service does not have a GUI and runs in the background for an indefinite period of time;

  • Content providers: manage specific applications’ data that can be shared with other applications. The data can reside in an Android SQLite database or in XML files;

  • Resource Manager: manages application resources, such as literal strings, images, XML files;

  • Location Manager: provides location information related to the device;

  • Notification Manager: allows applications to notify the user about different events.

The Android framework also uses low-level synchronization mechanisms in the case of complex applications with asynchronous events. This behaviour is similar to thread-based programming used in other programming languages. The agent programming model also uses asynchronous communication, but managed through an agent-based framework, which abstracts away the low-level platform communication implementation.
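
As a minimal illustration of the low-level mechanism that such a framework abstracts away, the following sketch (assuming only the standard android.os APIs) delivers a completion message from a worker thread to the main thread through a Handler; in the agent model the same exchange is expressed as a message between agents.

  import android.os.Handler;
  import android.os.Looper;
  import android.os.Message;

  // Minimal illustration (standard Android SDK only) of low-level asynchronous messaging
  // between threads, which agent frameworks such as JaCa-Android abstract away.
  public class AsyncMessagingSketch {

      public static final int MSG_LESSON_COMPLETED = 1;

      // Handler bound to the main (UI) thread; it reacts to messages posted by workers.
      private final Handler uiHandler = new Handler(Looper.getMainLooper()) {
          @Override
          public void handleMessage(Message msg) {
              if (msg.what == MSG_LESSON_COMPLETED) {
                  // e.g. update the POI billboard colour for the completed learning level
                  int completedLevel = msg.arg1;
                  System.out.println("Learning level completed: " + completedLevel);
              }
          }
      };

      // Simulates a background component (e.g. the AR view logic) reporting completion.
      public void reportCompletionFromWorker(final int level) {
          new Thread(() -> uiHandler.sendMessage(
                  uiHandler.obtainMessage(MSG_LESSON_COMPLETED, level, 0))).start();
      }
  }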

3.4.3 The Mobile Augmented Reality Application

For the development of a Mobile Augmented Reality application, three options are available:

  1. The first, and most common, is the use of a client-server architecture, which involves the AR platform, the application developer’s or a third-party server for content management, and client software residing on the mobile device, downloadable for different mobile platforms; in this case, content delivery depends on the internet connection;

  2. The second is to develop a native application for a specific mobile platform, package it with the content and install it on the mobile device;

  3. The third is a combination of the first two options: the native application and the content are installed on the mobile device, and both can be upgraded from time to time over an internet connection.

We chose to build an application following the third architecture model, i.e. a native application less dependent on internet connection bandwidth and availability, but using the connection for periodic application and multimedia content updates. The development of a native application enabled us to seamlessly integrate the software agent framework with the AR application and ensured optimal performance, but raised the complexity of the application design, implementation and testing.
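
A minimal sketch of the update check implied by this architecture (hypothetical server URL and version scheme, plain Java HttpURLConnection) is given below; when the device is offline, the locally packaged content is simply kept.

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;
  import java.net.HttpURLConnection;
  import java.net.URL;
  import java.nio.charset.StandardCharsets;

  // Illustrative sketch (hypothetical URL and version scheme, not the project's code):
  // content is stored locally and a lightweight check against a remote manifest decides
  // whether new multimedia packages should be downloaded when a connection is available.
  public class ContentUpdateCheck {

      static final String MANIFEST_URL = "https://example.org/mapsoftime/content-version.txt"; // hypothetical

      // Returns the version string published on the server, or null if unreachable.
      static String fetchRemoteVersion() {
          try {
              HttpURLConnection conn = (HttpURLConnection) new URL(MANIFEST_URL).openConnection();
              conn.setConnectTimeout(5000);
              conn.setReadTimeout(5000);
              try (BufferedReader in = new BufferedReader(
                      new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                  return in.readLine();
              }
          } catch (IOException offlineOrError) {
              return null; // no connection: keep using the locally stored content
          }
      }

      public static void main(String[] args) {
          String localVersion = "2013-04-01";            // version of the locally packaged content
          String remoteVersion = fetchRemoteVersion();
          if (remoteVersion != null && !remoteVersion.equals(localVersion)) {
              System.out.println("New content available: " + remoteVersion);
          } else {
              System.out.println("Local content is up to date or the device is offline");
          }
      }
  }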

Three commercial AR platforms were analyzed: Layar [30], Wikitude [31] and Junaio [32], with the final selection being Junaio from the Metaio company. The Metaio AR SDK offers more options for developing geographical AR applications using a modern, powerful and flexible programming model (Fig. 3.13), allows development of interactive scenario-based AR applications and supports multiple tracking models (optical and non-optical).

Fig. 3.13

The architecture of Metaio’s development tools [32]

Regarding the authoring of the Augmented Reality scenarios for each individual multimedia story, i.e. learning level, we used UML (Unified Modeling Language) diagrams (Fig. 3.5) and Metaio Creator [29] which is a visual authoring tool. The application was then deployed to a native Android application using the AREL export function.

Augmented Reality Experience Language (AREL) is the JavaScript binding of the Metaio SDK’s API, which uses a static XML content definition. AREL is a scripting-based programming model which can be employed for highly interactive Augmented Reality experiences, in conjunction with open web technologies such as XML and HTML5.

The Metaio SDK also allows the multimedia components for an individual scenario (learning level) to be defined in the XML file. After the application is loaded on the device, these components are stored locally, so the Internet connection is necessary only for updates.

The Android application model uses views for each information screen. Our application uses two main ones, MetaioSDKViewController and MetaioSDKViewActivity, which contain all the system calls required to control the augmented reality experience (e.g. opening the camera). These custom classes can be extended and overridden for custom application logic.

To delineate the geographic area of interest, we applied the geo-fencing technique: the area of interest is limited to the perimeter corresponding to the Roman villa archaeological site. Once the user exits this area, the application no longer provides points of interest.

The POIs are marked using the billboard concept, i.e. a pop-up with a background image, icon and text.
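
The following Java sketch (illustrative only, with hypothetical coordinates and without the Metaio-specific calls) summarizes the geo-fencing and filtering logic described above: the POIs, later rendered as billboards, are returned only when the device lies within the fence around the site and belong to the technology selected by the learner.

  import java.util.List;
  import java.util.stream.Collectors;

  // Illustrative sketch (not the project's actual code) of the geo-fencing decision:
  // check whether the device is inside the pre-defined area and, if so, keep only
  // the Geo-POIs belonging to the technology currently selected by the learner.
  public class GeoPoiFilter {

      record GeoPoi(String name, String technology, int learningLevel, double lat, double lon) {}

      static final double EARTH_RADIUS_M = 6_371_000.0;

      // Great-circle distance between two coordinates (haversine formula).
      static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
          double dLat = Math.toRadians(lat2 - lat1);
          double dLon = Math.toRadians(lon2 - lon1);
          double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                   + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
          return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
      }

      // POIs are shown only if the user is within the geo-fence around the site centre.
      static List<GeoPoi> visiblePois(double userLat, double userLon, String selectedTechnology,
                                      double siteLat, double siteLon, double fenceRadiusM,
                                      List<GeoPoi> allPois) {
          if (distanceMeters(userLat, userLon, siteLat, siteLon) > fenceRadiusM) {
              return List.of(); // outside the area: nothing is displayed
          }
          return allPois.stream()
                  .filter(p -> p.technology().equals(selectedTechnology))
                  .collect(Collectors.toList());
      }

      public static void main(String[] args) {
          // Hypothetical coordinates standing in for the three textile POIs of the site.
          List<GeoPoi> pois = List.of(
                  new GeoPoi("Contexts story", "textiles", 1, 43.8770, 24.3610),
                  new GeoPoi("Objects story", "textiles", 2, 43.8772, 24.3615),
                  new GeoPoi("Technology story", "textiles", 3, 43.8774, 24.3620));
          System.out.println(visiblePois(43.8771, 24.3612, "textiles", 43.8772, 24.3615, 300, pois));
      }
  }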

3.4.4 The Multimedia Educational Content

The studied context, i.e. the Roman villa rustica and its objects, was modeled in Autodesk 3ds MaxFootnote 6 and used in the application in the Wavefront OBJ and MD2 formats. The latter format allows animations, e.g. rotations, and the 3D objects contain geometry and textures packed in zip archives. The virtual tour through the courtyard was also implemented with the help of 3ds Max and contains renderings in MP4 compressed video format.

All the digital assets (images, videos, 3D models) were packaged in small files (under 5 MB), in accordance with the constraints imposed by the mobile devices and to allow efficient transfer over an internet connection.

Our multimedia, superimposed on an AR scene, is based on several digital assets: images, videos, 3D models. We authored the three multimedia hyper-stories using Metaio Creator which can implement an interactive multimedia scenario, e.g.: playing full screen (streamed) videos; showing/playing any content (2D, 3D, video, sound) in AR mode based on the supported tracking technologies; starting, stopping, looping animations, videos, sounds; moving, rotating, scaling any content (2D, 3D, video); automatic content adjustment to the tracking object. In order for the content to be 2D/3D registered on the real life target scenes, several manual calibrations were necessary.

3.4.5 Social Media Integration

In Android, interactions among components are managed with a messaging mechanism based on the concepts of intent and intent filter. An application can request the execution of a particular operation offered by another application or component by providing to the O.S. an intent with the information related to that operation. The O.S. handles this request by locating a proper component (e.g. a browser) able to manage that particular intent. The intents manageable by a component are defined by specifying a set of intent filters.

The connection of the learning application to web sites, email, Twitter and Facebook accounts is done using Android intents and is implemented using an OAuth authorization service, as user authentication is required to access these networks. This process is also called a service mash-up.
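
A minimal sketch of such a sharing action, using only the standard Android Intent API (the OAuth handshake itself is omitted, as it depends on the concrete service mash-up), might look as follows:

  import android.app.Activity;
  import android.content.Intent;
  import android.net.Uri;

  // Minimal sketch (standard Android APIs only) of sharing a text comment and an image
  // capture from the learning application; the OAuth authorization with the social
  // network itself is handled by the receiving application or the service mash-up.
  public class ShareHelper {

      // Builds and fires an ACTION_SEND intent; the O.S. locates a component
      // (email client, Facebook, Twitter app) whose intent filter matches it.
      public static void shareLessonCapture(Activity activity, Uri screenshotUri, String comment) {
          Intent sendIntent = new Intent(Intent.ACTION_SEND);
          sendIntent.setType("image/png");
          sendIntent.putExtra(Intent.EXTRA_TEXT, comment);
          sendIntent.putExtra(Intent.EXTRA_STREAM, screenshotUri);
          sendIntent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
          activity.startActivity(Intent.createChooser(sendIntent, "Share your lesson"));
      }
  }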

3.4.6 Software Agents Implementation

Our main concern in using software agents was to implement a mechanism supporting flexible behaviour of our application, which is designed for learning purposes. Instead of incorporating AI elements or other control logic directly, we considered the software agent paradigm on mobile platforms the more challenging approach. We designed software agents with specialized tasks to perform, capable of ensuring the overall functionality of the application and of providing assistance in the learning process. We mainly leveraged the communication mechanism among software agents, which we considered well suited to our task. We did not implement self-learning mechanisms or other advanced AI functions, but this can be done later using the existing software agent programming model.

Regarding the BDI (Belief-Desire-Intention) architecture, we focused on programming plans for the agents’ “intentions”. Intentions represent a deliberative state of the agent, which is translated into the execution of a plan, i.e. a sequence of actions performed by an agent to achieve one or more “intentions”.

For software agent programming we selected the JaCa-Android framework [55], with which we programmed several software agents characterized by a clear underlying logic and by communication channels among them.

Using JaCa we programmed Jason agents with the logic to execute the tasks required by our mobile learning application, in order to:

  1. Manage the overall application;

  2. Make the application adaptive to the usage scenarios;

  3. Implement an asynchronous communication mechanism among the different application components, i.e. agents having different roles and tasks, by programming a reactive agent behaviour (a sketch of this message-driven style is given below).
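
The sketch below (plain Java, not actual Jason/JaCa code) illustrates the reactive, message-driven style referred to in point 3: the AR agent posts a completion message and the learning agent reacts to it asynchronously.

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  // Illustrative sketch of reactive, message-driven behaviour among the application's
  // agents: AR-A posts a "level completed" message and LE-A reacts to it asynchronously.
  public class AgentMessagingSketch {

      record AgentMessage(String sender, String type, String payload) {}

      public static void main(String[] args) throws InterruptedException {
          BlockingQueue<AgentMessage> learningAgentMailbox = new LinkedBlockingQueue<>();

          // LE-A: a reactive agent loop that blocks until a message arrives, then acts on it.
          Thread learningAgent = new Thread(() -> {
              try {
                  while (true) {
                      AgentMessage msg = learningAgentMailbox.take();
                      if (msg.type().equals("LEVEL_COMPLETED")) {
                          System.out.println("LE-A: recording completion of " + msg.payload()
                                  + " and notifying UP-A / UI-A");
                      }
                  }
              } catch (InterruptedException stopped) {
                  Thread.currentThread().interrupt();
              }
          });
          learningAgent.setDaemon(true);
          learningAgent.start();

          // AR-A: after the last story element of a level is viewed, it informs LE-A.
          learningAgentMailbox.put(new AgentMessage("AR-A", "LEVEL_COMPLETED", "textiles level 1"));
          Thread.sleep(200); // give the daemon thread time to print before main exits
      }
  }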

Navigation in the hypermedia space is performed on two levels:

  1. A simple interaction with the digital augmentations in the AR view, which are handled by the Augmented Reality agent (AR-A) according to the scenario developed using the Metaio Creator software utility and further customization. This provides a predefined individual navigation through the multimedia story elements;

  2. Navigation supervised by the learning agent (LE-A), which detects the scenario completion status and informs the other agents about this condition; these can further assist and inform the learners about the status of their learning process.

The first level is based on UI events and on an XML list containing the digital assets associated with the Geo-POIs. The other level uses specific software agent communication and logic.

The learning agent (LE-A) receives a status message regarding lesson completion and, together with the User-Profile agent, can construct a learning profile of the current user and generate specific messages. For example, upon detecting that a user repeatedly does not complete a learning path, it will recommend him either to go through the entire lesson or to choose another learning level which suits him better.

The LE-A agent can also store data regarding the application usage in an Android SQLite internal database. This data can be extracted from the device at a later time for analysis of the learning process carried out with our application.
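
A minimal sketch of such usage logging, using the standard Android SQLite helper classes and a hypothetical table layout, could look as follows:

  import android.content.ContentValues;
  import android.content.Context;
  import android.database.sqlite.SQLiteDatabase;
  import android.database.sqlite.SQLiteOpenHelper;

  // Illustrative sketch (standard Android SQLite APIs, hypothetical table layout)
  // of how LE-A could persist lesson-completion events for later analysis.
  public class UsageLogDbHelper extends SQLiteOpenHelper {

      private static final String DB_NAME = "usage_log.db";
      private static final int DB_VERSION = 1;

      public UsageLogDbHelper(Context context) {
          super(context, DB_NAME, null, DB_VERSION);
      }

      @Override
      public void onCreate(SQLiteDatabase db) {
          db.execSQL("CREATE TABLE lesson_events ("
                  + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                  + "user_name TEXT, "
                  + "technology TEXT, "
                  + "learning_level INTEGER, "
                  + "completed INTEGER, "          // 1 = lesson completed, 0 = abandoned
                  + "event_time INTEGER)");        // Unix timestamp in milliseconds
      }

      @Override
      public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
          db.execSQL("DROP TABLE IF EXISTS lesson_events");
          onCreate(db);
      }

      // Called by LE-A when it receives a completion (or abandonment) message.
      public void logLessonEvent(String user, String technology, int level, boolean completed) {
          ContentValues values = new ContentValues();
          values.put("user_name", user);
          values.put("technology", technology);
          values.put("learning_level", level);
          values.put("completed", completed ? 1 : 0);
          values.put("event_time", System.currentTimeMillis());
          getWritableDatabase().insert("lesson_events", null, values);
      }
  }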

3.5 Application Experimentation

The children were involved in the testing phase of the application. They used Android smartphones and PC tablets (Fig. 3.14) provided by the research project.

Fig. 3.14

Children using the tablet PC version of the application

The start page of the AR application allows user registration and the selection of one of the technologies (at this application stage, textiles). The registration is optional; if it is skipped, the user profile agent (UP-A) manages the application using an anonymous user account.

The selection of the technology (i.e. textiles) acts as a filter for the displayed Points of Interest (Fig. 3.3), corresponding to the associated learning levels (L#1, L#2, L#3). A POI selection then triggers the start of the hypermedia story, consisting of linked multimedia augmentations over the AR view. The scenario completion is supervised by the learning agent (LE-A).

The children were presented with the usage scenario: upon opening the application and entering their names, they received a welcome message and a history of their use of the application. When approaching the areas of interest, in this case the Roman dwelling, the application displayed the geographic POIs, and the children could use radio buttons to select the Roman technologies (currently TEXTILES, with CERAMICS and METAL planned for future versions). When the children left the GPS-selected area, this information disappeared from the application’s display. Therefore, through a series of trials the children would situate themselves inside the perimeter of the Roman dwelling, this situational game being the first educational stage of the application. The children then chose a technology to learn about, for instance “TEXTILES”, using the touch-screen display. This triggered the display of the specific lessons in the form of billboards floating on the screen. In the label of each billboard, a short text described the lesson. The children chose one of the billboards and entered the corresponding pedagogical stage. Once a lesson was completely visualized, the colour of the billboard changed.

A detailed presentation of the use of the application follows these steps:

Lesson1/Level 1 (“the visual story of the place”):

The AR agent augments the image of the real context with a 3D reconstruction of a Roman villa rustica seen from the front; when the user clicks on this object, a film with the re-enactment of a technology starts. After the film ends, the user can return to the real image of the context and select another lesson.

Lesson 2/Level 2 (“the visual story of objects”) is composed of several stories.

Level 2a (“the visual story of objects—the loom”): The AR agent provides a virtual tour of the Roman villa and stops the tour in front of the 3D reconstruction of a loom. By clicking on this image, a 2D image of the loom and a series of texts with historic and technical explanations appear, followed by another 3D reconstruction, which can be rotated to see all the details. Once the presentation ends, the user can return to the real image of the context and choose another lesson.

Level 2b (“the visual story of objects—furniture”): The AR agent provides a virtual tour of the Roman villa and stops the tour in front of the masters’ room, equipped with furniture and daily objects. By clicking on this image, a 2D image of the room appears, followed by a 3D reconstruction of the room, which can be rotated to see all the details. Once the presentation ends, the user can return to the real image of the context and choose another lesson.

Lesson 3/Level 3 (“the technological digital storytelling”):

The AR agent completes the real context with the reconstruction of a Roman villa rustica seen from the front; by clicking on different objects, a series of video films on technological gestures are opened, which help the user to better understand the process studied. During each lesson the users can send an e-mail message with comments, or share an image capture using a Facebook or Twitter account.

Lesson 4/Level 4 (“the craft approach”):

This level represents the applied, practical part of the hyper-story. For example, the teaching of weaving techniques to children between the ages of 8 and 12 years old was intended to develop gesture coordination, a high degree of skill, attention and, last but not least, endurance. Although weaving is usually a solitary activity, teaching it to groups of children could develop their interaction and cooperation skills in solving technical problems. During the practical workshops a skilled group of children was identified, and contact with them was continued after the completion of the experiments, with the help of Skype and Facebook. The technological lessons were completed with information on the history of textiles, as well as with the presentation of unconventional techniques and new approaches to fiber art, during the videoconferences which followed the experimental campaign in the summer of 2012.

The children rapidly understood how to use the high-performance devices, but had to try the application repeatedly in order to understand our learning scenario. They provided us with useful suggestions regarding the design of the application (e.g. the user interface, the text messages) that would motivate them to continue using the application beyond the experimental phase.

3.6 Educational Novelty and Outcomes

In our project we have experimented with a “learning-by-playing” teaching method and with mobile learning in real and virtual contexts, as a personal style of learning. Some components of social learning were also integrated.

The learning application implements hypermedia narratives with AR and several software agents, which communicate with each other in order to create an adaptive and coherent application that serves our teaching strategy: the learner has the freedom to choose a learning path from three learning levels, while the overall learning process is guided to help the learner obtain coherent knowledge.

The customization of the application according to the user profile also captures the children’s interest. This was confirmed by a survey which asked the children and teachers about their preference between an application that is sensitive to the user profile and one that is not.

The application works in a geographical context, which requires the children to go outdoors and seek out the geographically defined historical areas. Positioning in the real context stimulates active learning through play-like discovery. The novelty of the project’s approach resides in the simultaneous presentation of information about the objects or technologies and about their culturally formative context.
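As an illustration of the positioning condition (not the project’s actual code), a simple proximity test against a Geo-POI could rely on Android’s Location API; the radius and coordinates used with the helper below are placeholder values.

```java
// Hedged sketch: checking whether the child is inside the geographically
// defined historical area around a Geo-POI. The radius passed in by the
// caller is an illustrative value, not one used in the project.
import android.location.Location;

public final class GeoPoiTrigger {
    private GeoPoiTrigger() {}

    public static boolean isInsidePoiArea(Location current,
                                          double poiLatitude,
                                          double poiLongitude,
                                          float radiusMeters) {
        Location poi = new Location("poi");
        poi.setLatitude(poiLatitude);
        poi.setLongitude(poiLongitude);
        // distanceTo() returns the distance in meters between the two positions.
        return current.distanceTo(poi) <= radiusMeters;
    }
}
```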

We tested the application on small groups of children, who were first shown the whole application and then given the mobile devices to use it during several campaigns lasting 2–3 weeks; the last such campaign took place in April 2013. This approach takes into account the fact that learning is a slow process. Together with the school principal and the fiber artists, we periodically verified the changes in the children’s levels of knowledge and skills.

As observed during the experiments, the mobile Augmented Reality application for educational purposes had a demonstrably positive impact on children and young audiences, in both formal and informal educational settings. This is due to the contextual, user-centric information and to the direct connection with genuine learning contexts, in this case both the real-life surroundings and the virtual reconstructions. Applications of this type are examples of blended learning solutions which, in conjunction with traditional methods, can be efficient learning tools. Our learning paradigm helped both children and teachers, the latter gaining a better understanding of where to focus their teaching and how to evaluate the learning outcomes.

We also evaluated these changes during the practical workshops (“the craft approach”) with a group of school children from the Vădastra School. Following the first test period, the school children showed that they had assimilated the specific terminology: loom frame, warp, weft, shed, heddle rod. They had also learned the difference between a plain weave, a kilim weave and a knotted weave, and between different types of weaver’s knots. The problems encountered (not only in this age group, but in any age group) were noticing and understanding the difference between sheds, picking up the right threads by hand, and keeping a straight border. Corrections were made through successive demonstrations accompanied by explanations. Following the first workshop, a group of school children with the skill and the desire to master weaving techniques was identified.

The next experiments introduced the children to different weaving technologies (prehistoric, Roman and modern). This was followed by a tour of the experimental site, during which the pupils learned about and worked with each technology. In the prehistoric house, during a series of demonstrations, they displayed their understanding of the mechanism of weaving and of the various weaving stages. In explaining the techniques and tools, the teacher drew on information the pupils had already acquired in the previous experiment. The pupils discovered that the few major differences between the two-bar loom, with which they had worked before, and the backstrap loom were the tension system, which uses the weaver’s body and a fixed point, the maximum width of the fabric, the presence of heddle rods, and auxiliary tools such as the weaver’s sword.

The educational exercises carried out with the children from the Vădastra village school demonstrated that this method of “teaching in virtual contexts” has a strong influence on the younger generation. Whether this method can become a successful means of preserving part of the identity of traditional societies in the third millennium will only be fully appreciated over a longer follow-up period, but we have already obtained positive results.

3.7 Conclusions and Future Work

The teaching experiments that we conducted over the course of one year, both with traditional methods and with an educational application, allowed us to test a paradigm with original pedagogical elements, designed for learning traditional technologies in their historical context. We believe that extending the perception of the physical world with virtual AR elements can support cognitive and educational processes.

In this chapter we presented the significant results of our research regarding the implementation of the MapsofTime-LearningTool application, focusing on the role of the software agents in creating narrative e-learning tools and on the evaluation of the actual educational outcomes. We conceptualized a learning paradigm and applied it using advanced software technologies, i.e. software agents and Augmented Reality on Android mobile devices. As far as possible, open and flexible software technologies were used. The prototype has the architecture and functionalities of a native application for the Google Android mobile platform, and combines different metaphors and technologies: geographic Points of Interest (Geo-POIs) for discovering and accessing the different learning levels; digital stories based on inter-connected multimedia content (text, graphics, video); software agents; and social media.
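The chapter does not publish the application’s data model; a minimal, assumed structure linking a Geo-POI to its learning levels and their multimedia items might look like the following Java sketch, with all names chosen for illustration.

```java
// Assumed data model (not taken from the project's source code) showing how a
// Geo-POI could bundle its learning levels and their multimedia content
// (text, graphics, video, 3D models) for the AR view.
import java.util.ArrayList;
import java.util.List;

public class GeoPoi {
    public enum MediaType { TEXT, IMAGE, VIDEO, MODEL_3D }

    public static class MediaItem {
        public final MediaType type;
        public final String resourcePath; // e.g. a file or URL bundled with the app
        public MediaItem(MediaType type, String resourcePath) {
            this.type = type;
            this.resourcePath = resourcePath;
        }
    }

    public static class LearningLevel {
        public final String id; // e.g. "level2a_loom"
        public final List<MediaItem> story = new ArrayList<>();
        public LearningLevel(String id) { this.id = id; }
    }

    public final String name;
    public final double latitude;
    public final double longitude;
    public final List<LearningLevel> levels = new ArrayList<>();

    public GeoPoi(String name, double latitude, double longitude) {
        this.name = name;
        this.latitude = latitude;
        this.longitude = longitude;
    }
}
```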

Navigation through the multimedia space of a learning level is performed by simple interaction with the virtual objects in the AR view. The overall learning process is supervised by a learning agent. The software agents’ tasks and the agent-based environment controlling the communication among them were programmed using the JaCa-Android framework, which provides a high level of abstraction and facilitates application development. These agents have different roles within our application: monitoring and storing the completion status of each learning level for each registered user; assisting learners in using the lessons in a consistent manner, based on the history of the application usage; and providing the application with a general degree of autonomy.
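The agents’ source code is not included in the chapter. As a hedged illustration of how shared state is typically exposed to agents in JaCa-Android, a small CArtAgO artifact could hold the completion status of the learning levels as observable properties; the class and property names below are our own and do not reflect the project’s actual implementation.

```java
// Hedged sketch of a CArtAgO artifact (the artifact abstraction used by the
// JaCa-Android framework) that keeps the completion status of the learning
// levels as observable properties, so the learning agent can react to changes.
import cartago.Artifact;
import cartago.OPERATION;
import cartago.ObsProperty;

public class LearningProgressArtifact extends Artifact {

    void init() {
        // One observable property per learning level, initially not completed.
        defineObsProperty("completed_level1", false);
        defineObsProperty("completed_level2", false);
        defineObsProperty("completed_level3", false);
    }

    // Invoked (e.g. by the monitoring agent) when the user finishes a level.
    // levelProperty is assumed to be one of the names defined in init();
    // the change is automatically perceived by all agents observing the artifact.
    @OPERATION
    void markLevelCompleted(String levelProperty) {
        ObsProperty prop = getObsProperty(levelProperty);
        prop.updateValue(true);
    }
}
```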

The AR application was designed to create a mixed reality environment in which what matters is the user’s positioning within the defined geographic area, not their ability to locate a particular geographical point. The AR application by itself could not produce significant learning outcomes. This is why the integration of software agents into an Augmented Reality application is justified for implementing complex educational applications: the integrated AI control can actively engage and support children in the learning process and provide a coherent pedagogical structure. This kind of application can support the blended learning paradigm, which is based on finding learning environments and media complementary to those of the traditional classroom. The agent-based paradigm is also very well suited to a non-linear learning style, which has recognized cognitive benefits.

The learning paradigm and the digital narratives are original, combining technical video lessons with artistic performance videos, object annotations and simple animations. The application and the multimedia lessons will not, by themselves, bring the children to the level of skill required to manufacture traditional objects, as this requires years of practice, but they can help them understand how such objects were produced in the past. Our modern learning paradigm is intended to be used together with traditional face-to-face learning sessions.

As future developments we will extend the application to other traditional technologies (ceramics, glassware) and historical periods (e.g. prehistory). The agent functionalities will be further developed with predictive functions, e.g. providing recommendations based on the user’s learning preferences. An Android SQLite database will be employed to store historical usage data, to be further analyzed by means of Business Intelligence (BI) tools in order to offer valuable information to the school community. These data could also help us improve the application and prototype new functions, in order to promote the adoption of this kind of e-learning solution.
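As a sketch of this planned functionality (the actual schema is not specified in the chapter), a small SQLiteOpenHelper could record usage events for later export and analysis; the database, table and column names are assumptions.

```java
// Hedged sketch, not the project's schema: a small SQLiteOpenHelper that
// records usage events (user, lesson, action, timestamp) so they can later be
// exported and analyzed with BI tools. Table and column names are assumptions.
import android.content.ContentValues;
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class UsageLogDbHelper extends SQLiteOpenHelper {
    private static final String DB_NAME = "usage_log.db";
    private static final int DB_VERSION = 1;

    public UsageLogDbHelper(Context context) {
        super(context, DB_NAME, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE usage_event ("
                + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                + "user_id TEXT, "
                + "lesson_id TEXT, "
                + "action TEXT, "
                + "timestamp INTEGER)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS usage_event");
        onCreate(db);
    }

    // Store one usage event; these rows are the raw data for later analysis.
    public void logEvent(String userId, String lessonId, String action) {
        ContentValues values = new ContentValues();
        values.put("user_id", userId);
        values.put("lesson_id", lessonId);
        values.put("action", action);
        values.put("timestamp", System.currentTimeMillis());
        getWritableDatabase().insert("usage_event", null, values);
    }
}
```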

The user base will be extended by testing the application in other schools.