1 Introduction

Augmented Reality (AR) is the real-time combination of the real world with virtual elements, viewed through a camera. This emerging technology has already been applied in industrial fields such as production and maintenance, with several benefits, e.g. reduced time to locate and perform a task, improvement of the learning process and an increase in overall efficiency [1, 2].

In education, similar approaches have been taken to improve the comprehension of abstract concepts, such as electromagnetic fields, and to enhance the learning process, making education more interactive and appealing to students [3, 4].

One of the main innovations of the ICT FP7 project E2LP [5] is an Augmented Reality Interface (ARI) that detects different electronic boards and superimposes relevant information over their components, also serving as a guide through laboratory exercises [6].

This document presents the main features of the ARI, the needs it addresses, and its development and integration into the E2LP platform.

2 Objectives

The main objective of the ARI is to provide students and teachers in electronics laboratories with a support tool that offers useful information about the boards and exercises through a user-friendly interface based on Augmented Reality technology. Sub-objectives are the scalability and adaptability of the tool: making it easy to introduce further content and features into the system, and allowing its future expansion beyond the scope of the project.

From the educational point of view, as part of the general scope of the E2LP project, the ARI has been designed according to the task taxonomy established within the project [7], providing a wider range of AR utilities for Basic Exercises and serving as a support tool for the remaining exercise types, i.e. Problems and Projects (see Table 1).

Table 1 Classification of augmented reality actions according to task taxonomy

3 ARI: The System

The ARI system (Fig. 1) consists of:

Fig. 1 Augmented reality interface overview (left) and magnifying glass bottom view (right)

  • An articulated arm with a touchscreen and a webcam attached.

  • A multi-feedback pointer.

  • A mini-PC integrating the logic of the system.

The schema in Fig. 2 shows the main connections and data exchange between the elements of the ARI.

Fig. 2 Functional diagram of the ARI

The mini-PC receives the camera frames and the position data from the magnifying glass's sensors and the tactile pointer. It processes this information against its database and displays the augmented data on the touchscreen.

Users interact with the touchscreen to select components, exercises and scenarios.

4 ARI Components: Software

The software of the ARI is divided into two parts: the tracking software that allows augmented content to be displayed, and the user interfaces for interacting with the system and introducing new exercises.

4.1 AR Tracking Software

The tracking software of the ARI has been developed using OpenCV [8], an open-source computer vision library that provides all the capabilities required for image processing and tracking [9]. This library is fully compatible with OpenGL [10], the open graphics library used to create the augmented reality layer, displaying the required information over the video stream layer; OpenGL was chosen for its standardization and multiplatform nature.

The tracking task has been divided into two parts: a main process based on markerless image tracking of the board, and a secondary marker-based tracking in which small patterns (Fig. 3) are placed on the main E2LP board.

Fig. 3 The chosen markers for the E2LP board

4.1.1 Markerless Tracking

Image-based or markerless tracking allows the detection of real elements such as electronic boards and is more robust than marker-based tracking against partial occlusion, i.e. the object to be detected does not always have to be in full view of the camera. Thus, this method has been selected as the main tracking system.

Image-based tracking is performed by searching for characteristic points (key points) or features in the images, in this case using corner-based feature detector algorithms. OpenCV offers a set of such algorithms in its features2d module.

The main algorithm selected for this task is ORB (Oriented FAST and Rotated BRIEF) [11]. This algorithm is implemented in OpenCV and is invariant to image rotation, allowing the detection of partially rotated pictures. This feature is essential for the project because it eliminates problems arising from not knowing the initial orientation of the board relative to the camera.

For the tracking process, two different datasets (groups of pictures to be detected) have been defined: one containing the image of the whole board and another one containing the four main quadrants and two of the lateral views of the board.

The reason for this division is the use of the magnifying glass: when users point at the whole board, only one image of it is required. However, if they want a closer look, only part of the board will be in view, requiring a smaller section of the board to compare against (Fig. 4).

Fig. 4 Partial detection of the board on a close-up. MMC component highlighted in AR

Once the key points of the image have been extracted, BRISK (Binary Robust Invariant Scalable Keypoints) [12] is used to compute the descriptor vector for each of them.

The process of finding frame-to-frame correspondences can be formulated as the search for the nearest neighbour in one set of descriptors for every element of another set; this is called the "matching" procedure. OpenCV offers two main algorithms for descriptor matching: the brute-force matcher and the FLANN-based matcher. In this case, the former has been chosen due to its better performance with ORB.

To improve the results (i.e. remove false matches), the KNN (K Nearest Neighbour) algorithm is applied first, assessing whether a detected match is likely to be correct based on its neighbouring candidates, followed by RANSAC (Random Sample Consensus).
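
As an illustration of this pipeline (ORB key points, BRISK descriptors, brute-force matching, KNN filtering and RANSAC), a minimal sketch using OpenCV's Python bindings could look as follows; the feature count, the 0.75 ratio threshold and the 5-pixel RANSAC reprojection error are illustrative assumptions, not the project's actual settings:

    # Minimal sketch: ORB key points, BRISK descriptors, brute-force matching,
    # KNN ratio filtering and RANSAC homography estimation (OpenCV, Python).
    import cv2
    import numpy as np

    def match_board(reference, frame):
        orb = cv2.ORB_create(nfeatures=1000)        # rotation-invariant key points
        kp_ref = orb.detect(reference, None)
        kp_frm = orb.detect(frame, None)

        brisk = cv2.BRISK_create()                  # descriptors on the ORB key points
        kp_ref, des_ref = brisk.compute(reference, kp_ref)
        kp_frm, des_frm = brisk.compute(frame, kp_frm)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # brute force, binary descriptors
        pairs = matcher.knnMatch(des_ref, des_frm, k=2)

        # KNN filtering: keep a match only if it is clearly better than the
        # second-best candidate for the same descriptor.
        good = [m for m, n in (p for p in pairs if len(p) == 2)
                if m.distance < 0.75 * n.distance]
        if len(good) < 4:
            return None                             # too few matches for a homography

        src = np.float32([kp_ref[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_frm[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # RANSAC rejects the remaining outliers while fitting the homography
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H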

Figures 5 and 6 present the matches calculated without and with KNN filtering, respectively. Figure 7 illustrates the further improvement once RANSAC is applied.

Fig. 5 Matches without KNN filtering. Image from the webcam (left) and saved original (right)

Fig. 6 Matches with KNN filtering. Image from the webcam (left) and saved original (right)

Fig. 7 Matches after RANSAC algorithm. Image from the webcam (left) and saved original (right)

In order to improve the stability of the system and obtain a better superposition of the AR elements, another stage has been added to the processing pipeline. This stage refines the homography obtained in the previous step by running a second matching pass on a rotated copy of the camera frame, obtaining the results shown in Fig. 8.
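
The exact form of this second pass is not detailed here; one hypothetical way to realise it, assuming the refinement simply re-matches against a rotated copy of the camera frame and folds the rotation back into the result, is:

    # Hypothetical second-pass refinement: match again on a rotated copy of
    # the frame, then compose the inverse rotation into the homography.
    import cv2
    import numpy as np

    def refine_homography(reference, frame, match_board, angle_deg=90.0):
        h, w = frame.shape[:2]
        R2x3 = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, 1.0)
        rotated = cv2.warpAffine(frame, R2x3, (w, h))
        H_rot = match_board(reference, rotated)     # same pipeline as above
        if H_rot is None:
            return None
        R = np.vstack([R2x3, [0.0, 0.0, 1.0]])      # lift the 2x3 rotation to 3x3
        return np.linalg.inv(R) @ H_rot             # map back to the original frame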

Fig. 8 Final matches after the second processing. Image from the webcam (left) and saved original (right)

Although this process adds computational load to the overall system, it provides greater stability and accuracy in the estimation of the pose matrix for the AR components, fixing the overlays in place and avoiding fluctuations.

4.1.2 Marker-Based Tracking

This kind of tracking is based on the recognition of patterns (called markers) with very specific characteristics (see Fig. 3): they are square and black and white (or two colours with high contrast between them).

In this project, marker-based tracking has been implemented as a secondary tracking system for specific cases where the image-based recognition is not fast or reliable enough, and as a secondary calibration method.

The software detection of these markers is accomplished as follows (a minimal Python sketch follows the list):

  • Conversion to greyscale of the frames coming from the camera.

  • Binarization (i.e. black and white conversion) according to a threshold.

  • Detection of up to four markers simultaneously in the resulting image.

  • Removal of erroneous matches.

  • Decoding of the marker patterns.

  • Calculation of the rotation/translation matrices of the markers.
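
The first steps of this list can be sketched as follows, assuming OpenCV; the fixed threshold and minimum-area filter are placeholders, and the decoding and pose-estimation steps are left out:

    # Sketch of greyscale conversion, binarization and detection of square
    # marker candidates (convex quadrilaterals) in a camera frame.
    import cv2

    def find_marker_candidates(frame, threshold=100, min_area=400):
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(grey, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                       cv2.CHAIN_APPROX_SIMPLE)
        candidates = []
        for c in contours:
            approx = cv2.approxPolyDP(c, 0.05 * cv2.arcLength(c, True), True)
            if (len(approx) == 4 and cv2.isContourConvex(approx)
                    and cv2.contourArea(approx) > min_area):
                candidates.append(approx)
        # Erroneous matches are then removed, markers decoded, and the
        # rotation/translation matrices computed (e.g. with cv2.solvePnP).
        return candidates[:4]                       # at most four markers at once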

4.2 Users’ Interfaces

Taking into account the objectives set at the beginning of this document, the user interface has been developed following mobile-device design conventions. It consists of a main window where the augmented reality is displayed, a lateral bar on the left side for navigation, and a message bar on top of the main window that displays information and instructions. When users first run the software they enter the "Board Discovery Mode" directly. In this mode, pointing the camera at the electronic board displays the names of the main components on top of them (Fig. 9).

Fig. 9 User interface in board discovery mode

When a component is touched (either by clicking with the mouse on a regular screen or tapping with a finger on the touchscreen), its datasheet is automatically loaded on the screen. This way, students do not need to go through books or PDFs to locate its technical specifications.

For the beta software, five exercises from three different subjects of Computer Engineering have been chosen and developed, and are currently being tested within the universities of the consortium.

Following the classification in Table 1, each exercise provides, apart from the usual theory and requirements PDFs, augmented information such as the components to be used during the exercise (these components appear highlighted, similarly to Fig. 9, and touching them displays an explanation of how they work) or instructions to follow (step-by-step indications of which interfaces and components have to be connected). In addition, an extra exercise, completely different from the others and called "E2LP Board Discovery Exercise", has also been implemented. This exercise serves as a first contact with the E2LP main board: through three levels of increasing difficulty, students have to demonstrate their knowledge of the main components of the board by pressing them when asked.

4.2.1 Teacher Interface

Parallel to the user interface, a second interface called the "Teacher Interface" has been designed and developed to allow educators to add their own exercises with AR features to the system.

The interface has been designed following the same usability objectives as the user interface, in order to avoid a long learning process. Accessible through the user interface, it allows the creation, editing, deletion and import/export of exercises (Fig. 10). The last feature is intended to allow an easy exchange of exercises between different universities or centres.

Fig. 10 Teacher interface. General window (left) and AR display option (right)

5 ARI Components: Hardware

In order to make the ARI fully interactive, its hardware components (the articulated arm's tracking system and the pointer's vibrotactile system) must be connected to the main software system and interface.

5.1 Articulated Arm

The AR software communicates with the arm through a serial protocol, which allows it to remove a certain level of uncertainty: using the geometric model of the arm (Fig. 11), it can delimit the group of images to be compared. In this way the process is accelerated and the overall system is more robust.
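
The serial protocol itself is not specified here; a purely hypothetical sketch of how the arm pose could be read and used to narrow the image dataset (port name, baud rate and message format are all assumptions) might be:

    # Hypothetical: read the arm's joint angles over a serial link and use them
    # to choose which image dataset to match against (see Sect. 4.1.1).
    import serial                                   # pyserial

    def read_arm_pose(port="/dev/ttyUSB0", baud=115200):
        with serial.Serial(port, baud, timeout=1) as link:
            line = link.readline().decode("ascii").strip()
            return [float(v) for v in line.split(",")]  # assumed "a,b,c" degrees

    def select_dataset(pose, close_up_threshold=45.0):
        # If the arm is lowered towards the board, only part of it is in view,
        # so match the quadrant/lateral images instead of the whole-board image.
        return "quadrants" if pose[1] > close_up_threshold else "whole_board"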

Fig. 11 Geometric model of the articulated arm

In addition, once the board has been detected, the system stops processing frames, which saves resources for other tasks and also allows users to interact with the board (e.g. connecting cables or using the tactile pointer) without losing the AR elements from the screen (Fig. 12).

Fig. 12 Improvement of stability of the AR system by adding the use of the arm

To accomplish this purpose, the following hardware development process has been followed:

Once the mathematical model of the arm had been identified, a sensing strategy was implemented. The criterion of choice is a compromise between the least expensive solution, the most robust one and the one easiest to integrate without modifying the mechanical structure. The best compromise identified is the use of MEMS inclinometer sensors. However, because an inclinometer measures angles with respect to the gravity axis, the angular position of an axis collinear with gravity, such as the vertical articulation of the arm, must be measured with another sensor.

To overcome this limitation, a specific absolute magnetic angular sensor has been implemented, as shown in Figs. 13, 14 and 15. The PCB developed for this purpose allows the integration of the sensor inside the base of the articulated arm, and also collects data from the inclinometers over an SPI bus using a low-cost 8-bit microcontroller.

Fig. 13 Magnet fixed on a spring

Fig. 14 Sensor working principle

Fig. 15 Integration inside the arm base

5.2 Tactile Pointer

The tactile pointer acts as a component selector, allowing users to see the datasheet information of the selected component just as they would through the touchscreen (Fig. 16).

Fig. 16 Some of the components are accessible with both the touchscreen and the pointer

The technical principle of this approach consists in generating a magnetic field under each point of interest on the E2LP board and detecting it with a magnetometer embedded inside the interactive pen, as shown in Fig. 17.

Fig. 17 Working principle

Figure 18 shows the MAG-ID board placed under the E2LP board inside the blue box, together with a zoomed-in picture of one inductor printed on the two-layer MAG-ID PCB, which generates a constant magnetic field. The MAG-ID PCB is composed of 31 printed inductors or "magnetic tags" with different shapes depending on the size of the targeted component.

Fig. 18 The "MAG-ID" detection board

The dissipated power reaches 0.75 W, allowing the MAG-ID board to remain USB self-powered compliant. However, this approach requires the magnetic tags to be switched on one by one, with a magnetometer data acquisition at each tag commutation. Considering a magnetometer refresh rate of 160 Hz, checking all 31 magnetic tags brings the overall refresh rate of the tracking system below 5 Hz (160 Hz spread across 31 sequential acquisitions gives at most about 5.2 Hz, reduced further by the commutation overhead), which is acceptable for a human-machine pointing application.
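
A hypothetical scanning loop consistent with this description (switch_tag and read_field stand in for the unspecified USB protocol) could look like:

    # Hypothetical MAG-ID scan: energize each of the 31 printed inductors in
    # turn and sample the pen's magnetometer once per commutation.
    import time

    N_TAGS = 31
    MAG_RATE_HZ = 160.0

    def scan_tags(switch_tag, read_field, threshold=1.0):
        # Return the index of the tag nearest the pen tip, or None.
        best_tag, best_field = None, threshold
        for tag in range(N_TAGS):
            switch_tag(tag)                         # one tag active at a time
            time.sleep(1.0 / MAG_RATE_HZ)           # wait for one sample
            field = read_field()                    # field magnitude at the pen
            if field > best_field:
                best_tag, best_field = tag, field
        return best_tag                             # full scan: ~31/160 s per pass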

Similar to the articulated arm, the AR software receives over USB the component selected with the tactile pointer and displays its information on the screen (Fig. 19).

Fig. 19 Tactile pointer launching information of the touched component

5.3 Tactile Feedback Accessory

Besides supplying the localization of the zone of interest for the augmented reality process, the role of the haptic pen is to "make the invisible tangible" through innovative interaction metaphors. The objective of this system is to allow students to "feel" the physical characteristics that describe an electronic circuit, such as the frequency, the nature of a signal (analogue or digital) or the intensity of the flowing current.

This approach, based on a tangible object, is meant to encourage students to explore an invisible world and thus to become aware, through experimentation, of the function of the various components that compose an electronics board. To carry out this function, the same pen used for localization purposes includes three electro-magnetic actuators with different vibration ranges (Fig. 20).

Fig. 20 Actuators inside the tactile pen

Figure 21 shows the seven categories used to develop the tactile metaphors. The motivation for this classification is to help students memorize concepts through the sense of touch.

Fig. 21 Classification used for the tactile metaphors implementation

This classification corresponds to the different families of functions that can be found on a typical electronic board. The objective of the tactile accessory is then to create seven vibrotactile metaphor patterns that are easy to distinguish without ambiguity.

Figure 22 shows an example of one vibrotactile pattern implementation.

Fig. 22 Example of metaphor with three actuators
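
The pattern encoding is not described in detail; a loose sketch of how one such metaphor might be driven across the three actuators (drive_actuator is a placeholder for the real driver) is:

    # Hypothetical vibrotactile metaphor: a timed sequence of (actuator,
    # frequency, duration) steps played on the pen's three actuators.
    import time

    EXAMPLE_METAPHOR = [(0, 150, 0.1), (1, 150, 0.1), (2, 150, 0.1)]

    def play_metaphor(pattern, drive_actuator):
        for actuator, freq_hz, duration_s in pattern:
            drive_actuator(actuator, freq_hz)       # start vibrating
            time.sleep(duration_s)
            drive_actuator(actuator, 0)             # stop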

The vibrotactile feedback appears to be well perceived by users, thanks to the wide range of vibration frequencies and amplitudes provided by the combination of vibrotactile actuators. The next important step is a user evaluation to check whether the metaphors are relevant and effectively help students memorize electronics concepts more easily than before.

6 Conclusions and Scalability of ARI

The augmented reality interface described in this document is an interactive and user-friendly system aimed at helping students discover and use the E2LP boards through augmented reality and vibrotactile capabilities.

At the present moment, apart from the E2LP main board, the ARI also includes the detection of two E2LP extension boards.

However, the versatility of the ARI's tracking system would allow the inclusion of new boards developed outside the scope of this project. This capability supports the growth of the E2LP system once the project finishes, and also opens the door to students creating their own extension boards and presenting them to the rest of the class or to the teacher through the AR interface, as a new, more collaborative and interactive way of learning.