1 Introduction

Over the last three decades, augmented reality (AR) has been adopted in many industrial fields, and its practical effectiveness has been demonstrated especially in maintenance, assembly, training, and collaborative operations.

In particular, AR technologies have been appreciated for their capability to enhance human perception of the environment by overlaying additional computer-generated visual information onto the user's view through specific devices, such as smartphones, head-mounted displays, and projectors [1].

Thanks to these capabilities, AR is considered one of the main enabling digital industrial technologies expected to address the wide scope of challenges posed by Industry 4.0 and to facilitate the digitization of the manufacturing sector. Among the various enabling digital technologies, AR represents a promising innovation accelerator that will support workers [2] and bring smart factories to a higher level of efficiency by speeding up the entire production chain.

As mentioned above, AR has been applied effectively in a large variety of fields, but its potential is still under-exploited and a number of areas could benefit from this technology. Among them, the oil and gas sector needs a structured way of tracking and recording the design changes and improvements that usually occur during production and, in particular, during tubing installations. The complexity of piping networks and systems often requires modifications and improvements to be made on-site, resulting in numerous design changes, especially in the first productions of small-series, made-to-order products. These design changes consist of a series of adjustments and improvements performed by qualified factory workers to minimize the number of bends, reduce overall dimensions, streamline pipe routes, and increase the visibility and manoeuvrability space for easier assembly of the components. At the moment, this sector lacks structured and reliable methods and tools that satisfy this need and support workers in tracking and preserving information about these design changes. This entails a significant increase in production cycle time and causes slowdowns and complications in maintenance activities because of the project variations.

On the basis of these considerations, this paper presents an AR tool that provides support at the workplace to easily detect and collect design variations by augmenting the virtual 3D models, as defined in the project plan, on the actual design. The proposed tool runs on a consumer smartphone and adopts hybrid tracking techniques to accurately track the position and orientation of the user and register the augmentations accordingly. In particular, the user interacts with the superimposed virtual models by adjusting their graphic appearance, to facilitate the comparison between them and the real object, and by adding 3D annotations at the points where design discrepancies have been detected. In addition to the 3D annotations, the proposed AR tool allows users to collect images and automatically send all these data to the technical office in order to prevent loss of information. A field study has been carried out with end-users to evaluate the tool's acceptance by means of usability studies, based on objective and subjective metrics, and personal interviews. Experimental results show that the proposed AR tool provides medium-to-high levels of usability and has been positively received by all the participants involved in the study.

The paper is organized as follows: Section 2 describes prior work on the use of AR technologies to detect design discrepancies in the industrial field; Section 3 describes the research aim and the context of the application; the software and hardware architecture and the user interaction of the developed AR tool are detailed in Section 4; the user studies are described in Section 5 and their outcomes are presented in Section 6; finally, Section 7 summarizes the research.

2 Related work

In the last three decades, a plethora of AR-based systems and tools have been proposed to support workers directly at the workplace. Most of these systems focus on the adoption of AR technologies as an assistive instrument for providing instructions. In this manner, users can keep focusing on the task domain without continually switching their attention between the task and a separate manual [3]. Virtual assistance is, in fact, one of the nine design elements most representative of Industry 4.0 [4] and has been realized in a variety of technical and technological AR-based solutions. The first application dates back to 1992 [5], where an HMD device was adopted to show workers drilling distances and positions. Afterward, other HMD-based [6,7,8] and display-based [9] solutions were proposed to support industrial processes by means of AR technology. To augment information directly in the worker's field of view, various studies have investigated the potential offered by projector technologies [10,11,12,13]. Mobile phones [14] and tablet-based systems [15,16,17] have also been used for providing AR instructions. Although AR is used extensively for virtual assistance applications, this technology can address almost every aspect of a product life cycle [18,19,20,21].

Regarding the adoption of AR for discrepancy checking between real parts and their associated construction data, only a few systems have been proposed. Furthermore, these systems cannot perform a real-time check [22] because they are encapsulated in CAD viewing software [23]; in addition, these solutions offer limited manoeuvrability and precision, due to their dimensions and low-resolution cameras [24].

The proposed AR tool overcomes these limitations because it runs on a handheld device that can be easily adopted by operators at the workplace, and it uses hybrid tracking techniques to perform robust real-time superimposition and registration of virtual objects onto real objects in a real environment.

3 Research aim

The research presented in this paper aims to support and simplify the work carried out by operators at the workplace and, furthermore, to formalize the communication among all the actors involved in the process in order to ensure efficient information exchange and minimize misunderstandings and loss of information.

As mentioned in Section 1, the research focuses on a real case study, observed in the oil and gas sector, related to tubing installation and maintenance activities that usually require modifications and improvements to be made on-site. The need for design variations is especially evident during the first productions of small-series, made-to-order products. Because of the required customization level and the unexpected issues that arise directly at the worksite, the design defined by the technical office usually requires modifications and changes, consisting of a series of adjustments and improvements performed by qualified factory workers in order to minimize the number of bends, reduce overall dimensions, streamline pipe routes, and increase the visibility and manoeuvrability space for easier assembly of the components.

At the moment, the technical office defines and prepares the design of the various products and shares this information, together with the 3D models, with the other sectors and plants via a product data management (PDM) system. The person in charge of the inspection or production of the specific product uses the data shared by the technical office to prepare technical drawings that are provided to the workers. The workers then use the technical drawings as guidelines to perform on-site assembly and maintenance activities but, as mentioned above, they may intentionally make design variations and adjustments. When this happens, the workers take notes of the design changes and give them to the person in charge of the inspection, who completes the notes with pictures of the product, as built by the workers, and returns all these data via email to the technical office. This paper-based practice is neither efficient nor robust, and its weaknesses can entail a significant increase in production cycle time and cause slowdowns and complications in maintenance activities. In fact, information and notes may be lost or may not be exhaustive because they are incomplete, partial, or incorrect. In any case, it is not possible for the technical office to precisely determine, from 2D sources such as technical drawings and pictures, the position in 3D space of the design changes.

Taking into account the opinions, suggestions, and issues reported by the personnel concerned, an AR tool has been developed to make the existing paper-based approach more efficient and reliable.

As depicted in Fig. 1, the AR tool can be adopted by both workers and inspectors to easily detect and record digital annotations of the design changes. In particular, the AR tool provides an augmented view of the final product by accurately superimposing on it the 3D models, as designed by the technical office. In this manner, the user can record data about the spatial position, typology, and dimensions of the design changes, as well as pictures of the augmented view and of the final design as it is. All these data are then automatically sent via email by the AR tool to the technical office. Furthermore, storing this information on the PDM allows users to reload annotations taken in previous work sessions in order to check, modify, and accept them, or to continue taking annotations.

Fig. 1 Workflow process with the introduction of the AR tool

4 The AR tool

The proposed AR tool consists of an application developed with the Google Project Tango Development Kit [25]. This software development technology, introduced by Google™ in 2014, was preferred to other solutions, such as the Vuforia Augmented Reality SDK [26], because it offers hybrid tracking techniques that combine vision-based and sensor-based methods to calculate the device's motion and orientation in 3D space in real time.

In particular, the application takes advantage of both marker-based and natural feature–based techniques and combines them with a sensor fusion technique that uses the various sensors equipped on the device (motion tracking camera, 3D depth sensor, accelerometer, ambient light sensor, barometer, compass, GPS, gyroscope) to remember areas it has travelled through and localize the user within those areas with an accuracy of a few centimetres.

4.1 Software and hardware architecture

In recent years, Tango technology has been enabled on a small number of consumer smartphones and tablets. The proposed AR tool runs on a Lenovo Phab 2 Pro smartphone [27], an Android device equipped with a Qualcomm Snapdragon 652 (1.80 GHz) processor, a 16-MP camera, and integrated depth and motion tracking sensors (accelerometer, digital compass, gyroscope, proximity sensor, ambient light sensor). In general, an AR system contains four hardware components: a computer, a display device, a tracking device, and an input device [28]. In this case, however, since all computations are carried out on the device itself, no external hardware components are required for data input and processing.

Figure 2 depicts the software architecture of the AR application. It consists of six different modules, programmed in Unity 3D, each dedicated to one or more specific operations.

Fig. 2 Software architecture

A real-world live video is fed as input from the camera of the smartphone to the AR module, which overlays the virtual 3D models on it. In particular, the AR module extracts all the image frames to perform three main functionalities implemented by means of the Tango SDK: marker detection, motion tracking, and video overlay.

The marker detection capability is exploited to recognize the presence of a marker in the real-world scenario and then automatically superimpose the virtual objects on the real ones. In particular, once the QR marker is sensed by the device, the AR module calculates its relative position and orientation and assigns to it a local reference system. In this way, it is possible to align the 3D models' reference system with that of the marker and then perform a precise superimposition of the virtual objects on the corresponding real ones.
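
As a minimal sketch of this alignment step, the following Unity C# fragment assumes a hypothetical callback that receives the detected marker pose already expressed in world coordinates (in the actual tool, this pose comes from the Tango marker detection API) and anchors the root of the imported 3D models to it:

```csharp
using UnityEngine;

// Minimal sketch of the marker alignment step. The callback signature is
// hypothetical: it stands in for the Tango marker detection result, which
// here is assumed to be already converted into Unity world coordinates.
public class MarkerAnchor : MonoBehaviour
{
    [SerializeField] private Transform model3D;   // root of the imported 3D models

    // Called once when the QR marker is first detected.
    public void OnMarkerDetected(Vector3 markerPosition, Quaternion markerRotation)
    {
        // Align the models' reference system with the marker's local frame:
        // the models are assumed to be exported with their origin at the
        // marker position defined at the design stage.
        model3D.SetPositionAndRotation(markerPosition, markerRotation);
        model3D.gameObject.SetActive(true);   // show the augmentation
    }
}
```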

To this end, the virtual models are imported and elaborated by the 3D model importer module. This module has been programmed to overcome a limitation of the Unity architecture, which cannot import 3D models with more than 65,000 vertices. When this threshold is exceeded, the 3D model importer module splits the model into sub-meshes and builds a hierarchical structure in order to re-create the entire model when requested. Once the import process has ended, the meshes and their associated information are saved by the data management module into the 3D models' database. The Offset module, developed by exploiting the built-in functionalities of the MonoBehaviour class of the Unity engine, edits the position, orientation, and scale of the 3D models augmented on the scene.
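
The following sketch illustrates one way to implement this splitting strategy; the class and method names are hypothetical, and details such as normals, UVs, and materials are omitted for brevity:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch of the importer's splitting strategy: triangles of an
// oversized mesh are redistributed into sub-meshes parented under a single
// root GameObject, so the hierarchy can re-create the entire model on request.
public static class MeshSplitter
{
    private const int MaxVertices = 65000;   // threshold reported in the paper

    public static GameObject Split(string name, Vector3[] vertices, int[] triangles)
    {
        var root = new GameObject(name);      // hierarchy root
        for (int first = 0; first < triangles.Length; )
        {
            var map = new Dictionary<int, int>();   // old index -> new index
            var subVerts = new List<Vector3>();
            var subTris = new List<int>();
            // Add whole triangles until the sub-mesh approaches the limit.
            while (first < triangles.Length && subVerts.Count + 3 <= MaxVertices)
            {
                for (int k = 0; k < 3; k++)
                {
                    int old = triangles[first + k];
                    if (!map.TryGetValue(old, out int idx))
                    {
                        idx = subVerts.Count;
                        subVerts.Add(vertices[old]);
                        map[old] = idx;
                    }
                    subTris.Add(idx);
                }
                first += 3;
            }
            var piece = new GameObject("sub-mesh", typeof(MeshFilter), typeof(MeshRenderer));
            piece.transform.SetParent(root.transform, false);
            piece.GetComponent<MeshFilter>().mesh = new Mesh
            {
                vertices = subVerts.ToArray(),
                triangles = subTris.ToArray()
            };
        }
        return root;
    }
}
```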

Once the marker is detected and the virtual models are automatically overlaid on the real-world objects, the AR module takes advantage of the visual-inertial odometry algorithm, embedded in the Tango device, to perform consistent motion tracking. In particular, the motion tracking capability is implemented by combining the information provided by the inertial sensors and the camera of the device to track its movement and orientation through 3D space. In addition to the inertial sensors (gyroscope and accelerometer), the algorithm uses a dedicated wide-angle motion tracking camera whose visual information allows rotation and linear acceleration to be estimated with high precision.

The colour mode module allows the user to customize the graphic properties of the 3D models in terms of colour and transparency level. In particular, this module sets parent/child relations among the 3D models and applies a script to each model to control its material properties and change the rendering mode, from opaque to transparent, by operating on the alpha channel.
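
A sketch of such a per-model script is shown below, assuming the Unity Standard shader; the blend-mode settings follow the usual recipe for switching a Standard-shader material to alpha blending:

```csharp
using UnityEngine;

// Sketch of the per-model script applied by the colour mode module,
// assuming the Unity Standard shader.
public class ColourMode : MonoBehaviour
{
    public void SetColour(Color colour)
    {
        GetComponent<Renderer>().material.color = colour;
    }

    public void SetTransparency(float alpha)   // 0 = invisible, 1 = opaque
    {
        Material m = GetComponent<Renderer>().material;
        // Switch the Standard shader from opaque to alpha-blended rendering.
        m.SetOverrideTag("RenderType", "Transparent");
        m.SetInt("_SrcBlend", (int)UnityEngine.Rendering.BlendMode.SrcAlpha);
        m.SetInt("_DstBlend", (int)UnityEngine.Rendering.BlendMode.OneMinusSrcAlpha);
        m.SetInt("_ZWrite", 0);
        m.DisableKeyword("_ALPHATEST_ON");
        m.EnableKeyword("_ALPHABLEND_ON");
        m.renderQueue = (int)UnityEngine.Rendering.RenderQueue.Transparent;

        // Operate on the alpha channel of the material colour.
        Color c = m.color;
        c.a = Mathf.Clamp01(alpha);
        m.color = c;
    }
}
```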

The 3D note module is dedicated to the creation and editing of 3D annotations. It adopts raycast and collision functions to recognize the user's touch and detect contacts between the ray originated by the touch and the 3D models in the scene. In particular, contacts are detected by means of mesh collider components associated with each 3D model displayed on the screen. The 3D position of the contact is used to automatically generate a 3D sphere that draws the user's attention to the presence of a 3D annotation; the sphere is created through Unity's geometric primitive functions. For each 3D note, a set of data consisting of the typology and spatial position of the annotation, together with the textual content introduced by the user, is automatically saved into a local database. These data handling operations are performed by the data management module, which is responsible for the reading and writing operations carried out between the modules and the two databases. In particular, the data management module has been implemented with the SQLite library, a relational database management system that was preferred to other database management systems because of its effective compatibility with the Android operating system.
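
A minimal sketch of this touch-driven note placement, using Unity's raycasting API and primitive creation, could look as follows; the persistence step to the SQLite database is only hinted at in a comment:

```csharp
using UnityEngine;

// Sketch of the 3D note placement logic: a ray cast from the touch point is
// tested against the mesh colliders attached to each 3D model, and a sphere
// primitive is created at the contact point to signal the presence of a note.
public class NotePlacer : MonoBehaviour
{
    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        if (Physics.Raycast(ray, out RaycastHit hit))   // requires MeshColliders
        {
            GameObject marker = GameObject.CreatePrimitive(PrimitiveType.Sphere);
            marker.transform.position = hit.point;                // anchor at the contact
            marker.transform.localScale = Vector3.one * 0.05f;    // user-defined size
            // The note's typology, 3D position, and text would then be written
            // to the local SQLite database by the data management module.
        }
    }
}
```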

Regarding data sharing, all the information relative to the 3D annotations, organized and exported into a CSV file, and the pictures taken through the device's camera are automatically attached to the email sent by the AR tool to the technical office. The mail service has been implemented by means of the .NET Framework according to the SMTP (Simple Mail Transfer Protocol) standard for electronic mail transmission and lets users set custom SMTP server parameters.
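
A sketch of this mail service, built on the .NET SmtpClient with user-supplied SMTP parameters (all names below are illustrative), might be:

```csharp
using System.Net;
using System.Net.Mail;

// Sketch of the mail service: the CSV export of the 3D annotations and the
// pictures taken in the session are attached and sent over SMTP using the
// parameters configured by the user in the tool's control panel.
public static class MailService
{
    public static void SendReport(string host, int port, string user, string password,
                                  string to, string csvPath, string[] picturePaths)
    {
        var message = new MailMessage(user, to)
        {
            Subject = "Design change report",
            Body = "3D annotations and pictures collected in the work session."
        };
        message.Attachments.Add(new Attachment(csvPath));
        foreach (string path in picturePaths)
            message.Attachments.Add(new Attachment(path));

        using (var client = new SmtpClient(host, port))
        {
            client.Credentials = new NetworkCredential(user, password);
            client.EnableSsl = true;
            client.Send(message);   // synchronous SMTP transmission
        }
    }
}
```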

The custom user interface has been implemented through the Unity UI system. All the UIs present controls that allow users to interact with the virtual 3D models and manage the data flow.

It is worth noting that the AR tool has been specifically developed for the Tango platform which, since March 2018, is no longer supported by Google in favour of the ARCore development kit [29]. Similarly, Apple has announced the ARKit framework [30], which offers capabilities comparable to those of ARCore. Even though these SDKs enable reliable and robust hybrid tracking techniques, at the moment they do not make it possible to use markers to define ex ante a fixed reference point for the virtual models.

4.2 User interfaces

The proposed AR tool allows users to easily identify design changes and take 3D annotations by superimposing the virtual 3D models, defined in the design stage, on the actual design. Furthermore, as detailed in Section 3, the proposed tool formalizes and streamlines the data exchange and communications among the various actors engaged in the workflow process. To better illustrate the main functionalities provided by the AR tool, this section focuses on a case study consisting of an air handling unit (AHU) on which the user verifies the presence of design discrepancies between the actual tube routes, as realized by the workers, and the 3D models designed by the technical office.

To perform the augmented visualization, the tool employs a visual marker that sets the position and orientation of the virtual 3D models. As depicted in Fig. 3, a marker (a black and white two-dimensional barcode printed on paper) has been placed on the lower-left edge of a large metal box of the AHU. The user can then launch the AR application. When the AR tool starts, the first screen offers the user a set of options to create a new session, recall a saved work session, or access a control panel for setting network connection parameters. The image in Fig. 3 shows a screenshot of the AR tool when creating a new working session. Most of the screen is dedicated to the live video stream from the camera of the smartphone; a menu bar is displayed horizontally across the bottom of the screen, while a battery status indicator is shown in the top-right corner. The menu allows the user to interact with the AR tool by selecting, through direct manipulation, from a list of nine buttons. To guide the proper use of the tool, the command buttons have been placed in the menu according to the logical and chronological sequence of operations to be performed by the user.

Fig. 3 Menu bar and live video of the real scene streamed from the camera of the device

The scene acquired by the camera has a resolution of 1440 × 2560 pixels. By clicking on the first button on the left, the device camera frames the real surroundings while the application searches for a predefined marker. When the tool detects and recognizes the marker, the virtual model is immediately and automatically displayed on the screen with a specific position and orientation (Fig. 4).

Fig. 4 3D models, in blue colour, augmented on the real scenario

The second button is an optional feature that, where necessary, can be used to accurately position the virtual model on the physical one. By clicking on this control button, a menu is displayed on the right side of the screen (Fig. 5) with two sets of text fields that allow the user either to define a new position for the augmented 3D models or to translate them with respect to the reference system anchored to the marker. In particular, the user can specify new coordinates along the X, Y, and Z axes or apply user-defined offsets along one or more axes. This feature is very useful when the marker must be placed in a point that is not easily accessible, or for very large components where small positioning errors of the marker may entail large alignment inaccuracies of the 3D model.
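
A minimal sketch of these two fine-positioning commands, assuming the marker's transform is used as the reference system, might be:

```csharp
using UnityEngine;

// Sketch of the Offset module's fine-positioning commands: the model can be
// given new coordinates in the marker's reference system or translated by
// user-defined offsets along one or more of the marker's axes.
public class OffsetModule : MonoBehaviour
{
    [SerializeField] private Transform marker;    // reference system anchored to the marker
    [SerializeField] private Transform model3D;

    // "Define new position" command: coordinates relative to the marker.
    public void SetPosition(float x, float y, float z)
    {
        model3D.position = marker.TransformPoint(new Vector3(x, y, z));
    }

    // "Translate" command: offsets along the marker's axes.
    public void Translate(float dx, float dy, float dz)
    {
        model3D.position += marker.TransformVector(new Vector3(dx, dy, dz));
    }
}
```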

Fig. 5 Menu bar with main commands, in the bottom centre of the screen, and menu for fine positioning operations on the right

Once the virtual components are correctly augmented on the physical product, the user can customize the visual style of each 3D model in order to simplify the detection of design discrepancies. In fact, as depicted in Fig. 5, the default setting provides a shaded display of the augmented virtual models in which it is not possible to discern the various components of the assembly. By selecting the third control button, counting from the left, a checkbox menu is displayed on the left side of the screen (Fig. 6). This menu lists the components of the virtual assembly of the AHU, which in this case consists of two sub-assemblies: the large metal box, containing the fan and filter compartments and the heating and cooling elements, and the pipes. The user can select one item at a time in order to hide the component by clicking on the toggle button, change its colour by means of a colour palette, and set its transparency level through a slider displayed on the right side of the screen.

Fig. 6 Menu for the setting of the visual style properties of the virtual models

When the user has finished customizing the visual style properties of the virtual models, he/she can add 3D notes simply by selecting the point of the 3D model at which to anchor the annotation. Once the desired point is selected, a window pops up in the foreground of the interface, providing commands to add a 3D annotation to the virtual model. In particular, the user fills in a text field recording the information and details about the specific design change, specifies the category of the annotation from a dropdown list, and defines the size of a 3D sphere that identifies the placement of the note on the virtual model. A predefined colour, indicative of the presence of a note, is automatically assigned to each sphere depending on its category. Figure 7 shows the box and pipe components represented with different colours, to clearly differentiate them, and with a semi-transparent visualization that eases the comparison between the virtual and the actual pipes and the detection of design variations. Furthermore, Fig. 7 depicts three different 3D notes added to the virtual models where design discrepancies have been detected.

Fig. 7 Augmented 3D models and annotations

In addition to these main functionalities, enabled by means of the first three command buttons of the menu bar, each of the other six buttons provides a specific interaction. In particular, the fourth button is a toggle that hides or shows all the 3D annotations at the same time. The two following buttons allow the user to take pictures of the design changes and to email the data collected in the work session. 3D notes and screenshots are, in fact, automatically saved and stored in a database that can be reloaded on request. Users can thus load annotations taken in previous work sessions in order to check, modify, and accept them, or simply to continue the tubing installation activities, which usually require more than 1 day of work.

The last two command buttons respectively display a help guide on the main functions of the tool and end the current work session.

As detailed in the previous section, the AR tool requires no external hardware other than a Tango-enabled smartphone. Thanks to its compact dimensions, the handheld device lends itself well to use in different industrial contexts. In fact, the user can easily hold the device with one hand and interact with the dominant one (Fig. 8).

Fig. 8 User while interacting with the AR tool

5 Materials and methods

The AR tool has been developed according to a user-centred design (UCD) approach [31] in which end-users have been involved from the initial design stage, for the definition of the first design concepts, to the final prototyping and assessment stages, in order to obtain immediate recognition of users' needs, behaviour, and satisfaction. The UCD process improves the user experience and yields an easy-to-use interaction system that can be understood and interpreted by a large variety of audiences, even technologically naïve users. In accordance with the UCD recommendations, the proposed AR tool has been evaluated by means of user studies carried out with representative users on a real case study at the BHGE Nuovo Pignone plant located in Vibo Valentia (Italy).

It is worth pointing out that these user studies were undertaken following the positive results of an exploratory laboratory study conducted to evaluate the potential of the proposed AR tool compared with the traditional instruments used in this application field, i.e., technical drawings and CAD systems. That study compared user performance in a design discrepancy check on a test sample specifically designed to simulate a simple real case. The experimental findings showed that the proposed AR tool achieves results similar to the traditional instruments in terms of effectiveness and very encouraging results in terms of efficiency [32].

Differently from the laboratory study, the field experimentation with end-users detailed in this paper had two main purposes: to assess the usability of the AR tool by means of objective and subjective metrics, and to collect the personal opinions of end-users in order to understand whether they are inclined to adopt AR-based instruments.

Figure 9 shows the case study selected for the user studies, which exemplifies recurring tasks usually carried out by workers at the workplace. It is a medium-sized baseplate on which the workers have to perform tubing installation by connecting various components and systems according to the lines and fitting designs defined by the technical office. In particular, as depicted in Fig. 9a, the case study focuses on two pipes placed on the upper deck of the baseplate and assembled together by means of a flange connection. The two pipes are characterized by their position on the top face of the baseplate, diameters, lengths, and bend radii. The verification of all these parameters, by checking their consistency with those defined at the design stage, is carried out by the end-users by means of the AR tool, which provides an augmented visualization of the 3D models on the real ones (Fig. 9b). In particular, as detailed in Section 5.2, users have to perform specific tasks and interactions. In this manner, the usability of the proposed AR tool is assessed by analysing human performance in target acquisition tasks through metrics suggested by the general agreement of the standards ANSI 2001 [33] and ISO 9241 [34]. According to these standards, usability has three dimensions: efficiency is measured from time on task; effectiveness is measured with errors; and satisfaction is summarized using standardized satisfaction questionnaires (collected either on a task-by-task basis or at the end of a test session) [35, 36].

Fig. 9 User while interacting with the AR tool © 2019 Baker Hughes, a GE company, LLC - All rights reserved

5.1 Participants

A total of 20 representative users were recruited. The subjects involved in the user study have knowledge of and experience with the detection of design discrepancies and with the existing paper-based internal procedure for handling these data. The participants were divided into two groups:

  • G1: 8 factory workers (all males) with average age 48.5 (min 43, max 61; SD = 7.8);

  • G2: 12 engineers/technicians (10 males and 2 females) with average age 36.8 (min 25, max 54; SD = 7.69).

As depicted in Fig. 10, the subjects of the first group were less familiar with the use of smartphones and tablets: only 50% of them had previous experience with this kind of device. By contrast, the subjects of the second group were all familiar with smartphones and tablets. Both groups of participants were naïve to AR technologies.

Fig. 10 Smartphone and tablet frequency of use among the groups

5.2 Procedure

The procedure consisted of three main steps, during which a human assistant supervised each test.

In the first step, the assistant provided each participant with a quick demo of the AR tool and described the sequence of tasks to perform during the test. The sequence of tasks was the following:

  • Task 1. Marker detection;

  • Task 2. Hide component;

  • Task 3. Set colour;

  • Task 4. Set transparency;

  • Task 5. Add 3D annotations in correspondence with design discrepancies;

  • Task 6. Edit the content of one 3D annotation;

  • Task 7. Hide 3D annotations;

  • Task 8. Send email.

In the second step, each participant carried out the eight assigned tasks without any time limitation. The assistant observed each subject while performing the tasks, noting down the completion time, the number and cause of errors, and every time the subject encountered a difficulty or hesitated while using the AR tool. Furthermore, a screencast with audio of the operations performed on the AR tool was recorded in order to allow an accurate offline check and analysis of the acquired data.

In the third step, each participant was invited to fill in a satisfaction questionnaire and take part in a one-on-one interview aimed at understanding his/her personal opinions about the proposed AR tool and gathering judgments and suggestions for improvement. The satisfaction questionnaire was developed on the basis of standard questionnaires whose definition rests on psychometric methods [35]. In particular, the questionnaire is a simplified version of the PSSUQ [36, 37]. It is composed of six items that follow a preliminary survey about the users' demographics (gender, age, work area) and technological experience. The first five items are 7-point graphic (Likert) scales, anchored at the endpoints with the terms "strongly disagree" for 1 and "strongly agree" for 7. The last item asks users to express their preference between the AR tool and the traditional paper-based approach to noting down design variations.

6 Analyses and results

Descriptive statistics, t tests, and analysis of variance have been used to analyse the effects of the AR tool on time and error rate. All analyses have been conducted using Microsoft Excel and the IBM® SPSS statistical package. The statistical significance level was set at p < 0.05.
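
For reference, the independent samples t statistic reported throughout this section is the pooled-variance (Student) form, as implied by the reported degrees of freedom (e.g., df = n1 + n2 − 2 = 18 for the per-task comparisons between G1, n1 = 8, and G2, n2 = 12):

```latex
t = \frac{\bar{x}_1 - \bar{x}_2}{s_p \sqrt{\dfrac{1}{n_1} + \dfrac{1}{n_2}}},
\qquad
s_p = \sqrt{\frac{(n_1 - 1)\,s_1^2 + (n_2 - 1)\,s_2^2}{n_1 + n_2 - 2}},
\qquad
\mathrm{df} = n_1 + n_2 - 2 .
```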

6.1 Objective metric results

Figure 11 shows the average number of errors that occurred in each task. As mentioned in Section 5.2, each task requires the accomplishment of a specific operation that can be performed by means of one or two interactions with the UIs of the AR tool. Selecting a command different from the one needed to complete the requested task was counted as an error, as was any request for a suggestion from the user to the assistant.

Fig. 11 Average number of errors for each task

As depicted in Fig. 11, the highest number of errors was observed in task 3, in which the user was invited to change the colour of one component by selecting the corresponding name on the menu displayed on the left side of the screen (Fig. 9) and then clicking the desired colour on the colour palette. In particular, the workers and the technicians concluded this specific task with an average of, respectively, M = 3.75 (SD = 2.7) and M = 3.50 (SD = 5.1) errors. As emerged from users' comments gathered during the personal interviews, the large number of errors in task 3 is probably due to the fact that users instinctively clicked on the 3D models instead of operating on the dedicated menu. This issue is common to both groups; in fact, an independent samples t test (Table 1) shows no significant difference in the number of errors in task 3 between the two groups of users (t(18) = 0.126, p = 0.901). The second largest number of errors was observed in task 5, in which users had to add annotations at the design discrepancies. Even though the number of errors made by the first group is slightly higher than that of the second group, also in this case there is no statistically significant difference between G1 (M = 2.5, SD = 2.4) and G2 (M = 1.8, SD = 2.7) (t(18) = 0.561, p = 0.581).

Table 1 Independent samples t test analysis of error rates between the groups

As illustrated in Fig. 11, in most tasks the workers made more errors than the technicians and engineers but overall, as outlined in Table 1, there is no statistically significant difference between the two groups (t(158) = 1.869, p = 0.063). The differences between the two groups are statistically significant only in task 6 and task 7, for which independent samples t tests give, respectively, t(18) = 3.286, p = 0.004 and t(18) = 5.659, p < 0.001. Tasks 6 and 7 both relate to 3D annotations. In particular, all the workers made errors when editing the textual content of a 3D note and when hiding the 3D labels displayed on the virtual models; these errors were committed by only 33% and 16% of the G2 participants, respectively.

Figure 12 compares the mean execution time required by the G1 and G2 user groups to perform each task. With the exception of tasks 1, 4, and 8, the workers took longer to complete the required tasks.

Fig. 12 Mean execution time for each task by group

The results shown in Fig. 12 were submitted to independent t test analyses, whose outcomes are grouped in Table 2. What emerges is that there is no statistically significant difference in completion time between the two groups; the two groups thus spent equivalent times to accomplish the various tasks.

Table 2 Independent samples t test analysis of completion time

An analysis of variance was conducted to compare the completion times of the different tasks carried out by the participants of the same group. A one-way ANOVA showed a statistically significant difference among the task completion times, with F(7, 56) = 6.263, p < 0.05 and F(7, 88) = 4.470, p < 0.05 for the G1 and G2 groups, respectively. In particular, a Games-Howell post hoc test revealed, for the G1 group, a significant difference between the completion times of tasks 3 and 5 and those of tasks 7 and 8 and, for the G2 group, between the completion time of task 5 and those of tasks 2, 7, and 8.

From all this, it can be deduced that, for both groups, the most problematic tasks, which also required more time to be accomplished, were those related to the colour setting of the 3D models and the insertion of 3D annotations.

This outcome is confirmed by the participants' comments; in fact, some of them clearly expressed difficulty in selecting the 3D models on which to insert a 3D note and in identifying the right functionality to change the colour of the 3D models.

6.2 Subjective evaluation

Figure 13 shows the results of the satisfaction questionnaire filled in by the users after performing the tasks. The first five questionnaire items, based on a Likert scale, have been gathered into three groups addressing important components of user satisfaction with the AR tool: two items assess system usefulness, one information quality, and two interface quality.

Fig. 13 User satisfaction questionnaire results by groups

The histograms indicate medium-to-high levels in the subjective opinions of both groups; nevertheless, the findings point out some differences between them. In particular, the group of workers reported lower usability scores than the second group, whose minimum score was six out of seven.

An independent samples t test shows a significant difference in system usefulness between the two groups of users (t(38) = 3.65, p < 0.05, two-tailed), with engineers (M = 6.5, SD = 0.59) scoring higher than workers (M = 4.62, SD = 1.99). The magnitude of the difference in the means (mean difference = 1.87; 95% CI, 0.79 to 2.96) was large (eta squared = 0.25). This result reveals that the G2 group expressed more appreciation than the G1 group for the usefulness of the proposed tool; nevertheless, both groups rated its usefulness positively.

By contrast, for information quality, an independent t test shows no statistically significant difference between the two groups (t(18) = 1.77, p = 0.09): the workers assigned an average value of 5.32 (SD = 1.38) to the information quality, while the engineers assigned an average value of 6.16 (SD = 0.94).

Regarding interface quality, an independent samples t test revealed a significant difference between the two groups (t(38) = 2.71, p = 0.01), with engineers (M = 6.1, SD = 0.61) scoring higher than workers (M = 5.6, SD = 0.5). The magnitude of the difference in the means (mean difference = 0.5; 95% CI, 0.12 to 0.87) was large (eta squared = 0.16).

An independent samples t test also shows a significant difference in overall user satisfaction between the two groups of participants (t(94) = 3.58, p < 0.05), with the G2 group (M = 6.37, SD = 0.68) scoring higher than the G1 group (M = 5.29, SD = 1.43).

Figure 14 shows the results of the last item of the satisfaction questionnaire, which asked participants to express their preference between the proposed AR tool and the current paper-based practice. The findings revealed high values for both groups, indicating broad approval of the proposed tool. As depicted in the graph, the results show a small difference (mean difference = 0.83; 95% CI, 0.29 to 1.37; eta squared = 0.21) that was confirmed by an independent t test, which indicated a statistically significant difference (t(18) = 3.25, p = 0.004) with engineers scoring higher (M = 6.8, SD = 0.39) than workers (M = 6, SD = 0.75).

Fig. 14 Subjective preference results

This result is consistent with those obtained from the satisfaction questionnaire. On the basis of the subjective metrics, the proposed AR tool has been valued very positively by the engineers and technicians involved in the user study, and positively by the workers. This slight difference of opinion can be explained by the different levels of familiarity with smartphone and tablet devices (Fig. 10): half of the workers had never used touchscreen input devices, unlike the second group of participants, in which more than half make frequent, daily use of this kind of handheld device.

7 Conclusions

This paper has presented an AR tool that runs on consumer handheld smartphones to support the detection of design changes at the workplace and to formalize the flow and exchange of information among the different professional figures involved in the design and production process of products for the oil and gas sector. These functionalities take advantage of hybrid tracking techniques to augment the 3D models, as defined in the project plan, on the actual design.

Beyond these main features, the AR tool's capability to augment virtual models on the real scenario also enables its use for assembly instructions: the virtual model could be adopted as a reference to guide operators in assembly activities.

Field experimentation carried out with end-users resulted in medium-to-high levels of usability of the proposed AR tool. Furthermore, the participants involved in the user study clearly expressed a positive opinion about the efficacy of the AR tool and showed interest and openness toward introducing this instrument as a support for their working activities.