
1 Introduction

In the industrial field, it is not uncommon to find discrepancies between the design defined by the technical office and the corresponding product. These discrepancies have different origins and, in the majority of cases, are due to unexpected issues that arise during manufacturing and prototyping activities because of the technology adopted, the operators' skills or the environmental context. Such design discrepancies are quite common in the Oil&Gas sector and, in particular, in the production departments devoted to the manufacturing and assembly of small lots of tubing and piping. In fact, because of the complexity of the piping networks and systems, these often require modifications and improvements to be made on-site by workers, who intentionally introduce a number of design changes and adjustments in order to minimize the number of bends, reduce overall dimensions, streamline pipe routes, and increase the visibility and manoeuvring space for easier assembly of the components. These design variations, if not properly and promptly detected, unavoidably lead to a significant increase in production cycle time and can cause slowdowns and inefficiencies in the design, production and maintenance areas.

At the moment, a very common solution for the detection of these design changes is a paper-based approach in which workers write down a few notes about the discrepancies on paper or on the technical drawings. These annotations, sometimes accompanied by photographic material, are then gathered and sent to the technical office, which takes care of updating the CAD models from which the next technical drawings will be generated. Clearly, this is not an efficient and reliable approach: the hand-written notes may be lost or may not be exhaustive and may consequently lead to misunderstandings. In any case, it is not possible for the technical office to infer, from 2D sources such as technical drawings and pictures, the precise 3D position of the design changes.

From the perspective of Industry 4.0, Augmented Reality (AR) represents a key technology that can overcome these limitations and inefficiencies. In the last decades, AR technology has been successfully and efficiently applied in many sectors of the industrial field to help workers accomplish several tasks [1,2,3,4,5], thanks to its capability to superimpose virtual data directly on the real environment. In particular, the display technology adopted for the development of AR-based solutions typically relies on head-mounted displays (HMDs) [6, 7], projectors [8, 9], and mobile devices [10, 11] such as smartphones and tablets. Despite this variety of technological solutions, the question of which device or display solution best fits the industrial context of future smart factories remains open. This is because there is no single technology that suits all types of environments with sufficient accuracy and reasonable cost; the solution to adopt strictly depends on the specific application context, the contingent users' needs and the industrial constraints.

In the industrial context described above, the AR application should allow workers to carry out the detection of design discrepancies at the workplace while watching the images superimposed on the real scenario. To this end, digital projectors offer the advantage of superimposing virtual data directly on the real environment, but they need to be placed in fixed positions and are characterized by limited areas of usage, surface-dependent distortions, lower color fidelity and limited visibility of the projection. HMDs provide more freedom of movement, and the user can use both hands while watching the superimposed images, but they suffer from poor ergonomics and excessive weight. Furthermore, both projectors and HMDs are expensive devices and require a workstation to run the software applications. As a consequence, mobile devices could be more adequate for the specific application field. Although handheld devices engage both hands during the interaction (one to hold the device and one to operate it), users can bring them to the workplace and interact with the AR application by moving and rotating the standalone device in real space.

On the basis of these considerations, the paper presents a novel AR tool that provides support at the workplace to easily detect design changes by augmenting the virtual 3D models, as defined in the project plan, on the actual design. The proposed AR tool requires no hardware other than a smartphone. In this manner, the user can easily hold the device with one hand and interact with the dominant hand. The paper focuses on an exploratory comparison test in which the AR tool is compared to technical drawings and CAD systems, traditionally adopted for the identification of design variations, in order to investigate and quantify the impact and the validity of the proposed tool with respect to the other two.

This first exploratory lab test has been carried out before proceeding with field testing, in order to collect a preliminary set of usability data that will be helpful to quantify the tool's potential and to improve its design in terms of functionality and ease of use. This first lab test is part of the development process of a novel instrument intended to support workers at the workplace. Further lab and field tests, for assessment and validation, will then be carried out within a user-centered design approach, with usability testing conducted throughout the whole design process in order to develop a usable handheld AR-based tool.

2 Research Aim and Test Materials

As stated in Sect. 1, an exploratory study has been carried out to evaluate the effectiveness and efficiency of a new AR tool that allows users to detect design discrepancies between real parts and their associated construction data. In particular, the main goal of the test is to compare the proposed AR tool with the traditional instruments adopted for design discrepancy checks by measuring user performance while completing the task.

The test sample has been specifically designed to simulate a simple real case study in which the physical item presents some variations with respect to the design defined in the project plan. In particular, Fig. 1a depicts the 3D model of a set of five tubes consisting of single-socket pipes, 40 mm in diameter, assembled by means of 45° and 90° bends and equal-branch connection pipes.

Fig. 1. 3D model of the piping (a) and its physical prototype with ten design discrepancies (b).

The corresponding full-size physical prototype (Fig. 1b) has been made with push-fit lightweight PVC pipes, fixed with hot glue on a thin stiff sheet of transparent plexiglass with approximate dimensions of 83 cm (length) × 70 cm (height). On each pipe of the physical mock-up (Fig. 1b), two design changes have been intentionally introduced, for a total of ten design variations with respect to the 3D model (Fig. 1a). The design changes are limited to the length and inclination of the pipes, with the exception of a missing component on the first pipe on the left.

3 AR Tool

The AR tool consists of an application developed with the Google Project Tango Development Kit [12]. This software development technology, introduced by Google in 2014, has been preferred to other solutions because it offers hybrid tracking techniques that combine vision-based and sensor-based methods to calculate the device's motion and orientation in 3D space in real time. In particular, the application takes advantage of both marker-based and natural-feature-based techniques and combines them with a sensor fusion technique that uses the various sensors on board the device (motion tracking camera, 3D depth sensor, accelerometer, ambient light sensor, barometer, compass, GPS, gyroscope) to remember areas the device has travelled through and to localize the user within those areas with an accuracy of a few centimeters.

Between 2014 and 2017, the Tango technology was enabled on a limited number of consumer Android devices: the Peanut mobile phone and the Yellowstone tablet produced by Google, the Zenfone AR smartphone produced by ASUS, and the Lenovo Phab 2 Pro [13] smartphone. All these devices share the hardware architecture required to enable the Tango technology. In particular, motion tracking is achieved using high-performance accelerometer and gyroscope sensors, depth perception relies on a proprietary RGB-IR camera, and area learning records visual features of the environment travelled by the device so that previously visited areas can be recognized. Because of the need for these special sensors, Google has recently introduced a new augmented reality system, known as ARCore [14], which works by exploiting the hardware already on board consumer Android smartphones.

The developed application runs on a Lenovo Phab 2 Pro smartphone, an Android device equipped with a Qualcomm Snapdragon 652 (1.80 GHz) processor, a 16 MP camera, and integrated depth and motion tracking sensors (accelerometer, digital compass, gyroscope, proximity sensor, ambient light sensor). Since all the computations are carried out on the device itself, no external hardware components are required for data input and processing.

The following illustration (Fig. 2) depicts the software architecture of the AR application, which takes as input the images of the real scenario framed by the camera and the 3D models, and combines this information to create the augmented reality visualization provided to the user through the display of the handheld device.

Fig. 2. Software architecture of the AR tool.

The different modules depicted in Fig. 2 have been programmed in Unity 3D [15], and each one is dedicated to one or more specific operations.

A live video of the real world is given as input from the Android smartphone camera to the “Scene capturer” module. The “Marker detection” module then accesses these data in real time by means of the Tango SDK in order to recognize the presence of a marker in the real-world scene.

When a marker is detected, the “Data management” module accesses an external file system in which the 3D models are stored and sends these data to the “Marker detection” module. This module calculates the relative position and orientation of the detected marker and assigns to it a local reference frame on which the 3D models' coordinate system is aligned. All these data are sent to the “Augmented image generator” module, which performs a precise superimposition of the virtual objects on the real-world scene.
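To make the alignment step concrete, the following minimal sketch (written in Python with NumPy, purely for illustration and not part of the Unity application) shows how a detected marker pose can define a local reference frame on which the model's coordinate system is placed by composing homogeneous transforms; all numeric values are hypothetical.

```python
# Illustrative sketch: placing a virtual model on a detected marker
# by composing 4x4 homogeneous transforms with NumPy.
import numpy as np

def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Hypothetical marker pose expressed in the world frame
# (identity rotation, marker 1.5 m in front of the world origin).
world_T_marker = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 1.5]))

# Fixed offset between the marker frame and the 3D model origin,
# defined at authoring time (here: model origin 10 cm above the marker).
marker_T_model = pose_to_matrix(np.eye(3), np.array([0.0, 0.10, 0.0]))

# The model pose in the world frame is the composition of the two transforms;
# the renderer then draws the model at this pose over the camera image.
world_T_model = world_T_marker @ marker_T_model

# A point given in model coordinates can be mapped into world coordinates.
p_model = np.array([0.2, 0.0, 0.0, 1.0])   # homogeneous point on the model
p_world = world_T_model @ p_model
print(p_world[:3])                          # -> [0.2 0.1 1.5]
```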

The “Marker detection” module is also responsible for the motion tracking capabilities. In particular, it takes advantage of the visual-inertial odometry algorithm embedded in the Tango device to combine the information provided by the device's inertial sensors and camera, and thus track the device's movement and orientation through 3D space.
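The sketch below, again purely illustrative and not a reproduction of the Tango algorithm, shows how per-frame relative pose estimates of the kind produced by a visual-inertial odometry pipeline accumulate into a global device pose; the increments used here are hypothetical.

```python
# Minimal sketch of how per-frame relative pose estimates accumulate
# into a global device pose by chaining homogeneous transforms.
import numpy as np

def rot_z(angle_rad: float) -> np.ndarray:
    """Rotation about the vertical axis, as a 4x4 homogeneous transform."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translate(x: float, y: float, z: float) -> np.ndarray:
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Hypothetical per-frame increments: small forward steps with a slight turn.
relative_poses = [translate(0.0, 0.0, 0.05) @ rot_z(np.radians(1.0))
                  for _ in range(20)]

# The device pose in the world frame is the running product of the increments.
world_T_device = np.eye(4)
for delta in relative_poses:
    world_T_device = world_T_device @ delta

print(world_T_device[:3, 3])   # accumulated device position after 20 frames
```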

The “Color mode” module uses functions to set material properties and to change the graphic properties of the 3D models from opaque to transparent. These properties are exposed by the “User Interface” module, which displays a menu through which the user can customize the graphic appearance of the 3D models in real time (Fig. 3). In particular, the default setting provides a shaded display of the augmented virtual models in a light gray color, but the user can change this color by means of a color palette displayed on the right side of the screen and set the transparency level through a slider. The customization of the visual style of the 3D models makes the detection of the design discrepancies easier because it allows the augmented virtual models to be clearly distinguished from the real-world background.

Fig. 3. 3D models augmented on the physical prototype.

4 Usability Testing

As mentioned in the previous sections, a usability test, consisting of an exploratory comparison test, has been carried out in order to collect empirical data while observing users employing three different instruments to perform a realistic design discrepancy detection task.

The test has been carried out in the laboratories of the Department of Mechanical, Energy and Management Engineering (DIMEG) of the University of Calabria. All the participants were students enrolled in the first year of the Master's degree in mechanical engineering. In particular, 34 volunteers have been selected for this comparison test, 26 male and 8 female, with ages ranging from 22 to 30 (mean 24.5, standard deviation 1.9). The number of participants has been chosen according to the most influential articles on the topic of sample size [16, 17].

As mentioned above, all the subjects involved in the comparative test had an engineering background, so as to ensure that the study involved users who possessed, as prerequisite knowledge, the ability to use CAD systems and to prepare and interpret technical drawings, and who therefore had the skills and knowledge to judge whether the developed AR tool could be a valid alternative to the traditional instruments for the detection of design variations. In fact, all of them were familiar with 2D engineering drawings and expert in the use of 3D modelling and CAD systems. On the contrary, none of the participants had any previous experience with augmented reality devices for the visualization of 3D models superimposed on a real scene. This lack of knowledge and practical experience is functional to the user study because it reflects a characteristic of the target user profile: as mentioned in Sect. 1, the targeted users are workers who would adopt the proposed AR tool directly at the workplace.

Figure 4 shows the instruments and metrics adopted for the independent-groups test design. The 34 subjects have been separated into three groups, homogeneous in age, gender and visual acuity, and each group has been assigned a specific instrument, i.e., a technical drawing, a CAD system or the proposed AR tool.

Fig. 4. Comparison test plan.

Each instrument has then been evaluated in terms of its effectiveness and efficiency on the basis of the users' results while performing a target acquisition task. This evaluation has been carried out according to the ISO 9241-11:2018 [18] standard: effectiveness is the accuracy and completeness with which users achieve specified goals, while efficiency relates to the resources expended in relation to the accuracy and completeness with which users achieve those goals. In this study, the metrics adopted to measure effectiveness and efficiency are, respectively, success rate and task completion time.
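As an illustration of how the two metrics can be derived from the raw observations, the following sketch computes the success rate and the mean completion time, with a normal-approximation 95% confidence interval, for one hypothetical group; the per-participant data shown are invented for the example (ten seeded discrepancies per trial, as in the mock-up).

```python
# Sketch of how the two usability metrics can be computed for a group,
# using hypothetical raw data (ten seeded discrepancies per trial).
import numpy as np

TOTAL_DISCREPANCIES = 10

# Hypothetical per-participant results: (discrepancies found, time in minutes)
group_results = [(8, 11.2), (7, 13.0), (8, 12.4), (7, 12.9)]

found = np.array([f for f, _ in group_results], dtype=float)
times = np.array([t for _, t in group_results])

success_rate = found / TOTAL_DISCREPANCIES    # effectiveness per participant
mean_success = success_rate.mean() * 100      # group success rate (%)
mean_time = times.mean()                      # group completion time (min)

# 95% confidence interval half-width (normal approximation, as in the charts)
ci95_time = 1.96 * times.std(ddof=1) / np.sqrt(len(times))

print(f"success rate: {mean_success:.1f}%  "
      f"time: {mean_time:.2f} ± {ci95_time:.2f} min")
```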

Regarding the task, each participant was asked to detect the design variances between the physical and the virtual prototype (Fig. 1) using the assigned instrument (Fig. 5).

Fig. 5. Participants while using the technical drawing (a), the CAD system (b) and the AR tool (c).

The comparison test consisted of four sessions. In the first session, the participants began the test by filling out a background questionnaire. In the second session, a tutor provided the participants with the assigned instrument; only for the AR tool was a very brief demonstration of its usage (lasting about 30 s) provided. Then the tutor assigned the task to the participants. In particular, the participants were instructed to adopt the assigned technical drawing or digital data as the reference against which to detect the design variations made on the physical prototype. No mention was made of the number of discrepancies. In the third session, each participant carried out the task without any time limit. In order to perform the required task, the participants had at their disposal a measuring tape and a pencil to note down the design discrepancies. In the fourth session, the test ended with post-test debriefing interviews to follow up on any particular problems encountered by the participants and to collect preferences and other qualitative data.

5 Data Analysis and Results

Descriptive statistics and analysis of variance tests have been used to check for statistically significant differences among the three groups. All analyses have been conducted using the statistical packages Microsoft Excel and IBM SPSS. The statistical significance level has been set at p < 0.05.

As mentioned in the previous section, two separate statistical models have been adopted to analyse the effect of the instrument on the completion time and success rate of the design discrepancy identification task.

The following bar chart (Fig. 6) shows the success rate, i.e., the proportion of design variations identified by participants by means of the three different instruments; error bars represent the 95% confidence interval. The chart presents similar average success rates: 74.54% (SD 9.3) for the group that performed the test with the technical drawings, 74.54% (SD 16.9) for the group with the CAD system, and 72.50% (SD 6.2) for the group with the AR tool.

Fig. 6. Success rate of task completion for each group.

These data have been compared by means of an ANOVA test. Levene's test for the homogeneity of variances presents a significance value of 0.05, so the null hypothesis of equal variances has been retained. The ANOVA results show F(2,31) = 0.12, p = 0.88, which means that the instrument has no significant effect on the success rate.
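For reference, an equivalent analysis could be scripted as in the following sketch, which runs Levene's test and a one-way ANOVA with SciPy on hypothetical per-participant success rates (the raw scores collected in the study are not reported here).

```python
# Sketch of the success-rate analysis with SciPy, on hypothetical data.
import numpy as np
from scipy import stats

g1_drawing = np.array([0.8, 0.7, 0.7, 0.8, 0.6, 0.8, 0.7, 0.8, 0.7, 0.8, 0.7, 0.8])
g2_cad     = np.array([0.9, 0.6, 0.8, 0.5, 0.9, 0.7, 0.8, 0.6, 0.9, 0.7, 0.8])
g3_ar      = np.array([0.7, 0.8, 0.7, 0.7, 0.8, 0.7, 0.7, 0.8, 0.7, 0.7, 0.8])

# Levene's test for homogeneity of variances across the three groups.
lev_stat, lev_p = stats.levene(g1_drawing, g2_cad, g3_ar)

# One-way ANOVA on the success rates (valid when variances are homogeneous).
f_stat, p_value = stats.f_oneway(g1_drawing, g2_cad, g3_ar)

print(f"Levene p = {lev_p:.3f}, ANOVA F = {f_stat:.2f}, p = {p_value:.3f}")
```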

The following bar chart (Fig. 7) shows the average time each group needed to complete the task; error bars represent the 95% confidence interval. In particular, the participants with the AR tool completed the test in an average time of 1 min and 43 s (SD 0.43 min), whereas the participants with the technical drawings and the CAD models completed the test in 12.50 min (SD 1.73) and 10.89 min (SD 2.20), respectively.

Fig. 7. Average time required by each group to complete the task.

The completion time was analyzed using a one-way ANOVA test, whose results confirm a significant effect of the instrument on the time required by users to identify design variations, with F(2,31) = 165.46 and p < 0.05. However, Levene's test turned out to be highly significant, with a significance value < 0.05, so the assumption of homogeneity of variances is violated and the variances of the groups are significantly different. This is quite evident in the descriptive statistics reported above, in which the standard deviation of the G3 group (AR tool) is 3 times smaller than that of the G1 group (technical drawing) and 5 times smaller than that of the G2 group (CAD system). Since the variances are significantly different, robust alternative statistics have been adopted. In particular, the Brown-Forsythe and Welch versions of the F-ratio have been computed. Both tests are highly significant, with significance values < 0.05: the Brown-Forsythe F-ratio is F(2, 20.11) = 158.52, p < 0.001, and the Welch F-ratio is F(2, 14.85) = 289.07, p < 0.001. As a consequence, it is possible to state that the adopted instrument has a significant effect on the time required to identify the design discrepancies. A Games-Howell post-hoc test has been carried out to find out where the differences between the groups lie. The results confirmed that the group that used the AR tool differs significantly from the other two groups (p < 0.001), whereas there is no statistically significant difference between the G1 and G2 groups (p = 0.159). In particular, the group that used the AR tool shows a reduction in completion time of 87.3% and 85.5% compared with the technical drawing and CAD groups, respectively. Similar results are obtained when the median score is adopted for the comparison [19]: also in this case, the AR tool brings down the completion time by 88.8% and 84.9% with respect to the G1 and G2 groups, respectively.
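The robust follow-up analysis could likewise be scripted, for example with the pingouin package as in the sketch below; the analysis reported above was performed in SPSS, and the data generated here are hypothetical values that merely mimic the reported group means and standard deviations (group sizes are also assumed).

```python
# Sketch of the robust completion-time analysis, assuming the pingouin package.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["G1"] * 12 + ["G2"] * 11 + ["G3"] * 11,   # assumed group sizes
    "time_min": np.concatenate([
        rng.normal(12.50, 1.73, 12),   # technical drawing (G1)
        rng.normal(10.89, 2.20, 11),   # CAD system (G2)
        rng.normal(1.72, 0.43, 11),    # AR tool (G3)
    ]),
})

# Welch's ANOVA does not assume equal variances across groups.
welch = pg.welch_anova(data=df, dv="time_min", between="group")

# Games-Howell post-hoc test locates the pairwise differences.
posthoc = pg.pairwise_gameshowell(data=df, dv="time_min", between="group")

print(welch[["F", "p-unc"]])
print(posthoc[["A", "B", "pval"]])
```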

In conclusion, the comparison test produced interesting results about the adoption of AR technology for the detection of design discrepancies. Even though all the participants were novices in the use of the AR tool, they performed relatively well and reached results similar, in terms of effectiveness, to those of the groups that used technical drawings and CAD systems. Regarding efficiency, taking as reference the mean completion times scored with the technical drawing and the CAD system, since these are the instruments commonly used at the moment to perform this kind of operation, it is evident that the group with the AR tool performed better than the other two groups. In fact, the detection of design variations is significantly faster with the proposed AR tool than with technical drawings and CAD systems. This outcome suggests that the use of AR minimizes the cognitive load of divided attention induced by attending to both the physical prototype and the related design data. In fact, divided attention tasks demand a high mental workload [20] and increase the difficulty rather than simplify the operations [21]. As a consequence, this leads to a reduction in the efficiency of the user's processing capacity. Furthermore, for users with poor divided attention, any interference may disrupt the tasks that they are carrying out simultaneously.

The AR tool has thus proved to be an interesting solution that simplifies the comparison of real objects with their three-dimensional models and, consequently, can support users in the detection of design discrepancies. Its main advantage lies more in efficiency than in effectiveness when compared with the paper- and CAD-based solutions. It is, therefore, a valid solution that is worth continuing to work on, developing additional features to make it a complete, user-friendly instrument able to support workers at the workplace within a smart factory context.

6 Conclusions

The paper has proposed a novel tool, based on AR technologies, specifically developed to support workers in the detection of design variations between the 3D models prepared by the designers and the physical prototypes as they have been made and assembled at the workplace.

The paper has focused on an exploratory comparison test that is of fundamental importance because it lays the ground for the following development stages. In particular, the test aimed to compare the effectiveness and efficiency of three different instruments by means of which users can detect design variances. The test has been carried out in a controlled laboratory environment that is not affected by the many issues that usually arise in an industrial environment. This exploratory test therefore represents a preliminary study of the proposed AR tool, carried out in an environment with ideal conditions and with a specific typology of users; the tool will be further subjected to an iterative cycle of lab tests, to expose usability deficiencies, and of field experimentation under real conditions of use.

What emerges from the experimentation is that the proposed AR tool provides results similar to the other instruments in terms of effectiveness and an evident benefit in terms of efficiency when compared with the instruments traditionally adopted for the detection of design discrepancies. In conclusion, the results support the assumption that the proposed AR tool could be efficiently adopted as an alternative to technical drawings and CAD systems to support users in detecting the design discrepancies that occur between the 3D models and the corresponding physical prototypes.

Furthermore, differently from the other two instruments taken into consideration for the comparison test, the proposed AR tool has been conceived as a handheld device so that it can be used by workers directly at the workplace. In addition to these main features, and with a view to the Industry 4.0 context, the AR tool could be integrated with the product data management (PDM) system to automatically store the acquired data and share it with the other professional figures involved in the development and production processes.