1 The Building Maintenance Instruction Manual

Building maintenance and repair costs are influenced by several factors such as hours of operation, use type and intensity, location, and overall condition. While numbers vary from one facility to the next, on average, these costs could range anywhere from 1% to 5% of the initial building cost per year [1]. When accumulated over the building lifetime, the cost of maintenance and repair may even exceed the initial construction cost. Recent technological advancements in architecture, engineering, construction, and operation (AECO) domains have created new opportunities for transforming the current practice of building maintenance using new visualization and sensing technologies.

Evidently, a product is not considered complete and ready to use unless it is accompanied by proper information or documentation. Such information is indispensable if consumers are to utilize all the features a product has to offer and put it to good use. Traditionally, an instruction manual is supplied with a manufactured product or commercial good, containing key information on how to use the product and what to do when repair or maintenance is needed. For example, the European Union (EU) legislation titled “Usable and safe operating manuals for consumer goods: guideline” specifies that a product is complete only when accompanied by an instruction manual [2]. Delivering or selling a product without an instruction manual is against the law in many parts of the world, and in such cases the user is entitled to full assistance or cost reimbursement. Moreover, the distribution of industrial products within the EU requires a declaration of conformity for the product, and the distributor bears responsibility for any issues or losses arising from a lack of conformity [2].

In turn, the British Standard BS 8210:2012 titled “Guide to facilities maintenance management” is aimed directly at building owners, operators, and facility managers [3]. In this standard, the manual is characterized as a maintenance document which provides technical instructions for preserving an item or restoring it to a state where it can perform its function. This same publication highlights that preparing a manual offers significant advantages by providing a clear statement of intent and necessary actions.

According to this standard, the maintenance and repair steps taken by the firm that produces the product (i.e., the building or facility) should be formalized in a manual, which should be updated periodically and may form part of broader documentation incorporated into a facility handbook [3]. In addition, BS 8210:2012 cites other related standards, including (i) BS EN 82079:2012, titled “Preparation of instructions for use – Structuring, content and presentation – Part 1: General principles and detailed requirements,” which provides the general principles and specific requirements for the design and formulation of all types of instructions for use, addressed to all types of product users, and (ii) BS EN 13460:2009, titled “Maintenance – Documentation for maintenance,” which deals with the operational phase of the equipment life cycle and describes the list of documents required for maintenance [3,4,5].

In Brazil, the 1990 Law No. 8078, better known as the Consumer Protection Code, provides for consumer protection and related standards (Brazil, 1990). In this law, a product is characterized as any property, movable or immovable, material or immaterial. Among the fundamental rights of the consumer is access to adequate information about products and services, with correct specification of quantity, characteristics, composition, quality, taxes, and prices, in addition to the risks they present. Article 50 of this law stands out for establishing the parameters to be followed by the contractual warranty: the warranty term should be standardized and adequately state what the warranty consists of, as well as its form, duration, place, and any burden on the consumer. The installation and product instruction manual must be delivered in simple, didactic language and with illustrations.

Although the concept of the instruction manual has evolved over time, its format and delivery mode have for the most part remained the same (i.e., traditional printed form, computer disk, or hypertext). Regardless of the type of media used, the manual is often filled with textual or tabular information or, at best, basic illustrations. Recent advancements in visualization technologies have created new opportunities to change how people with different levels of knowledge and/or learning styles interact with and understand contents or objects in their surroundings. What makes this new approach even more practical is the ubiquity of personal technologies, which has given the new generation a natural ability to use mobile and intelligent tools and a strong affinity for integrating new devices into daily tasks [6].

In this chapter, the authors focus on the “building” as the object (i.e., product) to be clarified by the instruction manual. Zevi [7] states that the traditional method of building representation (floor plans, elevations, sections, and photography) does not completely represent the architectural space because of fragmentation and ambiguity in the two-dimensional (2D) drawings used. This same author opines that:

The plan of a building is nothing more than an abstract projection on the horizontal plane of all its walls, a reality that nobody sees except on paper, whose only justification depends on the need to measure distances between the various elements of the building, to the workers who are to perform the work materially. [7]

The research challenge addressed in this chapter can be best described as follows: the current delivery method of building maintenance instruction manual (BMIM) in textual format and technical language does not cater to the needs and level of understanding of the end users (i.e., building owners, facility managers), who are, in turn, not motivated enough to use it. As a result, it does not fulfill its role of guiding building maintenance and use activities.

The way we, as humans, perceive reality is continuously changing, as digital technology revolutionizes our view of the world and our interface with the surrounding environment. New information and communications technologies (ICT) constantly challenge the status quo by creating new ways of thinking and living, and human relationships depend on constant metamorphosis of informational devices of all kinds [8]. Augmented reality (AR) is one of these technologies that can be used to enhance the understanding of specific contents. AR is a fast-evolving field of research and development with significant growth potential.

AR combines virtual and real-world scenes in an effort to increase user perception and interaction [9]. It can also be understood as the overlaying of computer-generated information onto the user’s view of the real environment, differing from virtual reality (VR) in that virtual objects in AR coexist with real-world objects [10]. In a typical AR application, at least one computing device (computer or mobile device) equipped with a visual display (e.g., monitor, projector, head-mounted display) and an image capturing device (e.g., camera) is used. In more computationally demanding applications, or when multiple users are involved, researchers have also used web systems [11]. Regardless of the type of AR application, the unique advantage of AR is that it uses the real world as a background while allowing interaction with the virtual content.
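To make this setup concrete, the following minimal sketch (in Python, assuming the opencv-contrib-python package, OpenCV 4.7 or later, and a webcam) illustrates the basic loop of a marker-based AR application: each camera frame is searched for a printed fiducial marker, the marker pose is estimated, and virtual content can then be registered on top of the real scene. It is an illustrative sketch only, not the Unity/Vuforia implementation developed later in this chapter, and the camera intrinsics and marker size are placeholder values.

```python
# Minimal marker-based AR loop (hypothetical sketch, not the authors' implementation).
# Assumes opencv-contrib-python >= 4.7; the aruco API names differ in older versions.
import cv2
import numpy as np

# Placeholder intrinsics; a real application would use a calibration step.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)
MARKER_SIDE_M = 0.10  # printed marker side length in metres (assumed)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

# 3D corners of the square marker in its own coordinate frame
# (order matches detectMarkers: top-left, top-right, bottom-right, bottom-left).
half = MARKER_SIDE_M / 2.0
obj_pts = np.array([[-half, half, 0], [half, half, 0],
                    [half, -half, 0], [-half, -half, 0]], dtype=np.float32)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is not None:
        for c in corners:
            # The marker pose is what allows virtual content to be registered
            # (anchored) on the real scene; here we only draw the pose axes.
            found, rvec, tvec = cv2.solvePnP(obj_pts,
                                             c.reshape(-1, 2).astype(np.float32),
                                             camera_matrix, dist_coeffs)
            if found:
                cv2.drawFrameAxes(frame, camera_matrix, dist_coeffs, rvec, tvec, half)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
    cv2.imshow("AR marker tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

In a full application, the estimated pose would drive the rendering of a 3D model over the camera image instead of simple axes.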

Research involving AR in AECO is a current trend. Rankohi and Waugh [12] conducted a literature review of 133 articles from AECO journals that contained the keyword “augmented reality” and identified, among other aspects, the following types of applications for AR: (i) visualization or simulation, (ii) communication or collaboration, (iii) information modeling, (iv) access to information or evaluation, (v) monitoring, (vi) education or training, and (vii) safety and inspection. Other studies have proposed innovative ways of using AR to enhance construction education by combining written content with 3D viewing through AR [10, 13]. According to these studies, although students often have excellent theoretical knowledge, they do not know how to apply it in practice. In a different study [14], AR was used for operation and assembly tasks by comparing the assembly of parts guided by the isometric design of hydraulic installations with assembly guided by AR visualization, and it was found that using AR could lead to improved productivity and assembly performance by reducing cognitive workload. Similarly, Hou et al. [15] analyzed the effect of using AR on the user’s cognitive load in assembly tasks and discussed how this approach could shorten the learning time of new assembly workers. The study also compared a printed assembly guidebook with the designed AR system, and the results indicated a positive impact, demonstrated by a steeper learning curve and a reduction of mistakes.

Wang et al. [16] addressed the gap between building information modeling (BIM) and AR and proposed a conceptual framework that integrates the two so that the physical context of each building activity can be viewed in real time. They suggest that for this integration to be effective, AR must be ubiquitous and act in conjunction with tracking and detection technologies. Similarly, Olbrich et al. [17] studied the problem of information visualization in AR based on BIM models and stated that the challenge is to give the agents involved in the building’s life cycle access to the management system through on-site inspections and seamless exchange of information.

Considering the BMIM within the context of the existing literature, the potential for creating a more interactive, accessible, dynamic, and instructive BMIM becomes apparent. This association can push beyond the limits of current visualization practice and bring the manual closer to the constructed object (i.e., the building). Thus, this chapter seeks to enhance the performance and adoption of the BMIM through a closer relationship with the end user, who, in most cases, is not familiar with construction terms. Using AR in facility management and building operations is relatively new and underexplored. Facility managers regularly visit the spaces they manage, and using mobile devices with AR information visualization has been shown to improve the quality of maintenance and operation tasks [18]. Timely information support plays an essential role in problem containment and diagnostics [19]. In this sense, a facility manager, as well as the owner and the project manager, may adopt the BMIM as a management tool for building maintenance and operation activities. Thus, the authors envision an innovative application that reshapes the way users interact with the BMIM.

The work presented in this chapter is multidisciplinary, involving the areas of design, construction, facility management, and computer science. The contribution of this research is to clarify the most appropriate ways to incorporate AR into the building maintenance instruction manual (BMIM) and to qualify the performance improvements obtained when using the AR-enhanced manual, following NBR 14037 [20]. Therefore, the general objective of this study is to present the design and evaluation of an AR-integrated BMIM intended to improve the quality of building operation and maintenance. To guide the research and achieve the proposed objective, the design science research (DSR) method, also known as constructive research [21], is applied, as described throughout this chapter.

2 Augmented Reality and Facility Management

Visualization has gained increasing credibility among construction researchers and has been considered one of the top four IT domains in this field [22]. Although every construction project is unique, several tasks (including periodic inspections and preventive and corrective maintenance) are repeated throughout the life of many constructed facilities. Visualization is a powerful tool for enhancing the user/stakeholder experience throughout the lifecycle of a construction project. Conflicts and other essential aspects of project execution can be understood in the pre-construction phase using a variety of visualization and simulation techniques. Visualization tools have also been used to improve and revolutionize current design approaches, jobsite safety [23], and on-site diagnostics in combination with real-time sensor data [24]. In AECO, physical objects often need to be related to their information, making AR a strong candidate for this task, as it assists users in viewing the environment complemented with the necessary information, united in a single interface [25].

In the past two decades, the AECO community has examined different approaches, including VR and AR, to improve communication, visualization, and coordination among different project stakeholders [24]. For Nee et al. [26], most collaborative visualization systems in AR are systems based on design visualization. In these systems, virtual models are presented in AR to the designers in order to facilitate the decision-making process. Parallel to the virtual model, knowledge about objects such as metadata, design information, and annotations can also be viewed in AR to facilitate decision-making in these collaborative systems.

There is also a growing tendency for users to interact directly with the information associated with the production process. AR can integrate these modalities in real time into the work environment, which is useful for manufacturing, assembly, training, and maintenance activities. Moreover, AR can provide users with a path of direct and intuitive interaction with information in the manufacturing process [26]. Several studies demonstrate the use of AR in this regard. For instance, AR was used in the construction planning phase to assist in decision-making [27] and has been applied to training and simulation of the machining process of CNC machines [28]. AR has also been deployed to assist design and intuitive assembly through gestures and the manipulation of virtual objects [29], as well as for inspection and instruction in construction [30].

Wang et al. [16] presented a conceptual approach with several examples of how AR can be incorporated into BIM and listed potential use cases such as connecting design information with physical environment, mental model synchronization for communication and project control, monitoring and feedback by comparing as-built with as-planned information and visualizing discrepancies between design and production, and finally site and storage planning. In assembly, operation, and maintenance tasks, experiments have shown that the use of AR can improve operator’s understanding and process control [28, 31].

Overlaying digital information on the views of the real environment using AR can assist workers to implement the correct assembly procedures with greater accuracy and error reduction. In a particular study that used laboratory experiments, researchers observed the reduction of task completion time by 55% and the reduction of assembly errors (by reducing rework) by 46% [14].

Maintenance activities, such as preventive and corrective activities, are almost always established according to predefined procedures and accepted protocols. Workers in this area need to be trained to carry out specific maintenance-related tasks. These professionals sometimes need to seek assistance from support systems and specialists in the field. Training on maintenance tasks can be done using 2D printed materials and VR simulation systems. However, VR technologies are rarely applied to maintenance, where interaction with actual physical equipment is required. AR visualization has an advantage in these applications in that the user interface can be ubiquitously designed to allow for an unobstructed view of the real physical object while accessing necessary instructions and maintenance data [26].

Graf et al. [32] add that with the increasing use of BIM in AECO, new opportunities arise to assist facility maintenance and operation professionals in taking advantage of the building’s lifecycle-related BIM information and real-time environment simulation. In particular, the combination of BIM, computer vision, and tracking technologies will enable future applications for as-built capture and the viewing of “just-in-time” (JIT) operation and maintenance information.

A study by Irizarry et al. [33] investigated situational awareness integrated into the context of FM, BIM, and AR. First, the as-built BIM model of the space was created, and its geometry was simplified so it could be exported, in an appropriate format, to 3D visualization software. Next, 360° panoramic images were generated by positioning a camera within the BIM model. The simplified geometry was then imported into Google SketchUp to geo-reference and position physical markers called Information Surveyed Points for Observation and Tracking (InfoSPOT). The usability and interface quality of the developed prototype were successfully tested in a user study, presenting InfoSPOT as a low-cost solution that utilizes AR-integrated BIM for facility management.

3 Systematic Literature Review

Systematic literature review (SLR) is part of the DSR (constructive research) method explained previously. According to Ref. [34], SLR is a scientific investigation approach first adopted in the late 1980s, owing to the large number of publications being produced and the absence of an appropriate literature review methodology. SLR is a means of identifying, evaluating, and interpreting all available research relevant to a particular research question, topic, or phenomenon of interest [35]. Studies that contribute to the systematic review are called primary studies, and the systematic review itself is a secondary study. Systematic reviews should be performed according to a predefined search strategy, and this strategy should allow the completeness of the search to be evaluated. In particular, researchers who conduct systematic reviews should make every effort to identify and report research that supports the research hypothesis, as well as identify and report research gaps [35].

SLR differs from a conventional literature review in several aspects. In particular, SLR begins by defining a protocol that specifies the research question and the methods that will be used in the study. In addition, SLR is based on defined search strategies intended to detect as much relevant literature as possible. These search strategies should be documented so that readers can assess their credibility and completeness. Furthermore, SLR requires explicit inclusion and exclusion criteria and specifies the information to be obtained from each primary study, including the evaluation criteria. Finally, SLR is a prerequisite for quantitative meta-analysis [35]. An important aspect of SLR is the validation of the protocol by a subject matter expert; if this evaluation yields unsatisfactory results, the protocol should be reformulated [34].
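As a purely illustrative aid (not the protocol actually used in this chapter), the sketch below shows how explicit inclusion/exclusion criteria of the kind described above can be applied programmatically to records exported from bibliographic databases; the field names, search terms, and thresholds are hypothetical.

```python
# Hypothetical SLR screening step: apply explicit inclusion/exclusion criteria
# to bibliographic records. Criteria and fields are illustrative only.
from dataclasses import dataclass


@dataclass
class Record:
    title: str
    abstract: str
    year: int
    venue: str


INCLUDE_TERMS = ("augmented reality",)
CONTEXT_TERMS = ("assembly", "maintenance", "operation", "instruction manual")


def screen(records):
    """Keep records that mention AR, fall within the review window, and touch
    the building assembly/maintenance/operation context."""
    selected, excluded = [], []
    for r in records:
        text = f"{r.title} {r.abstract}".lower()
        in_window = 1997 <= r.year <= 2019
        mentions_ar = any(t in text for t in INCLUDE_TERMS)
        in_context = any(t in text for t in CONTEXT_TERMS)
        (selected if (in_window and mentions_ar and in_context) else excluded).append(r)
    return selected, excluded
```

Documenting a screening step in this form makes the inclusion and exclusion decisions reproducible, which is the point of a predefined SLR protocol.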

In this study, SLR is performed to identify existing work in AR applications related to building assembly, maintenance, and operation along with instruction manuals. The following questions guide the review of literature: (a) what prototypes are produced in AR applications? and (b) what tracking techniques are used in AR applications?

The time range used in conducting the SLR is from 1997 until September 2019. The starting point of 1997 is based on the article “A Survey of Augmented Reality” by Ronald Azuma, published in Presence: Teleoperators and Virtual Environments (MIT Press). This is one of the pioneering studies in AR, which systematically introduces the concept of AR and presents an extensive bibliography in this field of research [36]. In the presented research, the developed SLR protocol consists of planning, conduction and information extraction, analysis, and synthesis of results. The steps are summarized in Table 21.1.

Table 21.1 Summary of SLR protocol

During the synthesis of results, timelines are elaborated considering the number of publications and the tracking method used. Also, a hierarchical categorization of activities in AECO is used [39], which is presented in Table 21.2. The artifacts are categorized according to the first three levels, namely, Area (AA), Application (AP), and Activities (AC).

Table 21.2 Hierarchical categorization of AECO activities [39]

3.1 Existing Studies

Figure 21.1 shows the temporal evolution of AR studies in the areas of building assembly, operation, and maintenance or instruction manuals. According to this figure, while initial work in these fields began in 1999, the SLR revealed no relevant literature in the years 2000, 2001, 2004–2008, and 2016. Additionally, from 2011 onward, significant growth in the number of selected articles is observed, with higher counts starting in 2013 (7 articles) and a peak in 2018 (11 articles).

Fig. 21.1 Temporal distribution of AR studies applied to building assembly, operation, and maintenance or instruction manuals by year of publication

This quantitative analysis suggests that AR applications for building assembly, maintenance, and operation constitute a relatively new theme that is experiencing slow growth. According to Ref. [40], areas such as mechanics and aeronautics started to draw attention at the same time that the term “augmented reality” was first coined in the early 1990s. In short, the identified sources cover a period of 23 years, and the analyzed sample shows a late appearance of these applications, specifically as related to building maintenance and operation. However, this area of research is on the rise, with an emphasis on creating functional prototypes.

An analysis of the tracking methods used in existing AR applications developed for building assembly, maintenance, and operation and/or instruction manuals identifies three different techniques: markers, markerless methods (natural feature tracking), and sensors. As shown in Fig. 21.2, among the 73 articles analyzed, 55 explicitly characterize some form of tracking system. The use of markers is predominant, appearing in 58% of these studies, followed by sensors and markerless methods with 22% and 20%, respectively. It is also found that, as of 2019, the use of newer tracking methods (i.e., sensors and markerless) is on the rise (Fig. 21.2).

Fig. 21.2 Tracking types used in AR applications by year of publication

Lee and Akin [19] used markers as a tracking technique for the development of an AR application for equipment operation and maintenance and justified their approach by citing the reported inaccuracy of other tracking methods when used inside complex buildings and indoor spaces. In other studies, researchers have used sensing techniques such as the global positioning system (GPS) and six degree-of-freedom (6DOF) tracking systems to locate objects for visualization applications. For example, Schall et al. [41] developed a GPS-based method for operation and maintenance that uses AR for various verification tasks in urban installations; in this study, the accuracy achieved was within 30 cm. Regarding markerless tracking, Olbrich et al. [17] presented the visualization of information based on BIM models using feature point-based tracking. In that study, a system for creating mobile AR applications was proposed in which 3D models coexist with semantic information. The combination of BIM with AR provides visualization of construction-related data on-site and supports documenting content via mobile devices.
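For illustration, a markerless (natural feature) tracking step can be sketched as follows, assuming Python with OpenCV: keypoints of a known reference surface are matched against each camera frame, and a homography is estimated to anchor virtual annotations onto the recognized surface. This is a generic sketch, not the implementation of any of the cited systems, and the reference image file name is hypothetical.

```python
# Hypothetical natural-feature ("markerless") tracking sketch with ORB features.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Known reference surface (e.g., an equipment face plate); assumed to exist on disk.
ref = cv2.imread("reference_surface.jpg", cv2.IMREAD_GRAYSCALE)
kp_ref, des_ref = orb.detectAndCompute(ref, None)


def locate_surface(frame_gray):
    """Return the homography mapping the reference surface into the frame, or None."""
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_f is None:
        return None
    matches = sorted(matcher.match(des_ref, des_f), key=lambda m: m.distance)[:80]
    if len(matches) < 10:
        return None
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # H can then be used to warp and overlay virtual annotations on the surface.
    return H
```

Compared with printed markers, this approach requires no added artifacts in the scene, but it depends on the surface having enough distinctive, well-lit texture.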

3.2 Artifacts Categorization

Among all the studies gathered in the SLR process, those that resulted in the creation of AR application prototypes for building assembly, maintenance, and operation and/or instruction manuals are listed and characterized in Table 21.3. This categorization is based on the mixed reality taxonomy for operation and maintenance tasks in AECO [39], presented in Table 21.2.

Table 21.3 Characterization of AR prototypes applied to building assembly, operation, and maintenance or manual

Visualization is the most recurring activity, being the subject of the largest number of studies (24), and it covers all areas of application listed in Table 21.2. In addition, assembly and monitoring are covered in 12 and 10 studies, respectively, followed by equipment control (6 studies), planning (4 studies), and, finally, handcrafting and robotic fabrication (1 study each).

Among the prototypes found, the applications for monitoring [17, 19], visualization [33], and assembly [14, 15] are closely related to the topic of this study. Lee and Akin [19] present an AR prototype aimed at equipment maintenance and highlight the following information types: (i) maintenance information (including specifications, subsystems, and components, as well as the agents involved and the maintenance history); (ii) operating information (including performance data or data flow); and (iii) geometric representation (which helps the maintenance professional better understand the equipment status and its location). The study by Olbrich et al. [17] explores the visualization of information through AR based on BIM models and a user-centered annotation mechanism; its importance, within the context of FM, is that it emphasizes the use of information added to BIM using AR visualization. Moreover, the study by Irizarry et al. [33] shows the use of BIM for FM through the creation of 360° panoramic images generated from the model. The work of Hou et al. [15] presents an assembly system for measuring cognitive workload and compares traditional assembly with assembly using AR. Finally, the study by Hou et al. [14] presents an application for the assembly of hydraulic installations assisted by AR, in which a sequence of hydro-sanitary parts was assembled, and the assembly time and execution errors were measured.

4 Methodology

As previously described, the research methodology adopted in this study is based on the DSR method, also known as constructive research. The specific steps followed include an SLR of AR applied to building assembly, maintenance, and operation or manuals; the characterization of AR features that can be applied to the BMIM; the development of proposals for incorporating AR into the BMIM, as well as proposals for incorporating the BMIM into the environment; and, finally, a comparison of user performance when presented with different forms of visualization (tablet computers and smart glasses) through experiments with measurements following the NASA TLX protocol. We adopt the outline presented in Ref. [93], shown in Fig. 21.3 and explained in the following sections.

Fig. 21.3 Designed research methodology based on the DSR method

4.1 Identification of the Problem

The first step of the designed methodology is the identification of the problem in the form of addressing the following research questions: Does the incorporation of AR into BMIM stimulate gains with respect to the tasks being visualized? Does the choice of information delivery device used for AR visualization influence gains? Does the scale of AR visualization (i.e., small scale on paper vs. real scale in the object environment) influence gains? In this study, gain is defined as a measure of workload perceived by the user when completing a task.

4.2 Design and Development

The design of the artifact must consider its internal characteristics and the external context in which it operates, while development corresponds to the process of constructing the artifact itself. For a better understanding of how users perceive the BMIM, a satisfaction survey of apartment owners (in Goiânia, Brazil) was carried out [94]. The satisfaction survey included four questions regarding the BMIM:

  (i) Did you obtain the necessary clarifications when consulting the manual?

  (ii) How satisfied were you with the manual?

  (iii) If you have read or consulted the manual, what were the reasons?

  (iv) Suggest possible improvements to the manual.

Considering the results of this survey and the outcome of the SLR for identifying AR prototypes, one can observe how various stakeholders deal with the BMIM.

From the user’s point of view, the satisfaction survey revealed that respondents rarely consulted the manual (they were indifferent to it), showing disinterest in its use and in the technical information it presented. Another observation was that when users did consult the manual, it was merely to check items related to use and maintenance, to understand equipment operation, and to verify accident prevention measures. In addition, building owners were found to be in favor of a new form of presentation for the manual, with a general tendency toward delivery methods that are interactive, present visual content (e.g., a 3D model of the building), and make use of mobile devices.

Furthermore, the survey found that not all builders and developers create manuals themselves; rather, they hire other professionals or companies to do the job, and the resulting manual conforms to the building’s use and standard [94]. The preferred manual format is, in the vast majority of cases, textual, followed by digital content (computer disk or memory stick). Few companies invest in creating websites or online spaces for their BMIMs, and no builder in the sample population prepares the manual in the form of an application for mobile devices. In verifying the adherence of the manuals to NBR 14037 [20], it becomes evident that the sections of the BMIM containing information on warranties and technical assistance are the most complete.

As for the structuring of the content of the evaluated BMIMs, most do not follow the structure recommended by NBR 14037 [94], although the particular structures adopted are perceived to aggregate information by component and thereby facilitate understanding. Still, the manuals are not very attractive because they do not use visual aids that help increase user comprehension. The structure of most of the evaluated BMIMs provides further evidence that information is often aggregated by building component; this premise is therefore followed in the AR artifact proposed in this research.

In designing the AR artifact for this research, the marker-based tracking method stood out as an option for its accessibility (markers are easy to print and install) and its scalability, since, unlike sensor-based techniques, it does not require sophisticated sensors or calibration steps [19, 41].

Two prototypes are introduced and evaluated in this research: (i) traditional BMIM plus AR and (ii) incorporating BMIM within the physical environment using AR. Markers are assigned to the printed BMIM to facilitate user interaction with both textual and media contents in AR. The scale of visual data presented in AR is chosen to be 1:100 (when visualizations are overlaid on the printed BMIM) or 1:1 (when visualizations are inserted in the physical environment).

4.3 Artifact Evaluation

Dunser and Billinghurst [95] suggest four types of assessment to evaluate AR applications, which are (i) experiments that study human perception and cognition, (ii) experiments that examine user performance in a task, (iii) experiments that examine user collaboration, and (iv) usability and system design evaluation. In this research, evaluation is performed with end users through experiments that examine the user’s performance in a task, as well as contingency heuristics that explain the artifact limits and its conditions of use [93].

We adopt the NASA Task Load Index (NASA TLX) method [96], which is a multidimensional assessment process that provides an overall workload index based on a weighted average rating of six factors, namely, mental demand, physical demand, temporal demand, performance, effort, and frustration. Three of these factors relate to the demands placed on the individual (mental demand, physical demand, and temporal demand), while the other three factors measure an individual’s interaction with the task (performance, effort, and frustration). Each factor is described below:

  • Mental demand: Amount of mental activity and perception required, for example, thinking, deciding, calculating, remembering, looking, searching, etc. Was the task easy or demanding? Simple or complex?

  • Physical demand: Amount of physical activity required, that is, pushing, pulling, turning, controlling, activating, etc. Was the task easy or demanding? Slow or brisk? Slack or strenuous? Restful or laborious?

  • Temporal demand: Time pressure felt due to the rate or pace at which task elements occurred. Was the pace slow and leisurely or rapid and frantic?

  • Performance: Level of success in accomplishing the task objectives. How satisfied was the user with his/her performance in achieving these goals?

  • Effort: How hard did the user have to work (mentally and physically) to achieve his/her level of performance?

  • Level of frustration: How insecure, discouraged, irritated, and stressed versus secure, gratified, content, relaxed, and complacent did the user feel while performing the task?

Pairwise factor comparisons determine the degree to which each of the above six factors contributes to the perception of the workload of each task by the user. Magnitude assessments on each subscale are obtained after each performance of a task. Rankings of the factors considered most important in creating a workload for a task are given higher weight in calculating the overall workload scale, thereby increasing scale sensitivity [96].

The first requirement is that each individual assesses the contribution (i.e., weight) of each factor to the workload of a specific task. There are 15 possible pair comparisons of the 6 scales. Each pair is presented to the individual, who will choose one factor from that pair that contributed most to the workload of the task performed. The number of times each factor is chosen is computed. The score may range from 0 (not relevant) to 5 (most important) [96].

The second requirement is to obtain a numerical rating for each scale that reflects the magnitude of that factor in the given task. The scales are presented on a rating sheet, and the individual responds by marking the desired location (raw rating). Each individual’s overall workload score is calculated by multiplying each rating by the weight assigned to that factor by the individual (adjusted rating). The sum of the adjusted ratings for each task is divided by 15 (the sum of the weights) to obtain the individual’s overall workload score for that task, and the overall workload value is obtained by averaging the individual weighted workloads [96]. Table 21.4 presents the factorial experimental plan for the evaluation of the developed solutions with the participants.

Table 21.4 Experimental plan for individual assessment of proposed solutions
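The scoring procedure described above can be summarized in a short sketch (Python); the weights, ratings, and participant data shown are illustrative only and are not taken from the experiments reported later in this chapter.

```python
# Minimal NASA TLX scoring sketch following the procedure described above.
# Weights come from the 15 pairwise comparisons (they must sum to 15) and raw
# ratings are marked on 0-100 scales; the numbers below are illustrative.
FACTORS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]


def tlx_score(weights: dict, ratings: dict) -> float:
    """Overall workload = sum(rating_i * weight_i) / 15 for one participant/task."""
    assert sum(weights.values()) == 15, "weights are counts of wins over 15 pairings"
    adjusted = {f: ratings[f] * weights[f] for f in FACTORS}
    return sum(adjusted.values()) / 15.0


def group_workload(individual_scores) -> float:
    """Experiment-level workload: mean of the individual weighted workloads."""
    return sum(individual_scores) / len(individual_scores)


# Example participant (hypothetical data):
weights = {"mental": 5, "physical": 1, "temporal": 2,
           "performance": 3, "effort": 3, "frustration": 1}
ratings = {"mental": 60, "physical": 20, "temporal": 35,
           "performance": 30, "effort": 45, "frustration": 15}
print(round(tlx_score(weights, ratings), 1))  # -> 42.0 for this illustrative case
```

The same two functions cover both levels of aggregation used in Sect. 6: per-participant weighted workloads and the total workload of each experiment.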

A pilot experiment is performed with university students, staff, and teachers, as well as shoppers at a building supply store as volunteers. Three manuals are evaluated: (A) traditional BMIM, (B) traditional BMIM plus AR (Manual Augmented Reality or MAR app), and (C) environment incorporating the manual with AR (Living Augmented Reality or LAR app). Two forms of visualization are applied to prototypes B and C, which are AR viewed on a tablet and AR viewed through smart glasses. In each of the five assessment scenarios, the NASA TLX measurement method is applied.

4.4 Explicitness of Learning

This stage aims to explain the lessons learned during the research process, considering the results observed in the evaluation stage. Success and failure points are also described. This approach ensures that research can serve as a reference and as a basis for knowledge generation [97]. In this study, we verify if the proposed solutions can be applied to any BMIM or sections of the manual, identify the limitations of this application, and suggest improvements to enhance knowledge in this subject.

4.5 Generalization to a Class of Problems

This step allows the advancement of knowledge in DSR. Generalization makes the acquired knowledge replicable in other similar situations through the use of inductive reasoning [93]. We further verify if the prototypes of the proposed solution for the BMIM can be used in a variety of assembly, maintenance, and instruction tasks related to building engineering, construction, and operation.

5 Design and Development

5.1 Artifact Design

The tangible outcomes of this research are two applications: (i) Living Augmented Reality (LAR) which incorporates BMIM into an AR environment and (ii) Augmented Reality Manual (MAR) which supplements a traditional paper-based BMIM with AR [98]. Both prototypes include a sample maintenance activity selected from one of the manuals collected in the cataloging and classification stage. The activity chosen for the prototypes is the replacement of a toilet float [94].

The traditional BMIM contains step-by-step instructions (in text format) to perform this activity, which are as follows: (i) carefully open and remove the cover of the coupled box; (ii) detach the floater; (iii) take it to a building materials warehouse to serve as a model for the purchase of a new one; and (iv) with the new float in hand, fit it exactly where the old one was taken from. The same instructions are also used in developing the LAR and MAR applications, each in two versions: one for tablet and one for smart glasses. For both applications, markers are used for tracking and registering the virtual objects in the real world.

In the LAR application, the virtual toilet model is at 1:1 scale and is superimposed on the real toilet, on which the marker is fixed, and the step-by-step floater replacement activity is demonstrated through an AR animation. The MAR application, on the other hand, uses AR to overlay a 1:10 scaled virtual toilet model on the printed manual. For evaluation, all four prototypes (LAR and MAR, each on tablet and smart glasses) are compared against the traditional BMIM (print format).
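As an illustration of how the manual content behind both prototypes can be organized, the hypothetical sketch below (Python) aggregates a maintenance activity by building component, as recommended by the BMIM structure discussed earlier, and links it to the AR assets used to present it; all field names and file paths are assumptions, not the actual LAR/MAR data model.

```python
# Hypothetical BMIM content record for an AR-enhanced manual: the textual steps
# of the printed manual are kept alongside the AR assets that present them.
from dataclasses import dataclass


@dataclass
class MaintenanceActivity:
    name: str
    steps: list          # ordered textual instructions, as in the printed BMIM
    marker_id: str       # printed marker that anchors the AR content
    model_uri: str       # 3D model exported from the BIM authoring tool
    animation_uri: str   # step-by-step animation shown in AR
    scale: float         # 1.0 for in-situ viewing (LAR); reduced scale over the printed manual (MAR)


toilet_float = MaintenanceActivity(
    name="Replace toilet float",
    steps=[
        "Carefully open and remove the cover of the coupled box",
        "Detach the floater",
        "Take it to a building materials warehouse as a model for purchasing a new one",
        "Fit the new float exactly where the old one was removed",
    ],
    marker_id="toilet_maintenance_marker",
    model_uri="models/toilet_coupled_box.fbx",
    animation_uri="animations/float_replacement.fbx",
    scale=1.0,
)
```

Keeping the textual steps and the AR assets in one record makes it straightforward to generate both the traditional manual and the AR versions from the same source.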

The tablet computer used in this study is an Apple iPad Air running Apple’s iOS operating system, with Wi-Fi, 64 GB of storage, a 9.7-inch Retina display, a gyroscope, a 3-axis accelerometer, an ambient light sensor, an A7 chip with 64-bit architecture, an M7 motion coprocessor, approximately 10 hours of battery life, and wireless and Bluetooth connectivity.

The second display device is the Epson Moverio BT smart glasses, running Android 4.0 with a dual-core 1.2 GHz processor, 1 GB of RAM, and 8 GB of internal memory (expandable up to 32 GB with an SD card), a Dolby Digital Plus sound system, wireless and Bluetooth 3.0 connectivity, and integrated sensors that include a VGA camera, a gyroscope, GPS, an accelerometer, a compass, and a microphone.

Figure 21.4 presents the navigation scheme of the AR applications. From the home screen, the user can learn about the app by tapping the “About” button, print and place the marker by following the “Instructions” button, or select “Start” to continue to the application. For easier navigation, the “Start” screen can also be accessed from the “Instructions” and “About” screens; selecting it activates the device’s camera. A guide then appears to help the user aim the camera at the marker, which brings up the virtual model on the marker. On this same screen there are two buttons, “Replace floater” and “Information,” plus a third button that returns the user to the home screen. When accessing “Replace floater,” the user sees a step-by-step animation in AR whose playback can be controlled. By accessing “Information,” the user views technical specifications, warranties, the supplier’s contact information, and technical content about the toilet model and issues related to the hydraulic installation and sewage systems.

Fig. 21.4 Navigation scheme of LAR and MAR applications
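The screen flow of Fig. 21.4 can also be expressed as a simple state machine, sketched below in Python; the screen and button names follow the description above, while the transition table itself is an illustrative assumption rather than the applications’ actual code (which was written in Unity).

```python
# Illustrative state machine for the LAR/MAR screen flow (not the actual app code).
NAVIGATION = {
    "home":         {"About": "about", "Instructions": "instructions", "Start": "ar_view"},
    "about":        {"Start": "ar_view", "Back": "home"},
    "instructions": {"Start": "ar_view", "Back": "home"},
    # The AR view is entered with the camera active and the marker-aiming guide shown.
    "ar_view":      {"Replace floater": "replace_animation",
                     "Information": "technical_info",
                     "Home": "home"},
    "replace_animation": {"Back": "ar_view"},  # step-by-step AR animation with playback controls
    "technical_info":    {"Back": "ar_view"},  # specifications, warranty, supplier contacts
}


def navigate(screen: str, button: str) -> str:
    """Return the next screen for a button press, or stay on the same screen."""
    return NAVIGATION.get(screen, {}).get(button, screen)


assert navigate("home", "Start") == "ar_view"
assert navigate("ar_view", "Replace floater") == "replace_animation"
```

Writing the flow down as a transition table makes it easy to keep the tablet and smart glasses versions consistent with each other.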

5.2 Artifact Development

As shown in Fig. 21.5, the LAR and MAR applications for Apple and Android devices were developed using the following sequence of programs: Revit (https://www.autodesk.com/), 3DS Max (https://www.autodesk.com/), Maya (https://www.autodesk.com/), Adobe Photoshop (https://www.adobe.com/products/photoshop.html), Unity (https://unity.com/), and Vuforia (https://www.ptc.com/en/products/vuforia).

Fig. 21.5 Stages of artifact development

The first step of the development is virtual object modeling, during which the toilet and its components are modeled in Autodesk Revit, a BIM authoring tool. The basic toilet model is obtained from the object library (Fig. 21.6) and is further enhanced by adding a coupling box model created after a commercially available product [99] (Fig. 21.7).

Fig. 21.6 BIM of the toilet in Revit

Fig. 21.7 Floater engine reference: Astra model (a) and internal engine breakdown in Revit (b)

In the second step, the BIM model (toilet plus coupling box mechanism) is exported to 3DS Max in .fbx format for the addition of textures and materials (Fig. 21.8). It is noteworthy that exporting the BIM model to 3DS Max removes the model’s non-geometric information. In the third step, the 3D model is exported to Maya in .3DS format to create the floater replacement animation. Here, the front face of the coupling box is made transparent to allow better visualization of the internal components.

Fig. 21.8 Adding texture and materials in 3D Studio Max

In the fourth step, the user interface (e.g., interaction buttons, the “Instructions” screen, icons), as well as the markers, is created in Adobe Photoshop. In parallel (fifth step), programming and the insertion of AR features into the 3D model are performed using Unity and Vuforia: the animation is exported in .fbx format to Unity/Vuforia to create the navigation and add the AR features. When exporting the animation to Unity/Vuforia, some model textures and materials are lost, so new textures are applied to the model to compensate for this loss of information.

Concerning marker design, the characteristics considered were an asymmetrical pattern, high contrast, no repetition of patterns, and richness of detail. In particular, as shown in Fig. 21.9a, the image in the center of the magnifying glass represents the virtual content that is displayed in AR, while the symbol under the magnifying glass identifies the action (i.e., maintenance or visualization). For example, the image of a hammer and a wrench corresponds to a maintenance activity, while the image of an eye corresponds only to information visualization. Similarly, the marker illustrated in Fig. 21.9b is designed to display material or wall-covering information (a wall image in the center of the magnifying glass and an eye at the bottom) without requiring the user to perform any task. All markers have the same outline, which resembles a mobile phone, indicating that they are intended to be viewed via mobile devices. Since the object of interest in both the LAR and MAR applications is the toilet, the marker illustrated in Fig. 21.9a is used in both applications. The Vuforia Developer Portal Target Manager is used to rate the markers on a five-point scale, where one star corresponds to poor marker quality and five stars represent excellent marker quality. Figure 21.10 shows the results of this rating, in which the points used for feature tracking are highlighted in yellow; the designed marker is rated five stars. Both the LAR and MAR applications are developed with the extended tracking feature, which utilizes environmental features to enhance tracking performance and maintain the visualization of virtual objects even when marker visibility is interrupted. This feature is especially recommended for architectural objects that are viewed in scale and perspective.

Fig. 21.9 LAR/MAR current (a) and future (b) markers

Fig. 21.10 LAR/MAR marker rating
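A rough intuition for why such a marker earns a high rating can be sketched by counting strong corner features in the marker image, for example with OpenCV as below. This is only a hypothetical proxy for trackability and is not Vuforia’s actual rating algorithm; the file name is an assumption.

```python
# Rough, hypothetical proxy for marker "trackability": count strong corner features.
import cv2


def feature_richness(marker_path: str, max_corners: int = 500) -> int:
    gray = cv2.imread(marker_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(marker_path)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=5)
    return 0 if corners is None else len(corners)


# A marker with asymmetric, high-contrast, non-repeating detail should return
# many well-spread corners; a symmetric or low-contrast design returns few.
print(feature_richness("lar_mar_marker.png"))  # hypothetical file name
```

The design criteria listed above (asymmetry, contrast, non-repetition, rich detail) all tend to increase the number and spread of such feature points, which is what the tracking engine relies on.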

Figure 21.11 shows snapshots of the first LAR design test on the tablet computer. This test served to adjust materials (texture and transparency in the model visualization) and to refine the user interface (screens, step-by-step visualization of the floater replacement, positioning of buttons). Figure 21.12 illustrates another LAR test on the tablet, in which the AR model view is embedded in the real environment and the scale of the virtual model is adjusted to match it. Finally, checks of ambient lighting interference and marker positioning are carried out.

Fig. 21.11 LAR tests, tablet version (material adjustment)

Fig. 21.12 LAR tests, tablet version (scale adjustment)

Figure 21.13 shows the next step of the LAR tablet version, with the graphical interface and the interaction buttons implemented. In addition, to improve the visibility of the textual information at the bottom of the screen corresponding to the float replacement, a blue banner with 70% transparency is added behind the text. Lastly, Figs. 21.14 and 21.15 show screenshots of the final LAR tablet version.

Fig. 21.13 LAR tests, tablet version (GUI adjustments)

Fig. 21.14 Application screen of LAR tablet version

Fig. 21.15 AR view screens of LAR tablet version (https://youtu.be/7rDDuKeEiwk)

The design of the MAR application follows a similar procedure but requires adjustments to be made to some displayed textual information, scale and placement of the 3D model (model must be rotated to align with the printed marker), and marker scale. The final version of the MAR application (tablet version) is illustrated in Fig. 21.16.

Fig. 21.16 Application screen of MAR tablet version (https://youtu.be/T90HXxrV5yU)

Similarly, the smart glasses versions of both applications are developed. Snapshots of the final versions of the LAR and MAR applications (smart glasses versions) are shown in Figs. 21.17 and 21.18, respectively.

Fig. 21.17 Application screen of LAR smart glasses version

Fig. 21.18 Application screen of MAR smart glasses version

5.3 Artifact Publication

In this phase, both applications were made available for the Android and iOS operating systems. To ensure that the developed applications met the requirements of each system, they first had to undergo a review and approval stage. Initially, the TestFlight tool (on iOS) was used to publish the trial (beta) version of the applications by inviting selected users; upon receiving an email invitation, a user could download and run the application. The TestFlight tool was linked to iTunes Connect (Fig. 21.19). Publishing the applications in beta enabled testing, corrections and adjustments, and preliminary user evaluation.

Fig. 21.19 Application publishing of LAR tablet version (for iOS)

In the Android system, the Google Play Console platform was used, which allowed application management and testing through a link sent to registered users (Fig. 21.20). A beta version of the application was published to check for errors and inconsistencies before official publication (Fig. 21.21).

Fig. 21.20 Application publishing of LAR and MAR tablet versions (for Android)

Fig. 21.21 Application testing of LAR and MAR tablet versions

6 Evaluation and Findings

The artifact evaluation phase was carried out in a building material store (Fig. 21.22) and at the School of Civil Engineering, Architecture and Urbanism – FEC (LAMPA Design Methods and Automation Research Lab) (Fig. 21.23). At the beginning of each session, the participant reviewed and signed an informed consent form and answered a profile identification questionnaire (age, gender, education level, and level of familiarity with the activity, the manual, and the technology). As shown in Fig. 21.24, participants then completed a building maintenance activity (i.e., changing a toilet float mechanism) using information delivered by one of the five types of manual: the traditional (print) BMIM, the MAR application (tablet or smart glasses version), or the LAR application (tablet or smart glasses version). At the conclusion of the experiment, participants completed a NASA TLX questionnaire to report measures of mental demand, physical demand, temporal demand, performance, effort, and frustration. The activity lasted 15–20 min on average, including the completion of the questionnaires, and each individual performed only one experiment. In total, data were collected from 100 participants, 20 individuals in each experiment (information delivery system): (A) traditional BMIM, (B) MAR application (tablet and smart glasses), and (C) LAR application (tablet and smart glasses). All individuals were able to complete the assigned activity.

Fig. 21.22 Application evaluation at the building material store

Fig. 21.23 Application evaluation at the university campus

Fig. 21.24 Step of float replacement

6.1 General Workload Measurement: NASA TLX

For workload measurement, the following evaluations are performed: (i) characterization of the total sample, (ii) workload and factor analysis, and (iii) workload analysis considering the perception filters.

6.1.1 Total Sample Characterization

To characterize the entire sample of 100 participants, the profile identification questionnaire was adopted, which asked for information such as age, gender, education level, and level of familiarity with the activity, manual, and technology. The age range of the participants is illustrated in Fig. 21.25a, which reveals that 42% of the sample is 18–24 years old, 15% of individuals are 25–29 years old, 10% are 30–34 years old, 8% are 35–39 years old, 8% are 40–44 years old, and the remaining are older than 44 years (including 2% who are 60 or older). Although more than half of the participants (57%) are between 18 and 29 years old, the sample covered all age groups. Also, as shown in Fig. 21.25b, of the entire sample, 42% are female, and 58% are male.

Fig. 21.25 Age range of participants (a) and gender distribution (b)

Also, according to Fig. 21.26, the analysis of participants’ education levels reveals that 2% of the individuals have incomplete primary or secondary education, while 12% have completed high school. Fifty percent have attended undergraduate-level classes without completing them, and 19% have completed this level of education. Finally, 8% answered that they have completed undergraduate studies with a professional specialization, and 9% have completed graduate-level studies. Although the participants present all levels of education, more than half have complete or incomplete higher education.

Fig. 21.26 Participants’ education level

Moreover, participants indicated their level of familiarity with the activity by answering a question about whether or not they have performed the same task before. For this question, 66% of individuals stated that they had not exchanged a toilet coupling mechanism before, and 27% indicated that they had performed this activity in the past. The remaining 7% had observed someone else performing this activity (Fig. 21.27a). Similarly, when asked if they had ever consulted a BMIM, 82% of the individuals answered that they had not done so, while only 18% had used such manual before. This data reveals that the vast majority of participants were unfamiliar with a BMIM (Fig. 21.27b). Finally, when asked about their familiarity with technology, 96% of users answered that they are familiar with a tablet computer, while 41% had familiarity with AR applications, and 24% were familiar with smart glasses (Fig. 21.27c).

Fig. 21.27 Level of familiarity with the activity (a), BMIM (b), and AR technology (c)

In summary, most of the sample is composed of male individuals aged 18–24 with incomplete higher education who have never performed the activity nor consulted a BMIM. Also, the majority have experience with tablet devices, are relatively familiar with AR applications, and have little or no experience with AR glasses.

6.1.2 Workload and Factor Analysis

Initially, the total workload (TW) was compared across the five experiments (Fig. 21.28). The experiment with the highest workload was the task supported by the traditional manual, with a TW of 35.7 points, followed by the paper-based BMIM plus AR (MAR application) visualized with smart glasses, with a TW of 32.7 points. The manual incorporated into the environment with AR viewed through smart glasses (LAR application) achieved a TW of 29.5 points, followed by the manual incorporated into the environment with AR viewed on a tablet (LAR application) with a TW of 28.5. Finally, the best performance was achieved with the paper-based BMIM plus AR (MAR application) visualized on a tablet, with a TW of 26.0 points.

Fig. 21.28 Total workloads

These values indicate that experiments conducted with MAR and LAR applications using tablet devices have reported a lower workload than experiments using smart glasses. Therefore, it can be inferred that the way AR is visualized or the level of familiarity with the visualization device influences user performance.

In turn, individuals who used the paper BMIM plus AR (MAR application) or the BMIM incorporated into the environment with AR (LAR application), regardless of the device used, reported a lower workload than individuals who used the traditional BMIM. This demonstrates the better performance of the BMIM when assisted by AR.

A comparison of all factors (i.e., mental demand, physical demand, temporal demand, performance, effort, and level of frustration) across the experiments reveals that the mental demand of the paper BMIM plus AR (MAR application) with smart glasses reached the highest index (175.4 points) (Fig. 21.29). This could be due to the limitations of the smart glasses used: the Moverio BT does not have a camera that produces good image quality, and the small size of the virtual model makes it difficult to view the task of changing the float.

Fig. 21.29 Workload considering demand factors

On the other hand, the lowest mental demand corresponds to the BMIM incorporated into the environment with AR (LAR application) viewed on a tablet device. The decisive factor is the visualization of the virtual model at 1:1 scale overlaid on the real model, which allows an immediate association with the task to be performed and confirms the influence of the type of visualization on user performance. In addition, the camera and screen of the tablet device are of good quality. Comparing all the factors, mental demand is the one to which users attached the highest degree of importance.

Regarding physical demand, the paper BMIM achieved the lowest score, with 46.4 points, while the BMIM incorporated into the environment with AR (LAR application) viewed on a tablet reached 77 points. It is inferred that the paper manual performed better in terms of physical demand because it did not require users to handle a display device (Fig. 21.29).

In the analysis of temporal demand, the BMIM incorporated into the environment with AR (LAR application) viewed on a tablet reached the lowest index, with 51.9 points. This demonstrates that users are faster in visualizing the activity when the virtual content is superimposed on the real object. In contrast, the traditional BMIM plus AR (MAR application) with smart glasses reached the highest temporal demand (85.9). It can be said that viewing the object at a smaller scale and outside the physical location of the task interfered with the time spent by users (Fig. 21.29).

In the performance analysis, individuals stated how satisfied they were with their execution of the task; a lower score corresponds to better performance and vice versa. The traditional BMIM plus AR (MAR application) on tablet scored 41.9 points, showing the best performance, while the traditional manual achieved the worst performance index, with 148.8 points. The paper BMIM plus AR (MAR application) viewed with smart glasses and the BMIM built into the environment with AR (LAR application) on tablet scored very close to each other, with 83.5 and 87.3 points, respectively (Fig. 21.29).

In the analysis of the effort factor, the traditional manual obtained a score of 54.3 points, while the paper BMIM plus AR (MAR application) viewed on tablet and with smart glasses scored 54.8 and 59.1 points, respectively. The BMIM built into the environment with AR (LAR application) viewed with smart glasses achieved 64.4 points, and the same application viewed on tablet presented the highest effort score, with 82.5 points. Since the traditional manual does not require the handling of peripheral display devices, it would be expected to yield the lowest effort score, as observed; nevertheless, the differences among all groups were not significant (Fig. 21.29).

In the analysis of the level of frustration, the traditional manual frustrated individuals the most in performing the task, in contrast to the AR-supported conditions (Fig. 21.29). Participants who used the traditional BMIM achieved the highest score with 50.9 points, while the scores of those who used AR were significantly lower (in some cases by more than half).

In summary, the analysis of the TW and of the individual workload factors shows that the paper BMIM plus AR (MAR application) viewed on tablet is the best solution, reaching the lowest TW score. However, the score differences with respect to the second- and third-place solutions are not significant, which also indicates the potential of the BMIM incorporated into the environment with AR. In contrast, using the traditional manual leads to the worst performance. Therefore, we conclude that incorporating AR into the BMIM yields gains regarding the proper use of the manual.
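The ranking summarized here follows directly from the TW values reported in Fig. 21.28; the short snippet below simply reproduces that ordering.

```python
# Rank the five experimental conditions by total workload (lower is better),
# using the TW values reported in Fig. 21.28.
tw_scores = {
    "Traditional BMIM (paper only)":            35.7,
    "Paper BMIM + AR (MAR), smart glasses":     32.7,
    "BMIM in environment (LAR), smart glasses": 29.5,
    "BMIM in environment (LAR), tablet":        28.5,
    "Paper BMIM + AR (MAR), tablet":            26.0,
}
for rank, (setup, tw) in enumerate(sorted(tw_scores.items(), key=lambda kv: kv[1]), start=1):
    print(f"{rank}. {setup}: TW = {tw}")
```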

6.2 Workload Analysis Considering Perception Filters

The third type of analysis considers the influence of perception filters on the calculated workload. The filters applied were age, gender, education level, and familiarity with the maintenance task performed, with AR technology, and with the associated devices. With respect to the representativeness of the experiment, the reference group was taken to be the Brazilian population (208,317,492 people); for a sample of 100 individuals and a confidence level of 90%, this implies a margin of error of 8.25% on the results. The coefficient of determination between workload and age in a linear regression is 0.469 (Fig. 21.30). Therefore, it can be inferred that the age filter has a probable influence on workload perception: the older the individual executing the task with the support of AR, the lower the perceived workload.

Fig. 21.30 Average workload according to age filter
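For transparency, the two statistics cited above can be reproduced as sketched below, assuming the usual finite-population margin-of-error formula (p = 0.5, z ≈ 1.65 for 90% confidence) and a simple linear regression of average workload on age; the (age, workload) pairs are hypothetical placeholders rather than the study's data.

```python
# Sketch, under stated assumptions, of the margin of error and the R² reported above.
import math
import numpy as np

# (i) Margin of error for n = 100 sampled from N = 208,317,492 (90% confidence)
N, n, z, p = 208_317_492, 100, 1.65, 0.5
fpc = math.sqrt((N - n) / (N - 1))              # finite-population correction (~1 here)
margin = z * math.sqrt(p * (1 - p) / n) * fpc
print(f"margin of error: {margin:.2%}")          # ~8.25%

# (ii) Coefficient of determination (R²) of workload vs. age
# (placeholder values; the study reports R² = 0.469 for its own data)
age = np.array([19, 22, 25, 28, 33, 38, 45, 52])
workload = np.array([36, 41, 28, 35, 24, 33, 22, 27])
slope, intercept = np.polyfit(age, workload, 1)
r_squared = np.corrcoef(age, workload)[0, 1] ** 2   # R² = r² for simple linear regression
print(f"slope: {slope:.2f}, R²: {r_squared:.3f}")
```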

As for the gender filter, the TW of female participants was 38.87 points, above the average value, while the TW of male participants was 27.97 points, below the average value. Although these values are relatively close, the difference suggests that performing the task was easier for male participants than for female participants (Fig. 21.31), indicating a gender influence on the execution of the task supported by AR.

Fig. 21.31 Average workload according to gender filter
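The per-filter averages discussed in this subsection amount to group means of the individual TW scores; a minimal sketch, assuming a tabular dataset with hypothetical column names and values, is shown below.

```python
# Group means of TW per perception filter (columns and values are placeholders).
import pandas as pd

df = pd.DataFrame({
    "gender":    ["female", "male", "male", "female", "male"],
    "education": ["high school", "graduate", "high school", "graduate", "incomplete high school"],
    "tw":        [38.9, 27.5, 29.1, 39.2, 2.5],
})

overall_mean = df["tw"].mean()
by_gender = df.groupby("gender")["tw"].mean()
by_education = df.groupby("education")["tw"].mean()

print(f"overall average TW: {overall_mean:.2f}")
print(by_gender.round(2))       # compare each group against the overall average
print(by_education.round(2))
```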

Considering the level of education, individuals with a high school education presented an average workload of 34.61 points, higher than the overall sample average of 30.48 points. In comparison, individuals with incomplete high school and incomplete elementary school education obtained TW scores of 2.50 and 15.67 points, respectively, below the overall sample average. Individuals with other levels of education remained close to the overall average (Fig. 21.32). This analysis suggests that the task was easier for individuals with incomplete high school and incomplete elementary school education than for the other participants. However, it must be noted that participants with these education levels were not uniformly distributed among the experiments.

Fig. 21.32 Average workload according to level of education filter

Regarding the level of familiarity with the float replacement activity, individuals who had previously performed this activity presented a lower workload (24.65) than those who had never performed it before (32.16). In turn, individuals who had only observed someone else performing the activity scored above the average workload (36.69). The interesting observation here is that second-hand knowledge (gained by observing someone else perform the activity) showed no positive influence on participant performance, whereas first-hand knowledge (gained through practice) reduced the perceived workload (Fig. 21.33).

Fig. 21.33 Average workload according to previous task knowledge filter

Considering previous experience with the BMIM and its relation to the average workload, individuals who had never consulted the manual reached a lower workload (29.93) than those who had previously consulted it (32.83). However, these values are very close, so it can be inferred that prior consultation of the BMIM does not significantly influence performance (Fig. 21.34).

Fig. 21.34 Average workload according to BMIM consultation filter

Finally, considering the level of familiarity with the technology, individuals with experience with AR glasses and those familiar with AR in general had a lower workload (26.22 and 27.52, respectively) than those who were familiar with tablets (30.42). Even though these values are very close to the average workload of 30.48, familiarity with AR technology seems to reduce workload (Fig. 21.35).

Fig. 21.35 Average workload according to device familiarity filter

7 Generalization

The proposed artifacts apply to a class of problems that aim to deploy AR specifically for assembly, maintenance, and instruction tasks. This study finds that the incorporation of AR technology can contribute significantly to the corrective and preventive maintenance described in NBR 5674 [100]. The artifacts derived from the DSR method are presented as constructs, a model, a method, and prototypes. The constructs are the parts of the model that schematically represent the application. The model traces the relationships between the constructs, namely the maintenance component, the AR marker, and the AR visualization, as shown in Fig. 21.36. The method describes the steps necessary to develop the AR application (Fig. 21.37). Figure 21.38 presents the two prototypes resulting from the model and method developed in this research.
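As an illustration only, the three constructs and their relationships in the general model could be represented schematically as follows; the class and field names are hypothetical and do not correspond to the actual prototypes.

```python
# Hypothetical schematic of the constructs: maintenance component, AR marker,
# and AR visualization, and the relationships traced by the general model.
from dataclasses import dataclass

@dataclass
class MaintenanceComponent:
    name: str                  # e.g., "toilet flush float"
    instruction: str           # BMIM maintenance/repair instruction text

@dataclass
class ARMarker:
    marker_id: str             # fiducial or QR code placed on/near the component
    component: MaintenanceComponent

@dataclass
class ARVisualization:
    marker: ARMarker
    overlay: str = "3D model"  # virtual content shown when the marker is tracked
    scale: str = "1:1"         # 1:1 overlay was the decisive factor in the LAR tests

    def render(self) -> str:
        c = self.marker.component
        return f"Overlay {self.overlay} at {self.scale} for '{c.name}': {c.instruction}"

float_valve = MaintenanceComponent("toilet flush float",
                                   "Close the valve, unscrew and replace the float.")
viz = ARVisualization(ARMarker("MARKER-001", float_valve))
print(viz.render())
```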

Fig. 21.36 General model by activity

Fig. 21.37 BIM and AR model process applied to operation and maintenance

Fig. 21.38 AR application prototypes developed and evaluated in this study

Figure 21.39 identifies the possible points at which AR can be incorporated to act as a BMIM facilitator. Items marked with a blue magnifying glass symbol indicate that AR can enhance the existing information by overlaying virtual objects, while items marked with a gray magnifying glass symbol indicate that AR can enhance the existing information by overlaying textual information. Whenever a component demands an instructional task, the experiments demonstrated the benefit of accomplishing that task with the support of AR visualization.

Fig. 21.39 AR incorporation points in the complete BMIM

The proposed general model for incorporating AR into a task (Fig. 21.36) has been mapped throughout a complete BMIM, considering the identified insertion points (Fig. 21.39). This scheme guides future implementations of AR in all parts of the BMIM (Fig. 21.40). To achieve this goal, markers, sensors, or other types of tracking devices can be embedded in different building components, allowing the seamless presentation of BMIM information and instructions throughout the building.
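As a sketch of this idea, and under the assumption that each embedded marker or sensor simply resolves to the relevant BMIM section and type of AR content, such a mapping could look as follows; identifiers and section names are hypothetical.

```python
# Hypothetical lookup from embedded markers/sensors to BMIM sections and AR content.
from typing import TypedDict

class BMIMEntry(TypedDict):
    section: str
    ar_content: str   # "3D overlay" (blue symbol) or "textual overlay" (gray symbol)

BMIM_INDEX: dict[str, BMIMEntry] = {
    "MARKER-001": {"section": "Hydraulic systems / float replacement", "ar_content": "3D overlay"},
    "MARKER-002": {"section": "Electrical panel / breaker labels", "ar_content": "textual overlay"},
}

def lookup(marker_id: str) -> str:
    entry = BMIM_INDEX.get(marker_id)
    if entry is None:
        return "No BMIM instructions registered for this marker."
    return f"{entry['section']} -> show {entry['ar_content']}"

print(lookup("MARKER-001"))
```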

Fig. 21.40 AR incorporation framework throughout BMIM

8 Conclusion

Considering the three forms of evaluation performed through workload measurement, it is concluded that the proposed artifact met the requirements desired for its application. In the analysis of workload and its associated factors, the BMIM on paper plus AR viewed on a tablet device was identified as the best solution, while the traditional BMIM led to the worst performance. Therefore, the insertion of AR visualization into the BMIM can stimulate gains (i.e., better performance) regarding the use of the manual.

Regarding the analysis of the individual workload factors, the paper BMIM plus AR (MAR application) viewed on tablet achieved the best ratings for performance and frustration. The traditional BMIM (with no AR) achieved the best rating in terms of physical demand, and the BMIM incorporated into the environment with AR (LAR application) viewed on tablet achieved the best temporal and mental demand scores. As for the worst ratings, the traditional BMIM was the worst in frustration level and performance, and the BMIM incorporated into the environment with AR viewed on tablet was the worst in physical demand. The manual incorporated into the environment viewed with smart glasses reached the worst rating in mental demand, temporal demand, and effort. This analysis highlights areas of improvement for future technology development to maximize user performance and stimulate gains.

In the context of the workload analysis considering perception filters, we observed that, in general, age may influence workload when the task is performed with the support of AR. It was also found that male participants found the task easier and that individuals with lower education levels showed a more favorable workload outcome. In addition, prior experience with the task and familiarity with the technology favored performance by decreasing workload. However, prior consultation of the BMIM does not seem to influence user performance.

The analysis presented in this study points to potential improvements in the BMIM through the incorporation of AR. The results highlight the influence of individual factors on workload for each type of implementation and indicate to facility and building managers where improvement efforts should be invested. Considering the two types of information delivery devices, the tablet showed the better performance. Furthermore, the insertion of AR technology, regardless of deployment method (distributed in the environment or overlaid on the paper manual) or delivery device (tablet or smart glasses), improves the user experience with the BMIM. Also, the very close workload scores resulting from the execution of the task supported by the BMIM-enhanced environment indicate the potential of the BMIM to be aligned with Industry 4.0 and the Internet of Things. This assertion is reinforced by Bock [101], who argues that construction automation technologies are rapidly merging with the built environment, becoming part of buildings, components, and furniture.

9 Data Availability

All data, models, and code that support the findings of this study are available from the authors upon reasonable request, except for proprietary or confidential data, which may only be provided with restrictions (e.g., in anonymized form).

Supplementary Video Material

Examples of the applications described in this chapter can be seen in the following videos: https://www.youtube.com/watch?v=7rDDuKeEiwk and https://www.youtube.com/watch?v=T90HXxrV5yU