1 Introduction

Today’s manufacturing industry is undergoing a fourth revolution, the so-called Industry 4.0. In this technological revolution, digital interconnection makes machines more intelligent and autonomous [1]. However, human beings still play a very important role in manufacturing systems and affect the performance of such systems [2, 3]. The flexibility of production and the diversity of demand place higher requirements on human skills [4]. Thus, human-centered design is currently a popular approach for improving the design of production processes and the competitiveness of products in the context of Industry 4.0 [5]. Modern industrial products often involve complex mechanical structures, especially in large machinery domains. Although today’s machines are highly automated, manual operations by personnel still play a key role in industrial maintenance and assembly (IMA) processes to maintain or add value to the product [6], especially for small-batch products. Thus, human factors/ergonomics (HFEs) are a significant design element for complex products [7] because suitable human–machine interfaces enable engineers or users to easily assemble or maintain products and keep them running while avoiding cost increases and occupational safety threats.

The ergonomic considerations in IMA usually include working posture and accessibility factors. However, it is not easy to design good HFEs for IMA with the traditional philosophy and methods of product development. Aviation products are a typical example. For such products, the beneficiaries of HFE design are end users such as flight crews and maintenance staff. Both types of end users are affected by ergonomic issues because the human–machine interfaces with which they interact directly affect their work. The design of HFEs for daily users such as pilots has received considerable attention. The modern cockpit is full of HFE considerations, and in general, the pilot can operate an airplane without significant interaction-related challenges. In contrast, as another important class of end users, assemblers and maintainers often face a poorly designed workplace in which to conduct heavy assembly or maintenance tasks [8]. Maintainers spend considerable time solving problems caused by design defects related to accessibility [9]. Furthermore, it should be noted that poor ergonomics in IMA is an important cause of air crashes [10]. For example, in 2007, the aviation accident of China Airlines flight CI120 was caused by the loosening of a nut due to poor accessibility design, which led to a fire and, ultimately, the explosion of the aircraft [11].

There are several reasons for the dilemmas facing ergonomic design for IMA. First, the human–machine interface of a product should be well defined in the early design stage, the period in which designs can be changed and improved most feasibly. However, ergonomic evaluations are usually conducted only once a physical prototype has been manufactured. Even if HFE-related design problems are found at this stage, it is likely that no changes can be made due to the design freeze and unacceptable costs. Second, identifying and addressing human factor issues requires multidisciplinary knowledge. This knowledge can be either explicit or tacit [7]. A designer cannot be expected to be an ergonomics expert when designing a product. In addition, designers, engineers, and users often have different understandings of products. Consequently, a situation often arises in which a designer is satisfied with a design, but the maintenance staff complains because they have to use inconvenient maintenance tools [12]. Therefore, ergonomic issues for IMA should be considered part of systems engineering [13]. Concurrent engineering (CE) is required to integrate different teams of designers, ergonomic specialists, and engineers to design a system. This integration enables information and knowledge sharing among the different teams and allows the requirements of a product design to be identified while a design change is still feasible, rather than in the inflexible manufacturing stage.

As industrial products become increasingly sophisticated, the abovementioned ergonomic problems in IMA still have not been effectively solved. However, the ongoing fourth industrial revolution (Industry 4.0) may offer solutions to this dilemma. The concept of Industry 4.0 originates from a German proposal supported by the federal government, academic institutions, and manufacturing companies [14]. It soon became a popular focus in both practice and academia [15]. The aim of Industry 4.0 is to develop advanced production and manufacturing systems to improve productivity and effectiveness and add value throughout the entire product lifecycle by integrating a series of emerging technologies [16,17,18]. To date, smart manufacturing has been the main strategy of Industry 4.0 [19]. To achieve this goal, one of the key challenges is to link a physical system to its counterpart in virtual space, called a digital twin [20]. Such a digital counterpart of an actual product or process allows designers to simulate the manufacturing or maintenance process of the product before it physically exists. By creating a digital twin of a physical system, corresponding data can be integrated into the virtual space and processed using the techniques of big data and analytics. These data can then be used to achieve improved value throughout the product lifecycle [21]. Several front-end technologies, such as virtual reality (VR), the Internet of Things (IoT), and big data and analytics, have been adopted to realize the digital twin and make manufacturing “smart” [14].

Digital twins have been successfully applied in the usage phase of products. However, their applicability in design is still weak. The reasons are twofold: the lack of effective methods to build the digital twins of products or processes in the design stage and the lack of physical prototypes. VR has been considered an important supporting technology for building digital twins [14]. Based on the concepts of digital twins and VR technologies, this study proposes a VR-based method to help designers anticipate ergonomic issues in manual IMA operations, aiming to improve the ergonomic level of the maintenance workers’ workplace. With this method, designers can conduct immersive simulations in their familiar design environment and quantitatively evaluate ergonomics based on real data. Referring to the concept of the digital twin, a complete virtual counterpart of a real IMA scenario composed of products, personnel, and resources is built in the early stage of design with multidisciplinary knowledge. By applying VR technologies, engineers can “anticipate” the manual operation process in an immersive environment. The motion data of users gathered by sensors drive the virtual environment (VE) to run similarly to real IMA processes. Then, based on simulation data and evaluation methods, ergonomic factors such as working posture and accessibility can be evaluated and optimized effectively. In the remainder of this paper, a brief literature review on ergonomic assessment methods and VR technologies in the IMA process is first presented to better contextualize the proposed method. Then, the proposed method is presented in detail. A pilot study is reported to demonstrate the feasibility of the proposed method. Limitations and future work are then discussed, and finally, the conclusions are presented.

2 Literature review

The purpose of this literature review is not to carry out a systematic and comprehensive survey on related topics but rather to contextualize the proposed framework against the background of ergonomics and VR.

2.1 HFE/ergonomic designs for the IMA process

According to the international standard ISO 6385:2016, “human factors” and “ergonomics” have similar definitions, referring to a “scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles and methods to design to optimize overall human performance.” HFE issues exist widely in the processes of manufacturing, assembly, and maintenance. In the early days, HFE design was conducted by means of physical experiments involving physical models, leading to cost increases and market launch delays [22]. Advances in computer-aided technologies (CAX) changed the way such work was conducted. Designers began to use effective computer tools to assist in their work. By building digital human models (DHMs) to simulate IMA tasks, designers can conduct preliminary analyses of HFEs [23]. In mid-1995, the F-16 program of Lockheed Martin Tactical Aircraft Systems (LMTAS) used digital mockups and DHMs to perform HFE design evaluations within the short schedule required by the program. Abshire and Barron [24] pointed out the advantages of using DHMs to evaluate maintainability and human factor design in the F-16 program: cost reduction/avoidance, shortened schedules, standardization of analysis and evaluation, and greater exchange of information. Subsequently, many studies have focused on HFE design for practical engineering by using DHMs and CAX tools. Feyen et al. reported an initial study on the ergonomic design of an automobile assembly process [25]. Lawson et al. studied how to use DHMs and VR to optimize the ease of entry and exit for a vehicle [26]. Jung et al. proposed a two-step method of generating a group of DHMs of different sizes to properly accommodate the design requirements of a target population [27]. There are multiple benefits of applying DHMs in human factor design, including improved design efficiency and enhanced customer satisfaction [26, 28,29,30].

In addition to DHMs, working posture evaluation methods are also important for ergonomic design. Several methods help engineers evaluate human posture, force, spinal load, and other ergonomic indicators [31], such as the National Institute for Occupational Safety and Health (NIOSH) lifting equation [32], the Rapid Upper Limb Assessment (RULA) [33], and the Ovako Working posture Analysis System (OWAS) [34]. Each has its own applicable scope and features. Bortolini et al. [4] summarized the characteristics of these evaluation methods (Table 1). OWAS is a widely used method for describing working posture. However, its description of posture is qualitative and lacks an accurate characterization of intermediate postures. RULA and the Rapid Entire Body Assessment (REBA) have similar implementation steps and characteristics. Compared with REBA, RULA can evaluate wrist rotation and upper limb load [35]. On the other hand, REBA provides a more detailed and comprehensive evaluation of lower limb comfort. Studies from different fields indicate that RULA and REBA are two effective methods for evaluating working posture, with the former being more rigorous and indicating a higher risk level [35,36,37]. The European Assembly Worksheet (EAWS) can thoroughly evaluate the ergonomics of working conditions and includes four evaluation modules: basic postures/postures and movements of the trunk and arms, action forces, manual material handling, and upper limb load in repetitive tasks. However, many items on the EAWS form need to be estimated and measured manually by experts, which makes EAWS time-consuming and quite subjective in practical use [38].

Table 1 Features of the ergonomic assessment methods summarized by Bortolini et al. [4]

The DHM and ergonomic evaluation methods mentioned above have been integrated into desktop simulation software, such as Siemens Jack and DELMIA, as analysis tools. As IMA tasks become increasingly complex, this kind of desktop-based simulation and analysis can no longer fully meet the requirements of human-centric design. A complete IMA task is a dynamic process that consists of many continuous subtasks. Based on DHMs and animation editing, desktop software can simulate only a limited number of human actions, and the reproduction of tasks is fragmentary and static [39]. In addition, the definition of key actions in the software must be completed by users, which requires considerable experience and knowledge. For the same maintenance task, different engineers may define different working postures, which introduces uncertainty and subjectivity into the simulation and analysis [40]. Lawson et al. [26] pointed out that DHMs lack physical characteristics, which can also lead to low validity and reliability of the evaluation results. Savin [41] reported that significant underestimation of exertion forces may occur when using the default values given by the software in DHMs.

2.2 VR technologies in ergonomic design for manual assembly and maintenance

VR is considered an enabling technology of Industry 4.0. It can help users quickly create a virtual replica of a product, IMA process, or workshop in an immersive environment to achieve the purpose of virtual manufacturing (VM) [39]. As one research direction, the adoption of VR for ergonomic design in manual IMA processes has been studied and applied in practical industrial scenarios in recent years to help designers make far-sighted decisions [42, 43]. In this scenario, VR can quickly simulate an IMA process in an immersive environment, in which users become participants rather than “bystanders.” Driven by motion capture (MOCAP) data from users in the VR simulation, human actions are defined accurately and naturally, and the simulation can be conducted in real time [44]. Evaluating accessibility and working posture are two main application fields. Aromaa et al. [45] designed a comparative experiment to test an augmented reality (AR) system and a VR system for the ergonomic design of the maintenance platform of a stone crusher. The results show that the VR system is more valuable for evaluating visibility, reachability, and tool use. Keller et al. [46] proposed a VR platform for optimizing the reachability and component integration of a fusion reactor, which consists of a stereoscopic screen, a haptic device for force feedback, a motion tracking system to capture the point of view, and a flystick for natural interaction. The platform can share data with CAD software, so engineers can perform design reviews with a full-scale virtual prototype in the VE. Using the haptic device and flystick, assembly and maintenance operations can be simulated, and assembly clearance and clash detection can be checked effectively. In addition, Louison et al. [9] used a VR system to analyze accessibility and working posture for the assembly and maintenance of fusion reactors; the benefits of active body localization and tactile feedback for ergonomic evaluation were verified. To address the difficulty of applying VR more widely owing to the lack of a structured method, Peruzzini et al. [39] proposed a VR-based virtual manufacturing simulation method. This method can simulate the assembly process of factories in VR environments; motion data can be captured, and OWAS, REBA, and EAWS can be used to evaluate ergonomics to improve the human-centric design of factories. Based on VR and digital twins, Caputo et al. [5] presented a digital twin-based framework to minimize the time needed to design and develop a new production line in the early design stage. This framework can predict production problems caused by ergonomics in the design stage and allows these errors to be corrected ahead of time; human factors such as posture, force, and repeated operations on an automobile assembly line can be accurately evaluated. Michalos et al. [47] proposed a collaborative VR simulation environment that can carry out collaborative simulation and visualize ergonomic indexes to help engineers optimize the workplace for assembly tasks.

The value of applying VR for ergonomic design is not only to provide an immersive VE for designers in the early stage of design but also to measure the design by using the data generated by immersive simulation. For the latter, accurate virtual scene modeling, human MOCAP, and virtual body twinning are necessary. A better physical representation of virtual elements will improve the accuracy of simulations [48]. VR users need dynamic virtual body substitution in a VE to enhance the perception of body-environment relationships [9]. In addition, VR platforms should improve integration with commercial simulation software and make full use of CAD data instead of authoring simulation content from scratch.

Although there are many mature commercial VR systems, they are not designed for product support activities, such as design, manufacturing and maintenance. Therefore, to further improve the efficiency of VR in ergonomic design, VR systems should be integrated with traditional product design applications. In this way, users can conduct immersive simulation in realistic virtual scenes and realize ergonomic measurement based on real motion data. The integration with the CAD model in a design application can also improve the accuracy of immersive simulation and reduce the workload of designers in simulation content creation.

3 Methods

3.1 A VR-based workflow for ergonomic assessment in IMA

In the traditional over-the-wall development process, serialization makes the work of each phase heavily dependent on the work of the previous phase. Ergonomic design, which requires knowledge and information integration, can only be carried out in the later stages. To improve information exchange and shorten the time devoted to ergonomic design in the whole product development process, a VR-based method for ergonomic design in the IMA process is proposed (shown in Fig. 1). The proposed method evaluates and redesigns ergonomic factors before the engineering stage, making ergonomic design parallel and effective.

Fig. 1 The VR-based framework for ergonomic design and validation

This workflow implements the concept of CE. Through VR-based technologies, designers can simulate working postures in realistic IMA scenarios that can be quickly generated from traditional manufacturing software. Motion data of the user are captured during the immersive simulation for further analysis. Ergonomic issues can then be analyzed and evaluated based on observations and metrics. If the ergonomic design meets the initial ergonomic requirements, the design work passes into the engineering phase. If defects in the ergonomic design are found, design tradeoffs need to be made. Unacceptable defects that lead to serious personal safety hazards or failure to perform maintenance tasks must be corrected. Mild or moderate defects need to be discussed by multidisciplinary experts. Not all design defects need to be improved, because the optimization of some defects may degrade other key performance parameters of the product. For example, installing pedals on a helicopter fuselage can improve the operational convenience of maintenance personnel; however, this improvement comes at the cost of increased weight. Therefore, ergonomics and structural design experts need to make design tradeoffs to decide whether to improve the design. If design optimization is necessary, a new round of ergonomic design starts.

3.2 Ergonomic analysis in the IMA process

Building a virtual scenario that can faithfully reflect the actual IMA process is the key to the implementation of the proposed method.

To achieve this goal, it is necessary not only to fully understand the needs of the end users (maintainers and assemblers) but also to picture the elements and interactions in the IMA scenario. Several models are available for human factor analysis, such as the people, environment, actions, and resources (PEAR) [49] and software, hardware, environment, and liveware (SHELL) [50] models, which provide exhaustive ergonomic elements and interactions for consideration. Figure 2 shows the analysis of an aircraft maintenance process based on the SHELL model for ergonomic design.

Fig. 2 Ergonomic analysis of the maintenance process based on the SHELL model

Based on the SHELL model, ergonomic issues in the maintenance process can be classified into six components and six interfaces. The hardware in the maintenance process includes two types: maintenance objects (products) and the maintenance resources necessary to complete maintenance tasks. The software refers to the information fed back to the maintainer by the maintenance object during the maintenance process, such as signs and text on the maintenance cover. Liveware refers to maintainers and other auxiliary personnel.

The environment refers to the physical environment in which maintenance events are carried out. Among the six components, there are five maintainer-centered interfaces and one interface between the maintenance object and the resources. Multiple interfaces may exist simultaneously during maintenance. Ergonomic factors such as operating posture and accessibility are reflected at these different interfaces, i.e., during the interaction processes.

Thus, through the analysis based on the SHELL model, the ergonomics-related elements and interactions in the virtual IMA scene can be determined in the early design phase.

3.3 Virtual representation of the IMA process in the design stage

Based on the analysis in Sect. 3.2, the virtual scenario of an IMA process comprises the digital replicas of the product, worker, and resources, together with the data flow among these elements, with which designers can simulate a real IMA process in a highly credible way. The designer needs to translate the ergonomic requirements into a specific design practice. The design features should be reflected in the virtual counterpart as the basis for future virtual validation. The virtual scenario of the IMA process is updated as the design develops. The virtual scenario consists of the following elements:

  • Virtual prototype: A virtual prototype contains all necessary information about a product. Unlike the traditional digital mockups produced by computer-aided design (CAD) software, a virtual prototype not only includes the geometric data of the product to represent its size and structure but also provides interactive functions, as a physical prototype does, to help engineers evaluate designs and make optimization decisions based on VR [51]. For ergonomic design, a virtual prototype should enable interactive simulation of the IMA process.

  • Virtual human: The manual operation in IMA is a process involving human–machine interactions. Therefore, a digital model that can accurately represent the physical characteristics of a human operator is very helpful for simulating the IMA process together with a digital product to ultimately investigate HFE considerations in engineering [52, 53]. In this study, the virtual human can be regarded as the digital twin of humans in the virtual IMA scenario, driven by the motion data captured from real humans. For the simulation of the IMA process, anthropometric data and kinematic data of the human body are key parameters for modeling a complete digital human.

  • Virtual resources: For an IMA task, resources such as tools, support equipment and technical manuals are necessary for workers to perform a task smoothly. The virtual prototype of these resources should also be included to enable the realistic simulation of an IMA scenario, which can help engineers make far-sighted design decisions considering assembly or maintenance requirements.

  • Data flow: As mentioned above, a virtual representation of an IMA scenario consists of a product, human being, resources, and interactions. Virtualization includes not only the structure of these elements but also the data flow between them. Therefore, an access point is needed through which data from the real scene can be captured to drive the virtual elements to replicate the behavior of the physical system. For an IMA process, the motion data of a human being in the real world constitute the main information allowing the mapping between the physical system and the virtual scenario. Accordingly, methods for using motion data from sensors to drive a digital human’s behavior should be developed.

Figure 3 shows the data flow of the virtual representation of an IMA scenario. Motion data are continuously exchanged between the physical world and the digital world. The virtual IMA scenario changes with the simulation, which mainly depends on the user’s interaction events. The complete implementation of these steps depends on several supporting technologies. With virtual prototyping technology, virtual prototypes of the product, worker, and resources can be built to constitute a complete virtual IMA scenario. Using field sensors, the motion data of the human user can be acquired, which constitute the main data input driving the virtual scenario. Synchronization and fidelity are the two main advantages of using sensor data. With analytics-based evaluation methods, the data generated in the simulation process can be analyzed to evaluate the ergonomic issues of interest. Immersive simulation and evaluation are the most important capabilities of a VE, as described in the next section.
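To make this data flow concrete, the following minimal Python sketch illustrates how a stream of marker frames from the physical scene could pose the virtual human and be logged for later ergonomic analysis. The class and field names are hypothetical illustrations, not the actual implementation of the proposed system.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MarkerFrame:
    """One MOCAP sample: marker id -> 3D position in the workshop frame."""
    timestamp: float
    markers: Dict[int, Vec3]

@dataclass
class VirtualHuman:
    """Digital counterpart of the worker, posed from marker data."""
    joint_positions: Dict[str, Vec3] = field(default_factory=dict)

    def update_from_markers(self, frame: MarkerFrame,
                            marker_to_joint: Dict[int, str]) -> None:
        # Map each tracked marker to the body joint it represents.
        for marker_id, joint_name in marker_to_joint.items():
            if marker_id in frame.markers:
                self.joint_positions[joint_name] = frame.markers[marker_id]

@dataclass
class VirtualScenario:
    """Virtual IMA scene: the human plus a motion log kept for analysis."""
    human: VirtualHuman
    motion_log: List[MarkerFrame] = field(default_factory=list)

    def step(self, frame: MarkerFrame, marker_to_joint: Dict[int, str]) -> None:
        self.human.update_from_markers(frame, marker_to_joint)
        self.motion_log.append(frame)  # reused later for posture metrics such as RULA
```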

Fig. 3
figure 3

The data flows of the physical and virtual IMA scenarios

3.4 System implementation

In the proposed method, several VR support technologies are studied to realize immersive simulation. The main software and hardware resources are as follows:

  • Dassault Systèmes DELMIA: an application for simulating manufacturing, service, and other product support activities. DELMIA provides a solution for human task simulation and analysis, in which manikins are available to simulate actions and postures. In DELMIA, manikin actions and postures are created by editing keyframes.

  • PhaseSpace Impulse: an active optical MOCAP system that captures complex human motion in real time. Infrared (IR) cameras detect the positions of markers (light-emitting diodes, LEDs) attached to the tracked object and transmit these data to the central processing unit, which calculates the actual position of the object. These position data can then be used for further processing. Each IR camera has a resolution of 3600 × 3600 at 960 Hz.

  • Cave Automatic Virtual Environment (CAVE): an integrated immersive environment system that usually consists of three to six projection walls forming a cubic space. The CAVE provides users with multichannel stereoscopic vision and audio to create a sense of immersion. Users can walk in the CAVE and obtain the proper view, similar to observing real objects.

  • TechViz XL: 3D data visualization software that can display CAD models at a 1:1 scale on any type of VR system without data conversion. It supports multichannel visual rendering and allows users to interact with virtual objects and make changes.

  • Virtual reality peripheral network (VRPN): VRPN consists of a series of class libraries and servers used to establish a transparent network interface between applications and the physical hardware devices of a VR system. VRPN has a client/server architecture, in which the server communicates with the peripheral physical devices, and the client (i.e., the application) requests data from the physical devices and uses these data to define interaction events (a minimal client sketch is given after this list).

  • Posture analysis tool: an analysis tool developed in this study to accurately evaluate the risk levels of working postures. The tool integrates a configurable RULA algorithm whose judging criteria and items can be modified according to the actual evaluation needs. Combined with the motion data generated by the immersive simulation, working postures can be evaluated accurately.
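As a minimal sketch of how motion data can be pulled from the VRPN server, the following Python client subscribes to tracker updates and caches the latest marker positions. It assumes the Python bindings distributed with the VRPN source tree (module vrpn); the exact module name, callback signature, and the device string "PhaseSpace@localhost" may differ in a specific installation and are shown here only for illustration.

```python
# Minimal VRPN tracker client sketch (assumes the VRPN Python bindings).
import vrpn

latest_markers = {}  # sensor id -> (x, y, z) position of the corresponding LED marker

def on_position(userdata, data):
    # Each callback delivers one sensor update; cache its latest position.
    latest_markers[data["sensor"]] = data["position"]

tracker = vrpn.receiver.Tracker("PhaseSpace@localhost")  # placeholder device string
tracker.register_change_handler(None, on_position, "position")

while True:
    tracker.mainloop()  # pump the VRPN connection; position callbacks fire here
    # latest_markers now holds the newest coordinates of all tracked markers,
    # ready to be forwarded to the PS-DELMIA plugin or written to a log file.
```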

Figure 4 shows the hardware environment of the proposed method. Using the human task simulation module in DELMIA, a realistic virtual maintenance scenario composed of virtual products, resources, and manikins can be constructed. The TechViz XL plugin is then used to render the virtual scenario built in DELMIA so that it can be visualized realistically in the CAVE.

Fig. 4 Hardware environment of the proposed method

However, DELMIA can only conduct desktop-based simulation via keyframe animation, and real data to drive the postures and movements of manikins are lacking. In the proposed method, PhaseSpace Impulse is adopted to capture real MOCAP data from the user. To obtain accurate MOCAP data, eight IR cameras and 41 optical markers attached to the user’s body are set up for the immersive simulation. However, the raw MOCAP data cannot be directly utilized by DELMIA. Therefore, we developed a client and a PS-DELMIA plugin to process the raw MOCAP data from the PhaseSpace Impulse system and use them to drive the posture and movement of the manikin in DELMIA. Specifically, the coordinate information of each marker attached to the user’s body is obtained via the VRPN protocol, from which the key node information for driving the limb movement of the manikin is derived. Then, using the automation application programming interface (API) provided by DELMIA, we convert the coordinate data of the markers into two arrays, which are used as parameters of several API functions to drive the posture and the movements of the manikin, respectively (a simplified sketch of this conversion is given below). MOCAP data can also be exported as local files for later processing and analysis. In this way, while users conduct the immersive simulation in the CAVE, the MOCAP data drive the twin behaviors of the manikin. During the immersive simulation, the user can use a navigation controller to interact with virtual objects, for example, for scene roaming and assembly or disassembly operations.
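The marker-to-array conversion described above can be pictured with the following Python sketch. The marker grouping, segment names, and the plugin call ps_delmia.set_manikin_pose are hypothetical placeholders; the actual PS-DELMIA plugin and the DELMIA automation functions it wraps are not reproduced here.

```python
import numpy as np

# Hypothetical grouping: which PhaseSpace LED markers describe each body segment.
SEGMENT_MARKERS = {
    "trunk":           [0, 1, 2, 3],
    "right_upper_arm": [12, 13, 14],
    "right_forearm":   [15, 16],
    # ... remaining segments of the 41-marker set
}

def markers_to_arrays(marker_positions):
    """Convert raw marker coordinates into the two arrays passed to the plugin.

    marker_positions: dict of marker id -> (x, y, z) in the CAVE frame.
    Returns (segment_names, segment_origins): the first array lists the body
    segments to update, the second holds an averaged 3D position per segment.
    """
    names, origins = [], []
    for name, ids in SEGMENT_MARKERS.items():
        pts = [marker_positions[i] for i in ids if i in marker_positions]
        if pts:  # skip segments whose markers are currently occluded
            names.append(name)
            origins.append(np.mean(np.asarray(pts), axis=0))
    return names, np.asarray(origins)

# Usage (hypothetical plugin call that forwards the arrays to DELMIA's
# automation API to update the manikin's posture):
#   names, origins = markers_to_arrays(latest_markers)
#   ps_delmia.set_manikin_pose(names, origins)
```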

Based on observations during the immersive simulation, users can make a preliminary analysis of the ergonomic design. In addition, ergonomic factors can be measured quantitatively by using evaluation metrics and the MOCAP data. Considering the features of the manual operations of aerospace products, this study uses RULA as the metric for evaluating working posture. RULA can quantitatively evaluate the ergonomic risk to the upper limbs, which are the main body parts participating in the IMA process of aerospace products. The coordinate information of the 41 markers in the MOCAP data is transformed into human joint vectors, and the RULA calculation is completed using this vector information (a simplified scoring sketch is given below). A configuration file defines the decision rules for some body segments in RULA, so users can flexibly adjust parts of the RULA calculation to bring them more in line with actual needs. The configuration file also allows users to manually set the judgment items in RULA that are difficult to decide automatically. Figure 5 shows the functional flow of the proposed method.
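To illustrate how such a configurable RULA calculation can work, the sketch below derives an upper-arm flexion angle from joint positions and maps it to a RULA upper-arm score using thresholds read from a user-editable configuration. The band limits follow the published RULA worksheet, but the configuration structure, function names, and the simplified treatment of extension are our own assumptions rather than the exact implementation of the analysis tool.

```python
import numpy as np

# Default upper-arm bands from the RULA worksheet (upper angle limit in degrees, score).
# A user-editable configuration can override these bands or the manual judgment items.
DEFAULT_CONFIG = {
    "upper_arm_bands": [(20.0, 1), (45.0, 2), (90.0, 3), (180.0, 4)],
    "shoulder_raised": False,  # manual judgment items set by the user
    "arm_abducted": False,
}

def angle_between(v1, v2):
    """Angle in degrees between two joint vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def upper_arm_score(shoulder, elbow, hip, config=DEFAULT_CONFIG):
    """Score the upper arm from the flexion angle between the arm and the trunk line."""
    arm = np.asarray(elbow) - np.asarray(shoulder)
    trunk_down = np.asarray(hip) - np.asarray(shoulder)  # neutral (hanging) arm direction
    flexion = angle_between(arm, trunk_down)
    score = next(s for limit, s in config["upper_arm_bands"] if flexion <= limit)
    # RULA adds +1 if the shoulder is raised and +1 if the arm is abducted;
    # these items are hard to judge automatically and are therefore configured manually.
    score += int(config["shoulder_raised"]) + int(config["arm_abducted"])
    return flexion, score

# Example: shoulder, elbow, and hip positions taken from one MOCAP frame (metres).
print(upper_arm_score((0.0, 0.0, 1.4), (0.25, 0.0, 1.25), (0.0, 0.0, 0.9)))
```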

The implementation of the proposed VR system requires the following steps:

  • Step 1: Create a virtual IMA scenario in DELMIA. The CAD data of products, resources, and manikins should be virtually prototyped with real parameters. Add necessary interactive events and constraints for these elements to meet the simulation requirements. Generate the whole virtual scenario according to the actual layout.

  • Step 2: Configure TechViz XL to load the virtual scenario in DELMIA. Assign specific interactive functions to the navigation controller.

  • Step 3: Render the IMA scenario in the CAVE after configuring TechViz XL. With the MOCAP data and 3D tracking glasses, the CAVE provides users with the proper view, in a similar way to looking at a real scenario.

  • Step 4: Initialize the PhaseSpace Impulse system. Start the VRPN server to open the data stream. On the one hand, the MOCAP data are used to create interactive events in the simulation; on the other hand, they are used to generate the twin behaviors of the manikin. For the latter, the PS-DELMIA plugin is required to convert the MOCAP data. In addition, the manikin needs to be bound to the user for data mapping.

  • Step 5: Conduct the immersive simulation of the IMA process. According to the designed assembly or maintenance steps, users conduct the immersive simulation in the CAVE, during which postures and movements are generated. The motion data of users during the simulation are captured for the twin behavior of the manikin and the analysis of ergonomic issues.

  • Step 6: Conduct an ergonomic evaluation. Using the MOCAP data, RULA can be applied to evaluate the musculoskeletal disorder (MSD) risk of working postures. In addition, the manikin’s twin behavior can be used to evaluate visibility and accessibility. Users can manually set some parameters through the RULA configuration file if necessary (an example configuration sketch is given after this list).
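For illustration, a configuration for Step 6 might look like the following JSON snippet; the field names and structure are hypothetical and indicate only the kind of items (scoring bands and manual judgments) that the posture analysis tool lets users adjust.

```python
import json

# Hypothetical configuration file content for the posture analysis tool.
EXAMPLE_CONFIG = """
{
  "upper_arm_bands": [[20, 1], [45, 2], [90, 3], [180, 4]],
  "neck_bands":      [[10, 1], [20, 2], [180, 3]],
  "manual_items": {
    "shoulder_raised": false,
    "arm_abducted": false,
    "wrist_twisted_near_end_of_range": true
  },
  "muscle_use_score": 1,
  "force_load_score": 0
}
"""

config = json.loads(EXAMPLE_CONFIG)
print(config["manual_items"])  # the judgment items the user sets manually in Step 6
```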

Fig. 5 Functional flow chart of the proposed method

4 A pilot study

4.1 Case description

To validate the feasibility of the proposed method, a pilot study in engineering practice on a satellite was conducted. The crowded space of a satellite cabin is usually occupied by cables and delicate components. As a small-batch and sophisticated product, a satellite requires a significant number of manual operations for ground assembly and maintenance tasks before launch. The quality of these manual operations has a considerable impact on the stable on-orbit operation of the satellite. In this pilot study, the main concern was to verify two different ground maintenance plans before launch, including the feasibility of installing a support beam, to improve the comfort of the worker and avoid potential threats to the maintenance tasks caused by human factors. Based on the proposed framework, the following inputs were determined to build a complete virtual maintenance scenario. These data were provided by the satellite manufacturing enterprise.

  1. A virtual prototype of the satellite: It should include the detailed geometric data of the satellite cabin as well as the kinematic characteristics of the product that are related to the maintenance tasks, such as the bending properties of the flexible cables.

  2. Anthropometric data of the maintainer: These data are utilized to model an accurate digital human so that the model can truthfully reflect the body size of the real maintainer and enable precise assessments.

  3. Resources for the maintenance scenario: These resources include maintenance tools, support equipment, and other auxiliary resources. The presence of these resources will influence the accuracy of the assessment and decision making.

  4. Maintenance plans: A complete virtual counterpart includes not only virtual elements that correspond to the physical entities in the scenario but also information on the interactions between these elements. Based on the procedure, the user can interact with the virtual prototype of the satellite through the established rules to simulate a maintenance task.

The virtual scenario was created in DELMIA. It should be noted that, to improve the efficiency of the simulation, the virtual prototype was properly simplified by simplifying or deleting unnecessary parts to reduce the amount of data. Figure 6 shows the virtual maintenance scenario after simplification. The outer shell of the satellite was simplified because the details of its structure are rich but unnecessary for ergonomic design. Only the manikin with customized body dimensions, the transfer trolley, and the simplified satellite were retained.

Fig. 6 The virtual scene of the maintenance scenario

The goal of the task considered in this assessment is to dismantle the innermost device (CA-3-C) in the cabin. There are two maintenance plans. The first plan (plan A) consists of the following steps:

  1. Disconnect the electrical connector of the MP-3-C_ASM.

  2. Remove the fasteners connecting the MP-3-C_ASM and the cabin.

  3. After the removal of the fasteners, remove the MP-3-C_ASM from the hatch.

  4. Disconnect the electrical connector of CA-3-C.

  5. Remove the fasteners connecting the CA-3-C and the cabin.

  6. Remove the CA-3-C.

The steps of the second plan (plan B) are similar. The only difference is that, in plan B, a support beam is installed in the original position of the MP-3-C_ASM between steps 3 and 4 to support the body posture of the maintainer (see Fig. 7).

Fig. 7 The maintenance steps of plan A

A user wearing a MOCAP suit simulated the working postures in the two plans (see Fig. 8). Then, the ergonomics were assessed based on the MOCAP data generated by the PhaseSpace Impulse system. Two issues were of interest: the necessity of placing a support beam in the cabin to improve the comfort of the human trunk, and potential ergonomic design defects that could lead to failure of the IMA tasks. The user wore tracking glasses in the CAVE to obtain 360° immersive vision and used a Sony navigation controller to interact with the products and resources in the VE.

Fig. 8 The designer simulates the different postures and actions of the two maintenance plans with a support beam (1) and accessibility (2)

The ergonomics were analyzed and evaluated in two ways: qualitative judgment and quantitative measurement. Qualitative judgment is based on the user’s immersive interaction and design experience; in addition, a checklist was developed by experts with abundant experience in satellite IMA to help users conduct design verification in the immersive environment. Quantitative measurement is based on the MOCAP data and the configurable RULA algorithm. Figure 9 shows the working posture analysis tool developed in this study, in which the user can configure the criteria to meet the accuracy requirements and actual demands.

Fig. 9 Working posture analysis tool with the configurable RULA method

4.2 Discussion

To study the time efficiency of the proposed method, we use the desktop-based approach (DELMIA) to simulate plan B in the above case. The performance indicators (PIs) proposed by Peruzzini et al. [39] are adopted as the evaluation indicators and adjusted to conform to the proposed method. Table 2 shows the PIs of the desktop-based method and the immersive VE proposed in this paper.

  • Time for virtual scenario creation: The time it takes to build a virtual scene. Since both methods are based on DELMIA, the time spent is the same.

  • Time for simulation preparation: Time needed to initialize the scene, adjust the spatial layout, and configure resources.

  • Time for task simulation: Time spent making desktop simulations or conducting immersive simulations.

  • Time for posture adjustment (with manikins or users): Time spent adjusting unreasonable working posture (only exists in desktop simulation).

  • Time for ergonomic assessment: Time spent evaluating visibility, accessibility and working posture. For desktop simulation, keyframes need to be positioned and filtered.

Table 2 PIs of the desktop-based method and the immersive VE proposed in this paper for plan B

We can conclude that the proposed method takes less time to complete the simulation than the desktop-based method, with time efficiency improved by nearly 50%. Because data mapping between the immersive VE and DELMIA is realized in the proposed method, the virtual scenario created in DELMIA can be used directly for simulation; there is no need to recreate the scene in other VR software, such as Unity 3D. However, before the start of the immersive simulation, users need to make preparations, such as initializing the MOCAP system and configuring interactive events, which usually takes more time than in the desktop-based method. The time for task simulation shows the largest difference between the two methods. In the desktop-based method, every action and posture of the manikin needs to be edited frame by frame; in particular, fine hand operations usually take considerable time because each finger joint must be adjusted. In the immersive VE, by contrast, all postures and actions are generated in real time from the user’s MOCAP data, which offers a large time advantage over the desktop approach and is more realistic. There is also no need to adjust postures as in the desktop-based method. Ergonomic analysis can be conducted during the immersive simulation, whereas the desktop-based method must screen the keyframes of the postures to be evaluated, which takes slightly longer. Of course, VR assets usually require considerable investment, including capital and experts. Therefore, VR-based ergonomic design is usually applied to complex products whose ergonomics are very important and for which the company has sufficient capital and human support. With the rapid development of VR hardware in recent years, the investment cost is gradually decreasing.

Regarding the accuracy of the ergonomic evaluation, it can be speculated that the proposed method has higher accuracy than the desktop-based method. The immersive VE involves the physical participation of a real human, i.e., the user. When the user assumes postures and makes movements in the immersive VE, the captured motion data contain real posture information and provide the input for the ergonomic metric. In the desktop-based method, by contrast, postures are edited frame by frame and depend on the experience of the designer, which makes the postures deviate from reality. To verify this assumption, we invited three experts with rich RULA experience to make on-field evaluations of the posture of removing CA-3-C in both plan A and plan B. They evaluated the risk level of the working posture based on actual observation and the MOCAP data; in cases of inconsistent results, the three experts discussed their evaluations and reached an agreement, which was taken as the standard value. The desktop simulations of plan A and plan B were completed by designers with rich experience in DELMIA, the simulation animations were approved by the designer of the product, and the RULA scores were obtained from DELMIA. To ensure accuracy, the immersive simulation in the VE was completed by the company’s experienced designers, who defined and simulated realistic maintenance postures in the CAVE. For the VE, the RULA score was calculated by the working posture analysis tool proposed in this study. Table 3 shows the RULA scores given by the desktop-based method, the immersive VE, and the experts.

Table 3 The RULA assessment results for removing CA-3-C. In plan B, a support beam is installed to support the human body

The RULA results support our hypothesis that the immersive VE provides higher accuracy in ergonomic evaluation: its results are closer to those obtained by the on-field experts. Ergonomic evaluation in the desktop-based method has a certain degree of uncertainty, which depends on the experience and skills of the designers. Interestingly, plans A and B received the same score in the desktop evaluation. In plan B, the installation of the support beam improves the support of the maintainer’s trunk and legs, but the designer neglected its influence on the neck. After installing the support beam, the maintainer’s neck can be held at a more comfortable angle, and the head is no longer tilted backward. Even experienced designers can hardly perceive the relationship between the neck and the support beam in a desktop simulation environment, whereas it was clearly demonstrated in the immersive environment. A similar conclusion was reached in our previous engineering practice: for the same maintenance operation, two designers may even produce different evaluation results in some cases, especially for complex products.

Verification with the physical prototype also supports this conclusion. Plan B proved to be more human friendly; it had initially been recommended as the solution for the CA-3-C maintenance task. In addition, the results of the final verification showed that the coverage of the proposed method for predicting ergonomic defects in this satellite maintenance task was 96.8%.

We also compare the proposed method with other mainstream immersive VE methods that are aimed at ergonomic design or similar purposes for complex products. The following items are used as evaluation indicators:

  • Component: Hardware and software for implementing the method.

  • Purpose: Motivation and application scenarios of the method.

  • Interaction mode: The way a user interacts with the immersive scenario, i.e., how intuitive the interaction with the VE feels.

  • Perception level: The perceived realism, rated with the scale proposed by Numfu et al. [54]: 1 = no perception of reality; 2 = slight perception of reality; 3 = close to reality; 4 = same as reality.

  • Integration level: The degree of integration with DELMIA, Jack, or other commonly used design software, rated on three levels: 1 = no integration at all (creation and simulation must start from scratch); 2 = moderate integration (traditional CAD data can be used in the immersive VE, but additional processing with special software is required); 3 = high-level integration (data are exchanged in real time between the immersive VE and the traditional design software).

  • Data for analysis: Data types for ergonomic analysis.

  • Real-time analysis: Whether ergonomic analysis, mainly quantitative analysis based on metrics, can be conducted during the immersive simulation.

  • Quantitative assessment: Ability to conduct quantitative ergonomic evaluation

The comparisons are shown in Table 4.

Table 4 Comparison between the proposed method and other immersive methods

Compared with the mainstream methods, the proposed method realizes real-time scenario mapping with DELMIA. The ability to use CAD data to directly drive the immersive simulation means that designers do not need to learn additional professional VR knowledge, which reduces their workload. In addition, the proposed method can use a variety of data generated in the immersive VE for real-time quantitative ergonomic analysis and evaluation. The proposed method thus offers high system integration and accuracy. Its advantages are described as follows:

  • Modularity and extensibility. The processing and communication of VR data are based on the VRPN protocol. A standard data interface makes it possible to effectively integrate more VR devices and applications to further strengthen the interactive ability and functionality of the system.

  • High accuracy. The system has high accuracy in capturing motion data due to the use of the optical MOCAP system. Real MOCAP data from users can truthfully drive ergonomic analysis and evaluation. This overcomes the subjectivity of the desktop-based method. In addition, more evaluation methods can be integrated into the system in the future.

  • Agile design. The proposed method comprehensively supports scenario creation, immersive simulation, and ergonomic evaluation in the early design phase, so ergonomic design and analysis can be carried out effectively and design defects can be predicted quickly and accurately.

  • Diverse means of interaction and analysis. Interaction and analysis can be conducted in many effective ways. Users can verify the ergonomic design based on experience and observation during the immersive simulation. Moreover, the data generated by immersive simulation can be used to quantitatively evaluate ergonomic design with metrics. In addition, the large working area provided by CAVE and PhaseSpace Impulse allows users to conduct more interactive actions. This will improve users’ perceptual cognition of product design.

  • Friendliness to users. The proposed method realizes real-time mapping between the immersive VE and the virtual scene in DELMIA. The behavior of manikins and virtual products is driven by the MOCAP data of users. Users do not need to recreate simulation scenes in dedicated VR applications (such as Unity 3D). This reduces the requirements for users’ professional VR skills, reduces their workload, and improves system efficiency.

  • Multiscenario application. The convenience of scene creation and immersive simulation enables the proposed method to be applied to a variety of potential purposes, such as scheme verification, maintenance training, and design review. VR-based simulation allows users to visualize and analyze their problems of interest at any phase of the product lifecycle.

5 Conclusions

The application of VR technology in ergonomic design has a long history. However, simple immersive simulation of the IMA process can no longer meet the requirements of ergonomic design for complex products. Although physical prototypes are lacking in the design stage, this study proposed a VR-based ergonomic design process for IMA tasks and an accurate evaluation method that uses a complete virtual counterpart of the IMA process. A pilot study based on engineering practice proved the feasibility of the proposed method. First, the proposed method can quickly build a virtual IMA scenario for immersive simulation based on the traditional design platform DELMIA. This reduces the operational difficulty caused by the complexity of the VR system, enables users to better conduct ergonomic design, and allows immersive IMA simulation to be carried out more quickly and effectively. Second, the proposed method can make full use of the data generated by the immersive simulation for accurate ergonomic evaluation. Based on the motion data captured by the sensors and the configurable RULA algorithm, working postures can be quantitatively evaluated according to different application scenarios.

Although the pilot study and the comparison with mainstream methods have proven the feasibility of the proposed immersive VE for the ergonomic design of the IMA process, there are still technical problems that limit its wider adoption in practical industrial scenarios. Future research will focus on improving the interaction ability of the system. The current system supports interaction with virtual objects but lacks tactile feedback. Several studies [56, 57] have shown that tactile feedback is very important for immersive simulation, especially for the maintenance process. Data gloves with tactile feedback will therefore be considered for integration into the system. More accurate capture of complex operations is another focus of future research, as it plays a very important role in improving the accuracy of simulation and evaluation. In addition, more ergonomic evaluation methods will be integrated to verify the applicability of the proposed system in a wider range of industrial fields.