1 Introduction

Innovation is critical for companies to be successful in today’s global market. Competitive advantage can be achieved by effectively applying new technologies and processes to challenges faced in current engineering design practices. Opportunities encompass all aspects of product development (including ergonomics, manufacture, maintenance, product life cycle, etc.), with the greatest potential impact during the early stages of the product design process. Prototyping and evaluation are indispensable steps of the current product creation process. Although computer modeling and analysis practices are used at different stages, building one-of-a-kind physical prototypes makes the typical current process very costly and time-consuming.

New technologies are needed that can empower industry with a faster and more powerful decision-making process. VR technology has evolved to a new level of sophistication during the last two decades. VR has changed the ways scientists and engineers look at computers for performing mathematical simulations, data visualization, and decision making (Bryson 1996; Eddy and Lewis 2002; Zorriassatine et al. 2003; Xianglong et al. 2001). VR technology combines multiple human–computer interfaces to provide various sensations (visual, haptic, auditory, etc.), which give the user a sense of presence in the virtual world. This enables users to become immersed in a computer-generated scene and interact using natural human motions. The ultimate goal is to provide an “invisible interface” that allows users to interact with the virtual environment as they would with the real world. This makes VR an ideal tool for simulating tasks that require frequent and intuitive manual interaction, such as assembly methods prototyping.

Several definitions of virtual assembly have been proposed by the research community. For example, in 1997, Jayaram et al. defined virtual assembly as “The use of computer tools to make or ‘assist with’ assembly-related engineering decisions through analysis, predictive models, visualization, and presentation of data without physical realization of the product or supporting processes”. Kim and Vance in 2003 described virtual assembly as the “ability to assemble CAD models of parts using a three-dimensional immersive user interface and natural human motion”. This definition identified an immersive interface and natural interaction as critical parts of virtual assembly. As VR continues to advance, we would like to expand these previous definitions to provide a more comprehensive description.

Virtual assembly in this paper is defined as the capability to assemble virtual representations of physical models, through simulation of realistic environment behavior and part interaction, in an immersive computer-generated environment, thereby reducing the need for physical assembly prototyping and enabling more encompassing design/assembly decisions.

2 Why virtual assembly?

Assembly process planning is a critical step in product development. In this process, details of assembly operations, which describe how different parts will be put together, are formalized. It has been established that assembly processes often constitute the majority of the cost of a product (Boothroyd and Dewhurst 1989). Thus, it is crucial to develop a proper assembly plan early in the design stage. A good assembly plan incorporates considerations for minimum assembly time, low cost, ergonomics and operator safety. A well-designed assembly process can improve production efficiency and product quality, reduce cost and shorten a product’s time to market.

Expert assembly planners today typically use traditional approaches in which the three-dimensional (3D) CAD models of the parts to be assembled are examined on two-dimensional (2D) computer screens in order to assess part geometry and determine assembly sequences for a new product. As a final verification, physical prototypes are built and assembled by workers who identify any issues with either the assembly process or the product design. As assembly tasks get more complicated, such methods tend to be more time-consuming, costly and error-prone.

Computer-aided assembly planning (CAAP) is an active area of research that focuses on the development of automated techniques for generating suitable assembly sequences based primarily on intelligent identification and grouping of geometric features (Baldwin et al. 1991; de-Mello and Sanderson 1989; Zha et al. 1998; Sung et al. 2001; De Fazio and Whitney 1987; Smith et al. 2001). These methods rely on detailed information about the product geometry, but they do not account for the expert knowledge held by the assembler that may impact the design process. This knowledge, based on prior experience, is difficult to capture and formalize and can be rather extensive (Ritchie et al. 1995). Ritchie et al. (1999) proposed the use of immersive virtual reality for assembly sequence planning; system functionality was demonstrated using an advanced electromechanical product in an industrial environment. Holt et al. (2004) proposed that a key part of the planning process is the inclusion of the human expert, basing this claim on research in cognitive ergonomics and human factors engineering. Leaving the human aspect out of assembly planning could result in incorrect or inefficient operations. Another limitation of computer-aided assembly planning methods is that as the number of parts in an assembly increases, the number of possible assembly sequences grows exponentially, making it more difficult to characterize criteria for choosing the most suitable assembly sequence for a given product (Dewar et al. 1997a). Once again, human input is critical to arriving at a cost-effective and successful assembly sequence solution.

Modern CAD systems are also used in assembly process planning. CAD systems require the user to identify constraint information for mating parts by manually selecting the mating surfaces, axes and/or edges to assemble the parts; these interfaces thus do not reflect natural human interaction with parts. For complex assemblies, such part-to-part specification techniques make it difficult to foresee the impact of individual mating specifications on other portions of the assembly process, for example ensuring accessibility for part replacement during maintenance, or assessing the effects of changing the assembly sequence. Such computer-based systems also fail to address ergonomic issues such as awkward-to-reach assembly operations.

VR technology plays a vital role in simulating such advanced 3D human–computer interactions by providing users with different kinds of sensations (visual, auditory and haptic), creating an increased sense of presence in a computer-generated scene. Virtual assembly simulations allow designers to import concepts into virtual environments during the early design stages and perform assembly/disassembly evaluations that would otherwise be possible only much later, when the first prototypes are built. Using virtual prototyping applications, design changes can be incorporated easily in the conceptual design stage, thus optimizing the design process toward Design for Assembly (DFA). Using haptics technology, designers can touch and feel complex CAD models of parts and interact with them using natural and intuitive human motions. Collision and contact forces calculated in real time can be transmitted to the operator using robotic devices, making it possible to feel the simulated physical contacts that occur during assembly. In addition, the ability to visualize realistic behavior and analyze complex human interactions makes virtual assembly simulations ideal for identifying assembly-related problems such as awkward reach angles, insufficient clearance for tooling, and excessive part orientation during assembly. They also allow designers to analyze tooling and fixture requirements for assembly.

In addition to manufacturing, virtual assembly systems could also be used to analyze issues that might arise during service and maintainability operations, such as inaccessibility of parts that require frequent replacement. Expert assembly knowledge and experience that is hard to document could be captured by inviting experienced assembly workers from the shop floor to assemble a new design and provide feedback for design changes (Schwartz et al. 2007). Disassembly and recycling factors can also be taken into account during the initial design stages, allowing for an environmentally conscious design. Virtual assembly training can provide a platform for offline training of assembly workers, which is important when assembly tasks are hazardous or especially complicated (Brough et al. 2007).

To serve as a reliable evaluation environment for assembly methods, virtual assembly systems must be able to accurately simulate real-world interactions with virtual parts, along with their physical behavior and properties (Chryssolouris et al. 2000). To replace or reduce current prototyping practices, a virtual assembly simulation should be capable of addressing both the geometric and the subjective evaluations required in a virtual assembly operation. Boothroyd et al. (1994) describe the more subjective assembly evaluations as follows:

  • Can the part be grasped in one hand?

  • Do parts nest or tangle?

  • Are parts easy or difficult to grasp and manipulate?

  • Are handling tools required?

  • Is access for part, tool or hands obstructed?

  • Is vision of the mating surfaces restricted?

  • Is holding down required to maintain the part orientation or location during subsequent operations?

  • Is the part easy to align and position?

  • Is the resistance to insertion sufficient to make manual assembly difficult?

If successful, this capability could provide the basis for many useful virtual environments that address various aspects of the product life cycle such as ergonomics, workstation layout, tooling design, off-line training, maintenance, and serviceability prototyping (Fig. 1).

Fig. 1 Applications of a virtual assembly/disassembly simulation

3 Virtual assembly: challenges

Several technical challenges must be overcome to realize virtual assembly simulations, namely: accurate collision detection, inter-part constraint detection and management, realistic physical simulation, data transfer between CAD and VR systems, and intuitive object manipulation (including force feedback). In the following subsections, these challenges are described and previous approaches in each area are summarized.

3.1 Collision detection

Virtual assembly simulations present a much larger challenge than virtual walkthrough environments because they require frequent human interaction and real-time simulation involving complex models. Real-world assembly tasks require extensive interaction with surrounding objects: grabbing parts, manipulating them realistically and finally placing them in the desired position and orientation. Thus, to successfully model such a complex interactive process, the virtual environment not only needs to provide visual realism, it also needs to model realistic behavior of the virtual parts. For example, graphic representations of objects should not interpenetrate and should behave realistically when external forces are applied. The first step toward accomplishing this is implementing accurate collision detection among parts (Burdea 2000).

Contemporary CAD systems used in product development incorporate precise geometric models consisting of hierarchical collections of Boundary Representation (B-Rep) solid models bounded by trimmed parametric NURBS surfaces. These representations are typically tessellated for display, and the resulting polygonal graphics representations can be used to detect collisions. However, the high polygon counts required to represent complex part shapes generally result in long computation times for collision detection. In virtual environments where interactivity is critical, fast and accurate collision detection among dynamic objects is a challenging problem.

Algorithms have been developed to detect collisions using different object representations. Several algorithms that use polygonal data for collision detection were designed by researchers at the University of North Carolina, including I-COLLIDE (Cohen et al. 1995), SWIFT (Ehmann and Lin 2000), RAPID (Gottschalk et al. 1996), V-COLLIDE (Hudson et al. 1997), SWIFT++ (Ehmann and Lin 2001), and CULLIDE (Govindaraju et al. 2003). Other methods such as V-Clip (Mirtich 1998) and VPS (McNeely et al. 1999) have also been proposed for use in immersive VR applications. A comprehensive review of collision detection algorithms can be found in Lin and Gottschalk (1998) and Jiménez et al. (2001), and a taxonomy of collision detection approaches can be found in Borro et al. (2005).
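
To make the common core of these polygon-based approaches concrete, the sketch below shows a broad-phase test in Python: cheap axis-aligned bounding-box (AABB) overlap checks cull part pairs before any exact, expensive triangle-level test would run. This is a minimal illustration only; the function names and the two-part scene are assumptions, not the API of any library cited above.

```python
# Broad-phase culling sketch: AABB overlap tests prune part pairs before
# exact triangle-level collision checks. Names are illustrative assumptions.
import numpy as np

def aabb(vertices):
    """Axis-aligned bounding box (min corner, max corner) of a mesh."""
    v = np.asarray(vertices)
    return v.min(axis=0), v.max(axis=0)

def aabbs_overlap(box_a, box_b):
    """Boxes overlap iff their extents intersect on every axis."""
    (min_a, max_a), (min_b, max_b) = box_a, box_b
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

def broad_phase(parts):
    """Return part pairs whose boxes overlap; only these need exact tests."""
    boxes = {name: aabb(verts) for name, verts in parts.items()}
    names = sorted(boxes)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if aabbs_overlap(boxes[a], boxes[b])]

parts = {"peg":  [[0, 0, 0], [1, 1, 4]],        # vertex lists stand in for
         "hole": [[0.5, 0.5, 3], [2, 2, 6]]}    # tessellated CAD meshes
print(broad_phase(parts))  # [('hole', 'peg')]
```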

Once implemented, collision detection prevents part interpenetration. However, collision detection alone does not provide feedback to the user about how to change the position and orientation of parts to align them for completing the assembly operation (Fröhlich et al. 2000). Techniques for part positioning during assembly fall into two main classes: physics-based modeling and constraint-based modeling. Physics-based modeling simulates realistic behavior of parts in the virtual scene; parts are assembled with the help of simulated physical interactions calculated in real time. Constraint-based modeling utilizes geometric constraints similar to those used by modern CAD systems: constraints such as concentricity, coplanarity, etc. are applied between parts, reducing the degrees of freedom and facilitating the assembly task at hand.

3.2 Inter-part constraint detection and management

Due to the problems associated with physics-based modeling (instability, difficulty attaining interactive update rates, limited accuracy, etc.), several approaches using geometric constraints for virtual assembly have been proposed. Constraint-based modeling approaches use inter-part geometric constraints (typically predefined and imported, or defined on the fly) to determine relationships between the components of an assembly. Once constraints are defined and applied, a constraint solver computes the new, reduced degrees of freedom of the objects and their resulting motion.

A vast amount of research on solving systems of geometric constraints exists in the literature. Numerical constraint solver approaches translate constraints into a system of algebraic equations, which is then solved using iterative methods such as Newton–Raphson (Light and Gossard 1982). Good initial values are required to handle the potentially exponential number of possible solutions. Although solvers using this method are capable of handling large non-linear systems, most of them have difficulty with over-constrained and under-constrained instances (Fudos 1995) and are computationally expensive, which makes them unsuitable for interactive applications such as virtual assembly (Fernando et al. 1999).

Constructive constraint approaches are based on the fact that, in principle, most configurations found in engineering drawings can be solved on a drawing board using standard drafting techniques (Owen 1991). In the rule-constructive method, “solvers use rewrite rules for discovery and execution of construction steps”. Although complex constraints are easy to handle, the exhaustive computation requirements (searching and matching) of these methods make them inappropriate for real-world applications (Fudos and Hoffman 1997). Examples of this approach are described in Verroust et al. (1992), Sunde (1988) and Suzuki et al. (1990).

Graph-constructive approaches are based on analysis of the constraint graph. Based on this analysis, a set of constructive steps is generated, which are then followed to place the parts relative to each other. Graph-constructive approaches are fast, methodical and provide means for developing robust algorithms (Owen 1991; Fudos and Hoffman 1997; Bouma et al. 1995; Fudos and Hoffman 1995). An extensive review and classification of constraint solving techniques is presented in Fudos (1995).
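
As a concrete illustration of the numerical solver family described above, the sketch below applies Newton–Raphson iteration to a toy two-constraint system. It is a minimal sketch under simplifying assumptions: the constraint set, function names, and the least-squares step are illustrative, not taken from any cited solver.

```python
# Minimal Newton-Raphson geometric constraint solve, in the spirit of
# Light and Gossard (1982). The toy constraints are illustrative assumptions.
import numpy as np

def solve_constraints(residual, jacobian, q0, tol=1e-10, max_iter=50):
    """Iterate q <- q - J^-1 r(q) until the constraint residual vanishes."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = residual(q)
        if np.linalg.norm(r) < tol:
            return q
        # Least-squares step tolerates under-/over-constrained Jacobians,
        # one of the difficulties the text notes for this solver family.
        dq, *_ = np.linalg.lstsq(jacobian(q), r, rcond=None)
        q = q - dq
    raise RuntimeError("constraint solve did not converge")

# Toy example: q = (x, y); constrain the point to the unit circle and to y = x.
residual = lambda q: np.array([q[0]**2 + q[1]**2 - 1.0, q[1] - q[0]])
jacobian = lambda q: np.array([[2 * q[0], 2 * q[1]], [-1.0, 1.0]])

print(solve_constraints(residual, jacobian, q0=[1.0, 0.2]))  # ~ (0.707, 0.707)
```

Note how the solve depends on the initial value q0, echoing the sensitivity to starting values noted above.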

3.3 Physics-based modeling

The physics-based modeling approach relies on simulating physical constraints for assembling parts in a virtual scene. Physical modeling can significantly enhance the user’s sense of immersion and interactivity, especially in applications requiring intensive levels of manipulation (Burdea 1999). Physics-based algorithms simulate forces acting on bodies in order to model realistic behavior. Such algorithms solve equations of motion of the objects at each time step, based on their physical properties and the forces and torques that act upon them.

Physics-based modeling algorithms can be classified into three categories based on the method used: the penalty force method, the impulse method, and the analytical method. In the penalty force method, a spring-damper system is used to prevent interpenetration between models: whenever a penetration occurs, a restoring force penalizes it (McNeely et al. 1999; Erleben et al. 2005). Penalty-based methods are easy to implement and computationally inexpensive; however, they require very high spring stiffness, leading to stiff equations that are numerically intractable (Witkin et al. 1990). Impulse-based methods (Hahn 1988; Mirtich and Canny 1995; Guendelman et al. 2003) simulate interactions among objects using collision impulses; static contacts are modeled as a series of high-frequency collision impulses occurring between the objects. Impulse-based methods are more stable and robust than penalty force methods, but they have problems handling stable, simultaneous contacts (such as a stack of blocks at rest) and modeling static friction in certain cases such as sliding (Mirtich 1996). The analytical method (Baraff 1989, 1997) checks for interpenetrations; if one is found, the algorithm backtracks the simulation to the point in time immediately before the interpenetration and, based on the contact points, solves a system of constraint equations to generate contact forces and impulses at every contact point (Baraff 1990). The results from this method are very accurate; however, it requires extensive computation time when several contacts occur simultaneously.
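
The penalty force method is the easiest of the three to sketch. The fragment below, a minimal illustration with assumed sphere geometry, gains, and integrator, pushes two interpenetrating spheres apart with a spring-damper force; the very high stiffness needed to keep penetrations small is exactly what makes the resulting equations stiff.

```python
# Penalty-based contact sketch: when two spheres interpenetrate, a
# spring-damper force pushes them apart (cf. McNeely et al. 1999).
# Geometry, gains, and the integrator are illustrative assumptions.
import numpy as np

K_SPRING, C_DAMP, DT = 5000.0, 10.0, 1e-3  # stiffness, damping, time step

def penalty_force(p_a, p_b, v_a, v_b, r_a, r_b):
    """Spring-damper penalty force on body A from contact with body B."""
    d = p_a - p_b
    dist = np.linalg.norm(d)
    depth = (r_a + r_b) - dist          # > 0 means interpenetration
    if depth <= 0.0:
        return np.zeros(3)              # no contact, no force
    n = d / dist                        # contact normal, B -> A
    v_rel = np.dot(v_a - v_b, n)        # approach speed along the normal
    return (K_SPRING * depth - C_DAMP * v_rel) * n

# One explicit-Euler step for body A (unit mass); very high K_SPRING makes
# this integration stiff -- the instability the text attributes to the method.
p_a, v_a = np.array([0.0, 0.0, 0.9]), np.array([0.0, 0.0, -1.0])
p_b, v_b = np.zeros(3), np.zeros(3)
f = penalty_force(p_a, p_b, v_a, v_b, r_a=0.5, r_b=0.5)
v_a += f * DT
p_a += v_a * DT
```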

Thus, although various algorithms for physics-based modeling have evolved over the years, simulating realistic behavior among complex parts interactively and accurately is still a challenging task.

4 Review of virtual assembly applications

Progress in constraint modeling and physics-based modeling has supported substantial research activity in the area of virtual assembly simulations. In this paper, we categorize these assembly applications as either constraint-based or physics-based systems.

4.1 Constraint-based assembly applications

The first category consists of systems that use constraints to place parts in their final position and orientation in the assembly. Constraints in the context of this research are of two types. The first are positional constraints, which are pre-defined final part positions. The second are geometric constraints that relate part features and are applied when related objects are in proximity. Geometric constraints are useful in precise part positioning tasks in a virtual environment where physical constraints are absent (Wang et al. 2003; Marcelino et al. 2003). Constraint-based methods summarized in Sect. 3.2 are used to solve for relative object movements.

4.1.1 Systems using positional constraints

IVY (Inventor Virtual Assembly), developed by Kuehne and Oliver (1995), used the IRIS Open Inventor graphics library from Silicon Graphics and allowed designers to interactively verify and evaluate the assembly characteristics of components directly from a CAD package. The goal of IVY was to encourage designers to evaluate assembly considerations during the design process to enable design-for-assembly (DFA). Once the assembly was completed, the application rendered a final animation of the assembly steps in a desktop environment.

The high cost of VR systems encouraged researchers to explore the use of personal computers (PCs) for VR-based assembly simulations. A PC-based system, “Vshop” (Fig. 2), was developed by Pere et al. (1996) for mechanical assembly training in virtual environments. The research focused on exploring PC-based systems as a low-cost alternative and on utilizing commercial libraries for easy creation of interactive VR software. The system implemented bounding-box collision detection to prevent model interpenetration, provided grasping force feedback, and recognized gestures using a Rutgers Master II haptic exoskeleton. Hand gestures were used for tasks such as toggling navigation and moving forward/backward in the environment.

Fig. 2 VShop user interface

An experimental study investigating the potential benefits of VR environments in supporting assembly planning was conducted by Ye et al. (1999). A non-immersive desktop environment and an immersive CAVE (Cruz-Neira et al. 1992, 1993) environment were evaluated for virtual assembly planning. The desktop VR environment consisted of a Silicon Graphics workstation; the CAVE environment was implemented with an IRIS Performer CAVE interface and provided the subjects with a more immersive sense of virtual assemblies and parts. The experiment compared assembly operations in a traditional engineering environment and in immersive and non-immersive VR environments; the three conditions differed in how the assembly was presented and handled. The assembly task was to generate an assembly sequence for an air-cylinder assembly (Fig. 3) consisting of 34 parts. The results of the human subject study indicated that subjects performed better in the VEs than in the traditional engineering environment on tasks related to assembly planning.

Fig. 3 Presentation of the air-cylinder assembly in Ye’s application

Anthropometric data was utilized to construct virtual human models for addressing ergonomic issues that arise during assembly (Bullinger et al. 2000). A Head Mounted Display (HMD) was used for stereo viewing, and a data glove was used for gesture recognition. Head and hand tracking was implemented using magnetic trackers. While performing assembly tasks, the users could see their human model in the virtual environment. The system calculated the time and cost involved in assembly and also produced a script file describing the sequence of actions performed by the user to assemble the product.

An industrial study was performed at BMW to verify assembly and maintenance processes using virtual prototypes (Gomes de Sa et al. 1999). A CyberTouch glove device was used for gesture recognition, part manipulation and tactile force feedback. A proximity snapping technique was used for part placement; the system accepted voice input and gave acoustic feedback conveying the material properties of colliding objects. Gestures from the glove device were also used for navigating the virtual environment. Five groups with diverse backgrounds participated in the user study, which concluded that force feedback is crucial when performing virtual assembly tasks.

4.1.2 Systems using geometric constraints

One of the early attempts at utilizing geometric constraints to achieve accurate 3D positioning of solid models was demonstrated by Fa et al. in 1993. The concept of allowable motion was proposed to constrain the free 3D manipulation of the solid model. Simple constraints such as against, coincident, etc. were automatically recognized, and the system computed relative motion of objects based on available constraints.

VADE (Virtual Assembly Design Environment; Jayaram et al. 1997, 1999, 2000a, b; Taylor et al. 2000), developed in collaboration with NIST and Washington State University, utilized constraint-based modeling (Wang et al. 2003) for assembly simulations. The system used Pro/Toolkit to import assembly data (transformation matrices, geometric constraints, assembly hierarchy, etc.) from CAD and perform assembly operations in the virtual environment. Users could perform dual-handed assembly and dexterous manipulation of objects (Fig. 4). A CyberGrasp haptic device was used for tactile feedback during grasping. A physics-based algorithm with limited capabilities was later added to VADE to simulate realistic part behavior (Wang et al. 2001), resulting in a hybrid approach in which object motion is guided by both physical and geometric constraints simultaneously. Stereo vision was provided by an HMD or an Immersadesk (Czernuszenko et al. 1997) system. Commercial software tools were added to the system to perform ergonomic evaluation during assembly (Shaikh et al. 2004; Jayaram et al. 2006a). The VADE system was used to conduct industry case studies and demonstrate the downstream value of virtual assembly simulations in applications such as ergonomics, assembly installation, process planning, and serviceability (Jayaram et al. 2007).

Fig. 4 VADE usage scenarios

Different realistic hand grasping patterns involving complex CAD models have been explored by Wan et al. (2004a) and Zhu et al. (2004) using MIVAS, a multi-modal immersive virtual assembly system. They created a detailed geometric model of the hand using metaball modeling (Jin et al. 2000; Guy and Wyvill 1995) and tessellated it to create a graphic representation, which was texture-mapped with images captured from a real human hand (Wan et al. 2004b). A three-layer model (skeletal, muscle and skin layers) was adopted to simulate deformation of the virtual hand using simple kinematic models. Hand-to-part collision detection and force computations were performed using the fast but less accurate VPS software (McNeely et al. 1999), while part-to-part collision detection was implemented using the RAPID algorithm (Gottschalk et al. 1996). Geometric constraints were utilized in combination with collision detection to calculate allowable part motion and accurate part placement. Users could feel the size and shape of digital CAD models via the CyberGrasp haptic device from Immersion Corporation (http://www.immersion.com/).

Commercial constraint solvers such as D-Cubed (http://www.plm.automation.siemens.com/en_us/products/open/d-cubed/index.shtml) have also been utilized to simulate kinematic behavior in constraint-based assembly simulations. Marcelino et al. (2003) developed a constraint manager for performing maintainability assessments using virtual prototypes. Instead of importing geometric constraints from CAD systems using proprietary toolkits, a constraint recognition algorithm was developed that examined part geometries (surfaces, edges, etc.) within a certain proximity to predict possible assembly constraints. A geometric constraint approach was chosen to achieve real-time performance in a realistic kinematic simulation. The system (Fig. 5) imported B-Rep CAD data using the Parasolid (http://www.ugs.com/en_us/products/open/parasolid/index.shtml) geometry format. The constraint manager was capable of validating existing constraints, determining broken constraints and enforcing existing constraints in the system. The constraint recognition algorithm required extensive model preprocessing, in which bounding boxes were added to all surfaces of the objects before they could be imported.

Fig. 5 Marcelino’s constraint manager interface

The concept of assembly ports (Jung et al. 1998; Singh and Bettig 2004) in combination with geometric constraints has been used by researchers for assembly and tolerance analysis. Liu and Tan (2005) created a system that used assembly ports containing information about the mating part surfaces, for example geometric and tolerance information, assembly direction and port type (hole, pin, key, etc.). If parts were modified by a design team, the system used the assembly port information to analyze whether the new designs could still be assembled successfully. Different rules were created (proximity, orientation, port type and parameter matching) for applying constraints among parts. Gesture recognition was implemented using a CyberGlove device. A user study confirmed that constraint-based modeling was beneficial for users performing precise assembly positioning tasks (Liu and Tan 2007).

Attempts have also been made to integrate CAD systems with virtual assembly and maintenance simulations (Jayaram et al. 1999, 2006b). A CAD-linked virtual assembly environment was developed by Wang et al. (2006), which utilized constraint-based modeling for assembly. The desktop-based system ran as a standalone process and maintained communication with Autodesk Inventor® CAD software. Low level-of-detail (LOD) proxy representations of CAD models were used for visualization in the virtual environment. The assembly system required persistent communication with the CAD system through proprietary APIs to access information such as assembly structure, constraints, B-Rep geometry and object properties. The concept of a proxy entity was proposed, which allowed the system to map related CAD entities (surfaces, edges, etc.) to their corresponding triangle mesh representations in VR.

Yang et al. (2007) used constraint-based modeling for assembly path planning and analysis in their integrated virtual assembly environment (IVAE). Assembly tree data, geometric part data and predefined geometric constraints could be imported from different parametric CAD systems using a special data converter. A data glove and a hand tracker were used for free manipulation of objects in the virtual environment. The automatic constraint recognition algorithm activated pre-defined constraints when the bounding boxes of interrelated parts collided; users were required to confirm each constraint before it was applied.

4.2 Physics-based modeling applications

The second category of applications includes assembly systems that simulate real world physical properties, friction, and contact forces to assemble parts in a virtual environment. These applications allow users to move parts freely in the environment. When a collision is detected, physics-based modeling algorithms are used to calculate subsequent part trajectories to allow for realistic simulation.

Assembly operators working on the shop floor rely on physical constraints among mating part surfaces to complete assembly tasks. In addition, physical constraint simulation is important during assembly planning as well as maintenance assessments, to check whether there is enough room for parts and tooling. One of the early attempts at implementing physics-based modeling to simulate part behavior was made by Gupta (Gupta and Zeltzer 1995; Gupta et al. 1997). The desktop-based system, called VEDA (Virtual Environment for Design for Assembly), used a dual Phantom® interface for interaction and provided haptic, auditory and stereo cues for part interaction. However, to maintain an interactive update rate, the system could render multimodal interactions among only 4–5 polygons and handled only 2D models.

Collision detection and physical constraint simulation among complex 3D models was attempted by Fröhlich et al. (2000). They used the CORIOLIS™ (Baraff 1995) physics-based simulation algorithm to develop an interactive virtual assembly environment using the Responsive Workbench (Krüger and Fröhlich 1994). Different configurations of spring-based virtual tools were developed for interacting with objects. The system implemented the workbench in its table-top configuration and supported multiple tracked hands and users manipulating an object. The system’s update rate dropped below interactive levels when several hundred collisions occurred simultaneously, and a clearance tolerance of at least five percent was necessary to avoid numerical instabilities that sometimes resulted in system failure.

Researchers at the Georgia Institute of Technology utilized an approach similar to that of Gupta et al. (1997) to create a desktop-based virtual assembly system called HIDRA (Haptic Integrated Dis/Re-assembly Analysis; Coutee et al. 2001; Coutee and Bras 2002). The system used GHOST (General Haptic Open Software Toolkit) from Sensable Technologies (http://www.sensable.com/) and a dual Phantom® configuration for part grasping. OpenGL was used for visualization on a 2D monitor, and V-Clip, in conjunction with Qhull and SWIFT++, was used for collision detection. Because the system (Fig. 6) treated the user’s fingertip as a point rather than a surface, users had difficulty manipulating complicated geometries. Also, using the GHOST SDK for physical modeling combined with the “polygon soup” based collision detection of SWIFT++, HIDRA had problems handling non-convex CAD geometry.

Fig. 6 Geometry in HIDRA

Kim and Vance (2003, 2004a) evaluated several collision detection and physics-based algorithms and found the VPS software (McNeely et al. 1999) from The Boeing Company to be the most suitable for meeting rigorous real-time requirements while operating on complex 3D CAD geometry. Their system used approximated triangulated representations of complex CAD models to generate a volumetric representation for collision computations. Four- and six-sided CAVE systems were supported, and a virtual arm model was constructed using multiple position trackers placed on the user’s wrist, forearm and upper arm (Fig. 7). Dual-handed assembly was supported, and gesture recognition was performed using wireless data glove devices from 5DT Corporation (http://www.5dt.com/).

Fig. 7 Data glove in a six-sided CAVE

Techniques developed during this research were expanded to facilitate collaborative assembly over the internet (Kim and Vance 2004b). A combination of peer-to-peer and client–server network architectures was developed to maintain the stability and consistency of the system data. A “Release-but-not-released” (RNR) method was developed to allow computers with different performance capabilities to participate in the network. The system architecture required each virtual environment to be connected to a local PC to ensure the 1 kHz haptic update rate needed for smooth haptic interaction. Volumetric approximation of complex CAD models resulted in a fast but inaccurate simulation (with errors up to ~15 mm) and thus did not allow low-clearance parts to be assembled.

A dual-handed haptic interface (Fig. 8) for assembly/disassembly was created by Seth et al. (2005, 2006). This interface was integrated into SHARP (System for Haptic Assembly and Realistic Prototyping) and allowed users to simultaneously manipulate and orient CAD models to simulate dual-handed assembly operations. Collision force feedback was provided to the user during assembly. Graphics rendering was implemented with SGI Performer, the Open Haptics Toolkit library was used to communicate with the haptic devices, and VPS (McNeely et al. 1999) was used for collision detection and physics-based modeling. Using VRJuggler (Just et al. 1998) as an application platform, the system could operate on different VR system configurations, including low-cost desktop configurations, the Barco Baron (http://www.barco.com/entertainment/en/products/product.asp?element=1192), Power Walls, and four-sided and six-sided CAVE systems. Different modules were created to address issues related to maintenance (swept volumes) and training (record and play), and to facilitate collaboration (networked communication). Industrial applications of this work demonstrated promising results for simulating the assembly of complex CAD models from a tractor hitch. This research was later expanded to gain collision detection accuracy at the cost of computation speed for simulating low-clearance assembly: SHARP demonstrated a new approach (Seth et al. 2007) that simulated physical constraints using accurate B-Rep data from CAD systems, allowing collisions to be detected with an accuracy of 0.0001 mm. Although physical constraints were simulated very accurately, users could not manipulate parts with the required precision in very low-clearance scenarios because of the noise associated with 3D input devices. Geometric constraints were therefore utilized in combination with physics to achieve the precise part manipulation required for low-clearance assembly.

Fig. 8 Dual-handed haptic interface in SHARP

Garbaya and Zaldivar-Colado (2007) created a physics-based virtual assembly system that used a spring-damper model to provide the user with collision and contact forces during the mating phase of an assembly operation. The PhysX® software toolkit was used for collision detection and physics-based modeling. Grasping force feedback was provided using a CyberGrasp™ haptic device, and collision forces were provided using a CyberForce™ haptic device from Immersion Corporation. An experimental study was conducted to assess system effectiveness and user performance in real and virtual environments. The study concluded that user performance improved when inter-part collision forces were rendered in addition to grasping forces.

HAMMS (Haptic Assembly, Manufacturing and Machining System) was developed by researchers at Heriot-Watt University to explore the use of immersive technology and haptics in assembly planning (Ritchie et al. 2008). The system uses a Phantom® device and stereo glasses, and is based on the OpenHaptics Toolkit, VTK and AGEIA PhysX® software. The unique aspect of this application is its ability to log user interaction; this tracking data can be recorded and examined later to generate an assembly procedure. This work is ongoing, with further evaluations to be performed.

5 Haptic interaction

Today’s virtual assembly environments are capable of simulating visual realism to a very high level. The next big challenge for the virtual prototyping community is simulating realistic interaction. Haptics is an evolving technology that offers a revolutionary approach to realistic interaction in VEs. “Haptics means both force feedback (simulating object hardness, weight, and inertia) and tactile feedback (simulating surface contact geometry, smoothness, slippage and temperature)” (Burdea 1999). Force cues provided by haptics technology can help designers feel and better understand virtual objects by supplementing visual and auditory cues, creating an improved sense of presence in the virtual environment (Coutee and Bras 2004; Lim et al. 2007a, b). Research has shown that the addition of haptics to virtual environments can improve task efficiency (Burdea 1999; Volkov and Vance 2001).

Highly efficient physics-based methods capable of maintaining high update rates are generally used to implement haptic feedback in virtual assembly simulations. Various approaches to haptic feedback for assembly have been presented, focusing on tactile (Pere et al. 1996; Jayaram et al. 1999, 2006b; Wan et al. 2004a; Regnbrecht et al. 2005), collision (Kim and Vance 2004b; Seth et al. 2005, 2006) and gravitational force feedback (Coutee and Bras 2004; Gurocak et al. 2002). The high update rate (~1 kHz) required for effective haptics has always been a challenge in integrating this technology. As stated earlier, most physics-based algorithms use highly coarse model representations to maintain these update rates; the resulting lack of part shape accuracy presents problems when detailed contact information is necessary. Simulating complex part interactions such as grasping is also demanding, as the simulation must detect collisions and generate contact forces accurately for each individual finger (Wan et al. 2004a; Zhu et al. 2004; Jayaram et al. 2006b; Zachmann and Rettig 2001). Maintaining haptic update rates (~1 kHz) while performing highly accurate collision/physics computations in complex interactive simulations such as assembly remains a challenge for the community.
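
One common way to reconcile the ~1 kHz haptic requirement with slower, more accurate collision computation is to decouple the two loops: the haptic thread only reads the most recently computed force from shared state, while a slower physics thread refreshes it. The sketch below illustrates this pattern; the structure and names are illustrative assumptions, not any cited system’s architecture.

```python
# Decoupled update-rate sketch: a ~1 kHz haptic loop reads forces from shared
# state that a slower (~30 Hz) collision/physics loop refreshes.
import threading
import time

class SharedContactState:
    """Latest contact force, written by physics, read by haptics."""
    def __init__(self):
        self._lock = threading.Lock()
        self._force = (0.0, 0.0, 0.0)

    def write(self, force):
        with self._lock:
            self._force = force

    def read(self):
        with self._lock:
            return self._force

def haptic_loop(state, stop):            # target ~1 kHz
    while not stop.is_set():
        force = state.read()             # cheap lookup only; no collision math
        # send_to_device(force)          # device I/O would go here
        time.sleep(0.001)

def physics_loop(state, stop):           # target ~30 Hz
    while not stop.is_set():
        # force = accurate_collision_response(scene)  # expensive computation
        state.write((0.0, 0.0, -9.8))
        time.sleep(0.033)

stop, state = threading.Event(), SharedContactState()
threading.Thread(target=physics_loop, args=(state, stop), daemon=True).start()
threading.Thread(target=haptic_loop, args=(state, stop), daemon=True).start()
time.sleep(0.1)
stop.set()                               # run briefly, then shut down
```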

In addition, the haptics technology currently available has several limitations. Non-portable haptic devices such as Sensable Technologies’ PHANToM® (http://www.sensable.com/; Massie and Salisbury 1994), Immersion’s CyberForce™ (http://www.immersion.com/), the Haption Virtuose (http://www.haption.com/index.php?lang=eng), and the Novint Falcon (http://www.novintfalcon.com/) (Yang et al. 2007), among others (Millman et al. 1993; Buttolo and Hannaford 1995), have workspace limitations that restrict user motion in the environment. Additionally, because these devices need to be stably mounted, their use with immersive virtual environments becomes infeasible.

In contrast, wearable haptic gloves and exoskeleton devices such as the CyberTouch™ and CyberGrasp™ (http://www.immersion.com/) and the Rutgers Master II (Bouzit et al. 2002), among others (Gurocak et al. 2002), provide a much larger workspace for interaction. However, they provide force feedback only to the fingers and palm, and thus are suitable only for tasks that involve dexterous manipulation. In addition, the weight and cable attachments of such devices make their use unwieldy. A detailed discussion of haptics issues can be found in Burdea (2000). These challenges, among others, must be addressed before the community can explore the real potential of haptics technology in virtual prototyping.

6 CAD-VR data exchange

CAD-VR data exchange is one of the most important issues faced by the virtual prototyping community. The CAD systems used by industry to develop product models are generally unsuitable for producing optimal representations for VR applications. Most VR applications use scene graphs (e.g., OpenSceneGraph, OpenSG, OpenGL Performer) for visualization; these are hierarchical data structures composed of triangulated mesh geometry, spatial transforms, lighting, material properties, and other metadata. Scene graph renderers provide the VR application with methods to exploit this data structure to ensure interactive frame rates. Translating CAD data into a scene graph requires tessellation of the individual precise parametric surfaces and/or B-Rep solids, often multiple times, to produce several “level-of-detail” (LOD) polygonal representations of each part. During this translation, the parametric information of the CAD model (procedural modeling history and constraints) generally is not imported into the VR application. In addition, pre-existing texture maps may not be included in these visually optimized model representations. In virtual assembly simulations, geometric constraint-based applications that depend on parametric model definitions to define inter-part constraint relationships generally must deal with two representations of the same model: one for visualization and another for the constraint modeling algorithms that perform assembly. Similarly, physics modeling applications also use dual model representations: a high-fidelity model for visualization and a coarser representation for interactive physics calculations (Fröhlich et al. 2000; Seth et al. 2005).
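
The dual-representation problem can be pictured with a small data structure: a scene-graph part node that carries several tessellated LOD meshes for rendering plus a coarser proxy for collision/physics. This is a minimal sketch; all class and field names are illustrative assumptions, not the API of any scene graph named above.

```python
# Scene-graph node sketch: per-part LOD meshes for rendering alongside a
# coarse collision proxy. Names and fields are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Mesh:
    vertices: list                          # triangulated geometry from CAD
    triangles: list

@dataclass
class PartNode:
    name: str
    transform: list                          # 4x4 spatial transform, row-major
    lods: dict = field(default_factory=dict)  # max on-screen size (px) -> Mesh
    collision_proxy: Mesh = None             # coarser mesh for physics

    def mesh_for(self, projected_size_px: float) -> Mesh:
        """Pick the coarsest LOD that still looks acceptable at this size."""
        for threshold in sorted(self.lods):
            if projected_size_px <= threshold:
                return self.lods[threshold]
        return self.lods[max(self.lods)]     # largest on screen: finest mesh

node = PartNode("hub", transform=[1, 0, 0, 0, 0, 1, 0, 0,
                                  0, 0, 1, 0, 0, 0, 0, 1])
node.lods[100] = Mesh([], [])     # coarse mesh for small on-screen sizes
node.lods[1000] = Mesh([], [])    # fine mesh when the part fills the view
print(node.mesh_for(80) is node.lods[100])  # True: coarse LOD selected
```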

Commercial CAD systems (for example AutoCAD, UGS, Dassault Systèmes products) have made various attempts to embed capabilities for immersive and desktop stereo visualization into available commercial software. Attempts have also been made by academia to provide haptic interaction and immersive visualization for assembly/disassembly applications built on commercial CAD systems (Jayaram et al. 2006b; Wang et al. 2006). Thus, although addressed to some degree by industry and academia, there is still no general, non-proprietary way to convert CAD assemblies into a representation suitable for VR.

Additionally, today’s VR applications have matured to a level where they allow users to identify meaningful design changes; however, translating these changes back into CAE applications (such as CAD systems) is currently not possible. The efforts mentioned earlier represent a promising basis for this research, but as yet this gap remains a major bottleneck to broader adoption of VR.

7 Summary

Many virtual assembly applications have been developed by various research groups, each with different features and capabilities. The review in the previous section indicates that initial efforts at simulating assembly used pre-defined transformation matrices to position parts in the virtual scene. In such systems, as users moved parts in the environment, the parts were snapped into place based on collision or proximity criteria (Dewar et al. 1997b; Fig. 9). Most of the early applications did not implement collision detection among objects, which allowed parts to interpenetrate during the simulation.

Fig. 9 Data transfer in positional constraint applications

Later, researchers used pre-defined geometric constraint relationships imported from a CAD system to assemble parts. Here, the pre-defined constraints were activated when related parts came close to each other in the environment; once the geometric constraints were recognized, constrained motion could be visualized, and the parts were then assembled using pre-defined final positions (Jayaram et al. 1999), as sketched after Fig. 10 below.

Constraint-based approaches have shown promising results. They have lower computation and memory requirements than physics-based methods and, when combined with accurate models (e.g., parametric surface representations or B-Rep solids), allow users to manipulate and position parts in an assembly with very high fidelity. However, some of these applications required special CAD toolkits to extract the CAD metadata (Fig. 10) needed to prepare an assembly scenario (Jayaram et al. 1999; Wan et al. 2004a; Chen et al. 2005). These special data requirements and the dependence on specific CAD systems prevented widespread acceptance of such applications. Many constraint-based virtual assembly systems also incorporated collision detection to prevent model interpenetration during assembly. Advanced constraint-based methods identify, validate and apply constraints on the fly and thus do not require importing predefined CAD constraints (Marcelino et al. 2003; Zhang et al. 2005). Although constraint-based modeling has proven successful in simulating object kinematics for assembly, it cannot simulate realistic behavior involving physical contacts and rigid-body dynamics.

Fig. 10 Data transfer in geometric constraint-based applications
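
A minimal sketch of the proximity-based activation just described: when two mating features come within a capture radius, the predefined constraint fires and the moving part snaps to its constrained pose. The names and the capture threshold are illustrative assumptions.

```python
# Proximity-triggered constraint activation sketch; threshold and names
# are illustrative assumptions, not from any cited system.
import numpy as np

CAPTURE_RADIUS = 0.01  # m; proximity at which a predefined constraint fires

def try_activate(part_feature_pos, mate_feature_pos, constrained_pose):
    """Return the snapped pose if the features are close enough, else None."""
    gap = np.linalg.norm(np.asarray(part_feature_pos)
                         - np.asarray(mate_feature_pos))
    if gap <= CAPTURE_RADIUS:
        return constrained_pose   # constraint recognized: apply final placement
    return None

pose = try_activate([0.0, 0.0, 0.005], [0.0, 0.0, 0.0],
                    constrained_pose="snapped")
print(pose)  # "snapped": features within 10 mm, so the constraint applies
```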

Other research incorporated simulation of the real-world physical behavior of parts (Fig. 11). Physics-based methods allow for testing scenarios previously possible only with physical mock-ups, by calculating part trajectories subsequent to collisions and possibly incorporating friction, gravity, and other forces that act on the objects. Physics-based solvers generally sacrifice computational accuracy to keep the update rate of the visual simulation realistic (Jiménez et al. 2001). Most previous efforts used simplified, approximate polygon mesh representations of CAD models for faster collision and physics calculations, and some generated even coarser representations by using cubic voxel elements for physics and collision calculations (McNeely et al. 1999; Garcia-Alonso et al. 1994; Kaufman et al. 1993), as sketched after Fig. 11 below. Assembly configurations like a tight peg in a hole caused several hundred collisions to occur, which often resulted in numerical instabilities in the system (Fröhlich et al. 2000). Due to these limitations, few attempts rely on simulating physical constraints for assembly/disassembly simulations.

Fig. 11 Data transfer in physics-based applications
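
The voxel-based coarsening mentioned above (cf. VPS) can be illustrated in a few lines: sampled surface points are binned into cubic voxels, and a collision query reduces to a set intersection. This is a minimal sketch with an assumed resolution and names; the ~15 mm errors reported earlier for volumetric approximation follow directly from the voxel size.

```python
# Voxel-based collision approximation sketch; resolution and names are
# illustrative assumptions (cf. McNeely et al. 1999).
import numpy as np

def voxelize_points(points, voxel_size):
    """Map sampled surface points to the set of occupied cubic voxels."""
    cells = np.floor(np.asarray(points) / voxel_size).astype(int)
    return {tuple(c) for c in cells}

def voxels_collide(vox_a, vox_b):
    """Two parts collide (approximately) if they share any occupied voxel."""
    return not vox_a.isdisjoint(vox_b)

# Coarser voxels are faster but less accurate: any two points inside the
# same 10-unit cell are reported as colliding, whatever their true gap.
a = voxelize_points([[0.0, 0.0, 0.0], [9.0, 0.0, 0.0]], voxel_size=10.0)
b = voxelize_points([[8.0, 1.0, 1.0]], voxel_size=10.0)
print(voxels_collide(a, b))  # True: both occupy voxel (0, 0, 0)
```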

In addition, physics-based methods lay the foundation for haptic interfaces in virtual prototyping applications. Such interfaces allow users to touch and feel the virtual models present in the simulation. Haptic interfaces require much higher update rates (~1 kHz), which forces trade-offs in the accuracy of collision and physics computations. Because assembly operations involve mating with small clearances, it is generally not possible to assemble low-clearance parts at their actual dimensions using physics-based methods; nominal part sizes may have to be modified to complete tight-tolerance assembly tasks (Baraff 1995; Seth et al. 2006). The demand for highly accurate physics/collision results while maintaining simulation interactivity is still a challenge for the community. In prototyping applications like virtual assembly, attempts have been made to provide collision and tactile forces to users for more intuitive interaction with the environment (Zhu et al. 2004; Coutee et al. 2001; Kim and Vance 2004b; Seth et al. 2005, 2006).

8 Discussion and future directions

Collision detection algorithms unquestionably form the first step toward building a virtual assembly simulation system. Although collision detection adds to simulation realism by preventing part interpenetration, it alone does not model part behavior or define the relative part trajectories necessary to facilitate the assembly operation. Part interaction methods are key to a successful immersive virtual assembly experience.

In general, constraint-based approaches provide capabilities for precise part positioning in VEs, while physics-based approaches enable virtual mock-ups to behave like their physical counterparts. Identifying physical constraints among an arbitrary set of complex CAD models in a dynamic virtual simulation is computationally demanding: collision and physics responses must be calculated fast enough to keep up with the graphics update rate (~30 Hz) of the simulation. The two approaches serve different purposes, both of which are crucial to making a virtual assembly simulation successful.

A research direction that appears promising is to develop a hybrid method combining physics-based and constraint-based algorithms. The resulting virtual assembly application would simulate realistic environment behavior for an enhanced sense of presence and would also position parts precisely in a given assembly (Table 1). An attempt has been made to add a physics-based algorithm with limited capabilities to an existing constraint-based assembly system (Wang et al. 2001); however, limitations of the physics algorithm, part snapping, and excessive metadata requirements imposed by a CAD-system-dependent toolkit prevented its widespread impact.

Table 1 Comparison of assembly simulation methods

Such an approach would incorporate physics-based methods for simulating realistic part behavior, combined with automatic constraint identification and application and with haptic interaction. Constraint-based methods would come into play when low-clearance assembly must be performed, allowing precise movement of parts into their final positions. The challenge in this approach is that the physics-based methods must take into account the presence of a geometric constraint, and the “hybrid solver” must calculate part trajectories such that both physical and geometric constraints are satisfied at any given time.
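
A minimal sketch of this hybrid idea, under assumed names and a deliberately simple projection scheme: run an unconstrained physics step, then project the resulting velocity onto the motion subspace an active geometric constraint allows, so both kinds of constraint hold at the end of the step. This is illustrative only, not a published solver.

```python
# Hybrid physics/constraint step sketch: physics update followed by velocity
# projection onto the constraint's allowed direction. Names and the
# projection scheme are illustrative assumptions.
import numpy as np

def hybrid_step(pos, vel, force, mass, dt, constraint_axis=None):
    """Explicit physics step; if a constraint is active, velocity is projected
    onto the constraint's allowed direction (e.g., a cylindrical axis)."""
    vel = vel + (force / mass) * dt          # unconstrained physics update
    if constraint_axis is not None:
        a = constraint_axis / np.linalg.norm(constraint_axis)
        vel = np.dot(vel, a) * a             # keep only motion along the axis
    return pos + vel * dt, vel

# Example: once a concentric (peg-in-hole) constraint is detected, the part
# may only translate along the hole axis z, whatever contact forces act on it.
pos, vel = np.zeros(3), np.zeros(3)
pos, vel = hybrid_step(pos, vel, force=np.array([1.0, 0.5, -2.0]),
                       mass=1.0, dt=0.01,
                       constraint_axis=np.array([0.0, 0.0, 1.0]))
print(pos, vel)  # motion survives only along the constrained z axis
```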

As technology progresses, the cost of computing and visualization hardware will continue to fall while its capabilities increase. It will soon be possible to harness this power to integrate faster and more accurate algorithms into virtual assembly simulations capable of handling assemblies with thousands of parts while incorporating physically accurate part behavior and high-fidelity visual and haptic interfaces.