1 Introduction: issues and bottlenecks

Many scientific disciplines produce massive sets of complex data in a routine way. The process of extracting hidden patterns from such data is generically referred to as “data mining.” It often requires human interaction to further exploit the perceptive and cognitive abilities of users, so as to focus studies on underlying phenomena of interest. Computational fluid dynamics (CFD) is a strong field of application because the study of the characteristics of 3D dynamic structures generated within a flow is of growing importance, especially for flow control. The fields of application are numerous: automotive and aviation design (aerodynamic optimization and trail analysis), urban environment (circulation of air and pollutants), meteorology, oceanography, solar dynamics, planetary magnetism, etc.

1.1 A virtual wind tunnel?

In the last 30 years, the evolution of numerical data processing in terms of computing power, data storage capacity, and algorithmic efficiency in modeling and simulating physical phenomena has led CFD experts to envision the creation of a “virtual wind tunnel”: the numerical data (generated in real time by simulation code) would be uploaded to a 3D interactive environment, making it possible for experts to analyze the evolution of a non-stationary flow “live,” using various interaction tools (Bryson and Levit 1991). The application expert could also interact with the CFD code by modifying simulation parameters on the fly, controlling the numerical experiment through natural interfaces (e.g., changing the geometry of a simulated foil shape and immediately visualizing the effects).

With the current state of available technology, we are still far from this objective, and several obstacles hinder us from reaching it. First is the sheer amount of data involved, which requires storing, transmitting, and representing several gigabytes of information per second, even for the simplest of 3D flows. A number of powerful techniques now exist for structuring and compressing data, making it possible to consider transferring such simulation results in real time from a distant machine (the simulation server) to the user interface. However, interactive computation of these data, as well as their transfer to rendering servers to produce visual, audio, or haptic feedback, appears unattainable for the time being. Nevertheless, existing rendering and interaction devices suggest promising possibilities for future CFD tools. Interactive virtual reality (VR) simulations are expected to drastically improve the research conditions of physicists, providing a wealth of possibilities to perceive and analyze complex phenomena. They should also provide highly favorable environments for the training of future engineers and researchers.

The most complex CFD simulations, where turbulence is fully developed and conditions the phenomenon of interest, can only be approximated using purely statistical approaches; relying on VR is probably not essential in that case. On the other hand, the time–space organization of coherent structures of a flow and its temporal variability are key points for the interpretation and control of the main properties of the simulation. This organization is three-dimensional in space, in addition to being non-stationary. Traditional tools for 3D viewing and volume rendering using lighting, 2D slabs, etc., although very useful, remain of limited effectiveness for this 4D problem. Immersive VR aids in achieving a better understanding of the various physical variables within the CFD results (such as velocity vectors, density, energy, pressure, temperature, rotational speed, vorticity). Since the visualization space is larger, more data can be displayed in order to fill the user’s field of view. Moreover, the exploration stage, which involves locating and isolating relevant areas of the flow, is facilitated by interactive VR navigation. In addition, the analysis of a given simulation often requires users to visualize several fields simultaneously, using various tools (e.g., pressure isosurfaces combined with velocity streamlines). This type of complex visual analysis may be facilitated by an immersive approach (Ziegeler et al. 2001). The fundamental rationale of the present work is thus to study the conditions under which multimodal VR interfaces may complement conventional desktop usage when exploring complex and massive CFD datasets.

1.2 The perception challenge

The sheer size of typical CFD simulations makes it impossible to visualize them, as they are, on current graphics processors without a significant reduction of the amount of data to be displayed, so that interactive rendering does not suffer from significant latency. This operation is called adaptive visualization and can be achieved with the help of several data reduction techniques. Two-dimensional surfaces can be efficiently visualized with traditional algorithms, such as polygonal reduction (Hoppe 1996) or point-based rendering (Levoy and Rusinkiewicz 2000). More specialized techniques include particle-based approaches and volume rendering (Crawfis et al. 2000). Another useful method is to carry out “on-demand” data loading, visualizing only the data that the user perceives or is likely to perceive in the near future, taking into account the context of interaction. In this way, only the data contained in the (culled) volume viewed by an immersed user are loaded into graphics memory. The most relevant approach (but also the most complex one) consists in structuring 3D information specifically for VR exploration. As an example, we developed a fast visualization technique dedicated to the immersive viewing of isosurfaces computed on rectilinear, non-uniform grids. The technique is based on a hierarchical, octree-based partitioning computed in a single pre-processing stage from the input data. Based on this structure, fast isosurface computation restricted to the volume of view can be achieved by quickly rejecting 95% of the octree nodes. Details can be found in Gherbi et al. (2006).
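To make the idea concrete, the following is a minimal Python sketch of this kind of hierarchical culling (with illustrative names; it is not the actual algorithm of Gherbi et al. 2006): octree nodes store precomputed field bounds, and whole sub-trees are rejected when they lie outside a (here spherical) volume of view or cannot contain the isovalue.

```python
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class Node:
    vmin: np.ndarray                  # bounding-box corners of the node
    vmax: np.ndarray
    fmin: float                       # field bounds, precomputed once
    fmax: float
    children: List["Node"] = field(default_factory=list)

def visible_leaves(node, center, radius, isovalue, out):
    """Collect leaves that may contribute to the isosurface in view."""
    # Reject sub-trees outside a spherical volume of view around `center`.
    closest = np.clip(center, node.vmin, node.vmax)
    if np.linalg.norm(closest - center) > radius:
        return
    # Reject sub-trees whose field range cannot contain the isovalue.
    if not (node.fmin <= isovalue <= node.fmax):
        return
    if not node.children:
        out.append(node)              # compute isosurface cells here only
        return
    for child in node.children:
        visible_leaves(child, center, radius, isovalue, out)
```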

Despite numerous and highly desirable efforts to quickly and efficiently visualize rapidly evolving volumes of data, the exploration of complex flow structures often requires the simultaneous representation of many different mathematical values at the same location. This need rapidly leads to a saturation of the user’s visual channel. Moreover, vision is not always best suited for the perception of intermittent time-dependent phenomena or highly local phenomena (intermittency of turbulence, structure breaks resulting from stretching in swirls, etc.). Therefore, the importance of adopting a multisensory VR approach to CFD logically arises.

For example, the audio channel may provide a natural means of perceiving the dynamics of phenomena, such as their periodic, chaotic, or turbulent character. It should be noted that these features cannot easily be perceived visually unless (1) they are in the user’s field of view and (2) the visualization parameters are correctly tuned (e.g., correct isovalues). Spatial auditory perception can be exploited to detect and analyze this type of phenomenon within the volume of the flow, also providing the user with a guide to their spatial location. The sense of touch can also be engaged in a VR setup, through the use of specific devices such as haptic arms or vibro-tactile gloves (Burdea 1996). Nevertheless, the use of haptics for flow exploration calls for very specific considerations. One can exploit haptic feedback to “feel” selected physical characteristics of the flow, such as gradients or pressure fields. It is also possible to guide the hand of the user toward certain areas of interest. This is very different from the common use of haptics for feeling the hard surfaces of virtual objects in a 3D environment. In addition, the very nature of CFD data, with rapidly varying (and sometimes hard to predict) properties in space and time, calls for specific developments in order to provide useful and stable sensorimotor feedbacks to a CFD expert immersed in a VR simulation.

1.3 Identifying user tasks and needs

Overall, research in the field of VR-assisted exploration of scientific data has led to the design of rich and powerful systems from a strictly technical point of view (Van Dam et al. 2000). Nevertheless, the resulting platforms are not judged very usable and, at the very least, have not gained general acceptance in the CFD community. One of the reasons for this lack of widespread acceptance of VR technologies is a simplistic and somewhat naive view of the activity and real needs of CFD users. The techno-centric view focusing on data management and rendering, while necessary, has also led to a relative neglect of user tasks and needs. A state-of-the-art review of the VR-CFD domain quickly reveals that very little exists concerning the evaluation of previous efforts, whether of the underlying design methodology or of the actual impact on potential users. One way to overcome this recurring issue is to explicitly adopt a methodology based on user-centered design (Maguire 2001). This design must be founded on an understanding both of the object of study (e.g., the specificities of a CFD simulation) and of how experts study, or are likely to study, that object with future VR interfaces.

1.4 Focus and contribution of CoRSAIRe/CFD

The present work outlines the main advances of the CoRSAIRe/CFD project, a 3-year government-funded effort gathering experts in VR, CFD analysis, and multimodal supervision. CoRSAIRe set out to create, in two standard cases, a truly usable VR exploration environment by formalizing the relationships between the analysis performed by expert users and the numerical data on which they rely. The focus of the present study is thus the identification of relevant strategies of CFD exploration coupled with adapted data representation and interaction techniques.

The contributions of the work are (a) a systematic methodology to promote user needs in the VR design loop, through a formal task description leading to the identification of current issues in CFD analysis, and (b) the implementation of several adapted multisensory interaction techniques to match the foreseen activity of CFD experts. In particular, new 3D audio and haptic feedbacks have been designed and implemented to aid in the analysis of fluid flow.

The remaining sections of the paper are organized as follows:

Section 2 presents the task analysis process, whose goal is to model the activity of a CFD expert investigating a typical dataset, providing the necessary bridge between tasks and display modalities. This analysis led to several useful observations on the nature of existing investigation methods, anticipating needs that may not be immediately perceived (or at least explicitly formulated) by users. It also provides a means to give recommendations on how an immersive virtual environment should allocate the available modalities.

Section 3 describes the technical issues faced in the process of setting up an immersive flow simulator. It first describes the complete architecture integrating the different VR components into a flexible and efficient experimental test bed. Then, auralization techniques adapted to provide CFD experts with meaningful auditory feedbacks are presented. It then focuses on designing efficient haptic rendering for flow dynamics, based on structural observations. Finally, Sect. 4 describes ongoing evaluation experiments being carried out to assess the potential benefits of multimodal VR exploration for CFD.

2 An initial step for design: clarifying field practices and user needs

2.1 Methods

Our study first focused on the use of existing desktop-based tools commonly used to explore large numerical simulations of fluid flows. Interviews were carried out in order to gauge the impact, as perceived by users, of introducing innovative technologies (i.e., VR and multimodal interfaces) into a field already replete with working tools and practices. Five researchers in CFD, with an average age of 43.5 years (SD = 6.8 years) and an average of 10.5 years of experience in the use of CFD software, took part in the study. Two separate techniques were used for the analysis of user needs: semi-directed interviews and observation of work sessions. In the first stage, we carried out six semi-directed interviews with the subjects in the workplace, recorded them, and transcribed them verbatim. The interviews, both confidential and anonymous, focused on three major points: ongoing research, experience and habits in the use of flow simulations, and possible future uses of VR and the consequences of its use on everyday work. The interview corpora were subjected to a cognitive discursive analysis (Ghiglione et al. 1998) using the Tropes program developed by Acetic Software. This analysis focused on identifying which properties (e.g., physical, mathematical) were deemed relevant in the study of a CFD simulation, and which strategies were used when navigating these complex datasets. In the second stage, we recorded and analyzed four work sessions in which individual subjects used their existing tools (AVS or TecPlot) to generate and explore visual renderings of numerical flow simulations. The simulations varied between subjects, since they were representative of problems with which the subjects were very familiar and which were part of their ongoing work. The video recordings served as a basis to analyze and model the exploration task. Verbal protocols (Ericsson and Simon 1984/1993) were collected throughout, taking explicit verbalization of the tasks carried out as the unit of analysis.

Coding the subjects’ actions and verbalizations allowed us to construct a task model using the hierarchical task analysis methodology (HTA, see Annett 2003). HTA is based on the premise that tasks may be described following a hierarchy of goals and subgoals. This allows for a close examination of what the user’s goals are and, more importantly for design, what the user needs in order to achieve each goal in terms of information, product functions, etc.

2.2 Results

The user and task analysis led to the construction of three elements to assist the design of a multimodal VR application for the exploration of flow simulations: (1) a model of the tasks carried out in the exploration of existing simulations, chosen by subjects as representative of their ongoing work; (2) a model of user needs; and (3) a set of principles to guide choices in terms of modal allocation to the various kinds of data used by subjects and the choice of relevant interaction techniques.

2.2.1 Use case: the cavity simulation

To serve as an example of a typical CFD analysis task, it was decided to focus on the simulation of an incompressible flow inside an open cavity. This cavity flow was exploited throughout the CoRSAIRe/CFD project to experiment with the new multimodal immersive schemes described in Sect. 3. The setup is displayed in Fig. 1. In all subsequent discussion, we will respectively denote x, y, and z, the longitudinal, vertical, and transverse directions of the flow, and V = (v x ,v y ,v z ) the corresponding velocity vector. The cavity is 100 mm long and 50 mm high. The total height of the domain is 125 mm, while the total length is 410 mm. For the boundary conditions at the outlet, the longitudinal velocity component was computed using mass conservation over the domain. The gradient of the other two velocity components was set to zero, and no-slip conditions were used at the walls. A discretization of 256 cells was used in the longitudinal direction and 128 in the spanwise and normal directions. The mesh was refined near the walls and over the cavity in order to obtain a fine resolution on the structures generated by instabilities. To minimize numerical inaccuracy, the greatest size variation between successive cells was 3% over the cavity (5% elsewhere), and the cell aspect ratio was of the order of 1.
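As an illustration of such a refinement constraint, here is a minimal sketch (hypothetical, not the mesh generator actually used) of a geometrically stretched grid axis in which each cell is at most 3% larger than its neighbor:

```python
import numpy as np

def stretched_axis(x0, x1, n, growth=1.03):
    """Node coordinates of n cells whose widths grow geometrically away
    from the wall at x0, each at most `growth` times its neighbor."""
    w = growth ** np.arange(n)            # relative widths 1, g, g^2, ...
    w *= (x1 - x0) / w.sum()              # scale widths to span [x0, x1]
    return x0 + np.concatenate(([0.0], np.cumsum(w)))

# e.g., 128 cells over the 50 mm cavity height, refined toward the wall:
y = stretched_axis(0.0, 50e-3, 128)
```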

Fig. 1 The geometry of the CFD simulation used as a test case

The flow is simulated by numerically solving over time the Navier–Stokes equations for incompressible flow:

$$ \nabla \cdot V = 0 \quad \hbox{(Mass equation)} \qquad (1) $$

$$ \frac{\partial V}{\partial t} + \nabla \cdot (V \, {}^{t}V) = -\frac{1}{\rho_0} \nabla P + \nabla \cdot (\nu \nabla V) \quad \hbox{(Momentum equations)} \qquad (2) $$

V is the (Eulerian) particle velocity, t the time, ρ0 the uniform and constant density, P the pressure, and ν the (constant) kinematic viscosity.

The reader should refer to Gadouin et al. (2001) for details on the numerical solving of these equations in the special case of the cavity of Fig. 1.
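Equation 1 also offers a simple consistency check on a stored snapshot. A minimal sketch follows, assuming uniform grid spacing for brevity (the actual grid is non-uniform and would need the corresponding metric terms):

```python
import numpy as np

def divergence(vx, vy, vz, dx, dy, dz):
    """Discrete div(V) by central differences; for an incompressible
    snapshot (Eq. 1) this should be ~0 up to truncation error.
    vx, vy, vz: 3D arrays indexed (x, y, z)."""
    return (np.gradient(vx, dx, axis=0) +
            np.gradient(vy, dy, axis=1) +
            np.gradient(vz, dz, axis=2))
```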

Several days of supercomputer time were necessary to run the simulated experiment, whose physical duration is on the order of one minute. Velocity vector data resulting from the simulation were stored in two separate sets: (a) every sample point of the complete domain, at a frequency of 30 Hz, and (b) 21 points of special interest, in or near the cavity, at a frequency of 400 Hz. Each step of the simulation occupies about 50 MB of storage space, resulting in a total of about 100 GB.

2.2.2 A task-oriented hierarchical model of current task

The HTA task tree (see Fig. 2) describes the task of exploring the flow simulation as a hierarchy of tasks and subtasks. This model was extracted by combining results from interviews and observations.

Fig. 2 Overall view of the task structure for processing the simulation data

Much of the CFD expert’s work (task 1) rests on formalizing the problem at hand in order to construct a relevant protocol for generating flow data. This task involves exploring the scientific literature to identify relevant physical properties, as well as mathematical tools from existing work that may be adapted to the problem at hand.

For example, interviews with one subject and analysis of the related literature (Podvin et al. 2006; Pastur et al. 2008) showed that building this simulation involved several subtasks. First, the subject chose (with colleagues) to model the flow inside an open cavity as an incompressible flow. This term alludes to a well-known set of physical properties, as well as to a widely accepted mathematical model derived from the Navier–Stokes equations. Second, this model was simplified through the strategic choice of specific numerical schemes, allowing the cancelation of some of the equations’ terms and therefore faster solving of the simulation equations. Third, the mathematical model was solved for given boundary conditions reflecting the geometrical and dynamical conditions of the cavity flow. In short, the custom model was built by specifying unusual and little-studied approximations and initial conditions for a well-known general simulation problem.

Running the simulation then allows one to save snapshots of the flow. The first instants computed represent the simulation’s response to the given initial conditions, that is, a transient stage rather than the “natural” flow behavior. Some subjects referred to this process as “letting structures grow,” suggesting that the desired state of the flow was one where specific structures would be apparent. In the case described here, subjects used their knowledge of the domain literature, past experiments, and visual signatures typical of specific structures in numerical simulations to identify precisely which structures the simulation yielded, i.e., structures known as “Kelvin–Helmholtz rolls,” “pulsating vortices,” and “Taylor–Görtler vortices.” This allowed the researchers to deepen their understanding of flow behavior inside the cavity, validating their hypotheses and strengthening or revising their mental model of flow behavior (Chinn and Brewer 1993).

The HTA model highlights current working practices as highly constrained by the characteristics of GUI-based desktop tools. In particular, CFD experts construct mental models of dynamic 3D structures based on the sequential exploration of flow slabs and instants. In contrast, VR-based multimodal environments may transform existing practice by making the whole set of relevant information accessible to users “in one sitting” through the use of multiple sensory modalities. Although this task model only formalizes existing exploration strategies, we expect VR to fundamentally change their structure by allowing new behaviors to emerge in an immersive environment.

2.2.3 Users’ informational needs

Cognitive discursive analysis results highlighted the importance of pattern recognition in the study of flow properties. For example, the analysis highlighted the concept of a “vortex” as particularly important and related to a specific physical behavior, mathematical formula, and graphical signature. In 3D views, vortices appeared as tubular structures; in 2D views, as a range of elliptical structures. Recognizing such patterns relies on several types of information, and mathematical parameters form the basis of this reasoning. Although the parameters used are highly problem-specific, velocity and vorticity were shown to be frequently used by all subjects. The spatial (2D and 3D) and temporal (4D) distributions of these parameters allow identification of the underlying structures of the flow (e.g., vortices) and their dynamic properties. Data visualization techniques allowed the construction of standard representations such as isosurfaces and isocontours to facilitate pattern recognition by the user. Pattern recognition also relies on the fact that the CFD specialist already knows “what to look for” in terms of graphical signatures, since the specialist is often responsible for building the simulation as well as analyzing its results.

One apparent limitation of existing visualization software is that it only supports detailed visualization of flow properties on 2D “slabs.” In contrast, 3D views only provide information about the overall topology of the flow. Time-dependent properties are therefore accessed through the sequential examination of slabs and 3D views.

2.2.4 Recommendations for modal allocation

André (2000) used the term “modal allocation” to describe the use of specific sensory modalities to present information. This is a necessity when designing multimodal user interfaces, since designers are confronted with a particularly large design space. In particular, the questions posed are “What is the most relevant sensory channel to convey the various pieces of information necessary to the exploration task?” and “What is the most relevant way to present this information in this particular channel?”

Several criteria are involved when proposing a modal allocation scheme, notably (1) hardware and software limitations, (2) task-related information semantics, and (3) user characteristics, e.g., perceptual and cognitive characteristics. Although existing tools provide a wealth of possibilities for data presentation, our findings suggest that relatively few of these are used in the exploration stage itself. Clear identification of user needs thus allows a minimalist approach to product design, simplifying the design process: the development of a VR prototype could limit itself to a few key modalities. Proposing principles for modal allocation implies giving more weight to specific information-modality pairings. Few guidelines exist to guide this process, and none of them can be described as universal, though some provide significant guidance for designing interfaces to explore abstract, numerical, rather than concrete and realistic, data (Nesbitt 2003). However, task analysis may help in providing specific guidance depending on information semantics, i.e., the properties of the displayed information that can be viewed as directly relevant to the task at hand. Specifically, the properties identified were as follows. Variables dependent on space and time need to be superimposed against the more stable elements of the experimental setup (i.e., the flow’s “surroundings”); visual feedback may be used to display these invariant elements. Within the flow, scientific reasoning is structured around a limited number of dynamic objects (e.g., vortices, jets, plumes) which have distinctive spatial and temporal signatures; the use of the visual, audio, and haptic channels thus needs to account for these objects’ shapes and their temporal variations (e.g., through audio or haptic feedbacks). Beyond the identification of these structures, CFD scientists’ work also involves navigating between them in order to reconstruct the global topology and physical behavior of the flow. This implies the use of landmarks, in Lynch’s (1960) sense, to help speed up navigational tasks, which may be presented in any modality, such as Donker et al.’s “torch metaphor” (Donker et al. 2002) or systems for haptic guidance.

The term ‘landmark’, however, only partially reflects the reality of flow exploration. Indeed, flow structures are also time dependent, and CFD experts are more interested in dynamic events than they are in static landmarks. Finally, a logical consequence of this is that users do not access information regarding the behavior of the whole flow at any one time, but need to piece it together by analyzing specific events one by one. This essentially removes the risk of perceptual masking between several sources of information.

3 Multimodal interaction for immersive CFD exploration

Using VR environments for scientific data examination ultimately means that user needs meet technical implementations of VR systems, with the current possibilities and limitations of hardware and software. A user-centered approach should ensure that the design process is not exclusively pushed by technological factors, but also pulled by users’ work-related needs.

This section presents how relevant observations and evaluations on realistic test cases helped in the design of helpful immersive, multimodal feedbacks in the CoRSAIRe/CFD framework.

3.1 VR architecture

Early on, several off-the-shelf pieces of CFD visualization software were identified as potential test beds for the CoRSAIRe experiments. The requirements were:

  • management of classic graphical objects for CFD simulations (streamlines, isosurfaces, stream ribbons, cutting planes, etc.). This was crucial to provide potential users with a familiar environment where standard 3D representations could be easily invoked.

  • distributed multiscreen visualization along with stereo display capabilities: all experiments were to take place in a large immersive room;

  • management of classic VR functionalities (e.g., 3D tracking is mandatory to match the visual and auditory viewpoint with the user’s position or to interact with the simulation with a 3D pointer);

  • easy-to-use API for extending the input/output capabilities according to needs (in our case, extend visual channel with haptics and audio);

  • reasonable cost;

  • a stable platform that could be reused and extended as the project and its offshoots unfolded.

Taking these requirements into account, the final choice was amiraVR, commercialized by Mercury Systems. The global hardware and software configuration is summarized in Fig. 3.

Fig. 3 The VR architecture for the CoRSAIRe/CFD experiments

Our VR setup consists of two large (2 m × 2 m) screens in an L-shape configuration, each displaying the output of a powerful workstation hosting an NVIDIA Quadro FX 3000G graphics card. Outputs are genlocked and framelocked to ensure that stereo frames are properly synchronized at 100 Hz. A third identical workstation serves as a central console and hosts the communication modules between the central CFD applications and the various inputs and outputs of the system. Viewpoint tracking and 3D pointing are performed by an ART-track2 infrared optical tracking system with a refresh rate of 60 Hz. The haptic interface consists of a large 6-DOF VIRTUOSE haptic arm provided by the HAPTION company, controlled through the Virtuose API and running on a dedicated PC. The audio rendering is performed on a separate PC equipped with a sound processing card. Sonification is performed within Max/MSP, a graphical development environment for music and multimedia (http://www.cycling74.com). Communications between the different components (inputs, processing, and outputs) are managed through a dedicated client-server architecture, codenamed VEserver/EVI3d, developed by one of the CoRSAIRe partners (Touraine and Bourdot 2001). Communications themselves rely on message passing using the UDP protocol. For example, a custom protocol was developed, based on the OSC specification (Open Sound Control, see Wright et al. 2003), to provide communication between the sonification modules and the core amiraVR application. This protocol makes it easy to manage the association between the chosen variables and the sonification parameters while removing many configuration details from the amiraVR platform.
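The exact OSC address scheme of CoRSAIRe is not reproduced here; the following hypothetical sketch (using the python-osc package) merely illustrates the kind of message traffic involved, pushing a tracked probe position and an associated sonified variable over UDP to the Max/MSP host:

```python
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical OSC addresses and port: the actual CoRSAIRe schema differed.
client = SimpleUDPClient("127.0.0.1", 9000)      # Max/MSP listening on UDP

def on_tracker_update(probe_id, position, vx):
    """Push a tracked probe position and the variable it sonifies (v_x)."""
    client.send_message(f"/probe/{probe_id}/position", list(position))
    client.send_message(f"/probe/{probe_id}/param/vx", float(vx))

on_tracker_update(0, (0.12, 0.05, 0.02), -0.34)
```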

Inevitable compromises resulted from combining specialized, home-made VR components with a commercial application (trading flexibility and source access for visualization power). Despite these compromises, the resulting architecture proved fruitful and permitted the integration of audio and haptic modalities into the existing visual CFD exploration scenarios explored in the early evaluation sessions (cf. Sect. 2).

3.2 Audio modalities

Sonification refers to the use of non-speech audio to convey information. Due to its high temporal resolution and wide bandwidth, the auditory channel is highly suitable for displaying time-varying parameters (compared to other modalities such as video and haptics), concurrent streams (the superposition of multiple audio renderings of various parameters is possible and easily comprehensible if properly designed), and spatial information (of lower definition than visual stimuli, but available over the full 360° sphere, hence true three-dimensional rendering of the whole space).

A study was recently performed within this framework to examine the effect of sound spatialization on a specific sonification and sound exploration task (Katz et al. 2008). Subjects were asked to virtually navigate, using a pointing and tracking device, a two-dimensional topological function mapped onto the surface of a sphere surrounding them (see Fig. 4). The function was sonified with a modified beep sound; the task was to find the maximum of the function, i.e., the point with the highest frequency sound, and to validate its position. The experiment was repeated with and without sound spatialization techniques. While spatialization was not required to perform the task, the precision of target selection appeared to improve with its addition. This simple single-source test platform can be used to investigate basic principles of auditory spatial exploration. The CFD data sonification task presents a much more complicated dataset, where multiple regions of interest exist and must be explored and understood.

Fig. 4 The virtual sphere around the subject (left), and an example of the two-dimensional function mapped on its surface, represented as a planisphere (Katz et al. 2008)

When dealing with CFD simulations, considerable importance is given to the perception of phenomena characterized by their intermittent nature and, above all, strongly localized within space (e.g., intermittence of a turbulent structure, or its breaking-up). Sonification can be employed to detect these phenomena within the cavity simulation test bed (Katz et al. 2007), and auditory spatialization can be used to segregate concurrent streams and to guide the user toward a specific position of interest, allowing them to follow the future development of the specified parameter or structure at that location.

The data resulting from the cavity simulation described in Sect. 2 (velocity components v x , v y , v z ) were only slightly pre-processed for sonification. The DC component was removed from the parameter values at each position, removing the constant flow component and keeping only the variations around the median value. This results in a dataset well suited to audio rendering. The real-time duration of the simulated experiment is on the order of one minute, with the temporal region of expressed interest lasting on the order of tens of seconds.
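This pre-processing amounts to centering each time series, as in the following sketch (the mean is used here as the DC estimate; the text speaks of variations around the median, which would be a one-line change):

```python
import numpy as np

def center(v):
    """Remove the per-point DC component (the constant flow part).
    v: (n_samples, n_points) array, one column per monitored point."""
    return v - v.mean(axis=0, keepdims=True)
```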

Two software platforms were then developed for the sonification of these data inside the cavity: the first treats the sonification of the 21 discrete points at a raw sample rate of 400 Hz; the second handles any point within the data volume recorded at the 30 Hz sample rate. The sonification was developed using the Max/MSP platform (see Sect. 3.1). An example user scenario for the 400 Hz configuration consists in selecting a number of monitoring/observation points within the cavity and choosing data parameters for sonification (such as the velocity of particles along the x-axis). The rendered audio streams are spatialized in real time, coherent with the visual display, according to the current position and orientation of the experimenter, using 3D tracking information received from the central system.

A large variety of sonification techniques exist and are used in various applications (Kramer 1994). The technique selected and implemented for CoRSAIRe/CFD is termed “audification.” Audification is based on the transformation of a generic time-varying signal into an audible signal. This method is well suited to CFD data, which can be regarded as low-sample-rate time-varying signals. Expert users currently apply frequency-analysis transforms to the data for numerical analysis, another method well suited to acoustic data. Rather than performing detailed preprocessing analysis of the data streams before the actual audio rendering, the low-sample-rate data streams are here transformed into audio streams carrying auditory information legible to the user. No a priori understanding of the data content is needed: the user is, in a direct sense, “listening” to the output of the parameter probe in the cavity.

Four sonification–audification metaphors have been developed and implemented within the two platforms (i.e., 30 and 400 Hz). The metaphors are described here, using the axial particle velocity parameters as a reference. It should be noted that the different modules listed here can actually accept any type of time-varying parameter, such as the turbulence or the vorticity particle parameters.

  • FM: a simple frequency modulation (a code sketch illustrating this metaphor follows the list) based on the formula:

    $$ f = f_0 + \alpha \, v_{\mathrm{centered}} \qquad (3) $$

    where f is the frequency of the output signal. A carrier wave f 0, in this case a sinusoid, is modulated in frequency by the velocity parameter values at the given monitoring position (time fluctuations of v x , v y , or v z around their respective means). The user can then manipulate in real time the frequency of the carrier signal and the weight of the modulation α.

  • GIZMO: a spectral domain pitch shifter based on the GIZMO method (Dudas 2002). According to the GIZMO algorithm, the signal is split into smaller “grains.” Each grain is then transposed using a spectral-shift method, and the signal is re-assembled. In this manner, the pitch of the signal can be shifted without changing its duration, allowing the user to investigate the evolution of the parameters at the real-time scale of the turbulent flow if desired. The goal is to directly render, for example, the velocity data stream of one of the three axes as an audible audio stream. Because its original sample rate and frequency content are too low to be audible, the stream is transposed in frequency following the formula:

    $$ f = T_f + v_{\mathrm{centered}} \qquad (4) $$

    where f is the frequency of the output signal. The transposition factor T f can be manipulated by the user in real time.

  • PHASE VOCODER: this metaphor is very similar to the GIZMO method, except for the fact that the pitch shift is performed through a phase vocoder (Flanagan 1965). A vocoder is an analysis/synthesis system, mainly used for speech, in which a control signal is divided into frequency bands and, for each band, passed through an envelope follower that will then control the signal to be processed. A “phase” vocoder is a further modification on the principle, where the signal to be processed can be scaled both in the frequency and time domains by using phase information.

  • PITCH SHIFT: a particular pitch shifter based on the PSOLA algorithm (Schnell et al. 2000). One of the difficulties of using pitch-shifting algorithms is the creation of audible artifacts. The presence of artifacts is more prominent when the frequency of the signal to be shifted is not known beforehand. The PSOLA algorithm uses knowledge of the fundamental frequency of the signal, dividing the signal into frames with a length of one period of that fundamental frequency. The frames are then played back sequentially at different speed rates depending on the shifting parameter (in this case, the centered velocity of the particles). From prior analysis of the simulated cavity, the fundamental resonance (F), and therefore the fundamental frequency of the velocity oscillation, has been estimated at F = 13.5 Hz. The user can define the weight of the shifting α following the formula:

    $$ f = F + \alpha \, v_{\mathrm{centered}} \qquad (5) $$

    where f is the frequency of the output signal.
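The sketch below illustrates the FM metaphor of Eq. 3 offline (the parameter values f0 and α and the sample rates are illustrative; the actual rendering runs in real time in Max/MSP): the centered velocity is upsampled to audio rate and drives the instantaneous frequency of a sinusoid.

```python
import numpy as np

def fm_audify(v_centered, data_rate, f0=440.0, alpha=50.0, sr=44100):
    """FM metaphor (Eq. 3): instantaneous frequency f = f0 + alpha * v.
    v_centered: mean-removed velocity samples at data_rate Hz."""
    t_data = np.arange(len(v_centered)) / data_rate
    t_audio = np.arange(0.0, t_data[-1], 1.0 / sr)
    v_audio = np.interp(t_audio, t_data, v_centered)  # upsample to audio rate
    freq = f0 + alpha * v_audio
    phase = 2.0 * np.pi * np.cumsum(freq) / sr        # integrate the frequency
    return np.sin(phase)                              # audio samples in [-1, 1]
```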

Note that the last three sonification metaphors are based on a constant transposition (pitch shifting) of the centered velocity oscillation: the audible differences between the three methods are thus given by the artifacts generated by the specific transposition algorithm used, and different degrees of pitch shifting bring about different variations at different audible frequencies. This user-controlled variability is crucial for the use of the system. As an example scenario, consider a 400 Hz sampled data stream (valid data from 0 to 200 Hz according to sampling theory) and an “unknown” region of interest around 150 Hz. This stream could be directly frequency-scaled by a factor of 100, producing an audible stream covering the range from 0 to 20,000 Hz, which extends slightly beyond the entire audible spectrum (20–20,000 Hz), with the region of interest now scaled to 15,000 Hz. But humans are not equally sensitive to all frequencies; in addition, some perceptible frequencies are not pleasant to listen to for long durations, very high frequencies being a good example. An alternative would be a pitch shift (rather than a stretch) where the data are “shifted” by, for example, 1,000 Hz. This would concentrate the entire simulation information between 1,000 and 1,200 Hz, a rather limited use of the audible range. To deal with these conditions, the user can adjust and combine the different metaphor parameters to create a sonification that is both informative and usable, on which they can focus their attention.

Preliminary evaluations of the different audification metaphors were recently carried out with CFD experts working on their own CFD data. For researchers who are not yet in the habit of actually listening to their data, the results are encouraging, even though the task was not immediately obvious. The FM method was reported to be the most “intuitive” in terms of perception of known events, such as the oscillation of the mixing-layer boundary. This phenomenon was perceived as beat frequencies around the cavity resonance frequency of 13.5 Hz: frequency phenomena below the lower limit of human hearing can be perceived as beating. Slight additional pitch shifting can raise the frequency of the phenomenon by a few tens of hertz, so that it is instead perceived as a low-frequency tone oscillation; this depends simply on the exact transformation or adjustment of the sonification parameters. While the underlying CFD phenomenon remains the same, the auditory perception is quite different. It was therefore decided that subsequent evaluations will include a learning phase, in order to demonstrate to the expert users the functionality of each metaphor, with its perceptual counterpart, before parameter adjustments are made.

Future work in the sonification of the CFD data will diverge from direct audification and will consider the use of pre-processing algorithms to extract features from the dataset. For example, spatial transformations could be used, sonifying the data along the non-temporal dimensions to identify spatial periodicity of turbulent structures.

3.3 Haptic rendering of CFD datasets

As analyzed in Sect. 2, the general objective of a CFD specialist is to locate interesting structures (e.g., a vortex core), based on visual cues. However, the intrinsic complexity of the topology of CFD structures makes the task of precise positioning (say, for future annotation) more difficult. One should also note that the more cluttered the visual space is with simultaneous renderings, the more difficult it is to pinpoint specific 3D structures.

In this context, haptic perception has been investigated for the last two decades, with significant achievements (Menelas et al. 2009b). In the framework of the CoRSAIRe project, we introduced novel haptic renderings and associated metaphors to complement visual and auditory feedbacks and to present information that is otherwise difficult to perceive, so as to improve the interaction process. Specifically, we investigated the use of haptic feedback (a) to facilitate and speed up positioning in a scene (magnetic metaphor), (b) to enable the CFD user to haptically perceive isosurfaces (isosurface haptic rendering), and (c) to provide a new tool for critical-point analysis.

3.3.1 Magnetic metaphor

In the magnetic metaphor, the target point acts like a magnet attracting the haptic probe. In the implemented version, the force feedback is computed via the function represented in Fig. 5. Whenever the distance between the haptic probe and the target is less than a threshold D, the hand of the user is attracted with a quadratic force by the virtual magnet until a threshold d is reached. Since at this position the user is very close to the target, the attraction force is then progressively diminished until it vanishes.
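A minimal sketch of such a distance–force mapping follows (the exact profile is the one plotted in Fig. 5; the quadratic ramp between D and d is from the text, while the fade below d is assumed linear here, and all numeric values are illustrative):

```python
import numpy as np

def magnet_force(probe, target, d=0.01, D=0.05, f_max=2.0):
    """Attractive force toward `target`: zero beyond D, quadratic ramp
    from D down to d, then a progressive fade to zero at the target."""
    delta = np.asarray(target, float) - np.asarray(probe, float)
    r = np.linalg.norm(delta)
    if r >= D or r == 0.0:
        return np.zeros(3)
    u = delta / r                                  # unit vector toward target
    if r > d:
        mag = f_max * ((D - r) / (D - d)) ** 2     # quadratic attraction ramp
    else:
        mag = f_max * (r / d)                      # vanishes at the target
    return mag * u
```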

Fig. 5 The distance–force mapping function used for the magnetic metaphor

This metaphor has been evaluated through a psychophysical study of a targeting task (a common task in data analysis). The experiment consisted in reaching specific points of a Q-factor isosurface computed on the cavity flow simulation (see Fig. 6). Only haptic-enhanced conditions were tested, and three haptic paradigms were compared: the proposed magnetic metaphor versus two standard kinesthetic force feedbacks (a polygonal and a volumetric one). Ten users participated in the experiment using the immersive VR setup described in Sect. 3.1. For each trial, the trajectory described by the user as well as the time required were logged.

Fig. 6 A Q-factor isosurface displayed inside the cavity along with pre-defined targets (dark spots)

Experiments showed that the magnetic metaphor provided much better feedback for performing the task: user trajectories were smoother and displayed less hesitation, targeting precision was improved, and time-to-target was reduced, compared to the more standard kinesthetic force feedbacks (see Fig. 7). The reader may refer to Fauvet et al. (2007) for details on the protocol implementation and error measurements. The magnetic metaphor proved useful for signaling points of interest once these had been identified, allowing the hand of the CFD expert to be attracted to interesting locations in subsequent exploration sessions.

Fig. 7 An example of a targeting trajectory. Top: a trajectory using the proposed method. Bottom left: using the polygonal kinesthetic force feedback method. Bottom right: using the volumetric kinesthetic force feedback method

3.3.2 Isosurface rendering for haptic feedback

The literature on isosurface rendering divides into indirect and direct rendering approaches. The first category aims to extract a polygonal representation from volume data. Algorithms adapted from the initial Marching Cubes approach (Lorensen and Cline 1987) can be used to compute such surfaces. Producing a surface-based representation offers the advantage of providing stable feedback for subsequent haptic interaction. However, due to the computation time required by the surface-estimation step, real-time surface updates (as required in a VR environment) are inherently difficult to achieve. These limitations were overcome in Adachi et al. (1995), and later by Mark et al. (1996) and Chen et al. (2000), by introducing an intermediate representation approximating the surface.

Concerning direct rendering, a well-known approach was presented by Avila and Sobierajski (1996). Their work addressed the haptic exploration of the complete data volume, or of a sub-volume such as an isosurface. For isosurface rendering, this method does not require any intermediate representation of the surface. The generated feedback is expressed as a combination of a retarding force and a stiffness force directly approximated from the penetration distance into the virtual surface, proportional to a gradient computed on the field value. This approach works well with standard data volumes, providing a very fast haptic loop (hence a satisfactory sensorimotor feedback) without any surface representation. However, undesirable vibrations can occur in regions exhibiting high-frequency data: in such regions, high gradients in the field value result in a poor approximation of the penetration distance of the probe into the isosurface. This shortcoming was previously underlined in Fauvet et al. (1996).

For these reasons, we introduced in Menelas et al. (2008) a flexible method based on a more generic approach. By casting rays emanating from the probe in several directions, we compute the position where the probe would be if it were constrained by the virtual isosurface, i.e., the proxy position (Fig. 8). Once this position is determined, it is conveyed to the user through the haptic channel using a penalty-based method.
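The following simplified sketch conveys the idea; it takes the proxy as the centroid of the ray hits, whereas Menelas et al. (2008) project the probe onto the plane of the intersection points (Fig. 8), and the ray count, marching step, and stiffness are illustrative values:

```python
import numpy as np

RAYS = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], float)

def isosurface_hits(field, probe, isovalue, max_dist=0.02, step=1e-3):
    """March along six axis-aligned rays from the probe and return the
    points where the sampled scalar `field` crosses the isovalue."""
    probe = np.asarray(probe, float)
    side = np.sign(field(probe) - isovalue)
    hits = []
    for d in RAYS:
        for s in np.arange(step, max_dist, step):
            p = probe + s * d
            if np.sign(field(p) - isovalue) != side:   # crossing found
                hits.append(p)
                break
    return hits

def penalty_force(field, probe, isovalue, k=300.0):
    """Spring force pulling the probe toward the proxy, approximated
    here as the centroid of the ray hits."""
    hits = isosurface_hits(field, probe, isovalue)
    if len(hits) < 3:
        return np.zeros(3)              # no constraint: free motion
    proxy = np.mean(hits, axis=0)
    return k * (proxy - np.asarray(probe, float))
```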

Fig. 8 Computation of the proxy position in Menelas et al. (2008). a Six rays are cast from the probe position A; here, three intersection points are found. b A′ is the projection of the probe on the plane defined by the three intersection points. c The proxy position P is on the isosurface

We have experimented with this approach in a task consisting of path-following along an isosurface, using the same cavity flow simulation and VR setup. Figure 9 shows the proposed route on the surface. The new method (referred to as M3) was compared with two other methods, namely the volumetric approach of Avila and Sobierajski (1996) mentioned earlier (M1) and an intermediate-representation model (M2) proposed in Körner et al. (1999). Ten participants were randomly allocated into three groups. We measured the accuracy (precision of the haptic interaction) in the tracking task, the haptic rendering quality (users’ preference) of each method, and the computational load required by each method.

Fig. 9 The isosurface used in the experiment. The line going from point A to point B represents the recommended path

Performance measurements carried out on the three algorithms confirm that indirect rendering (M2) requires significantly more computing time than the direct (volumetric) rendering methods (M1, M3). The haptic loop frequency of M2 depends on the amount of data (the more data there is, the lower the haptic loop frequency), whereas the data count does not significantly affect the haptic loop frequency of either M1 or M3 (direct rendering algorithms). Moreover, participants highlighted that the new flexible method M3 provided a better haptic rendering, allowing users to perceive all the isosurface details, even weak undulations (see Fig. 9).

Figure 10 presents a typical scenario that takes advantage of the proposed haptic metaphors, whereby the user additionally has a 2D volumetric cutting plane attached to the haptic probe, which then follows the isosurface. In such a situation, in addition to the haptic feedback of the isosurface, the user can simultaneously access additional meaningful information situated in the transverse plane. Moreover, quantities such as vorticity or the Q-factor may be directly mapped onto the haptic feedback as a viscous drag.

Fig. 10 Accurate positioning of a colored cutting plane during a CFD immersive exploration session

3.3.3 Haptic characterization of critical points

Among feature-based flow visualization methods, topology-based approaches aim to detect and classify the critical points of the flow. Such points are of primary importance as they structure the overall flow features. To this effect, we investigated the characterization of critical points by means of haptic feedback. As outlined by the HTA task tree, the building of a mental model of the analyzed flow is carried out through a step-by-step construction of the solution. The work presented here addresses the analysis of one instant of an unsteady flow (we are currently investigating the extension of our solution to a time sequence). The approach can be divided into two main steps: detection, followed by characterization of the critical points.

In the detection step, the CFD expert starts with an empty visual scene and is invited to freely explore the flow domain. During this exploration, the presence of critical points located in the cuboid that surrounds the haptic probe is rendered via vibration feedback in addition to the visual display (see Fig. 11). Critical points are detected “on the fly” within the local environment explored by the user. Thus, pre-existing expertise and previously discovered critical points both serve to guide the exploration process through the various areas of interest. Experts can construct their own mental map of the flow at their own pace.

Fig. 11 Representation of all the critical points located in the volume surrounding the probe position

Once the critical points are detected, the characterization step serves to identify how the flow enters and leaves these points. To this effect, the velocity of the flow field is directly mapped as a force feedback in the neighborhood of the critical points. In the current implementation of the system, the mapping function is defined by F = α·V, where α is the mapping coefficient and V the velocity of the flow. With this metaphor, the haptic probe tends to follow the path of a particle injected into the flow.
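A minimal sketch combining the two steps as described (the cuboid half-size, α, and the vibration waveform are illustrative assumptions, not values from the actual system):

```python
import numpy as np

def haptic_feedback(probe, velocity_at, critical_points,
                    half_size=0.02, alpha=0.5, t=0.0):
    """Detection: vibrate when a critical point lies in the cuboid around
    the probe. Characterization: render F = alpha * V nearby, so the
    probe tends to follow the path of an injected particle."""
    probe = np.asarray(probe, float)
    near = any(np.all(np.abs(np.asarray(c) - probe) <= half_size)
               for c in critical_points)
    force = np.zeros(3)
    if near:
        force += alpha * np.asarray(velocity_at(probe))   # F = alpha * V
        # 200 Hz buzz on one axis as the vibration cue (illustrative).
        force += 0.3 * np.sin(2.0 * np.pi * 200.0 * t) * np.array([0, 0, 1.0])
    return force
```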

The proposed concept has been evaluated through several psychophysical experiments in Menelas et al. (2009a). In summary, all participants emphasized that vibration cues provided real assistance in rapidly detecting areas of interest; with purely visual feedback, by contrast, they seemed to explore the cavity randomly in search of critical points. Regarding the characterization step, some participants noted that perceiving the velocity of the flow through the haptic modality and manually following the trajectory of a fluid particle allowed for an interactive analysis of critical points (as opposed to the static display of a streamline). On the other hand, these users noted difficulties in understanding the temporal evolution of some serpentine (sinuous) streamlines.

4 Conclusion and future work

The virtual wind tunnel paradigm echoes a pervasive need in the CFD community for more interactive and intuitive means of exploring large flow-related datasets. VR and multimodal interaction have long been viewed as “holy grails” of CFD in that they provide the basis of a multisensory, immersive work experience. Although numerous attempts have been made in the past, no existing tool implements VR today in such a way as to replace, or even complement, existing desktop-based solutions. This is deemed to be due both to a lack of formal knowledge regarding work-related user needs and to the technological issues underlying the design of workable solutions.

The innovative character of our contribution is that it tackles both these issues simultaneously, by proposing a framework to guide the relevant use of available techniques (e.g., modal allocation schemes) as well as innovative research on interaction and rendering techniques.

By means of a formal task analysis approach, we were able to obtain verbal data regarding the steps followed by CFD researchers when exploring a typical flow simulation. From there, recommendations were issued on the multimodal presentation of the structures to be identified within the simulation. An immersive VR-CFD simulator was presented, in which several dedicated interaction experiments led to the development of new, customized audification and haptic interaction techniques. These methods take into account the specificities of CFD investigations (e.g., high gradients, turbulent flows), while being tuned to the experts’ needs and feedback, in order to provide a clear advantage for users compared to traditional desktop-based approaches.

We found early on in the user analysis that there is no consensus on the objects to be identified: mathematical operators reveal some properties of the flow, but they must be combined and must coincide with the user’s experience from past work, a process which can be highly individual. Therefore, while the general exploration procedure is clear, the expert user must retain maximum freedom to select the data that will be relevant to the analysis. The use of 3D perception proved a clear advantage, along with the possibility of interactively setting visualization parameters (such as isovalues). Audio feedback clearly showed potential value, but even with a limited set of parameters, the range of transformations must be refined, reduced, and adjusted to optimize the exploration of possible perceptions. User interaction and a learning phase are essential in the early stages of use of such a flexible approach. In addition, experiments clearly showed that critical point localization was made easier by choosing relevant haptic metaphors.

For these reasons, we are currently preparing a new set of experiments focusing on very localized simulation properties, where the user will interactively tune rendering parameters to minimize or maximize some perceived stimuli corresponding to phenomena previously identified by direct, “blind” computation. For example, the cavity flow described in Sect. 2 exhibits clear symmetries in the transverse direction, i.e., points on either side of a vertical sagittal plane have correlated flow values. We expect the user to discriminate and characterize these symmetries better and more quickly with immersive feedback using multiple sensorimotor modalities than with standard monoscopic visual observation. In this manner, we will move gradually from human factors studies dedicated to observing VR experiments toward the design of actual working environments where substantial progress in the CFD domain can be made.