1 Introduction

With the increasing computational power available for simulating complex physical phenomena, large-scale time-varying volume datasets are now prevalent. To understand such phenomena, feature extraction and feature tracking are crucial steps in the pipeline of current time-varying volume visualization tools. Features, defined as voxel regions of interest satisfying a set of predefined threshold values, usually experience complex evolution events over time, such as creation, dissipation, continuation, bifurcation, and amalgamation.

Animation is a powerful technique for time-varying volume visualization. In general, a key step of animation is view path design, which contains two main steps: viewpoint selection and view path generation. Although view path design for static volume datasets is a well-developed research field, creating a well-designed view path for time-varying volume datasets remains challenging. On the one hand, it is difficult to select suitable viewpoints. The selected viewpoints should generate informative images for each time step; meanwhile, they should display the temporal feature evolution to the greatest extent, especially for event-rich time-varying volume datasets. On the other hand, view path generation is challenging for time-varying volume datasets: viewpoint transitions occur simultaneously with transitions between temporally adjacent volume datasets (data transitions). These transitions are not mutually independent and influence each other.

The visual information of features is usually employed to select suitable viewpoints and obtain an informative animation for time-varying volume visualization. This information depends on various feature properties and transfer function settings, such as projected area, curvature, perceived color, or opacity. The rendered image under a viewpoint selected with this information displays a good distribution of features; however, it usually ignores the topological evolution of features. In previous view path generation methods, data transition and viewpoint transition are usually two independent processes; as a result, the generated view path often suffers from sudden changes during viewpoint transitions.

This paper presents a novel topology-aware method for view path design of time-varying volume datasets. In viewpoint selection, a new viewpoint quality measure is introduced to quantify the visual information based on the mutual information between viewpoint sets and feature groups (sets of evolution-related features). A novel viewpoint quality measure of the topology information is then proposed based on the skeleton information of features. The visual and topology information are integrated to determine the optimal viewpoint set for each time step. In view path generation, temporal viewpoint coherence is proposed to divide the time sequence into appropriate time segments. The volume datasets in each time segment share a fixed representative viewpoint during data transition, and the representative viewpoints of adjacent time segments are linked smoothly to compose the viewpoint transition. The generated view path helps users capture evolution events to a great extent in time-varying volume visualization while keeping viewpoint transitions to a minimum.

The paper is organized as follows. In Sect. 2, we review related work. Section 3 provides an overview of the proposed algorithm. The data preprocessing is introduced in Sect. 4. The evaluation of viewpoint quality and the view path generation are described in Sects. 5 and 6, respectively. Experimental results and discussion are presented in Sect. 7. We summarize and conclude our work in Sect. 8.

2 Related work

2.1 Viewpoint quality measures

Viewpoint quality measurement is an important topic in the field of volume visualization. Koenderink and van Doorn proposed the aspect graph (Koenderink and Van Doorn 1976, 1979), which partitions the view sphere according to the topological similarity of the object's projection. Barral et al. (2000) proposed a heuristic measure based on the fraction of visible surfaces with respect to the total number of surfaces, and on the projected area ratio between the visible surfaces and the whole visual part of the scene. Information theory has also been introduced to evaluate viewpoint quality. Viewpoint entropy (Vázquez et al. 2001), the first information-theoretic measure proposed for polygon meshes, considers the projected areas of faces as the amount of information captured under a certain view. Viewpoint entropy has been extensively applied to select suitable viewpoints in volume visualization. Bordoloi and Shen (2005) applied the concept of viewpoint entropy to volume viewpoint selection, adapting the information to the visibility of each voxel weighted by its noteworthiness value. Takahashi et al. (2005) evaluated the viewpoint optimality of each decomposed feature component, assigning each a weight to emphasize its importance. Tao et al. (2009) integrated shape information into viewpoint entropy to locate the viewpoint that maximizes structural information. Viewpoint mutual information (VMI) is another viewpoint quality measure, based on an information channel between viewpoints and volumetric objects, used to determine the most expressive viewpoint (Viola et al. 2006). Feixas et al. (2009) utilized VMI built between viewpoints and polygons to select the optimal viewpoint. Tao et al. (2013) defined two interrelated information channels between streamlines and viewpoints to select the best viewpoints. In this paper, we build an information channel between viewpoints and feature groups.

2.2 View path design for static data

View path design for static volume data has been well studied in recent years; the objective is to intuitively observe and explore volume data along a view path inside the data or on the view sphere. Several methods have been proposed to produce optimal animation sequences by interpolating the parameter space between keyframes, such as the anima system (Moltedo and Morigi 1993), the template-based approach (Akiba et al. 2010), and the keystate-based method (Mühler and Preim 2010). Andújar et al. (2004) designed a collision-free path for model scene exploration, and Hsu et al. (2013) refined a coarse path derived from a roadmap graph based on multiple criteria. Sokolov et al. (2006) restricted possible viewpoints to the view sphere and ordered them according to importance information to construct a view path. Ma et al. (2014) located viewpoints on a mesh extracted from an auxiliary entropy field volume and traversed all the selected viewpoints. Tao et al. (2013) utilized the VMI measure to select optimal viewpoints and generated a view path passing through all of them for flow visualization.

2.3 View path design for time-varying volume data

Previous view path design methods can be roughly classified into two categories: graph-based methods and information-based methods. Yu et al. (2010) constructed an event graph to build a digital storytelling approach, which largely facilitates the generation of visualization animations, especially for users without sufficient prior knowledge. Information-based methods instead utilize viewpoint information to design the view path. Bordoloi and Shen (2005) simplified view path design by confining the viewpoint to the single view with the maximum entropy summation over the whole time sequence. Ji and Shen (2006) introduced a dynamic view selection approach specifically for time-varying volume datasets, in which the quantity of information of each viewpoint is first determined for each static volume, and a global view path is then determined with dynamic programming (Bellman 1954). Our method belongs to the second category: a topology-aware information metric is proposed to select the optimal viewpoint sets, and temporal coherence is used to maximize the feature evolution information with the fewest viewpoint transitions.

3 Algorithm overview

Fig. 1 The pipeline of our topology-aware view path design method

Our view path design method consists of three major steps: preprocessing, viewpoint quality evaluation, and view path generation, as shown in Fig. 1.

The preprocessing stage consists of feature extraction, directed acyclic graph generation, and skeleton extraction. First, features are extracted from the time-varying volume dataset. Then, feature evolution is tracked and represented as a feature evolution graph. Meanwhile, the skeleton of each feature is extracted.

The quality of a viewpoint is estimated by combining visual information and topology information. The visual information is based on the information channel between viewpoints and feature groups. The topology information is evaluated based on the skeletons of features and quantified with the Kullback-Leibler distance. The visual and topology information are then integrated into a topology-aware viewpoint selection framework to select the optimal viewpoint for each time step.

Given the quantitative evaluation of viewpoints at each time step, the proposed view path generation method partitions the view sphere into several viewpoint clusters based on viewpoint spatial coherence. Viewpoint temporal coherence is then utilized to compute the lasting time of each viewpoint cluster as its lifecycle. Finally, a representative viewpoint is determined for each viewpoint cluster, and a smooth view path is generated by linking neighboring representative viewpoints.

During view path generation, the viewpoint is placed at the representative viewpoint of the selected viewpoint cluster. This viewpoint remains static until its lifecycle ends, and then smoothly moves to the next representative viewpoint along the generated path.

4 Data preprocessing

Given a time-varying volume dataset, we first extract features at each time step by means of a seeded region growing algorithm (Adams and Bischof 1994). The skeleton of each feature is then computed from the local maximum voxels using a thinning algorithm (Tran and Shih 2005).
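As a minimal illustration of this step, the sketch below implements threshold-based feature extraction with breadth-first seeded region growing over a 6-connected neighborhood. The numpy interface, function names, and connectivity choice are our own assumptions rather than the paper's implementation.

```python
# A minimal sketch of threshold-based feature extraction via seeded region
# growing; each connected above-threshold region receives one integer label.
import numpy as np
from collections import deque

def extract_features(volume, threshold):
    """Label connected voxel regions whose values exceed `threshold`."""
    labels = np.zeros(volume.shape, dtype=np.int32)
    next_label = 0
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]   # 6-neighborhood (assumed)
    for seed in zip(*np.nonzero(volume > threshold)):
        if labels[seed]:
            continue                     # voxel already belongs to a feature
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:                     # breadth-first region growing
            x, y, z = queue.popleft()
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
                        and volume[n] > threshold and not labels[n]:
                    labels[n] = next_label
                    queue.append(n)
    return labels, next_label
```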

The spatial overlap based feature tracking method (Silver and Wang 1997) is applied to track the evolution of features. This method assumes that a feature usually overlaps itself in adjacent time steps. An octree data structure is used to speed up the matching process; a brute-force sketch of the overlap test itself is given below.
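The sketch counts voxels that carry a feature label in both of two consecutive labeled volumes (as produced by `extract_features` above), without the octree acceleration; the `min_overlap` parameter and all names are illustrative assumptions.

```python
# A sketch of spatial-overlap feature tracking between adjacent time steps.
# An edge (i, j) means feature i at step t evolves into feature j at step t+1.
import numpy as np

def track_features(labels_t, labels_t1, min_overlap=1):
    """Return evolution edges between features of two consecutive steps."""
    edges = set()
    mask = (labels_t > 0) & (labels_t1 > 0)   # voxels labeled in both steps
    pairs, counts = np.unique(
        np.stack([labels_t[mask], labels_t1[mask]]), axis=1, return_counts=True)
    for (i, j), c in zip(pairs.T, counts):
        if c >= min_overlap:                  # enough overlapping voxels
            edges.add((int(i), int(j)))
    return edges                              # edges of the evolution graph
```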

Several graph-based visualization tools have been proposed to exhibit feature evolution over time. In the feature evolution graph, features in the same time step are aligned along the vertical axis, and consecutive time steps are aligned along the horizontal axis. The directed acyclic graph (DAG) is the most common way to display detailed feature evolution. In the feature evolution graph, each DAG corresponds to the evolution of one feature: a node represents a feature, and an edge between two nodes indicates an evolution relation between adjacent time steps. When exploring time-varying volume data, users can select any node in the feature evolution graph and the related DAG will be highlighted. Two DAGs are selected and highlighted in Fig. 2a, one with green nodes and edges and the other with blue nodes and edges. In this way, users can pay more attention to the evolution of features of interest.

The term feature group describes the set of features at one time step that belong to the same DAG. To emphasize the focused feature groups, all other features are merged into a single background feature group. For example, there are three feature groups in Fig. 2b at time step 8: the first is colored green, the second is colored blue, and the last is the background feature group consisting of all other features. The advantages of feature groups are further discussed in Sect. 5.1.
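Because each DAG is a connected component of the evolution graph, feature groups can be recovered by component labeling. A sketch using networkx under that assumption, with the `(time step, feature id)` node naming being our own convention; features without any evolution edge would fall into the background group.

```python
# A sketch of assigning each feature to its DAG (feature group index) by
# labeling connected components of the evolution graph.
import networkx as nx

def feature_groups(edges_per_step):
    """edges_per_step[t] holds the (i, j) evolution edges from step t to t+1."""
    g = nx.Graph()
    for t, edges in enumerate(edges_per_step):
        for i, j in edges:
            g.add_edge((t, i), (t + 1, j))     # node = (time step, feature id)
    groups = {}
    for k, comp in enumerate(nx.connected_components(g)):
        for t, fid in comp:
            groups[(t, fid)] = k               # DAG index for each feature
    return groups
```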

Fig. 2 A feature evolution graph of the turbulent vortex dataset with two DAGs selected (a), and the corresponding three feature groups at time step 8: one colored green, one colored blue, and the background feature group colored white (b)

5 Viewpoint quality measure for static data

5.1 Visual information

In this paper, viewpoint mutual information (VMI) is employed to quantify the visual information of viewpoints, and we build an information channel based on feature groups tailored to time-varying volume datasets. First, a viewpoint set V and a feature group set G are considered as two random variables, and the visibility descriptor is constructed from an information channel \(V\rightarrow G.\) The probability of viewpoint v is \(p(v)=\frac{1}{N_v},\) where \(N_v\) is the number of viewpoints. The conditional probability p(g|v), which represents the visual perception of a feature group g from viewpoint v, is defined as the normalized visibility of g from v, with \(\sum _{g\in G}{p(g|v)}=1.\) The average visibility of a feature group over all viewpoints is \(p(g)=\sum _{v\in V}{p(v)p(g|v)}.\) Visibility was originally proposed to measure the impact of each individual voxel on the image during direct volume rendering (Bordoloi and Shen 2005).

VMI between feature groups and viewpoints is used to quantify the visual information \({H(v)}_{\rm vis}\) of each viewpoint v as follows:

$$\begin{aligned} {H(v)}_{\rm vis}= \sum _{g\in G} { p(g|v) \log { \frac{p(g|v)}{p(g)} } }. \end{aligned}$$
(1)

Since VMI depicts the dependence between viewpoints and feature groups, a high \({H(v)}_{\rm vis}\) corresponds to a strong dependence between the current viewpoint and the feature groups, which means that little information is displayed under that viewpoint. On the contrary, a low \({H(v)}_{\rm vis}\) indicates a weak dependence between the current viewpoint and the feature groups, which means the viewpoint is an informative one.

Evolution information has been represented as a feature evolution graph, in which users can select DAGs of interest. An automatic transfer function is built to highlight the selected DAGs by assigning a larger opacity value to the corresponding features. Given the transfer function, we obtain the visibility of each feature group by accumulating per-voxel visibilities during rendering. We then obtain the final visual information for each viewpoint according to Eq. 1.
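A minimal sketch of Eq. 1 over a precomputed visibility matrix, where `vis[v, g]` holds the accumulated visibility of feature group g from viewpoint v; the matrix layout and the epsilon guard against empty bins are assumptions.

```python
# A sketch of viewpoint mutual information (Eq. 1) for all viewpoints at once.
import numpy as np

def visual_information(vis, eps=1e-12):
    """vis: (N_v, N_g) visibility matrix; returns H(v)_vis per viewpoint."""
    p_g_v = vis / (vis.sum(axis=1, keepdims=True) + eps)  # p(g|v), rows sum to 1
    p_g = p_g_v.mean(axis=0)                              # p(g), since p(v)=1/N_v
    return (p_g_v * np.log((p_g_v + eps) / (p_g + eps))).sum(axis=1)
```

Recall that, per the discussion above, viewpoints with low \({H(v)}_{\rm vis}\) are the informative ones.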

Results of the VMI measure based on features and on feature groups are shown in Figs. 3 and 4, respectively. The purple colored feature group experiences a typical bifurcation event in the turbulent vortex dataset at time steps 12 and 13. Figure 3 shows the rendered images from the optimal viewpoint based on the mutual information between viewpoints and individual features, while Fig. 4 shows the corresponding images based on the mutual information between viewpoints and feature groups. The bifurcation event is circled in white in these images. Comparing the two figures, more event information can be obtained from Fig. 4: the VMI measure based on feature groups is clearly a more effective way to find optimal viewpoints for exploring evolution events.

Fig. 3 Viewpoint selection results of the turbulent vortex dataset based on the VMI measure between viewpoints and individual features. The selected purple feature group is experiencing a bifurcation event. Rendered images from the optimal viewpoint at time step 12 (a) and time step 13 (b)

Fig. 4 Viewpoint selection results of the turbulent vortex dataset based on the VMI measure between viewpoints and feature groups. The selected purple feature group is experiencing a bifurcation event. Rendered images from the optimal viewpoint at time step 12 (a) and time step 13 (b)

5.2 Topology information

Topology information plays an important role in the exploration of temporal evolution, as illustrated in Figs. 5 and 6. A feature group undergoing an amalgamation event is shown in Fig. 5 under the visual-information-preferred viewpoint, while Fig. 6 shows the same event under the topology-aware viewpoint. Although the visual information captured from these two viewpoints is similar, the information revealed about the feature evolution is clearly different.

Fig. 5 An amalgamation event under a visual-information-preferred viewpoint at time step 1 (a) and time step 2 (b)

Fig. 6 An amalgamation event under a topology-information-aware viewpoint at time step 1 (a) and time step 2 (b)

Since significant changes of the skeleton indicate events of interest, we use skeleton information to emphasize the topological properties of feature groups. The skeleton curve of each feature is formed by a finite set of skeleton points. However, measuring the information of this topology description under various viewpoints is a major challenge: the sparsely distributed skeleton curve usually has an irregular shape and stretch, which makes it hard to evaluate the quality of each viewpoint. The Kullback-Leibler (K-L) distance is a well-studied way to quantify the difference between two probability distributions \(p=\{p_i\}\) and \(q=\{q_i\},\) and we employ it to quantify the topology information. The distribution p is given by the ratio of each feature group's visibility to the total visibility of all feature groups, and the distribution q by the ratio of each feature group's number of skeleton points to the total number of skeleton points. Thus, the topology information \({H(v)}_{\rm topo}\) of each viewpoint is defined as

$$\begin{aligned} {H(v)}_{\rm topo}=KL(v)= \sum _{i=1} ^{n} p_i \log { p_i/q_i } = \sum _{i=1} ^{n} \frac{v_i}{v_t} \log { \frac{ v_i/v_t }{V_i/V_t}}, \end{aligned}$$
(2)

where

  • \(v_i\) is the visibility of the i-th feature group;

  • \(v_t = \sum _{i=1}^{N_f}v_i\) is the total visibility of all feature groups;

  • \(V_i\) is the number of skeleton points of the i-th feature group;

  • \(V_t = \sum _{i=1}^{N_f}V_i\) is the total number of skeleton points, where \(N_f\) is the number of feature groups.

This viewpoint measure evaluates the distance between the normalized distribution of visibilities and the normalized distribution of skeletons. A low value of KL(v) means that the normalized distribution of feature group visibilities is close to the normalized distribution of actual feature group skeletons, whereas a high value indicates a larger deviation from that ideal distribution. Thus, viewpoints with higher KL(v) values convey more topology information.
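A sketch of Eq. 2 for a single viewpoint, assuming the per-group visibilities \(v_i\) and skeleton point counts \(V_i\) are available as arrays; the epsilon guard is an assumption.

```python
# A sketch of the topology information measure (Eq. 2): the K-L distance
# between normalized group visibilities p_i = v_i/v_t and normalized skeleton
# sizes q_i = V_i/V_t.
import numpy as np

def topology_information(group_vis, skeleton_counts, eps=1e-12):
    p = np.asarray(group_vis, float)
    q = np.asarray(skeleton_counts, float)
    p = p / p.sum()                    # normalized visibilities v_i / v_t
    q = q / q.sum()                    # normalized skeleton sizes V_i / V_t
    return np.sum(p * np.log((p + eps) / (q + eps)))   # KL(v) = H(v)_topo
```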

Fig. 7 Viewpoint selection results of the turbulent vortex dataset based on the topology information measure. a The optimal viewpoint and b the worst viewpoint. c The rendered image of feature skeletons from the two viewpoints: a is from the optimal viewpoint and b from the worst one. d The viewing sphere of the topology information measure; the color mapping from blue to red corresponds to the topology information measure from low to high

Results of the topology quality measure are shown in Fig. 7 for the same evolution event as in Fig. 3. According to the experimental results, users are able to capture more topology information from the selected viewpoint.

5.3 Viewpoint quality evaluation

To quantify the viewpoint quality for time-varying volume datasets, we combine the visual information \({H(v)}_{\rm vis}\) and the topology information \({H(v)}_{\rm topo}\) in a hybrid measure as follows:

$$\begin{aligned} H(v)= \alpha {H(v)}_{\rm vis}+(1-\alpha ){H(v)}_{\rm topo}, \end{aligned}$$
(3)

where H(v) is the viewpoint quality measure. The parameter \(\alpha\) is a weight in the range [0, 1] that balances the contributions of \({H(v)}_{\rm vis}\) and \({H(v)}_{\rm topo},\) which are both normalized values.

We find that this combination provides good control over the relative importance of the two terms. The parameter \(\alpha\) affects static viewpoint selection: the lower the weight, the more topology information is presented; conversely, the higher the weight, the more visual information is captured.

The parameter \(\alpha\) also affects view path generation. The skeleton is composed of relatively few skeleton points and is therefore sensitive to changes between adjacent time steps. Consequently, the frequency of viewpoint transitions rises as \(\alpha\) decreases, as illustrated in Fig. 8.

The parameter \(\alpha\) can be adjusted according to the specific data and the generated result. In practice, an initial weight of \(\alpha =0.5\) is already good enough, and the weight can be fine-tuned to explore better results: a larger weight favors a stable view path, while a smaller weight acquires more detailed information about the feature evolution.
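A sketch of Eq. 3; the text states both terms are normalized, and min-max normalization over all viewpoints is our assumption of how that is done.

```python
# A sketch of the hybrid viewpoint quality measure (Eq. 3).
import numpy as np

def _normalize(x):
    # Min-max normalization over all viewpoints (an assumed choice).
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def viewpoint_quality(h_vis, h_topo, alpha=0.5):
    """Blend normalized visual and topology information per viewpoint."""
    return alpha * _normalize(h_vis) + (1 - alpha) * _normalize(h_topo)
```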

Fig. 8 Results of the generated view path with different weights. a–e The view path generated with weight values 0.0, 0.2, 0.4, 0.6 and 0.8, respectively. The color mapping from blue to red corresponds to the time of each viewpoint sample on the view path from start to end

6 View path generation

Animation for time-varying volume visualization involves several parameter space transitions, such as transitions of view parameters, lighting parameters, temporal data, and transfer functions. In practice, most time-varying volume datasets generated by simulation do not possess sufficient temporal granularity; in this case, performing the data transition and the viewpoint transition at the same time results in camera-shake effects. To address this problem, we separate the data transition from the viewpoint transition, i.e., only one transition takes place over any period of time. During a data transition, the viewpoint stays at a fixed position that covers most evolution events. During a viewpoint transition, the data remains static and the viewpoint moves along a smooth path to the next viewpoint.

6.1 Data transition

We utilize a novel concept of viewpoint coherence to provide a suitable division of temporal segments. Viewpoint coherence comprises viewpoint spatial coherence and viewpoint temporal coherence.

Viewpoint spatial coherence means that neighboring viewpoints on the viewing sphere usually yield similar information in their rendered images. We therefore cluster viewpoints on the viewing sphere at each time step by means of a seeded region growing algorithm (Adams and Bischof 1994).

Fig. 9 An illustration of viewpoint temporal coherence. a The rendered image of the purple colored feature group under a random viewpoint. b The information value curve of the purple colored feature group over the whole time span

As another characteristic of time-varying volume visualization, viewpoint temporal coherence means that internal features often exhibit similar behavior in adjacent time steps. For example, a random viewpoint is selected for a turbulent combustion dataset and the focused feature group is colored purple, as shown in Fig. 9a. The corresponding information value curve over the whole time span is presented in Fig. 9b. This curve can be roughly separated into two stable segments: the first from time step 1 to time step 29, and the other from time step 40 to time step 51.

Fig. 10 a An illustration of lifecycles. There are five viewpoint clusters in this time span, from \(C_1\) to \(C_5\). b An illustration of viewpoint transition. The viewpoint transitions happen at time steps 12 and 26

Given viewpoint temporal coherence, we can obtain the time ranges in which viewpoint clusters remain stable. The clusters are sorted by their average information value. As illustrated in Fig. 10a, there are five viewpoint clusters in this time span; for convenience, we call this graph the lifecycle graph.

Next, each data transition is determined by selecting the viewpoint cluster with the maximum information value from the lifecycle graph. In Fig. 10a, the first data transition uses viewpoint cluster \(C_1\), which lasts from time \(t_1\) to time \(t_{14}\). It is followed by viewpoint clusters \(C_3\) and \(C_2\), whose data transitions are \([t_{14},t_{22}]\) and \([t_{23},t_{30}]\), respectively.

Once the data transitions are determined, we accumulate the viewpoint information over each temporal segment and select the viewpoint with the maximum accumulated information value as the representative viewpoint.
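A sketch of this selection, assuming a per-step quality matrix `H[t, v]` (Eq. 3 evaluated for every viewpoint and time step) and a temporal segment \([t_0, t_1)\) taken from the lifecycle graph; all names are illustrative.

```python
# A sketch of representative-viewpoint selection for one temporal segment.
import numpy as np

def representative_viewpoint(H, t0, t1, cluster_views):
    """Pick the cluster viewpoint with the largest accumulated information."""
    acc = H[t0:t1].sum(axis=0)                 # accumulate info over the segment
    cluster_views = np.asarray(cluster_views)  # viewpoint ids in the cluster
    return cluster_views[np.argmax(acc[cluster_views])]
```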

6.2 Viewpoint transition

The viewpoint transition starts from the representative viewpoint of the previous temporal segment and ends at the representative viewpoint of the current temporal segment. To avoid camera-shake effects, this transition is completed without any data transition.

To keep the viewpoint transition smooth, we link the endpoints with a B-spline curve. In addition to the start viewpoint and the end viewpoint, we find an intermediate viewpoint as a third control point for the viewpoint interpolation.

Given the start point and the end point, we determine a plane passing through the two points that divides the sphere surface into two parts with the maximum area ratio. Taking each viewpoint on the smaller part in turn as the intermediate viewpoint, we compute the corresponding B-spline curve and project it onto the viewing sphere. From the projected curves we compute the perceived information and select the curve with the maximum amount of information as the viewpoint transition. After constructing the viewpoint transition between every two representative viewpoints, we generate the view path by connecting all viewpoint transitions.
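A sketch of one candidate transition curve under these definitions, using SciPy's standard B-spline routines; treating viewpoints as unit vectors and projecting by renormalization are our assumptions.

```python
# A sketch of a viewpoint transition: a quadratic B-spline through the start,
# intermediate, and end viewpoints, projected back onto the unit viewing sphere.
import numpy as np
from scipy.interpolate import splprep, splev

def transition_path(v_start, v_mid, v_end, n_samples=50):
    pts = np.stack([v_start, v_mid, v_end])          # 3 control viewpoints
    tck, _ = splprep(pts.T, k=2, s=0)                # quadratic interpolating spline
    u = np.linspace(0, 1, n_samples)
    curve = np.stack(splev(u, tck), axis=1)          # sampled 3D curve
    return curve / np.linalg.norm(curve, axis=1, keepdims=True)  # project to sphere
```

In the full method, such a curve would be evaluated for every candidate intermediate viewpoint on the smaller spherical part, keeping the one with the maximum perceived information.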

7 Results and discussion

We implemented our view path design method on a quad-core Intel i5-760 (8M cache, 3.38 GHz) with 8 GB memory and an NVIDIA GeForce GTX 460 graphics card. Direct volume rendering is utilized throughout the entire pipeline. The viewing sphere is generated using the HEALPix package (Gorski et al. 2005) and is sampled with 1200 viewpoints. We demonstrate the effectiveness of our approach on two datasets: a turbulent vortex dataset and a turbulent combustion dataset.

7.1 Turbulent vortex dataset

The main task of view path design for the \(128^3\) turbulent vortex dataset is to study the evolution and interactions of vortex tubes. Its feature evolution graph is shown in Fig. 2a. We demonstrate the effectiveness of our approach on two representative feature evolutions.

Fig. 11 Results of a six-time-step evolution with a typical bifurcation event. a The DAG of the evolution. e The transition graph of the view path. b–d Snapshots captured at time steps 1–3. f–h Snapshots captured at time steps 4–6

Figure 11 shows the results of an evolution with a typical bifurcation event; the DAG of the evolution is shown in Fig. 11a. There is just one optimal viewpoint from birth to death, and the transition graph of the view path is shown in Fig. 11e. The parameter \(\alpha\) is set to 0.5. The snapshots of the view path are shown in Fig. 11, where the bifurcation event can be clearly seen from the selected viewpoints. It took 112 s to obtain the visual information and 111 s to obtain the topology information for all 1200 views of the 6 time steps. After the weights were assigned, it took 0.3 s to compute the path information.

Fig. 12 Comparison of our approach with the dynamic viewpoint selection approach. a Results of our approach applied to the purple colored feature evolution; the generated view path is shown at the top right, and the other subpictures are rendered images of even-numbered time steps from 14 to 26. b Results of the dynamic viewpoint selection approach (Ji and Shen 2006) applied to the same feature evolution, presented in the same layout. The color mapping from blue to red corresponds to the time of each viewpoint sample on the view path from start to end

Figure 12 shows the results for the blue colored DAG of Fig. 2. Considering that the turbulent vortex dataset lasts for only 100 time steps, this feature evolution of 31 time steps is relatively long and complex. During the feature evolution, the view path contains three temporal segments, as shown in Fig. 10b, with viewpoint transitions at time steps 12 and 26. The smoothed view path can be seen in the subpicture at the top right of Fig. 12a, and selected snapshots of the view path are shown in the other subpictures of Fig. 12a. It took 571 s to obtain the visual information and 584 s to obtain the topology information for all 1200 views of the 31 time steps. After the weights were assigned, it took 0.9 s to compute the path information.

In addition, we compare the effectiveness of our method with the existing view path approach proposed by Ji and Shen (2006). The transfer function setting is the same as in our method, and the speed is within (0.4, 0.5). The generated view path is shown in the subpicture at the top right of Fig. 12b, and selected snapshots of the view path are shown in the other subpictures of Fig. 12b. The comparison shows no obvious difference before the 14th time step. Starting from the 16th time step, however, our method shows a clear advantage in displaying the topological changes of the feature: our viewpoint is almost perpendicular to the skeleton plane, so the topological change of the purple colored feature can be clearly observed, while the viewpoint of the dynamic view selection method does not place enough emphasis on the evolution.

7.2 Turbulent combustion dataset

The turbulent combustion dataset contains 122 time steps at a resolution of \(480\times 720\times 120\). The main objective for this dataset is to understand the correlation of scalar fields such as temperature, mixing rates, and species concentrations in turbulent flames. In this paper, we focus on designing a view path to observe the feature evolution of the OH attribute.

Fig. 13 Feature evolution graph of the turbulent combustion dataset with one DAG highlighted from the first time step to the 73rd

Fig. 14 Results of one selected DAG in the combustion dataset. a The generated view path; the color mapping from blue to red corresponds to the time of each viewpoint sample on the view path from start to end. b–h Snapshots captured at time steps 4, 12, 20, 43, 53, 63 and 73, respectively

Figure 13 shows the feature evolution graph of the turbulent combustion dataset from time step 1 to 73 with one DAG highlighted. The parameter \(\alpha\) is set to 0.4, and the view path is displayed in Fig. 14. During the feature evolution there are two viewpoint transitions, as shown in Fig. 14a, which happen at time steps 3 and 22. Between time steps 3 and 22, a small green feature undergoes a bifurcation event; analyzing Fig. 14b–d shows that the view path clearly captures this evolution information. After the second viewpoint transition, the viewpoint remains static from time step 23 to 73. As seen in Fig. 14e, f, h, the selected view direction stays perpendicular to most of the feature evolution. It took 1829 s to obtain the visual information and 1930 s to obtain the topology information for all 1200 views of the 73 time steps. After the weights were assigned, it took 12 s to compute the path information.

8 Conclusion and future work

In this paper, we presented a novel view path design method tailored to time-varying volume datasets. The pipeline is driven by feature evolution: we compute visual information and topology information for each time step to better reveal feature evolution, and the view path is designed by linking adjacent viewpoint transitions. Our experiments on two time-varying volume datasets demonstrated the practicality and effectiveness of the method.

Currently our view path is confined to the viewing sphere, so internal changes may be occluded by external structures. In future work, we will extend our method to allow the viewpoint to be placed inside the volume, so that users can observe more detailed and hidden evolutions.