
1 Introduction

The human cortex is a convoluted sheet that forms folding patterns. Because of this, functionally distinct regions can be close to each other in volume space yet geometrically distant in terms of distance measured along the cortex. This geometric property of the cortex is well preserved in cortical surface models [4, 7, 9, 13, 20]. Thus, surface-based analysis, in particular surface-based registration for optimizing the alignment of anatomical and functional data across individuals, has received great attention in both anatomical and functional studies [8, 10, 17, 20, 21].

Most advanced cortical surface registration approaches have been implemented in spherical coordinates based on either folding patterns [10, 15] or landmarks (sulcal or gyral curves) [8, 17]. In particular, landmark-based spherical mappings provide the flexibility to choose sulcal or gyral curves in functional activation areas to improve the alignment in regions of interest (ROIs) [1], even though gyral or sulcal curves are a coarse representation of the cortex. Nevertheless, landmark- or folding-pattern-based spherical mappings require a spherical reparametrization of the cortical surface in which adjacent gyri with distinct functions are well separated. This surface reparametrization introduces large distance and area distortion that potentially affects the quality of the surface alignment. To avoid such distortion, one would like to align the cortical surfaces directly in their own coordinates. Vaillant and Glaunès [18] first introduced a vector-valued measure acting on vector fields as a geometric representation of surfaces in their own space and then imposed a Hilbert space structure on it, whose norm was used to quantify the geometric similarity between two surfaces in their own coordinates. Since then, the vector-valued measure has been incorporated as a matching functional in the variational problems of large deformation diffeomorphic metric surface and curve mapping (LDDMM) [18, 19, 22]. It has been shown that first aligning gyral/sulcal curves and then cortical surfaces greatly improves the mapping of cortical surfaces when compared to directly mapping the cortical surfaces alone [21]. This suggests a multiresolution mapping strategy for reducing computational cost and improving cortical surface alignment.

Multiresolution diffeomorphic mapping has been proposed for images. LDDMM with a mixture of kernels was introduced for aligning images [3], providing the mathematical foundation of multiresolution diffeomorphic image mapping. However, the weights associated with the kernels were not straightforward to determine, and the computation remained the same as that of the image mapping algorithm in [2]. Rather than a simple weighted mixture of kernels, large deformation diffeomorphic kernel bundle mapping (LDDKBM) was proposed to allow multiple kernels at multiple scales to be incorporated in the registration of images [16]. It combines sparsity priors with the kernel bundle, resulting in compact representations across scales. The results demonstrated tremendous improvement in image mapping.

Paper Contributions. This paper presents a multiresolution surface mapping algorithm under the LDDMM framework. We take advantage of a multiresolution analysis of surfaces, constructing coarse-to-fine surfaces that become natural sparse priors of the cortical anatomy. The vertices of the surface at each resolution provide the anchor points at which the parametrization of the deformation vector fields is supported. This naturally constructs tangent bundles of diffeomorphisms at different resolution levels, similar to LDDKBM [16], and hence generates a multiresolution diffeomorphic transformation. We show that our construction of multiresolution LDDMM surface mapping can potentially reduce computational cost and improve the mapping accuracy of cortical surfaces.

2 Methods

2.1 Multiresolution Analysis for Surfaces

In this study, we adopt the multiresolution analysis (MRA) for arbitrary surfaces of Lounsbery et al. [12] to construct coarse-to-fine surface meshes. The method, which is related to the mathematical foundations of wavelets, decomposes a polyhedral surface into two components: a low-resolution surface and a corresponding collection of coefficients containing the removed “details”. This process, performed iteratively, produces a family of surfaces, wherein each successive surface is of a lower resolution than its predecessor. A recovery process, known as “synthesis”, reverses the decomposition such that the original high-resolution surface can be progressively reproduced from any member of the family. We can thus define a chain of nested function spaces \(\mathbf {V}^{(0)}\subset \mathbf {V}^{(1)}\subset \cdots \), such that \(f\in \mathbf {V}^{(r)}\) is a function at resolution r, \(r\in [0,R]\), with the level of detail increasing as r increases.

Let \(T=(\{x_i\},\{\varSigma _{ijk}\})\) be a triangular surface mesh, where \(\{x_i\},i=1,\dots ,N\) is a set of vertices and \(\{\varSigma _{ijk}\}\) a set of simplices, with each simplex \(\varSigma _{ijk}\) a three-tuple of points \(x_i,x_j,x_k\). Consider a mesh \(T^{(r)}\) at level r with coordinates \(X^{(r)} = [x_1,\dots , x_i, \dots ,x_{N^{(r)}}]\), where \(x_i\in \mathbb {R}^3\) and \(N^{(r)}\) is the number of vertices on \(T^{(r)}\). The new vertices on \(T^{(r+1)}\), denoted as \(\hat{X}^{(r+1)}\), are given by

$$\begin{aligned} \hat{X}^{(r+1)} = X^{(r)}A_{N^{(r)}\times M}, \end{aligned}$$
(1)

where A is an \(N^{(r)}\times M\) matrix encoding a simple subdivision scheme (each column \(A_{\cdot j}\) is zero except for \(A_{ij}=A_{kj}=0.5\), placing the j-th new vertex at the midpoint of the edge between \(x_i\) and \(x_k\)), M is the number of new vertices on \(T^{(r+1)}\), and \(X^{(r+1)} = [X^{(r)},\hat{X}^{(r+1)}]\). Given an “averaging matrix” \(B_{N^{(r+1)}\times N^{(r+1)}}\), a general subdivision scheme can be rewritten as

$$\begin{aligned} X^{(r+1)} = X^{(r)}[I_{N^{(r)}\times N^{(r)}} \quad A]B = \tilde{X}^{(r+1)}B = X^{(r)}P^{(r)}, \end{aligned}$$
(2)

where \(\tilde{X}^{(r+1)} = X^{(r)}[I_{N^{(r)}\times N^{(r)}} \quad A]\), \(P^{(r)} = [I_{N^{(r)}\times N^{(r)}} \quad A]B\).
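As a concrete illustration of Eqs. (1)–(2), the following minimal NumPy sketch builds the matrix A and applies \(P^{(r)} = [I \quad A]\) for the simplest case \(B=I\) (pure midpoint subdivision without averaging); the helper name `midpoint_subdivision` is ours, not from [12]:

```python
import numpy as np

def midpoint_subdivision(X, faces):
    """One simple subdivision step. X is 3 x N (columns are vertices),
    faces is a list of index triples. Each edge (i, k) spawns one new
    vertex 0.5*x_i + 0.5*x_k, i.e. a column of A with A[i,j]=A[k,j]=0.5."""
    N = X.shape[1]
    edges = sorted({tuple(sorted((f[a], f[(a + 1) % 3])))
                    for f in faces for a in range(3)})
    M = len(edges)
    A = np.zeros((N, M))
    for j, (i, k) in enumerate(edges):
        A[i, j] = A[k, j] = 0.5
    # With B = I, Eq. (2) reduces to X^{(r+1)} = X^{(r)} [I  A].
    P = np.hstack([np.eye(N), A])
    return X @ P, edges

# toy example: subdivide a single triangle
X = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X1, edges = midpoint_subdivision(X, [(0, 1, 2)])
# X1 holds the 3 original vertices followed by the 3 edge midpoints
```

A real subdivision scheme (e.g. Loop subdivision) would use a non-identity averaging matrix B; the structure of \(P^{(r)}\) is unchanged.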

As explained in [12], surfaces can be parametrized with a function S(y), where y is defined on a base (coarsest) mesh \(T^{(0)}\), i.e. y is a point on one of the simplices in \(\varSigma _{ijk}^{(0)}\) and can be tracked through a predefined subdivision process to a limit surface. We begin by first defining \(S^{(0)}(y):=y\). Let \(S^{(r-1)}(y)\) be found in a simplex \(\varSigma _{abc}^{(r)}\) with vertices \((\tilde{x}_a,\tilde{x}_b,\tilde{x}_c)\), \(\tilde{x}\) found in \(\tilde{X}^{(r)}\). Using the barycentric coordinates \((\lambda _a,\lambda _b,\lambda _c)\) such that \(S^{(r-1)}(y) = \lambda _a\tilde{x}_a + \lambda _b\tilde{x}_b + \lambda _c\tilde{x}_c\), we can induce a bijective map \(S^{(r-1)}(y)\rightarrow S^{(r)}(y)\) where

$$\begin{aligned} S^{(r)}(y) = \lambda _ax_a + \lambda _bx_b + \lambda _cx_c, \qquad x\in X^{(r)} \end{aligned}$$
(3)

and \((x_{a},x_{b},x_{c})\) corresponds to \((\tilde{x}_a,\tilde{x}_b,\tilde{x}_c)\) of the simplex \(\varSigma _{abc}^{(r)}\). Then, \(S(y):=\lim _{r\rightarrow \infty }S^{(r)}(y)\). In matrix form,

$$\begin{aligned} S^{(r)}(y) = \varvec{\lambda }^{(r)}(y)(X^{(r)})^T. \end{aligned}$$
(4)

It follows that

$$\begin{aligned} S^{(r)}(y)&= \varvec{\lambda }^{(r)}(y) (X^{(r-1)}P^{(r-1)})^T \end{aligned}$$
(5)
$$\begin{aligned}&= \varvec{\lambda }^{(r)}(y) (P^{(r-1)})^T\cdots (P^{(0)})^T(X^{(0)})^T. \end{aligned}$$
(6)

In other words, surfaces, once parametrized as meshes, can be understood as functions from a small collection of triangles into \(\mathbb {R}^3\). The subdivision of triangles allows us to move from one resolution to another, providing a family of surfaces for registration. We show three resolutions of the brain cortex in Fig. 1.
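The evaluation in Eq. (4) is just a sparse barycentric combination of mesh vertices; a minimal NumPy sketch (the function name `surface_point` and the toy mesh are illustrative assumptions):

```python
import numpy as np

def surface_point(lam, X_r):
    """Evaluate Eq. (4): S^{(r)}(y) = lambda^{(r)}(y) (X^{(r)})^T.
    lam is a length-N^{(r)} row of barycentric weights that is zero
    outside the simplex containing y; X_r is 3 x N^{(r)}."""
    return lam @ X_r.T

# toy example: a point with barycentric coordinates (0.2, 0.3, 0.5)
# in the simplex formed by the first three vertices of a 5-vertex mesh
X_r = np.array([[0.0, 1.0, 0.0, 2.0, 3.0],
                [0.0, 0.0, 1.0, 2.0, 3.0],
                [0.0, 0.0, 0.0, 0.0, 1.0]])
lam = np.array([0.2, 0.3, 0.5, 0.0, 0.0])
S_y = surface_point(lam, X_r)   # -> [0.3, 0.5, 0.0]
```

Composing this with the subdivision matrices \(P^{(r)}\) gives Eqs. (5)–(6).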

Fig. 1.

Increasing levels of \(X^{(r)}\) from left to right. Subcaptions indicate the corresponding level, number of vertices, and number of faces: “Level-r (no. of vertices, no. of faces)”.

2.2 Multiresolution Large Deformation Diffeomorphic Metric Mapping for Surfaces

Now, we state a variational problem for mapping two surfaces under the framework of LDDMM. LDDMM assumes that one surface can be transformed to another via flows of diffeomorphisms \(\varphi _t\), which are solutions of ordinary differential equations \(\dot{\varphi }_t = v_t (\varphi _t), t \in [0,1],\) starting from the identity map \(\varphi _0={{\mathtt {Id}}}\). They are therefore characterized by time-dependent velocity vector fields \(v_t, t \in [0,1]\). We define a metric distance between a target surface \(S_{targ}\) and an atlas surface \(S_{atlas}\) as the minimal length of curves \(\varphi _t \cdot S_{atlas}, t \in [0,1],\) in a shape space such that, at time \(t=1\), \(\varphi _1 \cdot S_{atlas} = S_{targ}\). Lengths of such curves are computed as the integrated norm \(\Vert v_t \Vert _V\) of the vector field, where \(v_t \in V\) and V is a reproducing kernel Hilbert space with kernel \(k_V\) and norm \(\Vert \cdot \Vert _V\). To ensure solutions are diffeomorphisms, V must be a space of smooth vector fields. The duality isometry in Hilbert spaces allows us to express the lengths in terms of \(m_t\in V^*\), interpreted as momentum such that \(\forall u\in V\), \(\langle m_t, u \circ \varphi _t\rangle _2 = \langle k_V^{-1}v_t, u\rangle _2\), where \(\langle m, u\rangle _2\) denotes the \(\mathbb {L}^2\) inner product between m and u. With a slight abuse of notation, this is the result of the natural pairing between m and v in cases where m is singular (e.g., a measure). This identity is classically written as \(\varphi _t^* m_t = k_V^{-1} v_t\), where \(\varphi _t^*\) is referred to as the pullback operation on a vector measure, \(m_t\).
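The flow ODE above is typically solved by time discretization; a minimal sketch (NumPy, assuming a Gaussian kernel, a forward-Euler scheme, and per-step anchor points and momenta as hypothetical inputs):

```python
import numpy as np

def flow_points(x0, anchors, momenta, sigma_V, n_steps=30):
    """Forward-Euler integration of phi_dot_t = v_t(phi_t), phi_0 = Id,
    with v_t(x) = sum_i k_V(x, x_i) m_t(x_i) and a Gaussian kernel k_V.
    anchors/momenta: lists of (N, 3) arrays, one per time step."""
    x = x0.copy()
    dt = 1.0 / n_steps
    for t in range(n_steps):
        d2 = ((x[:, None, :] - anchors[t][None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma_V ** 2)   # kernel matrix k_V(x, x_i)
        x = x + dt * (K @ momenta[t])    # one Euler step of the flow
    return x
```

With zero momentum at every time step, the flow reduces to the identity map, as expected.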
Using the identity \(\Vert v_t\Vert _V^2 = \langle k_V^{-1}v_t, v_t\rangle _2=\langle m_t,k_Vm_t\rangle _2\) and the fact that energy-minimizing curves coincide with constant-speed length-minimizing curves, we obtain the metric distance between the atlas and target, \( \rho (S_{atlas},S_{targ})\), by minimizing \(\Vert v_t\Vert _V^2\), such that \(\varphi _1 \cdot S_{atlas}=S_{targ} \) at time \(t=1\) [5]. We associate this with the variational problem in the form of

$$\begin{aligned} J(m_t) =&\inf \nolimits _{m_t: \dot{\varphi }_t = k_Vm_t(\varphi _t), \varphi _0={\mathtt {Id}}} \rho (S_{atlas},S_{targ})^2 \nonumber \\&+ \gamma E(\varphi _1 \cdot S_{atlas},S_{targ}), \end{aligned}$$
(7)

where E is defined based on the vector-valued measure introduced in [18]. For any two surfaces \(S_1\) and \(S_2\), \(E(S_1,S_2)\) is defined as

$$\begin{aligned} \begin{array}{ll} E(S_1,S_2) &{}= \sum \nolimits _{f,g} N^{t}_{f} k_W(c_f,c_g) N_g - 2\sum \nolimits _{f,q} N^t_f k_W(c_f,c_q) N_q \\ &{}\qquad + \sum \nolimits _{q,p} N_q^t k_W(c_q,c_p) N_p, \end{array} \end{aligned}$$
(8)

where f, g are simplices from \(S_1\) while q, p are simplices from \(S_2\). \(N_g\) is the normal vector pointing out of the centre, \(c_g\), of simplex g. \(k_W\) is a Gaussian kernel with bandwidth \(\sigma _W\). The metric distance \(\rho (S_{atlas},S_{targ})^2\) can be computed as \(\int _{0}^{1} ||v_t ||_V^2 dt\).
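The matching term of Eq. (8) can be evaluated directly from face centres and normals; a minimal NumPy sketch (assuming a Gaussian \(k_W\) and area-weighted face normals, with helper names of our own choosing):

```python
import numpy as np

def face_centers_normals(X, faces):
    """Centre c_f and area-weighted normal N_f of each simplex.
    X: (N, 3) vertex array; faces: (F, 3) integer array."""
    a, b, c = X[faces[:, 0]], X[faces[:, 1]], X[faces[:, 2]]
    centers = (a + b + c) / 3.0
    normals = 0.5 * np.cross(b - a, c - a)
    return centers, normals

def matching_energy(X1, F1, X2, F2, sigma_W):
    """Eq. (8): E(S1,S2) = sum_{f,g} N_f' k_W(c_f,c_g) N_g
       - 2 sum_{f,q} N_f' k_W(c_f,c_q) N_q
       + sum_{q,p} N_q' k_W(c_q,c_p) N_p."""
    c1, n1 = face_centers_normals(X1, F1)
    c2, n2 = face_centers_normals(X2, F2)
    def term(ca, na, cb, nb):
        d2 = ((ca[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / sigma_W ** 2)       # Gaussian kernel k_W
        return ((na @ nb.T) * K).sum()       # sum of N' k_W N over pairs
    return term(c1, n1, c1, n1) - 2 * term(c1, n1, c2, n2) + term(c2, n2, c2, n2)
```

By construction, \(E(S,S)=0\), and E grows as the two vector-valued measures drift apart.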

We now construct the multiresolution diffeomorphic mapping for surfaces under the framework of LDDMM. In the previous section, we showed that a surface, S, may be sequentially subsampled into meshes of decreasing resolution \(T^{(r)},\dots ,T^{(1)}\). With a slight abuse of notation, let us regard these meshes as discretizations of the surface, rewriting \(T^{(r)}\) as \(S^{(r)}\), such that \(\lim \limits _{r\rightarrow \infty }S^{(r)}\rightarrow S\). The duality isometry of \(m_t\) with \(v_t\) allows defining the smooth vector field \(v_t\) through \(m_t\), where \(m_t\) can be sparsely anchored at the vertices of \(S^{(r)}\). Therefore, it is natural to seek \(m_t^{(r)}\) defined at the vertices of \(S^{(r)}\) and then construct the smooth vector field \(w_t^{(r)}= k_V^{(r)}m_t^{(r)}\), where the size of \(k_V^{(r)}\) can be adapted to the sparsity of the vertices of \(S^{(r)}\). From this construction, \(w_t^{(r)}, r=0, 1, \dots , R\) can be defined via the momenta \(m_t^{(r)}\otimes \delta _x, x\in S^{(r)}, r=0, 1, \dots , R\), constructing independent tangent spaces of diffeomorphisms, \(w^{(r)}\in W^{(r)}\). This family of vector fields forms reproducing kernel Hilbert spaces, which can be summed across resolutions, i.e., \(\vartheta _t(w_t)= \sum _{r=0}^R w_t^{(r)}\), to form a single vector field for the flow equation \(\dot{\varphi }_{t}^{\vartheta } = \vartheta _t(\varphi ^{\vartheta }_{t})\). Through this family of vector fields, we redefine \(\rho ^{MRA}(S_{atlas},S_{targ})\) by minimizing \(\int _0^1 \sum _{r=0}^R \Vert w_t^{(r)} \Vert _{W^{(r)}}^2 dt\) such that \(\varphi _1^{\vartheta } \cdot S_{atlas}=S_{targ} \) at time \(t=1\), where \(\Vert w_t^{(r)} \Vert _{W^{(r)}}^2 = \big <(k_V^{(r)})^{-1}w_t^{(r)} , w_t^{(r)} \big >_2=\big <m_t^{(r)}, k_V^{(r)}m_t^{(r)} \big >_2\).
This construction of \(\rho ^{MRA}(S_{atlas},S_{targ})\) is in turn similar to that proposed for the large deformation diffeomorphic kernel bundle mapping (LDDKBM) for the registration of images [16].

We now modify Eq. (7) to the variational problem for the multiresolution LDDMM surface mapping in the form of

$$\begin{aligned} J(\varvec{m}_t) =&\inf \nolimits _{m_t^{(r)}: \dot{\varphi }^{\vartheta }_t = \sum _{r=0}^Rk_V^{(r)}m_t^{(r)}(\varphi ^{\vartheta }_t), \varphi ^{\vartheta }_0={\mathtt {Id}}} \rho ^{MRA}(S_{atlas},S_{targ})^2 \nonumber \\&+ \gamma E(\varphi _1^{\vartheta } \cdot S_{atlas}^{(R)},S_{targ}^{(R)}), \end{aligned}$$
(9)

where \(\varvec{m}_t=\{ m_t^{(r)} \}\). We can rewrite this variational problem as

$$\begin{aligned} J(\varvec{m}_t) =&\inf \nolimits _{m_t^{(r)}: \dot{\varphi }^{\vartheta }_t = \sum _{r=0}^Rk_V^{(r)}m_t^{(r)}(\varphi ^{\vartheta }_t), \varphi ^{\vartheta }_0={\mathtt {Id}}} \int _0^1 \sum _{r=0}^R \Vert w_t^{(r)} \Vert _{W^{(r)}}^2 dt \nonumber \\&+ \gamma E(\varphi _1^{\vartheta } \cdot S_{atlas}^{(R)},S_{targ}^{(R)}), \end{aligned}$$
(10)

where \(w_t^{(r)}(\cdot ) = \sum _{i=1}^{N^{(r)}}k_V^{(r)}(\cdot ,x_i)m_t^{(r)}(x_i)\) and \(x_i\in S^{(r)}\).
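The summed multiresolution field \(\vartheta _t = \sum _{r=0}^R w_t^{(r)}\) is a plain sum of per-level kernel expansions; a minimal NumPy sketch (Gaussian kernels assumed; the anchors and momenta below are illustrative placeholders):

```python
import numpy as np

def kernel_field(x, anchors, momenta, sigma):
    """w^{(r)}(x) = sum_i k_V^{(r)}(x, x_i) m^{(r)}(x_i), Gaussian k_V^{(r)}."""
    d2 = ((x[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2) @ momenta

def multires_field(x, levels):
    """vartheta(x) = sum_r w^{(r)}(x), with one (anchors, momenta, sigma)
    triple per resolution level, coarse to fine. Coarse levels use few
    anchors and a wide sigma; fine levels use many anchors and a narrow one."""
    return sum(kernel_field(x, a, m, s) for (a, m, s) in levels)
```

Each level contributes its own reproducing kernel Hilbert space norm to the cost in Eq. (10); only the summed field drives the flow.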

2.3 Gradient Computation and Implementation

To reduce the computational cost, we minimize Eq. (10) at each resolution level when R gradually increases from \(0, 1, 2, \cdots \). We use the gradient descent method to solve Eq. (10) when R is fixed and the method presented in [14] to speed up the computation of the Gaussian transform.

We now compute the gradient of J in Eq. (10) with respect to \(\varvec{m}_t=\{ m_t^{(r)}\}\). We begin by considering a variation in the vector field \(w^{(r)}_{t,\varepsilon }=w^{(r)}_t + \varepsilon \tilde{w}^{(r)}_t\). The corresponding variation of \(x_{j,1}=\varphi ^{\vartheta }_1(x_j)\) is

$$\begin{aligned} \tilde{x}_{j,1} = \partial _{\varepsilon }x_{j,1}|_{\varepsilon =0}= \int _0^1d_{x_{j,t}}\varphi _{t1}^{\vartheta }\tilde{w}^{(r)}_t(x_{j,t}) dt, \end{aligned}$$
(11)

where \(\varphi _{t1} := \varphi _1\circ \varphi _t^{-1}\). Based on the derivation from [18], the variation of E is

$$\begin{aligned} \partial _\varepsilon E|_{\varepsilon =0}&= \int _0^1\langle k_V^{(r)}(x_{j,t},\cdot )(d_{x_{j,t}}\varphi _{t1}^{\vartheta })^*\nabla _{x_{j,t}}E,\tilde{w}^{(r)}_t \rangle dt. \end{aligned}$$

This implies that the gradient of E in the space \(\mathbb {L}^2([0,1],W^{(r)})\) of vector fields, at a particular level r is of the form

$$\begin{aligned} \nabla E(t,\cdot ) = \sum _jk_V^{(r)}(x_{j,t},\cdot )(d_{x_{j,t}}\varphi _{t1}^{\vartheta })^*\nabla _{x_{j,t}}E. \end{aligned}$$
(12)

In this way, we have reduced the gradient computations to \(\nabla _{x_{j,t}}E\) (the derivative of the data attachment term, E, with respect to the vertices \(x_{j,t}\)). When the surface is represented using the vector-valued measure, \(\nabla _{x_{j,1}}E\) is given in [18]. The Jacobian of the transformation, \(d_{x_{j,t}}\varphi _{t1}^{\vartheta }\), is given by the following relationship (refer to [11] for further details)

$$\begin{aligned} \frac{d}{dt}(d_{x_{j,t}}\varphi _{t1}^{\vartheta }) = -d_{x_{j,t}}\varphi _{t1}^{\vartheta }d_{x_{j,t}}\vartheta . \end{aligned}$$
(13)

Finally, using Eq. (13), we can directly compute

$$\begin{aligned} \frac{d}{dt}\nabla _{x_{j,t}}E&= -(d_{x_{j,t}}\vartheta (w_t))^*\nabla _{x_{j,t}}E, \end{aligned}$$
(14)

which can be integrated backwards from \(t=1\) to 0. For a given resolution, the gradient of \(\int \limits _{0}^{1} \sum _{r=0}^R ||w_t^{(r)} ||_{W^{(r)}}^2 dt\) is \(2w_t^{(r)}\). The gradient of the cost functional J with respect to \(w_t^{(r)}\) is then

$$\begin{aligned} \nabla _{w_t^{(r)}} J(x) = 2\sum _j k_V^{(r)}(x_{j,t},x) \left[ \gamma ({d_{x_{j,t}}\varphi _{t1}^{\vartheta }})^*\nabla _{x_{j,t}}E + m_{j,t}^{(r)} \right] . \end{aligned}$$
(15)

We can hence write the gradient of J with respect to \(m_t^{(r)}\) as

$$\begin{aligned} \nabla _{m_t^{(r)}} J(x)= 2\gamma ({d_{x_{j,t}}\varphi _{t1}^{\vartheta }})^*\nabla _{x_{j,t}}E + 2m_{t}^{(r)}. \end{aligned}$$
(16)
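The backward integration of Eq. (14) is a simple per-vertex linear ODE; a hedged sketch (NumPy, assuming the per-vertex 3×3 Jacobians of \(\vartheta \) are available at each time step):

```python
import numpy as np

def backpropagate_grad_E(grad_E_1, jacobians):
    """Integrate d/dt grad_E = -(d vartheta)^* grad_E (Eq. (14)) backwards
    from t = 1 to t = 0 with an Euler scheme:
        grad_{t-dt} = grad_t + dt * (d vartheta_t)^T grad_t.
    jacobians: list over time steps of (N, 3, 3) arrays d_x vartheta.
    Returns the gradient at every time step, grads[0] .. grads[n_steps]."""
    n_steps = len(jacobians)
    dt = 1.0 / n_steps
    grads = [None] * (n_steps + 1)
    grads[n_steps] = grad_E_1
    for t in range(n_steps, 0, -1):
        J, g = jacobians[t - 1], grads[t]
        # apply the adjoint (d vartheta)^T g vertex-wise
        grads[t - 1] = g + dt * np.einsum('nij,ni->nj', J, g)
    return grads
```

When \(\vartheta \) is constant in space (zero Jacobian), the gradient is simply transported unchanged back to \(t=0\).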

We now summarize the optimization with the following algorithm:

Algorithm 1

Given \(S_{atlas},S_{targ}\), use Eq. (5) to obtain \(\{S^{(r)}_{atlas}\},\{S^{(r)}_{targ}\}, r\in \{0,1,\cdots , R\}\).

for \(R=0, 1, 2, \cdots \) do

Step 1: Compute the gradient \(\nabla _{m_t^{(R)}} J(x)\) using Eqs. (13), (14) and (16).

Step 2: Update \(m_t^{(R)}= m_t^{(R), old} - \epsilon \nabla _{m_t^{(R)}} J(x)\), where \(\epsilon \) is an adaptive gradient descent step size, and evaluate J.

Step 3: Repeat Steps 1 and 2 until J is optimized at level R.

Step 4: Initialize \(m_t^{(R+1)}\). Let \(X^{(R)}\) and \(X^{(R+1)}\) be the sets of vertices of \(S^{(R)}\) and \(S^{(R+1)}\), respectively.

if \(x\in X^{(R+1)} \cap X^{(R)}\) then \(m_t^{(R+1)}(x) = m_t^{(R)}(x)\)

else if \(x\in X^{(R+1)}\setminus X^{(R)}\) then \(m^{(R+1)}_t(x) = m^{(R)}_t(x)P^{(R)}\)

end if

end for
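The control flow of Algorithm 1 can be sketched as follows (Python; `compute_gradient` and `prolong_momentum` are hypothetical caller-supplied callbacks standing in for Eqs. (13)–(16) and the Step-4 momentum initialization, which are not implemented here):

```python
import numpy as np

def coarse_to_fine(atlas_levels, targ_levels, compute_gradient,
                   prolong_momentum, n_steps=30, n_iters=100, step=1e-2):
    """Skeleton of Algorithm 1. atlas_levels/targ_levels: coarse-to-fine
    lists of (N_r, 3) vertex arrays. compute_gradient(m, atlas, targ)
    returns the gradient of J w.r.t. the momenta (Eq. (16));
    prolong_momentum(m, R) carries momenta from level R to R+1 (Step 4)."""
    R_max = len(atlas_levels) - 1
    # zero initial momentum anchored at the coarsest vertices, all time steps
    momenta = np.zeros((n_steps, atlas_levels[0].shape[0], 3))
    for R in range(R_max + 1):
        for _ in range(n_iters):                              # Steps 1-3
            grad = compute_gradient(momenta, atlas_levels[R], targ_levels[R])
            momenta = momenta - step * grad                   # Step 2 update
        if R < R_max:
            momenta = prolong_momentum(momenta, R)            # Step 4
    return momenta
```

In practice the inner loop would stop on a convergence criterion for J and adapt the step size, rather than running a fixed number of iterations.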

3 Experiments

In this section, we first show experiments on real datasets using the proposed registration algorithm and the LDDMM surface mapping in [18]. We then report the computation time and evaluate the mapping accuracy of the two mapping algorithms. For all experiments, we use a Gaussian kernel, i.e. \(k_V(x,y):=\exp (-||x-y ||_2^2/\sigma _V^2)\).

Figure 2 illustrates one example of the mapping results using the proposed method. Both atlas and target surfaces have 10242 vertices and 20480 faces. The final deformed atlas shown in panel (d) was obtained using the proposed mapping algorithm with four resolution levels, respectively associated with diffeomorphic kernels of \(\sigma _V=\{25,10,5,1\}\). In panel (f), we visually examine the deformed atlas by plotting, for every vertex of the target surface, the minimum distance (mm) to the vertices of the deformed atlas. The mean and standard deviation of the minimum distance are 0.8238 \(\pm \) 0.565 mm.
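The distance map of panel (f) amounts to a nearest-vertex distance from the target to the deformed atlas; a minimal NumPy sketch (a vertex-to-vertex approximation of point-to-surface distance):

```python
import numpy as np

def min_vertex_distances(target, deformed):
    """For every vertex of the target surface, the minimum Euclidean
    distance to any vertex of the deformed atlas (the quantity plotted
    in panel (f) of Fig. 2). target/deformed: (N, 3) and (M, 3) arrays."""
    d2 = ((target[:, None, :] - deformed[None, :, :]) ** 2).sum(-1)
    return np.sqrt(d2.min(axis=1))

# identical meshes give zero distance everywhere
pts = np.random.default_rng(1).normal(size=(10, 3))
dists = min_vertex_distances(pts, pts)
```

For large meshes a k-d tree would replace the dense pairwise distance matrix, but the measured quantity is the same.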

We visually compared this mapping result with that obtained using the LDDMM surface mapping in [18]. To make the two mapping algorithms comparable, the mapping procedure was the same except that the LDDMM surface mapping was only applied to the finest level of the surfaces and \(\sigma _V=1\). From Fig. 3(c), we can see that the LDDMM surface mapping method tends to have the undesirable behaviour of ‘inwards folding’ (regions with in-folding indicated with tiny black arrows) along the precentral gyrus, while this is not observed using the proposed coarse-to-fine method.

Fig. 2.

An example of cortical surface mapping using the proposed algorithm, in which time for the diffeomorphic flow was discretized into 30 steps. Panels (a, e) respectively show the atlas and target surfaces, panels (b, c) show the intermediate mapping results (deformed atlas at time steps 10 and 20), and panel (d) illustrates the final deformed atlas. Panel (f) shows the minimum distance from each point on the target to the deformed atlas.

Fig. 3.

This figure shows the comparison between the LDDMM surface mapping and the proposed method for cortical surface registration. Panels (a) and (b) are the deformed atlases obtained using LDDMM surface mapping and the proposed method, respectively. The second row shows a closer view of the region around the central sulcus. Panels (c,d) respectively correspond to those from the LDDMM algorithm and the proposed method. Black arrows on panel (c) point out the locations with undesired infolding features.

Next, we aligned the atlas to 5 cortical surfaces using the same mapping procedures as those introduced above for both the proposed method and the LDDMM surface mapping. Table 1 lists the parameter settings and the computational cost averaged across the 5 cortical surfaces. In general, the computational time is much lower for all four levels of the proposed method combined. This is due to the initialization provided by the low-resolution surfaces, which allows the gradient optimization at high resolutions (where the computation is more costly) to converge quickly.

We also evaluated the mapping accuracy of the two methods using the surface alignment consistency (SAC) initially introduced by Van Essen [6]. The SAC quantifies the anatomical variability of a sulcal region among a group of subjects as characterized by the cortical mapping; a larger value indicates better mapping. With prior information such as delineated surface regions, the SAC is given as \(\sum _{i=1}^N(i-1)n(i)/\big [(N-1)N_{\text {total}}\big ]\), where N is the total number of subjects, n(i) is the number of points that were mapped correctly exactly i times, and \(N_{\text {total}}\) is the total number of nodes associated with a particular region.
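The SAC formula translates directly into code; a minimal sketch (the `counts` histogram n(i) is an assumed input, e.g. obtained by counting correct label assignments per node across subjects):

```python
import numpy as np

def sac(counts, n_subjects, n_total):
    """Surface alignment consistency:
    sum_{i=1..N} (i-1) * n(i) / ((N-1) * N_total),
    where counts[i-1] = n(i) is the number of nodes mapped correctly
    exactly i times among the N subjects, and n_total is the number
    of nodes in the region. Equals 1 for perfect alignment."""
    i = np.arange(1, n_subjects + 1)
    return float((i - 1) @ np.asarray(counts)) / ((n_subjects - 1) * n_total)
```

For example, if all 100 nodes of a region are mapped correctly by all 3 subjects, `sac([0, 0, 100], 3, 100)` evaluates to 1.0.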

Table 1. Average computational parameters and time for both methods, implemented in MATLAB. For the MRA-LDDMM, the average time taken is the runtime for the entire set of \(\sigma _V\) for that particular level, at a specific \(\sigma _W\) (bandwidth of the data-attachment term). The LDDMM runtime records the entire time taken to go through the same set of decreasing \(\sigma _W\) used by the MRA-LDDMM.

In our experiment, we manually delineated seventeen sulcal regions on all the cortical surfaces (see details in [21]). The delineated regions are shown in Fig. 4. Figure 5 shows the comparison of SAC values for the LDDMM surface mapping and the proposed method at the last 2 levels. As expected, the SAC values of the proposed method increases as the surface resolution becomes finer. The SAC was also higher for the larger and more prominent sulci such as the Central Sulcus, Cingulate Sulcus, Sylvian Fissure, Superior Precentral Sulcus and Superior Temporal Sulcus. At level 3 (or 4) of the proposed mapping method, the SAC values were uniformly greater than those obtained using the LDDMM surface mapping for all seventeen sulcal regions.

Fig. 4.

Seventeen sulcal regions are illustrated on the atlas surface.

Fig. 5.

Surface alignment consistency (SAC) for the LDDMM surface mapping and the proposed method. Indexed sulcal regions 1-17 are, respectively: Dorsal bank of Calcarine Sulcus (1), Ventral bank of Calcarine Sulcus (2), Central Sulcus (3), Cingulate Sulcus (4), Collateral Sulcus (5), Inferior Frontal Sulcus (6), Intraparietal Sulcus (7), Inferior Precentral Sulcus (8), Inferior Temporal Sulcus (9), Lateral Occipital Sulcus (10), Occipital Temporal Sulcus (11), Parietal Occipital Sulcus (12), Postcentral Sulcus (13), Superior Frontal Sulcus (14), Sylvian Fissure (15), Superior Precentral Sulcus (16), Superior Temporal Sulcus (17).

4 Conclusion

This paper introduced a multiresolution diffeomorphic mapping algorithm for cortical surfaces. We showed that this algorithm improves alignment compared to the LDDMM-surface algorithm [18] and has the potential to reduce computational time.