1 Introduction

Electromechanical models of the heart simulate the physical behaviour of a patient’s heart, in order to perform advanced analysis of the cardiac function. They are of increasing interest to help clinicians in their daily practice (Kayvanpour et al. 2015; Baillargeon et al. 2014; Smith et al. 2011). In particular, recent works have been successful in predicting haemodynamic changes in cardiac resynchronisation therapy (Sermesant et al. 2012), ventricular tachycardia inducibility and dynamics (Chen et al. 2016), as well as in detecting and localising infarcts (Duchateau et al. 2016) using 3D personalised models.

After building the patient’s heart mesh geometry, the simulated heartbeat has to match clinical data, such as ejected blood volume and pressure measurements, or more detailed information about regional motion and abnormalities available from imaging modalities such as 3D echocardiography or cine MRI. This is done by finding adequate simulation settings (boundary conditions, loading constraints) and values of model parameters such as myocardial stiffness and contractility (Xi et al. 2011; Chabiniok et al. 2012). This phase of parameter estimation is usually referred to as the personalisation of the cardiac model (Marchesseau et al. 2013b) and results in a personalised cardiac model (Wang et al. 2012) made of a patient-specific heart geometry (Schaerer et al. 2006) and patient-specific biomechanical parameters.

A wide variety of 3D computer heart models exists in the literature, describing the anatomy and physiology of the heart at various scales. For example, the 3D mesh describing the heart geometry can be made of very different numbers of nodes, and the cellular electromechanical phenomena underlying the build-up of myocardial forces can be described either with a large number of equations or with simplified ones. We refer to the two comprehensive reviews of Chabiniok et al. (2016) and Clayton et al. (2011) for a thorough discussion of models of various scales, types and implementations. The scale and precision (also known as fidelity) of the model is chosen according to the study and the available data. In general, the time required to compute a simulation increases with its level of detail. The simulation of a 3D heartbeat using some of the most complex 3D models can take up to several hours of computation on computers with hundreds of cores (Panthee et al. 2016). This means that for applications where many simulations need to be performed repeatedly (e.g. parameter estimation), computational time becomes a real issue.

The joint use of low-fidelity models to approximate a high-fidelity model and lower the computational burden has been investigated by the multifidelity modelling community since Kennedy and O’Hagan (2000). As described in Peherstorfer et al. (2016), a model management method usually handles and feeds the outputs of a low-fidelity model (e.g. a simplified model, a regression model or a projection-based model) to an underlying application-specific method (e.g. an optimisation algorithm) as surrogates for the high-fidelity model outputs. The method also optionally decides when to recompute simulations of the high-fidelity model to guarantee the accuracy of the low-fidelity approximation.

Here we present an original 0D/3D multifidelity approach for the personalisation of 3D cardiac models (Fig. 1). First, from our 3D cardiac model, we derived and implemented a simplified “0D” model which is faster by 4 orders of magnitude. This was performed as proposed in Caruel et al. (2014), by approximating the geometry of the ventricle as a sphere and assuming spherical symmetry and homogeneity of the electromechanical behaviour.

Then, we introduce a multifidelity coupling in order to approximate 3D model simulations from 0D model simulations. To this end, we build a parameter mapping which converts parameters of the 3D model into parameters of the 0D model, based on a few representative 3D simulations in the parameter space (called the sigma-simulations). Outputs of the 3D model are then approximated from 0D model simulations, thus reducing the computational burden when a large number of 3D simulation outputs is required.

Finally, we present a multifidelity personalisation method, built by adapting an efficient optimisation algorithm called CMA-ES (Hansen 2006) to use approximations of the 3D simulations obtained through the multifidelity coupling instead of the real 3D simulations. This leads to a fast and computationally efficient personalisation method for the 3D model parameters.

Fig. 1

3D and 0D cardiac models (a–c). Our multifidelity personalisation method performs parameter estimation in the 3D model using CMA-ES (d), based on 0D simulations obtained through the multifidelity coupling between the models

A preliminary version of this work was described in Mollero et al. (2016). In this manuscript, we propose a significantly extended methodology for the multifidelity coupling. First, the sigma-simulations selection is performed so that additional computational gains are possible when some estimated parameters have the same equations and values in both models. Then, a more robust, nonlinear, parameter mapping is used. An additional step is finally introduced to correct the possible errors arising during the estimation of 0D model parameters. We also present an improved methodology for the multifidelity personalisation method which enables the use of a single coupling for many iterations of CMA-ES. This is done while simultaneously ensuring that the approximation is accurate enough for the optimisation algorithm, resulting in an overall estimation which is 5 times faster than in Mollero et al. (2016) on average.

In terms of results, we present extended results and discussions for both the multifidelity coupling and the multifidelity personalisation method. The approximation accuracy of the coupling is compared to a hypersurface interpolation method, and the personalisation method is compared to BOBYQA (Powell 2009), a commonly used derivative-free optimisation algorithm. This leads to an extended discussion on the computational aspects of our method in a parallel environment. This work is illustrated on a personalisation problem involving 5 parameters and 3 outputs, and we demonstrate results on a database of 121 different geometries and clinical values, which we believe to be one of the largest cohorts of personalised cardiac cases to date. This personalisation took around 2.5 days on our cluster.

Lastly, our 0D model equations are encoded in the CellML format (Cuellar et al. 2003) and made available for download from the Physiome Model Repository (Yu et al. 2011). Python scripts to perform parameter estimation in the 0D model will be released within 1 month of publication, from the same location.

2 Multifidelity cardiac modelling and personalisation framework

In this work, we use both a 3D electromechanical model which can simulate the behaviour of complex patient-specific heart geometries and a reduced “0D” version of this model which can be summarised in a few equations. Both models rely on the same mechanical laws, but simplifying assumptions are made on the geometry of the 0D model to derive its equations. We also introduce the personalisation framework for the parameters of both models.

2.1 The 3D cardiac model

Our 3D cardiac electromechanical model is an implementation of the Bestel–Clement–Sorine (BCS) model (Chapelle et al. 2012) by Marchesseau et al. (2010, 2013a) in SOFA, an open-source simulation software. The model takes the following inputs:

  • A 3D tetrahedral biventricular mesh, either synthetically created or derived from segmented MRI images.

  • A set of myocardial fibre directions, defined at each node of the mesh. Here we use synthetic fibres generated with a rule based on Streeter (1979).

  • A set of depolarisation and repolarisation times at each node of the mesh, computed from an electrophysiology model. Here we use the Eikonal model as described in Sermesant et al. (2012).

Myocardial forces are then computed at each node and at each time step from the equations of the BCS model. Then, the myocardial motion (mesh node velocities) as well as the ventricular volumes and pressures are computed at each time step of the cardiac cycle from these forces. See “Appendix A” for a description of the mechanical model equations and parameters. With myocardial meshes made of around 15 000 nodes and a time step of 5 ms, a single beat of 0.9 s takes 15 minutes to compute on average on a single core (Intel(R) Core(TM) i7-4600U [2.10GHz]).

2.2 The 0D cardiac model

As described in Caruel et al. (2014), it is possible to derive the equations of a fast 0D model of the heart, which relies on the same BCS equations. This is done by making the following simplifying assumptions on the geometry, the electrical activation and the properties of the material:

  1. The ventricle has a spherical shape.

  2. The material is incompressible.

  3. The electrical activity is synchronous and homogeneous over the sphere.

With these assumptions of spherical symmetry, myocardial forces and motion are also spherically symmetric and can be entirely described by the inner radius r of the ventricle. Deformation and stress tensors can also be reduced to a simple form (see Caruel et al. (2014)), which leads to a system of a dozen equations (see “Appendix B”).

We implemented the equations in C and solve the system using an explicit forward Euler method with a temporal discretisation of 0.01 ms. This allows the simulation of around 15 beats per second. We also encoded the 0D model in the CellML format (Cuellar et al. 2003), which is an open standard based on the XML markup language to store and exchange computer-based mathematical models. This model can be downloaded from the Physiome Model Repository and easily exploited through the software OpenCOR (Garny and Hunter 2015).
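
The integration scheme can be sketched as follows. The two-variable system below is a toy stand-in (a damped volume–pressure relaxation with illustrative constants), not the actual BCS system of a dozen equations, but the update rule is the same explicit forward Euler step with a fixed time step:

```python
# Forward Euler integration sketch for a 0D model.
# The right-hand side below is a TOY system (illustrative constants),
# not the BCS equations of Appendix B; only the integration scheme
# matches the one described in the text.

def forward_euler(f, state, dt, t_end):
    """Integrate d(state)/dt = f(t, state) with an explicit forward Euler step."""
    t = 0.0
    while t < t_end:
        deriv = f(t, state)
        state = [s + dt * d for s, d in zip(state, deriv)]
        t += dt
    return state

def toy_rhs(t, state):
    v, p = state                      # volume (mL) and pressure (kPa), illustrative
    return [-0.5 * (v - 120.0),       # volume relaxes towards 120 mL
            -2.0 * p + 0.1 * v]       # pressure driven by the volume

# 0.01 ms time step over one 900 ms beat, as in the text
final = forward_euler(toy_rhs, [150.0, 10.0], dt=0.01, t_end=900.0)
```

With these toy rates the system converges to its equilibrium (v = 120, p = 0.05·v = 6) well within one beat; the explicit step is stable here because dt is much smaller than the fastest time constant.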

2.3 Parameter estimation framework for cardiac models

After building the model, parameter estimation is usually the first step in analysing clinical data with a model. It consists in finding parameter values for which the simulation reproduces the values and quantities available in the data, such as pressure or volume measurements. In particular, when the geometry is patient-specific, this phase is called cardiac model personalisation (Marchesseau et al. 2013b; Kayvanpour et al. 2015).

Formally, we consider a cardiac model M, a set of simulated quantities called the outputs \({{\mathscr {O}}}\) and a subset \({\mathscr {P}}_\text {M}\) of varying parameters of the model (the other parameters being kept fixed). Given a vector of these parameters \(x\in \varOmega _\text {M}\), we note \({{\mathscr {O}}}_\text {M}(x)\) the values of the outputs \({\mathscr {O}}\) in the simulation of M with parameter values x. The goal of personalisation is to find parameter values \(x^*\in \varOmega _\text {M}\) for which the outputs values \({\mathscr {O}}_\text {M}(x^*)\) best match some target values \(\widehat{{\mathscr {O}}}\).

This is an inverse problem, which can be tackled by different methods (see the review of Chabiniok et al. (2016)). We propose here a parameter estimation framework (Fig. 1d) through derivative-free optimisation, using an efficient genetic algorithm called CMA-ES (Hansen 2006).

2.3.1 Robust optimisation with the genetic algorithm CMA-ES

We define the score \(S(x,\widehat{{\mathscr {O}}})\) of some parameter values x as the \(L_2\) distance between \({\mathscr {O}}_\text {M}(x)\) and \(\widehat{{\mathscr {O}}}\), normalised by the Hadamard (coordinate-by-coordinate) division \(\oslash \) with a vector \(\mathscr {N}\), in order to compare outputs with different units:

$$\begin{aligned} S(x,\widehat{{\mathscr {O}}})=\left\| ({\mathscr {O}}_\text {M}(x)-\widehat{{\mathscr {O}}}) \oslash \mathscr {N}\right\| _2. \end{aligned}$$
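
In code, the score is a plain normalised Euclidean distance. The example below uses the target and normalisation values of the problem in Sect. 4; the simulated output values are illustrative:

```python
import math

def score(outputs, targets, norms):
    """L2 distance between simulated outputs and targets, after Hadamard
    (element-wise) division by the normalisation vector N."""
    return math.sqrt(sum(((o - t) / n) ** 2
                         for o, t, n in zip(outputs, targets, norms)))

# Outputs: stroke volume (mL), diastolic and mean aortic pressure (Pa).
# Targets and normalisation coefficients are those of Sect. 4;
# the simulated values are illustrative.
simulated = [55.0, 7100.0, 10000.0]
targets   = [60.0, 7315.0, 10152.0]
norms     = [10.0, 200.0, 200.0]
s = score(simulated, targets, norms)
```

The normalisation makes a 10 mL error on the stroke volume count as much as a 200 Pa error on a pressure, so outputs with different units contribute comparably to the score.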

We then perform a derivative-free optimisation with the genetic algorithm CMA-ES, which aims at minimising this score S. The algorithm (whose name stands for covariance matrix adaptation evolution strategy) asks at each iteration n for the scores of m points \(x_i\in \varOmega _\text {M}\) (a generation), drawn from a multivariate distribution with covariance \(I_n^c\) and mean \(I_n^m\). Then, it combines Bayesian principles of maximum likelihood with a natural gradient descent on the ranks of the points’ scores within the generation to update both \(I_n^c\) and \(I_n^m\).

The CMA-ES algorithm has many advantages in this context. First, it can explore a large and unbounded parameter space while still performing a local search at each iteration and has shown very good results on problems involving hundreds of parameters to optimise (Geijtenbeek et al. 2013). Second, because the updates of \(I_n^c\) and \(I_n^m\) only depend on the score ranks, it is very robust to outliers in the generation, in particular to parameter values for which the simulation diverges (in which case we give an arbitrarily high score to these parameters).

Also, since each score comes from an independent simulation, this algorithm is well suited to parallel environments. We can either decide to set a very high population size m and do many parallel simulations (in this case the algorithm can converge in a few iterations), or a lower population size and rely instead on many iterations of the algorithm for convergence.
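
The ask/score/tell loop can be sketched as follows. For brevity, the update below is a plain rank-based mean shift with a decaying step size, i.e. a much simplified evolution strategy, not the full covariance matrix adaptation of CMA-ES; a production implementation would rather use Hansen's reference `cma` package:

```python
import random

def es_minimise(score, x0, sigma=0.5, pop=30, iters=60, seed=0):
    """Rank-based evolution strategy sketch (isotropic sampling only).
    The real CMA-ES additionally adapts a full covariance matrix and the
    step size from the same ranked samples."""
    rng = random.Random(seed)
    mean = list(x0)
    mu = pop // 2                          # number of retained best-ranked points
    for _ in range(iters):
        # "ask": draw one generation around the current mean
        gen = [[m + sigma * rng.gauss(0.0, 1.0) for m in mean]
               for _ in range(pop)]
        # each score comes from an independent simulation, so this
        # evaluation is trivially parallelisable across CPUs
        ranked = sorted(gen, key=score)
        # "tell": move the mean towards the best-ranked half (rank-based,
        # hence robust to diverging simulations given an arbitrarily high score)
        mean = [sum(x[d] for x in ranked[:mu]) / mu for d in range(len(mean))]
        sigma *= 0.95                      # crude step-size decay, adapted in CMA-ES
    return mean

# toy quadratic score with minimum at (1, 2)
best = es_minimise(lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2, [0.0, 0.0])
```

Because only the ranks of the scores enter the update, replacing the true scores by any rank-preserving approximation leaves the iteration unchanged, which is the property the multifidelity method of Sect. 4 exploits.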

2.3.2 Application to the 0D model

Because the 0D model is extremely fast (15 beats per second), parameter estimation is also very fast with the 0D model. For example, with a population size of 50 points per generation, it takes less than 50 generations and 3 minutes on a 4-core computer (with parallel computation of the simulations within each generation) for most problems with up to 10 outputs and parameters to converge.

In our current implementation, 29 outputs can be extracted from the pressure, volume and flow curves, and 25 parameters of the 0D model can be estimated. Python scripts to automatically perform the parameter estimation will be released within 1 month of publication, available for download from the Physiome Model Repository.

2.3.3 Application to the 3D model

It is possible to apply this framework directly to the 3D model, but the computational burden can become an issue because of the time required to compute the 3D simulations. Either we set a small population size, in which case many iterations of CMA-ES (of around 15 minutes each) are needed; or we set a high population size, in which case the number of CPUs used in parallel may become prohibitive. In Sect. 4, our multifidelity personalisation method lowers this computational burden by replacing the outputs values of 3D simulations with approximations computed from 0D simulations through a multifidelity coupling between the two models, as explained in Sect. 3.

3 Multifidelity coupling: approximating global outputs values of the 3D model

We present here a multifidelity coupling between the 3D and the 0D model. We call global outputs of the models those quantities which can be computed from the simulations of both models, such as the total ejected blood volume (stroke volume) or the minimal (diastolic) aortic pressure.

We consider a set of \(N_1\) parameters of the 3D model \({\mathscr {P}}_\text {3D}\), a set of global outputs \({\mathscr {O}}\), and a set of parameter values \(x_{i}\in \varOmega _{3D}\) of the parameters \({\mathscr {P}}_\text {3D}\). The goal is to get approximations of the values \({\mathscr {O}}_\text {3D}(x_{i})\) by performing 0D simulations and only a few 3D simulations.

We will illustrate the method on the following problem: a set of 5 parameters \({\mathscr {P}}_\text {3D}\) of the 3D model, and a set of 3 outputs \({\mathscr {O}}\) listed in Table 1. We want to approximate the output values for \(m=30\) simulations with parameters \(x_{i}\), drawn from a multivariate distribution (as in a CMA-ES iteration).

Table 1 Sets of 3D model parameters and global outputs in the example
Fig. 2

Multifidelity coupling: After performing a few 3D sigma-simulations, we find coupled 0D simulations for each of those simulations. Then, we build a parameter mapping which converts parameters of the 3D model into parameters of the 0D model, in order to approximate 3D simulations outputs with the outputs of 0D simulations

3.1 Global strategy: building a mapping between 3D and 0D mechanical parameters

Because they rely on the same equations, both models display many similar trends in their global outputs values when some parameters vary. For example, if a 3D simulation and a 0D simulation have the same stroke volume, the stroke volume variations with changes in the haemodynamic parameters (such as the peripheral resistance) are very similar in both models.

However, some parameters do not behave in exactly the same way and do not even always lie in the same range of values. This is especially the case for mechanical parameters such as the 3D and 0D dampings, which rely on different equations. But even for parameters from the same equations in both models (such as \(\sigma \) and \(c_1\)), the values might be very different in 0D and 3D simulations with similar outputs, due to the different assumptions.

Formally, there is no trivial function which can convert the \(x_{i}\in \varOmega _{3D}\) into values \(y_i\in \varOmega _{0D}\) of 0D model parameters \({\mathscr {P}}_\text {0D}\) for which the global outputs values \({\mathscr {O}}_\text {0D}(y_i)\) and \({\mathscr {O}}_\text {3D}(x_i)\) are the same (or at least close). The idea of the multifidelity coupling is to find 0D model simulations which are similar to a few selected 3D simulations and then build a parameter mapping \(\phi \) between the parameters of both models. We use the following strategy:

  1. First, we perform a few representative 3D simulations within the domain of interest (called the sigma-simulations, with parameters \(X_{i}\in \varOmega _{3D}\)).

  2. Then, for each 3D sigma-simulation with parameter values \(X_{i}\in \varOmega _{3D}\), we estimate parameter values \(Y_{i}\in \varOmega _{0D}\) of a coupled 0D simulation which approximates the outputs \({\mathscr {O}}\) of the 3D sigma-simulation.

  3. From those 3D model parameters \(X_{i}\) and 0D model parameters \(Y_{i}\), we derive a parameter mapping \(\phi \) which converts 3D parameters into 0D parameters.

  4. Finally, we approximate the global outputs values \({\mathscr {O}}_\text {3D}(x_{i})\) of all the 3D simulations with parameters \(x_{i}\in \varOmega _{3D}\), from the 0D simulations with parameters \(\phi (x_{i})\in \varOmega _{0D}\). This is done by adding a correction term \(\psi \), which is learnt to correct the possible errors arising in the previous steps.

The overall process is illustrated in Fig. 2. In the sequel, we first discuss the selection of representative sigma-simulations (Sect. 3.2), then the computation of coupled 0D simulations (Sect. 3.3), then the parameter mapping \(\phi \) (Sect. 3.4) and the correction term \(\psi \) (Sect. 3.5). Finally, we give numerical results of the multifidelity approximation in Sect. 3.6.

3.2 Sigma-simulations: performing representative 3D simulations within the domain of interest

We consider a subset \(\mathscr {P'}_\text {3D}\subset {\mathscr {P}}_\text {3D}\) of \(N_2 < N_1\) parameters which cannot be converted directly into 0D model parameters. In order to assess the variations of the global outputs with respect to these parameters over the set of \(x_{i}\in \varOmega _{3D}\), we perform a few selected simulations in the domain \(\varOmega _{3D}\).

To this end, we perform PCA on the set of \(x_{i}\in \varOmega _{3D}\), which gives \(N_1\) eigenvectors of the set. Then, we extract the \(N_2\) eigenvectors \(\mathbf {d_k}\) which display the maximal variations of the parameters \(\mathscr {P'}_\text {3D}\). This is done by sorting the eigenvectors by the norm of their projection onto the subspace spanned by the coordinates corresponding to the parameters \(\mathscr {P'}_\text {3D}\), and selecting the \(N_2\) largest.

One sigma-simulation is then performed at the centre (\(X_{0}\)) of the domain of interest \(\varOmega _{3D}\), and pairs are performed equidistant from \(X_{0}\) in each of the \(N_2\) extracted directions (\(X_{k}^{+}=X_{0}+d_k\) and \(X_{k}^{-}=X_{0}-d_k\) for \(k=1..N_2\)). This gives a total of \(2N_2+1\) representative simulations in the domain.

In our example, the three haemodynamic parameters \(R_p\), C and \(P_\text {ve}\) have the same values and the same equations in both models, so we can use the same values directly in the two models. On the other hand, the contractility \(\sigma \) and the stiffness \(c_1\) do not have the same values in both models so we need to assess how their variation is going to impact the global outputs. We then extract the \(N_2=2\) directions for which the variations of \(\sigma \) and \(c_1\) are maximal and perform a total of \(2N_2+1=5\) sigma-simulations with parameters \(X_{0}\), \(X_{1}^{+}\), \(X_{2}^{+}\), \(X_{1}^{-}\) and \(X_{2}^{-}\).
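
The construction of the \(2N_2+1\) sigma-points can be sketched as follows. The PCA extraction of the directions \(\mathbf {d_k}\) is assumed done beforehand (e.g. with an eigen-decomposition of the sample covariance), and the centre and directions below are illustrative values, not an actual coupling:

```python
def sigma_points(x0, directions):
    """Build the 2*N2 + 1 sigma-simulation parameter sets: the centre X0,
    plus a pair X0 + d_k / X0 - d_k along each selected PCA direction d_k."""
    points = [list(x0)]
    for d in directions:
        points.append([c + dc for c, dc in zip(x0, d)])   # X_k^+
        points.append([c - dc for c, dc in zip(x0, d)])   # X_k^-
    return points

# Illustrative centre of the 5-parameter domain (ordering is hypothetical)
x0 = [1.0, 1.0, 1.0, 1.0, 1.0]
# Two hypothetical directions of maximal variation of the first two
# coordinates (standing for sigma and c1), as selected by the PCA step
d1 = [0.3, 0.1, 0.0, 0.0, 0.0]
d2 = [-0.1, 0.4, 0.0, 0.0, 0.0]
pts = sigma_points(x0, [d1, d2])   # 2*2 + 1 = 5 sigma-simulations
```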

3.3 Coupled 0D simulations: reproducing global outputs of the 3D sigma-simulations with 0D simulations

Then, for each sigma-simulation with parameters \(X_{i},\) \(i=1..2N_2+1\) and output values \({\mathscr {O}}_\text {3D}(X_{i})\), we want to find a corresponding 0D simulation which has similar global outputs values. To this end, we consider another set \(\mathscr {O'}\) of global outputs called the coupling outputs, and a set of 0D parameters \(\mathscr {P'}_\text {0D}\) called the coupled 0D parameters.

We then find values \(Y_{i}\) of the parameters \(\mathscr {P'}_\text {0D}\) for which the coupling outputs values \(\mathscr {O'}_\text {0D}(Y_{i})\) of the 0D model simulations are the closest to the coupling outputs values \(\mathscr {O'}_\text {3D}(X_{i})\) of the 3D model sigma-simulations, with all other parameters being the same in both models. This is what we call a coupled 0D simulation.

This is done by performing, for each of the \(2N_2+1\) 3D sigma-simulations, an independent parameter estimation of the coupled 0D parameters \(\mathscr {P'}_\text {0D}\) using the method presented in Sect. 2.3. The target values \(\widehat{\mathscr {O'}}\) for the coupling outputs \(\mathscr {O'}\) are their values in the corresponding 3D sigma-simulation.

In our example, since we want to approximate outputs from the volume and pressure curves (see Table 1), we need to approximate these curves with the 0D model. We chose a set of 4 coupling outputs \(\mathscr {O'}\) from these curves, and a set of 3 coupled 0D parameters \(\mathscr {P'}_\text {0D}\) of the 0D model to estimate, both listed in Table 2.

Table 2 Coupling outputs, coupled 3D parameters, coupled 0D parameters which are estimated, and directly mapped parameters in the example
Table 3 Coupling outputs values for the 3D sigma-simulations with parameters \(X_{i}\) and the corresponding coupled 0D simulations with parameters \(Y_{i}\)

After performing the 5 parameter estimations for the 5 sigma-simulations, we found 5 coupled 0D simulations with parameters \(Y_{i},\) \(i=1..5\), whose coupling outputs values are similar to those of the sigma-simulations; they are reported in Table 3. We also display the pressure and volume curves of the 3D sigma-simulations and coupled 0D simulations in Fig. 3.

It is worth noting that there is no guarantee that we can find a set of parameters for which the 0D simulation has exactly the same global outputs values as the 3D simulation. In fact, we can observe in Table 3 that some coupling outputs do not have the same values in a 3D sigma-simulation and the corresponding coupled 0D simulation. We will see in Sect. 3.5 how this obstacle can be overcome.

We point out that there are many possibilities for choosing the sets of coupling outputs \(\mathscr {O'}\) and coupled 0D parameters. For example, another possibility would have been to directly use the set of outputs \({\mathscr {O}}\) to be approximated. This would have led to 0D and 3D simulations with the same stroke volume, but not necessarily the same minimal and maximal volumes. In general, the sets \(\mathscr {O'}\) and \({\mathscr {O}}\) have to be related so that it is possible to calculate the values of the outputs \({\mathscr {O}}\) from the values of the coupling outputs \(\mathscr {O'}\).

Similarly, there are many possibilities for choosing the set of coupled 0D parameters. Here we could also have set the resting radius in the 0D model to a value for which the “resting volume” is the same as in the 3D model and then estimated only the stiffness and contractility of the 0D model. Empirically, using more parameters appears beneficial to avoid ending in a local minimum during the parameter estimation of the 0D simulations.

3.4 Parameter mapping: a function to convert 3D model parameters into 0D model parameters

We now have a corresponding coupled 0D simulation with parameters \(Y_{i}\in \varOmega _{0D}\) for each sigma-simulation with parameters \(X_{i}\in \varOmega _{3D}\). The second idea of the coupling is to build a mapping \(\phi \) between the 3D and 0D model parameters using the \(X_{i}\) and \(Y_{i}\). This mapping will then be used to approximate global outputs values \({\mathscr {O}}_\text {3D}(x_i)\) of the 3D simulations with parameters \(x_i\), from the values \({\mathscr {O}}_\text {0D}(\phi (x_i))\) of the 0D simulations with parameters \(\phi (x_i)\).

The parameters \(X_{i}\in \varOmega _{3D}\) were chosen in a specific way in Sect. 3.2: one (\(X_{0}\)) is at the centre of the \(x_i\), and there are two points equidistant from \(X_{0}\) (\(X_{k}^{+}\) and \(X_{k}^{-}\) for \(k=1..N_2\)) on each of the \(N_2\) axes, which are orthogonal to each other. However, the \(Y_{i}\) were independently estimated for each sigma-simulation, so there is no such relationship between them.

For the mapping \(\phi \), we use here a degree 2 hypersurface which interpolates the \(Y_{i}\) at the points \(X_{i}\). In dimension 1, this is equivalent to finding a degree 2 polynomial which interpolates three specific points. In higher dimension (\(N_2\) in our case), there is a straightforward formula because of the specific organisation of the \(X_{i}\) along orthogonal axes:

$$\begin{aligned} \left\{ \begin{array}{@{}l@{}} c_k=\mathbf {(x-X_{0})}\cdot \frac{\mathbf {d_{k}}}{||\mathbf {d_{k}}||^2},\qquad k=1..N_2\\ \mathbf {F_k^+}=\frac{(Y_{k}^{+}-Y_{0})+(Y_{k}^{-}-Y_{0})}{2},\qquad k=1..N_2\\ \mathbf {F_k^-}=\frac{(Y_{k}^{+}-Y_{0})-(Y_{k}^{-}-Y_{0})}{2},\qquad k=1..N_2\\ \phi (x)=Y_{0}+\sum \limits _{k=1}^{N_{2}} c_k \cdot (c_k\cdot \mathbf {F_k^+}+\mathbf {F_k^-}). \end{array}\right. \end{aligned}$$
(1)
Fig. 3

Comparison between the volume (top) and pressure (bottom) curves of the sigma-simulations simulated with the 3D model (red), and the corresponding coupled 0D simulations (black). The 5 columns correspond, respectively, to the sigma-simulations with parameters \(X_{0}\), \(X_{1}^{+}\), \(X_{2}^{+}\), \(X_{1}^{-}\) and \(X_{2}^{-}\)

This formula leads to \(\phi (X_{i})=Y_{i}\) for all the \(i=1..2N_2+1\) sigma-simulations, so the parameters of the 3D sigma-simulations are mapped to the parameters of the coupled 0D simulations of the previous section. We will then use this mapping to approximate global outputs of 3D simulations with parameters \(x_i\) from 0D simulations with parameters \(\phi (x_i)\).
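
A direct implementation of Eq. 1 makes this interpolation property easy to check numerically. The centre, directions and 0D parameter values below are illustrative, not taken from an actual coupling:

```python
def make_quadratic_map(x0, directions, y0, y_plus, y_minus):
    """Degree-2 hypersurface of Eq. 1: interpolates y0 at x0, and
    y_plus[k] / y_minus[k] at x0 + d_k / x0 - d_k for each direction d_k
    (the directions are assumed orthogonal, as produced by the PCA step)."""
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))

    def phi(x):
        y = list(y0)                                   # start from Y_0
        for k, d in enumerate(directions):
            # c_k = (x - X_0) . d_k / ||d_k||^2
            c = dot([xi - x0i for xi, x0i in zip(x, x0)], d) / dot(d, d)
            # F_k^+ = ((Y_k^+ - Y_0) + (Y_k^- - Y_0)) / 2
            f_plus = [((yp - yc) + (ym - yc)) / 2.0
                      for yp, ym, yc in zip(y_plus[k], y_minus[k], y0)]
            # F_k^- = ((Y_k^+ - Y_0) - (Y_k^- - Y_0)) / 2
            f_minus = [((yp - yc) - (ym - yc)) / 2.0
                       for yp, ym, yc in zip(y_plus[k], y_minus[k], y0)]
            # phi(x) = Y_0 + sum_k c_k (c_k F_k^+ + F_k^-)
            y = [yi + c * (c * fp + fm)
                 for yi, fp, fm in zip(y, f_plus, f_minus)]
        return y
    return phi

# Illustrative example: 2 orthogonal directions, 2 coupled 0D parameters
x0 = [0.0, 0.0]
dirs = [[1.0, 0.0], [0.0, 2.0]]
y0 = [5.0, 1.0]                     # Y_0, coupled 0D parameters at the centre
yp = [[6.0, 1.5], [5.5, 0.5]]       # Y_k^+ for k = 1, 2
ym = [[4.5, 0.8], [4.0, 1.2]]       # Y_k^- for k = 1, 2
phi = make_quadratic_map(x0, dirs, y0, yp, ym)
```

The same routine, with the \(Y_{i}\) replaced by the bias values, yields the correction term \(\psi \) of Sect. 3.5.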

3.5 Approximating global outputs: correcting bias

Ideally, in the computation of coupled 0D simulations in Sect. 3.3, we find 0D simulations with the same coupling outputs values as the 3D sigma-simulations, i.e. \(\mathscr {O'}_\text {0D}(Y_{i})=\mathscr {O'}_\text {3D}(X_{i})\). As illustrated in Table 3, this is not always the case, and the coupling outputs values can differ between the coupled 0D simulations and the sigma-simulations. This also means that the direct approximation of the sigma-simulation output values \({\mathscr {O}}_\text {3D}(X_{i})\) by the values \({\mathscr {O}}_\text {0D}(Y_{i})\) through the mapping has a bias due to this difference.

In order to correct this approximation bias, both for the output values of the sigma-simulations and for all the subsequent 3D simulations with parameters \(x_i\), we build a new degree 2 hypersurface \(\psi \) mapping the parameters of the sigma-simulations \(X_{i}\) to the bias. The formula is exactly the same as in Eq. 1, where the \(Y_{i}\) are replaced by the bias values \(({\mathscr {O}}_\text {3D}(X_{i})-{\mathscr {O}}_\text {0D}(Y_{i}))\).

The final approximating function \(\mathscr {C}_{\phi ,\psi }\) used to approximate the \({\mathscr {O}}_\text {3D}(x_{i})\) is then given by the following formula:

$$\begin{aligned} \mathscr {C}_{\phi ,\psi }(x_{i}) = {\mathscr {O}}_\text {0D}(\phi (x_{i})) + \psi (x_{i}) \approx {\mathscr {O}}_\text {3D}(x_{i}), \end{aligned}$$
(2)

and interpolates in particular the global outputs values \({\mathscr {O}}_\text {3D}(X_{i})\) of the sigma-simulations.

3.6 Approximation results

Results are given here for the approximation of the global outputs values \({\mathscr {O}}_\text {3D}(x_{i})\) of the 30 simulations with parameters \(x_i\). We compute the mean absolute error made on the approximation of the 3 global outputs \({\mathscr {O}}\), first with the biased approximation with \({\mathscr {O}}_\text {0D}(\phi (x_{i}))\) (\(MAE_\text {Biased}\)) and then with the corrected approximation with \(\mathscr {C}_{\phi ,\psi }\) (\(MAE_\text {Corrected}\)). Results are reported in Table 4.
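
The per-output mean absolute error over the m approximated simulations can be computed as follows; the output vectors below are illustrative, not the values of Table 4:

```python
def mae_per_output(approx, truth):
    """Mean absolute error of each global output over the m simulations.
    approx, truth: lists of m output vectors (same length and ordering)."""
    n_out = len(truth[0])
    m = len(truth)
    return [sum(abs(a[j] - t[j]) for a, t in zip(approx, truth)) / m
            for j in range(n_out)]

# Illustrative values for 3 outputs (SV, DP, MP) over 4 simulations
truth  = [[60.0, 7300.0, 10100.0], [55.0, 7200.0, 9900.0],
          [62.0, 7400.0, 10200.0], [58.0, 7250.0, 10000.0]]
approx = [[59.0, 7320.0, 10090.0], [56.0, 7180.0, 9910.0],
          [61.5, 7390.0, 10230.0], [58.5, 7270.0, 9980.0]]
mae = mae_per_output(approx, truth)
```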

We observe that the approximation of the output values is good compared to the range of values to be approximated, and that the corrected approximation is more accurate than the biased one. This means that the hypersurface \(\psi \) indeed corrects errors due to the differences between the coupled 0D simulations and the 3D sigma-simulations.

Table 4 Error in the approximation of the global outputs values \({\mathscr {O}}_\text {3D}(x_{i})\) with the various methods

Finally, we compare our method to an interpolation with a degree 2 hypersurface (\(MAE_\text {Hypersurface}\)). To this end, we use the same formula as in Eq. 1, where the \(Y_{i}\) are replaced by the output values \({\mathscr {O}}_\text {3D}(X_{i})\). Our method performs better on all the outputs (\(MAE_\text {Corrected} < MAE_\text {Hypersurface}\)), in particular on the pressure values. This is because the sigma-simulations are computed only in the directions of maximal variation of the parameters \(\sigma _0\) and \(c_1\) (see Sect. 3.2). There are then a few directions of the parameter space in which the variations of the global output values could not be evaluated by the interpolation.

In order to make a fairer comparison with an interpolation method, we computed the sigma-simulations in all the directions of the domain by selecting all the eigenvectors in Sect. 3.2, leading to \(2\cdot N_1+1=11\) sigma-simulations. We performed the degree 2 interpolation from these 11 sigma-simulations and report the results (\(MAE_\text {Hypersurface-11}\)). The degree 2 hypersurface performs better than our method on the stroke volume and the mean pressure, but not on the diastolic pressure.

We conclude that the approximation using the coupling of the 0D and 3D models gives competitive approximation results compared to classical regression methods, at the lowest computational cost. This is because the variations of some outputs (which rely on the same equations in both models) can be directly approximated in some directions of the parameter space, without having to compute 3D simulations in these directions. Here in particular, the pressure output variations due to changes in the haemodynamic parameters C, \(R_p\) and \(P_\text {ve}\) are correctly predicted with the coupling (especially the diastolic aortic pressure (DP) variations), even though no sigma-simulation was computed in the directions of maximal variation of these parameters (Sect. 3.2). As a consequence, only 5 sigma-simulations are required to approximate all the outputs values within the parameter space with the coupling, while the hypersurface interpolation needs 11 sigma-simulations to achieve similarly accurate results.

4 Multifidelity optimisation for efficient 3D cardiac model personalisation

Here we present our multifidelity personalisation method for the 3D model. We suppose that a parameter estimation with CMA-ES was set up over \(N_1\) parameters \({\mathscr {P}}\) of the 3D model as described in Sect. 2.3, some global outputs \({\mathscr {O}}\), some target values \(\widehat{{\mathscr {O}}}\) and a population size m. The idea of the method is to replace the scores of 3D simulations in CMA-ES with approximate scores calculated through multifidelity coupling.

Fig. 4

Criterion for selecting the generation of the next coupling step in the selection step: as the 3D parameters of the simulations asked by CMA-ES (in black) move increasingly far from the sigma-simulations (in green) of the coupling, the output values predicted with 0D simulations (in orange) drift increasingly far from the real output values of the 3D simulations. We then recompute the coupling when this distance is too high (\(M(o_n) > \gamma \sqrt{{|}{\mathscr {O}}{|}}\))

We illustrate the method with the same set of 5 parameters \({\mathscr {P}}\) and 3 outputs \({\mathscr {O}}\) as in Sect. 3 and the same population size \(m=30\). Target values \(\widehat{{\mathscr {O}}}\) for the optimisation are, respectively, 60 ml for the stroke volume (SV), 7315 Pa for the diastolic aortic pressure (DP) and 10,152 Pa for the mean aortic pressure (MP). The normalisation coefficients for this problem (in the vector \(\mathscr {N}\) defined in Sect. 2.3.1) are 10 ml for the stroke volume (SV) and 200 Pa for both the diastolic (DP) and mean (MP) aortic pressures.

4.1 Multifidelity-CMA: CMA-ES optimisation with the multifidelity coupling

At each iteration, the algorithm CMA-ES asks for the scores of m simulations of the 3D model, whose parameters \(x_{j}\) are drawn from a multivariate distribution.
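The sampling step of CMA-ES can be sketched as drawing the population from a multivariate Gaussian; the mean, step size and covariance below are placeholders, since CMA-ES adapts them across generations.

```python
import numpy as np

# Illustrative CMA-ES sampling step: each generation draws m candidate
# parameter sets x_j from a multivariate Gaussian whose mean and covariance
# are adapted by the algorithm (the values here are placeholders).
rng = np.random.default_rng(42)
mean = np.zeros(5)    # current distribution mean over the 5 parameters
sigma = 0.3           # global step size
C = np.eye(5)         # covariance matrix adapted by CMA-ES
m = 30                # population size

population = rng.multivariate_normal(mean, sigma**2 * C, size=m)  # shape (30, 5)
```

In the full algorithm, each of these m parameter sets would be scored (here, by an approximated 3D simulation) and the distribution updated from the ranking of the scores.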

A first approach to replace the computation of the 3D simulations by 0D simulations is to perform the coupling described in Sect. 3 for each generation of CMA-ES. This means recomputing sigma-simulations, coupled 0D simulations and a parameter mapping for each set of \(x_{j}\). This was our approach (called Coupled-CMA) in Mollero et al. (2016). We showed that the optimisation could converge with approximate scores, even as fast as with the real scores in some cases. We also personalised 34 hearts with this method, thus exhibiting a practical personalisation method with a lower computational burden than the original CMA-ES algorithm (because only the sigma-simulations were computed for each generation instead of the m 3D simulations).

Here we present an improved approach called multifidelity-CMA. Instead of recomputing the coupling for each generation, we approximate scores of 3D simulations of successive generations of CMA-ES. Indeed, because the sets of parameters \(x_{j}^n\) and \(x_{j}^{n+1}\) asked by CMA-ES in two consecutive generations n and \(n+1\) are usually close, the function \(\mathscr {C}_{\phi ,\psi }\) computed at the iteration n to approximate 3D simulations with parameters \(x_{j}^n\) can give a good approximation for 3D simulations with parameters \(x_{j}^{n+1}\) as well.

On the other hand, after a few iterations \(n+1,\ldots ,n+p\), the points asked by CMA-ES can drift increasingly far from the sigma-simulations of the multifidelity coupling performed at iteration n. This can lead to increasingly inaccurate approximations of the scores, making the optimisation impossible.

We therefore developed a criterion to evaluate the accuracy of the approximation over a few successive iterations of CMA-ES, and to decide at which step a new multifidelity coupling has to be computed. This is done by iterating on the following steps:

  1. Coupling step: At a generation \(n_0\) of CMA-ES, we first perform a multifidelity coupling, as explained in Sect. 3.4. This leads to the computation of the function \(\mathscr {C}_{\phi ,\psi }\).

  2. Exploration step: Then, we perform N iterations \(n=n_0+1,\ldots ,n_0+N\) of the CMA-ES algorithm, where all the outputs \({\mathscr {O}}_\text {3D}(x_j^n)\) of the 3D simulations with parameters \(x_j^n\) are approximated by \(\mathscr {C}_{\phi ,\psi }(x_{j}^n)\).

  3. Control step: For each of these N iterations, we compute a control-simulation: the 3D simulation whose parameters \(o_n\) are the mean of the population parameters \(x_j^n\).

  4. Selection step: We compute our criterion \(M(o_n)\) as the Mahalanobis distance between the vector of output values \({\mathscr {O}}_\text {3D}(o_n)\) of the control-simulation and the set of vectors of approximated output values \(\mathscr {C}_{\phi ,\psi }(x_{j}^n)\).

     Finally, we select the iteration \(n^*\) at which the next coupling step is performed with the following criterion:

     $$\begin{aligned} n^*=\underset{n \,:\, M(o_n) < \gamma \sqrt{{|}{\mathscr {O}}{|}}}{\hbox {argmin}}\; S(o_n) \end{aligned}$$

The process is illustrated in Fig. 4. The Mahalanobis distance \(M(o_n)\) is a ratio between the approximation error on the control-simulation output values and the spread of the approximated output values for this generation. Roughly, this gives an indication of “how accurate the coupling is” on the control-simulation, compared to “how accurate it needs to be” so that CMA-ES can rank the scores accurately.
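The selection criterion can be sketched as follows, assuming the outputs are stored as NumPy arrays; the function name and array layout are ours, not the paper's.

```python
import numpy as np

def recoupling_criterion(o3d_control, approx_outputs, gamma=1.5):
    """Mahalanobis distance between the control-simulation outputs and the
    cloud of approximated outputs of the generation.

    o3d_control   : real outputs O_3D(o_n) of the control-simulation, shape (k,)
    approx_outputs: approximated outputs C(x_j^n) for the generation, shape (m, k)
    Returns (M, recouple) where recouple is True when M > gamma * sqrt(k),
    i.e. when a new coupling step should be computed.
    """
    mu = approx_outputs.mean(axis=0)
    cov = np.cov(approx_outputs, rowvar=False)
    diff = o3d_control - mu
    M = float(np.sqrt(diff @ np.linalg.solve(cov, diff)))
    return M, M > gamma * np.sqrt(o3d_control.size)
```

Because the distance is normalised by the covariance of the generation's approximated outputs, the same absolute prediction error yields a larger \(M(o_n)\) in directions where the population has a lower variance, which is exactly the behaviour described for iteration 5 below.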

For example, in Fig. 5 we report, for \(N=10\) iterations, the scores of the control-simulations \(o_n\) predicted through the function \(\mathscr {C}_{\phi ,\psi }\) (in black), and the real scores of these simulations (in blue). We also show the criterion \(M(o_n)\) for these N iterations and the upper bound (red line) \(\gamma \sqrt{{|}{\mathscr {O}}{|}}\) for the criterion (\(\gamma =1.5\) here).

We can see that the score prediction (and thus the approximation of the output values \({\mathscr {O}}_\text {3D}(x_{j}^n)\) by \(\mathscr {C}_{\phi ,\psi }\)) is quite accurate for at least the first 5 iterations and less accurate for \(n\ge 6\). Then, even though the score prediction seems as accurate at iteration 5 as at iteration 1, \(M(o_n)\) is higher. This is because the prediction error is larger relative to the set of \(\mathscr {C}_{\phi ,\psi }(x_{j}^n)\) of the generation, in particular in directions where the set has a lower variance.

Fig. 5

Top: real scores (blue) and approximated scores (black) of the \(N=10\) control-simulations. Bottom: Value of the criterion \(M(o_n)\) of the control-simulations

In this example, iteration 5 was selected to recompute the coupling (black vertical line), which is also the iteration where the control-simulation has the minimal score over the 10 iterations. In some cases, later iterations can have a lower score but are not selected because the criterion \(M(o_n)\) is too high for that iteration (such as iteration 7).

The upper bound \(\gamma \sqrt{{|}{\mathscr {O}}{|}}\) for the criterion has an important impact on the optimisation behaviour. If high accuracy is imposed (small \(\gamma \) value), then one of the earlier iterations of the exploration step is usually selected for the subsequent coupling step, even if a later control-simulation has a lower score. This can lead to a slow optimisation. On the other hand, with low accuracy (high \(\gamma \) value), the CMA-ES algorithm can end up in local minima because it performed the optimisation on inaccurate values.

Therefore, the value of \(\gamma \) characterises a trade-off between maximising the optimisation gain from a single coupling and ensuring that the approximation errors do not impact the optimisation process. Because of the probabilistic nature of the algorithm and the various nonlinearities of the score function, the optimal value of \(\gamma \) seems very dependent on the optimisation problem. We found \(\gamma =1.5\) to give good convergence results in our experiments, where the index \(n^*\) of the iteration selected in the selection step was 5.5 on average.

4.2 Computational considerations: a parallelisable method

The main computational cost in personalisation methods comes from the computation of the 3D simulations. In our implementation, each simulation of one heartbeat with the 3D model uses one CPU, during a time \(T_{3D}\) which depends mostly on the size of the mesh, and the heartbeat duration.

Most modern research computing is performed on clusters which can run many tasks simultaneously, and many steps of our method can be parallelised. To compare different optimisation methods in a parallel setting, we introduce two metrics: the classic CPU time, which measures the total amount of CPU resources used, and the optimisation time, which measures the (real-time) duration of the optimisation.

During one complete iteration of multifidelity-CMA, the following steps are parallelised:

  1. Computation of the \(2N_2+1\) 3D sigma-simulations: the simulations are performed in parallel and each one takes a CPU time \(T_{3D}\). The whole step therefore has a CPU time of \((2N_2+1) \cdot T_{3D}\) and an optimisation time of \(T_{3D}\).

  2. Computation of the coupled 0D simulations: all the parameter estimations are performed in parallel. Each one uses 4 CPUs during a fixed time of around 3 minutes. The whole step has a CPU time of \((2N_2+1) \cdot 4 \cdot 3\) minutes and an optimisation time of 3 minutes.

  3. Computation of the N 3D control-simulations: the simulations are performed in parallel and each one takes a CPU time \(T_{3D}\). The whole step has a CPU time of \(N \cdot T_{3D}\) and an optimisation time of \(T_{3D}\).

In our example, we have 5 sigma-simulations and 10 control-simulations, and each 3D simulation takes 15 minutes. Each iteration of multifidelity-CMA then takes a total CPU time of \(5\cdot 15+5\cdot 4\cdot 3+10\cdot 15 = 285\) minutes and an optimisation time of 33 minutes.
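The cost accounting above can be captured in a few lines; the function and parameter names are ours, with defaults taken from the example in the text.

```python
def iteration_cost(n_sigma=5, n_control=10, t_3d=15, t_0d=3, cpus_0d=4):
    """CPU time vs optimisation (wall-clock) time, in minutes, for one
    complete multifidelity-CMA iteration.

    CPU time sums over all parallel tasks; optimisation time only counts
    the three sequential steps, each internally parallelised.
    """
    cpu = (n_sigma * t_3d            # 3D sigma-simulations
           + n_sigma * cpus_0d * t_0d  # coupled 0D parameter estimations
           + n_control * t_3d)       # 3D control-simulations
    wall = t_3d + t_0d + t_3d        # the three steps run one after another
    return cpu, wall

# With the defaults this reproduces the example: (285, 33)
```

Doubling the number of control-simulations, for instance, raises the CPU time but leaves the optimisation time unchanged, which is why the method benefits strongly from a parallel setting.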

4.3 Results: comparison of optimisation time, CPU time for 4 personalisation methods

Here we compare the evolution of the CPU time and the score S during optimisation on a typical case, with the 4 following optimisation methods:

  1. The multifidelity-CMA method with the 0D/3D coupling.

  2. The multifidelity-CMA method where the approximation of the outputs is done with a degree 2 hypersurface interpolation relying on 11 sigma-simulations (as explained in Sect. 3.6).

  3. The classic CMA-ES method with a population size of \(m=30\).

  4. BOBYQA, another commonly used gradient-free optimiser, used for example to solve personalisation problems (Seegerer et al. 2015) or as a baseline to evaluate other personalisation methods (Neumann et al. 2016). It uses a trust-region method and forms successive quadratic models of the score function which interpolate the points computed during optimisation.

Fig. 6

Comparison of the evolution of the score S (top) and CPU time (bottom) during optimisation for the four methods. BOBYQA is in red, the classic CMA-ES is in blue, multifidelity-CMA with the hypersurface approximation is in black, and multifidelity-CMA with 0D/3D coupling is in green

Results are shown in Fig. 6. We can see that BOBYQA (red) is slow to converge but also has a low computational cost, both because it evaluates only one simulation at a time. The classic CMA-ES (blue) converges faster than BOBYQA, but with a very high computational cost, because 30 simulations of the 3D model are computed at each generation.

Finally, both our multifidelity approaches are very fast to converge; however, the multifidelity-CMA which uses the 0D/3D multifidelity coupling is the one with the lowest CPU time (because only 5 sigma-simulations per complete iteration are computed instead of 11, as explained in Sect. 3.6).

We conclude that the multifidelity approach of the CMA-ES algorithm leads to considerable improvements in optimisation speed over both the original CMA-ES algorithm and BOBYQA. Finally, the approximation of outputs with the 0D/3D multifidelity coupling instead of a generic hypersurface interpolation leads to additional computational gains.

4.4 Results: personalisation of a database of 121 cases

We finally present results on a large database of 121 cases. For each patient, a biventricular heart mesh geometry (between 10,000 and 15,000 nodes) was built from the available MRI image and the boundaries of the myocardium were tracked in the cine MRI images as described in Jolly et al. (2011) and Wang et al. (2013). This led to the computation of the volume curve and then the value of the stroke volume. Pressure measurements were also available for each heartbeat.

We applied our multifidelity-CMA method to personalise the whole cohort. The optimisation started from a vector \(x_{start}\) of parameter values with the same values for every patient, except for \(P_{ve}\), which is set to the value DP − 2000 Pa (see Table 6). The algorithm ran for around 2.5 days, and the BOBYQA optimisation was run on the same problems during this period as well.

We consider a personalisation to be successful when a set of parameter values was found with a score lower than \(l_1=0.1\), and acceptable if the score is lower than \(l_2=1\). This means the personalised simulation matches the target stroke volume within 1 ml and the pressure measurements within 20 Pa in the successful case, and within, respectively, 10 ml and 200 Pa in the acceptable case. In the other cases, the personalisation is said to have failed. We report the number of successful, acceptable and failed cases on this database, for both methods, in Table 5.
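The classification above can be sketched as follows, under the assumption (ours, not stated here) that the score S is the Euclidean norm of the normalised output errors; with that convention a score below \(l_1\) indeed implies each normalised error is below 0.1, i.e. SV within 1 ml and pressures within 20 Pa.

```python
import numpy as np

def personalisation_status(outputs, targets, norms, l1=0.1, l2=1.0):
    """Classify a personalisation result from its normalised score.

    Sketch assuming the score S is the Euclidean norm of the output errors
    divided componentwise by the normalisation coefficients N.
    """
    errors = (np.asarray(outputs, float) - np.asarray(targets, float)) / np.asarray(norms, float)
    S = float(np.linalg.norm(errors))
    if S < l1:
        return S, "successful"
    if S < l2:
        return S, "acceptable"
    return S, "failed"
```

For example, with the targets of Sect. 4 (SV = 60 ml, DP = 7315 Pa, MP = 10,152 Pa) and normalisation coefficients (10 ml, 200 Pa, 200 Pa), a simulation off by 0.5 ml, 5 Pa and 8 Pa would be classified as successful.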

Table 5 Results of the personalisation on the database
Table 6 Statistics of the estimated parameter values and their variations during the personalisation

A high number of cases were successfully personalised with our method (113 of the 121 cases). For the acceptable cases, and one of the failed cases, the optimisation had converged to a local minimum. For the other failed case, the CMA-ES algorithm diverged to extreme parameter values during optimisation. For BOBYQA, convergence had not yet been reached in most of the non-successful cases (the lowest score is reached at the last iteration).

We finally report the mean and standard deviation of all the estimated parameter values in Table 6, as well as the norm of their relative variation \(|\varDelta |\) from the starting value during the optimisation. This shows in particular that the stiffness \(c_1\) changed little during the personalisation process. The arterial compliance C and the contractility \(\sigma _0\) are the parameters which changed the most.

5 Discussion and conclusion

We presented a novel multifidelity approach involving a 3D electromechanical cardiac model and a simplified 0D model, which relies on the same equations but with simplifying assumptions. We developed an original multifidelity coupling between the parameters of both models, which gives a good multifidelity approximation of the global output values of 3D simulations from 0D simulations. We then used this approximation in an efficient parameter estimation process based on the evolutionary algorithm CMA-ES, in order to obtain an efficient multifidelity personalisation method for the 3D model.

Our multifidelity coupling procedure computes a mapping between the parameters of a few representative 3D sigma-simulations within the domain and the parameters of corresponding coupled 0D simulations with the same output values. This is done through parameter estimation on the 0D model parameters, to compute coupled 0D simulations that have the same global output values as the 3D sigma-simulations. The parameter mapping is then derived through an interpolation method.

This enables fast and accurate approximations of 3D simulations with the 0D model. These approximations are then used in the estimation of the 3D model parameters with CMA-ES, to replace 3D simulations, while simultaneously controlling the accuracy of the approximation and recomputing a coupling when the accuracy is too low. Ultimately, this results in both an increase in the speed of the 3D parameter estimation process and a decrease in the computational cost.

Our multifidelity approach slightly differs from more classic multifidelity methods (Kennedy and O’Hagan 2000; Peherstorfer et al. 2016) where the same parameter values are used as input of both models, and the outputs of the low-fidelity model are corrected a posteriori to fit the outputs of the high-fidelity model. Since the parameters of both models are not exactly the same, we had to find a mapping between the parameters instead of the outputs. This was tractable thanks to the fast parameter estimation in the 0D model.

A first extension of the multifidelity coupling would be to use additional shared parameters and equations in both models, to approximate a larger variety of outputs of the 3D model (e.g. flow velocities, timings of valve opening and closing). Since CMA-ES has already been proven successful on complex optimisation problems with a larger parameter space, we expect the personalisation method to scale well. A second extension would be to use the multifidelity personalisation to personalise “geometrical” or “local” measurements which are outputs of the 3D model but not of the 0D model (e.g. the septal shortening or the circumferential torsion). Indeed, even though they cannot be approximated through the 0D/3D multifidelity coupling, their values can still be locally approximated during personalisation using the hypersurface interpolation.

Finally, the lower-fidelity approximation could be used not only for personalisation but also for other applications that require many simulations, such as parameter sensitivity analysis or uncertainty quantification (with Monte Carlo methods, for example), as well as for applications that require the computation of many cardiac cycles. In particular, a case where the multifidelity approach could be useful is when the 3D model is coupled with a full-body circulation model as boundary conditions. Indeed, studies associated with such models (for example on the influence of physical exercise, increased heart rate and/or pressure loads) usually require many heartbeats to be computed. This can be computationally intensive with the 3D model, but it could be done faster using 0D simulations, through a coupling method similar to the one in this manuscript. In this case, where the number of coupled parameters would be high, additional constraints could be added in the parameter mapping to impose correlations between parameters with different equations or values but a similar behaviour.