Abstract
Magnetic particle imaging (MPI) is a tracer-based technique for medical imaging where the tracer consists of iron oxide nanoparticles. The key idea is to measure the particle response to a temporally changing external magnetic field in order to compute the spatial concentration of the tracer inside the object. A reasonable mathematical model calls for a data-driven computation of the system function, which not only describes the measurement geometry but also encodes the interaction of the particles with the external magnetic field. The physical model of this interaction is given by the Landau–Lifshitz–Gilbert (LLG) equation. The determination of the system function can be seen as an inverse problem of its own, which can be interpreted as a calibration problem for MPI. In this contribution, the calibration problem is formulated as an inverse parameter identification problem for the LLG equation. We give a detailed analysis of the direct as well as the inverse problem, both in an all-at-once and in a reduced setting. The analytical results yield a deeper understanding of inverse problems connected to the LLG equation and provide a starting point for the development of robust numerical solution methods in MPI.
1 Introduction
Magnetic particle imaging (MPI) is a dynamic imaging modality for medical applications that was first introduced in 2005 by B. Gleich and J. Weizenecker [10]. Magnetic nanoparticles, consisting of a magnetic iron oxide core and a nonmagnetic coating, are inserted into the body to serve as a tracer. The key idea is to measure the nonlinear response of the nanoparticles to a temporally changing external magnetic field in order to draw conclusions on the spatial concentration of the particles inside the body. Since the particles are distributed along the bloodstream of a patient, the particle concentration yields information on the blood flow and is thus suitable for cardiovascular diagnosis or cancer detection [23, 24]. An overview of MPI basics is given in [23]. Since MPI requires the nanoparticles as a tracer, it mostly yields quantitative information on their distribution, but does not image the morphology of the body, such as the tissue density. The latter can be visualized using computerized tomography (CT) [29] or magnetic resonance imaging (MRI) [15]. These do not require a tracer, but involve ionizing radiation in the case of CT or, in the case of MRI, a strong magnetic field and a potentially long acquisition time. Other tracer-based methods are, e.g., single photon emission computerized tomography (SPECT) and positron emission tomography (PET) [8, 30, 36], which both involve radioactive radiation. The magnetic nanoparticles that are used in MPI, on the other hand, are not harmful to organisms. For a more detailed comparison of these methods, we refer the reader to [23].
At this point there have been promising preclinical studies on the performance of MPI, showing that this imaging modality has great potential for medical diagnosis since it is highly sensitive with a good spatial and temporal resolution, and the data acquisition is very fast [24]. However, particularly in view of an application to image the human body, there remain some obstacles. One obstacle is the time-consuming calibration process. In this work, we assume that the concentration of the nanoparticles inside the body remains static throughout both the calibration process and the actual image acquisition. Mathematically, the forward problem of MPI can then essentially be formulated as an integral equation of the first kind for the particle concentration (or distribution) c,

\(u(t) = \int_\Omega c(x)\, s(x,t)\, \mathrm{d}x,\)
where the integration kernel s is called the system function. The system function encodes some geometrical aspects of the MPI scanner, such as the coil sensitivities of the receive coils in which the particle signal u is measured, but mostly it is determined by the particle behavior in response to the applied external magnetic field.
The actual inverse problem in MPI is to reconstruct the concentration c under the knowledge of the system function s from the measured data u. To this end, the system function has to be determined prior to the scanning procedure. This is usually done by evaluating a series of full scans of the field of view, where in each scan a delta sample is placed in a different pixel until the entire field of view is covered [23]. Another option is a model-based approach for s (see for example [22, 28]), which basically involves a model for the particle magnetization. Since this model often depends on unknown parameters, the model-based determination of the system function itself can again be formulated as an inverse problem. This article now addresses this latter type of inverse problem, i.e., the identification of the system function for a known set of concentrations from calibration measurements. More precisely, our goal is to find a decent model for the time-derivative of the particle magnetization m, which is proportional to s.
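To make the structure of this calibration-related integral equation concrete, here is a small discretized sketch of a first-kind integral equation. The grid sizes and the placeholder kernel `S` are illustrative assumptions; in MPI the kernel would come from the particle model and the coil sensitivities, not from a closed formula.

```python
import numpy as np

# Discretized sketch of the first-kind integral equation u(t) = ∫ s(x,t) c(x) dx
# on a 1-D field of view with N pixels and M time samples (made-up sizes).
N, M = 50, 200
x = np.linspace(0.0, 1.0, N)     # pixel centres
t = np.linspace(0.0, 1.0, M)     # sampling times
dx = x[1] - x[0]

S = np.sin(2.0 * np.pi * np.outer(t, x + 0.3))   # placeholder kernel, shape (M, N)
c = np.exp(-((x - 0.5) ** 2) / 0.01)             # smooth test concentration

u = S @ c * dx                   # forward map: u_i = sum_j s(x_j, t_i) c(x_j) dx

# A naive least-squares inversion fits the noise-free data exactly here, but
# the problem is ill-posed: small noise on u is amplified strongly in c.
c_rec = np.linalg.lstsq(S * dx, u, rcond=None)[0]
```

The instability of `c_rec` under data noise is precisely why regularization methods are needed for the imaging step.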
So far, in model-based approaches for the system function, the particle magnetization m is not modeled directly. Instead, one describes the mean magnetization \(\overline {\mathbf {m}}\) of the particles via the Langevin function, i.e., the response of the particles is modeled on the mesoscopic scale [21, 23]. This approach is based on the assumption that the particles are in thermodynamic equilibrium and respond directly to the external field. For this reason, the mean magnetization is assumed to be a function of the external field, such that it is always aligned with the external field, and its magnitude is calculated via the Langevin function. This model, however, neglects some properties of the particle behavior. In particular, the magnetic moments of the particles do not align instantly with the external field [4].
In this work, we thus address an approach from micromagnetics, which models the time-dependent behavior of the magnetic material inside the particles’ cores on the micro scale and allows us to take into account various additional physical properties such as particle-particle interaction. For an overview, see for example [25]. Since the core material is iron oxide, which is a ferrimagnetic material that shows a behavior similar to that of ferromagnets [5, 6], we use the Landau–Lifshitz–Gilbert (LLG) equation

\({\mathbf {m}}_t = -\widetilde {\alpha }_1\, \mathbf {m} \times {\mathbf {H}}_{\mathrm {eff}} - \widetilde {\alpha }_2\, \mathbf {m} \times (\mathbf {m} \times {\mathbf {H}}_{\mathrm {eff}}),\)
see also [9, 26], for the evolution of the magnetization m of the core material. The field H eff incorporates the external magnetic field together with other relevant physical effects. According to the LLG equation, the magnetization m performs a damped precession around the field vector of the external field, which leads to a relaxation effect. The LLG equation has been widely applied to describe the time evolution in micromagnetics [2, 7, 11].
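The damped precession described by the LLG equation can be illustrated with a minimal macrospin simulation, i.e., a single magnetic moment in a static field. The parameter values `a1`, `a2`, the unit saturation magnetization, and the Heun time stepper are illustrative assumptions, not calibrated material constants.

```python
import numpy as np

def llg_rhs(m, H, a1, a2):
    """Right-hand side of an LLG-type equation for a single moment:
    m_t = -a1 * m x H - a2 * m x (m x H);
    a1 drives the precession, a2 the damping."""
    mxH = np.cross(m, H)
    return -a1 * mxH - a2 * np.cross(m, mxH)

a1, a2 = 1.0, 0.1                        # illustrative coefficients
mS = 1.0                                 # normalized saturation magnetization
H = np.array([0.0, 0.0, 2.0])            # static external field
m = mS * np.array([1.0, 0.0, 0.0])       # start perpendicular to H

dt, steps = 1e-3, 20000
for _ in range(steps):
    # Heun (explicit trapezoidal) step
    k1 = llg_rhs(m, H, a1, a2)
    k2 = llg_rhs(m + dt * k1, H, a1, a2)
    m = m + 0.5 * dt * (k1 + k2)
```

The moment spirals from its initial orientation towards the field direction while its length stays (numerically almost) constant, which is exactly the relaxation effect mentioned in the text.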
In contrast to the imaging problem of MPI, the inverse problem of determining the magnetization m along with the constants \(\widetilde {\alpha }_1, \widetilde {\alpha }_2\) turns out to be a nonlinear inverse problem, as is typical for parameter identification problems for partial differential equations, arising for example in electrical impedance tomography [1], terahertz tomography [38], ultrasound imaging [3], and other applications from imaging and nondestructive testing [20].
We use the all-at-once as well as the reduced formulation of this inverse problem in a Hilbert space setting, see also [16, 17, 31], analyze both cases, including well-definedness of the forward mapping, continuity, and Fréchet differentiability, and calculate the adjoint mappings for the Fréchet derivatives. Consequently, iterative methods such as the Landweber method [14, 27], also in combination with Kaczmarz’ method [12, 13], Newton methods (see, e.g., [33]), or subspace techniques [37] can be applied for the numerical solution. An overview of suitable regularization techniques is given in [18, 19].
We begin with a detailed introduction to the modelling in MPI. In particular, we describe the full forward problem and present the initial boundary value problem for the LLG equation that we use to describe the magnetization evolution. In Sect. 3, we formulate the inverse problem of calibration both in the all-at-once and in the reduced setting to obtain the final operator equation that is analyzed in the subsequent section. First, in Sect. 4.1, we present an analysis for the all-at-once setting. The inverse problem in the reduced setting is then addressed in Sect. 4.2. Finally, we conclude our findings in Sect. 5 and give an outlook on further research.
Throughout the article, we make use of the following notation: The differential operators − Δ and ∇ are applied by components to a vector field. In particular this means that by ∇u we denote the transpose of the Jacobian of u. Moreover, 〈a, b〉 or a ⋅b denotes the Euclidean inner product between two vectors and A: B the Frobenius inner product between two matrices.
2 The Underlying Physical Model for MPI
The basic physical principle that is exploited in MPI is Faraday’s law of induction, which states that whenever the magnetic flux density B through a coil changes in time, this change induces an electric current in the coil. This current, or rather the respective voltage, can be measured. In MPI, the magnetic flux density B consists of the externally applied magnetic field H ext and the particle magnetization M P, i.e.,

\(\mathbf {B} = \mu _0 \left( {\mathbf {H}}_{\mathrm {ext}} + {\mathbf {M}}_{\mathrm {P}} \right),\)
where μ 0 is the magnetic permeability in vacuum. The particle magnetization M P(x, t) in \(x \in \Omega \subseteq \mathbb {R}^3\) depends linearly on the concentration c(x) of magnetic material, which corresponds to the particle concentration, in x ∈ Ω and on the magnetization m(x, t) of the magnetic material. We thus have

\({\mathbf {M}}_{\mathrm {P}}(x,t) = c(x)\, \mathbf {m}(x,t),\)
where \(\lvert \mathbf {m} \rvert = m_{\mathrm {S}} > 0\), i.e., the vector m has the fixed length m S that depends on the magnetic core material inside the particles. At this point it is important to remark that we use a slightly different approach to separate the particle concentration, which carries the spatial information on the particles, from the magnetization behavior of the magnetic material and the measuring process. In our approach, the concentration is a dimensionless quantity, whereas in most models, it is defined as the number of particles per unit volume (see, e.g. [23]).
A detailed derivation of the forward model in MPI, based on the equilibrium model for the magnetization, can be found in [23]. The steps that are related to the measuring process can be adapted to our approach. For the reader’s convenience, we want to give a short overview and introduce the parameters related to the scanner setup.
If the receive coil is a simple conductor loop, which encloses a surface S, the voltage that is induced can be expressed by
The signal that is recorded in the receive coil thus originates from temporal changes of the external magnetic field H as well as of the particle magnetization M P,
For the signal that is caused by the change in the particle magnetization we obtain
The function
is called the system function and can be interpreted as a potential to induce a signal in the receive coil. The function p R is called the coil sensitivity and is determined by the architecture of the respective receive coil. For our purposes, we assume that p R is known. The measured signal that originates from the magnetic particles can thus essentially be calculated via an integral equation of the first kind with a time-dependent integration kernel s.
The particle magnetization, however, changes in time in response to changes of the external field. It is thus an important objective to encode the interplay of the external field and the particles in a sufficiently accurate physical model. The magnetization of the magnetic particles that are used in MPI can be considered on different scales. The following characterization from ferromagnetism has been taken from [25]:
On the atomic level, one can describe the behavior of a magnetic material as a spin system and take into account stochastic effects that arise, for example, from Brownian motion.
In the microscopic scale, continuum physics is applied to work with deterministic equations describing the magnetization of the magnetic material.
In the mesoscopic scale, we can describe the magnetization behavior via a mean magnetization, which is an average particle magnetic moment.
Finally, on a macroscopic scale, all aspects that arise from the microstructure are neglected and the magnetization is described by phenomenological constitutive laws.
In this work, we intend to use a model from micromagnetism, allowing us to work with a deterministic equation to describe the magnetization of the magnetic material. The core material of the nanoparticles consists of iron oxide, such as magnetite, which is a ferrimagnetic material. The magnetization curve of ferrimagnetic materials is similar to the curve that is observed for ferromagnets, but with a lower saturation magnetization (see, e.g., [5, 6]). This approach has also been suggested in [32]. The evolution of the magnetization in time is described by the Landau–Lifshitz–Gilbert (LLG) equation

\({\mathbf {m}}_t = -\widetilde {\alpha }_1\, \mathbf {m} \times {\mathbf {H}}_{\mathrm {eff}} - \widetilde {\alpha }_2\, \mathbf {m} \times (\mathbf {m} \times {\mathbf {H}}_{\mathrm {eff}}),\)
see [9, 25] and the literature cited therein. The coefficients
are material parameters that contain the gyromagnetic constant γ, the saturation magnetization m S of the core material and a damping parameter α D. The vector field H eff is called the effective magnetic field. It is defined as the negative gradient \(-D\mathcal {E}(\mathbf {m})\) of the Landau energy \(\mathcal {E}(\mathbf {m})\) of a ferromagnet, see, e.g., [25]. Taking into account only the interaction with the external magnetic field H and particle-particle interactions, this energy is given by
where A ≥ 0 is a scalar parameter (the exchange stiffness constant [9]). We thus have
Together with Neumann boundary conditions and a suitable initial condition our model for the magnetization thus reads
where \({\mathbf {h}}_{\mathrm {ext}} = \frac {\mu _0 m_{\mathrm {S}}}{2A} {\mathbf {H}}_{\mathrm {ext}}\) and \(\alpha _1 := 2A\widetilde {\alpha }_1, \alpha _2 := 2A\widetilde {\alpha }_2 > 0\). The initial value m 0 = m(t = 0) corresponds to the magnetization of the magnetic material in the beginning of the measurement. To obtain a reasonable value for m 0, we take into account that the external magnetic field is switched on before the measuring process starts, i.e., m 0 is the state of the magnetization that is acquired when the external field is static. This allows us to precompute m 0 as the solution of the stationary problem
with Neumann boundary conditions.
Remark 1
In the stationary case, damping does not play a role, and if we additionally neglect particle-particle interactions, we obtain the approximate equation
with an approximation \(\hat {\mathbf {m}}_0\) to \(\hat {\mathbf {m}}\), since α 2 ≈ 0 and H eff ≈ μ 0 m S H ext. The above equation yields \(\hat {\mathbf {m}}_0 \parallel {\mathbf {h}}_{\mathrm {ext}}(t=0)\). Together with \(\lvert \hat {\mathbf {m}}_0 \rvert = m_{\mathrm {S}}\) this yields

\(\hat {\mathbf {m}}_0 = m_{\mathrm {S}}\, \frac {{\mathbf {h}}_{\mathrm {ext}}(0)}{\lvert {\mathbf {h}}_{\mathrm {ext}}(0) \rvert }.\)
This represents a good approximation to m 0 provided that h ext is strong at time t = 0.
2.1 The Observation Operator in MPI
Faraday’s law states that a temporally changing magnetic field induces an electric current in a conductor loop or coil, which yields the relation (1). Consequently, not only the change in the particle magnetization contributes to the induced current, but also the dynamic external magnetic field H ext. Since the particle signal is needed to determine the particle magnetization, it has to be separated from the excitation signal caused by the external field. This is realized by processing the signal in a suitable way using filters.
MPI scanners usually use multiple receive coils to measure the induced particle signal at different positions in the scanner. We assume that we have \(L \in \mathbb {N}\) receive coils with coil sensitivities \({\mathbf {p}}^{\mathrm {R}}_\ell \), ℓ = 1, …, L, and the measured signal is given by
where T is the repetition time of the acquisition process, i.e., the time that is needed for one full scan of the object, and \(a_\ell \, : \, [0,T] \rightarrow \mathbb {R}\) is the transfer function with periodic continuation \(\widetilde {a}_\ell \, : \, \mathbb {R} \rightarrow \mathbb {R}\). The transfer function serves as a filter to separate particle and excitation signal, i.e., it is chosen such that
In practice, \(\widetilde {a}_\ell \) is often a band pass filter. For a more detailed discussion of the transfer function, see also [23]. In this work, we assume that the transfer function is known analytically.
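As a rough illustration of how a band-pass-like transfer function can separate the particle signal from the excitation, consider a toy measured signal in which the excitation is a pure sinusoid at the drive frequency and the nonlinear particle response consists of higher harmonics. All frequencies and amplitudes here are made up for the example; real MPI filters act on measured voltages, not on such synthetic signals.

```python
import numpy as np

fs, f0, T = 10_000.0, 25.0, 1.0          # sample rate, drive frequency, duration
t = np.arange(0.0, T, 1.0 / fs)

excitation = np.sin(2 * np.pi * f0 * t)
# Toy "particle" signal: odd harmonics, mimicking a saturating magnetization.
particle = (0.3 * np.sin(2 * np.pi * 3 * f0 * t)
            + 0.1 * np.sin(2 * np.pi * 5 * f0 * t))
measured = excitation + particle

# Filter in the frequency domain: notch out the fundamental at f0, keeping
# the harmonics -- the role played by the transfer function above.
spec = np.fft.rfft(measured)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
spec[np.abs(freqs - f0) < 2.0] = 0.0
filtered = np.fft.irfft(spec, n=t.size)
```

Because the window contains an integer number of drive periods, the excitation sits in a single FFT bin and the notch removes it exactly, leaving the harmonic (particle-like) content.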
We define
such that the measured particle signals are given by
where m fulfills (7), (8), (9).
To determine m in Ω × (0, T), we use the data v kℓ(t), k = 1, …, K, ℓ = 1, …, L, from the scans that we obtain for different particle concentrations c k, k = 1, …, K, \(K \in \mathbb {N}\). The forward operator thus reads
2.2 Equivalent Formulations of the LLG Equation
In this section, we derive additional formulations of (7)–(9) that are suitable for the analysis. The approach is motivated by Kruzík and Prohl [25], where only particle-particle interactions are taken into account.
First of all, we observe that multiplying (7) with m on both sides yields
which shows that the absolute value of m does not change in time. Since \(\lvert {\mathbf {m}}_0 \rvert = m_{\mathrm {S}}\), we have \(\mathbf {m}(x,t) \in m_{\mathrm {S}}\cdot \mathcal {S}^2\), where \(\mathcal {S}^2 := \lbrace \mathbf {v} \in \mathbb {R}^3 \, : \, \lvert \mathbf {v} \rvert = 1 \rbrace \) is the unit sphere in \(\mathbb {R}^3\). As a consequence, we have \(0 = \nabla \lvert \mathbf {m} \rvert ^2 = 2\nabla \mathbf {m} \cdot \mathbf {m}\) in Ω, so that, by taking the divergence we get
Now we make use of the identity

\(\mathbf {a} \times (\mathbf {b} \times \mathbf {c}) = (\mathbf {a} \cdot \mathbf {c})\, \mathbf {b} - (\mathbf {a} \cdot \mathbf {b})\, \mathbf {c}\)
for \(\mathbf {a},\mathbf {b},\mathbf {c} \in \mathbb {R}^3\) to derive
Using (15) together with (16), (17) and \(\lvert \mathbf {m} \rvert = m_{\mathrm {S}}\), we obtain from (7)–(9)
Taking the cross product of m with (18) and multiplying with \(-\hat {\alpha }_2\), where \(\hat {\alpha }_1=\frac {\alpha _1}{m_{\mathrm {S}}^2\alpha _1^2+\alpha _2^2}\), \(\hat {\alpha }_2=\frac {\alpha _2}{m_{\mathrm {S}}^2\alpha _1^2+\alpha _2^2}\), by (16), (17) and cancellation of the first and third term on the right hand side we get
where the second term on the left hand side can be expressed via (18) as
This yields the alternative formulation
3 An Inverse Problem for the Calibration Process in MPI
Apart from the obvious inverse problem of determining the concentration c of magnetic particles inside a body from the measurements v ℓ, ℓ = 1, …, L, MPI gives rise to a range of further parameter identification problems of entirely different nature. In this work, we do not address the imaging process itself, but consider an inverse problem that is essential for the calibration process. Here, calibration refers to determining the system function s ℓ, which serves as an integral kernel in the imaging process. The system function includes all system parameters of the tomograph and encodes the physical behavior of the magnetic material in the cores of the magnetic particles inside a temporally changing external magnetic field. Experiments show that a simple model for the magnetization, based on the assumption that the particles are in their equilibrium state at all times, is insufficient for the imaging, see, e.g., [22]. A model-based approach with an enhanced physical model has so far been omitted due to the complexity of the involved physics, and the system function is usually measured in a time-consuming calibration process [23, 24].
In this work, we address the inverse problem of calibrating an MPI system for a given set of standard calibration concentrations c k, k = 1, …, K, for which we measure the corresponding signals and obtain the data v kℓ(t), k = 1, …, K, ℓ = 1, …, L. Here we assume that the coil sensitivity \({\mathbf {p}}^{\mathrm {R}}_{\ell }\) as well as the transfer function \(\widetilde {a}_{\ell }\) are known.
This, together with the fact that m is supposed to satisfy the LLG equation (21)–(23), is used to determine the system function (4). Actually, since p R is known, the inverse problem under consideration here consists of reconstructing m from (13), (21)–(23). As the initial boundary value problem (21)–(23) has a unique solution m for given \(\hat {\alpha }_1\), \(\hat {\alpha }_2\), it actually suffices to determine these two parameters. This is the point of view that we take when using a classical reduced formulation of the calibration problem
with the data y kℓ = v kℓ and the forward operator
containing the parameter-to-state map
that maps the parameters \(\hat {\alpha }\) into the solution \(\mathbf {m}:=S(\hat {\alpha })\) of the LLG initial boundary value problem (21)–(23). The linear operator \(\mathcal {K}\) is the integral operator defined by the kernels K kℓ, k = 1, …, K, ℓ = 1, …, L, i.e.,
Here, the preimage and image spaces are defined by
and the state space \(\tilde {\mathcal {U}}\) will be chosen appropriately below, see Sect. 4.2.
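A minimal numerical sketch of this reduced setup replaces the parameter-to-state map S by a single-moment ("macrospin") LLG solve and the observation operator \(\mathcal {K}\) by a simple projection onto a coil direction. All sizes, parameter values, and the Euler-with-renormalization stepper are illustrative assumptions, not the PDE solver and integral kernels of the text.

```python
import numpy as np

def solve_macrospin(alpha, h, dt, m0):
    """Toy parameter-to-state map S(alpha): integrate
    m_t = -alpha[0] * m x h(t) - alpha[1] * m x (m x h(t))
    for a single moment (explicit Euler, renormalized to keep |m| = 1)."""
    m = np.array(m0, dtype=float)
    out = [m.copy()]
    for hk in h[:-1]:
        mxh = np.cross(m, hk)
        m = m + dt * (-alpha[0] * mxh - alpha[1] * np.cross(m, mxh))
        m /= np.linalg.norm(m)
        out.append(m.copy())
    return np.array(out)                  # shape (n_t, 3)

def forward(alpha, h, dt, m0, p):
    """Reduced forward operator F(alpha) = K[S(alpha)]: here K is just the
    projection of m_t onto a fixed coil direction p."""
    m = solve_macrospin(alpha, h, dt, m0)
    m_t = np.gradient(m, dt, axis=0)
    return m_t @ p                        # voltage-like observed signal

n_t, dt = 2000, 1e-3
t = np.arange(n_t) * dt
h = np.stack([np.cos(2 * np.pi * t),
              np.sin(2 * np.pi * t),
              np.full_like(t, 0.5)], axis=1)
v = forward(np.array([1.0, 0.2]), h, dt, m0=[0.0, 0.0, 1.0],
            p=np.array([0.0, 0.0, 1.0]))
```

Changing the two parameters changes the observed signal, which is exactly the dependence the reduced formulation exploits.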
Alternatively, we also consider the all-at-once formulation of the inverse problem as a simultaneous system
for the state m and the parameters \(\hat {\alpha }\), with the forward operator
where
and
with \(\mathcal {K}_{k,\ell }\) as in (27). Here \(\mathbb {F}\) maps between \(\mathcal {U}\times \mathcal {X}\) and \(\mathcal {W}\times \mathcal {Y}\) with \(\mathcal {X}\), \(\mathcal {Y}\) as in (28), and \(\mathcal {U}\), \(\mathcal {W}\) appropriately chosen function spaces, see Sect. 4.1.
Iterative methods for solving inverse problems usually require the linearization \(F'(\hat {\alpha })\) of the forward operator F and its adjoint \(F'(\hat {\alpha })^*\) (and likewise for \(\mathbb {F}\)) in the given Hilbert space setting.
For example, consider the Landweber iteration (cf., e.g., [14, 27]), defined by a gradient descent method for the least squares functional \(\|F(\hat {\alpha })-y\|{ }_{\mathcal {Y}}^2\), as
with an appropriately chosen step size μ n. Alternatively, one can split the forward operator into a system by considering it row-wise \(F_k(\hat {\alpha })=y_k\) with F k = (F kℓ)ℓ=1,…,L, column-wise \(F_\ell (\hat {\alpha })=y_\ell \) with F ℓ = (F kℓ)k=1,…,K, or even element-wise \(F_{k\ell }(\hat {\alpha })=y_{k\ell }\), and cyclically iterating over these equations with gradient descent steps, which yields a Kaczmarz version of the Landweber iteration (cf., e.g., [12, 13]). The same can be done with the respective all-at-once versions [16]. These methods extend to Banach spaces as well by using duality mappings (cf., e.g., [35]); however, for the sake of simplicity of exposition and implementation, we will concentrate on a Hilbert space setting here; in particular, all adjoints will be Hilbert space adjoints.
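For a toy nonlinear forward map standing in for F (the map, the step size, and the iteration count are made up for the illustration), the Landweber iteration looks as follows:

```python
import numpy as np

# Landweber iteration a_{n+1} = a_n + mu * F'(a_n)^T (y - F(a_n)) for a
# mildly nonlinear toy map F: R^2 -> R^3 with exact, noise-free data.
def F(a):
    return np.array([a[0] + 0.1 * a[1] ** 2,
                     a[1] + 0.1 * a[0] ** 2,
                     a[0] + a[1]])

def Fprime(a):
    # Jacobian of F, playing the role of the Frechet derivative F'(a).
    return np.array([[1.0, 0.2 * a[1]],
                     [0.2 * a[0], 1.0],
                     [1.0, 1.0]])

a_true = np.array([1.0, 0.5])
y = F(a_true)                  # synthetic data

a = np.zeros(2)                # initial guess
mu = 0.2                       # step size, small enough for descent
for _ in range(1000):
    a = a + mu * Fprime(a).T @ (y - F(a))
```

With noisy data one would stop the iteration early (discrepancy principle), which is what gives Landweber its regularizing property; the Kaczmarz variant simply cycles such gradient steps over the rows F k.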
4 Derivatives and Adjoints
Motivated by their need in iterative reconstruction methods, we now derive and rigorously justify derivatives of the forward operators as well as their adjoints, both in an all-at-once and in a reduced setting.
To simplify notation for the following analysis sections, the subscript “ext” in the external magnetic field will be skipped. Moreover, to avoid confusion with the dual pairing, we will use the dot notation for the Euclidean inner product.
4.1 All-at-Once Formulation
We split the magnetization additively into its given initial value m 0 and the unknown rest \(\hat {\mathbf {m}}\), so that the forward operator reads
for given \(\mathbf {h}\in L^2(0,T;L^p(\Omega ;\mathbb {R}^3))\), p ≥ 2, where \(\Delta _N : H^1(\Omega ) \to H^1(\Omega )^*\) and, using the same notation, \(\Delta _N:H^2_N(\Omega )\to L^2(\Omega )(\subseteq H^1(\Omega )^*)\) with \(H^2_N(\Omega )=\{u\in H^2(\Omega )\, : \, \partial _\nu u=0\mbox{ on }\partial \Omega \}\) is equipped with homogeneous Neumann boundary conditions, i.e., it is defined by
and thus satisfies
The forward operator is supposed to act between Hilbert spaces
with the linear space
for s ∈ [0, 1], where the latter embedding is continuous by, e.g., [34, Lemma 7.3], applied to \(\frac {\partial u_i}{\partial x_j}\), and interpolation, as well as
We equip \(\mathcal {U}\) with the inner product
which, in spite of the nontrivial nullspace of the Neumann Laplacian − ΔN, defines a norm equivalent to the usual norm on \(L^2(0,T;H^2(\Omega ;\mathbb {R}^3))\cap H^1(0,T;L^2(\Omega ;\mathbb {R}^3))\), due to the estimates
This, together with the definition of the Neumann Laplacian (30), and the use of solutions z, v to the auxiliary problems
allows to derive the identity
which will be needed later on for deriving the adjoint.
On \(\mathcal {W}=H^1(0,T;H^1(\Omega ;\mathbb {R}^3))^*\) we use the inner product
with the isomorphism − ΔN + id : H 1( Ω) → (H 1( Ω))∗ and the time integral operators
so that \(I_2[w]_t(t) = -I_1[w](t)\), \(I_1[w]_t(t) = -I_2[w]_{tt}(t) = w(t)\), and \(I_2[w](0) = I_2[w](T) = 0\), hence
so that in case \({\mathbf {w}}_2\in L^2(0,T;L^2(\Omega ;\mathbb {R}^3))\),
In case p > 2 in the assumption on h, we can set \(\mathcal {W}=H^1(0,T;L^2(\Omega ;\mathbb {R}^3))^*\) and use the simpler inner product
which in case \({\mathbf {w}}_2\in L^2(0,T;L^2(\Omega ;\mathbb {R}^3))\) satisfies
4.1.1 Well-Definedness of the Forward Operator
Indeed, it can be verified that \(\mathbb {F}\) maps between the function spaces introduced above, cf. (31), (32). For the linear (with respect to \(\hat {\mathbf {m}}\)) parts \(\hat {\alpha }_1 \hat {\mathbf {m}}_t\), \(-\Delta _N \hat {\mathbf {m}}\), and \(\int _0^T \int _\Omega {\mathbf {K}}_{k\ell }(t,\tau ,x)\cdot {\mathbf {m}}_t(x,\tau )\, dx\, d\tau \) of \(\mathbb {F}\), this is obvious; for the nonlinear terms \(\hat {\alpha }_2 ({\mathbf {m}}_0+\hat {\mathbf {m}})\times \hat {\mathbf {m}}_t\), \(|\nabla ({\mathbf {m}}_0+\hat {\mathbf {m}})|{ }^2({\mathbf {m}}_0+\hat {\mathbf {m}})\), \((({\mathbf {m}}_0+\hat {\mathbf {m}})\cdot \mathbf {h})({\mathbf {m}}_0+\hat {\mathbf {m}})\) we use the following estimates (36)–(41), holding for any \(\mathbf {u},\mathbf {w},\mathbf {z}\in \mathcal {U}\). For the term \(\hat {\alpha }_2 ({\mathbf {m}}_0+\hat {\mathbf {m}})\times \hat {\mathbf {m}}_t\), we estimate
where we have used duality and continuity of the embeddings \(H^1(0,T;H^1(\Omega ;\mathbb {R}^3))\) \(\hookrightarrow L^2(0,T;H^1(\Omega ;\mathbb {R}^3))\hookrightarrow L^2(0,T;L^3(\Omega ))\) in the first and second estimate, and Hölder’s inequality with exponent 4 in the third estimate. For the term \(|\nabla ({\mathbf {m}}_0+\hat {\mathbf {m}})|{ }^2({\mathbf {m}}_0+\hat {\mathbf {m}})\), we use
again using duality and the embeddings \(H^1(0,T;H^1(\Omega ;\mathbb {R}^3)) \hookrightarrow L^\infty (0,T;H^1(\Omega )) \hookrightarrow L^\infty (0,T;L^6(\Omega ))\). For the term \((({\mathbf {m}}_0+\hat {\mathbf {m}})\cdot \mathbf {h})({\mathbf {m}}_0+\hat {\mathbf {m}})\), we estimate
by duality and the embedding \(H^1(0,T;H^1(\Omega ;\mathbb {R}^3))\hookrightarrow L^2(0,T;L^6(\Omega ))\), as well as Hölder’s inequality.
In case p > 2, \(\mathbb {F}\) maps into the somewhat stronger space \(\mathcal {W}=H^1(0,T;L^2(\Omega ;\mathbb {R}^3))^*\), due to the estimates
as well as
and
for \(p^{**}=\frac {2p}{p-2}<\infty \), which can be bounded by the \(\mathcal {U}\) norm of u and z, using interpolation with \(s=\frac 14\) in (31).
4.1.2 Differentiability of the Forward Operator
Formally, the derivative of \(\mathbb {F}\) is given by
where \(\frac {\partial \mathbb {F}_0}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha }):\mathcal {U}\to \mathcal {W}\), \(\frac {\partial \mathbb {F}_0}{\partial \hat {\alpha }_1}(\hat {\mathbf {m}},\hat {\alpha }):\mathbb {R}\to \mathcal {W}\), \(\frac {\partial \mathbb {F}_0}{\partial \hat {\alpha }_2}(\hat {\mathbf {m}},\hat {\alpha }):\mathbb {R}\to \mathcal {W}\), \((\frac {\partial \mathbb {F}_{k\ell }}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha }))_{k=1,\ldots ,K,\ell =1,\ldots ,L}:\mathcal {U}\to L^2(0,T)^{KL}\). Fréchet differentiability follows from the fact that in
all linear terms cancel out and the nonlinear ones are given by (abbreviating \(\mathbf {m}={\mathbf {m}}_0+\hat {\mathbf {m}}\))
hence, using again (36)–(38), they can be estimated by some constant multiplied by \(\|\mathbf {u}\|{ }_{\mathcal {U}}^2+\beta _1^2+\beta _2^2\).
4.1.3 Adjoints
We start with the adjoint of \(\frac {\partial \mathbb {F}_0}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha })\). For any \(\mathbf {u}\in \mathcal {U}\), y ∈ L 2(0, T;L 2( Ω)), we have, using the definition of − ΔN, i.e., (30),
where we have integrated by parts with respect to time and used the vector identities
Matching the integrals over Ω × (0, T) and Ω ×{T}, respectively, and taking into account the homogeneous Neumann boundary conditions implied by the definition of − ΔN, (30), as well as the identities (34), (35), we find that \(\frac {\partial \mathbb {F}_0}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha })^*\mathbf {y}=:\mathbf {z}\) is the solution of (33) with f = f y, \(\mathbf {g}={\mathbf {g}}^{\mathbf {y}}_T\), where in case \(\mathcal {W}=H^1(0,T;H^1(\Omega ;\mathbb {R}^3))^*\), \(\mathbf {y}=I_2[\widetilde {y}]\), with \(\widetilde {y}(t)\) solving
for each t ∈ (0, T), or in case \(\mathcal {W}=H^1(0,T;L^2(\Omega ;\mathbb {R}^3))^*\), just y = I 2[w].
With the same y, after pointwise projection onto the mutually orthogonal vectors \(\hat {\mathbf {m}}_t(x,t)\) and \(({\mathbf {m}}_0(x)+\hat {\mathbf {m}}(x,t))\times \hat {\mathbf {m}}_t(x,t)\) and integration over space and time, we also get the adjoints of \(\frac {\partial \mathbb {F}_0}{\partial \hat {\alpha }_1}(\hat {\mathbf {m}},\hat {\alpha })\), \(\frac {\partial \mathbb {F}_0}{\partial \hat {\alpha }_2}(\hat {\mathbf {m}},\hat {\alpha })\)
Finally, the fact that for \(\mathbf {u}\in \mathcal {U}\), y ∈ L 2(0, T)KL
where we have integrated by parts with respect to time, implies that due to (34), \((\frac {\partial \mathbb {F}_{k\ell }}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha }))_{k=1,\ldots ,K,\ell =1,\ldots ,L}^*y=\mathbf {z}\) is obtained by solving another auxiliary problem (33) with
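The role of integration by parts in time, which turns an initial condition for the forward problem into a final condition for the adjoint (so that the adjoint states above solve backwards-in-time problems), can be checked on a discrete level. The sizes and random test vectors below are arbitrary.

```python
import numpy as np

# For A u = u_t with u(0) = 0, the L^2(0,T) adjoint is A* y = -y_t with the
# FINAL condition y(T) = 0; discretely, A* is simply the matrix transpose.
n, dt = 200, 0.01
rng = np.random.default_rng(0)

# Forward difference with zero initial value, as an n x n matrix.
D = (np.eye(n) - np.eye(n, k=-1)) / dt

u = rng.standard_normal(n)
y = rng.standard_normal(n)

lhs = (D @ u) @ y * dt    # discrete <A u, y>_{L^2(0,T)}
rhs = u @ (D.T @ y) * dt  # discrete <u, A* y>_{L^2(0,T)}
# D.T acts as (D.T y)_i = (y_i - y_{i+1}) / dt with y_n treated as 0,
# i.e. a negative backward difference with homogeneous final condition.
```

Such discrete adjoint tests are a standard sanity check when implementing the adjoints derived in this section.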
Remark 2
In case of a Landweber–Kaczmarz method iterating cyclically over the equations defined by \(\mathbb {F}_0,\mathbb {F}_{k\ell }\), k = 1, …, K, ℓ = 1, …, L, the adjoints of derivatives of \(\mathbb {F}_0\) remain unchanged, while the adjoints of \((\frac {\partial \mathbb {F}_{k\ell }}{\partial \hat {\mathbf {m}}}(\hat {\mathbf {m}},\hat {\alpha }))_{k=1,\ldots ,K,\ell =1,\ldots ,L}\) are defined as in (42), (43) by simply skipping the sums over k and ℓ there.
4.2 Reduced Formulation
We now consider the formulation (24) with F defined by (25), (26), and (27). Due to the estimate
if \(\widetilde {a}_{\ell }\in L^2(0,T), c_k{\mathbf {p}}_{\ell }^R\in L^2(\Omega ,\mathbb {R}^3)\) we can choose the state space in the reduced setting as
which is different from the one in the all-at-once setting.
4.2.1 Adjoint Equation
From (25), the derivative of the forward operator takes the form
where u solves the linearized LLG equation
and m is the solution to (21)–(23). This equation can be obtained by formally taking directional derivatives (in the direction of u) in all terms of the LLG equation (21)–(23), or alternatively by subtracting the defining boundary value problems for S(m + 𝜖 u) and S(m), dividing by 𝜖 and then letting 𝜖 tend to zero.
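The "subtract, divide by 𝜖, let 𝜖 tend to zero" construction can be sanity-checked numerically on a toy map with LLG-like nonlinear terms. The map `F` and its hand-computed derivative `dF` below are illustrative stand-ins, not the PDE operators of this section.

```python
import numpy as np

# Finite-difference check of a directional derivative:
#   F(m) = m x h + |m|^2 m,   F'(m)u = u x h + 2 (m . u) m + |m|^2 u.
def F(m, h):
    return np.cross(m, h) + np.dot(m, m) * m

def dF(m, h, u):
    return np.cross(u, h) + 2.0 * np.dot(m, u) * m + np.dot(m, m) * u

rng = np.random.default_rng(1)
m, h, u = (rng.standard_normal(3), rng.standard_normal(3),
           rng.standard_normal(3))

# (F(m + eps*u) - F(m)) / eps should approach F'(m)u at first order in eps.
errs = [np.linalg.norm((F(m + eps * u, h) - F(m, h)) / eps - dF(m, h, u))
        for eps in (1e-2, 1e-3, 1e-4)]
```

The error shrinks roughly linearly in 𝜖, reflecting the first-order remainder of the linearization.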
The Hilbert space adjoint
of \(F'(\hat {\alpha })\) satisfies, for each z ∈ L 2(0, T)KL,
as the transfer function \(\tilde {a}\) is periodic with period T, and the continuous embedding \(H^1(0,T)\hookrightarrow C[0,T]\) allows us to evaluate u(t = T).
Observing
we see that, if q z solves the adjoint equation
then with (46), we have
which implies the Hilbert space adjoint \(F'(\hat {\alpha })^*:\mathcal {Y}\rightarrow \mathbb {R}^2\)
provided that the adjoint state q z exists and belongs to a sufficiently smooth space (see Sect. 4.2.2 below).
The final condition (49) is equivalent to
where \(m_i(T)\), i = 1, 2, 3, denotes the i-th component of m(T). The matrix \(M^{\hat {\alpha }}_T\) with \(\det (M^{\hat {\alpha }}_T)=|\hat {\alpha }_1(\hat {\alpha }_1^2+\hat {\alpha }_2^2)|\) is invertible if \(\hat {\alpha }_1 > 0\), which matches the condition for existence of the solution to the LLG equation. Hence, we are able to rewrite the adjoint equation in the form
Remark 3
Formula (50) suggests a Kaczmarz scheme based on restricting the observation operator to time subintervals for every fixed k, ℓ: we segment (0, T) into several subintervals \((t_j, t_{j+1})\) with break points \(0 = t_0 < \ldots < t_{n-1} = T\) and
with
hence
Here we distinguish between the superscript j for the time subinterval index and subscripts k, ℓ for the index of different receive coils and concentrations.
For \(z^j\in \mathcal {Y}^j\),
yield the same Hilbert space adjoint \(F^{j'}(\hat {\alpha })^*:\mathcal {Y}^j\rightarrow \mathbb {R}^2\) as in (50), and the adjoint state \({\mathbf {q}}^{z^j}\) still needs to be determined on the whole time interval [0, T] with
Besides this, the conventional Kaczmarz method resulting from the collection of observation operators \(\mathcal {K}_{k\ell }\) with k = 1, …, K, ℓ = 1, …, L as in (13) is always applicable, where
with
Thus \({F^{\prime }_{k\ell }(\hat {\alpha })}^*\) can be seen as (50), where the adjoint state \({\mathbf {q}}^z_{k\ell }\) solves (51)–(53) with corresponding data
for each \(z\in \mathcal {Y}_{k\ell }\).
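The time-segmentation idea of Remark 3 can be sketched on a hypothetical integral observation \(\mathcal{K}s=\int_0^T w(t)\,s(t)\,dt\) (the kernel w and signal s below are illustrative choices, not the MPI observation operator): restricting the integral to the subintervals \((t_j,t_{j+1})\) yields partial observations that sum to the full one.

```python
import numpy as np

# Split (0, T) at break points t_0 < ... < t_n and restrict a scalar
# integral observation to each subinterval (midpoint quadrature).
# The subinterval observations K^j add up to the full observation K.

T, n_sub, n_per = 1.0, 4, 200
breaks = np.linspace(0.0, T, n_sub + 1)        # break points t_j

w = lambda t: np.cos(2 * np.pi * t)            # hypothetical kernel
s = lambda t: np.exp(-t)                       # hypothetical state signal

def observe(t0, t1):
    t = np.linspace(t0, t1, n_per + 1)
    mid = 0.5 * (t[:-1] + t[1:])
    return np.sum(w(mid) * s(mid)) * (t1 - t0) / n_per

K_full = observe(0.0, T)
K_sub = [observe(a, b) for a, b in zip(breaks[:-1], breaks[1:])]

print(K_full, sum(K_sub))                      # agree up to quadrature error
```

In the Kaczmarz iteration each K^j then defines its own subproblem, cycled over in turn, exactly as the restricted operators \(\mathcal{K}^j_{k\ell}\) do in the remark.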
4.2.2 Solvability of the Adjoint Equation
We first derive a bound for \({\mathbf {q}}^z\). To this end, we set τ = T − t to convert (51)–(53) into an initial boundary value problem. Then we test (51) with \({\mathbf {q}}^z_t\) and obtain the identities and estimates
Above, we employ the fact that the solution m to the LLG equation satisfies |m| = 1, as well as the continuity of the embeddings \(H^1(\Omega ,\mathbb {R}^3)\hookrightarrow L^6(\Omega ,\mathbb {R}^3)\hookrightarrow L^3(\Omega ,\mathbb {R}^3)\) and \(H^2(\Omega ,\mathbb {R}^3)\hookrightarrow L^\infty (\Omega ,\mathbb {R}^3)\) with constants \(C^\Omega _{H^1\rightarrow L^6}, C^\Omega _{H^1\rightarrow L^3}\) and \(C^\Omega _{H^2\rightarrow L^\infty }\), respectively.
Employing Young’s inequality we deduce, for each t ≤ T and 𝜖 > 0 sufficiently small,
The generic constant C may take different values at each occurrence.
To obtain the full \(H^1\)-norm on the left hand side of this estimate, we apply the transformation \(\tilde {{\mathbf {q}}^z}(t)=e^t{\mathbf {q}}^z(t)\), which yields \(\tilde {{\mathbf {q}}^z}_t(t)=e^{t}({\mathbf {q}}^z(t)+{\mathbf {q}}^z_t(t))\). After testing with \({\mathbf {q}}^z_t\), the term \(\int _\Omega {\mathbf {q}}^z(t)\cdot {\mathbf {q}}^z_t(t) \,dx=\frac {1}{2}\frac {d}{dt}\|{\mathbf {q}}^z(t)\|{ }^2_{L^2(\Omega ,\mathbb {R}^3)}\) contributes, together with \(\frac {1}{2}\frac {d}{dt}\|\nabla {\mathbf {q}}^z(t)\|{ }^2_{L^2(\Omega ,\mathbb {R}^3)}\), to the full \(H^1\)-norm on the left hand side. Alternatively, one can add \({\mathbf {q}}^z\) to both sides of (51) and estimate the right hand side with \(\int _\Omega {\mathbf {q}}^z(t)\cdot {\mathbf {q}}^z_t(t) \,dx\leq \frac {1}{4\epsilon }\|{\mathbf {q}}^z(t)\|{ }^2_{H^1(\Omega ,\mathbb {R}^3)}+\epsilon \|{\mathbf {q}}^z_t(t)\|{ }^2_{L^2(\Omega ,\mathbb {R}^3)}\).
Integrating over (0, t), we get
with the evaluation for the terms \(\|\tilde {K} z\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}\) and \(\|(M_T^{\hat {\alpha }})^{-1}\tilde {K}_T z\|{ }^2_{H^1(\Omega ,\mathbb {R}^3)}\) (since no confusion can arise, we omit here the subscripts k, ℓ indicating the concentrations and coil sensitivities)
for some i, j, k = 1, 2, 3. This estimate holds for \(c{\mathbf {p}}^R\in H^1(\Omega ,\mathbb {R}^3)\) and thus requires some smoothness of the concentration c, while the coil sensitivity \({\mathbf {p}}^R\) is usually smooth in practice.
Then applying Grönwall’s inequality yields
Integrating (62) over (0, T), we also get
Altogether, we obtain
This result applied to the Galerkin approximation implies existence of the solution to the adjoint equation. Uniqueness also follows from (63).
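Grönwall's inequality, the key tool in the bound above, can be checked numerically on a scalar toy problem. The coefficient a and the functions b, d below are illustrative choices unrelated to the LLG setting: any u with \(u'(t)\le a\,u(t)+b(t)\) must stay below \(u(0)e^{at}+\int_0^t e^{a(t-s)}b(s)\,ds\).

```python
import numpy as np

# Gronwall check: integrate u' = a*u + b(t) - d(t) with nonnegative
# "slack" d (so u satisfies the differential inequality u' <= a*u + b)
# by explicit Euler, and compare u(T) with the Gronwall bound.

a, u0, T, N = 1.5, 0.2, 1.0, 50000
dt = T / N
b = lambda t: 1.0 + np.sin(3 * t)
d = lambda t: 0.3 * (1 + np.cos(t))            # nonnegative slack

u, t = u0, 0.0
for _ in range(N):                             # explicit Euler in time
    u += dt * (a * u + b(t) - d(t))
    t += dt

# Gronwall bound at t = T via midpoint quadrature of the convolution
s = np.linspace(0.0, T, N + 1)
mid = 0.5 * (s[:-1] + s[1:])
bound = u0 * np.exp(a * T) + np.sum(np.exp(a * (T - mid)) * b(mid)) * dt

print(u, bound)                                # u stays below the bound
```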
4.2.3 Regularity of the Solution to the LLG Equation
In (63), first of all we need the solution \(\mathbf {m}\in L^\infty (0,T;H^2(\Omega ,\mathbb {R}^3))\) \(\cap L^2(0,T;H^3(\Omega ,\mathbb {R}^3))\) to the LLG equation. This can be obtained from the regularity result in [11, Lemma 2.3] for \({\mathbf {m}}_0\in {H^2(\Omega ,\mathbb {R}^3)}\) with small \(\|\nabla {\mathbf {m}}_0\|{ }_{L^2(\Omega ,\mathbb {R}^3)}\). The remaining task is verifying that the estimate still holds in case the external field h is present, i.e., the right hand side of (21) contains the additional term \(\mbox{Proj}_{{\mathbf {m}}^\bot }\mathbf {h}\).
Following the lines of the proof in [11, Lemma 2.3], we take the second spatial derivative of \(\mbox{Proj}_{{\mathbf {m}}^\bot }\mathbf {h}\), then test it by Δm such that
with C depending only on the constants in the embeddings \(H^1(\Omega ,\mathbb {R}^3)\hookrightarrow L^6(\Omega ,\mathbb {R}^3)\hookrightarrow L^3(\Omega ,\mathbb {R}^3)\). We can then proceed similarly to the proof of [11, Lemma 2.3], applying Young's inequality, Grönwall's inequality and time integration to arrive at
where ∥h∥ is evaluated in \(L^2(0,T;H^1(\Omega ,\mathbb {R}^3))\) or \(L^2(0,T;H^2(\Omega ,\mathbb {R}^3))\) as in the two cases mentioned above.
It remains to prove \({\mathbf {m}}_t\in L^2(0,T;H^1(\Omega ,\mathbb {R}^3))\hookrightarrow L^2(0,T;L^3(\Omega ,\mathbb {R}^3))\) to validate (63). For this purpose, instead of working with (21) we test (18) by − Δm t and obtain
Integrating over (0, T) then employing Hölder’s inequality, Young’s inequality and (64), it follows that
Since, in addition, \(\|{\mathbf {m}}_t\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}<C\left (\|\nabla {\mathbf {m}}_0\|{ }_{L^2(\Omega ,\mathbb {R}^3)}+\|\mathbf {h}\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}\right )\) holds according to [25] (taking into account the presence of h), we arrive at
where ∥h∥ is evaluated in \(L^2(0,T;H^1(\Omega ,\mathbb {R}^3))\) or \(L^2(0,T;H^2(\Omega ,\mathbb {R}^3))\).
In conclusion, the facts that \(\mathbf {m}\in L^\infty (0,T;H^2(\Omega ,\mathbb {R}^3))\cap L^2(0,T;H^3(\Omega ,\mathbb {R}^3))\cap H^1(0,T;H^1(\Omega ,\mathbb {R}^3))\) for \({\mathbf {m}}_0\in H^2(\Omega ,\mathbb {R}^3)\) with small \(\|\nabla {\mathbf {m}}_0\|{ }_{L^2(\Omega ,\mathbb {R}^3)}\), and that \(\mathbf {h}\in L^2(0,T;H^1(\Omega ,\mathbb {R}^3))\) with ∂ν h = 0 on ∂Ω or \(\mathbf {h}\in L^2(0,T;H^2(\Omega ,\mathbb {R}^3))\), guarantee unique existence of the adjoint state \({\mathbf {q}}^z\in L^\infty (0,T;H^1(\Omega ,\mathbb {R}^3))\cap H^1(0,T;L^2(\Omega ,\mathbb {R}^3))\). This regularity of \({\mathbf {q}}^z\) ensures that the adjoint \(F'(\hat {\alpha })^*\) in (50) is well-defined.
Remark 4
-
The LLG equation (21)–(23) is uniquely solvable for \(\hat {\alpha }_1>0\) and arbitrary \(\hat {\alpha }_2\). Therefore, the regularization problem should be solved locally within the ball \(\mathcal {B}_\rho (\hat {\alpha }^0)\) centered at \(\hat {\alpha }^0\) with \(\hat {\alpha }^0_1>0\) and radius \(\rho <\hat {\alpha }_1^0\).
-
[11, Lemma 2.3] requires the smallness condition \(\|\nabla {\mathbf {m}}_0\|{ }_{L^2(\Omega ,\mathbb {R}^3)}\leq \lambda \), where the admissible λ depends on \(\hat {\alpha }\) through the relation \(C^I\left (\lambda ^2+2\lambda +\frac {\hat {\alpha }_2}{\hat {\alpha }_1}\lambda \right )<1\) with \(C^I\) depending on the constants in the interpolation inequalities.
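The role of the sign condition \(\hat{\alpha}_1>0\) can be illustrated, with the spatial terms dropped, on a single-spin (macrospin) version of the dynamics. The sketch below uses the standard Landau–Lifshitz form \(\mathbf{m}_t=-\mathbf{m}\times\mathbf{h}-\alpha\,\mathbf{m}\times(\mathbf{m}\times\mathbf{h})\) as a simplified stand-in for the parametrization \(\hat{\alpha}_1,\hat{\alpha}_2\) of (21): both right-hand-side terms are orthogonal to m, so |m| = 1 is preserved, and a positive damping coefficient drives m toward the field direction.

```python
import numpy as np

# Macrospin LLG illustration (spatial derivatives dropped):
#   m_t = -m x h - alpha * m x (m x h),   alpha > 0.
# Since -m x (m x h) = h - m (m.h) is the projection of h onto the
# orthogonal complement of m, positive damping aligns m with h, while
# |m| = 1 is preserved (enforced here by renormalizing each step).

alpha = 0.5
h = np.array([0.0, 0.0, 1.0])                  # constant external field
m = np.array([1.0, 0.0, 0.0])                  # initial magnetization

dt, steps = 1e-3, 20000
for _ in range(steps):
    mxh = np.cross(m, h)
    m = m + dt * (-mxh - alpha * np.cross(m, mxh))
    m /= np.linalg.norm(m)                     # projection onto the sphere

print(m)  # close to the field direction (0, 0, 1)
```

Reversing the sign of alpha turns damping into anti-damping and m is driven away from h, which mirrors the loss of well-posedness outside the admissible parameter ball.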
Altogether, we arrive at
4.2.4 Differentiability of the Forward Operator
Since the observation operator \(\mathcal {K}\) is linear, the differentiability of F reduces to the differentiability of S.
Let us rewrite the LLG equation (21) in the following form
and denote
Considering the system of equations
with the same boundary and initial data for each, we see that \(\tilde {\mathbf {v}}^\epsilon \) solves
explicitly
Observing the similarity of (71)–(73) to the adjoint equation (51)–(53), with \(\tilde {\mathbf {v}}^\epsilon \) in place of \({\mathbf {q}}^z\), and denoting by \({\mathbf {b}}^\epsilon \) the right-hand side of (68) or (71), one can estimate \(\|\tilde {\mathbf {v}}^\epsilon \|\) using the same technique as in Sect. 4.2.2. In this way, one obtains, for each \(\epsilon \in [0,\bar {\epsilon }]\),
with \(\mathbf {b^\epsilon }\in L^2(0,T;L^2(\Omega ,\mathbb {R}^3))\), by estimating analogously and employing \(\mathbf {m},\mathbf {n}\in L^\infty (0,T;H^2(\Omega ,\mathbb {R}^3))\cap L^2(0,T;H^3(\Omega ,\mathbb {R}^3))\cap H^1(0,T;H^1(\Omega ,\mathbb {R}^3))\). We note that the constant C here is independent of 𝜖.
Next letting \(\mathcal {V}:={L^\infty (0,T;H^1(\Omega ,\mathbb {R}^3))\cap H^1(0,T;L^2(\Omega ,\mathbb {R}^3))}\), we have
In order to prove uniform boundedness of the derivatives of \(\tilde {f}, \tilde {g}\) w.r.t. λ and 𝜖 in the above estimate, we again proceed in a similar manner as in Sect. 4.2.2, since the space for \({\mathbf {q}}^z\) in Sect. 4.2.2 (cf. (64)) coincides with \(\mathcal {V}\) here, and by the fact that
for \(\mathbf {m},\mathbf {n}\in L^\infty (0,T;H^2(\Omega ,\mathbb {R}^3))\cap L^2(0,T;H^3(\Omega ,\mathbb {R}^3))\cap H^1(0,T;H^1(\Omega ,\mathbb {R}^3))\). If ∂ ν h = 0 on ∂ Ω, we just need the \(\|.\|{ }_{L^2(0,T;H^1(\Omega ,\mathbb {R}^3))}\)-norm for h as claimed in (64). This estimate holds for any \(\epsilon \in [0,\bar {\epsilon }]\), and the constant C is independent of 𝜖.
To establish uniform boundedness of \(\|\mathbf {b^\epsilon }\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}\), we need to show that \(\|\mathbf {v^\epsilon }\|{ }_{\mathcal {V}}\) is also uniformly bounded w.r.t. 𝜖. It is seen from
that v 𝜖 solves
Noting that M := m + λ𝜖 v 𝜖 = λ n + (1 − λ)m has \(\|\mathbf {M}\|\leq \frac {C}{{\hat {\alpha }^0_1}-\rho }\) for all λ ∈ [0, 1] with C being independent of 𝜖, and \(\tilde {g}\) is first order in \(\hat {\alpha }\), we can rewrite (75) into the linear equation
Following the lines of the proof in Sect. 4.2.2, boundedness of the terms \(-\Delta , \tilde {F}(\mathbf {M})\), \(\tilde {B}(\mathbf {M})\) is straightforward, while the main term in \(\tilde {G}(\hat {\alpha }+\lambda \epsilon \beta ,\mathbf {M})\), which produces the single square norm of \(\mathbf {v_t^\epsilon }\) after being tested with \(\mathbf {v_t^\epsilon }\), is
Consequently, one obtains, for all \(\epsilon \in [0,\bar {\epsilon }]\),
with C depending only on \({\mathbf {m}}_0, \mathbf {h}, \hat {\alpha }^0,\rho \).
Since b 𝜖 → 0 pointwise and \(\|\mathbf {b^\epsilon }\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}\leq C\) for all \(\epsilon \in [0,\bar {\epsilon }]\), applying Lebesgue’s Dominated Convergence Theorem yields convergence of \(\|\mathbf {b^\epsilon }\|{ }_{L^2(0,T;L^2(\Omega ,\mathbb {R}^3))}\), thus of \(\|\tilde {\mathbf {v}}^\epsilon \|{ }_{\mathcal {V}}\), to zero. Fréchet differentiability of the forward operator in the reduced setting is therefore proved.
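In a numerical implementation, Fréchet differentiability of the forward operator is typically verified by a Taylor-remainder test: for a differentiable F, \(\|F(\hat{\alpha}+\epsilon\beta)-F(\hat{\alpha})-\epsilon F'(\hat{\alpha})\beta\|=O(\epsilon^2)\), so the remainder should shrink by a factor of about four when 𝜖 is halved. The map F below is a hypothetical smooth stand-in for the parameter-to-observation map, used only to demonstrate the test.

```python
import numpy as np

# Taylor-remainder test for Frechet differentiability:
#   ||F(a + eps*b) - F(a) - eps * F'(a) b|| = O(eps^2),
# so halving eps should reduce the remainder by ~4x.

def F(a):                                       # hypothetical smooth map R^2 -> R^3
    return np.array([a[0] * a[1], np.exp(a[0]), a[1] ** 3])

def dF(a, b):                                   # its directional derivative
    return np.array([a[1] * b[0] + a[0] * b[1],
                     np.exp(a[0]) * b[0],
                     3 * a[1] ** 2 * b[1]])

a = np.array([0.4, 1.1])
b = np.array([0.7, -0.3])

rem = []
for eps in [1e-1, 5e-2, 2.5e-2]:
    r = np.linalg.norm(F(a + eps * b) - F(a) - eps * dF(a, b))
    rem.append(r)

rates = [rem[i] / rem[i + 1] for i in range(2)]
print(rates)  # each ratio close to 4 (second-order remainder)
```

The same test applied to a discretized reduced forward operator and its adjoint-based derivative is a cheap sanity check before running any iterative regularization method.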
5 Conclusion
In this contribution we outlined a mathematical model of MPI that takes relaxation effects into account, which led us to the LLG equation describing the behavior of the magnetic material inside the particles on a microscale level. For calibrating an MPI device it is necessary to compute the system function, which mathematically can be interpreted as an inverse parameter identification problem for an initial boundary value problem based on the LLG equation. To this end we provided a detailed analysis of the forward model, i.e., the operator mapping the coefficients to the solution of the PDE, as well as of the underlying inverse problem. The inverse problem itself was investigated in an all-at-once and a reduced approach. The analysis includes representations of the respective adjoint operators and Fréchet derivatives. These results are necessary for a subsequent robust numerical computation of the system function, which will be the subject of future research. Beyond this, the analysis might be useful for the development of solution methods for other inverse problems connected to the LLG equation.
Notes
- 1.
Note that, as opposed to \(H^1(\Omega)\) functions, \(H^2(\Omega)\) functions do have a Neumann boundary trace.
References
L. Borcea, Electrical impedance tomography. Inverse Prob. 18, R99–R136 (2002)
F. Bruckner, D. Suess, M. Feischl, T. Führer, P. Goldenits, M. Page, D. Praetorius, M. Ruggeri, Multiscale modeling in micromagnetics: existence of solutions and numerical integration. Math. Models Methods Appl. Sci. 24, 2627–2662 (2014)
D. Colton, R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory (Springer, New York, 2013)
L.R. Croft, P.W. Goodwill, S.M. Conolly, Relaxation in x-space magnetic particle imaging. IEEE Trans. Med. Imaging 31, 2335–2342 (2012)
B.D. Cullity, C.D. Graham, Introduction to Magnetic Materials (Wiley, New York, 2011)
W. Demtroeder, Experimentalphysik (Springer, Berlin, 2013)
T. Dunst, M. Klein, A. Prohl, A. Schäfer, Optimal control in evolutionary micromagnetism. IMA J. Numer. Anal. 35, 1342–1380 (2015)
A.S. Fokas, G.A. Kastis, Mathematical Methods in PET and SPECT Imaging (Springer, New York, 2015), pp. 903–936
T.L. Gilbert, A phenomenological theory of damping in ferromagnetic materials. IEEE Trans. Magn. 40, 3443–3449 (2004)
B. Gleich, J. Weizenecker, Tomographic imaging using the nonlinear response of magnetic particles. Nature 435, 1214–1217 (2005)
B. Guo, M.-C. Hong, The Landau-Lifshitz equation of the ferromagnetic spin chain and harmonic maps. Calculus Variations Partial Differ. Equ. 1, 311–334 (1993)
M. Haltmeier, R. Kowar, A. Leitao, O. Scherzer, Kaczmarz methods for regularizing nonlinear ill-posed equations II: applications. Inverse Prob. Imaging 1, 507–523 (2007)
M. Haltmeier, A. Leitao, O. Scherzer, Kaczmarz methods for regularizing nonlinear ill-posed equations I: convergence analysis. Inverse Prob. Imaging 1, 289–298 (2007)
M. Hanke, A. Neubauer, O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 72, 21–37 (1995)
W. Hinshaw, A. Lent, An introduction to NMR imaging: From the Bloch equation to the imaging equation. Proc. IEEE 71, 338–350 (1983)
B. Kaltenbacher, Regularization based on all-at-once formulations for inverse problems. SIAM J. Numer. Anal. 54, 2594–2618 (2016)
B. Kaltenbacher, All-at-once versus reduced iterative methods for time dependent inverse problems. Inverse Prob. 33, 064002 (2017)
B. Kaltenbacher, A. Neubauer, O. Scherzer, Iterative Regularization Methods for Nonlinear Ill-Posed Problems (De Gruyter, Berlin, 2008)
B. Kaltenbacher, F. Schöpfer, T. Schuster, Iterative methods for nonlinear ill-posed problems in Banach spaces: convergence and applications to parameter identification problems. Inverse Prob. 25, 065003 (2009)
A. Kirsch, A. Rieder, Inverse problems for abstract evolution equations with applications in electrodynamics and elasticity. Inverse Prob. 32, 085001 (2016)
T. Kluth, Mathematical models for magnetic particle imaging. Inverse Prob. 34, 083001 (2018)
T. Kluth, P. Maass, Model uncertainty in magnetic particle imaging: nonlinear problem formulation and model-based sparse reconstruction. Int. J. Magn. Part. Imaging 3, 1707004 (2017)
T. Knopp, T.M. Buzug, Magnetic Particle Imaging: an Introduction to Imaging Principles and Scanner Instrumentation (Springer, Berlin, 2012)
T. Knopp, N. Gdaniec, M. Möddel, Magnetic particle imaging: from proof of principle to preclinical applications. Phys. Med. Biol. 62, R124 (2017)
M. Kruzík, A. Prohl, Recent Developments in the Modeling, Analysis, and Numerics of Ferromagnetism. SIAM Rev. 48, 439–483 (2006)
L. Landau, E. Lifshitz, 3—On the theory of the dispersion of magnetic permeability in ferromagnetic bodies Reprinted from Physikalische Zeitschrift der Sowjetunion 8, Part 2, 153, 1935, in Perspectives in Theoretical Physics, ed. by L. Pitaevski (Pergamon, Amsterdam, 1992), pp. 51–65
L. Landweber, An iteration formula for Fredholm integral equations of the first kind. Am. J. Math. 73, 615–624 (1951)
T. März, A. Weinmann, Model-based reconstruction for magnetic particle imaging in 2D and 3D. Inverse Prob. Imaging 10, 1087–1110 (2016)
F. Natterer, The Mathematics of Computerized Tomography (Vieweg+Teubner, Wiesbaden, 1986)
F. Natterer, F. Wübbeling, Mathematical Methods in Image Reconstruction (SIAM, Philadelphia, 2001)
T.T.N. Nguyen, Landweber–Kaczmarz for parameter identification in time-dependent inverse problems: all-at-once versus reduced version. Inverse Prob. 35, 035009 (2019)
D.B. Reeves, J.B. Weaver, Approaches for modeling magnetic nanoparticle dynamics. Critical Rev. Biomed. Eng. 42, 85–93 (2014)
A. Rieder, On the regularization of nonlinear ill-posed problems via inexact Newton iterations. Inverse Prob. 15, 309–327 (1999)
T. Roubíček, Nonlinear Partial Differential Equations with Applications. International Series of Numerical Mathematics (Springer, Basel, 2013)
T. Schuster, B. Kaltenbacher, B. Hofmann, K. Kazimierski, Regularization Methods in Banach Spaces (de Gruyter, Berlin, 2012). Radon Series on Computational and Applied Mathematics
L. Shepp, Y. Vardi, Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imag. 1, 113–122 (1982)
A. Wald, T. Schuster, Sequential subspace optimization for nonlinear inverse problems. J. Inverse Ill-posed Prob. 25, 99–117 (2016)
A. Wald, T. Schuster, Tomographic terahertz imaging using sequential subspace optimization, in New Trends in Parameter Identification for Mathematical Models, ed. by B. Hofmann, A. Leitao, J. Zubelli (Birkhäuser, Basel, 2018)
Acknowledgements
The work of Anne Wald and Thomas Schuster was partly funded by Hermann und Dr. Charlotte Deutsch–Stiftung and by the German Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung, BMBF) under 05M16TSA. This article was written during Tram Nguyen’s employment at Alpen-Adria-Universität Klagenfurt.
© 2021 Springer Nature Switzerland AG
Kaltenbacher, B., Nguyen, T.T.N., Wald, A., Schuster, T. (2021). Parameter Identification for the Landau–Lifshitz–Gilbert Equation in Magnetic Particle Imaging. In: Kaltenbacher, B., Schuster, T., Wald, A. (eds) Time-dependent Problems in Imaging and Parameter Identification. Springer, Cham. https://doi.org/10.1007/978-3-030-57784-1_13
Print ISBN: 978-3-030-57783-4
Online ISBN: 978-3-030-57784-1