Abstract
We study the problem of reconstructing an unknown source term in parabolic equations from integral observations. The problem is reformulated as a variational one combined with Tikhonov regularization, and a formula for the gradient of the objective functional to be minimized is derived via the solution of an adjoint problem. The variational problem is discretized by a splitting method based on finite difference schemes and solved by the conjugate gradient method. A scheme for numerically estimating the singular values of the solution operator of the inverse problem is suggested. Some numerical examples are presented to show the efficiency of the method.
1 Introduction
The problem of determining a source term in parabolic equations from some observations plays an important role in practice [4, 9, 10]. Because of its importance, many researchers have devoted their attention to it [1,2,3, 5, 7, 8, 12, 14, 17, 18, 22, 24]. To state the problem precisely, let \(\varOmega \) be a bounded domain in \(\mathbb R^n\) with boundary \(\partial \varOmega \). Denote the cylinder \( Q := \varOmega \times (0,T],\) where \(T>0\), and \(S := \partial \varOmega \times (0,T]\). Let
Consider the initial boundary value problem
Let F have one of the following forms
with \(\varphi (x,t)\in L^2(Q)\) and \(g(x,t)\in L^2(Q)\) being given.
We consider the problem of determining f from N integral observations of the solution u
with \(\omega _i(x) \in L^\infty (\varOmega )\), nonnegative almost everywhere and satisfying \(\int _\varOmega \omega _i(x)dx > 0\), being the weight functions. Suppose that \(z_i,\ i=1,2,\dots ,N,\) are approximately given by \(z_i^\delta \) satisfying
These inverse problems may have many solutions, especially when f depends on both x and t.
Indeed, suppose that the coefficients of (7) are sufficiently smooth. If \(\varphi (x,t)\not =0\) and u(x, t) is given for all \((x,t)\in Q=\varOmega \times (0,T)\), the inverse problem has a unique solution
We show that if there is a u satisfying (11), then there are infinitely many \(u\in C^\infty (Q),\ u|_S=0\) satisfying (11). Indeed, for \(v(x)\in C^\infty (\varOmega )\) satisfying (11), consider the following equation
Denote \(\mathcal P=\text {span}\{\omega _1, \omega _2,\dots ,\omega _N\}\). Then \(\mathcal P\) is a subspace of \(L^2(\varOmega )\) and \(\dim \mathcal P\le N.\) So \(\mathcal Q=\mathcal P^\perp \) is an infinite-dimensional space. Moreover, we have the representation \(v=v^1+v^2\), where \(v^1\in \mathcal {P},\) \(v^2\in \mathcal Q\) and \(\int _\varOmega v^1(x) v^2(x) dx = 0.\) It follows that there are infinitely many functions \(v\in C^\infty (\varOmega )\) satisfying equation (12). So, there are infinitely many functions \(u(\cdot ,t)\in C^\infty (\varOmega )\) satisfying the equation
That is, there are infinitely many functions \(u(\cdot ,t)\in C^\infty (\varOmega )\) satisfying (11). We conclude that the inverse problem of finding f from (11) has infinitely many solutions. Therefore, we have to introduce an appropriate notion of solution.
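This orthogonality argument is easy to check numerically. The following sketch (our own discrete construction; all names are hypothetical) perturbs a grid source by a vector in the orthogonal complement of \(\mathcal P=\text {span}\{\omega _1,\dots ,\omega _N\}\) and verifies that all N integral observations stay the same:

```python
import numpy as np

# Discrete analogue of the non-uniqueness argument: a perturbation
# w_perp orthogonal to span{omega_1, ..., omega_N} changes the source
# but leaves every integral observation unchanged.
rng = np.random.default_rng(0)
n, N, h = 200, 3, 1.0 / 200
x = (np.arange(n) + 0.5) * h
W = np.stack([np.sin((i + 1) * np.pi * x) for i in range(N)])  # weights

def observe(v):
    return h * (W @ v)        # the N integral observations of v

v = np.exp(x)                 # some source profile
w = rng.standard_normal(n)
Q, _ = np.linalg.qr(W.T)      # orthonormal basis of span{omega_i}
w_perp = w - Q @ (Q.T @ w)    # projection onto the orthogonal complement

obs_gap = observe(v + w_perp) - observe(v)
# obs_gap vanishes up to roundoff although w_perp is a sizable change:
# the data cannot distinguish v from v + w_perp.
```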
This paper is organized as follows. In Section 2 we describe the variational method with the splitting finite difference scheme for solving the inverse problem. In Section 3 we present the discretized variational problem and the conjugate gradient method. Finally, in Section 4 we test the proposed algorithms on some concrete examples.
2 Variational Problem
To introduce the concept of weak solution, we use the standard Sobolev spaces \(H^1(\varOmega )\), \(H^1_0(\varOmega )\), \(H^{1,0}(Q)\) and \(H^{1,1}(Q)\) [11, 21, 23]. Further, for a Banach space B, we define
with the norm
In the sequel, we shall use the space W(0, T) defined as
equipped with the norm
We note here that \((H^1_0(\varOmega ))' = H^{-1}(\varOmega )\).
The solution of the problem (7) is understood in the weak sense as follows: A weak solution in W(0, T) of the problem (7) is a function \(u(x,t)\in W(0,T)\) satisfying the identity
and
Under the standard hypotheses (1)–(6), the existence and uniqueness of a solution, as well as an a priori estimate for the problem (7), can be established. More precisely, following [23, Chapter IV] and [21, pp. 141–152], there exists a unique solution in W(0, T) of the problem (7). Furthermore, there is a positive constant \(c_d\) independent of \(a_i,b,f,\varphi ,g\) and v such that
We denote the solution u(x, t) of the problem (7) by u(x, t; f) or u(f) to emphasize its dependence on f. To identify f from (11), we minimize the misfit functional
with respect to f. However, this minimization problem is unstable and it may have many minimizers. Therefore, we minimize the Tikhonov functional instead of (15). In fact, we minimize
for the case F has form (8).
for the case F has form (9).
for the case F has form (10). Here, \(\gamma > 0\) is the Tikhonov regularization parameter and \(f^*\) is an a priori estimate of f. By the standard method, we can prove that \(J_\gamma \) is Fréchet differentiable and derive a formula for its gradient. As \(l_iu(f)\) is affine, the functional \(J_\gamma \) is strictly convex. Hence, it attains a unique minimizer, which we call the \(f^*\)-least squares solution to the inverse problems (7) and (11). As the inverse problem may have many solutions, we will see that the choice of \(f^*\) is crucial for selecting one among these solutions.
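The role of \(f^*\) can be seen already in a finite-dimensional analogue. The sketch below (a toy stand-in for the discretized problem; all names are ours) solves the Tikhonov normal equations \((A^{\top }A+\gamma I)f=A^{\top }z+\gamma f^*\) for a rank-deficient A: different choices of \(f^*\) yield different minimizers that fit the data equally well.

```python
import numpy as np

# Toy Tikhonov problem: minimize ||A f - z||^2 + gamma * ||f - f_star||^2.
# The unique minimizer solves (A^T A + gamma I) f = A^T z + gamma f_star.
def tikhonov(A, z, gamma, f_star):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + gamma * np.eye(n),
                           A.T @ z + gamma * f_star)

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 10))      # 3 observations, 10 unknowns
f_true = rng.standard_normal(10)
z = A @ f_true                        # consistent, underdetermined data

f0 = tikhonov(A, z, 1e-8, np.zeros(10))   # f_star = 0
f1 = tikhonov(A, z, 1e-8, np.ones(10))    # f_star = 1
# Both reconstructions reproduce the data almost exactly, yet they
# differ: the a priori estimate f_star selects which solution we get.
```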
Indeed, introducing the adjoint problem
we can prove the following results [17, 19].
Theorem 1
The functional \(J_\gamma \) in (16) is Fréchet differentiable and its gradient \(\nabla J_\gamma \) at f has the form
where p(x, t) is the solution to the adjoint problem (19).
Remark 1
When \(J_\gamma \) has the form in (17) or (18), we have
i)
$$\begin{aligned} \nabla J_\gamma (f) = \int _0^T \varphi (x,t) p(x,t) dt+ \gamma (f(x)- f^*(x)) \ \text {for the functional (17)}. \end{aligned}$$
ii)
$$\begin{aligned} \nabla J_\gamma (f) = \int _\varOmega \varphi (x,t) p(x,t) dx+ \gamma (f(t)- f^*(t)) \ \text {for the functional (18)}. \end{aligned}$$
2.1 Conjugate Gradient Method
To find the minimizer of (16), we use the conjugate gradient method (CG). It proceeds as follows: assume that at the \(k\)-th iteration we have \(f^k\). Then the next iterate is
with
and
To evaluate \(\alpha ^k\) we denote by \(\bar{u}(v, g)\) the solution to the problem
with \(\tilde{u}[f]\) being the solution to the linear problem
In this case, the observation operators have the form
with \(A_i\) being bounded linear operators from \(L^2(Q)\) into \(L^2(0,T)\). We have
Differentiating \(J_\gamma (f^k+\alpha d^k)\) with respect to \(\alpha \) and setting \(\frac{\partial J_\gamma (f^k+\alpha d^k)}{\partial \alpha }=0\), after some elementary calculations we obtain
Since \(d^k = - \nabla J_\gamma (f^k)+\beta ^k d^{k-1}, \ r^k = - \nabla J_\gamma (f^k)\) and \(\left\langle r^k, d^{k-1}\right\rangle _{L^2(Q)} = 0\), we have
Thus, the CG has the form
Step 1: Set \(k=0\), initiate \(f^0.\)
Step 2: Calculate \(r^0 = - \nabla J_\gamma (f^0)\) and set \( d^0 = r^0\).
Step 3: Evaluate
Set \(f^1=f^0 + \alpha ^0 d^0.\)
Step 4: For \(k = 1,2, \dots \), calculate
with
Step 5: Calculate
Update
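Steps 1–5 can be written compactly for a finite-dimensional quadratic analogue of \(J_\gamma \). In the sketch below a matrix A stands in for the observation operators; everything else (names, sizes, the Fletcher–Reeves update) is our illustrative choice, not the paper's code.

```python
import numpy as np

def cg_minimize(A, z, gamma, f_star, iters=50):
    """CG for J(f) = 0.5*||A f - z||^2 + 0.5*gamma*||f - f_star||^2."""
    grad = lambda f: A.T @ (A @ f - z) + gamma * (f - f_star)
    f = np.zeros_like(f_star)            # Step 1: initial iterate
    r = -grad(f)                         # Step 2: r = -grad J
    d = r.copy()
    for _ in range(iters):               # Steps 3-5
        Ad = A @ d
        # Exact line search for the quadratic functional:
        alpha = (r @ d) / (Ad @ Ad + gamma * (d @ d))
        f = f + alpha * d
        r_new = -grad(f)
        if np.linalg.norm(r_new) < 1e-12:
            return f
        beta = (r_new @ r_new) / (r @ r)  # Fletcher-Reeves update
        d, r = r_new + beta * d, r_new
    return f

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 8))
f_true = rng.standard_normal(8)
f_rec = cg_minimize(A, A @ f_true, gamma=1e-10, f_star=np.zeros(8))
# With negligible regularization and consistent data, CG recovers f_true.
```

For a strictly convex quadratic functional this exact line search makes CG converge in at most \(\dim f\) iterations in exact arithmetic.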
2.2 Singular Values
Set
where \(A_i\) is defined in (20). The problem of determining f in (7) (f has form in (8) or (9) or (10)) from (11) can be written in the form \(\mathcal A f=z\), where
To characterize the degree of ill-posedness of the inverse source problem, we estimate the singular values of \(\mathcal A\), i.e., the square roots of the eigenvalues of \(\mathcal A^*\mathcal A.\) In doing so, we proceed as follows.
We present the case where f depends on both the time and space variables, i.e., \(\mathcal A: L^2(Q)\rightarrow (L^2(0,T))^N\). For the operator \(A_i\), we have \(A_i^*\tilde{g}=\varphi (x,t)\tilde{p}(x,t)\), where \(\tilde{g}\in L^2(0,T)\) and \(\tilde{p}(x,t)\) is the solution to the adjoint problem
From (20), we have
Hence,
If we take \(z_i\) such that \(z_i=l_i\bar{u}(v,g)\), then due to Theorem 1, we have \(J'_0(f)=\sum _{i=1}^NA_i^*A_if=\varphi (x,t){p^*}(x,t)\), where \(p^*\) is the solution of the adjoint problem
Thus, if \(f(x,t)\in L^2(Q)\) is given, we can calculate the value \(J'_0(f)=\sum _{i=1}^NA_i^*A_if=\varphi (x,t)p^*(x,t)\). Although we do not know the explicit form of \(\mathcal A^*\mathcal A\), we can use the Lanczos algorithm [20] to estimate its eigenvalues once the problem is discretized. The algorithm is as follows:
Initialization: Let \(\beta _0=0\), \(q_0=0\), choose an arbitrary vector b, and calculate \(q_1=\frac{b}{\Vert b\Vert }\). Put \(Q=q_1\) and \(k=0.\)
Iteration: For \(k=1,2,3,\dots \)
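A minimal version of this Lanczos iteration is sketched below (our code, following [20]); the callable `matvec` stands in for one evaluation of \(J_0'(f)=\sum _{i=1}^N A_i^*A_if\), i.e., one forward and one adjoint solve, and the symmetric test matrix is only a stand-in for the discretized \(\mathcal A^*\mathcal A\).

```python
import numpy as np

def lanczos_eigs(matvec, n, m, rng):
    """m steps of Lanczos; returns the Ritz values (eigenvalue estimates)."""
    b = rng.standard_normal(n)
    q = b / np.linalg.norm(b)
    q_prev, beta = np.zeros(n), 0.0
    alphas, betas = [], []
    for _ in range(m):
        v = matvec(q)                       # apply the symmetric operator
        alpha = q @ v
        v = v - alpha * q - beta * q_prev   # three-term recurrence
        alphas.append(alpha)
        beta = np.linalg.norm(v)
        if beta < 1e-12:                    # invariant subspace found
            break
        betas.append(beta)
        q_prev, q = q, v / beta
    k = len(alphas)
    T = (np.diag(alphas)
         + np.diag(betas[:k - 1], 1)
         + np.diag(betas[:k - 1], -1))      # tridiagonal Lanczos matrix
    return np.linalg.eigvalsh(T)            # Ritz values, ascending

rng = np.random.default_rng(3)
M = rng.standard_normal((60, 60))
H = M.T @ M                                 # SPD test operator
ritz = lanczos_eigs(lambda x: H @ x, 60, 30, rng)
exact = np.linalg.eigvalsh(H)
# The largest Ritz value approximates the largest eigenvalue closely.
```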
We will present some numerical examples showing the efficiency of this algorithm in Section 4.
3 Variational Method for Discretized Problem
In this section, we impose some restrictions on the domain and the coefficients. We start with Problem (13)–(14). First, we suppose that \(\varOmega \) is the open parallelepiped \((0,L_1)\times (0,L_2) \times \cdots \times (0,L_n)\) in \( \mathbb R^n \). Second, in (7), we suppose that \(a_{ij} = 0\) if \(i\ne j\), and for simplicity from now on we denote \(a_{ii}\) by \(a_i\). Following [15, 16, 25] (see also [6, 19]), we subdivide the domain \(\varOmega \) into small cells by the rectangular uniform grid specified by
with \(h_i = L_i/N_i\) being the grid size in the \(x_i\)-direction, \(i = 1,\dots , n\). To simplify the notation, we denote by \(x^k := (x_1^{k_1}, \dots , x_n^{k_n}) \), where \(k := (k_1, \dots , k_n)\), \(0\le k_i \le N_i\). We also denote by \(h := (h_1,\dots , h_n)\) the vector of spatial grid sizes and \(\varDelta h := h_1\cdots h_n\). Let \(e_i\) be the unit vector in the \(x_i\)-direction, \(i = 1,\dots , n\), i.e., \(e_1 = (1,0,\dots ,0)\) and so on. Denote by
In the following, \(\varOmega _h\) denotes the set of the indices of all interior grid points and \(\bar{\varOmega }_h\) denotes the set of the indices of all grid points belonging to \(\bar{\varOmega }\), i.e.,
We also make use of the following sets
for \(i = 1,\dots , n\). For a function u(x, t) defined in \(Q_T\), we denote by \(u^k(t)\) its approximate value at \((x^k,t)\). We define the following forward finite difference quotient with respect to \(x_i\)
Now, taking into account the homogeneous boundary condition, we approximate the integrals in (13) as follows
Here \(b^k(t)\), \(f^k(t)\), \(\varphi ^k(t)\), \(g^k(t)\) and \(a_i^{k+\frac{e_i}{2}}(t)\) are approximations to the functions b(x, t), f(x, t), \(\varphi (x,t)\), g(x, t) and \(a_i(x,t)\) at the grid point \(x^k\). More precisely, if these functions are continuous at \(x^k\), we take their approximations by their value at \(x^k\) and \(a_i^{k+\frac{e_i}{2}}(t)=a_i(x^{k+\frac{e_i}{2}},t)\). Otherwise, we take
and
With the approximations (21), (22), (23), (24) and (25), we have the following discrete analogue of (13)
We note that, using the discrete analogue of integration by parts with boundary condition \(u^0=\eta ^0=0\) and \(u^{N_i}=\eta ^{N_i}=0\), we obtain
Hence, replacing this equality into (26), we obtain the following system which approximates the original problem (7)
with \(\bar{u} = \{u^k, k\in \varOmega _h\}\) being the grid function. The function \(\bar{v}\) is the grid function approximating the initial condition v and
for \(k\in \varOmega _h\) and
We note that the coefficient matrices \(\varLambda _i\) are positive semi-definite (see, e.g., [19]). The boundedness of the solution of (27) is shown in the following theorem.
Theorem 2
Let \(\bar{u}\) be a solution of the Cauchy problem (27). There exists a constant c independent of h and the coefficients of the equation such that
Proof
For arbitrary \(t^*\in (0,T]\), set
Since
and \(\bar{u}^k(0) = \bar{v}\), it follows from (26) that
Multiplying both sides of the equality (29) by 2, applying Cauchy’s inequality to the first term on the right-hand side, and noting that \(b^k\ge 0\), we obtain
Put
From (30) we have
Applying Gronwall’s inequality, we obtain
Hence, we have
From the conditions (1), (2) and (3) about the coefficient \(a_i\), the inequalities (30) and (31) we have
Combining the two inequalities, we obtain the inequality (28).\(\square \)
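For reference, the integral form of Gronwall's inequality used in the proof can be stated as follows (a standard formulation, quoted here for convenience):

```latex
\text{If } y(t)\le c+\int_0^t a(s)\,y(s)\,ds \ \text{ for } t\in[0,T]
\ \text{ with } a\ge 0, \quad \text{then} \quad
y(t)\le c\,\exp\!\Bigl(\int_0^t a(s)\,ds\Bigr), \qquad t\in[0,T].
```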
3.1 Time Discretization
To obtain the finite difference scheme for (27), we divide the time interval [0, T] into M sub-intervals by the points \(t_i,\ i = 0,\dots , M\), with \(t_0=0, t_1=\varDelta t,\dots , t_{M} = M\varDelta t=T.\) To simplify the notation, we set \(u^{k,m} := u^k(t_m)\). We also denote \(F^{k,m} :=F^k(t_m)\) and \(\varLambda _i^m=\varLambda _i(t_m)\), \(m=0,\dots , M\). In the following, we drop the spatial index to simplify the notation. The finite difference scheme is written as follows
3.2 Splitting Method
In order to obtain a splitting scheme for the Cauchy problem (27), we also discretize the time interval in the same way as for the finite difference method. We denote \(u^{m+\delta } :=\bar{u}(t_m+\delta \varDelta t)\) and \(\varLambda _i^m :=\varLambda _i(t_{m}+\varDelta t/2).\) We introduce the following implicit two-circle component-by-component splitting scheme [15]
Equivalently,
where \(E_i\) is the identity matrix corresponding to \(\varLambda _i, i=1,\dots , n\). The splitting scheme (33) can be rewritten in the following compact form
with
where \(B_i^m := (E_i+\frac{\varDelta t}{4}\varLambda _i^m)^{-1}(E_i-\frac{\varDelta t}{4}\varLambda _i^m), i=1,\dots , n.\)
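To see why this factorized form is attractive, the sketch below (our 1-D simplification with hypothetical names, not the full two-circle scheme) assembles one directional factor \(B_i=(E_i+\frac{\varDelta t}{4}\varLambda _i^m)^{-1}(E_i-\frac{\varDelta t}{4}\varLambda _i^m)\) and checks that it is a contraction, which is the source of the scheme's unconditional stability:

```python
import numpy as np

# One directional factor of the splitting scheme in 1-D:
# B = (E + dt/4 * L)^{-1} (E - dt/4 * L), with L the positive
# semi-definite second-difference matrix (homogeneous Dirichlet BCs).
def second_difference(n, h):
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def splitting_factor(L, dt):
    E = np.eye(L.shape[0])
    return np.linalg.solve(E + 0.25 * dt * L, E - 0.25 * dt * L)

n, h, dt = 49, 1.0 / 50, 1e-3
L = second_difference(n, h)
B = splitting_factor(L, dt)
# Eigenvalues of B are (1 - dt/4 * lam) / (1 + dt/4 * lam) with
# lam >= 0, so they lie in (-1, 1]: each stage is non-expansive
# regardless of the size of dt.
```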
3.3 Discretized Variational Problem
To complete the variational method for multi-dimensional cases, we use the splitting method for the forward problem and take the discretized functional
where \(u^{k,m}(\bar{f})\) indicates the dependence on the right-hand side term \(\bar{f}\) and m is the index of the grid points on the time axis. The notation \(\omega _i^k=\omega _i(x^k)\) indicates the approximation of the function \(\omega _i(x)\) in \(\varOmega _h\) at the point \(x^k\). Normally, we take it to be the average of \(\omega _i\) over the cell where \(x^k\) is located.
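The discrete observations entering (35) are simple weighted grid sums, \(l_iu(t_m)\approx \varDelta h\sum _{k\in \varOmega _h}\omega _i^k u^{k,m}\). A 1-D sketch (hypothetical names and data, not the paper's code):

```python
import numpy as np

# Discrete integral observation on a uniform 1-D grid:
# z(t_m) ~= h * sum_k omega[k] * u[k, m] (rectangle-type quadrature).
def integral_observation(u, omega, h):
    """u: (n_x, n_t) grid values, omega: (n_x,) weight samples."""
    return h * (omega @ u)

n_x, n_t, h = 51, 11, 1.0 / 50
x = np.linspace(0.0, 1.0, n_x)
t = np.linspace(0.0, 1.0, n_t)
omega = np.sin(np.pi * x)            # a smooth weight function
u = np.outer(np.ones(n_x), t)        # toy state u(x, t) = t
z = integral_observation(u, omega, h)
# Since u is independent of x, z(t) = t * h * omega.sum()
# ~= t * \int_0^1 sin(pi x) dx = (2/pi) t, up to quadrature error.
```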
To minimize (35) by the conjugate gradient method, we first calculate the gradient of the objective function \(J_0^{h,\varDelta t}(\bar{f})\); it is given by the following theorem.
Theorem 3
The gradient \(\nabla J_0^{h,\varDelta t}(\bar{f})\) of the objective function \(J_0^{h,\varDelta t}\) at \(\bar{f}\) is given by
where \(\eta \) satisfies the adjoint problem
with
Here the matrix \((B^{m})^*\) is given by
Proof
For an infinitesimally small variation \(\delta \bar{f}\) of \(\bar{f}\), we have from (35) that
where \(w^{k,m} := u^{k,m}(\bar{f}+\delta \bar{f})-u^{k,m}(\bar{f})\) and \(\psi _i^{k,m}={\varDelta h}\omega _i^k(\sum _{k\in \varOmega _{h}}\omega _i^ku^{k,m}-z_i^m),\ k\in \varOmega _h.\)
It follows from (34) that w is the solution to the problem
Taking the inner product of both sides of the mth equation of (39) with an arbitrary vector \(\eta ^m\in \mathbb R^{N_1\times \dots \times N_n}\), summing the results over \(m=0,\dots , M-1\), we obtain
Here \(\bigl (B^m\bigl )^*\) is the adjoint matrix of \(B^m\).
Taking the inner product of both sides of the first equation of (37) with an arbitrary vector \(w^{m+1}\), summing the results over \(m=0,\dots , M-1\), we obtain
Noting that \(w^0=\eta ^M=0\), from (40) and (41) we have
On the other hand, it can be proved by induction that \(\sum _{i=1}^N\sum _{m=1}^{M}\sum _{k\in \varOmega _h}\bigl (\omega _i^kw^{k,m}\bigl )^2=o(\Vert \delta \bar{f}\Vert )\). Hence, from (38) and (42), we obtain
Consequently, the gradient of the objective function \(J_0^h\) can be written as
Note that, since the coefficient matrices \(\varLambda _i^m, i=1,\dots ,n, m=0,\dots ,M-1\) are symmetric, we have
and
The proof is complete.\(\square \)
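When implementing the adjoint-based gradient of Theorem 3, a directional finite-difference test is a useful sanity check. The toy sketch below (with a stand-in matrix C, not the paper's scheme) illustrates the check:

```python
import numpy as np

# Finite-difference check of an adjoint-style gradient on a toy
# quadratic J(f) = 0.5 * ||C f - z||^2 (C stands in for the forward
# map): the gradient C^T (C f - z) must match directional differences.
rng = np.random.default_rng(4)
C = rng.standard_normal((12, 7))
z = rng.standard_normal(12)
f = rng.standard_normal(7)
d = rng.standard_normal(7)        # test direction

J = lambda g: 0.5 * np.sum((C @ g - z) ** 2)
grad = C.T @ (C @ f - z)          # adjoint-based gradient

eps = 1e-6
fd = (J(f + eps * d) - J(f - eps * d)) / (2 * eps)
# fd agrees with grad @ d up to roundoff in the central difference.
```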
The conjugate gradient method for the discretized functional (35) proceeds in the following steps:
Step 1. Given an initial approximation \(f^0\), calculate the residual \(\hat{r}^0=\sum _{i=1}^N[l_iu(f^0)-z_i]\) by solving the splitting scheme (32) with f replaced by the initial approximation \(f^0\), and set \(k=0\).
Step 2. Calculate the gradient \(r^0=-\nabla J_{\gamma }(f^0)\) given in (36) by solving the adjoint problem (37). Then we set \(d^0=r^0.\)
Step 3. Calculate
where \(l_id^0\) can be calculated from the splitting scheme (32) with f being replaced by \(d^0\) and \(g(x,t)=0, \ v=0\). Then, we set
Step 4. For \(k=1,2,\dots \), calculate \(r^k=-\nabla J_\gamma (f^k), d^k=r^k+\beta ^{k}d^{k-1},\) where
Step 5. Calculate \(\alpha ^k\)
where \(l_id^k\) can be calculated from the splitting scheme (32) with f being replaced by \(d^k\) and \(g(x,t)=0, \ v=0\). Then, set
4 Numerical Examples
To illustrate the performance of the proposed algorithm, we present in this section some numerical tests. The algorithms were implemented in Matlab and run on a personal laptop with an 11th Gen Intel(R) Core(TM) i5 processor (2.4 GHz, 4 cores, 8 logical processors).
4.1 One-Dimensional Problems
In this subsection, we present some numerical examples to estimate singular values and determine f. Let \(\varOmega =(0,1)\) and \(T=1\). Consider the one-dimensional system
where
For the discretization, we take the grid size to be 0.02 in both x and t. We take three observations at \(x^{10}=0.2\), \(x^{25}=0.5\) and \(x^{35}=0.7\). The weight functions \(\omega _i(x), i =1, 2, 3,\) are chosen as follows
Approximate singular values of \(\mathcal A\) for the cases when f depends only on the time variable t and only on the space variable x are drawn in Fig. 1. From this figure, we see that the singular values for the case when f depends only on x are much smaller than those for the case when f depends only on t. Therefore, the problem of reconstructing \(f=f(x)\) is much more ill-posed than that of reconstructing \(f=f(t)\).
Now we present numerical results for reconstructing f(x, t). We test three types of f(x, t): smooth, non-smooth and discontinuous in the following examples.
Example 1
Example 2
Example 3
In all three examples above, the a priori estimate is \(f^*=0.02(\text {rand}(N_x,M)-0.5)+f\), the noise level \(\delta =0.02\), \(\gamma =10^{-2}\), and the initial iterate of the conjugate gradient method is \(f^0=0\). Numerical solutions are presented in Figs. 2, 3 and 4.
4.2 Two-Dimensional Problems
We consider the domain \(\varOmega = (0,1)\times (0,1), \ T=1\), and denote the space variable by \(x=(x_1,x_2)\). We take four observations distributed over four subdomains: \((0,0.5)\times (0,0.5)\), \((0.5,1)\times (0,0.5)\), \((0.5,1)\times (0.5,1)\) and \((0,0.5)\times (0.5,1)\).
Consider the system
The grid sizes are chosen to be 0.02 in both x and t. The weight functions \(\omega _i(x), \ i=1, 2, 3, 4,\) are chosen as follows
We test our algorithm for three cases of f: (1) \(f=f(t)\), (2) \(f=f(x)\) and (3) \(f=f(x,t)\).
Example 4
We choose the a priori estimate \(f^*=0\), regularization parameter \(\gamma =10^{-2}\), \(f^0=0\), noise level \(\delta =0.02\) and
We suppose that f depends only on the time variable and has the form
1)
$$\begin{aligned} f(t)=\sin (2\pi t). \end{aligned}$$
2)
$$f(t)={\left\{ \begin{array}{ll} 2t &{} \text {if}\ t<0.5,\\ 2(1-t) &{} \text {otherwise}. \end{array}\right. }$$
3)
$$f(t)={\left\{ \begin{array}{ll} 1 &{} \text {if}\ 0.25\le t\le 0.75,\\ 0 &{} \text {otherwise}. \end{array}\right. }$$
The numerical results of Example 4 are shown in Fig. 5.
Example 5
We choose the a priori estimate \(f^*=0.02(\text {rand}(N_1,N_2)-0.5)+f\), regularization parameter \(\gamma =10^{-2}\), \(f^0=0\), noise level \(\delta =0.02\) and
We suppose that f depends only on the space variable and has the form
1)
$$\begin{aligned} f(x_1,x_2)=\sin (\pi x_1)\sin (\pi x_2). \end{aligned}$$
2)
$$f(x_1,x_2)={\left\{ \begin{array}{ll} 2x_2 &{} \text {if}\ x_2\le 0.5\ \text {and}\ x_2\le x_1\le 1-x_2,\\ 2(1-x_2)&{} \text {if}\ x_2\ge 0.5\ \text {and}\ x_2\ge x_1\ge 1-x_2,\\ 2x_1&{} \text {if}\ x_1\le 0.5\ \text {and}\ x_1\le x_2\le 1-x_1,\\ 2(1-x_1)&{} \text {otherwise}. \end{array}\right. }$$
3)
$$f(x_1,x_2)={\left\{ \begin{array}{ll} 1&{} \text {if}\ 0.25\le x_1\le 0.75 \ \text {and}\ 0.25\le x_2\le 0.75,\\ 0&{} \text {otherwise}. \end{array}\right. }$$
The numerical results of Example 5 are shown in Figs. 6, 7 and 8.
Example 6
We choose the a priori estimate \(f^*=0.02(\text {rand}(N_1,N_2,M)-0.5)+f\), regularization parameter \(\gamma =10^{-2}\), \(f^0=0\), noise level \(\delta =0.02\) and
We suppose that f depends on both the space and time variables as follows
The results of Example 6 are shown in Fig. 9.
We now discuss the role of \(f^*.\) We will see that its choice is important when the inverse problem has many solutions.
First, we assume that f depends only on the time variable, which guarantees the uniqueness of the solution to the inverse problem. We take several different values of \(f^*\) and observe that its choice does not affect the numerical solution much. The setting of this test is that of Example 4, where f depends only on the time variable, with regularization parameter \(\gamma =10^{-2}\), \(f^0=0\) and noise level \(\delta =0.02\). The numerical results with \(f^*=0, f^*=2\) and \(f^*=5\), presented in Fig. 10 and Table 1, differ little from each other.
In the case when the solution is not unique, the choice of \(f^*\) is crucial. As mentioned above, there may be infinitely many solutions to the inverse problem, so the prediction \(f^*\) plays a significant role in selecting the solution. We use the system of Example 6, where f depends on both the time and space variables, with regularization parameter \(\gamma =10^{-2}\), \(f^0=0\) and noise level \(\delta =0.02\). By varying \(f^*\) near f, we can see that the conjugate gradient method reconstructs the approximation which is closest to \(f^*\).
In the test, we choose \(f^*\) by
The numerical results are presented in Figs. 11, 12, 13 and Table 2. We can see that if \(f^*\) is not close to the exact f, the algorithm cannot reconstruct the chosen f, but possibly another one.
In the last example, we test the case of more observations. The a priori estimate is \(f^*=0\), the noise level \(\delta =0.02\), the regularization parameter \(\gamma =10^{-2}\), \(f^0=0\), and \(a_1(x,t), a_2(x,t), a(x,t)\) and the initial condition v are chosen as in Example 4. The grid sizes are chosen to be 0.02 in x and in t. We choose nine observations in the subdomains \((0,0.34)\times (0,0.34)\), \((0,0.34)\times (0.34,0.68)\), \((0,0.34)\times (0.68,1)\), \((0.34,0.68)\times (0,0.34)\), \((0.34,0.68)\times (0.34,0.68)\), \((0.34,0.68)\times (0.68,1)\), \((0.68,1)\times (0,0.34)\), \((0.68,1)\times (0.34,0.68)\) and \((0.68,1)\times (0.68,1)\). The results for reconstructing f are shown in Fig. 14. The comparison of the errors between 3 observations and 9 observations is presented in Table 3. We can see that the numerical results for the case of 9 observations are better than those for the case of 3 observations.
References
Borukhov, V.T., Vabishchevich, P.N.: Numerical solution of an inverse problem of source reconstructions in a parabolic equation. Mat. Model. 10(11), 93–100 (1998). (Russian)
Erdem, A., Lesnic, D., Hasanov, A.: Identification of a spacewise dependent heat source. Appl. Math. Model. 37, 10231–10244 (2013)
Hào, D.N.: A noncharacteristic Cauchy problem for linear parabolic equations II: a variational method. Numer. Funct. Anal. Optim. 13, 541–564 (1992)
Hào, D.N.: Methods for Inverse Heat Conduction Problems. Peter Lang Verlag, Frankfurt/Main, Bern, New York, Paris (1998)
Hào, D.N., Huong, B.V., Oanh, N.T.N., Thanh, P.X.: Determination of a term in the right-hand side of parabolic equations. J. Comput. Appl. Math. 309, 28–43 (2017)
Hào, D.N., Thành, N.T., Sahli, H.: Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math. 232, 361–377 (2009)
Hamdi, A.: Identification of a time-varying point source in a system of two coupled linear diffusion-advection-reaction equations: application to surface water pollution. Inverse Prob. 25, 115009 (2009)
Hasanov, A., Pektaş, B.: A unified approach to identifying an unknown spacewise dependent source in a variable coefficient parabolic equation from final and integral overdeterminations. Appl. Numer. Math. 78, 49–67 (2014)
Isakov, V.: Inverse Source Problems. Amer. Math. Soc, Providence, RI (1990)
Isakov, V.: Inverse Problems for Partial Differential Equations, 2nd edn. Springer, New York (2006)
Ladyzhenskaya, O.A.: The Boundary Value Problems of Mathematical Physics. Springer-Verlag, New York (1985)
Lavrent’ev, M.M., Maksimov, V.I.: On the reconstruction of the right-hand side of a parabolic equation. Comput. Math. Math. Phys. 48, 641–647 (2008)
Ling, L.V., Takeuchi, T.: Point sources identification problems for heat equations. Commun. Comput. Phys. 5, 897–913 (2009)
Ling, L.V., Yamamoto, M., Hon, Y.C., Takeuchi, T.: Identification of source locations in two-dimensional heat equations. Inverse Prob. 22, 1289–1305 (2006)
Marchuk, G.I.: Methods of Numerical Mathematics. Springer-Verlag, New York (1975)
Marchuk, G.I.: Splitting and alternating direction methods. In: Ciarlet, P.G., Lions, J.-L. (eds.) Handbook of Numerical Analysis. Volume 1: Finite Difference Methods. Elsevier Science Publishers B.V., North-Holland, Amsterdam (1990)
Oanh, N.T.N., Huong, B.V.: Determination of a time-dependent term in the right hand side of linear parabolic equations. Acta Math. Vietnam. 41, 313–335 (2016)
Prilepko, A.I., Tkachenko, D.S.: The Fredholm property and the well-posedness of the inverse source problem with integral overdetermination. Comput. Math. Math. Phys. 43, 1338–1347 (2003)
Thành, N.T.: Infrared Thermography for the Detection and Characterization of Buried Objects. PhD thesis, Vrije Universiteit Brussel, Brussels, Belgium (2007)
Trefethen, L.N., Bau, D., III: Numerical Linear Algebra. SIAM, Philadelphia (1997)
Tröltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Amer. Math. Soc., Providence, Rhode Island (2010)
Vabishchevich, P.N.: Numerical solution of the problem of the identification of the right-hand side of a parabolic equation. Russian Math. (Iz. VUZ) 47(1), 27–35 (2003)
Wloka, J.: Partial Differential Equations. Cambridge Univ. Press, Cambridge (1987)
Yamamoto, M.: Conditional stability in determination of densities of heat sources in a bounded domain. In: Control and Estimation of Distributed Parameter Systems: Nonlinear Phenomena (Vorau, 1993), 359–370, Internat. Ser. Numer. Math. 118, Birkhäuser, Basel (1994)
Yanenko, N.N.: The Method of Fractional Steps. Springer-Verlag, Berlin, Heidelberg, New York (1971)
Acknowledgements
This work was supported by Vietnam Ministry of Education and Training under grant number B2024-TDV-12.
Oanh, N.T.N. Source Identification for Parabolic Equations from Integral Observations by the Finite Difference Splitting Method. Acta Math Vietnam 49, 283–308 (2024). https://doi.org/10.1007/s40306-024-00536-6
Keywords
- Source identification
- Integral observations
- Least squares method
- Tikhonov regularization
- Conjugate gradient method