Abstract
The problem of determining a time-dependent term from integral observations in the right-hand side of parabolic equations is studied. It is reformulated into a variational problem, and a formula for the gradient of the functional to be minimized is derived via an adjoint problem. The variational problem is discretized by the splitting method based on finite differences. A formula for the gradient of the discretized functional is given and the conjugate gradient method is suggested for numerically solving the problem. Several numerical examples are presented for illustrating the efficiency of the algorithm.
1 Introduction
In many practical contexts, the sources in diffusion or heat transfer processes are not known and must be determined from additional conditions such as observations or measurements [1, 2, 6, 9, 15, 16]. These are inverse problems of determining a term in the right-hand side of parabolic equations. Owing to their practical importance, they have attracted a great number of researchers, and many theoretical and numerical methods have been developed for them [3–5, 11–13, 17–19, 25–28]. As the literature on these inverse problems is vast, we do not attempt a review here, but refer to the excellent survey by Prilepko and Tkachenko [26], the recent paper by Hasanov [12], and the references therein, as well as those in the above cited works.
In this paper, we concentrate on a particular case: determining a time-dependent term in the right-hand side of parabolic equations from an integral observation. We note that the problem of determining a time-dependent term in the right-hand side of parabolic equations has mainly been investigated in the one-dimensional case (see the above cited references) with pointwise observations, except for [19, 24], where some multidimensional problems with integral observations are studied. Our motivation for this formulation is that in practice any measuring instrument has a positive width, so any measurement is an average. Integral observations are therefore more reasonable than pointwise ones; from this viewpoint, a pointwise observation can be regarded as an averaging process or the limit of averaging processes. Moreover, when solutions of multidimensional direct problems are understood in the weak sense, pointwise observations in general do not make sense, since the solutions need not be continuous. We reformulate the inverse problem in our setting as a variational problem, prove that the functional to be minimized is Fréchet differentiable, and derive a formula for its gradient. To solve the variational problem numerically, one could use the standard finite element method and prove convergence results as in [14]. In this paper, however, we use the finite difference method instead: for problems discretized by finite differences, one can apply the splitting method and thus reduce high-dimensional problems to smaller ones [10]. We derive the gradient of the discretized functional to be minimized, then use the conjugate gradient (CG) method [23] to solve the problem, and test the algorithm numerically.
This paper is organized as follows. In the next section, we formulate the inverse problem and its variational formulation. In Section 3, we discretize the variational problem by the splitting method and derive the gradient of the discretized functional. In the last section, we present numerical experiments demonstrating the efficiency of the algorithm.
2 Problem Setting and Its Variational Formulation
Let Ω be an open bounded domain in \(\mathbb {R}^{n}\). Denote by ∂Ω the boundary of Ω, Q := Ω × (0, T], and S := ∂Ω×(0, T]. Consider the following problem
Here, a i , i = 1,…, n, b and φ are in L ∞(Q), g ∈ L 2(Q), f ∈ L 2(0, T) and u 0 ∈ L 2(Ω). It is assumed that \(a_{i} \geq \underline {a} > 0\) with \(\underline {a}\) being a given constant and b≥0. Furthermore,
with \(\underline {\varphi }\) being a given constant.
To introduce the concept of weak solution, we use the standard Sobolev spaces H 1(Ω), \({H^{1}_{0}}({\Omega })\), H 1,0(Q) and H 1,1(Q) [20, 29, 31]. Further, for a Banach space B, we define
with the norm
In the sequel, we shall use the space W(0, T) defined as
equipped with the norm
We note here that \(({H^{1}_{0}}({\Omega }))^{\prime } = H^{-1}({\Omega })\).
The solution of the problem (2.1) is understood in the weak sense as follows: A weak solution in W(0, T) of the problem (2.1) is a function u(x, t) ∈ W(0, T) satisfying the identity
and
Following [31, Chapter IV] and [29, p. 141–152] we can prove that there exists a unique solution in W(0, T) of the problem (2.1). Furthermore, there is a positive constant c d independent of a i , b, f, φ, g and u 0 such that
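A typical form of such an a priori estimate, written here as an illustration under the stated assumptions (the exact norms and terms in (2.4) may differ), is:

```latex
\| u \|_{W(0,T)} \le c_d \left( \| f\varphi + g \|_{L^{2}(Q)} + \| u_0 \|_{L^{2}(\Omega)} \right).
```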
In this paper, we will consider the inverse problem of determining the time-dependent term f(t) from an integral observation of the solution u. Namely, we try to reconstruct f(t) from the observation
where ω(x) is a weight function. We suppose that ω ∈ L ∞(Ω), nonnegative almost everywhere in Ω and \({\int }_{\Omega }\omega (x) dx > 0\). The observation data h is supposed to be in L 2(0, T).
Before going further, let us discuss the kind of observation. Suppose that x 0∈Ω is the point at which we want to observe the heat transfer (or diffusion) process, i.e., the solution u, and that Ω1 is a neighborhood of it. Let ω be of the form
with |Ω1| being the volume of Ω1. Then lu represents the result of the measurement and can be understood as an average of u(x 0, t) if the latter exists. If we let |Ω1| tend to zero, lu converges to u(x 0, t). However, since the solution u is understood in the weak sense, u(x 0, t) does not always make sense. Thus, it is more reasonable to formulate the inverse problem of determining f(t) in the form (2.1), (2.5), rather than in the form (2.1) and
We note that the solvability of the inverse problem (2.1), (2.7) has been proved by Prilepko and Solov’ev by the Rothé method [25]. The solvability of the inverse problem (2.1), (2.5) has been proved in [24]. However, numerical methods for the latter have not yet been developed. Our aim is to suggest a stable numerical method for it, as follows.
We denote the solution u(x, t) of (2.1) by u(x, t, f) (or u(f) if there is no confusion) to emphasize its dependence on the unknown function f(t). Following the least-squares approach [7, 8], we estimate the unknown function f(t) by minimizing the objective functional
over L 2(0, T).
To stabilize this variational problem, we minimize the Tikhonov functional
with γ being a regularization parameter which has to be chosen properly and f ∗ an estimate of f which is assumed to be in L 2(0, T). If γ > 0, it is easy to prove that there exists a unique solution to the minimization problem (2.9) over L 2(0, T). We now prove that J γ is Fréchet differentiable and derive a formula for its gradient.
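Before turning to the gradient, here is a concrete discrete evaluation of the Tikhonov functional (2.9); this is a sketch assuming that l u(f), h, f and f ∗ are sampled on a uniform time grid, with the L 2(0, T) norms approximated by simple Riemann sums (the function and parameter names are our own):

```python
import numpy as np

def tikhonov_functional(lu_f, h, f, f_star, gamma, dt):
    """Discrete analogue of
        J_gamma(f) = 1/2 ||l u(f) - h||^2 + gamma/2 ||f - f*||^2,
    with both L^2(0,T) norms approximated by Riemann sums of step dt.
    lu_f : values of the observation l u(f) on the time grid
    h    : measured data on the same grid
    """
    misfit = 0.5 * dt * np.sum((lu_f - h) ** 2)
    penalty = 0.5 * gamma * dt * np.sum((f - f_star) ** 2)
    return misfit + penalty
```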
Theorem 1
The functional J γ is Fréchet differentiable and its gradient ∇J γ (f) at f has the form
where p(x,t) satisfies the adjoint problem
Proof
We note that if we change the time direction in the adjoint problem (2.11), then we get a problem of the same type as the direct problem (2.1). Therefore, if the solution of the adjoint problem is understood in the weak sense, as for u, there exists a unique weak solution of it in W(0, T).
Denote the scalar product in L 2(0, T) by 〈⋅,⋅〉. For an infinitesimally small variation δ f of f, we have
where δ u(f) is the solution to the problem
Due to the estimate (2.4), \(\|l\delta u(f)\|^{2}_{L^{2}(0,T)} = o(\|\delta f\|_{L^{2}(0,T)})\) as \(\|\delta f\|_{L^{2}(0,T)} \rightarrow 0\).
We have
Using Green’s formula (see [29, Theorem 3.18]) for (2.12) and (2.11), we have
Hence,
Consequently, J 0 is Fréchet differentiable and its gradient has the form
From this equality, we immediately arrive at (2.10). The proof is complete. □
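For orientation, the following sketch evaluates a discrete analogue of the gradient formula (2.10), under the assumption, typical for this class of problems, that it reads ∇J γ(f)(t) = ∫Ω φ(x, t) p(x, t) dx + γ(f(t) − f ∗(t)); the grid layout and names are our own:

```python
import numpy as np

def gradient_J(phi, p, f, f_star, gamma, dx):
    """Assumed discrete form of (2.10):
        grad J_gamma(f)(t) = int_Omega phi(x,t) p(x,t) dx
                             + gamma (f(t) - f*(t)),
    with phi and p given on a space-time grid (rows: time steps,
    columns: spatial nodes) and the spatial integral approximated
    by a Riemann sum with cell size dx."""
    return dx * np.sum(phi * p, axis=1) + gamma * (f - f_star)
```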
With the gradient of J γ at hand, we use the CG method to find the minimizers of the objective functional (2.8). It proceeds as follows:
where
To determine α k, we proceed as follows. We denote by \(\tilde {u}[f]\) the solution of the problem
and \(\overline {u}[u_{0},g]\) the solution of the problem
Then,
where \(Af := l\tilde {u}[f] \) is a bounded linear operator from L 2(0, T) into itself. We now evaluate α k which is the solution of the minimization problem
We have
The derivative of J γ (f k + α d k) with respect to α thus has the form
Letting \(\frac {d J_{\gamma }(f^{k}+\alpha d^{k})}{d\alpha }=0\), we have
From (2.13), α k can be rewritten as
Therefore,
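The resulting step size has the standard form for a linearly parametrized least-squares problem; a sketch, assuming (as in the discrete Steps 3 and 5 below) that α k = ‖r k‖² / (‖A d k‖² + γ‖d k‖²), with norms approximated by Riemann sums:

```python
import numpy as np

def cg_step_size(r, A_d, d, gamma, dt):
    """Assumed step-size formula
        alpha_k = ||r^k||^2 / (||A d^k||^2 + gamma ||d^k||^2).
    r   : negative gradient at f^k
    A_d : l u~[d^k], in the paper computed from the splitting scheme
          with g = 0, u_0 = 0 and right-hand side term d^k
    d   : current descent direction
    The L^2(0,T) norms are approximated by Riemann sums with step dt."""
    num = dt * np.sum(r ** 2)
    den = dt * np.sum(A_d ** 2) + gamma * dt * np.sum(d ** 2)
    return num / den
```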
The above iterative process is written for the continuous problem. To find the minimizers of J γ (f), we have to discretize the direct and adjoint problems (2.1) and (2.11), as well as the functional J γ . We could, of course, use the solutions of the discretized direct and adjoint problems to approximate the gradient of J γ via the formula (2.10). However, in practice such an approach yields a good approximation to the gradient only at the first iteration. We therefore first discretize the direct problem (2.1), form a discretization of J γ , and then, based on this, introduce the discretized adjoint problem to obtain the gradient of the discretized functional. We do this by the finite difference method, as it is straightforward and makes it possible to reduce the dimension of the minimization problem by the splitting method.
3 Discretization of the Variational Problem
Suppose that Ω := (0, L 1) × (0, L 2) × ⋯ × (0, L n ) in \( \mathbb {R}^{n} \), where L i , i = 1,…, n are given positive numbers. In the first part of this section, we will present the splitting finite difference scheme for the multidimensional problems. Next, we will discretize the variational problem and derive a formula for the gradient of the discretized functional and then describe the CG method for it. In the next section, we will present our numerical examples for illustrating the efficiency of the algorithms.
3.1 Splitting Finite Difference Scheme for the Direct Problem
Following [21, 22, 32] (see also [10, 30]), we subdivide the domain Ω into small cells by the rectangular uniform grid specified by
with h i = L i /N i being the grid size in the x i -direction, i = 1,…, n. To simplify the notation, we denote \(x^{k} := (x_{1}^{k_{1}}, \dots , x_{n}^{k_{n}}) \), where k := (k 1,…, k n ), 0≤k i ≤N i . We also denote by h := (h 1,…, h n ) the vector of spatial grid sizes and Δh := h 1⋯h n . Let e i be the unit vector in the x i -direction, i = 1,…, n, i.e. e 1 = (1,0,…,0) and so on. Denote
In the following, Ω h denotes the set of the indices of all interior grid points belonging to Ω. We also denote the set of the indices of all grid points belonging to \(\bar {\Omega }\) by \(\bar {\Omega }_{h}\). That is,
We also make use of the following sets
for i = 1,…, n. For a function u(x, t) defined in Q, we denote by u k(t) its approximate value at (x k, t). We define the following forward finite difference quotient with respect to x i
Now, taking into account the homogeneous boundary condition, we approximate the integrals in (2.3) as follows
Here, φ k(t), g k(t) and \(a_{i}^{k+\frac {e_{i}}{2}}(t)\) are approximations to the functions φ(x, t), g(x, t) and a i (x, t), respectively, at the grid point x k. We adopt the following convention: if φ(x, t) and/or g(x, t) is continuous, then we take φ k(t) and/or g k(t) to be its value at x k, and if a i (x, t) is continuous, we take \(a_{i}^{k+\frac {e_{i}}{2}}(t) := a_{i}(x^{k + \frac {h_{i} e_{i}}{2}},t)\). Otherwise, take
and
With the approximations (3.4)–(3.8), we have the following discrete analog of the Eq. (2.3)
We note that, using the discrete analog of the integration by parts and the homogeneous boundary condition u k = 0, η k = 0 for k i = 0, we obtain
Hence, replacing this equality into (3.11), we obtain the following system which approximates the original problem (2.1)
with \(\bar u = \{u^{k}, k\in {\Omega }_{h}\}\) being the grid function. The function \(\bar u_{0}\) is the grid function approximating the initial condition u 0(x):
and
for k ∈ Ω h . Moreover,
We note that the coefficient matrices Λ i are positive semi-definite (see, e.g., [30]). In order to obtain a splitting scheme for the Cauchy problem (3.13), we discretize it in time. We divide the time interval [0, T] into M subintervals by
with Δt = T/M. We denote \(u^{m+\delta } :=\bar u(t_{m}+\delta {\Delta } t),{{\Lambda }_{i}^{m}} :={\Lambda }_{i}(t_{m}+{\Delta } t/2).\) We introduce the following implicit two-circle component-by-component splitting scheme [21]
Equivalently,
where E i is the identity matrix corresponding to Λ i , i = 1,…, n. The splitting scheme (3.18) can be rewritten in the following compact form
with
where \({A_{i}^{m}} := \left (E_{i}+\frac {\Delta t} 4{{\Lambda }_{i}^{m}}\right )^{-1}\left (E_{i}-\frac {\Delta t} 4{{\Lambda }_{i}^{m}}\right ), i=1,\dots , n.\)
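One factor \(A_i^m\) of the splitting scheme can be applied as in the following sketch; this is a minimal dense-matrix illustration (in practice each Λ i is tridiagonal and a tridiagonal solver would be used), with names of our own choosing:

```python
import numpy as np

def apply_split_factor(Lambda_i, v, dt):
    """Apply one splitting factor
        A_i^m = (E + dt/4 Lambda_i)^{-1} (E - dt/4 Lambda_i)
    to a vector v. Lambda_i is the (positive semi-definite) finite
    difference matrix in the x_i-direction at the half time level.
    A dense solve is used here for clarity only."""
    n = Lambda_i.shape[0]
    E = np.eye(n)
    rhs = (E - 0.25 * dt * Lambda_i) @ v
    return np.linalg.solve(E + 0.25 * dt * Lambda_i, rhs)
```

A full time step of the two-circle scheme then applies the factors in the order A 1, …, A n , A n , …, A 1 (plus the source terms, omitted in this sketch).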
It can be proved [10, 30] that the scheme (3.17) is stable and that there exists a positive constant c d d independent of the coefficients a i , i = 1,…, n, and b such that
When the space dimension is one, we approximate (3.13) by the Crank–Nicolson method, and the solution of the discretized problem reduces to the form (3.18).
3.2 Discretization of the Variational Problem
We discretize the objective functional J 0(f) as follows:
where u k, m(f) indicates the dependence on the right-hand side term f, and m is the index of the grid points on the time axis. The notation ω k = ω(x k) means an approximation of the function ω(x) in Ω at the points x k; for example, we take
In this subsection, for simplicity of notation, by writing f we mean the grid function defined on the grid {0,Δt,…, MΔt} with the norm \(\|f\| = ({\Delta } t{\sum }_{m=1}^{M}| f^{m}|^{2})^{1/2}\). With this notation, we discretize the functional l u(f) by
with
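In discrete form, the observation amounts to a weighted sum of the grid solution over the spatial nodes at each time level; a sketch under that reading (names hypothetical):

```python
import numpy as np

def lh(omega, u_m, dh):
    """Discrete observation l_h u^m = dh * sum_k omega^k u^{k,m},
    approximating the integral observation (2.5).
    omega, u_m : grid functions on the interior nodes
    dh         : product of the grid sizes, h_1 * ... * h_n"""
    return dh * np.sum(omega * u_m)
```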
To minimize (3.22) by the conjugate gradient method, we first calculate the gradient of the objective function \(J_{0}^{h,{\Delta } t}(f)\) as follows.
Theorem 2
The gradient \(\nabla J_{0}^{h,{\Delta } t}(f)\) of the objective function \(J_{0}^{h,{\Delta } t}\) at f is given by
where η satisfies the adjoint problem
with
and the matrices (A m ) ∗ and (B m ) ∗ being given by
Proof
For an infinitesimally small variation δ f of f, we have from (3.22) that
where v m = {v k, m:=u k, m(f + δ f)−u k, m(f)}.
It follows from (3.19) that v is the solution to the problem
Taking the inner product of both sides of the mth equation of (3.30) with an arbitrary vector \(\eta ^{m}\in \mathbb {R}^{N_{1}\times \ldots \times N_{n}}\) and then summing the results over m = 0,…, M−1, we obtain
Here, 〈⋅,⋅〉 is the inner product in \(\mathbb {R}^{N_{1}\times \ldots \times N_{n}}\) and \(\left (A^{m}\right )^{\ast }\) is the adjoint matrix of A m. Taking the inner product of both sides of the first equation of (3.26) with an arbitrary vector v m+1, summing the results over m = 0,…, M−2, we obtain
Taking the inner product of both sides of the second equation of (3.26) with an arbitrary vector v M, we have
From (3.32) and (3.33), we have
From (3.31), (3.34), we obtain
Because v 0 = 0, we have
On the other hand, from (3.21), we have \({\sum }_{m=1}^{M}{\sum }_{k\in {\Omega }_{h}}\left (\omega ^{k}v^{k,m}\right )^{2}=o(\|\delta f\|)\). Hence, it follows from (3.29) and (3.35) that
Consequently, \(J_{0}^{h,{\Delta } t}\) is differentiable and its gradient has the form (3.25). □
Remark 1
Since the matrices Λ i , i = 1,…, n are symmetric, we have for m = 0,…, M−1
Similarly,
3.3 Conjugate Gradient Method
The conjugate gradient method applied to the discretized functional (3.22) now takes the form:
Step 1 Given an initial approximation \(f^{0} \in \mathbb {R}^{M+1}\), calculate the residual \(\hat r^{0}=({l_{h}^{1}}u(f^{0})-h^{1},{l_{h}^{2}}u(f^{0})-h^{2},\ldots , {l_{h}^{M}}u(f^{0})-h^{M})\) by solving the splitting scheme (3.17) with f replaced by the initial approximation f 0, and set k = 0.
Step 2 Calculate the gradient r 0 = −∇J γ (f 0) given in (3.25) by solving the adjoint problem (3.26). Then, we set d 0 = r 0.
Step 3 Calculate
where l h d 0 are calculated from the splitting scheme (3.17) with f being replaced by d 0 and g(x, t) = 0, u 0 = 0. Then, set
Step 4 For k = 1, 2, ⋯, calculate r k = −∇J γ (f k), d k = r k + β k d k−1, where
Step 5 Calculate α k
where l h d k are calculated from the splitting scheme (3.17) with f being replaced by d k and g(x, t) = 0, u 0 = 0. Then, set
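Steps 1–5 can be summarized in code; the sketch below replaces the splitting-scheme evaluation of f ↦ l ũ[f] by a generic matrix A (an assumption made purely for illustration; in the paper, A is never formed explicitly) and uses the stopping rule from Section 4:

```python
import numpy as np

def cg_minimize(A, h, f_star, gamma, f0, tol=1e-3, max_iter=200):
    """Sketch of the CG iteration (Steps 1-5) for minimizing
        1/2 ||A f - h||^2 + gamma/2 ||f - f*||^2,
    with a matrix A standing in for the operator f -> l u~[f]."""
    f = f0.copy()
    r = -(A.T @ (A @ f - h) + gamma * (f - f_star))  # r^0 = -grad J
    d = r.copy()
    for _ in range(max_iter):
        if r @ r < 1e-30:                  # gradient vanished: minimizer found
            return f
        Ad = A @ d
        alpha = (r @ r) / (Ad @ Ad + gamma * (d @ d))
        f_new = f + alpha * d
        if np.linalg.norm(f_new - f) < tol:    # stopping rule of Section 4
            return f_new
        r_new = -(A.T @ (A @ f_new - h) + gamma * (f_new - f_star))
        beta = (r_new @ r_new) / (r @ r)       # Fletcher-Reeves update
        d = r_new + beta * d
        f, r = f_new, r_new
    return f
```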
4 Numerical Simulation
In this section, we present numerical examples showing that our algorithm is efficient. Let T = 1. We test our algorithm by reconstructing the following functions:
- Example 1: f(t) = sin(π t),
- Example 2: \( f(t) = \left \{\begin {array}{ll} 2 t &\text { if } 0 \leq t \leq 0.5\\ 2(1-t) &\text { if } 0.5 \leq t \leq 1 \end {array}\right ., \)
- Example 3: \( f(t) = \left \{\begin {array}{ll} 1 &\text { if } 0.25 \leq t \leq 0.75\\ 0 &\text { otherwise} \end {array}\right .. \)
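The three test functions can be written directly in code; a sketch for reference (function names are our own):

```python
import numpy as np

def f1(t):
    """Example 1: very smooth."""
    return np.sin(np.pi * t)

def f2(t):
    """Example 2: hat function, not differentiable at t = 0.5."""
    return np.where(t <= 0.5, 2 * t, 2 * (1 - t))

def f3(t):
    """Example 3: discontinuous indicator of [0.25, 0.75]."""
    return np.where((t >= 0.25) & (t <= 0.75), 1.0, 0.0)
```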
The reason for choosing these functions is that the first is very smooth, the second is not differentiable at t = 0.5, and the last is discontinuous. Thus, the examples have different degrees of difficulty.
From these test functions, we take explicit solutions u of (2.1) and explicit functions φ and f, and then calculate the remaining term g in the right-hand side of (2.1). From u, we compute l u = h and then add random noise to h. The numerical simulation takes the noisy data h and reconstructs f from it by our algorithm. We stop the algorithm when ∥f k+1−f k∥ is small enough, say less than 10−3. We then compare the numerical solution with the exact one to show the efficiency of our approach.
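The exact noise model is not spelled out above; the sketch below uses one plausible choice, uniform multiplicative noise of a given relative level (the function name and the model are assumptions of ours):

```python
import numpy as np

def add_noise(h_exact, level, rng=None):
    """Perturb exact observations h by uniform multiplicative noise:
    each entry is scaled by a factor in [1 - level, 1 + level].
    This is one plausible model for the reported noise levels."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = level * (2 * rng.random(h_exact.shape) - 1)
    return h_exact * (1 + noise)
```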
We note that in our examples we took functions f(t) with f(0)=f(T) = 0. However, we also tested our algorithm on functions f with f(0)≠0 and/or f(T)≠0 and obtained equally good numerical results. Hence, to save space, we do not present them here.
4.1 One-Dimensional Problems
Let Ω=(0,1). We reconstruct the function f from the system
We take u(x, t) = sin(π x)(1−t), u 0(x) = sin(π x), φ(x, t) = (x 2+5)(t 2+5) and then substitute one of the above functions f into the system to obtain g(x, t). In the observation lu (2.16), we take one of the following weight functions: either
or
We note that the observation operator with the second weight function can be regarded as a pointwise observation.
The numerical results for these tests are presented in Figs. 1, 2, 3, 4, 5, and 6. From these results, we see that the reconstructions in the one-dimensional case are very good, even though the noise level is 10%. In Tables 1 and 2, we present the regularization parameters, the L 2-errors, the iteration counts at which we stop the algorithm, and the values of the objective function. These tables show that our algorithm is very accurate.
4.2 Two-Dimensional Problems
In this subsection, we present our numerical simulation for various problems. We take Ω=(0,1)×(0,1). In this case, (2.1) has the form
In all tests, we take the noise levels 10−1 and 10−2 and the weight function
The regularization parameter γ is taken to be 10−3. The numerical results for the noise level 10−2 are not much different from those for the noise level 10−1; therefore, we present our results for the latter case only.
As in the one-dimensional case, we substitute the given data a 1, a 2, b, f, φ and u into (4.3) to obtain g. After that, we add noise to lu to obtain the noisy data h, from which we apply our algorithm to reconstruct f.
Test 1. Example 1: (Fig. 7)
From our various tests, we observed that our algorithm is more stable when the function φ is “big.” If φ is small, the numerical results are not as good as in the case of “big” φ, as the following test shows (Fig. 8).
Test 2. Example 1:
We take the same equation as in Test 1 however with \(\varphi (x,t)=\left ({x_{1}^{2}}+1\right )\left ({x_{2}^{2}}+1\right )\left (t^{2}+1\right )\). We note that the norm of this function φ is less than that in Test 1.
The numerical results in this case are slightly worse than those in Test 1, as Figs. 9 and 10 show.
Test 3. Example 2:
Numerical results for this test are presented in Figs. 11 and 12.
Test 4. Example 3:
The numerical results for this test are presented in Figs. 13 and 14.
In Table 3, we present the errors in the L 2-norm and the values of the objective function for the different noise levels. From this table, we see that our algorithm is very accurate.
5 Conclusions
In this paper, we investigate the inverse problem of determining a time-dependent term in the right-hand side of parabolic equations from integral observations. We reformulate it as a variational problem, prove that the functional to be minimized is Fréchet differentiable, and derive a formula for its gradient via an adjoint problem. The variational problem is then discretized by the splitting finite difference method. The discretized functional to be minimized is proved to be differentiable, and its gradient is given via the discretized adjoint problem. The conjugate gradient method coupled with Tikhonov regularization is suggested for numerically solving the problem. Several numerical tests are carried out, showing that our algorithm is efficient.
References
Alifanov, O.M.: Inverse Heat Transfer Problems. Wiley, New York (1994)
Beck, J.V., Blackwell, B., St. Clair, C.R., Jr.: Inverse Heat Conduction: Ill-Posed Problems. Wiley, New York (1985)
Borukhov, V.T., Vabishchevich, P.N.: Numerical solution of an inverse problem of source reconstructions in a parabolic equation. Mat. Model. 10, 93–100 (1998). Russian
Borukhov, V.T., Vabishchevich, P.N.: Numerical solution of the inverse problem of reconstructing a distributed right-hand side of a parabolic equation. Comput. Phys. Comm. 126, 32–36 (2000)
Cannon, J.R.: Determination of an unknown heat source from overspecified boundary data. SIAM J. Numer. Anal. 5, 275–286 (1968)
Cannon, J.R.: The one-dimensional heat equation. Addison-Wesley Publication Company, California (1984)
Hào, D.N.: A noncharacteristic Cauchy problem for linear parabolic equations II: a variational method. Numer. Funct. Anal. Optim. 13, 541–564 (1992)
Hào, D.N.: A noncharacteristic Cauchy problem for linear parabolic equations III: a variational method and its approximation schemes. Numer. Funct. Anal. Optim. 13, 565–583 (1992)
Hào, D.N.: Methods for inverse heat conduction problems. Peter Lang Verlag, Frankfurt/Main, New York, Paris (1998)
Hào, D.N., Thành, N.T., Sahli, H.: Splitting-based gradient method for multi-dimensional inverse conduction problems. J. Comput. Appl. Math. 232, 361–377 (2009)
Farcas, A., Lesnic, D.: The boundary-element method for the determination of a heat source dependent on one variable. J. Eng. Math. 54, 375–388 (2006)
Hasanov, A.: Identification of spacewise and time-dependent source terms in 1D heat conduction equation from temperature measurement at a final time. Int. J. Heat Mass Trans. 55, 2069–2080 (2012)
Hasanov, A., Pektaş, B.: Identification of an unknown time-dependent heat source term from overspecified Dirichlet boundary data by conjugate gradient method. Comput. Math. Appl. 65, 42–57 (2013)
Hinze, M.: A variational discretization concept in control constrained optimization: the linear-quadratic case. Computat. Optimiz. Appl. 30, 45–61 (2005)
Isakov, V.: Inverse Source Problems. American Mathematical Society, Providence, RI (1990)
Isakov, V.: Inverse Problems for Partial Differential Equations, 2nd edn. Springer, New York (2006)
Iskenderov, A.D.: Some inverse problems on determining the right-hand sides of differential equations. Izv. Akad. Nauk Azerbaijan. SSR Ser. Fiz.-Tehn. Mat. Nauk 2, 58–63 (1976). Russian
Iskenderov, A.D., Tagiev, R.G.: An inverse problem on determinating the right hand side of evolution equations in Banach spaces. In Questions of Applied Mathematics and Cybernetics (A collection of Scientific Papers), Azerbaijan State University. Baku 1, 51–56 (1979)
Kriksin, Yu.A., Plyushchev, S.N., Samarskaya, E.A., Tishkin, V.F.: The inverse problem of source reconstruction for a convective diffusion equation. Mat. Model. 7(11), 95–108 (1995). Russian
Ladyzhenskaya, O.A.: The Boundary Value Problems of Mathematical Physics. Springer-Verlag, New York (1985)
Marchuk, G.I.: Methods of Numerical Mathematics. Springer-Verlag, New York (1975)
Marchuk, G.I.: Splitting and alternating direction methods. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis. Volume 1: Finite Difference Methods. Elsevier Science Publishers B.V., North-Holland, Amsterdam (1990)
Nemirovskii, A.S.: The regularizing properties of the adjoint gradient method in ill-posed problems. Zh. vychisl. Mat. mat. Fiz. 26, 332–347 (1986). Engl. Transl. in U.S.S.R. Comput. Maths. Math. Phys. 26(2), 7–16 (1986)
Orlovskii, D.G.: Solvability of an inverse problem for a parabolic equation in the Hölder class. Mat. Zametki 50(3), 107–112 (1991). (Russian); translation in Math. Notes 50(3), 952–956 (1991)
Prilepko, A.I., Solov’ev, V.V.: Solvability theorems and the Rothé method for inverse problems for a parabolic equation. I. Diff. Equ. 23, 1230–1237 (1988)
Prilepko, A.I., Tkachenko, D.S.: Inverse problem for a parabolic equation with integral overdetermination. J. Inverse Ill-Posed Probl. 11, 191–218 (2003)
Savateev, E.G.: On problems of determining the source function in a parabolic equation. J. Inv. Ill-Posed Probl. 3, 83–102 (1995)
Solovev, V.V.: Solvability of the inverse problem of finding a source, using overdetermination on the upper base for a parabolic equation. Diff. Equ. 25, 1114–1119 (1990)
Tröltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. American Mathematical Society, Providence, Rhode Island (2010)
Thành, N.T.: Infrared thermography for the detection and characterization of buried objects. PhD thesis, Vrije Universiteit Brussel, Brussels, Belgium (2007)
Wloka, J.: Partial Differential Equations. Cambridge University Press, Cambridge (1987)
Yanenko, N.N.: The Method of Fractional Steps. Springer-Verlag, Berlin, Heidelberg, New York (1971)
Acknowledgments
This research was supported by Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 101.02-2014.54.
Ngoc Oanh, N.T., Huong, B.V. Determination of a Time-Dependent Term in the Right-Hand Side of Linear Parabolic Equations. Acta Math Vietnam 41, 313–335 (2016). https://doi.org/10.1007/s40306-015-0143-y
Keywords
- Determination of the right-hand side
- Inverse problems
- Ill-posed problems
- Integral observations
- Splitting methods
- Conjugate gradient method