Abstract
We study the behaviour of the solutions to a dynamic evolution problem for a viscoelastic model with long memory, when the rate of change of the data tends to zero. We prove that a suitably rescaled version of the solutions converges to the solution of the corresponding stationary problem.
1 Introduction
The most common models of viscoelasticity with long memory, such as the Maxwell model (see [4, 5, 6, 15]), lead to a dynamic evolution governed by a system of partial differential equations of the form
in \(\Omega \) for \(t\in [0,T]\), where \(\Omega \subset {\mathbb {R}}^d\) is the reference configuration, [0, T] is the time interval, u(t) and eu(t) are the displacement at time t and the symmetric part of its gradient, \({\mathbb {A}}\) and \({\mathbb {B}}\) are the elasticity and viscosity tensors, \(\beta >0\) is a material constant, and \(\ell (t)\) is the external load at time t. This system is complemented by boundary and initial conditions
where z and \(u_{in}\) are prescribed functions, the latter representing the history of the displacement for \(t\le 0\). Existence and uniqueness for (1.1)–(1.3) can be found in [3].
In this paper we study the quasistatic limit of the solutions to this problem, i.e., the limit of these solutions when the rate of change of the data tends to zero. More precisely, given a small parameter \(\varepsilon >0\), we consider the solution \(u^{\varepsilon }\) of (1.1)–(1.3) corresponding to \(\ell (\varepsilon t)\), \(z(\varepsilon t)\), and \(u_{in}(\varepsilon t)\). To study the asymptotic behaviour of \(u^{\varepsilon }\) as \(\varepsilon \rightarrow 0^+\) it is convenient to introduce the rescaled solution \(u_{\varepsilon }(t):=u^{\varepsilon }(t/\varepsilon )\), which turns out to be the solution of the system
in \(\Omega \) for \(t\in [0,T]\), with boundary and initial conditions (1.2) and (1.3).
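The rescaling is a routine chain-rule computation; writing it out shows why the inertial term in the rescaled system carries the factor \(\varepsilon ^2\):

```latex
% From u_eps(t) := u^eps(t/eps) we get, by the chain rule,
\dot{u}_{\varepsilon }(t)=\frac{1}{\varepsilon }\,\dot{u}^{\varepsilon }(t/\varepsilon ),
\qquad
\ddot{u}_{\varepsilon }(t)=\frac{1}{\varepsilon ^{2}}\,\ddot{u}^{\varepsilon }(t/\varepsilon ),
% hence \ddot{u}^{\varepsilon }(t/\varepsilon )=\varepsilon ^{2}\ddot{u}_{\varepsilon }(t):
% the inertial term in the rescaled system is multiplied by \varepsilon ^{2}
% and formally disappears as \varepsilon \to 0^{+}, which is why the
% stationary problem (1.5) is the expected limit.
```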
Under different assumptions on \(\ell (t)\), z(t), and \(u_{in}(t)\) we prove (Theorems 3.6 and 3.7) that \(u_{\varepsilon }(t)\) converges, as \(\varepsilon \rightarrow 0^+\), to the solution \(u_0(t)\) of the stationary problem
with boundary condition (1.2).
By using just the energy-dissipation inequality, it is not difficult to prove a similar result for the Kelvin–Voigt model, in which the viscosity term
is replaced by \(-\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}e{\dot{u}}(t))\). On the other hand, in the case of the equation of elastodynamics without damping terms, i.e., when \({\mathbb {B}}=0\), by using the Fourier decomposition with respect to the eigenfunctions of the operator \(-\mathop {\mathrm{div}}\nolimits ({\mathbb {A}}eu)\), we can easily see that the convergence of \(u_{\varepsilon }\) to \(u_0\) does not hold in general. The purpose of this paper is to prove that the non-local damping term (1.6) is enough to obtain the convergence of the solutions of the evolution problems to the solution of the stationary problem.
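The failure of convergence when \({\mathbb {B}}=0\) can be seen already in a scalar toy analogue of a single Fourier mode (our own illustration, not part of the paper's argument): the rescaled undamped equation \(\varepsilon ^2\ddot{u}_{\varepsilon }+\lambda u_{\varepsilon }=\ell \) with zero initial data has solutions oscillating at frequency \(\sqrt{\lambda }/\varepsilon \) around the stationary solution \(u_0=\ell /\lambda \), with an amplitude that does not decay as \(\varepsilon \rightarrow 0^+\).

```python
import math

def undamped_mode(eps, ell=1.0, lam=1.0, t_min=0.5, t_max=1.0, n=200_000):
    """Scalar analogue of one Fourier mode of the rescaled undamped system:
    eps^2 * u'' + lam * u = ell, with u(0) = u'(0) = 0, whose closed-form
    solution is u(t) = (ell/lam) * (1 - cos(sqrt(lam) * t / eps)).
    Returns the maximal deviation of u from the stationary solution
    u0 = ell/lam, sampled on [t_min, t_max] (away from t = 0)."""
    u0 = ell / lam
    dev = 0.0
    for k in range(n + 1):
        t = t_min + (t_max - t_min) * k / n
        u = u0 * (1.0 - math.cos(math.sqrt(lam) * t / eps))
        dev = max(dev, abs(u - u0))
    return dev

# The oscillation amplitude stays of order 1 for every eps:
# u_eps does not converge to u_0 when the damping is absent.
for eps in (1e-1, 1e-2, 1e-3):
    print(eps, undamped_mode(eps))
```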
Our result can be considered in the framework of the study of quasistatic limits, i.e., the convergence of the solutions to second order evolution equations with rescaled time towards the solutions to the corresponding stationary equations. Similar problems in finite dimension have been studied in [1, 7, 10, 14]. A special case involving the wave equation on time-dependent intervals in dimension one has been studied in [8, 12]. The main novelty of our problem is the non-local form of the damping term, given by (1.6).
The main tools to prove our results are two different estimates (Lemmas 3.8 and 5.2), related to the energy-dissipation balance (2.25) and to the elliptic system (5.17) obtained from (1.4) via Laplace Transform. More details on the strategy of the proof will be given after Theorem 3.7, once all assumptions have been stated precisely.
2 Hypotheses and Statement of the Problem
Let d be a positive integer and let \(\Omega \subset {\mathbb {R}}^d\) be a bounded open set with Lipschitz boundary. We use standard notation for Lebesgue and Sobolev spaces. Let \({\mathbb {R}}^{d\times d}_{sym}\) be the space of all symmetric \(d{\times }d\) matrices. For convenience we set
and we always identify the dual of H with H itself. The symbols \((\cdot ,\cdot )\) and \(\Vert \cdot \Vert \) denote the scalar product and the norm in H or in \({\tilde{H}}\), according to the context. The duality product between \(V'_0\) and \(V_0\) is denoted by \(\langle \cdot ,\cdot \rangle \). Given \(u\in V\), its strain eu is defined as the symmetric part of its gradient, i.e., \(eu:=\frac{1}{2}(\nabla u+\nabla u^T)\), where \(\nabla u\) is the Jacobian matrix, whose components are \((\nabla u)_{ij}:= \partial _j u_i\) for \(i,j=1,\dots ,d\).
Under these assumptions, the Second Korn Inequality (see, e.g., [11, Theorem 2.4]) states that there exists a positive constant \(C_K=C_K(\Omega )\) such that
Moreover, there exists a positive constant \(C_{P}=C_P(\Omega )\) such that the following Korn–Poincaré Inequality holds (see, e.g., [11, Theorem 2.7]):
Thanks to (2.2) we can use on the space V the equivalent norm
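The displayed inequalities are the classical ones; up to the exact normalization chosen in [11], they can be written as follows, and they imply that \(\Vert u\Vert +\Vert eu\Vert \) defines a norm on V equivalent to the usual \(H^1\) norm:

```latex
% Second Korn Inequality: for every u \in V,
\Vert \nabla u\Vert \le C_K\bigl(\Vert u\Vert ^2+\Vert eu\Vert ^2\bigr)^{1/2};
% Korn–Poincaré Inequality: for every u \in V_0,
\Vert u\Vert _{H^1(\Omega ;\mathbb {R}^d)}\le C_P\Vert eu\Vert .
```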
Let \({\mathscr {L}}({\mathbb {R}}^{d\times d}_{sym};{\mathbb {R}}^{d\times d}_{sym})\) be the space of all linear operators from \({\mathbb {R}}^{d\times d}_{sym}\) into itself. We assume that the elasticity and viscosity tensors \({\mathbb {A}}\) and \({\mathbb {B}}\), which depend on the variable \(x\in \Omega \), satisfy the following assumptions:
and for a.e. \(x\in \Omega \)
where \(c_{{\mathbb {A}}}\), \(c_{{\mathbb {B}}}\), \(C_{{\mathbb {A}}}\), and \(C_{{\mathbb {B}}}\) are positive constants independent of x, and the dot denotes the Euclidean scalar product of matrices.
Let us fix \(T>0\) and \(\beta >0\). To give a precise meaning to the notion of solution to problem (1.2)–(1.4) we introduce the function spaces
Remark 2.1
By the Sobolev Embedding Theorem, if \(u\in {\mathcal {V}}\) (resp. \(u\in {\mathcal {V}}_{loc}\)), then
We study problem (1.2)–(1.4) with \(\ell \), z, and \(u_{in}\) depending on \(\varepsilon \). Let us consider \(\varepsilon >0\) and
\(u_{\varepsilon ,in}\in C^0((-\infty ,T);H)\cap C^1((-\infty ,T);V_0')\) such that
The notion of solution to (1.2)–(1.4) is made precise by the following definition.
Definition 2.2
We say that \(u_{\varepsilon }\) is a solution to the viscoelastic dynamic system (1.2)–(1.4), with forcing term \(\ell =f_{\varepsilon }+g_{\varepsilon }\), boundary condition \(z_{\varepsilon }\), and initial condition \(u_{\varepsilon ,in}\), if
In the next remark we shall see that (2.9) can be reduced to the following problem starting from 0:
with \(\varphi _{\varepsilon }\in L^2(0,T;H)\), \(\gamma _{\varepsilon }\in H^1(0,T;V'_0)\), \(u^0_{\varepsilon }\in V\), \(u^0_{\varepsilon }-z_{\varepsilon }(0)\in V_0\), \(u^1_{\varepsilon }\in H\).
Remark 2.3
It is easy to see that \(u_{\varepsilon }\) is a solution according to Definition 2.2 if and only if its restriction to [0, T], still denoted by \(u_{\varepsilon }\), solves (2.10) with
where
To solve problem (2.10) it is enough to study the corresponding problem with homogeneous boundary condition:
with
Remark 2.4
The function \(u_{\varepsilon }\) is a solution to (2.10) if and only if \({{v}}_{\varepsilon }=u_{\varepsilon }-z_{\varepsilon }\) solves (2.13) with
Therefore, existence and uniqueness for (2.13) imply existence and uniqueness for (2.10).
Remark 2.5
In [3] problem (2.13) has been studied with initial conditions taken in the sense of interpolation spaces. Given two Hilbert spaces X and Y, the symbol \([X,Y]_{\theta }\) denotes the interpolation space between X and Y of exponent \(\theta \in (0,1)\). Thanks to [9, Theorem 3.1] we have the following inclusions:
where \(V_0^{\frac{1}{2}}:=[V_0,H]_{\frac{1}{2}}\) and \(V_0^{-\frac{1}{2}}:=[H,V'_0]_{\frac{1}{2}}\). Consequently
Therefore, the initial conditions in (2.13) are satisfied also in the stronger sense
The following proposition provides the main properties of the solutions. We recall that, if X is a Banach space, \(C_w^0([0,T];X)\) denotes the space of all weakly continuous functions from [0, T] to X, namely, the vector space of all functions \(u:[0,T]\rightarrow X\) such that for every \(x'\in X'\) the function \(t\mapsto \langle x', u(t)\rangle \) is continuous from [0, T] to \({\mathbb {R}}\).
Proposition 2.6
Given \(\varepsilon >0\), assume (2.7) and (2.8). Then there exists a unique solution \(u_{\varepsilon }\) to the viscoelastic dynamic system (2.9). Moreover, it satisfies
Proof
By Remarks 2.3 and 2.4 it is enough to prove the theorem for (2.13). Existence and uniqueness are proved in [3], taking into account Remark 2.5 about the equivalence between the initial conditions in the sense of (2.13) and (2.16).
After an integration by parts with respect to time, it is easy to see that the weak formulation (2.13) is equivalent to the following one:
In [13], in a more general context, it has been proved that if \({{v}}_{\varepsilon }\) satisfies (2.18) and the initial conditions in the sense of (2.13), then it satisfies also
We fix \(s\in [0,T)\). We want to prove
Thanks to the theory developed in [3] there exists a unique \({\tilde{v}}_{\varepsilon }\in L^2(s,T;V_0)\cap H^1(s,T;H)\cap H^2(s,T;V'_{0})\) such that
By the results in [13] the function \({\tilde{v}}_{\varepsilon }\) satisfies also
Since clearly \({{v}}_{\varepsilon }\) satisfies (2.21) and (2.22), by uniqueness we have \({\tilde{v}}_{\varepsilon }(t)={{v}}_{\varepsilon }(t)\) for every \(t\in [s,T]\). In particular, from (2.23) we deduce that (2.20) holds. \(\square \)
To complete the proof we need the following proposition about the energy-dissipation balance, where \({\tilde{H}}\) is defined by (2.1), and \({\mathscr {W}}_{\varepsilon }(t)\) represents the work done in the interval [0, t].
Proposition 2.7
Given \(\varepsilon >0\), we assume (2.14). Let \({{v}}_{\varepsilon }\) be the solution to (2.13) and let \({w}_{\varepsilon }:[0,T]\rightarrow {\tilde{H}}\) be defined by
Then \({w}_{\varepsilon }\in H^1(0,T;{\tilde{H}})\) and the following energy-dissipation balance holds for every \(t\in [0,T]\):
where
Proof
It is convenient to extend the data of our problem to the interval [0, 2T] by setting
It is clear that \(h_{\varepsilon }\in L^2(0,2T;H)\) and \(\ell _{\varepsilon }\in H^1(0,2T;V'_0)\). By uniqueness of the solution to (2.13), the solution on [0, 2T] is an extension of \({{v}}_{\varepsilon }\), still denoted by \({{v}}_{\varepsilon }\). We also consider the extension of \({w}_{\varepsilon }\) on [0, 2T] defined by (2.24).
Since \(e{{v}}_{\varepsilon }\in L^2(0,2T;{{\tilde{H}}})\), it follows from (2.24) that \({w}_{\varepsilon }\in H^1(0,2T;{\tilde{H}})\), and
Thanks to (2.20) in [0, 2T] and (2.26) there exists a representative of \(\dot{{w}}_{\varepsilon }\) such that
Moreover, since \({{v}}_{\varepsilon }\) satisfies (2.13) in [0, 2T], we have for a.e. \(t\in [0,2T]\)
Multiplying (2.26) and (2.28) by \(\psi \in {\tilde{H}}\) and \(\varphi \in V_0\), respectively, and then integrating over \(\Omega \) and adding the results, for a.e. \(t\in [0,2T]\) we get
Given a function r from [0, 2T] into a Banach space X, for every \(\eta > 0\) we define the sum and the difference \(\sigma ^{\eta }r, \delta ^{\eta }r:[0, 2T -\eta ]\rightarrow X \) by
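A standard choice of these operators, consistent with how they are used in the computation below:

```latex
\sigma ^{\eta }r(t):=r(t+\eta )+r(t),
\qquad
\delta ^{\eta }r(t):=r(t+\eta )-r(t),
\qquad \text {for a.e. } t\in [0,2T-\eta ].
```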
For a.e. \(t\in [0,2T-\eta ]\) we have \(\sigma ^{\eta }{{v}}_{\varepsilon }(t), \delta ^{\eta }{{v}}_{\varepsilon }(t)\in V_0\) and \(\sigma ^{\eta }{w}_{\varepsilon }(t), \delta ^{\eta }{w}_{\varepsilon }(t)\in {\tilde{H}}\). For a.e. \(t\in [0,2T-\eta ]\) we use (2.29) first at time t and then at time \(t + \eta \), with \(\varphi :=\delta ^{\eta }{{v}}_{\varepsilon }(t)\) and \(\psi :=\delta ^{\eta }{w}_{\varepsilon }(t)\). By summing the two expressions and then integrating in time on the interval [0, t] we get
where for a.e. \(\tau \in [0,2T-\eta ]\)
An integration by parts in time gives
Moreover
We now divide by \({\eta }\) all terms of (2.31)–(2.35). Observing that
thanks to (2.20) in [0, 2T) and (2.27), we can pass to the limit as \({\eta }\rightarrow 0^+\), and from (2.30) we obtain that (2.25) is satisfied for every \(t\in [0,T]\). \(\square \)
Proof of Proposition 2.6 (Continuation)
Now we want to prove (2.17). By using (2.25), for every \(t\in [0,T]\) we can write
Let \(\Psi _{\varepsilon }:[0,T]\rightarrow [0,+\infty )\) be defined by
since \({w}_{\varepsilon }\in C^0([0,T];{\tilde{H}})\), thanks to (2.19) and (2.36) we have \(\Psi _{\varepsilon }\in C^0([0,T])\).
Now we fix \(t\in [0,T]\). Given a sequence \(\{t_k\}_k\subset [0,T]\) such that \(t_k\rightarrow t\) as \(k\rightarrow +\infty \), we define
By elementary computations we have
therefore, by (2.3) and (2.6) there exists a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega )\) such that
The right-hand side of the previous inequality tends to 0 as \(k\rightarrow +\infty \) because of (2.19) and the continuity of \(\Psi _{\varepsilon }\). Since \(z_{\varepsilon }\in C^0([0,T];V)\), by (2.7), and \(u_{\varepsilon }={{v}}_{\varepsilon }+z_{\varepsilon }\), we obtain (2.17). \(\square \)
3 Statement of the Main Results
In this section we present the main results about the convergence, as \(\varepsilon \rightarrow 0^+\), of the solutions \(u_{\varepsilon }\). We assume the following hypotheses on the dependence on \(\varepsilon >0\) of our data:
- (H1) \(\{f_{\varepsilon }\}_{\varepsilon }\subset L^2(0,T;H)\) and \(\{g_{\varepsilon }\}_{\varepsilon }\subset H^1(0,T;V'_0)\) are such that
$$\begin{aligned} f_{\varepsilon }&\xrightarrow [\varepsilon \rightarrow 0^+]{}f \quad \text {strongly in }L^2(0,T;H),\\ g_{\varepsilon }&\xrightarrow [\varepsilon \rightarrow 0^+]{}g \quad \text {strongly in } W^{1,1}(0,T;V'_0); \end{aligned}$$
- (H2) \(\{z_{\varepsilon }\}_{\varepsilon }\subset H^2(0,T;H)\cap H^1(0,T;V)\) is such that
$$\begin{aligned} z_{\varepsilon }\xrightarrow [\varepsilon \rightarrow 0^+]{}z\quad \text {strongly in } W^{2,1}(0,T;H)\cap W^{1,1}(0,T;V); \end{aligned}$$
- (H3) \(\{u_{\varepsilon ,in}\}_{\varepsilon }\subset C^0((-\infty ,0];V)\cap C^1((-\infty ,0];H)\), \(u_{in}\in C^0((-\infty ,0];V)\), and there exists \(a>0\) such that
$$\begin{aligned}&\begin{aligned} u_{\varepsilon ,in}&\xrightarrow [\varepsilon \rightarrow 0^+]{}u_{in}&\quad \text {strongly in } C^{0}([-a,0];V),\\ \varepsilon {\dot{u}}_{\varepsilon ,in}&\xrightarrow [\varepsilon \rightarrow 0^+]{}0&\quad \text {strongly in }C^{0}([-a,0];H), \end{aligned}\\&\begin{aligned}&\int _{-\infty }^{-a}\frac{1}{\beta \varepsilon }\mathrm {e}^{\frac{\tau }{\beta \varepsilon }}\Vert u_{\varepsilon ,in}(\tau )\Vert _V\mathrm {d}\tau \xrightarrow [\varepsilon \rightarrow 0^+]{}0,\\&\int _{-\infty }^{-a}\frac{1}{\beta \varepsilon }\mathrm {e}^{\frac{\tau }{\beta \varepsilon }}\Vert u_{in}(\tau )\Vert _V\mathrm {d}\tau \xrightarrow [\varepsilon \rightarrow 0^+]{}0. \end{aligned} \end{aligned}$$
Remark 3.1
Let \(u^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), \(u^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\), and \(u^0=u_{in}(0)\). Hypothesis (H3) implies
Our purpose is to show that the solutions \(u_{\varepsilon }\) converge, as \(\varepsilon \rightarrow 0^+\), to the solution \(u_0\) of the stationary problem (1.5) with boundary condition (1.2). The notion of solution to this problem is the usual one:
Remark 3.2
The existence and uniqueness of a solution \(u_0\) to (3.1) follows easily from the Lax–Milgram Lemma. Since \(f+g \in L^2(0,T;V'_0)\), the estimate for the solution implies also \(u_0\in L^2(0,T;V)\).
We shall sometimes use the corresponding problem with homogeneous boundary conditions:
with \(h\in L^2(0,T;H)\) and \(\ell \in H^1(0,T;V'_0)\).
Remark 3.3
The function \(u_0\) is a solution to (3.1) if and only if \(v_0=u_0-z\) is a solution to (3.2) with
The following lemma will be used to prove the regularity with respect to time of the solution to (3.1).
Lemma 3.4
Let \(m\in {\mathbb {N}}\) and \(p\in [1,+\infty )\). If \(f=0\), \(g\in W^{m,p}(0,T;V'_0)\), and \(z\in W^{m,p}(0,T;V)\), then the solution \(u_0\) to problem (3.1) satisfies \(u_0\in W^{m,p}(0,T;V)\).
Proof
By Remark 3.3 it is enough to consider the case \(z=0\). Let \(R:V'_0\rightarrow V_0\) be the resolvent operator defined as follows:
Since \(u_0(t)=R(g(t))\), the conclusion follows from the continuity of the linear operator R. \(\square \)
Remark 3.5
If \(f=0\), since \(g\in W^{1,1}(0,T;V'_0)\) and \(z\in W^{1,1}(0,T;V)\), we can apply Lemma 3.4 to obtain that the solution \(u_0\) to (3.1) belongs to \(W^{1,1}(0,T;V)\), hence \(u_0\in C^0([0,T];V)\).
In the final statement of the next theorem, besides (H1)–(H3) we assume \(f_{\varepsilon }=0\) and the following compatibility condition: there exists an extension of g (still denoted by g) such that
The meaning of (3.3) is that \(u_{in}(t)\) is in equilibrium with the external loads for \(t\in [-a,0]\). This condition is needed if we want to obtain uniform convergence of \(u_{\varepsilon }\) to \(u_0\) also near \(t=0\).
We are now in a position to state the main results of this paper.
Theorem 3.6
Let us assume (H1)–(H3). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.9) and let \(u_0\) be the solution to the stationary problem (3.1). Then
If, in addition, \(f_{\varepsilon }=0\) for every \(\varepsilon >0\), then
If \(f_{\varepsilon }=0\) for every \(\varepsilon >0\) and the compatibility condition (3.3) holds, then we have also
In the case of solutions to problem (2.10) we have the following result, assuming that
Theorem 3.7
Let us assume (H1), (H2), and (3.10). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.10), with \(\varphi _{\varepsilon }=f_{\varepsilon }\) and \(\gamma _{\varepsilon }=g_{\varepsilon }\), and let \(u_0\) be the solution to the stationary problem (3.1). Then (3.4) and (3.5) hold. Moreover, if \(f_{\varepsilon }=0\) for every \(\varepsilon >0\), then (3.6) and (3.7) hold.
Theorems 3.6 and 3.7 will be proved in several steps. First, we prove (3.8) and (3.9) when \(f_{\varepsilon }=0\) and the compatibility condition (3.3) holds (Theorem 4.1). For \(g\in H^2(0,T;V_0')\) the proof is based on the estimate in Lemma 3.8 below, which is derived from the energy-dissipation balance (2.25). The general case is obtained by an approximation argument based on the same estimate.
Next, we prove that (3.4) holds for the solutions of (2.10) if \(\gamma _{\varepsilon }=\gamma =0\), \(z_{\varepsilon }=0\), \(u^0_{\varepsilon }=0\), and \(u^1_{\varepsilon }=0\) (Proposition 6.1). The proof is obtained by means of a careful estimate of the solutions to the elliptic system (5.17) obtained from (2.13) via Laplace Transform (Section 5). Under the general assumptions (H1), (H2), and (3.10) the same result is deduced from the previous one by an approximation argument based again on Lemma 3.8 below.
Then, (3.5) is obtained from (3.4) using a suitable test function in (2.10) (Theorem 6.3). A further approximation argument gives (3.4) and (3.5) under the assumptions (H1), (H2), and (H3) (Theorem 6.4).
Finally, if \(f_{\varepsilon }=0\), we obtain (3.6) and (3.7) from (3.4) and (3.5) (Lemma 7.1), concluding the proof of Theorems 3.6 and 3.7.
The following lemma, derived from the energy-dissipation balance (2.25) of Proposition 2.7, will be frequently used to approximate the solutions of (2.13) by means of solutions corresponding to more regular data.
Lemma 3.8
Given \(\varepsilon >0\), \(\varphi _\varepsilon \in L^2(0,T;H)\), \(\ell _{\varepsilon }\in H^1(0,T;V'_0)\), \(v^0_{\varepsilon }\in V_0\), and \(v^1_{\varepsilon }\in H\), let \({{v}}_{\varepsilon }\) be the solution to (2.13) with \(h_{\varepsilon }=\varepsilon \varphi _\varepsilon \). Then there exists a positive constant \(C_{E}=C_{E}({\mathbb {A}},{\mathbb {B}},\Omega ,T)\), independent of \(\varepsilon \), such that
Proof
By the energy-dissipation balance (2.25) proved in Proposition 2.7 and by (2.3) and (2.6) there exists a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega )\) such that for every \(t\in [0,T]\) we have
where the work is now defined by
Let \(K_{\varepsilon }:=\varepsilon \Vert \dot{{v}}_{\varepsilon }\Vert _{L^{\infty }(0,T;H)}\) and \(E_{\varepsilon }:=\Vert {{v}}_{\varepsilon }\Vert _{L^{\infty }(0,T;V)}\), which are finite by (2.17). Thanks to (3.11) and (3.12) for every \(t\in [0,T]\) we get
By passing to the supremum with respect to t and using the Young Inequality we can find a positive constant \(C_{E}=C_{E}({\mathbb {A}},{\mathbb {B}},\Omega ,T)\) such that
which concludes the proof. \(\square \)
In the proof of Theorem 3.6 we shall use the following lemma, which ensures that it is enough to consider the case \(z_{\varepsilon }=0\) and \(z=0\).
Lemma 3.9
If Theorem 3.6 holds when \(z_{\varepsilon }=0\) for every \(\varepsilon >0\), then it holds for arbitrary \(\{z_{\varepsilon }\}_\varepsilon \) and z satisfying (H2).
Proof
It is not restrictive to assume \(\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}ez_{\varepsilon }(0))=\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}ez(0))=0\). Indeed, if this is not the case, we can consider the solutions \(z^0_{\varepsilon }\) and \(z^0\) to the stationary problems
and we can replace \(z_{\varepsilon }(t)\) and z(t) by \({\tilde{z}}_{\varepsilon }(t):=z_{\varepsilon }(t)+z^0_{\varepsilon }\) and \({\tilde{z}}(t):=z(t)+z^0\). It is clear that \(\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}e{\tilde{z}}_{\varepsilon }(0))=\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}e{{\tilde{z}}}(0))=0\) and that problems (2.9) and (3.1) do not change passing from \(z_{\varepsilon }\) and z to \({\tilde{z}}_{\varepsilon }\) and \({\tilde{z}}\).
Let \(\psi _{\varepsilon },\psi :[0,T]\rightarrow V'_0\) be the functions defined by
Since
we obtain \(\psi _{\varepsilon }\in H^1_{loc}({\mathbb {R}};V'_0)\) and \(\psi \in W^{1,1}_{loc}({\mathbb {R}};V'_0)\). Moreover, thanks to (H2) we have
Since \(u_{\varepsilon }\) is the solution to (2.9), by Remark 2.3 it solves (2.10) with \(\gamma _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }\) and initial conditions defined by (2.11), where \(p_{\varepsilon }\) is defined by (2.12). By Remark 2.4 the function \({{v}}_{\varepsilon }=u_{\varepsilon }-z_{\varepsilon }\) is the solution to (2.13) with
and initial conditions \(v^0_{\varepsilon }\) and \(v^1_{\varepsilon }\) defined by (2.15). We define the family of convolution kernels \(\{\rho _{\varepsilon }\}_{\varepsilon }\subset L^1({\mathbb {R}})\) by
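In analogy with the exponential weights appearing in (H3), a natural (inferred) form of these kernels is:

```latex
\rho _{\varepsilon }(t):=\frac{1}{\beta \varepsilon }\,
\mathrm {e}^{-\frac{t}{\beta \varepsilon }}\ \text {for } t\ge 0,
\qquad
\rho _{\varepsilon }(t):=0\ \text {for } t<0,
% so that \Vert \rho _{\varepsilon }\Vert _{L^1(\mathbb {R})}=1 for every
% \varepsilon >0, and \rho _{\varepsilon } concentrates at t=0 as
% \varepsilon \rightarrow 0^{+}.
```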
and notice that, by (3.14) and (3.13), the integral in (3.16) coincides with \((\rho _{\varepsilon }*\psi _{\varepsilon })(t)\), hence
By Remark 3.3 the function \(v_0=u_0-z\) is the solution to (3.2) with \(h=f\) and \(\ell =g+\mathop {\mathrm{div}}\nolimits ({\mathbb {A}}ez)\). By the definition of \({{v}}_{\varepsilon }\) and \(v_0\) it is clear that to prove the theorem it is enough to show that the conclusions of Theorem 3.6 hold for \({{v}}_{\varepsilon }\) and \(v_0\). To this aim, we introduce the solution \({\tilde{v}}_{\varepsilon }\) to (2.13) with \(h_{\varepsilon }=f_{\varepsilon }\), \(\ell _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }+\mathop {\mathrm{div}}\nolimits ({\mathbb {A}}ez_{\varepsilon })\), and \(v^0_{\varepsilon }\), \(v^1_{\varepsilon }\) defined by (2.15). Then the function \(\bar{v}_{\varepsilon }:={{v}}_{\varepsilon }-{\tilde{v}}_{\varepsilon }\) satisfies (2.13) with \(h_{\varepsilon }=-\varepsilon ^2\ddot{z}_{\varepsilon }\), \(\ell _{\varepsilon }=\psi _{\varepsilon }-\rho _{\varepsilon }*\psi _{\varepsilon }\), and homogeneous initial conditions. By Lemma 3.8 we can write
By (3.15) and by classical results on convolutions we obtain
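The classical facts used here are that convolution with a unit-mass kernel does not increase \(L^p\) norms and that, since the kernels concentrate at 0, \(\rho _{\varepsilon }*\psi \rightarrow \psi \) as \(\varepsilon \rightarrow 0^+\). A small numerical sketch of the second fact for a scalar function (our own illustration, using an exponential kernel of the type considered here):

```python
import math

def conv_with_kernel(psi, t, a, n=20_000, cut=40.0):
    """Numerically evaluate (rho_a * psi)(t) for the exponential kernel
    rho_a(tau) = (1/a) * exp(-tau/a) on tau >= 0 (unit mass), truncating
    the integral at tau = cut * a and using the trapezoidal rule."""
    h = cut * a / n
    total = 0.0
    for k in range(n + 1):
        tau = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * (1.0 / a) * math.exp(-tau / a) * psi(t - tau)
    return total * h

# As a -> 0+ the kernel concentrates at 0 and rho_a * psi -> psi pointwise:
# the error below decreases linearly in a.
for a in (1e-1, 1e-2, 1e-3):
    err = max(abs(conv_with_kernel(math.sin, t, a) - math.sin(t))
              for t in (0.3, 1.0, 2.5))
    print(a, err)
```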
Since \(\{\ddot{z}_{\varepsilon }\}_{\varepsilon }\) is bounded in \(L^1(0,T;H)\) by (H2), from (3.18) we deduce
By Remark 2.3 the function \({\tilde{v}}_{\varepsilon }\) is the solution to (2.9) with \(g_{\varepsilon }\) replaced by \(g_{\varepsilon }+\mathop {\mathrm{div}}\nolimits ({\mathbb {A}}ez_{\varepsilon })\) and \(z_{\varepsilon }=0\). Thanks to (H1) and (H2) we have
Since, by hypothesis, Theorem 3.6 holds in the case of homogeneous boundary condition, its conclusions are valid for \({\tilde{v}}_{\varepsilon }\) and \(v_0\). Thanks to (3.19) and (3.20) the same results hold for \({{v}}_{\varepsilon }\) and \(v_0\). This concludes the proof. \(\square \)
In a similar way we can prove the following result.
Lemma 3.10
If Theorem 3.7 holds when \(z_{\varepsilon }=0\) for every \(\varepsilon >0\), then it holds for arbitrary \(\{z_{\varepsilon }\}_\varepsilon \) and z satisfying (H2).
4 The Uniform Convergence
In this section we shall prove (3.8) and (3.9) of Theorem 3.6 under the compatibility condition (3.3).
Theorem 4.1
Let us assume (H1)–(H3), the compatibility condition (3.3), and \(f_{\varepsilon }=0\) for every \(\varepsilon >0\). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.9) and let \(u_0\) be the solution to the stationary problem (3.1), with \(f=0\). Then (3.8) and (3.9) hold.
To prove the theorem we need the following lemma, which gives the result when g is more regular.
Lemma 4.2
Under the assumptions of Theorem 4.1, if \(g\in H^2(0,T;V'_0)\), then (3.8) and (3.9) hold.
Proof
Thanks to Lemma 3.9 we can suppose \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). Let \(p_{\varepsilon }\) be defined by (2.12). Since \(u_{\varepsilon }\) is the solution to (2.9), thanks to Remark 2.3 it solves (2.13) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\). We fix \(b>a>0\) and we extend the function g in (3.3) to \((-\infty ,T)\) in such a way that \(g\in W^{1,1}(-\infty ,T;V'_0)\) and \(g(t)=0\) for every \(t\in (-\infty ,-b]\). Since \(z=0\) we can extend \(u_0\) by solving the following problem:
We observe that \(u_0=0\) on \((-\infty ,-b]\) and \(u_0=u_{in}\) on \([-a,0]\) by the compatibility condition (3.3).
Assume \(g\in H^2(0,T;V'_0)\). By Lemma 3.4 (with \(z=0\)) we have \(u_0\in H^2(0,T;V)\), hence by (3.1) we get for a.e. \(t\in [0,T]\)
where \(\rho _{\varepsilon }\) is defined by (3.17) and
Let \(q_{\varepsilon }:=g_{\varepsilon }-g+\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}eu_0)-(\rho _{\varepsilon }*\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}eu_0))-p_{\varepsilon }+{\tilde{p}}_{\varepsilon }\). By (4.1) the function \(\bar{u}_{\varepsilon }:=u_{\varepsilon }-u_0\) satisfies (2.13) with \(h_{\varepsilon }=-\varepsilon ^2\ddot{u}_0\), \(\ell _{\varepsilon }=q_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)-u_0(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)-{\dot{u}}_0(0)\).
Since \(g\in W^{1,1}(-\infty ,T;V'_0)\) and \(g=0\) on \((-\infty ,-b]\), by Lemma 3.4 we obtain \(u_0\in W^{1,1}(-\infty ,T;V)\) and therefore \(\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}eu_0)\in W^{1,1}(-\infty ,T;V'_0)\). Then the properties of convolutions imply
As we have already observed, by the compatibility condition (3.3) we have \(u_0=u_{in}\) on \([-a,0]\), hence
Thanks to (H3) we obtain \( {\tilde{g}}^0_{\varepsilon }-g^0_{\varepsilon }\rightarrow 0\) strongly in \(V'_0\) as \(\varepsilon \rightarrow 0^+\). Hence
By (H1), (4.2), and (4.3) we have
Since \(u_0(0)= u_{in}(0)\), (H3) gives
By using Lemma 3.8 we get
therefore thanks to (4.4), (4.5), and (4.6) we obtain the conclusion. \(\square \)
In the proof of Theorems 4.1, 6.2, and 6.4 we shall use the following density result.
Lemma 4.3
Let X, Y be two Hilbert spaces such that \(X\hookrightarrow Y\) continuously, with X dense in Y. Then for every \(m,n\in {\mathbb {N}}\) with \(m\le n\), and \(p\in [1,2]\) the space \(H^n(0,T;X)\) is dense in \(W^{m,p}(0,T;Y)\).
Proof
Since every simple function with values in Y can be approximated by simple functions with values in X, it is easy to see that \(L^2(0,T;X)\) is dense in \(L^p(0,T;Y)\).
To prove the result for \(m=1\) we fix \(u\in W^{1,p}(0,T;Y)\). By the density of \(L^2(0,T;X)\) in \(L^p(0,T;Y)\) we can find a sequence \(\{\psi _k\}_k\subset L^2(0,T;X)\) such that \(\psi _k\rightarrow {\dot{u}}\) strongly in \(L^p(0,T;Y)\) as \(k\rightarrow +\infty \). By the density of X in Y there exists \(\{u^0_k\}_k\subset X\) such that \(u_k^0\rightarrow u(0)\) strongly in Y as \(k\rightarrow +\infty \). Now we define
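The natural definition, which makes the properties stated next immediate:

```latex
u_k(t):=u^0_k+\int _0^t\psi _k(\tau )\,\mathrm {d}\tau
\qquad \text {for every } t\in [0,T],
% so that u_k \in H^1(0,T;X), \dot{u}_k=\psi _k, and u_k(0)=u^0_k.
```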
It is easy to see that \(\{u_k\}_k\subset H^1(0,T;X)\) and
Arguing by induction we can prove that for every integer \(m\ge 0\) the space \(H^m(0,T;X)\) is dense in \(W^{m,p}(0,T;Y)\). Since \(H^n(0,T;X)\) is dense in \(H^m(0,T;X)\), the conclusion follows. \(\square \)
We are now in a position to deduce Theorem 4.1 from Lemma 4.2 by means of an approximation argument.
Proof of Theorem 4.1
Thanks to Lemma 3.9 we can suppose \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). We fix \(\delta >0\). By Lemma 4.3 there exists a function \(\psi \in H^2(0,T;V'_0)\) such that
By (H1) there exists a positive number \(\varepsilon _0=\varepsilon _0(\delta )\) such that
Let \(p_{\varepsilon }\) be defined by (2.12). Since \(u_{\varepsilon }\) is the solution to (2.9) with \(f_{\varepsilon }=0\) and \(z_{\varepsilon }=0\), thanks to Remark 2.3 it solves (2.13) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\). Moreover, let \({\tilde{u}}_{\varepsilon }\) be the solution to (2.13) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=\psi -p_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\), and let \({\tilde{u}}_0\) be the solution to (3.2) with \(h=0\) and \(\ell =\psi \). Thanks to Remark 2.3 the function \({\tilde{u}}_{\varepsilon }\) is the solution to (2.9) with \(f_{\varepsilon }=0\), \(g_{\varepsilon }=\psi \), and \(z_{\varepsilon }=0\), hence by Lemma 4.2 we have
We now consider the functions \({\bar{u}}_0:={\tilde{u}}_0-u_0\) and \(\bar{u}_{\varepsilon }:={\tilde{u}}_{\varepsilon }-u_{\varepsilon }\). Since \({\bar{u}}_0\) is the solution to (3.2), with \(h=0\) and \(\ell =\psi -g\), by the Lax–Milgram Lemma we get
Moreover, since \(\bar{u}_{\varepsilon }\) is the solution to (2.13), with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=\psi -g_{\varepsilon }\), \(v^0_{\varepsilon }=0\), and \(v^1_{\varepsilon }=0\), thanks to Lemma 3.8 we get
By combining (4.7), (4.8), (4.11), and (4.12), we can find a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega ,T)\) such that
for every \(\varepsilon \in (0,\varepsilon _0)\). Since
by (4.9), (4.10), and (4.13) we have
The conclusion follows from the arbitrariness of \(\delta >0\). \(\square \)
5 Use of the Laplace Transform
In this section we shall use the Laplace Transform to prepare the proof of the convergence, as \(\varepsilon \rightarrow 0^+\), of the solutions of the problems
to the solution \(v_0\) of the problem
when \(\{h_{\varepsilon }\}_{\varepsilon }\subset L^2(0,T;H)\), \(h\in L^2(0,T;H)\), and
This partial result will be the starting point for the proof of the convergence in \(L^2(0,T;V)\) under the general assumptions of Theorem 3.6.
5.1 The Laplace Transform for Functions with Values in Hilbert Spaces
Given a complex Hilbert space X, let \(r\in L^1_{loc}(0,+\infty ;X)\) be a function such that
and let \({\mathbb {C}}_+:=\{s\in {\mathbb {C}}:\mathfrak {R}(s)>0\}\). The Laplace Transform of r is the function \({{\hat{r}}}:{\mathbb {C}}_+\rightarrow X\) defined by
Besides \({{\hat{r}}}\), we shall also use the notation \({\mathcal {L}}(r)\), which is sometimes written as \({\mathcal {L}}_t(r(t))\), with dummy variable t. In the particular case \(r\in L^{\infty }(0,+\infty ;X)\) we have
There is a close connection between the Laplace Transform and the Fourier Transform, defined for every \(\rho \in L^{1}({\mathbb {R}};X)\) as the function \({\mathcal {F}}(\rho )\in L^{\infty }({\mathbb {R}};X)\) given by
For \({\mathcal {F}}(\rho )\) we use also the notation \({\mathcal {F}}_t(\rho (t))\) with dummy variable t. For the main properties of the Fourier and Laplace Transforms of functions with values in Hilbert spaces we refer to [2].
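As a concrete scalar illustration of this definition (not part of the argument; the choice \(r(t)=\mathrm {e}^{-t}\) and all numerical parameters are our own), one can approximate the defining integral numerically, compare it with the closed form \(1/(s+1)\), and check the elementary bound \(\Vert {\hat{r}}(s)\Vert \le \Vert r\Vert _{\infty }/\mathfrak {R}(s)\), valid for bounded r:

```python
# Hedged numerical sketch: for r(t) = exp(-t) the Laplace Transform is
# 1/(s + 1).  We approximate the defining integral with the trapezoidal
# rule on a truncated interval and check the L^inf bound
# |r_hat(s)| <= ||r||_inf / Re(s).  All names here are illustrative.
import cmath

def laplace_numeric(r, s, T=60.0, n=200_000):
    """Trapezoidal approximation of the integral of exp(-s*t) * r(t) on [0, T]."""
    dt = T / n
    total = 0.5 * (r(0.0) + cmath.exp(-s * T) * r(T))
    for k in range(1, n):
        t = k * dt
        total += cmath.exp(-s * t) * r(t)
    return total * dt

s = 0.7 + 1.3j
approx = laplace_numeric(lambda t: cmath.exp(-t), s)
exact = 1.0 / (s + 1.0)
print(abs(approx - exact))          # small discretization error
print(abs(approx) <= 1.0 / s.real)  # bound with ||r||_inf = 1, Re(s) = 0.7
```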
We extend the function r satisfying (5.4) by setting \(r(t)=0\) for every \(t<0\). By (5.5) and (5.6) we have
We remark that the Laplace Transform commutes with continuous linear operators, as shown in the following proposition (see, for instance, [2, Proposition 1.6.2]).
Proposition 5.1
Let X and Y be two complex Hilbert spaces, let \(r \in L^1_{loc}(0,+\infty ;X)\), and let T be a continuous linear operator from X to Y. Then \(T\circ r\in L^1_{loc}(0,+\infty ;Y)\). If, in addition, r satisfies (5.4), then the same property holds also for \(T\circ r\), with X replaced by Y, and \({\mathcal {L}}(T\circ r)(s)=(T\circ {\hat{r}})(s)\) for every \(s\in {\mathbb {C}}_+\).
Now we consider the Inverse Laplace Transform. Let \(R:{\mathbb {C}}_+\rightarrow X\) be a function. Suppose that there exists \(r\in L^1_{loc}(0,+\infty ;X)\) such that (5.4) holds and \({\mathcal {L}}(r)(s)=R(s)\) for every \(s\in {\mathbb {C}}_+\). In this case we say that r is the Inverse Laplace Transform of R, and we use the notation \(r={\mathcal {L}}^{-1}(R)\) or \(r={\mathcal {L}}_s^{-1}(R(s))\) with dummy variable s. It can be proven that r is uniquely determined up to a negligible set (see [2, Theorem 1.7.3]). Moreover, r can be obtained by the Bromwich Integral Formula:
where \(s_1\) is an arbitrary positive number. Clearly (5.7) can be expressed in terms of the Inverse Fourier Transform, namely
where \({\mathcal {F}}^{-1}_{s_2}(R(s_1+is_2))\) denotes the Inverse Fourier Transform with respect to the variable \(s_2\).
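For the reader's convenience, the Bromwich Integral Formula and its Fourier rewriting take the following classical form (a sketch consistent with the definitions above; the precise constants depend on the chosen Fourier normalization):

```latex
r(t)=\frac{1}{2\pi i}\int_{s_1-i\infty}^{s_1+i\infty}\mathrm{e}^{st}R(s)\,\mathrm{d}s
    =\mathrm{e}^{s_1t}\,\mathcal{F}^{-1}_{s_2}\bigl(R(s_1+is_2)\bigr)(t)
    \qquad\text{for a.e.\ } t>0 .
```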
To use the Laplace Transform, we extend our problems from the interval [0, T] to \([0,+\infty )\). To do this, we extend the functions \(h_{\varepsilon }\) and h, introduced in (5.3), by setting them equal to zero in \((T,+\infty )\), and we consider the solution to (5.1) in \([0,+\infty )\), which we still denote \({{v}}_{\varepsilon }\). Moreover, we consider the solution to (5.2) in \([0,+\infty )\), which we still denote \(v_0\). Notice that, thanks to the choice of the extension we have
By Lemma 3.8 and by using the equality \(h_{\varepsilon }=0\) on \((T,+\infty )\), we get
Since \(h\in L^2(0,T;H)\) and \(h=0\) on \((T,+\infty )\), by means of standard estimates for the solution to (5.2) we obtain
From (2.4), (5.1), and (5.9) we can deduce
To study our problem by means of the Laplace Transform we introduce the complexification of the Hilbert spaces H, \(V_0\), and \(V'_0\) defined by
The symbols \((\cdot ,\cdot )\) and \(\Vert \cdot \Vert \) denote the hermitian product and the norm in \({\hat{H}}\) or in other complex \(L^2\) spaces. For every \(s\in {\mathbb {C}}_+\) the Laplace Transforms \({\hat{h}}_{\varepsilon }(s)\) and \({\hat{h}}(s)\) of \(h_{\varepsilon }\) and h in \({\hat{H}}\) are well defined. Thanks to (5.9) and (5.10) the Laplace Transforms \({\hat{v}}_{\varepsilon }(s)\) and \({\hat{v}}_0(s)\) in \({\hat{V}}_0\) are well defined for every \(s\in {\mathbb {C}}_+\). By (5.11) the Laplace Transform \( \hat{\ddot{v}}_{\varepsilon }(s)\) of \(\ddot{{v}}_{\varepsilon }\) in \({\hat{V}}_0'\) is well defined for every \(s\in {\mathbb {C}}_+\). Using (5.9) we can integrate by parts twice in the integral which defines \( \hat{\ddot{v}}_{\varepsilon }\) and, since \({{v}}_{\varepsilon }(0)=0\) and \(\dot{{v}}_{\varepsilon }(0)=0\), we obtain
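Explicitly, since \({{v}}_{\varepsilon }(0)=0\) and \(\dot{{v}}_{\varepsilon }(0)=0\), the double integration by parts gives the classical identity (a sketch, with the convention \({\hat{r}}(s)=\int _0^{+\infty }\mathrm {e}^{-st}r(t)\,\mathrm {d}t\)):

```latex
\hat{\ddot{v}}_{\varepsilon}(s)
  =\int_0^{+\infty}\mathrm{e}^{-st}\ddot{v}_{\varepsilon}(t)\,\mathrm{d}t
  = s\int_0^{+\infty}\mathrm{e}^{-st}\dot{v}_{\varepsilon}(t)\,\mathrm{d}t
  = s^2\,\hat{v}_{\varepsilon}(s)
  \qquad\text{for every } s\in\mathbb{C}_+ .
```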
By considering the operators \(S_{{\mathbb {A}}},S_{{\mathbb {B}}}:{\hat{V}}_0\rightarrow {\hat{V}}_0'\) defined by
we can rephrase (5.1) and (5.2) for a.e. \(t\in [0,+\infty )\) as equalities in \({\hat{V}}_0'\):
Now we want to consider the Laplace Transforms, in the sense of \({\hat{V}}'_{0}\), of both sides of these equations. By Proposition 5.1 we can say
where \({\hat{v}}_{\varepsilon }\) and \({\hat{v}}_0\) are the Laplace Transforms of \({{v}}_{\varepsilon }\) and \(v_0\), respectively, in the sense of \({\hat{V}}_0\). Moreover, since we have
this integral admits a Laplace Transform in the sense of \({\hat{V}}_0\), which for every \(s\in {\mathbb {C}}_+\) satisfies
Hence, by using Proposition 5.1 again, we obtain
5.2 Properties of the Laplace Transform of the Solutions
Thanks to (5.12), (5.15), and (5.16) we can apply the Laplace Transform to both sides of (5.13) and (5.14) to deduce for every \(s\in {\mathbb {C}}_+\) the following equalities in \({\hat{V}}_0'\):
Our purpose is to prove that for every \(s_1>0\) we have
To prove (5.19) we need two lemmas. In the first one we deduce from (5.17) an estimate for \({\hat{v}}_{\varepsilon }(s)\), which is used in the second lemma to prove a convergence result for \({\hat{v}}_{\varepsilon }(s)\).
Lemma 5.2
For every \(s\in {\mathbb {C}}_+\) there exists a positive constant M(s) such that
Proof
We fix \(\varepsilon \in (0,1)\) and for every \(s\in {\mathbb {C}}_+\) we define the operator \(S_{\varepsilon }(s):{\hat{V}}_0\rightarrow {\hat{V}}_0'\) in the following way:
Since \(S_{\varepsilon }(s)({\hat{v}}_{\varepsilon }(s))={\hat{h}}_{\varepsilon }(s)\) by (5.17), the Lax-Milgram Lemma, together with the Korn-Poincaré Inequality (2.3), implies (5.20) if we can show that for every \(s\in {\mathbb {C}}_+\) there exists a positive constant K(s), independent of \(\varepsilon \), such that
where
We can suppose \(\psi \in {\hat{V}}_0{\setminus }\{0\}\), otherwise the inequality is trivially satisfied, and we set
which satisfy, thanks to the Korn-Poincaré Inequality (2.3) and to (2.4)–(2.6), the following relations
where \(c_0:=1+\frac{c_{{\mathbb {B}}}}{C_{{\mathbb {A}}}}\) and \(c_1:=1+\frac{C_{{\mathbb {B}}}}{c_{{\mathbb {A}}}}\). Therefore, to prove (5.21) it is enough to obtain
For simplicity of notation we set \(z=\varepsilon s\) and we consider two cases.
Case \(b>\frac{2}{3\beta ^2}\). In this situation, thanks to (A.2) and (A.3) we know that the polynomial \(\beta z^3+z^2+\beta b z+a\) has one real root \(z_0\) and two complex conjugate roots w and \(\bar{w}\). Therefore, thanks to Lemmas A.1 and A.2, we can write
where in the last inequality we used \(z_0<0\).
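For orientation, the computation above rests on the factorization of the cubic through its roots \(z_0\), w, \(\bar{w}\) and its leading coefficient \(\beta \):

```latex
\beta z^3+z^2+\beta b z+a=\beta\,(z-z_0)(z-w)(z-\bar{w}),
\qquad\text{hence}\qquad
|\beta z^3+z^2+\beta b z+a|=\beta\,|z-z_0|\,|z-w|\,|z-\bar{w}| ,
```

and the bounds of Lemmas A.1 and A.2 are applied factor by factor.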
If \(a\le 2|z|^2\), then \(|z|\ge \frac{a}{2|z|}\) and, thanks to (5.24), we deduce
For \(a>2|z|^2\) we have
and, by writing \(z=x+iy\), we obtain
which implies
By (5.25) and (5.26) in the case \(b> \frac{2}{3\beta ^2}\) we conclude
Case \(b_0\le b\le \frac{2}{3\beta ^2}\). In this case, thanks to (5.22), we have \(a_0\le a\le \frac{2}{3\beta ^2}\). We define
Then for \(z\in {\mathbb {C}}_+\), with \(|z|>R\), we get
where we used the inequalities \(|\beta z|\le |\beta z+1|\) and \(1\le |\beta z+1|\).
To deal with the case \(z\in {\mathbb {C}}_+\), with \(|z|\le R\), we define
and we claim \(\gamma >0\). Indeed, the function under examination is continuous with respect to (z, a, b) and, by Lemma A.1, it does not vanish on the compact set considered in the minimum problem. By using also (5.28) we conclude that for \(b_0\le b\le \frac{2}{3\beta ^2}\) we have
for every \(z\in {\mathbb {C}}_+\) and every a satisfying (5.22). Since \(\varepsilon \in (0,1)\) we have
therefore, by setting
from (5.27) and (5.29) we obtain (5.23), which concludes the proof. \(\square \)
5.3 Convergence of the Laplace Transform of the Solutions
We begin by proving the pointwise convergence.
Lemma 5.3
For every \(s\in {\mathbb {C}}_+\) we have
Proof
Thanks to (5.3) and to the Hölder Inequality for every \(s\in {\mathbb {C}}_+\) we get
Consequently, thanks to Lemma 5.2, for every \(s\in {\mathbb {C}}_+\) there exist two constants \(\bar{M}(s)>0\) and \(\varepsilon (s)\in (0,1)\) such that
By (5.31) we can say that for every \(s\in {\mathbb {C}}_+\) there exist a sequence \(\varepsilon _j\rightarrow 0^+\) and \(v^*(s)\in {\hat{V}}_0\) such that
Thanks to (2.5) and (5.32) for every \(\psi \in {\hat{V}}_0\) we deduce
Therefore by (5.30) we have
Since, by (5.18), \({\hat{v}}_0(s)\) is a solution to (5.33), by uniqueness \(v^*(s)={\hat{v}}_0(s)\). Moreover, since the limit does not depend on the subsequence, the whole sequence satisfies
To prove the strong convergence we use \({\hat{v}}_{\varepsilon }(s)\) and \({\hat{v}}_0(s)\) as test functions in (5.17) and (5.18), respectively. By subtracting the two equalities, we obtain
from which we deduce
By using again (5.30), (5.31), and (5.34), we can deduce
Thanks to the coerciveness assumption (2.6), the conclusion follows from the weak convergence (5.34) together with (5.35). \(\square \)
Now we are in a position to prove the following result about the convergence in the space \(L^2\) on the lines \(\{s_1+is_2:s_2\in {\mathbb {R}}\}\).
Proposition 5.4
The functions \({\hat{v}}_{\varepsilon }\) and \({\hat{v}}_0\) satisfy (5.19).
Proof
For every \(s\in {\mathbb {C}}_+\), by using \({\hat{v}}_{\varepsilon }(s)\) as test function in (5.17) we obtain
As before, we set
and we observe that (5.22) holds. Therefore, thanks to (5.24), Lemma A.1, and (5.29) we can deduce
where in the first line we used the inequality \(|z_0(\beta \varepsilon s+1)|\le |\varepsilon s-z_0| \) for every \(s\in {\mathbb {C}}_+\), which follows from the condition \(z_0<0\).
As a consequence of these inequalities and of (5.36) there exists a positive constant \(C=C(\alpha ,\beta ,\gamma ,a_0)\) such that for every \(s\in {\mathbb {C}}_+\) we have
Therefore, by using (5.37) and the coerciveness assumption (2.6), we can write
from which, recalling (2.3), we deduce
By extending the function \(h_{\varepsilon }\) to \((-\infty ,0)\) with value 0, we can write
Since for every \(s=s_1+is_2\in {\mathbb {C}}_+\) the function \(t\mapsto \mathrm {e}^{-s_1t}h_{\varepsilon }(t)\) belongs to \(L^2({\mathbb {R}};H)\), by the properties of the Fourier Transform we deduce that \(s_2\mapsto {\hat{h}}_{\varepsilon }(s_1+is_2)\) belongs to \( L^2({\mathbb {R}};{\hat{H}})\) for every \(\varepsilon >0\). Moreover, by using (5.3) and the Plancherel Theorem, we can write
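In the convention adopted here, the Plancherel identity takes the following weighted form (a sketch; the constant \(2\pi \) depends on the Fourier normalization):

```latex
\int_{\mathbb{R}}\Vert\hat{h}_{\varepsilon}(s_1+is_2)\Vert^2\,\mathrm{d}s_2
  =2\pi\int_0^{+\infty}\mathrm{e}^{-2s_1t}\Vert h_{\varepsilon}(t)\Vert^2\,\mathrm{d}t
  \le 2\pi\,\Vert h_{\varepsilon}\Vert^2_{L^2(0,T;H)} ,
```

since \(h_{\varepsilon }\) vanishes outside [0, T] and \(\mathrm {e}^{-2s_1t}\le 1\) for \(t\ge 0\).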
Since \({\hat{v}}_{\varepsilon }(s)\rightarrow {\hat{v}}_0(s)\) strongly in \({\hat{V}}_0\) by Lemma 5.3 and \({\hat{h}}_{\varepsilon }(s)\rightarrow {\hat{h}}(s)\) strongly in \({\hat{H}}\) by (5.30), thanks to (5.38) and (5.39) we can apply the Generalized Dominated Convergence Theorem to get the conclusion. \(\square \)
6 \(L^2\) Convergence
In this section we shall prove (3.4) and (3.5) under the assumptions of Theorems 3.6 and 3.7. We begin by proving the following partial result.
Proposition 6.1
Let \(\{h_{\varepsilon }\}_{\varepsilon }\subset L^2(0,T;H)\) and \(h\in L^2(0,T;H)\) be such that (5.3) holds. Let \({{v}}_{\varepsilon }\) and \(v_0\) be the solutions to problems (5.1) and (5.2). Then
Proof
By the Plancherel Theorem we deduce from (5.8) and Proposition 5.4 that for every \(s_1>0\)
which concludes the proof. \(\square \)
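The role the Plancherel Theorem plays here can be illustrated in finite dimension (a sketch with a hand-rolled DFT, not part of the argument; the discrete normalization 1/N plays the part of the \(2\pi \) of the continuous transform): the energy of a signal equals the normalized energy of its transform.

```python
# Discrete analogue of the Plancherel Theorem: for the DFT
# X_k = sum_n x_n * exp(-2*pi*i*k*n/N) one has
# sum_n |x_n|^2 = (1/N) * sum_k |X_k|^2.
import cmath

def dft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

x = [complex(n % 5, -n % 3) for n in range(64)]  # arbitrary sample data
X = dft(x)
time_energy = sum(abs(v) ** 2 for v in x)
freq_energy = sum(abs(v) ** 2 for v in X) / len(x)
print(abs(time_energy - freq_energy))  # ~0 up to rounding
```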
Theorem 6.2
Let us assume (H1), (H2), and (3.10). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.10), with \(\varphi _{\varepsilon }=f_{\varepsilon }\) and \(\gamma _{\varepsilon }=g_{\varepsilon }\), and let \(u_0\) be the solution to the stationary problem (3.1). Then (3.4) holds.
Proof
Thanks to Lemma 3.10 it is enough to prove the theorem in the case \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). We divide the proof into two steps.
Step 1. The case \(u^1_{\varepsilon }=0\). We reduce the problem to the case of homogeneous initial conditions by considering the functions
Let us define
Since \(u^1_{\varepsilon }=0\), it is easy to see that \({{v}}_{\varepsilon }\) satisfies (2.13) with \(h_{\varepsilon }=f_{\varepsilon }\), \(\ell _{\varepsilon }=q_{\varepsilon }\), \(v^0_{\varepsilon }=0\), and \(v^1_{\varepsilon }=0\), while \(v_0\) satisfies (3.2) with \(h=f\) and \(\ell =q\). By (3.10) and (6.1), to prove (3.4) it is enough to show that
In order to apply Proposition 6.1, we approximate the forcing terms of the problems for \({{v}}_{\varepsilon }\) and \(v_0\) by means of functions in \(H^1(0,T;H)\) and we consider the corresponding solutions \({\tilde{v}}_{\varepsilon }\) and \({\tilde{v}}_0\), for which Proposition 6.1 yields \({\tilde{v}}_{\varepsilon }\rightarrow {\tilde{v}}_0\) strongly in \(L^2(0,T;V)\) as \(\varepsilon \rightarrow 0^+\). Finally we show that \(\Vert {\tilde{v}}_{\varepsilon }-{{v}}_{\varepsilon }\Vert _{L^2(0,T;V)}\) and \(\Vert {\tilde{v}}_0-v_0\Vert _{L^2(0,T;V)}\) are small uniformly with respect to \(\varepsilon \), and this leads to the proof of (6.4).
Let us fix \(\delta >0\). Thanks to the density of H in \(V'_0\) and to Lemma 4.2 we can find \(\psi \in H^1(0,T;H)\) and \(h_{{\mathbb {A}}}^0,h_{{\mathbb {B}}}^0\in H\) such that
Thanks to (H1) and (3.10) there exists \(\varepsilon _0=\varepsilon _0(\delta )\in (0,\tfrac{1}{\beta })\) such that for every \(\varepsilon \in (0,\varepsilon _0)\) we have
Let \(\varphi _{\varepsilon },\varphi :[0,T]\rightarrow H\) be defined for every \(t\in [0,T]\) by
By (6.2), (6.3), (6.5), (6.6), (6.7), (6.8), and (6.9) for every \(\varepsilon \in (0,\varepsilon _0)\) we obtain
Since \(t\mapsto \mathrm {e}^{-\frac{t}{\beta \varepsilon }}\psi ^0_{{\mathbb {B}}}\) converges to 0 strongly in \(L^2(0,T;H)\) as \(\varepsilon \rightarrow 0^+\), by (6.9) we have
Let \({\tilde{v}}_{\varepsilon }\) be the solution to (5.1) with \(h_{\varepsilon }=f_{\varepsilon }+ \varphi _{\varepsilon }\) and let \({\tilde{v}}_0\) be the solution to (5.2) with \(h=f+ \varphi \). By (H1) and (6.12) we have
hence Proposition 6.1 yields
To estimate the difference \({\tilde{v}}_{\varepsilon }-{{v}}_{\varepsilon }\) we observe that it solves (2.13) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=\varphi _{\varepsilon }-q_{\varepsilon }\), \(v^0_{\varepsilon }=0\), and \(v^1_{\varepsilon }=0\). Therefore, by Lemma 3.8 we have
To estimate the difference \({\tilde{v}}_0-v_0\) we observe that it solves (3.2) with \(h=0\) and \(\ell =\varphi -q\). Therefore by the Lax-Milgram Lemma we obtain
By (6.10), (6.11), (6.14), and (6.15) there exists a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega ,T)\) such that
hence
This inequality, together with (6.13), gives
By the arbitrariness of \(\delta >0\) we obtain (6.4), which concludes the proof of Step 1.
Step 2. The general case. Let \({\tilde{u}}_{\varepsilon }\) be the solution to (2.13) with \(h_{\varepsilon }=f_{\varepsilon }\), \(\ell _{\varepsilon }=g_{\varepsilon }\), \(v^0_{\varepsilon }=u^0_{\varepsilon }\), and \(v^1_{\varepsilon }=0\). By Step 1
The function \(u_{\varepsilon }-{\tilde{u}}_{\varepsilon }\) is the solution to (2.13) with all data equal to 0 except \(v^1_{\varepsilon }\), which is now equal to \(u^1_{\varepsilon }\). Therefore, Lemma 3.8 and (3.10) yield
which, together with (6.16), gives (3.4). \(\square \)
In the following theorem, under the assumptions of Theorem 3.7 we deduce (3.5) from (3.4).
Theorem 6.3
Let us assume (H1), (H2), and (3.10). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.10), with \(\varphi _{\varepsilon }=f_{\varepsilon }\) and \(\gamma _{\varepsilon }=g_{\varepsilon }\), and let \(u_0\) be the solution to the stationary problem (3.1). Then (3.5) holds.
Proof
Thanks to Lemma 3.10 we can suppose \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). It is convenient to extend the data of our problem to the interval [0, 2T] by setting for every \(t\in (T,2T]\)
Since (H1) holds, it is clear that \(\{f_{\varepsilon }\}_{\varepsilon }\subset L^2(0,2T;H)\), \(\{g_{\varepsilon }\}_{\varepsilon }\subset H^1(0,2T;V'_0)\),
Moreover, the solution to (2.10) on [0, 2T] with the extended data is an extension of \(u_{\varepsilon }\), which is still denoted by \(u_{\varepsilon }\). Similarly, the solution to (3.1) on [0, 2T] is still denoted by \(u_0\). Since (6.17) and (6.18) hold, Theorem 6.2 gives
We further extend \(u_{\varepsilon }\) to \({\mathbb {R}}\) by setting \(u_{\varepsilon }(t)=0\) for every \(t\in {\mathbb {R}}{\setminus }[0,2T]\), and we define
where \(\rho _{\varepsilon }\) is as in (3.17). By the properties of convolutions and (6.19) we get
Thanks to (6.19) and (6.20), by using (2.10) and (3.1) we obtain
Since
By (6.19) and (6.22) there exists a sequence \(\varepsilon _j\rightarrow 0^+\) such that for a.e. \(t\in [0,2T]\) we have
We choose \(T_0\in (T,2T)\) such that (6.23) and (6.24) hold at \(t=T_0\). This implies
Since \(z_{\varepsilon }=0\), for a.e. \(t\in [0,T_0]\) we can use \(u_{\varepsilon }(t)\in V_0\) as a test function in (2.10). Then we integrate by parts in time on the interval \((0,T_0)\) to obtain
Thanks to (3.1), (3.10), (6.17), (6.19), (6.20), and (6.25) the first term on the left-hand side of the previous equation tends to 0 as \(j\rightarrow +\infty \). Since \(T_0>T\) we have
By the arbitrariness of the sequence \(\{\varepsilon _j\}_j\) we have
which concludes the proof. \(\square \)
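The smoothing by convolution with the kernel \(\rho _{\varepsilon }\) used in this proof can be mimicked in a discrete setting (illustration only, with a plain moving-average kernel of our own choosing, not the kernel (3.17)): narrowing the kernel support drives the discrete \(L^2\) distance between a function and its mollification to zero.

```python
# Discrete sketch of mollification: convolving a sampled function with a
# narrow averaging kernel changes it little in the (discrete) L^2 norm,
# and the error shrinks as the kernel support shrinks.

def mollify(u, width):
    """Moving average over (2*width + 1) samples, zero-padded at the ends."""
    n = len(u)
    out = []
    for i in range(n):
        window = [u[j] for j in range(max(0, i - width), min(n, i + width + 1))]
        out.append(sum(window) / (2 * width + 1))
    return out

def l2_dist(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

u = [(k / 1000.0) ** 2 for k in range(1000)]          # a smooth sample signal
errs = [l2_dist(u, mollify(u, w)) for w in (32, 8, 2)]
print(errs)  # decreasing as the kernel narrows
```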
We now use Theorems 6.2 and 6.3 to obtain (3.4) and (3.5) under the assumptions of Theorem 3.6.
Theorem 6.4
Let us assume (H1)–(H3). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.9) and let \(u_0\) be the solution to the stationary problem (3.1). Then (3.4) and (3.5) hold.
Proof
Thanks to Lemma 3.9 we can suppose \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). Let \(p_{\varepsilon }\) be defined by (2.12). Since \(z_{\varepsilon }=0\), by Remark 2.3 the function \(u_{\varepsilon }\) solves (2.13) with \(h_{\varepsilon }=f_{\varepsilon }\), \(\ell _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\). To obtain (3.4) and (3.5) we cannot apply Theorems 6.2 and 6.3 directly, because \(\{p_{\varepsilon }\}_{\varepsilon }\) does not converge to 0 in \(W^{1,1}(0,T;V'_0)\) as \(\varepsilon \rightarrow 0^+\) and, in general, \(p_{\varepsilon }\notin L^2(0,T;H)\).
To overcome this difficulty we construct a family \(\{q_\varepsilon \}_{\varepsilon }\subset H^1(0,T;H)\) such that \(\Vert {q_\varepsilon -p_\varepsilon }\Vert _{W^{1,1}(0,T;V'_0)}\) is uniformly small and \(q_\varepsilon \rightarrow 0\) strongly in \(L^2(0,T;H)\) as \(\varepsilon \rightarrow 0^+\). Then we can apply Theorems 6.2 and 6.3 to the solutions \({{v}}_{\varepsilon }\) to (2.13) with \(p_\varepsilon \) replaced by \(q_\varepsilon \), obtaining that \({{v}}_{\varepsilon }\rightarrow u_0\) strongly in \(L^2(0,T;V)\) and \(\varepsilon \dot{{v}}_{\varepsilon }\rightarrow 0\) strongly in \(L^2(0,T;H)\). Finally, we show that \(\Vert {{v}}_{\varepsilon }-u_{\varepsilon }\Vert _{L^2(0,T;V)}\) and \(\varepsilon \Vert \dot{{v}}_{\varepsilon }-{\dot{u}}_{\varepsilon }\Vert _{L^2(0,T;H)}\) are small uniformly with respect to \(\varepsilon \), and this leads to the proof of (3.4) and (3.5).
To construct \(q_\varepsilon \) we consider \(g^0_{\varepsilon }\) introduced in (2.12) and we define
where \(\rho _{\varepsilon }\) is the convolution kernel (3.17). By (H3) we have \(\mathop {\mathrm{div}}\nolimits ({\mathbb {B}}eu_{in})\in C^0((-\infty ,0]; V'_0)\), hence the properties of convolutions imply
Since
thanks to (H3) we have \(g^0_{\varepsilon }-{\tilde{g}}^0_{\varepsilon }\rightarrow 0\) strongly in \(V'_0\) as \(\varepsilon \rightarrow 0^+\), hence (6.26) implies
Let us fix \(\delta >0\). By the density of H in \(V'_0\) we can find \(h^0\in H\) such that \(\Vert h^0-g^0\Vert _{V'_0}<\delta \). By (6.27) there exists \(\varepsilon _0=\varepsilon _0(\delta )\in (0,\tfrac{1}{\beta })\) such that
Let \(q_{\varepsilon }\in H^1(0,T;H)\) be defined by \(q_{\varepsilon }(t):=\mathrm {e}^{-\frac{t}{\beta \varepsilon }}h^0\) for every \(t\in [0,T]\). Then
Since \(p_\varepsilon (t)=\mathrm {e}^{-\frac{t}{\beta \varepsilon }}g^0_\varepsilon \), by (6.28) we have also
Let \({{v}}_{\varepsilon }\) be the solution to (2.13) with \(h_{\varepsilon }=f_{\varepsilon }-q_{\varepsilon }\), \(\ell _{\varepsilon }=g_{\varepsilon }\), \(v^0_{\varepsilon }=u_{\varepsilon ,in}(0)\), and \(v^1_{\varepsilon }={\dot{u}}_{\varepsilon ,in}(0)\). By (H1) and (6.29) we have
By (H3) we have
Therefore we can apply Theorems 6.2 and 6.3 to obtain
To estimate the difference \({{v}}_{\varepsilon }-u_{\varepsilon }\) we observe that it solves (2.13) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=p_{\varepsilon }-q_\varepsilon \), \(v^0_{\varepsilon }=0\), and \(v^1_{\varepsilon }=0\). Therefore, by Lemma 3.8 and (6.30) we have
Since by (6.33)
thanks to (6.31) and (6.32) we have
By the arbitrariness of \(\delta >0\) we obtain (3.4) and (3.5), which concludes the proof.
\(\square \)
7 The Local Uniform Convergence
In this section we shall prove (3.6) and (3.7) under the assumptions of Theorems 3.6 and 3.7. The proof is based on the following lemma.
Lemma 7.1
Let \(\{\ell _{\varepsilon }\}_{\varepsilon }\subset H^{1}(0,T;V'_0)\) and \(\ell \in W^{1,1}(0,T;V'_0)\) be such that
Let \({{v}}_{\varepsilon }\) be a solution to the viscoelastic dynamic system (2.13) with \(h_{\varepsilon }=0\) and arbitrary initial data. Moreover, let \(v_0\) be the solution to the stationary problem (3.2) with \(h=0\). We assume that
Then
Proof
We divide the proof into two steps.
Step 1. Let us assume \(\ell _{\varepsilon }=\ell \in H^2(0,T;V'_0)\) for every \(\varepsilon >0\). By Lemma 3.4 (with \(z=0\)) we have \(v_0\in H^2(0,T;V)\), hence recalling (3.2) we get for a.e. \(t\in [0,T]\)
Now we define \(\bar{v}_{\varepsilon }:={{v}}_{\varepsilon }-v_0\) and observe that by (7.2) and (7.3) we have
Let us consider
Since \({{v}}_{\varepsilon }\) satisfies (2.13) with \(h_{\varepsilon }=0\), by (7.6) the function \(\bar{v}_{\varepsilon }\) satisfies (2.13) with \(h_{\varepsilon }=-\varepsilon ^2\ddot{v}_0\) and \(\ell _{\varepsilon }=q_{\varepsilon }\). After two integrations by parts in time we deduce
hence
Now we fix \(\delta \in (0,T)\), and we consider \(\eta \in (0,\delta )\) and \(\zeta \in (\eta ,\delta )\). We define the family of functions \(\{\bar{w}_{\varepsilon }\}_{\varepsilon }\subset H^1(0,T;{\tilde{H}})\) by
where \(\rho _{\varepsilon }\) is defined by (3.17) and \(\bar{v}_{\varepsilon }\) is extended to \({\mathbb {R}}\) by setting \(\bar{v}_{\varepsilon }(t)=0\) on \({\mathbb {R}}{\setminus }[0,T]\). By properties of convolutions we have
By the energy-dissipation balance (2.25) of Proposition 2.7, for every \(t\in [\eta ,T]\) and \(s\in (\eta ,\zeta )\) we can write
where the work is defined by
Now we take the mean value with respect to s of all terms of (7.11) on \((\eta ,\zeta )\), and we pass to the supremum with respect to t on \([\eta ,T]\). Thanks to (2.3) and (2.6) we deduce
Notice that for every \(s\in (\eta ,\zeta )\) we have
hence thanks to the Young Inequality and (7.12) there exists a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega ,T)\) such that
By passing to the limit in (7.13) as \(\varepsilon \rightarrow 0^+\), thanks to (7.7), (7.8), (7.9), and (7.10) we obtain
which, by the definition of \(\bar{v}_{\varepsilon }\), concludes the proof of (7.4) and (7.5) in the case \(\ell \in H^2(0,T;V'_0)\).
Step 2. In the general case \(\ell \in W^{1,1}(0,T;V'_0)\) we use an approximation argument. Given \(\delta >0\), by Lemma 4.3 there exists a function \(\psi \in H^2(0,T;H)\) such that
Thanks to (7.1) for every \(\sigma \in (0,T)\) there exists a positive number \(\varepsilon _0=\varepsilon _0(\delta ,\sigma )\) such that
Let \({\tilde{v}}_{\varepsilon }\) be the solution to (2.13) in the interval \([\sigma ,T]\) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }=\psi \), \({\tilde{v}}_{\varepsilon }(\sigma )={{v}}_{\varepsilon }(\sigma )\), and \(\dot{{\tilde{v}}}_{\varepsilon }(\sigma )=\dot{{v}}_{\varepsilon }(\sigma )\), and let \({\tilde{v}}_0\) be the solution to (3.2) in the interval [0, T] with \(h=0\) and \(\ell =\psi \). By applying Step 1 in the interval \([\sigma ,T]\) we obtain
We set \({\bar{v}}_0:={\tilde{v}}_0-v_0\) and \(\bar{v}_{\varepsilon }:={\tilde{v}}_{\varepsilon }-{{v}}_{\varepsilon }\). We observe that \({\bar{v}}_0\) is the solution to (3.2) with \(h=0\) and \(\ell \) replaced by \(\psi -\ell \), hence by the Lax-Milgram Lemma we get
Moreover, \(\bar{v}_{\varepsilon }\) is the solution to (2.13) in the interval \([\sigma ,T]\) with \(h_{\varepsilon }=0\), \(\ell _{\varepsilon }\) replaced by \(\psi -\ell _{\varepsilon }\), and homogeneous initial conditions. Thanks to Lemma 3.8 we obtain
By combining (7.14), (7.15), (7.18), and (7.19), we can find a positive constant \(C=C({\mathbb {A}},{\mathbb {B}},\Omega ,T)\) such that
Since for every \(\eta \in (\sigma ,T)\) we have
thanks to (7.16), (7.17), and (7.20) we obtain
for every \(\eta \in (\sigma ,T)\). By the arbitrariness of \(\delta >0\) and \(\sigma >0\) we conclude. \(\square \)
Now we are in a position to prove (3.6) and (3.7).
Theorem 7.2
Let us assume (H1), (H2), (3.10), and \(f_{\varepsilon }=0\) for every \(\varepsilon >0\). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.10), with \(\varphi _{\varepsilon }=0\) and \(\gamma _{\varepsilon }=g_{\varepsilon }\), and let \(u_0\) be the solution to the stationary problem (3.1), with \(f=0\). Then (3.6) and (3.7) hold.
Proof
By Theorems 6.2 and 6.3 we obtain (3.4) and (3.5). Since \(f_{\varepsilon }=0\) and \(g_{\varepsilon }\rightarrow g\) strongly in \(W^{1,1}(0,T;V'_0)\) as \(\varepsilon \rightarrow 0^+\) by (H1), we can apply Lemma 7.1 to conclude.
\(\square \)
Theorem 7.3
Let us assume (H1)–(H3) and \(f_{\varepsilon }=0\) for every \(\varepsilon >0\). Let \(u_{\varepsilon }\) be the solution to the viscoelastic dynamic system (2.9) and let \(u_0\) be the solution to the stationary problem (3.1), with \(f=0\). Then (3.6) and (3.7) hold.
Proof
Thanks to Lemma 3.9 we can suppose \(z=0\) and \(z_{\varepsilon }=0\) for every \(\varepsilon >0\). By Theorem 6.4 we obtain (3.4) and (3.5). Since \(u_{\varepsilon }\) is a solution to (2.9) with \(f_{\varepsilon }=0\), by Remark 2.3 it solves (2.13) with \(h_{\varepsilon }=0\) and \(\ell _{\varepsilon }=g_{\varepsilon }-p_{\varepsilon }\), where \(p_{\varepsilon }\) is defined by (2.12). Since
we can apply Lemma 7.1 to conclude. \(\square \)
Finally we can prove Theorems 3.6 and 3.7.
Proof of Theorem 3.6
It is enough to combine Theorems 4.1, 6.4, and 7.3. \(\square \)
Proof of Theorem 3.7
It is enough to combine Theorems 6.2, 6.3, and 7.2. \(\square \)
References
Agostiniani, V.: Second order approximations of quasistatic evolution problems in finite dimension. Discrete Contin. Dyn. Syst. 32, 1125–1167 (2012)
Arendt, W., Batty, C., Hieber, M., Neubrander, F.: Vector-valued Laplace Transforms and Cauchy Problems, Monographs in Mathematics, vol. 96. Birkhäuser, Basel (2001)
Dafermos, C.: An abstract Volterra equation with applications to linear viscoelasticity. J. Differ. Equ. 7, 554–569 (1970)
Dautray, R., Lions, J.L.: Mathematical Analysis and Numerical Methods for Science and Technology, vol. 1: Physical Origins and Classical Methods. Springer, Berlin. Original French edition published by Masson, Paris (1984)
Fabrizio, M., Giorgi, C., Pata, V.: A new approach to equations with memory. Arch. Rational. Mech. Anal. 198, 189–232 (2010)
Fabrizio, M., Morro, A.: Mathematical Problems in Linear Viscoelasticity. SIAM Studies in Applied Mathematics, vol. 12. SIAM, Philadelphia (1992)
Gidoni, P., Riva, F.: A vanishing inertia analysis for finite dimensional rate-independent systems with nonautonomous dissipation and an application to soft crawlers. Calc. Var. Partial Differ. Equ. 60, 191 (2021)
Lazzaroni, G., Nardini, L.: On the quasistatic limit of dynamic evolutions for a peeling test in dimension one. J. Nonlinear Sci. 28, 269–304 (2018)
Lions, J.L., Magenes, E.: Non-homogeneous Boundary Value Problems and Applications, Grundlehren der mathematischen Wissenschaften, vol. 181. Springer, Berlin (1972)
Nardini, L.: A note on the convergence of singularly perturbed second order potential-type equations. J. Dyn. Differ. Equ. 29, 783–797 (2017)
Oleinik, O.A., Shamaev, A.S., Yosifian, G.A.: Mathematical Problems in Elasticity and Homogenization. Studies in Mathematics and its Applications, vol. 26. North-Holland Publishing Co., Amsterdam (1992)
Riva, F.: On the approximation of quasistatic evolutions for the debonding of a thin film via vanishing inertia and viscosity. J. Nonlinear Sci. 30, 903–951 (2020)
Sapio, F.: A dynamic model for viscoelasticity in domains with time–dependent cracks. NoDEA Nonlinear Differ. Equ. Appl. 28, 67 (2021)
Scilla, G., Solombrino, F.: A variational approach to the quasistatic limit of viscous dynamic evolutions in finite dimension. J. Differ. Equ. 267, 6216–6264 (2019)
Slepyan, L.I.: Models and Phenomena in Fracture Mechanics. Foundations of Engineering Mechanics. Springer, Berlin (2002)
Acknowledgements
This paper is based on work supported by the National Research Project (PRIN 2017) “Variational Methods for Stationary and Evolution Problems with Singularities and Interfaces”, funded by the Italian Ministry of University and Research. The authors are members of the Gruppo Nazionale per l’Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
Appendix A.
Throughout this section we fix \(a_0>0\), \(b_0>0\), and \(c_1\ge c_0>1\). For every a, b with
we consider the polynomial \(p(z):=\beta z^3+z^2+\beta b z+a\) depending on the complex variable z. The following result about the roots of this polynomial is used in the proof of Lemma 5.2 and Proposition 5.4.
Lemma A.1
There exists a positive constant \(\alpha =\alpha (\beta ,a_0,b_0,c_0,c_1)\) such that, for every \(a,b\in {\mathbb {R}}\) satisfying (A.1), the roots of the polynomial p have real parts in the interval \((-\frac{1}{\beta },-\alpha )\).
Proof
Let us set \(z:=x+iy\) with \(x,y\in {\mathbb {R}}\). Then \(p(z)=0\) if and only if
from which we derive
Recalling that \(a>0\) and \(b-a\ge (c_0-1)a>0\), for every \(x\ge 0\) we have \(q(x)>0\) and \(r(x)>0\), so no root of p can have nonnegative real part. Moreover, since for every \(x\le -\frac{1}{\beta }\) we have \(\beta x^3+x^2\le 0\), we obtain
which imply that the real part of the roots does not belong to \((-\infty ,-\frac{1}{\beta }]\). Therefore, by calling \(z_1,z_2,z_3\in {\mathbb {C}}\) the three roots of the polynomial p, we can say
Case 1: there is only one real root. In this case by (A.3) there exists a unique \(x_1\in (-\frac{1}{\beta },0)\) which satisfies \(r(x_1)=0\) and \(3x^2_1+\frac{2}{\beta }x_1+b>0\). Indeed by setting \(y_1:=\sqrt{3x^2_1+\frac{2}{\beta }x_1+b}\) we obtain that \(x_1+iy_1\) and \(x_1-iy_1\) are two distinct non-real roots of p. Since
then \(x_1\in (-\frac{1 }{2\beta },-\frac{\beta (b-a)}{2\left( b\beta ^2+1\right) })\). Moreover
hence there exists \(x_0\in (-\frac{1}{\beta },-\frac{a}{\beta b})\) such that \(q(x_0)=0\). As a consequence of this, \((x_0,0)\) satisfies the system in (A.2), which implies that \(x_0\) is the real root of p, hence we have \(\mathfrak {R}(z_i)\in (-\frac{1}{\beta },\max \{-\frac{a}{\beta b},-\frac{\beta (b-a)}{2\left( b \beta ^2+1\right) }\})\). Thanks to (A.1) we can say \(-\tfrac{a}{\beta b}\le -\tfrac{1}{c_1\beta }\) and \(-\tfrac{\beta (b-a)}{2\left( b\beta ^2+1\right) }\le \tfrac{\beta (1-c_0)a}{2(c_1a\beta ^2+1)}\le \tfrac{\beta (1-c_0)a_0}{2\left( c_1a_0\beta ^2+1\right) }\), where in the last inequality we use the decreasing property of the function \(a\mapsto \tfrac{\beta (1-c_0)a}{2(c_1a\beta ^2+1)}\). This implies
Case 2: there are only real roots. In this case we have \(b\le \frac{1}{3\beta ^2}\), otherwise \(q'(x)>0\) for every \(x\in {\mathbb {R}}\), which forces p to have also non-real roots. Thanks to (A.1) we have also \(a<b\le \frac{1}{3\beta ^2}\). By setting \({\tilde{b}}_0:=1-\sqrt{1-3b_0\beta ^2}\), we can write
which implies
Since
thanks to (A.1), (A.4), and (A.6) we get
By combining (A.5) and (A.7), we obtain the conclusion with
\(\square \)
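The conclusion of Lemma A.1 can be probed numerically (illustration only; the parameter values are our own choices and the Durand-Kerner iteration is a generic root finder, not part of the argument): for \(b\ge c_0 a\) with \(c_0>1\), the computed roots of p indeed have real parts in \((-\frac{1}{\beta },0)\).

```python
# Numerical probe of Lemma A.1: for sample beta, a, b with b >= c0*a, c0 > 1,
# the roots of p(z) = beta*z**3 + z**2 + beta*b*z + a should all have real
# part strictly between -1/beta and 0.

def _denom(z, roots, i):
    """Product of (z - w) over the other current root approximations."""
    p = 1 + 0j
    for j, w in enumerate(roots):
        if j != i:
            p *= z - w
    return p

def cubic_roots(c2, c1, c0, iters=300):
    """Durand-Kerner iteration for the roots of z^3 + c2*z^2 + c1*z + c0."""
    f = lambda z: ((z + c2) * z + c1) * z + c0
    roots = [complex(0.4, 0.9) ** k for k in range(1, 4)]  # distinct starts
    for _ in range(iters):
        roots = [z - f(z) / _denom(z, roots, i) for i, z in enumerate(roots)]
    return roots

beta = 1.5
checks = []
for a, b in [(0.5, 1.2), (1.0, 2.0), (2.0, 5.0)]:  # here b >= 2*a, i.e. c0 = 2
    # divide p by beta to get the monic cubic z^3 + z^2/beta + b*z + a/beta
    roots = cubic_roots(1.0 / beta, b, a / beta)
    checks.append(all(-1.0 / beta < z.real < 0.0 for z in roots))
print(checks)
```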
The following easy estimate is used in the proof of Lemma 5.2.
Lemma A.2
For every \(z,w\in {\mathbb {C}}\) with \(\mathfrak {R}(z)>0\) and \(\mathfrak {R}(w)<0\) the following inequality holds:
Proof
Without loss of generality we can suppose \(\mathfrak {I}(w)>0\), otherwise we exchange the role of w with \({{\bar{w}}}\). If \(\mathfrak {I}(z)>0\), then
which give the conclusion in this case. If \(\mathfrak {I}(z)<0\), then
which conclude the proof. \(\square \)
Dal Maso, G., Sapio, F. Quasistatic Limit of a Dynamic Viscoelastic Model with Memory. Milan J. Math. 89, 485–522 (2021). https://doi.org/10.1007/s00032-021-00343-w
Keywords
- Evolution problems with memory
- Linear second order hyperbolic systems
- Dynamic mechanics
- Elastodynamics
- Viscoelasticity