1 Introduction

The theory of variational inequalities was built in the early sixties by using arguments of monotonicity and convexity, including properties of the subdifferential of a convex function. References in the field include the works [1, 3, 4, 14, 23], among others. In contrast, the theory of hemivariational inequalities started in the early eighties and is based on properties of the Clarke subdifferential, defined for locally Lipschitz functions which may be nonconvex. References in the field include the books [11, 19, 22, 24, 27]. Both variational and hemivariational inequalities have been intensively used in the study of various boundary value problems in Contact Mechanics, see for instance [9, 10, 12, 13, 16, 19, 23,24,25,26] and the references therein. Their optimal control has been the object of a number of works, including [2, 29] and, more recently, [5, 15, 17, 18]. Variational–hemivariational inequalities represent a special class of inequalities in which both convex and nonconvex functions are involved.

In [21] we have studied variational–hemivariational inequalities of the form

$$\begin{aligned}&u\in K,\quad \langle A u, v - u \rangle + \varphi (u, v) - \varphi (u, u) + j^0(u; v - u) \nonumber \\&\quad \ge \langle {\widetilde{f}}, v - u \rangle \quad \forall \, v \in K. \end{aligned}$$
(1.1)

Here and everywhere in the rest of the paper X is a reflexive Banach space, \(\langle \cdot ,\cdot \rangle \) denotes the duality pairing between \(X^*\) and X, \(K\subset X\), \(A :X \rightarrow X^*\), \(\varphi :X \times X \rightarrow \mathbb {R}\), \(j :X \rightarrow \mathbb {R}\) and \({{\widetilde{f}}}\in X^*\). Note that the function \(\varphi (u, \cdot )\) is assumed to be convex and the function j is locally Lipschitz and, in general, nonconvex. Therefore inequality (1.1) is a variational–hemivariational inequality.

A short description of the results obtained in [21] is the following. First, the existence and uniqueness of the solution of (1.1) was proved by using a surjectivity argument for pseudomonotone operators and the Banach fixed point argument. Then, the continuous dependence of the solution with respect to the functions \(\varphi \) and j was studied and a penalty method was introduced, for which a convergence result was proved. The proof was based on compactness and lower semicontinuity arguments. Finally, a mathematical model which describes the equilibrium of an elastic body in unilateral contact with a foundation was considered. The weak formulation of the model was in the form of a variational–hemivariational inequality for the displacement field. The abstract results were applied in the study of the corresponding inequality.

In this paper we continue the study of elliptic variational–hemivariational inequalities of the form (1.1) by considering a special case in which \({\widetilde{f}}\) is associated with an element f which belongs to a Hilbert space, and the set of constraints is of the form \(K_g=gK\) with \(K\subset X\) and \(g>0\). Inequalities of this special kind arise in the study of mathematical models in Contact Mechanics. We establish the weak-strong convergence of the solution with respect to f and g. This result represents the first trait of novelty of this paper since, in particular, it provides the continuous dependence of the solution with respect to the set of constraints. We also consider two optimal control problems for such inequalities in which the controls are f and g, respectively. The analysis of these problems, including the existence of optimal pairs together with various convergence results, represents the second trait of novelty of our current work.

The rest of the manuscript is organized as follows. In Sect. 2 we introduce some notation and recall the existence and uniqueness result in [21]. In Sect. 3 we provide our first convergence result. In Sects. 4 and 5 we deal with the optimal control problems. There, we present results on the existence and the convergence of the optimal pairs, respectively. Finally, in Sect. 6 we give an example which provides the physical motivation of our abstract study. It describes the contact of an elastic rod with a foundation made of a rigid body covered by a rigid-elastic layer.

2 Preliminaries

In order to recall our main existence and uniqueness result in [21] we need some preliminary material. For more details on the results we present in this section we refer to [6,7,8, 30].

We use the notation \(\Vert \cdot \Vert _X\) and \(0_X\) for the norm and the zero element of X, respectively. All the limits, upper and lower limits below are considered as \(n\rightarrow \infty \), even if we do not mention it explicitly. The symbols “\(\rightharpoonup \)” and “\(\rightarrow \)” denote the weak and the strong convergence in various spaces which will be specified. Nevertheless, for simplicity, we write \(g_n\rightarrow g\) for the convergence in \(\mathbb {R}\).

Definition 1

A function \(\varphi :X \rightarrow \mathbb {R}\) is lower semicontinuous (l.s.c.) if \(x_n \rightarrow x\) in X implies \(\liminf \varphi (x_n)\ge \varphi (x)\). A function \(\varphi :X \rightarrow \mathbb {R}\) is weakly lower semicontinuous (weakly l.s.c.) if \(x_n \rightharpoonup x\) in X implies \(\liminf \varphi (x_n)\ge \varphi (x)\).

We note that a continuous function is lower semicontinuous and a weakly lower semicontinuous function is lower semicontinuous, while the converse statements do not hold, in general. Nevertheless, the following result holds.

Proposition 2

Assume that \(\varphi :X \rightarrow \mathbb {R}\) is a convex function. Then \(\varphi \) is lower semicontinuous if and only if it is weakly lower semicontinuous.

We now recall the definition of the Clarke subdifferential for a locally Lipschitz function.

Definition 3

Let X be a Banach space. A function \(h :X \rightarrow \mathbb {R}\) is said to be locally Lipschitz if for every \(x \in X\) there exist a neighborhood \(U_x\) of x and a constant \(L_x>0\) such that \( |h(y) - h(z)| \le L_x \Vert y - z \Vert _X \) for all y, \(z \in U_x\).

We note that a convex continuous function \(h :X \rightarrow \mathbb {R}\) is locally Lipschitz. Moreover, if a function \(h :X \rightarrow \mathbb {R}\) is Lipschitz continuous on bounded sets of X, then it is locally Lipschitz, while the converse does not hold, in general.

Definition 4

Let \(h :X \rightarrow \mathbb {R}\) be a locally Lipschitz function. The generalized (Clarke) directional derivative of h at the point \(x \in X\) in the direction \(v \in X\) is defined by

$$\begin{aligned} h^{0}(x; v) = \limsup _{y \rightarrow x, \ \lambda \downarrow 0} \frac{h(y + \lambda v) - h(y)}{\lambda }. \end{aligned}$$

The generalized gradient (subdifferential) of h at x is a subset of the dual space \(X^*\) given by

$$\begin{aligned} \partial h (x) = \{\, \zeta \in X^* \mid h^{0}(x; v) \ge {\langle \zeta , v \rangle } \quad \forall \, v \in X \, \}. \end{aligned}$$

A locally Lipschitz function h is said to be regular (in the sense of Clarke) at the point \(x \in X\) if for all \(v \in X\) the one-sided directional derivative \(h' (x; v)\) exists and \(h^0(x; v) = h'(x; v)\).
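
A standard example which illustrates these notions, given here only as an illustration, is the absolute value function \(h(x)=|x|\) on \(X=\mathbb {R}\). It is convex and continuous, hence locally Lipschitz, and a direct computation based on the definitions above gives

$$\begin{aligned} h^{0}(0; v) = \limsup _{y \rightarrow 0, \ \lambda \downarrow 0} \frac{|y + \lambda v| - |y|}{\lambda }=|v|,\qquad \partial h (0) = \{\, \zeta \in \mathbb {R}\mid |v| \ge \zeta \, v \quad \forall \, v \in \mathbb {R}\, \}=[-1,1]. \end{aligned}$$

Moreover, being convex and continuous, h is regular in the sense of Clarke and \(h^0(x;v)=h'(x;v)\) for all x, \(v\in \mathbb {R}\).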

We shall use the following properties of the generalized directional derivative and the generalized gradient.

Proposition 5

Assume that \(h :X \rightarrow \mathbb {R}\) is a locally Lipschitz function. Then the following hold:

  1. (i)

    For every \(x \in X\), the function \(X \ni v \mapsto h^0(x;v) \in \mathbb {R}\) is positively homogeneous and subadditive, i.e., \(h^0(x; \lambda v) = \lambda h^0(x; v)\) for all \(\lambda \ge 0\), \(v\in X\) and \(h^0 (x; v_1 + v_2) \le h^0(x; v_1) + h^0(x; v_2)\) for all \(v_1\), \(v_2 \in X\), respectively.

  2. (ii)

    For every x, \(v \in X\), we have \(h^0(x; v) = \max \, \{ \, \langle \xi , v \rangle \mid \xi \in \partial h(x) \, \}\).

Next, we proceed with the definition of some classes of operators.

Definition 6

An operator \(A :X \rightarrow X^*\) is said to be:

  1. (a)

    monotone, if for all u, \(v \in X\), we have \(\langle Au - A v, u-v \rangle \ge 0\);

  2. (b)

    bounded, if A maps bounded sets of X into bounded sets of \(X^*\);

  3. (c)

    pseudomonotone, if it is bounded and \(u_n \rightharpoonup u\) in X with

    $$\begin{aligned}\displaystyle \limsup \,\langle A u_n, u_n -u \rangle \le 0\end{aligned}$$

    imply \(\displaystyle \liminf \, \langle A u_n, u_n - v \rangle \ge \langle A u, u - v \rangle \) for all \(v \in X\).

In the study of (1.1), we consider the following hypotheses on the data.

$$\begin{aligned}&K \ \text{ is } \text{ a } \text{ nonempty, } \text{ closed } \text{ and } \text{ convex } \text{ subset } \text{ of } \ X . \end{aligned}$$
(2.1)
$$\begin{aligned}&\left\{ \begin{array}{l} A :X \rightarrow X^* \ \text{ is } \text{ such } \text{ that } \\ \ \ \mathrm{(a)} \ \text{ it } \text{ is } \text{ pseudomonotone }. \\ \ \ \mathrm{(b)} \ \text{ there } \text{ exist } \ \alpha _A> 0, \beta , \gamma \in \mathbb {R}\ \text{ and } \ u_0 \in K \ \text{ such } \text{ that } \\ \qquad \quad \langle A v, v - u_0 \rangle \ge \alpha _A \, \Vert v \Vert _X^{2} - \beta \, \Vert v \Vert _X - \gamma \quad \text{ for } \text{ all } \ v \in X. \\ \ \ \mathrm{(c)} \ \text{ strongly } \text{ monotone, } \text{ i.e., } \text{ there } \text{ exists }\ m_A > 0 \ \text{ such } \text{ that }\\ \qquad \quad \langle Av_1 - Av_2, v_1 - v_2 \rangle \ge m_A \Vert v_1 - v_2 \Vert _X^{2} \quad \text{ for } \text{ all } \ v_1, v_2 \in X. \end{array} \right. \end{aligned}$$
(2.2)
$$\begin{aligned}&\left\{ \begin{array}{l} \varphi :X \times X \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that }\\ \ \ \mathrm{(a)} \ \varphi (\eta , \cdot ) :X \rightarrow \mathbb {R}\ \text{ is } \text{ convex } \text{ and } \text{ lower } \text{ semicontinuous },\\ \qquad \ \ \text{ for } \text{ all } \ \eta \in X. \\ \ \ \mathrm{(b)} \ \text{ there } \text{ exists } \ \alpha _\varphi > 0 \ \text{ such } \text{ that } \\ \qquad \varphi (\eta _1, v_2) - \varphi (\eta _1, v_1) + \varphi (\eta _2, v_1) - \varphi (\eta _2, v_2) \le \alpha _\varphi \Vert \eta _1 - \eta _2 \Vert _X \, \Vert v_1 - v_2 \Vert _X \\ \qquad \ \text{ for } \text{ all } \ \eta _1, \eta _2, v_1, v_2 \in X. \end{array} \right. \end{aligned}$$
(2.3)
$$\begin{aligned}&\left\{ \begin{array}{l} j :X \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that }\\ \ \ \mathrm{(a)} \ j \ \text{ is } \text{ locally } \text{ Lipschitz. } \\ \ \ \mathrm{(b)} \ \Vert \xi \Vert _{X^*} \le c_0 + c_1 \, \Vert v \Vert _X \ \text{ for } \text{ all } \ v \in X,\ \xi \in \partial j(v), \ \text{ with } \ c_0, c_1 \ge 0. \\ \ \ \mathrm{(c)} \ \text{ there } \text{ exists } \ \alpha _j > 0 \ \text{ such } \text{ that } \\ \qquad \quad j^0(v_1; v_2 - v_1) + j^0(v_2; v_1 - v_2) \le \alpha _j \, \Vert v_1 - v_2 \Vert _X^2 \\ \qquad \ \text{ for } \text{ all } \ v_1, v_2 \in X. \end{array} \right. \end{aligned}$$
(2.4)
$$\begin{aligned}&{\widetilde{f}} \in X^*. \end{aligned}$$
(2.5)

It can be proved that for a locally Lipschitz function \(j :X \rightarrow \mathbb {R}\), hypothesis (2.4)(c) is equivalent to the so-called relaxed monotonicity condition, see, e.g., [19]. Note also that if \(j :X \rightarrow \mathbb {R}\) is a convex function, then (2.4)(c) holds with \(\alpha _j = 0\), since it reduces to the monotonicity of the (convex) subdifferential. Examples of functions which satisfy condition (2.4)(c) have been provided in [20, 21, 27, 28].
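
As an elementary illustration of condition (2.4)(c) with a nonconvex function, consider the case \(X=\mathbb {R}\) and \(j(v)=-\frac{1}{2}\,v^2\). Then j is smooth with \(\partial j(v)=\{-v\}\), so that (2.4)(b) holds with \(c_0=0\), \(c_1=1\), and, for all \(v_1\), \(v_2\in \mathbb {R}\),

$$\begin{aligned} j^0(v_1; v_2 - v_1) + j^0(v_2; v_1 - v_2)=-v_1(v_2-v_1)-v_2(v_1-v_2)=(v_1-v_2)^2, \end{aligned}$$

which shows that (2.4)(c) holds with \(\alpha _j=1\), while j is not convex.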

The unique solvability of the variational–hemivariational inequality (1.1) is provided by the following existence and uniqueness result, obtained in [21].

Theorem 7

Assume (2.1)–(2.5) and, in addition, assume the smallness conditions

$$\begin{aligned}&\alpha _\varphi + \alpha _j < m_A, \end{aligned}$$
(2.6)
$$\begin{aligned}&\alpha _j < \alpha _A. \end{aligned}$$
(2.7)

Then, inequality (1.1) has a unique solution \(u \in K\).

The proof of Theorem 7 was carried out in several steps, by using the properties of the subdifferential, a surjectivity result for pseudomonotone multivalued operators and the Banach fixed point argument.

We now consider a special version of inequality (1.1). Thus, we assume in what follows that \(g>0\) and we denote by \(K_g\) the subset of X given by \(K_g=gK\). In addition, let Y be a real Hilbert space endowed with the inner product \((\cdot ,\cdot )_Y\) and let \(\pi :X\rightarrow Y\) be a given operator. Then, the inequality problem we consider in this paper is the following.

Problem \(\mathcal{P}\). Given \(f\in Y\) and \(g>0\), find \(u\in K_g\) such that

$$\begin{aligned} \langle A u, v - u \rangle + \varphi (u, v) - \varphi (u, u) + j^0(u; v - u) \ge (f,\pi v-\pi u)_Y \quad \forall \,v \in K_g.\nonumber \\ \end{aligned}$$
(2.8)

We start with a unique solvability result for Problem \(\mathcal{P}\) and, to this end, we consider the following assumptions.

$$\begin{aligned}&\left\{ \begin{array}{l} A :X \rightarrow X^*\ \mathrm{is\ strongly\ monotone\ and\ Lipschitz\ continuous,\ i.e.,}\\ \text{(a) } \ \langle Au - Av,u -v\rangle \ge m_A \Vert u -v\Vert ^{2}_{X} \quad \forall \,u,\,v\in X\ \ \mathrm{with}\ m_A>0,\\ \text{(b) } \ \Vert Au-Av\Vert _{X^*}\le L_A\, {\Vert u-v\Vert _X}\quad \forall \,u,\,v\in X\ \ \mathrm{with}\ L_A>0. \end{array}\right. \end{aligned}$$
(2.9)
$$\begin{aligned}&\left\{ \begin{array}{l} \pi :X \rightarrow Y\ \mathrm{is\ a\ linear\ continuous\ operator,\ i.e.,}\\ \Vert \pi v\Vert _{Y} \le d_0\,\Vert v\Vert _X\quad \forall \,v\in X\ \ \mathrm{with}\ d_0>0.\\ \end{array}\right. \end{aligned}$$
(2.10)

We have the following existence and uniqueness result.

Theorem 8

Assume that (2.1), (2.3), (2.4), (2.9) and (2.10) hold. In addition, assume that (2.6) holds. Then, for each \(f\in Y\) and \(g>0\), the variational–hemivariational inequality (2.8) has a unique solution.

Proof

First, we note that assumption (2.1) on the set K implies that the set \(K_g=gK\) satisfies condition (2.1), too. Moreover, it is well known that a monotone Lipschitz continuous operator is pseudomonotone and, therefore, assumption (2.9) on the operator A implies that \(A:X\rightarrow X^*\) is pseudomonotone. In addition, for any \(v\in X\) and \(u_0\in K_g\) we have

$$\begin{aligned} \langle Av,v-u_0\rangle= & {} \langle Av-Au_0,v-u_0\rangle +\langle Au_0,v-u_0\rangle \\\ge & {} m_A \Vert v-u_0\Vert ^{2}_{X}-\Vert Au_0\Vert _{X^*}\Vert v-u_0\Vert _{X}\\\ge & {} m_A \big (\Vert v\Vert _{X}-\Vert u_0\Vert _{X}\big )^2-\Vert Au_0\Vert _{X^*}\Vert v\Vert _{X}-\Vert Au_0\Vert _{X^*}\Vert u_0\Vert _{X} \end{aligned}$$

which shows that A satisfies condition (2.2)(b) with \(\alpha _A=m_A\). We conclude from above that assumption (2.2) holds. Next, we use assumption (2.10) to see that, given \(f\in Y\), there exists a unique element \({{\widetilde{f}}}\in X^*\) such that

$$\begin{aligned} \langle {{\widetilde{f}}},v\rangle =(f,\pi v)_Y\quad \ \forall \, v\in X. \end{aligned}$$
(2.11)
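
Indeed, assumption (2.10) guarantees that the mapping \(v\mapsto (f,\pi v)_Y\) is a linear continuous functional on X, since

$$\begin{aligned} |(f,\pi v)_Y|\le \Vert f\Vert _{Y}\Vert \pi v\Vert _{Y}\le d_0\,\Vert f\Vert _{Y}\Vert v\Vert _X \quad \ \forall \, v\in X, \end{aligned}$$

so that the element \({\widetilde{f}}\) in (2.11) is well defined and satisfies \(\Vert {\widetilde{f}}\Vert _{X^*}\le d_0\,\Vert f\Vert _{Y}\).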

Finally, we note that condition (2.6) implies condition (2.7) since \(\alpha _A=m_A\) and \(\alpha _\varphi >0\). Theorem 8 is now a direct consequence of Theorem 7 combined with equality (2.11). \(\square \)

3 Convergence Results

Theorem 8 allows us to define the map \((f,g)\mapsto u(f,g)\) which associates to each pair \((f,g)\in Y\times (0,+\infty )\) the solution \(u=u(f,g)\in K_g\) of the variational–hemivariational inequality (2.8). An important property of this map is its weak-strong continuity, which we state and prove in this section under additional assumptions. This property represents a crucial ingredient in the study of optimal control problems associated with inequality (2.8). Assume in what follows that

$$\begin{aligned}&0_X\in K. \end{aligned}$$
(3.1)
$$\begin{aligned}&\left\{ \begin{array}{l} \varphi :X\times X \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that } \\ \text{(a) } \ \varphi (u,\lambda v)=\lambda \varphi (u,v) \quad \forall \,u,\,v\in X, \ \lambda >0.\\ \text{(b) } \ \varphi (v,v)\ge 0\quad \forall \,v\in X.\\ \text{(c) } \eta _n\rightharpoonup \eta \quad \mathrm{in}\quad X, \ u_n\rightharpoonup u \quad \mathrm{in}\quad X\quad \Longrightarrow \quad \\ \qquad \displaystyle \limsup \,[\varphi (\eta _n,v)-\varphi (\eta _n,u_n)]\le \varphi (\eta ,v)-\varphi (\eta ,u)\quad \forall \,v\in X. \end{array}\right. \end{aligned}$$
(3.2)
$$\begin{aligned}&\left\{ \begin{array}{l} j :X \rightarrow \mathbb {R}\ \text{ is } \text{ such } \text{ that }\ u_n\rightharpoonup u \quad \mathrm{in}\quad X\quad \Longrightarrow \\ \qquad \ \limsup j^0(u_n; v - u_n) \le j^0 (u; v - u)\quad \forall \,v\in X. \end{array} \right. \end{aligned}$$
(3.3)
$$\begin{aligned}&\left\{ \begin{array}{l} \pi :X \rightarrow Y\ \text{ is } \text{ such } \text{ that }\\ v_n\rightharpoonup v\quad \mathrm{in}\quad X \quad \Longrightarrow \quad \pi v_n\rightarrow \pi v\quad \mathrm{in}\quad Y. \end{array}\right. \end{aligned}$$
(3.4)

Note that hypothesis (3.3) was already used in [21, 27], where sufficient conditions on functions which satisfy this hypothesis can be found. In addition, assumption (3.4) states that the operator \(\pi :X\rightarrow Y\) is completely continuous.
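
A typical situation in which assumption (3.4) is satisfied, and which corresponds to the choice made in Sect. 6 below, is provided by a compact embedding. For instance, taking

$$\begin{aligned} X=\{\,v\in H^1(0,L)\mid v(0)=0\,\},\qquad Y=L^2(0,L),\qquad \pi v=v\quad \forall \, v\in X, \end{aligned}$$

the compactness of the embedding of \(H^1(0,L)\) into \(L^2(0,L)\) implies that \(v_n\rightharpoonup v\) in X yields \(\pi v_n\rightarrow \pi v\) in Y and, therefore, (3.4) holds.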

The main result of this section is the following.

Theorem 9

Assume that (2.1), (2.3), (2.4), (2.9), (2.10), (3.1)–(3.4) hold. In addition, assume that (2.6) holds, too. Let \(\{f_n\}\subset Y\), \(\{g_n\}\subset (0,+\infty )\) and let \(f\in Y\), \(g>0\). Then,

$$\begin{aligned} f_n\rightharpoonup f\quad \mathrm{in}\quad Y,\quad g_n\rightarrow g \quad \Longrightarrow \quad u(f_n,g_n)\rightarrow u(f,g)\quad \mathrm{in}\quad X. \end{aligned}$$
(3.5)

The proof of Theorem 9 will be carried out in several steps that we present in what follows. Everywhere below we assume that the hypotheses of Theorem 9 hold. The first step of the proof is the following.

Lemma 10

Given \(f\in Y\) and \(g>0\), the solution \(u=u(f,g)\) of the variational–hemivariational inequality (2.8) satisfies the bound

$$\begin{aligned} \Vert u\Vert _{X}\le \frac{1}{m_A-\alpha _j}\,\big (\Vert A0_X\Vert _{X^*}+d_0\Vert f\Vert _{Y}+c_0\big ). \end{aligned}$$
(3.6)

Proof

We use assumption (3.1), which implies that \(0_X=g\,0_X\in K_g\), and take \(v=0_X\) in (2.8); then we use assumptions (3.2)(a) and (b). As a result we obtain

$$\begin{aligned} \langle A u, u \rangle \le (f,\pi u)_Y+ j^0(u; - u). \end{aligned}$$

We now write \(A u=Au-A0_X+A0_X\) and use the property (2.9)(a) of the operator A and inequality (2.10) to see that

$$\begin{aligned} m_A\Vert u\Vert ^2_{X}\le (\Vert A0_X\Vert _{X^*}+ d_0\,\Vert f\Vert _{Y})\Vert u\Vert _{X}+j^0(u; - u). \end{aligned}$$
(3.7)

On the other hand, taking \(v_1=u\) and \(v_2=0_X\) in (2.4)(c) we find that

$$\begin{aligned} j^0(u; - u)\le \alpha _j\Vert u\Vert ^2_{X}-j^0(0_X;u). \end{aligned}$$
(3.8)

Moreover, using Proposition 5 (ii) we have

$$\begin{aligned} -j^0(0_X;u)\le & {} | j^0(0_X;u)|= |\max _{\xi \in \partial j(0_X)} \langle \xi , u\rangle | \\\le & {} \max _{\xi \in \partial j(0_X)} |\langle \xi , u\rangle |\le \max _{\xi \in \partial j(0_X)} \Vert \xi \Vert _{X^*}\Vert u\Vert _X \end{aligned}$$

and, using condition (2.4)(b) with \(v=0_X\) yields

$$\begin{aligned} -j^0(0_X;u)\le c_0\Vert u\Vert _X. \end{aligned}$$
(3.9)

We now combine inequalities (3.8) and (3.9) to see that

$$\begin{aligned} j^0(u; - u)\le \alpha _j\Vert u\Vert ^2_{X}+c_0\Vert u\Vert _X, \end{aligned}$$

then we use this inequality in (3.7) to deduce that

$$\begin{aligned} (m_A-\alpha _j)\Vert u\Vert _{X}\le \Vert A0_X\Vert _{X^*}+ d_0\,\Vert f\Vert _{Y}+c_0. \end{aligned}$$

Inequality (3.6) is now a direct consequence of the smallness assumption (2.6). \(\square \)

The next step of the proof is given by the following convergence result.

Lemma 11

Let \(\{f_n\}\subset Y\), \(f\in Y\) and let \(g>0\). Then,

$$\begin{aligned} f_n\rightharpoonup f\quad \mathrm{in}\quad Y\quad \Longrightarrow \quad u(f_n,g)\rightarrow u(f,g)\quad \mathrm{in}\quad X. \end{aligned}$$
(3.10)

Proof

Let \(\{f_n\}\) be a sequence of elements in Y such that

$$\begin{aligned} f_n\rightharpoonup f\quad \mathrm{in}\quad Y\quad \text{ as } \ n\rightarrow \infty \end{aligned}$$
(3.11)

and, for simplicity, denote \(u(f_n,g)=u_n\), \(u(f,g)=u\). Then, it follows that \(\{f_n\}\) is a bounded sequence in Y, hence inequality (3.6) implies that \(\{u_n\}\) is a bounded sequence in X. Therefore, by the reflexivity of X we deduce that there exists \({\widetilde{u}}\in X\) such that, passing to a subsequence still denoted \(\{u_n\}\),

$$\begin{aligned} {u}_n\rightharpoonup {\widetilde{u}}\quad \text{ in } \ X\quad \text{ as } \ n\rightarrow \infty . \end{aligned}$$
(3.12)

On the other hand, we recall that \(K_g\) is a closed convex subset of the space X, hence weakly closed, and \(\{u_{n}\}\subset K_g\). Then, (3.12) implies that

$$\begin{aligned} {\widetilde{u}}\in K_g. \end{aligned}$$
(3.13)

Let \(n\in \mathbb {N}\). We write inequality (2.8) for \(f=f_n\) to obtain

$$\begin{aligned} \langle Au_n,u_n-v\rangle\le & {} \varphi (u_n,v)-\varphi (u_n, u_n)\nonumber \\&+j^0(u_n;v-u_n)+(f_n,\pi u_n-\pi v)_Y \quad \forall \,v\in K_g, \end{aligned}$$
(3.14)

then we take \(v = {\widetilde{u}}\in K_g\) to find that

$$\begin{aligned} \langle Au_n,u_n - {\widetilde{u}} \rangle \le \varphi (u_n,{\widetilde{u}})-\varphi (u_n,u_n) +j^0(u_n;{\widetilde{u}}-u_n)+(f_n,\pi u_n-\pi {\widetilde{u}} )_Y.\nonumber \\ \end{aligned}$$
(3.15)

We use the convergences (3.11), (3.12) and assumptions (3.2)(c), (3.3), (3.4) to see that

$$\begin{aligned}&\limsup \big [\varphi (u_n,{\widetilde{u}})-\varphi (u_n,u_n)\big ]\le 0,\\&\limsup j^0(u_n;{\widetilde{u}}-u_n)\le j^0({\widetilde{u}};0_X)=0,\\&\lim \,(f_n,\pi u_n-\pi {\widetilde{u}} )_Y=0. \end{aligned}$$

Therefore, inequality (3.15) implies that

$$\begin{aligned} \limsup \,\langle Au_n,u_n - {\widetilde{u}}\rangle \le 0. \end{aligned}$$

Next, since A is pseudomonotone, the convergence (3.12) and Definition 6 (c) guarantee that

$$\begin{aligned} \liminf \, \langle A{u}_n, {u}_n - v \rangle \ge \langle A{\widetilde{u}},{\widetilde{u}}-v\rangle \qquad \forall \, v\in X. \end{aligned}$$
(3.16)

On the other hand, passing to the upper limit in inequality (3.14) and using the convergences (3.11), (3.12) and assumptions (3.2)(c), (3.3), (3.4), yields

$$\begin{aligned} \limsup \,\langle Au_n,u_n-v\rangle\le & {} \varphi ({\widetilde{u}},v)-\varphi ({\widetilde{u}},{\widetilde{u}})\nonumber \\&+j^0({\widetilde{u}};v-{\widetilde{u}})+(f,\pi {\widetilde{u}}-\pi v)_Y \quad \forall \,v\in K_g.\quad \end{aligned}$$
(3.17)

We now combine the inequalities (3.16) and (3.17) to see that

$$\begin{aligned} \langle A{{\widetilde{u}}},v-{{\widetilde{u}}}\rangle +\,\varphi ({{\widetilde{u}}},v)-\varphi ({{\widetilde{u}}},{{\widetilde{u}}}) +j^0({\widetilde{u}};v-{\widetilde{u}}) \ge (f,\pi v-\pi {{\widetilde{u}}})_Y \quad \forall \,v\in K_g.\nonumber \\ \end{aligned}$$
(3.18)

Next, it follows from (3.13) and (3.18) that \({{\widetilde{u}}}\) is a solution of inequality (2.8) and, by the uniqueness of the solution of this inequality, guaranteed by Theorem 8, we obtain that

$$\begin{aligned} {\widetilde{u}}=u \end{aligned}$$
(3.19)

where, recall, \(u=u(f,g)\). This implies that the whole sequence \(\{u_n\}\) converges weakly in X to u as \(n\rightarrow \infty \), i.e.,

$$\begin{aligned} {u}_n\rightharpoonup {u}\quad \text{ in } \ X\quad \text{ as } \ n\rightarrow \infty . \end{aligned}$$
(3.20)

Let \(n\in \mathbb {N}\) be given. We take \(v=u\) in inequality (3.14) to see that

$$\begin{aligned} \langle Au_n,u_n-u\rangle\le & {} \varphi (u_n,u)-\varphi (u_n, u_n)\nonumber \\&+j^0(u_n;u-u_n)+(f_n,\pi u_n-\pi u)_Y. \end{aligned}$$
(3.21)

Next, we use assumption (2.9) (a) and (3.21) to find that

$$\begin{aligned} m_A\,\Vert u_n-u\Vert _X^2\le & {} \langle Au_n-Au,u_n-u\rangle \\= & {} \langle Au_n,u_n-u\rangle -\langle Au, u_n-u\rangle \le \varphi (u_n,u)-\varphi (u_n, u_n)\\&+j^0(u_n;u-u_n)+(f_n,\pi u_n-\pi u)_Y-\langle Au, u_n-u\rangle . \end{aligned}$$

We now pass to the upper limit in this inequality and use the convergences (3.11), (3.20) and assumptions (3.2)(c), (3.3), (3.4) to deduce that \(\Vert u_n-u\Vert _X^2\rightarrow 0\), which concludes the proof. \(\square \)

We proceed with the following result.

Lemma 12

Let \(\{f_n\}\subset Y\) be a bounded sequence and let \(\{g_n\}\subset (0,+\infty )\), \(g>0\). Then,

$$\begin{aligned} g_n\rightarrow g\quad \Longrightarrow \quad u(f_n,g_n)-u(f_n,g)\rightarrow 0_X\quad \mathrm{in}\quad X. \end{aligned}$$
(3.22)

Proof

Let \(n\in \mathbb {N}\) and, for simplicity, denote \(u(f_n,g_n)=u_n\), \(u(f_n,g)={\widetilde{u}}_n\), \(c_n=\frac{g_n}{g}\). First, we write (2.8) for \(f=f_n\) to see that

$$\begin{aligned}&\langle A {\widetilde{u}}_n, v - {\widetilde{u}}_n \rangle +\, \varphi ({\widetilde{u}}_n, v) - \varphi ({\widetilde{u}}_n, {\widetilde{u}}_n)\nonumber \\&\quad + j^0({\widetilde{u}}_n; v - {\widetilde{u}}_n) \ge (f_n,\pi v-\pi {\widetilde{u}}_n)_Y \quad \forall \,v \in K_g, \end{aligned}$$
(3.23)

then we write (2.8) for \(f=f_n\) and \(g=g_n\) to obtain

$$\begin{aligned}&\langle Au_n,v-u_n\rangle +\,\varphi (u_n,v)-\varphi (u_n, u_n)\nonumber \\&\quad +j^0(u_n;v-u_n)\ge (f_n,\pi v-\pi u_n)_Y \quad \forall \,v\in K_{g_n}. \end{aligned}$$
(3.24)

Next, since \(K_{g_n}=g_nK=\frac{g_n}{g}K_g=c_nK_g\), we are allowed to take \(v=\frac{1}{c_n}u_n\in K_g\) in (3.23) and \(v=c_n{\widetilde{u}}_n\in K_{g_n}\) in (3.24). As a result we obtain

$$\begin{aligned}&\left\langle A {\widetilde{u}}_n, \frac{1}{c_n}u_n - {\widetilde{u}}_n \right\rangle \nonumber \\&\qquad +\,\varphi \left( {\widetilde{u}}_n, \frac{1}{c_n}u_n\right) - \varphi \left( {\widetilde{u}}_n, {\widetilde{u}}_n\right) \nonumber \\&\qquad + j^0({\widetilde{u}}_n; \frac{1}{c_n}u_n - {\widetilde{u}}_n) \ge \left( f_n,\pi \Big (\frac{1}{c_n}u_n\Big )-\pi {\widetilde{u}}_n\right) _Y, \end{aligned}$$
(3.25)
$$\begin{aligned}&\langle Au_n,c_n {\widetilde{u}}_n-u_n\rangle +\varphi (u_n,c_n {\widetilde{u}}_n)-\varphi (u_n, u_n)\nonumber \\&\qquad +j^0(u_n;c_n {\widetilde{u}}_n-u_n)\ge (f_n,\pi (c_n {\widetilde{u}}_n)-\pi u_n)_Y. \end{aligned}$$
(3.26)

We now multiply inequality (3.25) by \(c_n>0\), use assumption (3.2)(a) and Proposition 5 (i), then we add the resulting inequality to (3.26) to find that

$$\begin{aligned}&\langle Au_n-A{\widetilde{u}}_n,u_n-c_n {\widetilde{u}}_n\rangle \\&\quad \le \varphi (u_n,c_n {\widetilde{u}}_n)-\varphi (u_n, u_n)+\varphi ({\widetilde{u}}_n,u_n)-\varphi ({\widetilde{u}}_n,c_n {\widetilde{u}}_n)\\&\qquad +j^0({\widetilde{u}}_n;u_n-c_n {\widetilde{u}}_n)+j^0(u_n;c_n {\widetilde{u}}_n-u_n). \end{aligned}$$

Therefore,

$$\begin{aligned}&\langle Au_n-A{\widetilde{u}}_n,u_n-{\widetilde{u}}_n\rangle \le \langle Au_n-A{\widetilde{u}}_n,c_n {\widetilde{u}}_n-{\widetilde{u}}_n\rangle \nonumber \\&\quad +\varphi (u_n,c_n {\widetilde{u}}_n)-\varphi (u_n, u_n)+\varphi ({\widetilde{u}}_n,u_n)-\varphi ({\widetilde{u}}_n,c_n {\widetilde{u}}_n)\nonumber \\&\quad +j^0({\widetilde{u}}_n;u_n-c_n {\widetilde{u}}_n)+j^0(u_n;c_n {\widetilde{u}}_n-u_n). \end{aligned}$$
(3.27)

We now use the properties (2.9) of the operator A to see that

$$\begin{aligned}&\langle Au_n-A{\widetilde{u}}_n,u_n-{\widetilde{u}}_n\rangle \ge m_A\Vert u_n-{\widetilde{u}}_n\Vert ^2_X, \end{aligned}$$
(3.28)
$$\begin{aligned}&\langle Au_n-A{\widetilde{u}}_n,c_n {\widetilde{u}}_n-{\widetilde{u}}_n\rangle \le L_A|1-c_n|\,\Vert u_n-{\widetilde{u}}_n\Vert _X\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.29)

Moreover, assumption (2.3)(b) yields

$$\begin{aligned}&\varphi (u_n,c_n {\widetilde{u}}_n)-\varphi (u_n, u_n)+\varphi ({\widetilde{u}}_n,u_n)-\varphi ({\widetilde{u}}_n,c_n {\widetilde{u}}_n)\\&\qquad \le \alpha _\varphi \Vert u_n-{\widetilde{u}}_n\Vert _X\Vert u_n-c_n {\widetilde{u}}_n\Vert _X \end{aligned}$$

and, writing \(u_n-c_n {\widetilde{u}}_n=u_n-{\widetilde{u}}_n+{\widetilde{u}}_n-c_n{\widetilde{u}}_n\) we deduce that

$$\begin{aligned}&\varphi (u_n,c_n {\widetilde{u}}_n)-\varphi (u_n, u_n)+\varphi ({\widetilde{u}}_n,u_n)-\varphi ({\widetilde{u}}_n,c_n {\widetilde{u}}_n)\nonumber \\&\qquad \le \alpha _\varphi \Vert u_n-{\widetilde{u}}_n\Vert ^2_X+\alpha _\varphi |1-c_n|\,\Vert u_n-{\widetilde{u}}_n\Vert _X\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.30)

On the other hand, using Proposition 5 (i), again, we deduce that

$$\begin{aligned}&j^0({\widetilde{u}}_n;u_n-c_n {\widetilde{u}}_n)+j^0(u_n;c_n {\widetilde{u}}_n-u_n)\\&\quad =j^0({\widetilde{u}}_n;u_n-{\widetilde{u}}_n+{\widetilde{u}}_n-c_n {\widetilde{u}}_n)+j^0(u_n;{\widetilde{u}}_n-u_n+c_n{\widetilde{u}}_n-{\widetilde{u}}_n)\\&\quad \le j^0({\widetilde{u}}_n;u_n-{\widetilde{u}}_n)+ j^0({\widetilde{u}}_n;{\widetilde{u}}_n-c_n{\widetilde{u}}_n)+j^0({u}_n;{\widetilde{u}}_n-u_n)+j^0(u_n;c_n{\widetilde{u}}_n-{\widetilde{u}}_n). \end{aligned}$$

Therefore, assumption (2.4)(c) yields

$$\begin{aligned}&j^0({\widetilde{u}}_n;u_n-c_n {\widetilde{u}}_n)+j^0(u_n;c_n {\widetilde{u}}_n-u_n)\nonumber \\&\quad \le \alpha _j\Vert u_n-{\widetilde{u}}_n\Vert ^2_X+ j^0({\widetilde{u}}_n;{\widetilde{u}}_n-c_n{\widetilde{u}}_n)+j^0(u_n;c_n{\widetilde{u}}_n-{\widetilde{u}}_n). \end{aligned}$$
(3.31)

Moreover, Proposition 5(ii) implies that

$$\begin{aligned} j^0({\widetilde{u}}_n;{\widetilde{u}}_n-c_n{\widetilde{u}}_n)= & {} \max _{\xi \in \partial j({\widetilde{u}}_n)} \langle \xi , {\widetilde{u}}_n-c_n{\widetilde{u}}_n\rangle \\\le & {} \max _{\xi \in \partial j({\widetilde{u}}_n)} \Vert \xi \Vert _{X^*}\Vert {\widetilde{u}}_n-c_n{\widetilde{u}}_n\Vert _X \end{aligned}$$

and, using condition (2.4)(b) yields

$$\begin{aligned} j^0({\widetilde{u}}_n;{\widetilde{u}}_n-c_n{\widetilde{u}}_n)\le |1-c_n|\,(c_0+c_1\Vert {\widetilde{u}}_n\Vert _X)\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.32)

A similar argument shows that

$$\begin{aligned} j^0(u_n;c_n{\widetilde{u}}_n-{\widetilde{u}}_n)\le |1-c_n|\,(c_0+c_1\Vert u_n\Vert _X)\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.33)

We now combine inequalities (3.31)–(3.33) to obtain

$$\begin{aligned}&j^0({\widetilde{u}}_n;u_n-c_n {\widetilde{u}}_n)+j^0(u_n;c_n {\widetilde{u}}_n-u_n)\nonumber \\&\quad \le \alpha _j\Vert u_n-{\widetilde{u}}_n\Vert ^2_X+ |1-c_n|\,(2c_0+c_1\Vert {\widetilde{u}}_n\Vert _X+c_1\Vert u_n\Vert _X)\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.34)

Therefore, using inequalities (3.27)–(3.30) and (3.34) we see that

$$\begin{aligned}&(m_A-\alpha _\varphi -\alpha _j)\Vert u_n-{\widetilde{u}}_n\Vert ^2_X\nonumber \\&\quad \le (L_A+\alpha _\varphi )|1-c_n|\,\Vert u_n-{\widetilde{u}}_n\Vert _X\Vert {\widetilde{u}}_n\Vert _X\nonumber \\&\qquad +|1-c_n|\,(2c_0+c_1\Vert {\widetilde{u}}_n\Vert _X+c_1\Vert u_n\Vert _X)\Vert {\widetilde{u}}_n\Vert _X. \end{aligned}$$
(3.35)

On the other hand, since \({\widetilde{u}}_n\) and \(u_n\) are solutions to inequalities (3.23) and (3.24), respectively, and the sequence \(\{f_n\}\) is bounded in Y, it follows from Lemma 10 that \(\{u_n\}\) and \(\{{\widetilde{u}}_n\}\) are bounded sequences in X. Therefore, inequality (3.35) combined with the smallness assumption (2.6) implies that there exists a positive constant k which does not depend on n such that

$$\begin{aligned} \Vert u_n-{\widetilde{u}}_n\Vert ^2_X\le k\,|1-c_n|. \end{aligned}$$

Finally, we pass to the limit as \(n\rightarrow \infty \) and use the convergence \(c_n\rightarrow 1\) to see that \(\Vert u_n-{\widetilde{u}}_n\Vert _X\rightarrow 0\), which concludes the proof. \(\square \)

We now have all the ingredients to provide the proof of Theorem 9.

Proof

Assume that \(f_n\rightharpoonup f\ \mathrm{in}\ Y\) and \(g_n\rightarrow g\). We write

$$\begin{aligned} \Vert u(f_n,g_n)-u(f,g)\Vert _X\le \Vert u(f_n,g_n)-u(f_n,g)\Vert _X+\Vert u(f_n,g)-u(f,g)\Vert _X, \end{aligned}$$

then we apply Lemmas 11 and 12 to see that \(\Vert u(f_n,g_n)-u(f,g)\Vert _X\rightarrow 0\) which concludes the proof. \(\square \)

4 Two Optimal Control Problems

In this section we study two optimal control problems associated with inequality (2.8). In the first problem the control is \(f\in Y\) and, in the second one, we control the solution of the variational–hemivariational inequality (2.8) with \(g>0\). Everywhere below \(X\times Y\) represents the product of the spaces X and Y, equipped with the canonical product topology. The notation \(X\times \mathbb {R}\) will have a similar meaning.

We start with the first control problem. Let \(g>0\) be given and consider the set of admissible pairs for inequality (2.8) defined by

$$\begin{aligned} \mathcal{V}_{ad}^1 = \{\,(u, f)\in K_g\times Y \ \text{ such } \text{ that }\ (2.8)\,\text{ holds }\,\}. \end{aligned}$$
(4.1)

It follows from here that a pair (u, f) belongs to \(\mathcal{V}_{ad}^1\) if and only if \(f\in Y\) and, moreover, u is the solution of the variational–hemivariational inequality (2.8) with the data f and g, i.e. \(u=u(f,g)\). Consider also a cost functional \(\mathcal{L}_1:X\times Y\rightarrow \mathbb {R}\). Then, the problem we are interested in is the following.

Problem \(\mathcal{Q}_1\). Find \((u^*, f^*)\in \mathcal{V}_{ad}^1\) such that

$$\begin{aligned} \mathcal{L}_1(u^*,f^*)=\min _{(u,f)\in \mathcal{V}_{ad}^1} \mathcal{L}_1(u,f). \end{aligned}$$
(4.2)

We assume that

$$\begin{aligned} \mathcal{L}_1(u,f)=U(u)+F(f)\qquad \forall \, u\in X,\ f\in Y \end{aligned}$$
(4.3)

where U and F are functions which satisfy the following conditions.

$$\begin{aligned} \left\{ \begin{array}{l} U:X\rightarrow \mathbb {R}\ \mathrm{is\ continuous,\ bounded\ and\ positive, \ i.e.,} \\ \text{(a) } \ v_n\rightarrow v\quad \mathrm{in}\quad X \quad \Longrightarrow \quad U(v_n)\rightarrow U(v). \\ \text{(b) } \ U\ \text{ maps } \text{ bounded } \text{ sets } \text{ in }\ X\ \text{ into } \text{ bounded } \text{ sets } \text{ in }\ \mathbb {R}. \\ \text{(c) } \ U(v)\ge 0\quad \forall \, v\in X.\end{array}\right. \end{aligned}$$
(4.4)
$$\begin{aligned} \left\{ \begin{array}{l} F:Y\rightarrow \mathbb {R}\ \mathrm{is\ weakly\ lower\ semicontinuous,\ positive\ and\ coercive, \, i.e.,}\\ \text{(a) } \ f_n\rightharpoonup f\quad \mathrm{in}\quad Y \quad \Longrightarrow \quad \liminf F(f_n)\ge F(f). \\ \text{(b) } \ F(f)\ge 0\quad \forall \, f\in Y.\\ \text{(c) } \ \Vert f_n\Vert _Y\rightarrow +\infty \quad \Longrightarrow \quad F(f_n) \rightarrow +\infty . \end{array}\right. \end{aligned}$$
(4.5)
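
A simple example of cost functional which satisfies conditions (4.3)–(4.5) is the quadratic one: given a target element \(u_d\in X\) and a weight \(\omega >0\), both introduced here only for illustration, one may take

$$\begin{aligned} U(u)=\Vert u-u_d\Vert ^2_X\quad \forall \, u\in X,\qquad F(f)=\omega \,\Vert f\Vert ^2_Y\quad \forall \, f\in Y. \end{aligned}$$

Indeed, U is continuous, maps bounded sets of X into bounded sets of \(\mathbb {R}\) and is positive, while F is weakly lower semicontinuous (as the square of the norm of the Hilbert space Y), positive and coercive.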

Our first result in this section is the following.

Theorem 13

Assume that (2.1), (2.3), (2.4), (2.9), (2.10), (3.1)–(3.4), (4.3)–(4.5) hold. In addition, assume that (2.6) holds and \(g>0\). Then, there exists at least one solution \((u^*, f^*)\in \mathcal{V}_{ad}^1\) of Problem \(\mathcal{Q}_1\).

Proof

Let

$$\begin{aligned} \theta =\inf _{(u,f)\in \mathcal{V}_{ad}^1} \mathcal{L}_1(u, f)\in \mathbb {R} \end{aligned}$$
(4.6)

and let \(\{(u_n,f_n)\}\subset \mathcal{V}_{ad}^1\) be a minimizing sequence for the functional \(\mathcal{L}_1\), i.e.,

$$\begin{aligned} \lim \,\mathcal{L}_1(u_n, f_n)=\theta . \end{aligned}$$
(4.7)

We claim that the sequence \(\{f_n\}\) is bounded in Y. Arguing by contradiction, assume that \(\{f_n\}\) is not bounded in Y. Then, passing to a subsequence still denoted \(\{f_n\}\), we have

$$\begin{aligned} \Vert f_n\Vert _Y\rightarrow +\infty \quad \text {as}\quad n\rightarrow +\infty . \end{aligned}$$
(4.8)

We now use equality (4.3) and inequality (4.4)(c) to see that

$$\begin{aligned} \mathcal{L}_1(u_n, f_n) \ge F(f_n). \end{aligned}$$

Therefore, passing to the limit as \(n\rightarrow +\infty \) and using (4.8) combined with assumption (4.5)(c) we deduce that

$$\begin{aligned} \lim \,\mathcal{L}_1(u_n, f_n)= +\infty . \end{aligned}$$
(4.9)

Equalities (4.7) and (4.9) imply that \(\theta =+\infty \) which is in contradiction with (4.6).

We conclude from above that the sequence \(\{f_n\}\) is bounded in Y and, therefore there exists \(f^*\in Y\) such that, passing to a subsequence still denoted \(\{f_n\}\), we have

$$\begin{aligned} f_n\rightharpoonup f^*\quad \text {in}\quad Y\quad \text {as}\quad n\rightarrow +\infty . \end{aligned}$$
(4.10)

Let \(u^*\) be the solution of the variational–hemivariational inequality (2.8) for \(f=f^*\), i.e., \(u^*=u(f^*,g)\). Then, by the definition (4.1) of the set \(\mathcal{V}_{ad}^1\) we have

$$\begin{aligned} (u^*, f^*)\in \mathcal{V}_{ad}^1. \end{aligned}$$
(4.11)

Moreover, using (4.10) and (3.5) it follows that

$$\begin{aligned} u_n \rightarrow u^*\quad \text {in}\quad X\quad \text {as}\quad n\rightarrow +\infty . \end{aligned}$$
(4.12)

We now use the convergences (4.10), (4.12) and the weak lower semicontinuity of the functional \(\mathcal{L}_1\), guaranteed by assumptions (4.4)(a) and (4.5)(a), to deduce that

$$\begin{aligned} \liminf \ \mathcal{L}_1(u_n,f_n)\ge \mathcal{L}_1(u^*,f^*). \end{aligned}$$
(4.13)

It follows now from (4.7) and (4.13) that

$$\begin{aligned} \theta \ge \mathcal{L}_1(u^*,f^*). \end{aligned}$$
(4.14)

In addition, (4.6) and (4.11) yield

$$\begin{aligned} \theta \le \mathcal{L}_1(u^*,f^*). \end{aligned}$$
(4.15)

We now combine (4.11) with inequalities (4.14) and (4.15) to see that (4.2) holds, which concludes the proof. \(\square \)

We now move to the second problem in which the control is \(g>0\). To this end we assume that \(f\in Y\) is given and we consider the set \(W=[g_0,\infty )\) where \(g_0>0\) is given, as well. Also, we define the set of admissible pairs by

$$\begin{aligned} \mathcal{V}_{ad}^2 = \{\,(u, g)\in K\times W \ \text{ such } \text{ that }\ (2.8)\,\text{ holds }\,\}. \end{aligned}$$
(4.16)

It follows from here that a pair (u, g) belongs to \(\mathcal{V}_{ad}^2\) if and only if \(g\in W\) and, moreover, u is the solution of the variational–hemivariational inequality (2.8) with the data f and g, i.e., \(u=u(f,g)\). Let \(\mathcal{L}_2:X\times W\rightarrow \mathbb {R}\) be a cost functional that we shall describe below. Then, the second optimal control problem we study in this section is the following.

Problem \(\mathcal{Q}_2\). Find \((u^*, g^*)\in \mathcal{V}_{ad}^2\) such that

$$\begin{aligned} \mathcal{L}_2(u^*,g^*)=\min _{(u,g)\in \mathcal{V}_{ad}^2} \mathcal{L}_2(u,g). \end{aligned}$$
(4.17)

We assume that

$$\begin{aligned} \mathcal{L}_2(u,g)=U(u)+G(g)\qquad \forall \, u\in X,\ g\in W \end{aligned}$$
(4.18)

where U satisfies condition (4.4) and G is such that

$$\begin{aligned} \left\{ \begin{array}{l} G:W\rightarrow \mathbb {R}\ \mathrm{is\ weakly\ lower\ semicontinuous,\ positive\ and\ coercive, \ i.e.,}\\ \text{(a) } \ g_n\rightarrow g \quad \Longrightarrow \quad \liminf \, G(g_n)\ge G(g). \\ \text{(b) } \ G(g)\ge 0\quad \forall \, g\in W.\\ \text{(c) } \ g_n\rightarrow +\infty \quad \Longrightarrow \quad G(g_n) \rightarrow +\infty . \end{array}\right. \end{aligned}$$
(4.19)
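
Again for illustration only, a simple function G which satisfies condition (4.19) is obtained by fixing a target thickness \(g_d>0\) and taking

$$\begin{aligned} G(g)=(g-g_d)^2\quad \forall \, g\in W. \end{aligned}$$

Indeed, G is continuous, hence lower semicontinuous, it is positive and \(G(g_n)\rightarrow +\infty \) whenever \(g_n\rightarrow +\infty \).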

Our second result in this section is the following.

Theorem 14

Assume that (2.1), (2.3), (2.4), (2.9), (2.10), (3.1)–(3.4), (4.4), (4.18), (4.19) hold. In addition, assume that (2.6) holds and \(f\in Y\). Then, there exists at least one solution \((u^*, g^*)\in \mathcal{V}_{ad}^2\) of Problem \(\mathcal{Q}_2\).

The proof of Theorem 14 is based on arguments similar to those used in the proof of Theorem 13 and, therefore, we skip it.

5 Convergence Results for the Optimal Pairs

In this section we focus on the dependence of the optimal pairs of Problems \(\mathcal{Q}_1\) and \(\mathcal{Q}_2\) with respect to the data g and f, respectively.

We start with the study of Problem \(\mathcal{Q}_1\) and, to this end, we assume in what follows that (2.1), (2.3), (2.4), (2.9), (2.10), (3.1)–(3.4) hold. In addition, we assume that (2.6) holds and let \(g_n\) be a perturbation of g. As usual, we denote \(K_{g_n}=g_nK\) and we consider the following perturbation of Problem \(\mathcal{P}\).

Problem \(\mathcal{P}_n^1\). Given \(f\in Y\) and \(g_n>0\), find \(u_n\in K_{g_n}\) such that

$$\begin{aligned}&\langle A u_n, v - u_n \rangle + \varphi (u_n, v) - \varphi (u_n, u_n) + j^0(u_n; v - u_n) \ge (f,\pi v-\pi u_n)_Y \nonumber \\&\quad \forall \,v \in K_{g_n}. \end{aligned}$$
(5.1)

It follows from Theorem 8 that for each \(f\in Y\) and \(g_n>0\) there exists a unique solution \(u_n=u(f,g_n)\) to the variational–hemivariational inequality (5.1). Moreover, the solution satisfies

$$\begin{aligned} \Vert u_n\Vert _{X}\le \frac{1}{m_A-\alpha _j}\,\big (\Vert A0_X\Vert _{X^*}+d_0\Vert f\Vert _{Y}+c_0\big ). \end{aligned}$$
(5.2)

We define the set of admissible pairs for inequality (5.1) by

$$\begin{aligned} \mathcal{V}_{ad}^{1n} = \{\,(u_n, f)\in K_{g_n}\times Y \ \text{ such } \text{ that }\ (5.1)\,\text{ holds }\,\}. \end{aligned}$$
(5.3)

Then, the optimal control problem associated with Problem \(\mathcal{P}_n^1\) is the following.

Problem \(\mathcal{Q}_n^1\). Find \((u^*_n, f^*_n)\in \mathcal{V}_{ad}^{1n}\) such that

$$\begin{aligned} \mathcal{L}_1(u^*_n,f^*_n)=\min _{(u_n,f_n)\in \mathcal{V}_{ad}^{1n}} \mathcal{L}_1(u_n,f_n). \end{aligned}$$
(5.4)

Using Theorem 13 it follows that, if (4.3)–(4.5) hold, then for each \(n\in \mathbb {N}\) there exists at least one solution \((u^*_n, f^*_n)\in \mathcal{V}_{ad}^{1n}\) of Problem \(\mathcal{Q}_n^1\). Our first result in this section, valid under the above-mentioned assumptions, is the following.

Theorem 15

Let \(\{(u_{n}^{*},f_{n}^{*})\}\) be a sequence of solutions of Problem \(\mathcal{Q}_n^1\) and assume that \(g_n\rightarrow g\). Then, there exists a subsequence of the sequence \(\{(u_{n}^{*}, f_{n}^{*})\}\), again denoted \(\{(u_{n}^{*}, f_{n}^{*})\}\), and a solution \((u^*,f^*)\) of Problem \(\mathcal{Q}_1\), such that

$$\begin{aligned} u_n^*\rightarrow u^*\quad \text{ in }\quad X\quad \mathrm{and}\quad f_n^* \rightharpoonup f^*\quad \text{ in }\quad Y. \end{aligned}$$
(5.5)

Proof

Let \(n\in \mathbb {N}\). We claim that the sequence \(\{f_n^*\}\) is bounded in Y. Arguing by contradiction, assume that \(\{f_n^*\}\) is not bounded in Y. Then, passing to a subsequence still denoted \(\{f_n^*\}\), we have

$$\begin{aligned} \Vert f_n^*\Vert _Y\rightarrow +\infty \quad \text {as}\quad n\rightarrow +\infty . \end{aligned}$$
(5.6)

We use equality (4.3) and inequality (4.4)(c) to see that

$$\begin{aligned} \mathcal{L}_1(u_n^*, f_n^*) \ge F(f_n^*). \end{aligned}$$

Therefore, passing to the limit as \(n\rightarrow \infty \) in this inequality and using (5.6) combined with assumption (4.5)(c) we deduce that

$$\begin{aligned} \lim \ \mathcal{L}_1(u_n^*, f_n^*)= +\infty . \end{aligned}$$
(5.7)

On the other hand, since \((u_{n}^{*},f_{n}^{*})\) represents a solution to Problem \(\mathcal{Q}_n^1\) we have

$$\begin{aligned} \mathcal{L}_1(u^*_n,f^*_n)\le \mathcal{L}_1(u_n,f_n) \qquad \forall \,(u_n,f_n)\in \mathcal{V}_{ad}^{1n}. \end{aligned}$$
(5.8)

We now fix an element \(f^0\in Y\) and we denote by \({u}_n^0\) the solution of Problem \(\mathcal{P}_n^1\) for \(f=f^0\), i.e. \(u_n^0=u(f^0,g_n)\). Then \((u_n^0,f^0)\in \mathcal{V}_{ad}^{1n}\) and, therefore, (5.8), (4.3) imply that

$$\begin{aligned} \mathcal{L}_1(u^*_n,f^*_n)\le U(u_n^0)+F(f^0). \end{aligned}$$
(5.9)

We now use the bound (5.2) and assumption (4.4)(b) on the function U to see that there exists \(D>0\) which does not depend on n such that

$$\begin{aligned} U(u_n^0)+F(f^0)\le D\qquad \forall \, n>0. \end{aligned}$$
(5.10)

Relations (5.7), (5.9) and (5.10) lead to a contradiction, which concludes the claim.

Next, since the sequence \(\{f_n^*\}\) is bounded in Y we can find a subsequence, again denoted \(\{f_n^*\}\), and an element \(f^*\in Y\) such that

$$\begin{aligned} f_n^* \rightharpoonup f^*\quad \text {in}\quad Y\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(5.11)

Denote by \(u^*\) the solution of Problem \(\mathcal{P}\) for \(f=f^*\), i.e. \(u^*=u(f^*,g)\). Then, we have

$$\begin{aligned} (u^*, f^*)\in \mathcal{V}_{ad}^1 \end{aligned}$$
(5.12)

where, recall, \(\mathcal{V}_{ad}^1\) is defined by (4.1). Moreover, (3.5) yields

$$\begin{aligned} u_n^* \rightarrow u^*\quad \text {in}\quad X\quad \text {as}\quad n\rightarrow \infty . \end{aligned}$$
(5.13)

We now prove that \((u^*,f^*)\) is a solution to the optimal control problem \(\mathcal{Q}_1\). To this end we use the convergences (5.11), (5.13) and the weak lower semicontinuity of the functional \(\mathcal{L}_1\), guaranteed by (4.4)(a) and (4.5)(a), to see that

$$\begin{aligned} \mathcal{L}_1(u^*,f^*)\le \liminf \,\mathcal{L}_1(u_n^*,f_n^*). \end{aligned}$$
(5.14)

Next, we fix a solution \(({u}^*_0,{f}^*_0)\) of Problem \(\mathcal{Q}_1\) and, for each \(n\in \mathbb {N}\) we denote by \({\widetilde{u}}_n^{0}\) the solution of Problem \(\mathcal{P}_n^1\) for \(f_n={f}^*_0\), i.e. \({\widetilde{u}}_n^0=u(f_0^*,g_n)\). It follows from here that \(({\widetilde{u}}_n^0,{f}^*_0)\in \mathcal{V}^{1n}_{ad}\) and, by the optimality of the pair \((u_n^*,f_n^*)\), we have that

$$\begin{aligned} \mathcal{L}_1(u_n^*,f_n^*)\le \mathcal{L}_1({\widetilde{u}}_n^0,{f}^*_0)\qquad \forall \, n>0. \end{aligned}$$

We pass to the upper limit in this inequality to see that

$$\begin{aligned} \limsup \,\mathcal{L}_1(u_n^*, f_n^*)\le \limsup \,\mathcal{L}_1({\widetilde{u}}_n^0,{f}^*_0). \end{aligned}$$
(5.15)

Now, remember that \({u}^*_0\) is the solution of the inequality (2.8) for \(f= {f}^*_0\) and \({\widetilde{u}}_n^0\) is the solution of the inequality (5.1) for \(f_n= {f}^*_0\), i.e., \({u}^*_0=u(f_0^*,g)\) and \({\widetilde{u}}_n^0=u(f_0^*,g_n)\). Therefore, (3.5) implies that

$$\begin{aligned} {\widetilde{u}}_n^0 \rightarrow {u}^*_0\quad \text {in}\quad X\quad \text {as}\quad n\rightarrow +\infty . \end{aligned}$$

Hence, the continuity of the functional \(u\mapsto \mathcal{L}_1(u,{f}^*_0):X\rightarrow \mathbb {R}\) yields

$$\begin{aligned} \lim \ \mathcal{L}_1({\widetilde{u}}_n^0,{f}^*_0)=\mathcal{L}_1({u}^*_0,{f}^*_0). \end{aligned}$$
(5.16)

We now combine (5.14)–(5.16) to see that

$$\begin{aligned} \mathcal{L}_1(u^*,f^*)\le \mathcal{L}_1({u}^*_0,{f}^*_0). \end{aligned}$$
(5.17)

On the other hand, since \(({u}^*_0,{f}^*_0)\) is a solution of Problem \(\mathcal{Q}_1\), inclusion (5.12) implies that

$$\begin{aligned} \mathcal{L}_1({u}^*_0,{f}^*_0)\le \mathcal{L}_1(u^*,f^*). \end{aligned}$$
(5.18)

We now use the inequalities (5.17) and (5.18) to see that \(\mathcal{L}_1(u^*, f^*)=\mathcal{L}_1({u}^*_0,{f}^*_0)\). This equality combined with (5.12) shows that

$$\begin{aligned} (u^*, f^*)\ \ \text{ is } \text{ a } \text{ solution } \text{ of } \text{ Problem }\ \mathcal{Q}_1. \end{aligned}$$
(5.19)

Theorem 15 is now a consequence of (5.11), (5.13) and (5.19). \(\square \)

We proceed with the study of Problem \(\mathcal{Q}_2\) and, to this end we assume in what follows that (2.1), (2.3), (2.4), (2.9), (2.10), (3.1)–(3.4) hold. In addition, assume that (2.6) holds and let \(f_n\) be a perturbation of f. We consider the following perturbation of Problem \(\mathcal{P}\).

Problem\(\mathcal{P}_n^2\). Given\(f_n\in Y\)and\(g>0\), find\(u_n\in K_{g}\)such that

$$\begin{aligned}&\langle A u_n, v - u_n \rangle + \varphi (u_n, v) - \varphi (u_n, u_n) + j^0(u_n; v - u_n)\nonumber \\&\qquad \ge (f_n,\pi v-\pi u_n)_Y \quad \forall \,v \in K_{g}. \end{aligned}$$
(5.20)

It follows from Theorem 8 that for each \(f_n\in Y\) and \(g>0\) there exists a unique solution \(u_n=u(f_n,g)\) to the inequality (5.20). We define the set of admissible pairs for inequality (5.20) by

$$\begin{aligned} \mathcal{V}_{ad}^{2n} = \{\,(u_n, g)\in K_{g}\times W\ \text{ such } \text{ that }\ (5.20)\,\text{ holds }\,\}. \end{aligned}$$
(5.21)

where \(W=[g_0,+\infty )\) with \(g_0>0\) given. Then, the optimal control problem associated with Problem \(\mathcal{P}_n^2\) is the following.

Problem \(\mathcal{Q}_n^2\). Find \((u^*_n, g^*_n)\in \mathcal{V}_{ad}^{2n}\) such that

$$\begin{aligned} \mathcal{L}_2(u^*_n,g^*_n)=\min _{(u_n,g_n)\in \mathcal{V}_{ad}^{2n}} \mathcal{L}_2(u_n,g_n). \end{aligned}$$
(5.22)

Using Theorem 14 it follows that, under assumptions (4.4), (4.18), (4.19), for each \(n\in \mathbb {N}\) there exists at least one solution \((u^*_n, g^*_n)\in \mathcal{V}_{ad}^{2n}\) of Problem \(\mathcal{Q}_n^2\). Our second result in this section, valid under the above-mentioned assumptions, is the following.

Theorem 16

Let \(\{(u_{n}^{*},g_{n}^{*})\}\) be a sequence of solutions of Problem \(\mathcal{Q}_n^2\) and assume that \(f_n\rightharpoonup f\) in Y. Then, there exists a subsequence of the sequence \(\{(u_{n}^{*}, g_{n}^{*})\}\), again denoted \(\{(u_{n}^{*}, g_{n}^{*})\}\), and a solution \((u^*,g^*)\) of Problem \(\mathcal{Q}_2\), such that

$$\begin{aligned} u_n^*\rightarrow u^*\quad \text{ in }\quad X\quad \mathrm{and}\quad g_n^* \rightarrow g^*. \end{aligned}$$
(5.23)

The proof of Theorem 16 is similar to that of Theorem 15 and, therefore, we skip it.

6 A Rod in Contact with Unilateral Constraints

The abstract results in Sects. 3–5 are useful in the study of various mathematical models which describe the equilibrium of elastic bodies in frictional contact with a foundation. To provide an example, we consider in this section an elementary one-dimensional problem.

The physical setting is depicted in Fig. 1 and is described in what follows. We consider an elastic rod which occupies, in the reference configuration, the interval [0, L] on the Ox axis. The rod is fixed at \(x=0\), is acted upon by body forces of density f which act along the Ox axis, and its extremity \(x=L\) is in contact with an obstacle made of a rigid body covered by a deformable layer of thickness \(g>0\). This layer behaves rigid-elastically, i.e. it allows penetration, but only when the magnitude of the stress at the contact point reaches a critical value, the yield limit, denoted by P. In addition, the reaction of this layer depends on the penetration, this dependence being described by a given positive function p. We denote by a prime the derivative with respect to the spatial variable \(x\in [0,L]\). Then, the problem of finding the equilibrium of the rod in the physical setting above can be formulated as follows.

Fig. 1 Physical setting

Problem \(\mathcal{P}^{1d}\). Find a displacement field \(u:[0,L]\rightarrow \mathbb {R}\) and a stress field \(\sigma :[0,L]\rightarrow \mathbb {R}\) such that

$$\begin{aligned}&\ \ \sigma (x) = \mathcal{F}\,u'(x) \quad \text{ for }\ x\in (0,L) , \end{aligned}$$
(6.1)
$$\begin{aligned}&\ \ \sigma '(x) + f(x)= 0\quad \text{ for }\ x\in (0,L), \end{aligned}$$
(6.2)
$$\begin{aligned}&\ \ u(0) = 0, \end{aligned}$$
(6.3)
$$\begin{aligned}&u(L)\le g,\qquad \left. \begin{array}{ll} \sigma (L) =0 \qquad \qquad \qquad \qquad \ \ \ \mathrm{if}\ \ u(L)< 0\\ -\sigma (L)\in [0,P] \qquad \qquad \qquad \mathrm{if}\ u(L) = 0\\ -\sigma (L)= P+p(u(L)) \quad \quad \mathrm{if}\ \ \ 0< u(L)< g\\ -\sigma (L)\ge P+p(u(L)) \quad \quad \mathrm{if}\ \ u(L)= g \end{array}\right\} . \end{aligned}$$
(6.4)

A brief description of the equations and conditions in Problem \(\mathcal{P}^{1d}\) is the following. First, Eq. (6.1) represents the elastic constitutive law in which \(\mathcal{F}\) is the elasticity operator and the derivative \(u'\) represents the linearized strain field. Equation (6.2) is the equilibrium equation and condition (6.3) represents the displacement condition; we use it here since the rod is assumed to be fixed at \(x=0\). Conditions (6.4) represent the contact conditions at \(x=L\). Our interest is in these conditions and, therefore, we describe them in detail, together with the corresponding mechanical interpretations.

First, note that conditions (6.4) provide a multivalued relation between the displacement u(L) and the opposite of the normal stress, \(-\sigma (L)\). We have four possibilities, which correspond to the four parts of the graph in Fig. 2.

(a) For the points in which \(u(L)<0\) we have \(\sigma (L)=0\). In this case there is separation between the rod and the foundation and, therefore, there is no reaction of the foundation at the point \(x=L\).

(b) When \(u(L)=0\) we have \(-\sigma (L)\in [0,P]\). In this case the rod is in contact with the foundation and the reaction of the foundation is directed towards the rod. Nevertheless, there is no penetration, since the magnitude of the stress at \(x=L\) is less than the yield limit P and, therefore, the deformable layer behaves like a rigid body.

(c) When \(0<u(L)<g\) we have \(-\sigma (L)= P+p(u(L))\). This shows that the magnitude of the stress at \(x=L\) has reached the yield limit P and, therefore, there is partial penetration into the rigid-elastic layer, which now behaves elastically. The reaction of the foundation is directed towards the rod and depends on the penetration.

(d) When \(u(L)=g\) we have \(-\sigma (L)\ge P+p(u(L))\). This shows that the rigid-elastic layer is completely penetrated and the tip \(x=L\) has reached the rigid body. The magnitude of the reaction at this point is larger than \( P+p(u(L))\), since the reaction of the rigid body, which is now active, adds to the reaction of the rigid-elastic layer.

Note that the unilateral constraint \(u(L)\le g\) represents a bound for the displacement at \(x=L\), imposed here since the rigid body does not allow penetration. Also, note that the function p in (6.4) is not assumed to be increasing and, therefore, it could describe hardening and softening properties of the foundation. All these ingredients make the contact model \(\mathcal{P}^{1d}\) interesting from the physical point of view and challenging from the mathematical point of view.

Fig. 2 The contact conditions

In the study of Problem \(\mathcal{P}^{1d}\) we use the standard notation for Lebesgue and Sobolev spaces and, in addition, we use the space

$$\begin{aligned} V = \lbrace \, v \in H^1(0,L)\mid v(0)=0\, \rbrace . \end{aligned}$$

It is well known that the space V is a real Hilbert space with the inner product

$$\begin{aligned} (u, v)_{V}= \int _0^L u'\,v'\,dx \qquad \forall \ u,\ v \in V \end{aligned}$$
(6.5)

and the associated norm \(\Vert \cdot \Vert _{V}\). Recall that the completeness of the space \((V, \Vert \cdot \Vert _{V})\) follows from the Friedrichs-Poincaré inequality. Moreover, from the Sobolev trace theorem it follows that there exists a positive constant \(k_{0}\) such that

$$\begin{aligned} |v(L)| \le k_{0}\Vert v\Vert _{V} \ \quad \forall \,v \in V . \end{aligned}$$
(6.6)
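
In the present one-dimensional setting the constant \(k_0\) can be taken explicitly. Indeed, since \(v(0)=0\) for each \(v\in V\), the Cauchy–Schwarz inequality yields

$$\begin{aligned} |v(L)|=\Big |\int _0^L v'(x)\,dx\Big |\le \sqrt{L}\,\Big (\int _0^L |v'(x)|^2\,dx\Big )^{1/2}=\sqrt{L}\,\Vert v\Vert _{V}\quad \ \forall \,v \in V , \end{aligned}$$

and, therefore, (6.6) holds with \(k_0=\sqrt{L}\).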

We denote by \(V^*\) and \(\langle \cdot ,\cdot \rangle \) the dual of V and the duality pairing between \(V^*\) and V, respectively.

We now turn to the variational formulation of Problem \(\mathcal{P}^{1d}\) and, to this end, we assume that the elasticity operator \(\mathcal{F}\) and the normal compliance function p satisfy the following conditions.

$$\begin{aligned}&\left\{ \begin{array}{ll} \mathrm{(a)}\ \mathcal{F}:(0,L)\times \mathbb {R}\rightarrow \mathbb {R}.\\ \mathrm{(b)\ There\ exists}\ L_\mathcal{F}>0\ \mathrm{such\ that}\\ {}\qquad |\mathcal{F}(x,\varepsilon _1)-\mathcal{F}(x, \varepsilon _2)| \le L_\mathcal{F}\,|\varepsilon _1- \varepsilon _2|\\ {}\qquad \qquad \quad \forall \,\varepsilon _1,\, \varepsilon _2 \in \mathbb {R},\ \mathrm{a.e.}\ x\in (0,L).\\ \mathrm{(c)\ There\ exists}\ m_\mathcal{F}>0\ \mathrm{such\ that}\\ {}\qquad (\mathcal{F}(x,\varepsilon _1)-\mathcal{F}(x,\varepsilon _2))(\varepsilon _1 - \varepsilon _2)\ge m_\mathcal{F}\, |\varepsilon _1- \varepsilon _2|^2\\ {}\qquad \qquad \quad \forall \,\varepsilon _1, \varepsilon _2 \in \mathbb {R},\ \mathrm{a.e.}\ x\in (0,L).\\ \mathrm{(d)\ The\ mapping\ }\ x\mapsto \mathcal{F}(x, \varepsilon )\ \mathrm{is\ measurable\ on\ }(0,L),\\ {}\qquad \qquad \quad \mathrm{for\ any\ }\varepsilon \in \mathbb {R}.\\ \mathrm{(e)\ The\ mapping\ } x\mapsto \mathcal{F}(x, 0)\ \mathrm{belongs\ to}\ L^2(0,L). \end{array}\right. \end{aligned}$$
(6.7)
$$\begin{aligned}&\left\{ \begin{array}{ll} \mathrm{(a)\ } p:\mathbb {R}\rightarrow \mathbb {R}.\\ \mathrm{(b)\ }\ p \mathrm{\ is\ continuous.\ } \\ \mathrm{(c)\ There\ exists}\, c_0>0,\ c_1>0\ \mathrm{\ such\ that\ }\\ \qquad |p(r)|\le c_0+c_1\,|r|\ \ \forall \, r\in \mathbb {R}.\\ \mathrm{(d)\ There\ exists}\, \alpha _p>0\ \mathrm{\ such\ that\ }\\ \qquad r\mapsto \alpha _pr+p(r)\ \mathrm{\ is \ nondecreasing.\ }\\ \mathrm{(e)\ }p(r)\ge 0\ \ \mathrm{if\ }\ r>0 \ \ \mathrm{and}\ \ p(r)=0\ \ \mathrm{if\ }\ r \le 0. \end{array}\right. \end{aligned}$$
(6.8)

We also assume the smallness condition

$$\begin{aligned} \alpha _{p} < \frac{m_\mathcal{F}}{k_{0}^{2}} \end{aligned}$$
(6.9)

where, recall, \(k_0\), \(m_\mathcal{F}\) and \(\alpha _p\) are the constants which appear in (6.6), (6.7) and (6.8), respectively. Finally, the yield limit is positive, i.e.,

$$\begin{aligned} P>0. \end{aligned}$$
(6.10)

Denote by \(q:\mathbb {R}\rightarrow \mathbb {R}\) the function defined by

$$\begin{aligned} q(r)=\int _0^rp(s)\,ds\qquad \text{ for } \text{ all } \ \ r\in \mathbb {R} \end{aligned}$$
(6.11)

and note that the function q could be nonconvex. Nevertheless, it is a regular function in the sense of Definition 4 and, moreover, it satisfies the equality

$$\begin{aligned} q^0(s;r)=p(s)\,r \qquad \text{ for } \text{ all } \ \ r,\ s\in \mathbb {R}, \end{aligned}$$
(6.12)

where \(q^0(s;r)\) denotes the generalized directional derivative of q at the point s in the direction r.
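
To fix ideas, we mention the illustrative choice \(p(r)=r^+e^{-r}\), which satisfies (6.8) with \(c_0=c_1=1\) and \(\alpha _p=e^{-2}\), for instance. In this case

$$\begin{aligned} q(r)=\left\{ \begin{array}{ll} 1-(1+r)\,e^{-r}&{}\mathrm{if\ }r\ge 0,\\ 0&{}\mathrm{if\ }r<0, \end{array}\right. \qquad q''(r)=(1-r)\,e^{-r}<0\ \ \mathrm{for\ }r>1, \end{aligned}$$

so q is indeed nonconvex, while (6.12) reads \(q^0(s;r)=s^+e^{-s}\,r\) for all \(r,\ s\in \mathbb {R}\).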

We assume that \(f\in L^2(0,L)\), \(g>0\) and we define the set \(K_g\), the operators \(A:V\rightarrow V^*\), \(\pi :V\rightarrow L^2(0,L)\) and the functions \(\varphi :V\times V \rightarrow \mathbb {R}\), \(j:V\rightarrow \mathbb {R} \) by

$$\begin{aligned}&K_g = \lbrace \, u\in V\mid u(L) \le g \,\rbrace , \end{aligned}$$
(6.13)
$$\begin{aligned}&\langle Au,v \rangle = \int _0^L \mathcal{F}(u')\,v'\, dx\qquad \text{ for } \text{ all } \ \ u,v\,\in V , \end{aligned}$$
(6.14)
$$\begin{aligned}&\pi v = v \qquad \text{ for } \text{ all } \ \ v\,\in V , \end{aligned}$$
(6.15)
$$\begin{aligned}&\varphi (u,v)= Pv(L)^+\qquad \text{ for } \text{ all } \ \ u,\, v\,\in V, \end{aligned}$$
(6.16)
$$\begin{aligned}&j(v)= q(v(L))\qquad \text{ for } \text{ all } \ \ v\,\in V. \end{aligned}$$
(6.17)

To derive the variational formulation of Problem \(\mathcal{P}^{1d}\) we assume in what follows that \((u, \sigma )\) are sufficiently smooth functions which satisfy (6.1)–(6.4). Note that condition (6.4) implies that

$$\begin{aligned} u\in K_g. \end{aligned}$$
(6.18)

Let \(v\in K_g\). We perform an integration by parts and use the equilibrium equation (6.2) to see that

$$\begin{aligned} \int _0^L\,\sigma \,(v'-u')\,dx=\int _0^L\, f(v-u)\,dx + \sigma (L)(v(L)-u(L)) - \sigma (0)(v(0)-u(0)) . \end{aligned}$$

Next, since \(v(0)=u(0) = 0\), we deduce that

$$\begin{aligned} \int _0^L\,\sigma \,(v'-u')\,dx=\int _0^L f(v- u)\,dx + \sigma (L)(v(L)-u(L)). \end{aligned}$$
(6.19)

Moreover, using the contact condition (6.4), the definition (6.13) of the set \(K_g\) and the properties (6.8) of the function p it follows that

$$\begin{aligned} \sigma (L)(v(L)- u(L))\ge -P\,(v(L)^+-u(L)^+)-p(u(L))(v(L)-u(L)).\qquad \end{aligned}$$
(6.20)

We now combine (6.19), (6.20) and use equality (6.12) to find that

$$\begin{aligned}&\int _0^L\sigma \,(v'-u')\,dx + Pv(L)^+-Pu(L)^+\nonumber \\&\qquad +q^0(u(L);v(L)-u(L))\ge \int _0^Lf\,(v-u)\,dx. \end{aligned}$$
(6.21)

On the other hand, using a standard argument (Theorem 3.47 in [19] or Lemma 8(vi) in [27], for instance) we have

$$\begin{aligned} j^0(u;v)=q^0(u(L);v(L))\qquad \text{ for } \text{ all } \, u,\, v\in V, \end{aligned}$$
(6.22)

where \(j^0(u;v)\) denotes the generalized directional derivative of j at the point u in the direction v. We now substitute the constitutive law (6.1) in (6.21), then we use definitions (6.14)–(6.16), equality (6.22) and regularity (6.18) to obtain the following variational formulation of Problem \(\mathcal{P}^{1d}\).

Problem \(\mathcal{P}^{1d}_V\). Find a displacement field \(u\in K_g\) such that

$$\begin{aligned} \langle Au, v-u\rangle + \varphi (u,v) - \varphi (u,u)+j^0(u;v-u) \ge (f, \pi v- \pi u)_{L^2(0,L)}\quad \forall \, v\in K_g.\nonumber \\ \end{aligned}$$
(6.23)

The existence of a unique solution to Problem \(\mathcal{P}^{1d}_V\) follows from Theorem 7 and can be stated as follows.

Theorem 17

Assume that (6.7)–(6.10) hold. Then, for each \(f\in L^2(0,L)\) and \(g>0\), the variational–hemivariational inequality (6.23) has a unique solution.

Proof

We use Theorem 7 with \(X=V\), \(K= \lbrace u\in V\mid u(L) \le 1 \,\rbrace \) and \(Y=L^2(0,L)\). To this end we use properties (6.7) of the constitutive function \(\mathcal{F}\) to see that the operator A given by (6.14) satisfies conditions (2.9) with \(m_A=m_\mathcal{F}\) and \(L_A=L_\mathcal{F}\). In addition, it is easy to see that the function \(\varphi \) defined by (6.16) satisfies condition (2.3) with \(\alpha _\varphi =0\). Moreover, using standard arguments on subdifferential calculus (Theorem 3.47 in [19], for instance), (6.12) and (6.8) (d) it follows that the function j defined by (6.17) satisfies conditions (2.4) (a) and (b). In addition, using (6.22), (6.12) and (6.8)(d) we have

$$\begin{aligned}&j^0(u;v-u)+j^0(v;u-v)\\&\quad =\big (p(u(L))-p(v(L))\big )(v(L)-u(L))\le \alpha _p|u(L)-v(L)|^2 \end{aligned}$$

for all \(u,\, v\in V\). Therefore, using the trace inequality (6.6) we obtain that j satisfies condition (2.4)(c) with \(\alpha _j=\alpha _pk_0^2\). We now conclude from (6.9) that the smallness assumption (2.6) holds, too. The rest of the assumptions of Theorem 7 are clearly satisfied, which completes the proof. \(\square \)

Theorem 17 allows us to define the map \((f,g)\mapsto u(f,g)\) which associates to each pair \((f,g)\in L^2(0,L)\times (0,+\infty )\) the solution \(u=u(f,g)\in K_g\) of the variational–hemivariational inequality (6.23). Moreover, using the compactness of the trace map and of the embedding \(V\subset L^2(0,L)\), among others, it is easy to verify that conditions (3.1)–(3.4) are satisfied. Therefore, a direct use of Theorem 9 allows us to obtain the following convergence result.

Theorem 18

Assume that (6.7)–(6.10) hold. Let \(\{f_n\}\subset L^2(0,L)\), \(\{g_n\}\subset (0,+\infty )\) and let \(f\in L^2(0,L)\), \(g>0\). Then,

$$\begin{aligned} f_n\rightharpoonup f\quad \mathrm{in}\quad L^2(0,L),\quad g_n\rightarrow g \quad \Longrightarrow \quad u(f_n,g_n)\rightarrow u(f,g)\quad \mathrm{in}\quad V. \end{aligned}$$

Besides its mathematical interest, this convergence result is important from the mechanical point of view, since it shows that the weak solution of the one-dimensional elastic contact problem depends continuously on the density of applied forces and on the bound g.
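
Although no numerical scheme is discussed in this paper, the dependence \((f,g)\mapsto u(f,g)\) can be visualized with a crude sketch. The sketch below is only an illustration: it assumes a linear elastic law \(\mathcal{F}(x,\varepsilon )=E\varepsilon \), the compliance function \(p(r)=r^+e^{-r}\) mentioned above, and the heuristic (not established here) that, for such data, minimizing the associated energy functional over a discretization of \(K_g\) approximates the solution of (6.23); all names and parameter values are hypothetical.

```python
# Illustrative sketch only: a crude finite-element discretization of Problem P^{1d}_V
# under the simplifying assumptions stated above (linear elastic law, p(r) = r^+ exp(-r)).
import numpy as np
from scipy.optimize import minimize

L_rod, E, P, n = 1.0, 1.0, 0.5, 50         # hypothetical rod length, modulus, yield limit, mesh size
h = L_rod / n

def q(r):                                   # q(r) = \int_0^r p(s) ds, in closed form for this p
    rp = max(r, 0.0)
    return 1.0 - (1.0 + rp) * np.exp(-rp)

def energy(u_free, f, g):
    u = np.concatenate(([0.0], u_free))     # enforce u(0) = 0
    du = np.diff(u) / h                     # piecewise-constant strain
    elastic = 0.5 * E * np.sum(du ** 2) * h           # (1/2) \int E |u'|^2 dx
    load = np.sum(f * u) * h                          # \int f u dx (crude quadrature)
    return elastic - load + P * max(u[-1], 0.0) + q(u[-1])

def solve(f, g):
    cons = [{"type": "ineq", "fun": lambda u_free: g - u_free[-1]}]   # constraint u(L) <= g
    res = minimize(energy, np.zeros(n), args=(f, g), constraints=cons)
    return np.concatenate(([0.0], res.x))

f = np.full(n + 1, 1.0)                     # constant force density
for g in (0.05, 0.10, 0.20):
    u = solve(f, g)
    print(f"g = {g:.2f},  u(L) = {u[-1]:.4f}")
```

One can then vary f and g and observe the behaviour of u(L); this is, of course, only a visualization aid and does not replace the analysis above.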

We now formulate the optimal control problem \(\mathcal{Q}_2\) in the one-dimensional case of Problem \(\mathcal{P}^{1d}\). Let \(g_0>0\) and let \(W=[g_0,+\infty )\). We use (4.16) to define

$$\begin{aligned} \widetilde{\mathcal{V}}_{ad}^2 = \{\,(u, g)\in K\times W \ \ \text{ such } \text{ that }\ (6.23)\,\mathrm{holds}\,\} \end{aligned}$$
(6.24)

and we choose the cost functional

$$\begin{aligned} \mathcal{L}_2(u,g) = \alpha \,|u(L)-\phi | + \beta \,|g|, \end{aligned}$$
(6.25)

where \(\phi \in \mathbb {R}\), \(\alpha >0\), \(\beta >0\). Then, the problem we consider can be formulated as follows.

Problem\(\mathcal{Q}_2^{1d}\). Find \((u^*, g^*)\in \widetilde{\mathcal{V}}_{ad}^{2}\)such that

$$\begin{aligned} \mathcal{L}_2(u^*, g^*)=\min _{(u,g)\in \widetilde{\mathcal{V}}_{ad}^2} \mathcal{L}_2(u,g). \end{aligned}$$
(6.26)

The mechanical interpretation of Problem \(\mathcal{Q}_2^{1d}\) is the following: we are looking for a thickness \(g\in W \) such that the displacement of the rod at \(x=L\), given by (6.23), is as close as possible to the “desired displacement” \(\phi \). Furthermore, this choice has to fulfill a minimum expenditure condition, which is taken into account by the last term in (6.25). In fact, a compromise policy between the two aims (“u(L) close to \(\phi \)” and “minimal thickness g”) has to be found, and the relative importance of each criterion with respect to the other is expressed by the choice of the weight coefficients \(\alpha ,\, \beta > 0\).
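
Continuing the illustrative sketch given after Theorem 18, and under the same hypothetical assumptions, one informal way to explore this compromise is a simple grid search over the bound g; the weights, the target and the lower bound below are arbitrary.

```python
# Illustrative grid search for Problem Q_2^{1d}, reusing np, f and solve() from the sketch above.
alpha, beta, phi, g0 = 1.0, 0.1, 0.15, 0.01     # hypothetical weights, desired displacement, lower bound

def cost(u, g):                                  # discrete analogue of the cost functional L_2 in (6.25)
    return alpha * abs(u[-1] - phi) + beta * abs(g)

candidates = np.linspace(g0, 1.0, 40)            # grid of admissible bounds g in W = [g0, +infty)
best_cost, best_g = min((cost(solve(f, g), g), g) for g in candidates)
print(f"approximate optimal g: {best_g:.3f},  cost: {best_cost:.4f}")
```

Such a search only illustrates the trade-off encoded in (6.25); the existence of an optimal pair is guaranteed by Theorem 19 below.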

Our main result in the study of Problem \(\mathcal{Q}_2^{1d}\) is the following.

Theorem 19

Assume that (6.7)–(6.10) hold and, moreover, assume that \(f\in L^2(0,L)\), \(\phi \in \mathbb {R}\), \(\alpha >0\), \(\beta >0\). Then, there exists at least one solution \((u^*,g^*)\in \widetilde{\mathcal{V}}_{ad}^2\) to \(\mathcal{Q}_2^{1d}\).

Theorem 19 is a direct consequence of Theorem 14, since conditions (4.18), (4.4), (4.19) are obviously satisfied. Finally, we note that Theorem 16 provides a convergence result for a sequence of optimal pairs of Problem \(\mathcal{Q}_2^{1d}\).