1 Introduction

Let \(\Omega \) be an open bounded subset of \(\mathbb {R}^{N}\) (\(N \ge 2\)) with the boundary \(\Gamma \) of class \(C^{1, 1}\) and A a second-order elliptic differential operator defined by \(Ay = - \sum _{i, j = 1}^{N}D_{j}(a_{ij}(x)D_{i}y)\) (\(D_{i}\) denotes the partial derivative with respect to \(x_{i}\)). Consider the semilinear elliptic equation of the form

$$\begin{aligned} Ay + \varphi (x, y) = u \text{ in } \Omega , \, \, \, \dfrac{\partial y}{\partial n_{A}} + \psi (x, y) = v \text{ on } \Gamma , \end{aligned}$$
(1)

where \((u, v)\) is a pair of distributed-boundary controls, y an associated state, \(\varphi :\Omega \times \mathbb {R}\rightarrow \mathbb {R}\) and \(\psi :\Gamma \times \mathbb {R}\rightarrow \mathbb {R}\) Carathéodory functions, and \(\dfrac{\partial y}{\partial n_{A}}\) the conormal derivative of y with respect to (shortly, wrt) A.

Throughout this note, the following standing assumption is made on the exponents q and r:

$$\begin{aligned} q> N/2 \, \, \text{ and } \, \, r > N - 1. \end{aligned}$$
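
For instance, when \(N = 3\), the choice \(q = 2\) and \(r = 3\) is admissible.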

Recall that for a given pair \((u, v) \in L^{q}(\Omega )\times L^{r}(\Gamma )\), a function \(y \in H^{1}(\Omega )\) is a weak solution of Eq. (1) iff

$$\begin{aligned}&\int _{\Omega } \left( \sum _{i, j = 1}^{N}a_{ij}(x)D_{i}yD_{j}\phi + \varphi (x, y)\phi \right) dx + \int _{\Gamma }\psi (x, y)\phi ds \\&\quad = \int _{\Omega }u\phi dx + \int _{\Gamma }v\phi ds, \forall \phi \in H^{1}(\Omega ). \end{aligned}$$

Due to assumptions (A2) and (A3) below, we deduce from Theorem 5 of [1] that for every pair \((u, v) \in L^{q}(\Omega )\times L^{r}(\Gamma )\), Eq. (1) has a unique weak solution \(y \in H^{1}(\Omega )\cap C(\overline{\Omega })\). Moreover, the solution y belongs to the space

$$\begin{aligned} Y:= \left\{ y \in H^{1}(\Omega ):\, Ay \in L^{q}(\Omega ), \, \dfrac{\partial y}{\partial n_{A}} \in L^{r}(\Gamma )\right\} . \end{aligned}$$

(For the definition of the derivative operator \(\dfrac{\partial y}{\partial n_{A}}\), see [1, page 242].) Note that, furnished with the norm \(\Vert y\Vert _{H^{1}(\Omega )} + \Vert Ay\Vert _{L^{q}(\Omega )} + \Big \Vert \dfrac{\partial y}{\partial n_{A}}\Big \Vert _{L^{r}(\Gamma )}\), Y is a Banach space. In addition, by virtue of Corollaries 1 and 2 of [1], the embedding from Y into \(C(\overline{\Omega })\) is continuous and the space Y is dense in \(C(\overline{\Omega })\).

Pointwise pure state constraints of the form

$$\begin{aligned} g(x, y(x)) \le 0 \text{ for } \text{ all } x \in \overline{\Omega }, \end{aligned}$$
(2)

and pointwise mixed state-control constraints such as

$$\begin{aligned} \left\{ \begin{array}{ll} \alpha _{\Omega }(x) \le h(x, y(x)) + u(x) \le \beta _{\Omega }(x) \text{ a.e. } x \in \Omega ,\\ \alpha _{\Gamma }(x) \le k(x, y(x)) + v(x) \le \beta _{\Gamma }(x) \text{ a.e. } x \in \Gamma \\ \end{array} \right. \end{aligned}$$
(3)

are imposed on the triple \((y, u, v)\). Here, \(g:\overline{\Omega }\times \mathbb {R}\rightarrow \mathbb {R}\) is a continuous mapping, \(h:\Omega \times \mathbb {R}\rightarrow \mathbb {R}\) and \(k:\Gamma \times \mathbb {R}\rightarrow \mathbb {R}\) Carathéodory functions, \(\alpha _{\Omega }, \beta _{\Omega } \in L^{q}(\Omega )\) with \(\alpha _{\Omega }(x) < \beta _{\Omega }(x)\) a.e. \(x \in \Omega \), and \(\alpha _{\Gamma }, \beta _{\Gamma } \in L^{r}(\Gamma )\) with \(\alpha _{\Gamma }(x) < \beta _{\Gamma }(x)\) a.e. \(x \in \Gamma \).

In this article, we study the following optimal control problem

$$\begin{aligned} \text{ min } \{J(y, u, v): \, y \in Y, \, (u, v) \in L^{q}(\Omega )\times L^{r}(\Gamma ) \, \mathrm{and} \, (y, u, v) \, \mathrm{satisfies} \, (1), (2), (3)\} \end{aligned}$$
(OCP)

with the cost functional

$$\begin{aligned} J(y, u, v) = \int _{\Omega }K(x, y(x), u(x))dx + \int _{\Gamma }L(x, y(x), v(x))ds, \end{aligned}$$

in which \(K:\Omega \times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) and \(L:\Gamma \times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) are Carathéodory functions.

Note that problem (OCP) includes as special cases both the model with distributed controls and the framework with Neumann/Robin boundary controls. A rich literature is devoted to first-order optimality conditions, via Lagrange multiplier rules, for such problems (see, e.g., [1, 9, 26, 38]). In optimal control, and more generally in optimization, second-order optimality conditions are of great interest, since they provide significant information beyond that contained in the first-order ones. Over the years, the theory of second-order optimality conditions for control problems governed by differential equations has been analyzed and developed. Under \(C^{2}\)-smoothness hypotheses on the data, second-order optimality conditions have been explored in many publications; see, for example, [6, 12, 19, 30, 41] for optimal control models with ordinary differential systems, [5, 8, 10,11,12,13, 21,22,23, 26, 29, 37] for control frameworks with linear/semilinear elliptic equations, and [3, 15, 36] for those with semilinear parabolic equations. The reader is referred to the monographs (for instance, [2, 7, 27, 28, 35, 38]) and to the papers (e.g., [4, 14, 16, 17, 20, 31, 34]) for comprehensive expositions of second-order optimality conditions in abstract optimization and in optimal control.

Nevertheless, only a few articles deal with second-order conditions for optimal control problems governed by ordinary differential systems whose data are not of class \(C^{2}\). Among the works that employ generalized constructions to examine second-order optimality conditions, we mention [24, 25], which utilize second-order Neustadt derivatives, and [32, 33] (and some references given therein), which make use of second-order Clarke generalized derivatives and/or second-order directional derivatives. Furthermore, second-order conditions for optimal control models of partial differential equations whose data are not of class \(C^{2}\) have not yet been investigated and remain open issues.

The above observations motivate the present work. We establish necessary second-order optimality conditions for local optimal solutions of problem (OCP), where the input data may not be of class \(C^2\). The functions in our problem are assumed to be twice directionally differentiable wrt the state and/or control variable, but not necessarily twice Fréchet differentiable. We employ second-order directional derivatives and second-order tangent sets to derive necessary conditions for the considered framework. Sufficient second-order optimality conditions will be developed in a forthcoming work.
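
To illustrate the class of data covered by this setting, consider, for instance, a function \(y_{d} \in C(\overline{\Omega })\), an exponent \(q \ge 2\) and the integrand

$$\begin{aligned} K(x, y, u) = (y - y_{d}(x))|y - y_{d}(x)| + \tfrac{1}{2}u^{2}. \end{aligned}$$

This integrand is of class \(C^{1}\) wrt \((y, u)\), with \(K_{y}(x, y, u) = 2|y - y_{d}(x)|\) and \(K_{u}(x, y, u) = u\), and it satisfies the growth and Lipschitz-type conditions of assumption (A1)-(i.1) below, yet it fails to be twice Fréchet differentiable wrt y at the points where \(y = y_{d}(x)\). It is, however, twice directionally differentiable everywhere, and a direct computation gives

$$\begin{aligned} d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}, \bar{u}), (y, u){\big )} = \left\{ \begin{array}{ll} 2\, \mathrm{sgn}(\bar{y} - y_{d}(x))\, y^{2} + u^{2} &{} \text{ if } \bar{y} \ne y_{d}(x),\\ 2y|y| + u^{2} &{} \text{ if } \bar{y} = y_{d}(x).\\ \end{array} \right. \end{aligned}$$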

It is worth noticing that, in optimal control with \(C^{2}\)-smooth data, the implicit function theorem is usually applied to obtain the second-order differentiability of the control-to-state mapping and, in turn, to derive second-order optimality conditions (see, for instance, [5, 8, 10,11,12,13,14,15, 38]). However, such a strategy cannot be used for the control model studied in this paper. Hence, in order to tackle the problem, we first deduce a Lagrange multiplier rule for an abstract optimization framework, and we then apply the obtained results to problem (OCP). The proof of this multiplier rule is based on a modification of the Dubovitskii–Milyutin scheme [17]. Moreover, since the pairs of control variables belong to the product space \(L^{q}(\Omega )\times L^{r}(\Gamma )\) with \(1< q, r < +\infty \), the pointwise mixed state-control constraints are expressed as inclusions of the type \(H(y, u, v) \in D\), where D is a nonsolid set. In this case, a so-called regularity condition (imposed on H) is needed to derive necessary optimality conditions for our problem. Here, we obtain the main result under the regularity condition (RC) (defined in Theorem 3.1 below), which was proposed and utilized in [21,22,23, 29, 37].

The plan of the article is as follows. We formulate the assumptions and the principal result (Theorem 2.1) in Sect. 2. In Sect. 3, some notations and preliminary facts are first recalled for later use, and necessary optimality conditions are then developed for a mathematical programming problem. By means of the theorem obtained in Sect. 3, we prove the Lagrange multiplier rule in Sect. 4. The paper concludes with an example of application of Theorem 2.1.

2 Assumptions and Main Results

Let us now fix a triple \(\bar{z} = (\bar{y}, \bar{u}, \bar{v})\) in the product space \(Y\times L^{q}(\Omega )\times L^{r}(\Gamma )\). We impose the following assumptions on the data of the problem. Note that \(\hat{q}\) (resp, \(\hat{r}\)) denotes the conjugate exponent of q (resp, r).

  1. (A1)

    (i) \(K:\Omega \times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt \((y, u)\) and the following conditions are fulfilled:

    1. (i.1)

      $$\begin{aligned}&|K_{y}(x, y, u)| + |K(x, y, u)| \le (K_{1}(x) + c_{1}|u|^{q})\eta (|y|), \\&|K_{u}(x, y, u)| \le (K_{2}(x) + c_{1}|u|^{q - 1})\eta (|y|), \\&|K_{y}(x, y_{1}, u_{1}) - K_{y}(x, y_{2}, u_{2})| + |K_{u}(x, y_{1}, u_{1}) - K_{u}(x, y_{2}, u_{2})| \\&\quad \le (|y_{1} - y_{2}|^{s_{1}} + m_{1}|u_{1} - u_{2}|^{s_{2}})\eta (|y_{1}|)\eta (|y_{2}|), \end{aligned}$$

      where \(c_{1} > 0\), \(s_{1} \ge 1\), \(1 \le s_{2} \le q - 1\), \(m_{1} = 0\) if \(1< q < 2\) and \(m_{1} = 1\) if \(q \ge 2\), \(K_{1} \in L^{1}(\Omega )\), \(K_{2} \in L^{\hat{q}}(\Omega )\), and \(\eta \) is a nondecreasing function from \(\mathbb {R}_{+}\) to \(\mathbb {R}_{+}\),

    2. (i.2)

      the function \(K(x, \cdot , \cdot )\) is twice directionally differentiable at \(\big (\bar{y}(x), \bar{u}(x)\big )\) for a.e. \(x \in \Omega \).

    (ii) \(L:\Gamma \times \mathbb {R}\times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt \((y, v)\) and the following conditions are fulfilled:

    1. (ii.1)

      $$\begin{aligned}&|L_{y}(x, y, v)| + |L(x, y, v)| \le (L_{1}(x) + c_{2}|v|^{r})\eta (|y|), \\&|L_{v}(x, y, v)| \le (L_{2}(x) + c_{2}|v|^{r - 1})\eta (|y|), \\&|L_{y}(x, y_{1}, v_{1}) - L_{y}(x, y_{2}, v_{2})| + |L_{v}(x, y_{1}, v_{1}) - L_{v}(x, y_{2}, v_{2})| \\&\quad \le (|y_{1} - y_{2}|^{s_{3}} + m_{2}|v_{1} - v_{2}|^{s_{4}})\eta (|y_{1}|)\eta (|y_{2}|), \end{aligned}$$

      where \(c_{2} > 0\), \(s_{3} \ge 1\), \(1 \le s_{4} \le r - 1\), \(m_{2} = 0\) if \(1< r < 2\) and \(m_{2} = 1\) if \(r \ge 2\), \(L_{1} \in L^{1}(\Gamma )\), \(L_{2} \in L^{\hat{r}}(\Gamma )\),

    2. (ii.2)

      the function \(L(x, \cdot , \cdot )\) is twice directionally differentiable at \(\big (\bar{y}(x), \bar{v}(x)\big )\) for a.e. \(x \in \Gamma \).

  2. (A2)

    The coefficients \(a_{ij} \in L^{\infty }(\Omega )\) and there is \(\lambda > 0\) such that

    $$\begin{aligned}&\sum _{i, j = 1}^{N}a_{ij}(x)\xi _{i}\xi _{j} \ge \lambda |\xi |^{2}, \text{ for } \text{ a.e. } x \in \Omega \text{ and } \text{ for } \text{ all } \\&\qquad \xi = (\xi _{1}, \xi _{2},\ldots , \xi _{N}) \in \mathbb {R}^{N}. \end{aligned}$$
  3. (A3)

    (i) \(\varphi :\Omega \times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt the second variable, \(\varphi (\cdot , 0) \in L^{q}(\Omega )\) and the following conditions are satisfied:

    1. (i.1)

      \(0 \le a_{0}(x) \le \varphi _{y}(x, y) \le \varphi _{1}(x)\eta (|y|), \, \, \, |\varphi _{y}(x, y_{1}) - \varphi _{y}(x, y_{2})| \le |y_{1} - y_{2}|\eta (|y_{1}|)\eta (|y_{2}|)\), in which \(a_{0}, \varphi _{1} \in L^{q}(\Omega )\),

    2. (i.2)

      the function \(\varphi (x, \cdot )\) is twice directionally differentiable at \(\bar{y}(x)\) for a.e. \(x \in \Omega \).

    (ii) \(\psi :\Gamma \times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt the second variable, \(\psi (\cdot , 0) \in L^{r}(\Gamma )\) and the following conditions are satisfied:

    1. (ii.1)

      \(0 \le b_{0}(x) \le \psi _{y}(x, y) \le \psi _{1}(x)\eta (|y|), \, \, \, |\psi _{y}(x, y_{1}) - \psi _{y}(x, y_{2})| \le |y_{1} - y_{2}|\eta (|y_{1}|)\eta (|y_{2}|)\), in which \(b_{0}, \psi _{1} \in L^{r}(\Gamma )\),

    2. (ii.2)

      the function \(\psi (x, \cdot )\) is twice directionally differentiable at \(\bar{y}(x)\) for a.e. \(x \in \Gamma \).

    (iii) The pair \((a_{0}, b_{0})\) fulfills the ellipticity condition \((E_{m})\) for some \(m > 0\). This means that the following inequality is valid:

    $$\begin{aligned}&\int _{\Omega }\left( \sum _{i, j = 1}^{N}a_{ij}(x)D_{i}yD_{j}y + a_{0}y^{2}\right) dx \\&\qquad + \int _{\Gamma }b_{0}y^{2}ds \ge m\Vert y\Vert ^{2}_{H^{1}(\Omega )}, \forall y \in H^{1}(\Omega ). \end{aligned}$$
  4. (A4)

    The function \(g:\overline{\Omega }\times \mathbb {R}\rightarrow \mathbb {R}\) is continuous and Fréchet differentiable wrt the second variable with \(g_{y}:\overline{\Omega }\times \mathbb {R}\rightarrow \mathbb {R}\) being also continuous and the following conditions hold:

    1. (i)

      \(|g_{y}(x, y_{1}) - g_{y}(x, y_{2})| \le |y_{1} - y_{2}|\eta (|y_{1}|)\eta (|y_{2}|)\),

    2. (ii)

      the function \(g(x, \cdot )\) is twice directionally differentiable at \(\bar{y}(x)\) for all \(x \in \overline{\Omega }\) and the following condition is verified for every \(M > 0\):

      $$\begin{aligned} \sup _{x \in \overline{\Omega }, \, |\hat{y}| \le M}\Big |{\big (}g_{y}(x, \bar{y}(x) + t\hat{y}) - g_{y}(x, \bar{y}(x)){\big )}\hat{y} - t\, d_{2}[g(x, \cdot )](\bar{y}(x), \hat{y})\Big | = o(t), \quad (\hbox {C}_{M}) \end{aligned}$$

      where \(o(t)/t \rightarrow 0\) as \(t \rightarrow 0^+\).

  5. (A5)

    (i) \(h:\Omega \times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt the second variable, \(h(\cdot , 0) \in L^{q}(\Omega )\) and the following conditions are valid:

    1. (i.1)

      \(0 {\le } h_{y}(x, \bar{y}) {\le } h_{1}(x), \, \, \, |h_{y}(x, y_{1}) - h_{y}(x, y_{2})| {\le } |y_{1} - y_{2}|\eta (|y_{1}|)\eta (|y_{2}|)\), in which \(h_{1} \in L^{q}(\Omega )\),

    2. (i.2)

      the function \(h(x, \cdot )\) is twice directionally differentiable at \(\bar{y}(x)\) for a.e. \(x \in \Omega \).

    (ii) \(k:\Gamma \times \mathbb {R}\rightarrow \mathbb {R}\) is a Carathéodory function of class \(C^{1}\) wrt the second variable, \(k(\cdot , 0) \in L^{r}(\Gamma )\) and the following conditions are valid:

    1. (ii.1)

      \(0 {\le } k_{y}(x, \bar{y}) {\le } k_{1}(x), \, \, \, |k_{y}(x, y_{1}) - k_{y}(x, y_{2})| \le |y_{1} - y_{2}|\eta (|y_{1}|)\eta (|y_{2}|)\), in which \(k_{1} \in L^{r}(\Gamma )\),

    2. (ii.2)

      the function \(k(x, \cdot )\) is twice directionally differentiable at \(\bar{y}(x)\) for a.e. \(x \in \Gamma \).

We observe that if the function g is twice Fréchet differentiable wrt the second variable with \(g_{yy}:\overline{\Omega }\times \mathbb {R}\rightarrow \mathbb {R}\) being continuous, then assumption (A4)-(ii) trivially holds.

Let us define the sets:

$$\begin{aligned} Q:= \{w \in C(\overline{\Omega }):\, w(x) \ge 0, \, \forall x \in \overline{\Omega }\}, \end{aligned}$$
(4)

a closed convex cone of nonnegative valued functions in \(C(\overline{\Omega })\), and

$$\begin{aligned} D_{\Omega }:= & {} \{u \in L^{q}(\Omega ):\, \alpha _{\Omega }(x) \le u(x) \le \beta _{\Omega }(x) \text{ a.e } x \in \Omega \}, \end{aligned}$$
(5)
$$\begin{aligned} D_{\Gamma }:= & {} \{v \in L^{r}(\Gamma ):\, \alpha _{\Gamma }(x) \le v(x) \le \beta _{\Gamma }(x) \text{ a.e } x \in \Gamma \}, \end{aligned}$$
(6)

which are closed convex subsets of \(L^{q}(\Omega )\) and \(L^{r}(\Gamma )\), resp.

A triple \((\bar{y}, \bar{u}, \bar{v}) \in Y\times L^{q}(\Omega )\times L^{r}(\Gamma )\) satisfying constraints (1)–(3) is said to be feasible for problem (OCP). A feasible triple \((\bar{y}, \bar{u}, \bar{v})\) is called a local optimal solution of problem (OCP) iff there exists a number \(\epsilon > 0\) such that for every feasible triple \((y, u, v)\), the following implication holds:

$$\begin{aligned} \Vert y - \bar{y}\Vert _{Y} + \Vert u - \bar{u}\Vert _{L^{q}(\Omega )} + \Vert v - \bar{v}\Vert _{L^{r}(\Gamma )} \le \epsilon \Rightarrow J(y, u, v) \ge J(\bar{y}, \bar{u}, \bar{v}). \end{aligned}$$

One needs the notion of critical direction for the problem under study.

Definition 2.1

A triple \(d = (y, u, v) \in Y\times L^{q}(\Omega )\times L^{r}(\Gamma )\) is said to be a critical direction for problem (OCP) at a feasible point \(\bar{z} = (\bar{y}, \bar{u}, \bar{v})\) iff the following conditions are satisfied:

  1. (a)

    \(\int _{\Omega }{\big (}K_{y}(\cdot , \bar{y}, \bar{u})y + K_{u}(\cdot , \bar{y}, \bar{u})u{\big )}dx + \int _{\Gamma }{\big (}L_{y}(\cdot , \bar{y}, \bar{v})y + L_{v}(\cdot , \bar{y}, \bar{v})v{\big )}ds\le 0\);

  2. (b)

    \(Ay + \varphi _{y}(\cdot , \bar{y})y = u \text{ in } \Omega , \, \, \, \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y = v \text{ on } \Gamma \);

  3. (c)

    \(g_{y}(x, \bar{y}(x))y(x) \le 0\) for all \(x \in \overline{\Omega }\) fulfilling \(g(x, \bar{y}(x)) = 0\);

  4. (d)

    \(T^{i, 2, \sigma }(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u}, h_{y}(\cdot , \bar{y})y + u) \ne \emptyset \) and \(T^{i, 2, \sigma }(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v}, k_{y}(\cdot , \bar{y})y + v) \ne \emptyset \) for some sequence \(\sigma = \{t_{n}\}\) with \(t_{n} \rightarrow 0^{+}\).

Clearly, if d is a critical direction for (OCP) at \(\bar{z}\), then \(h_{y}(\cdot , \bar{y})y + u \in T(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u})\) and \(k_{y}(\cdot , \bar{y})y + v \in T(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v})\).

We now state the main result of this article.

Theorem 2.1

Let \(\bar{z} = (\bar{y}, \bar{u}, \bar{v})\) be a local optimal solution of problem (OCP) and \(d = (y, u, v)\) a critical direction. Suppose that assumptions (A1)–(A5) are satisfied. Then, there exist multipliers \(\lambda \in \mathbb {R}\), \(p \in W^{1, s}(\Omega )\) for every \(1 \le s < N/(N - 1)\), \(\mu \in \mathcal {M}(\overline{\Omega })\), \(\phi \in L^{\hat{q}}(\Omega )\) and \(\chi \in L^{\hat{r}}(\Gamma )\) with \((\lambda , \mu ) \ne (0, 0)\), \(\lambda \ge 0\), \(\mu \succeq 0\) such that the following assertions hold.

  1. (a)

    The adjoint equation:

    $$\begin{aligned} \left\{ \begin{array}{ll} A^*p + \varphi _{y}(\cdot , \bar{y})p = - \lambda K_{y}(\cdot , \bar{y}, \bar{u}) - [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Omega } - h_{y}(\cdot , \bar{y})^{*}\phi \,\, in \,\, \Omega \text{, } \\ \dfrac{\partial p}{\partial n_{A^*}} + \psi _{y}(\cdot , \bar{y})p = - \lambda L_{y}(\cdot , \bar{y}, \bar{v}) - [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Gamma } - k_{y}(\cdot , \bar{y})^{*}\chi \,\, on \,\, \Gamma \text{, }\\ \end{array} \right. \end{aligned}$$

    where \(A^*\) is the formal adjoint operator to A.

  2. (b)

    The stationary conditions in u and v:

    $$\begin{aligned} \left\{ \begin{array}{ll} \lambda K_{u}(\cdot , \bar{y}, \bar{u}) - p(x) + \phi (x) = 0 \,\, a.e. \,\, x \in \Omega \text{, }\\ \lambda L_{v}(\cdot , \bar{y}, \bar{v}) - p(x) + \chi (x) = 0 \,\, a.e. \,\, x \in \Gamma \text{. }\\ \end{array} \right. \end{aligned}$$
  3. (c)

    The complementarity condition in z:

    $$\begin{aligned} \phi (x) \,\left\{ \begin{array}{lll}\le 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{1}\text{, }\\ \ge 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{2}\text{, }\\ = 0 &{} \text{ otherwise, }\,\,\, \\ \end{array} \right. \, \, \, \chi (x) \,\left\{ \begin{array}{lll}\le 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{1}\text{, }\\ \ge 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{2}\text{, }\\ = 0 &{} \text{ otherwise, }\,\,\, \\ \end{array} \right. \end{aligned}$$

    in which

    $$\begin{aligned}&\Omega _{1}:= \{x \in \Omega :\, h(x, \bar{y}(x)) + \bar{u}(x) = \alpha _{\Omega }(x)\}\text{, } \\&\Omega _{2}:= \{x \in \Omega :\, h(x, \bar{y}(x)) + \bar{u}(x) = \beta _{\Omega }(x)\}\text{, }\\&\Gamma _{1}:= \{x \in \Gamma :\, k(x, \bar{y}(x)) + \bar{v}(x) = \alpha _{\Gamma }(x)\}\text{, } \\&\Gamma _{2}:= \{x \in \Gamma :\, k(x, \bar{y}(x)) + \bar{v}(x) = \beta _{\Gamma }(x)\}\text{. } \end{aligned}$$
  4. (d)

    The complementarity condition in y:

    $$\begin{aligned} \mathrm{supp}(\mu ) \subset \{x \in \overline{\Omega }: \, g(x, \bar{y}(x)) = 0\}\text{. } \end{aligned}$$
  5. (e)

    The second-order condition:

    $$\begin{aligned}&\mathcal {L}:= \lambda \int _{\Omega }d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{u}(x)), (y(x), u(x)){\big )}dx \\&\qquad + \lambda \int _{\Gamma }d_{2}[L(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{v}(x)), (y(x), v(x)){\big )}ds\\&\qquad + \int _{\Omega }p(x)d_{2}[\varphi (x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}dx + \int _{\Gamma }p(x)d_{2}[\psi (x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}ds\\&\qquad + \int _{\overline{\Omega }}d_{2}[g(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}d\mu \\&\qquad + \int _{\Omega }\phi (x)d_{2}[h(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}dx + \int _{\Gamma }\chi (x)d_{2}[k(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}ds\\&\quad \ge s{\big (}\mu , T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y){\big )}. \end{aligned}$$

Remark 2.1

(i) It is worth mentioning that, in assertion (a), the first equation is satisfied in the sense of distributions in \(\Omega \), while the second one is understood in the sense of traces on \(\Gamma \) (see [1, Subsection 3.2] for more details). Here, \(g_{y}(\cdot , \bar{y})^{*}\mu \) is the Radon measure on \(\overline{\Omega }\) defined by \(\langle g_{y}(\cdot , \bar{y})^{*}\mu , y \rangle = \langle \mu , g_{y}(\cdot , \bar{y})y \rangle \) for all \(y \in C(\overline{\Omega })\), and \([g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Omega }\) (resp, \([g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Gamma }\)) is the restriction of \(g_{y}(\cdot , \bar{y})^{*}\mu \) to \(\Omega \) (resp, \(\Gamma \)) in the sense of measure theory.

(ii) Let us define the functions on \(\overline{\Omega }\) as below:

$$\begin{aligned}&a(x) = - g(x, \bar{y}(x)), \, b(x) = - g_{y}(x, \bar{y}(x))y(x),\\&\theta ^{g}_{\bar{y}, y}(x) = \liminf _{t \rightarrow 0^+, x^{'}\rightarrow x}\dfrac{a(x^{'}) + tb(x^{'})}{\frac{1}{2}t^{2}}. \end{aligned}$$

Then, it follows from Theorem 4.148 in [7] that

$$\begin{aligned} T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y) = \{w \in C(\overline{\Omega }):\, w(x) \le \theta ^{g}_{\bar{y}, y}(x), \, \forall x \in \overline{\Omega }\} \end{aligned}$$

and \(T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y)\) is nonempty iff \(\theta ^{g}_{\bar{y}, y}(x) > - \infty \), for all \(x \in \overline{\Omega }\). Besides, if \(T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y) \ne \emptyset \), then

$$\begin{aligned} s\big (\mu , T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y)\big ) = \int _{\overline{\Omega }}\theta ^{g}_{\bar{y}, y}(x)d\mu . \end{aligned}$$
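
In particular, whenever \(T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y) \ne \emptyset \), the second-order condition (e) of Theorem 2.1 takes the form \(\mathcal {L} \ge \int _{\overline{\Omega }}\theta ^{g}_{\bar{y}, y}(x)d\mu \); if this second-order tangent set is empty, then the right-hand side of (e) equals \(-\infty \) (being the supremum over the empty set) and (e) holds trivially.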

Necessary second-order optimality conditions in optimal control of partial differential equations were established in many references (see, e.g., [5, 7, 8, 10,11,12,13,14,15, 21,22,23, 29, 37, 38]). We would like to emphasize that, in those works, the functions under consideration were required to be at least twice continuously differentiable wrt the state and/or control variable. In the present note, by contrast, the data of our framework are only required to be twice directionally differentiable. With the help of directional derivatives and tangent sets, we obtain new second-order Lagrange multiplier rules for distributed-boundary optimal control problems governed by semilinear elliptic systems.

3 Necessary Second-Order Optimality Conditions for an Optimization Problem

3.1 Notations and Preliminaries

Throughout the paper, \(\mathbb {N}\) (resp, \(\mathbb {R}\)) stands for the set of natural (resp, real) numbers. For a normed space E, \(E^{*}\) denotes the topological dual of E, \(\langle \cdot ,\cdot \rangle \) the canonical pairing and \(\Vert \cdot \Vert _{E}\) the norm in E. We set \(s(x^*, M):= \sup _{m \in M}\langle x^*, m\rangle \) for a given set \(M \subseteq E\) and an element \(x^* \in E^*\), and \(d(z, M)\) denotes the distance from a point z to a set M. \(B_{E}(x, r):= \{y \in E : \Vert x - y\Vert _{E} < r\}\). For a cone \(C \subset E\), the positive polar cone of C is \(C^{*}:= \{c^{*} \in E^{*}:\, \langle c^{*}, c\rangle \ge 0, \, \forall c \in C\}\). For a set \(M \subseteq E\), let core M be its algebraic interior, int M its topological interior, cl M (or \(\overline{M}\)) its closure, and cone M its conical hull.

For a set \(X \subset \mathbb {R}^{N}\) with \(N \in \mathbb {N}\), C(X) (resp, \(C_{b}(X)\)) denotes the space of all continuous (resp, bounded continuous) real functions defined on X, and \(\mathcal {M}(X)\) (resp, \(\mathcal {M}_{b}(X)\)) stands for the space of all Radon (resp, bounded Radon) measures on X. If X is compact, then \(C(X) = C_{b}(X)\) and the dual space of C(X) is \(\mathcal {M}(X) = \mathcal {M}_{b}(X)\). If \(y \in C_{b}(X)\) and \(\mu \in \mathcal {M}_{b}(X)\), then y is \(\mu \)-integrable and we define \(\langle \mu , y\rangle _{b, X}:= \int _{X}yd\mu \). For \(\mu \in \mathcal {M}(X)\), the support of \(\mu \), written as \(\text{ supp }(\mu )\), is the smallest closed subset of X such that \(\mu (X \setminus \text{ supp }(\mu )) = 0\). By \(\mu \succeq 0\) we mean that \(\mu (A) \ge 0\) for any \(A \in \mathcal {B}\), where \(\mathcal {B}\) is the Borel sigma-algebra of X. Partial derivatives are denoted with subscripts, for example \(f_{y}: = \frac{\partial f}{\partial y}\).

Assume that E is a normed space, S a subset of E, \(x_{0} \in \overline{S}\) and \(u \in E\). The contingent (resp, inner) tangent cone of S at \(x_{0}\) is

$$\begin{aligned}&T(S, x_{0}):= \{v \in E:\, \exists s_{n} \rightarrow 0^{+}, \, \exists v_{n} \rightarrow v, \, \forall n \in \mathbb {N}, \, x_{0} + s_{n}v_{n} \in S\}\\&{\Big (}\mathrm{resp,} \,\, T^{i}(S, x_{0}):= \{v \in E:\, \forall s_{n} \rightarrow 0^{+}, \, \exists v_{n} \rightarrow v, \, \forall n \in \mathbb {N}, \, x_{0} + s_{n}v_{n} \in S\}{\Big )}. \end{aligned}$$

The Clarke interior tangent cone of S at \(x_{0}\) is

$$\begin{aligned}&IT_{C}(S, x_{0})\\&\quad := \{v \in E :\, \forall x_{n} \rightarrow _{S} x_{0}, \, \forall s_{n} \rightarrow 0^{+}, \, \forall v_{n} \rightarrow v, \, \forall n \,\, \mathrm{large \,\, enough}, \, x_{n} + s_{n}v_{n} \in S\}. \end{aligned}$$

If the set S is convex, then \(T(S, x_{0}) = T^{i}(S, x_{0})\). The normal cone to S at \(x_{0}\) (see [7]) is defined by

$$\begin{aligned} N(S, x_{0}):= -[T(S, x_{0})]^*. \end{aligned}$$
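
For instance, for \(E = \mathbb {R}\), \(S = (-\infty , 0]\) and \(x_{0} = 0\), one has \(T(S, x_{0}) = T^{i}(S, x_{0}) = (-\infty , 0]\), so that \([T(S, x_{0})]^{*} = (-\infty , 0]\) and \(N(S, x_{0}) = [0, +\infty )\).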

The following concepts of second-order tangent set were introduced in [7] (see also [39]).

Definition 3.1

Let \(\sigma = \{t_{n}\} \in \Sigma \) be a fixed sequence, where \(\Sigma := \{\sigma = \{s_{n}\}: \, s_{n} \rightarrow 0^{+}\}\).

  1. (a)

    The second-order contingent tangent set of S at \((x_{0}, u)\) wrt \(\sigma \) is

    $$\begin{aligned} T^{i, 2, \sigma }(S, x_{0}, u):= \left\{ w \in E:\, \exists w_{n} \rightarrow w, \, \forall n \in \mathbb {N}, \, x_{0} + t_{n}u + \frac{1}{2}t_{n}^{2}w_{n} \in S\right\} . \end{aligned}$$
  2. (b)

    The second-order inner (resp, interior) tangent set of S at \((x_{0}, u)\) is

    $$\begin{aligned}&T^{i, 2}(S, x_{0}, u)\\&\quad := \left\{ w \in E:\, \forall s_{n} \rightarrow 0^{+}, \, \exists w_{n} \rightarrow w, \, \forall n \in \mathbb {N}, \, x_{0} + s_{n}u + \frac{1}{2}s_{n}^{2}w_{n} \in S\right\} \\&{\Big (}\text{ resp }, IT^{2}(S, x_{0}, u)\\&\quad := \Big \{w \in E:\, \forall s_{n} \rightarrow 0^{+}, \, \forall w_{n} \rightarrow w, \, \forall n \,\, \mathrm{large \,\, enough}, \, x_{0} + s_{n}u + \frac{1}{2}s_{n}^{2}w_{n} \in S\Big \}{\Big )}. \end{aligned}$$

Notice that every contingent/inner tangent set is closed, but the interior one is open. When S is convex, so is \(T^{i, 2, \sigma }(S, x_{0}, u)\) (and \(T^{i, 2}(S, x_{0}, u)\) as well as \(IT^{2}(S, x_{0}, u)\)) for any \(\sigma \in \Sigma \).
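
As a simple illustration, let \(E = \mathbb {R}\), \(S = (-\infty , 0]\) and \(x_{0} = 0\). For every \(\sigma \in \Sigma \),

$$\begin{aligned} T^{i, 2, \sigma }(S, x_{0}, u) = T^{i, 2}(S, x_{0}, u) = \left\{ \begin{array}{ll} \mathbb {R} &{} \text{ if } u < 0,\\ (-\infty , 0] &{} \text{ if } u = 0,\\ \emptyset &{} \text{ if } u > 0.\\ \end{array} \right. \end{aligned}$$

Indeed, the requirement \(t_{n}u + \frac{1}{2}t_{n}^{2}w_{n} \le 0\) can be met by a suitable sequence \(w_{n} \rightarrow w\) for every \(w \in \mathbb {R}\) when \(u < 0\), forces \(w \le 0\) when \(u = 0\), and cannot be met by any convergent sequence \((w_{n})\) when \(u > 0\).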

We will subsequently employ the next results related to second-order local approximations.

Proposition 3.1

([39]) Let \(S \subset E\), \(x_{0} \in \overline{S}\), \(u \in E\), and \(\sigma \in \Sigma \).

  1. (a)

    If \(IT_{C}(S, x_{0}) \ne \emptyset \) and \(T^{i, 2}(S, x_{0}, u) \ne \emptyset \), then

    $$\begin{aligned} \mathrm{cl} \,\, IT^{2}(S, x_{0}, u) = T^{i, 2}(S, x_{0}, u). \end{aligned}$$

    Let, in addition, S be convex and \(x_{0} \in S\). One has the following.

  2. (b)

    \(T^{i, 2, \sigma }(S, x_{0}, u) + T(T(S, x_{0}), u) \subset T^{i, 2, \sigma }(S, x_{0}, u) \subset T(T(S, x_{0}), u)\).

  3. (c)

    If \(B \in \{T^{i, 2}, T^{i, 2, \sigma }\}\), \(B(S, x_{0}, u) \ne \emptyset \) and \(e^{*} \in E^{*}\) such that \(s(e^*, B(S, x_{0}, u)) < +\infty \), then \(e^{*} \in N(S, x_{0})\) and \(\langle e^{*}, u\rangle = 0\), i.e. \(e^{*} \in - [T(T(S, x_{0}), u)]^{*}\).

Let E and F be normed spaces, \(\phi :E \rightarrow F\) a mapping and \(x_{0} \in E\). We say that \(\phi \) is strictly differentiable at \(x_{0}\) iff it has Fréchet derivative \(\triangledown \phi (x_{0})\) at \(x_{0}\) and

$$\begin{aligned} \lim _{x \rightarrow x_{0}, x^{'} \rightarrow x_{0}}\dfrac{\Vert \phi (x) - \phi (x^{'}) - \triangledown \phi (x_{0})(x - x^{'})\Vert _{F}}{\Vert x - x^{'}\Vert _{E}} = 0. \end{aligned}$$

Evidently, if \(\phi \) is strictly differentiable at \(x_{0}\), then \(\phi \) is locally Lipschitz at \(x_{0}\).

This article is concerned with the following classical notion of second-order directional derivative.

Definition 3.2

([7, 35]) Let \(\phi \) be Fréchet differentiable at \(x_{0}\) and \(u \in E\). The second-order directional derivative of \(\phi \) at \(x_{0}\) in direction u is

$$\begin{aligned} d_{2}\phi (x_{0}, u):= \lim _{t \rightarrow 0^+}\dfrac{\phi (x_{0} + tu) - \phi (x_{0}) - t\triangledown \phi (x_{0})u}{t^{2}/2}, \end{aligned}$$

when the limit on the right-hand side exists as an element in the space F.

The mapping \(\phi \) is said to be twice directionally differentiable at \(x_{0}\) iff \(d_{2}\phi (x_{0}, u)\) exists for every \(u \in E\).
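
For example, the function \(\phi :\mathbb {R}\rightarrow \mathbb {R}\) given by \(\phi (x) = x|x|\) is continuously differentiable with \(\triangledown \phi (x) = 2|x|\), is not twice Fréchet differentiable at \(x_{0} = 0\), and yet is twice directionally differentiable there:

$$\begin{aligned} d_{2}\phi (0, u) = \lim _{t \rightarrow 0^+}\dfrac{\phi (tu) - \phi (0) - t\triangledown \phi (0)u}{t^{2}/2} = \lim _{t \rightarrow 0^+}\dfrac{t^{2}u|u|}{t^{2}/2} = 2u|u|, \end{aligned}$$

which is positively homogeneous of degree two in u but is not a quadratic form.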

From the definition, one obtains the following result.

Proposition 3.2

Let \(x_{0} \in E\) and \(\phi :E \rightarrow F\).

  1. (a)

    If \(\phi \) is strictly differentiable at \(x_{0}\) and \(d_{2}\phi (x_{0}, u)\) exists for some \(u \in E\), then we have for all \(w \in E\),

    $$\begin{aligned}&\lim _{t \rightarrow 0^+, w^{'} \rightarrow w}\dfrac{\phi (x_{0} + tu + \frac{1}{2}t^{2}w^{'}) - \phi (x_{0}) - t\triangledown \phi (x_{0})u}{t^{2}/2} \\&\quad = \triangledown \phi (x_{0})w + d_{2}\phi (x_{0}, u). \end{aligned}$$
  2. (b)

    If \(\phi \) is twice Fréchet differentiable at \(x_{0}\), then for all \(u \in E\),

    $$\begin{aligned} d_{2}\phi (x_{0}, u) = \triangledown ^{2}\phi (x_{0})(u, u), \end{aligned}$$

    where \(\triangledown ^{2}\phi (x_{0})\) is the second-order Fréchet derivative of \(\phi \) at \(x_{0}\).
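
For instance, for \(\phi :\mathbb {R}\rightarrow \mathbb {R}\) with \(\phi (x) = x^{3}\), one computes \(d_{2}\phi (x_{0}, u) = \lim _{t \rightarrow 0^+}\big (6x_{0}u^{2} + 2tu^{3}\big ) = 6x_{0}u^{2} = \triangledown ^{2}\phi (x_{0})(u, u)\), in accordance with assertion (b).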

The following metric subregularity concept is utilized to obtain necessary optimality conditions for problem (P).

Definition 3.3

([18, 34]) Let \(x_{0}, u \in E\), \(S \subset E\), \(D \subset F\) and \(\phi :E \rightarrow F\). One says that the mapping \(\phi \) is directionally metrically subregular at \((x_{0}, u)\) wrt \((S, D)\) iff there exist \(\mu > 0\), \(\rho > 0\) such that, for every \(t\in (0, \rho )\) and \(v \in B_{E}(u, \rho )\) with \(x_{0} + tv \in S\),

$$\begin{aligned}&d(x_{0} + tv, S\cap \phi ^{-1}(D)) \le \mu d(\phi (x_{0} + tv), D).\quad (\text{ DMSR }_u) \end{aligned}$$

Note that if the sets S and D are closed and convex, the mapping \(\phi \) is strictly differentiable at \(x_{0}\), and the following Robinson constraint qualification holds:

$$\begin{aligned} \triangledown \phi (x_{0})(\text{ cone }(S - x_{0})) - \text{ cone }(D - \phi (x_{0})) = F, \end{aligned}$$

then condition (\(\hbox {DMSR}_u\)) of \(\phi \) wrt \((S, D)\) is fulfilled for all \(u \in E\).
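
As a one-dimensional illustration, take \(E = F = \mathbb {R}\), \(S = E\), \(D = \{0\}\) and \(x_{0} = 0\). For \(\phi (x) = x\), the Robinson constraint qualification reads \(\triangledown \phi (0)\mathbb {R} - \{0\} = \mathbb {R}\) and holds, and condition (\(\hbox {DMSR}_u\)) is fulfilled with \(\mu = 1\), since \(d(tv, S\cap \phi ^{-1}(D)) = |tv| = d(\phi (tv), D)\). In contrast, for \(\phi (x) = x^{2}\) one has \(\triangledown \phi (0) = 0\), the Robinson condition fails, and condition (\(\hbox {DMSR}_u\)) fails at \((x_{0}, u)\) for every \(u \ne 0\), because \(d(tv, S\cap \phi ^{-1}(D)) = t|v|\) cannot be bounded by \(\mu \, d(\phi (tv), D) = \mu t^{2}v^{2}\) as \(t \rightarrow 0^{+}\).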

3.2 Necessary Second-Order Optimality Conditions

In this subsection, we analyze necessary second-order optimality conditions for the following optimization problem with mixed constraints:

$$\begin{aligned} \text{ min } \,\, J(z), \mathrm{\,\, subject \,\, to \,\,} \Phi (z) = 0, \,\, G(z) \in - Q, \,\, H(z) \in D. \end{aligned}$$
(P)

Here, \(J:Z \rightarrow \mathbb {R}\), \(\Phi :Z \rightarrow \Pi \), \(G:Z \rightarrow \Lambda \) and \(H:Z \rightarrow \Delta \) are mappings, Z, \(\Pi \) and \(\Delta \) Banach spaces with Z and \(\Delta \) being either reflexive or separable, \(\Lambda \) a normed space, \(Q \subset \Lambda \) a convex set with nonempty interior, and \(D \subset \Delta \) a nonempty convex set with possibly empty interior.

The feasible set of problem (P) is

$$\begin{aligned} \mathcal {F}_{P}:= \{z \in Z:\, z \in S, \, G(z) \in -Q, \, H(z) \in D \}, \end{aligned}$$

in which \(S:= \{z \in Z:\, \Phi (z) = 0\}\).

The next fact describes the second-order contingent tangent set of S at \((\bar{z}, d)\) wrt \(\sigma \).

Proposition 3.3

Let \(\bar{z} \in S\) and \(d \in Z\) be given. Suppose that the mapping \(\Phi \) is strictly differentiable at \(\bar{z}\), the second-order directional derivative \(d_{2}\Phi (\bar{z}, d)\) exists and condition (\(\hbox {DMSR}_d\)) of \(\Phi \) wrt \((Z, \{0\})\) is satisfied. If \(\triangledown \Phi (\bar{z})d = 0\), then we have for all \(\sigma \in \Sigma \),

$$\begin{aligned} T^{i, 2, \sigma }(S, \bar{z}, d) = \{z \in Z :\, \triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d) = 0\}. \end{aligned}$$

Proof

Fix any \(\sigma = \{t_{n}\}\). Let \(z \in T^{i, 2, \sigma }(S, \bar{z}, d)\). Then, there is \(z_{n} \rightarrow z\) such that \(w_{n}:= \bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n} \in S\), which implies \(\Phi (w_{n}) = 0\), for all \(n \in \mathbb {N}\). Due to Proposition 3.2 (a), one obtains

$$\begin{aligned} \dfrac{\Phi (w_{n}) - \Phi (\bar{z}) - t_{n}\triangledown \Phi (\bar{z})d}{t_{n}^{2}/2} \rightarrow \triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d), \end{aligned}$$

which yields that \(\triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d) = 0\).

For the converse, let \(z \in Z\) be such that \(\triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d) = 0\), and choose any sequence \(z_{n} \rightarrow z\). It follows from Proposition 3.2 (a) that

$$\begin{aligned} \dfrac{\Phi (\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n}) - \Phi (\bar{z}) - t_{n}\triangledown \Phi (\bar{z})d}{t_{n}^{2}/2} \rightarrow \triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d). \end{aligned}$$

Since \(\Phi (\bar{z}) = 0\) and \(\triangledown \Phi (\bar{z})d = 0\), we get

$$\begin{aligned} \dfrac{\Phi (\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n})}{t_{n}^{2}/2} \rightarrow 0. \end{aligned}$$

From condition (\(\hbox {DMSR}_d\)) of \(\Phi \), one has \(\mu > 0\) such that, for every large n, \(d(\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n}, S) \le \mu \Vert \Phi (\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n})\Vert _{\Pi }\). Hence, for large n, there exists \(\hat{z}_{n}\) with \(\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}\hat{z}_{n} \in S\) such that

$$\begin{aligned} \frac{1}{2}t_{n}^{2}\Vert z_{n} - \hat{z}_{n}\Vert _{Z} \le \mu \Vert \Phi (\bar{z} + t_{n}d + \frac{1}{2}t_{n}^{2}z_{n})\Vert _{\Pi } + o(t_{n}^{2}) \end{aligned}$$

(where \(o(t_{n}^{2})/t_{n}^{2} \rightarrow 0\)), and so, \(\hat{z}_{n} \rightarrow z\). Consequently, \(z \in T^{i, 2, \sigma }(S, \bar{z}, d)\). \(\square \)

Let us recall that a point \(\bar{z} \in \mathcal {F}_{P}\) is called a local optimal solution of (P) iff there exists a neighborhood \(\mathcal {U}\) of \(\bar{z}\) such that \(J(z) - J(\bar{z}) \ge 0\), for all \(z \in \mathcal {U}\cap \mathcal {F}_{P}\).

Putting \(Q(G(\bar{z})):= \) cone\((Q + G(\bar{z}))\), we obtain \(T(-Q, G(\bar{z})) = - \text{ cl } Q(G(\bar{z}))\) and \(N(-Q, G(\bar{z})) = [Q(G(\bar{z}))]^{*} \). Notice that, if Q is a convex cone, then

$$\begin{aligned} {[}Q(G(\bar{z}))]^{*}=\{\lambda ^{*} \in Q^{*}:\,\langle \lambda ^{*}, G(\bar{z})\rangle = 0\}. \end{aligned}$$
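
For instance, if \(\Lambda = C(\overline{\Omega })\) and Q is the cone of nonnegative functions defined in (4), then \(Q^{*}\) consists of the nonnegative Radon measures on \(\overline{\Omega }\), and, for \(G(\bar{z}) \in -Q\), the requirement \(\langle \mu , G(\bar{z})\rangle = 0\) with \(\mu \succeq 0\) amounts to \(\mathrm{supp}(\mu ) \subset \{x \in \overline{\Omega }:\, G(\bar{z})(x) = 0\}\). This is precisely the form taken by the complementarity condition (d) of Theorem 2.1.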

The next proposition can be proved in the same way as Theorems 3.2 and 3.6 in [40].

Proposition 3.4

Let \(\bar{z}\) be a local optimal solution of (P). Then, one has the following.

  1. (a)

    Let \(d \in T^{i}(S, \bar{z}) \setminus \{0\}\) be a given direction. Suppose that the mapping \((J, G, H)\) is Fréchet differentiable at \(\bar{z}\) and condition (\(\hbox {DMSR}_d\)) of H wrt \((S, D)\) is fulfilled. Then,

    $$\begin{aligned}\triangledown (J, G, H)(\bar{z})d \bigcap -\mathrm{int}[\mathbb {R}_{+}\times Q(G(\bar{z}))]\times T(D, H(\bar{z})) = \emptyset .\end{aligned}$$
  2. (b)

    Let \(d \in Z\) be a direction such that

    $$\begin{aligned} d \in T^{i}(S, \bar{z}) \,\, \text{ and } \,\, \triangledown (J, G, H)(\bar{z})d \in - [\mathbb {R}_{+}\times \mathrm{cl}\,\, Q(G(\bar{z}))]\times T(D, H(\bar{z})). \end{aligned}$$

    Assume that the mapping \((J, G, H)\) is strictly differentiable at \(\bar{z}\), the second-order directional derivative \(d_{2}(J, G, H)(\bar{z}, d)\) exists and condition (\(\hbox {DMSR}_d\)) of H wrt \((S, D)\) is satisfied. Then, for every sequence \(\sigma \in \Sigma \) and element \(z \in T^{i, 2, \sigma }(S, \bar{z}, d)\), we have

    $$\begin{aligned}&\triangledown (J, G, H)(\bar{z})z + d_{2}(J, G, H)(\bar{z}, d)\\&\quad \not \in -\mathrm{int \,\, cone}[\mathbb {R}_{+} + \triangledown J(\bar{z})d]\times IT^{2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\\&\qquad \times T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d). \end{aligned}$$

We use the following result to deduce necessary optimality conditions in terms of Lagrange multipliers.

Lemma 3.1

(Lemma 3.3 of [40]) Let E and N be Banach spaces, M a normed space, A a nonempty closed convex subset of E, B a convex subset of M with int \(B \ne \emptyset \), R a nonempty closed convex subset of N, and \((\overline{m}, \overline{n}) \in M\times N\) a fixed pair. Let \(\zeta :E \rightarrow M\) and \(\eta :E \rightarrow N\) be continuous linear mappings. Suppose that, for all \(e \in A\),

$$\begin{aligned} (\zeta , \eta )(e) + (\overline{m}, \overline{n}) \not \in (-\mathrm{int} \,\, B)\times R. \end{aligned}$$

If \(0 \in \mathrm{core}[\eta (A) - R + \overline{n}]\), then there is a triple \((e^*, m^*, n^*) \in E^*\times M^*\times N^*\) with \(m^* \ne 0\) such that

$$\begin{aligned}&m^*\circ \zeta + n^*\circ \eta - e^* = 0,\\&\langle m^*, \overline{m}\rangle + \langle n^*, \overline{n}\rangle + \inf _{a \in A}\langle e^*, a\rangle + \inf _{b \in B}\langle m^*, b\rangle \ge \sup _{r \in R}\langle n^*, r\rangle . \end{aligned}$$

Let us introduce the concept of critical direction for problem (P).

Definition 3.4

Assume that the mapping \((J, \Phi , G, H)\) is Fréchet differentiable at a point \(\bar{z} \in \mathcal {F}_{P}\). One says that an element \(d \in Z\) is a critical direction for (P) at \(\bar{z}\) iff

$$\begin{aligned}&\triangledown \Phi (\bar{z})d = 0, \,\, \triangledown J(\bar{z})d \le 0, \,\, \triangledown G(\bar{z})d \in - \mathrm{cl\,\,} Q(G(\bar{z})) \,\, \mathrm{and} \\&T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d) \ne \emptyset , \end{aligned}$$

for some \(\sigma \in \Sigma \).

By virtue of Proposition 3.4 and Lemma 3.1, we can show our principal result in this section.

Theorem 3.1

Let \(\bar{z}\) be a local optimal solution of (P) and d a critical direction. Suppose that the following regularity condition holds true for some \(\epsilon > 0\):

$$\begin{aligned} \triangledown H(\bar{z})\big (T(S, z)\big ) - \mathrm{cone}\big (D - H(\bar{z})\big ) = \Delta \text{, } \,\, \forall z \in B_{Z}(\bar{z}, \epsilon )\cap S. \end{aligned}$$
(RC)

Assume further that the mapping \((J, \Phi , G, H)\) is continuously differentiable at \(\bar{z}\) with \(\triangledown \Phi (\bar{z})\) surjective, and the second-order directional derivative \(d_{2}(J, \Phi , G, H)(\bar{z}, d)\) exists. Then, there is a tuple \((v^{*}, \pi ^{*}, \lambda ^{*}, \delta ^*) \in \mathbb {R}_{+}\times \Pi ^*\times N(-Q, G(\bar{z}))\times N(D, H(\bar{z}))\) with \((v^{*}, \lambda ^{*}) \ne (0, 0)\) for which one has

$$\begin{aligned}&v^{*}\triangledown J(\bar{z}) + \pi ^{*}\circ \triangledown \Phi (\bar{z}) + \lambda ^{*}\circ \triangledown G(\bar{z}) + \delta ^*\circ \triangledown H(\bar{z}) = 0, \end{aligned}$$
(7)
$$\begin{aligned}&v^{*}d_{2}J(\bar{z}, d) + \langle \pi ^{*}, d_{2}\Phi (\bar{z}, d)\rangle + \langle \lambda ^{*}, d_{2}G(\bar{z}, d)\rangle + \langle \delta ^*, d_{2}H(\bar{z}, d) \rangle \nonumber \\&\quad \ge s\big (\lambda ^*, T^{i, 2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\big ) + s\big (\delta ^*, T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d)\big ), \end{aligned}$$
(8)

for some sequence \(\sigma \in \Sigma \).

Proof

By condition (RC), we observe that the following constraint qualification is verified

$$\begin{aligned} \triangledown H(\bar{z})\big (T(S, \bar{z})\big ) - T(D, H(\bar{z})) = \Delta , \end{aligned}$$
(CQ)

which implies the validity of the next constraint qualification (for all \(d \in Z\))

$$\begin{aligned}&\triangledown H(\bar{z})\big (T(S, \bar{z})\big ) - T[T(D, H(\bar{z})), \triangledown H(\bar{z})d] = \Delta . \quad ({\mathrm{CQ}}_{\mathrm{d}}) \end{aligned}$$

Furthermore, it follows from Theorems 2.2 and 2.5 in [21] that condition (RC) yields condition (\(\hbox {DMSR}_d\)) of H wrt \((S, D)\) for all \(d \in Z\). We consider two cases.

First case: \(T^{i, 2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d) = \emptyset \). According to Proposition 3.3, one gets

$$\begin{aligned} T(S, \bar{z}) = T^{i}(S, \bar{z}) = \{z \in Z :\, \triangledown \Phi (\bar{z})z = 0\}. \end{aligned}$$

In view of the surjectivity of \(\triangledown \Phi (\bar{z})\) and condition (CQ), we infer that

$$\begin{aligned} \triangledown (H, \Phi )(\bar{z})Z - T(D, H(\bar{z}))\times \{0\} = \Delta \times \Pi . \end{aligned}$$

From Proposition 3.4 (a), one obtains for all \(d \in Z\),

$$\begin{aligned} \big (\triangledown (J, G)(\bar{z}), \triangledown (H, \Phi )(\bar{z})\big )d \not \in -\mathrm{int}[\mathbb {R}_{+}\times Q(G(\bar{z}))]\times T(D, H(\bar{z}))\times \{0\}. \end{aligned}$$

Employing Lemma 3.1 with \(E = Z, \, M = \mathbb {R}\times \Lambda , \, N = \Delta \times \Pi \), \(A = Z\), \(B = \mathbb {R}_{+}\times Q(G(\bar{z}))\), \(R = T(D, H(\bar{z}))\times \{0\}\), \(\overline{m} = (0, 0)\), \(\overline{n} = (0, 0)\), \(\zeta = \triangledown (J, G)(\bar{z})\), and \(\eta = \triangledown (H, \Phi )(\bar{z})\) provides \((v^{*}, \lambda ^{*}, \delta ^{*}, \pi ^*) \in \mathbb {R}\times \Lambda ^{*}\times \Delta ^{*}\times \Pi ^*\) with \((v^{*}, \lambda ^{*}) \ne (0, 0)\) such that we have (7) and

$$\begin{aligned} \inf _{(v, \lambda ) \in \mathbb {R}_{+}\times Q(G(\bar{z}))} \{v^{*}v + \langle \lambda ^{*}, \lambda \rangle \}\ge \sup _{\delta \in T(D, H(\bar{z}))}\langle \delta ^{*}, \delta \rangle . \end{aligned}$$

Because \(\mathbb {R}_{+}\), \(Q(G(\bar{z}))\) and \(T(D, H(\bar{z}))\) are cones, the last inequality entails that \(v^{*} \in \mathbb {R}_{+}, \, \lambda ^{*} \in N(-Q, G(\bar{z}))\) and \(\delta ^{*} \in N(D, H(\bar{z}))\). On the other hand, we get

$$\begin{aligned} s\big (\lambda ^*, T^{i, 2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\big ) = - \infty , \end{aligned}$$

so that inequality (8) automatically holds.

Second case: \(T^{i, 2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d) \ne \emptyset \). Let \(\sigma \in \Sigma \) be the sequence defined by the critical direction d. It follows from Proposition 3.3 that

$$\begin{aligned} T^{i, 2, \sigma }(S, \bar{z}, d) = \{z \in Z :\, \triangledown \Phi (\bar{z})z + d_{2}\Phi (\bar{z}, d) = 0\}. \end{aligned}$$

By virtue of Proposition 3.4 (b), one has for all \(z \in Z\),

$$\begin{aligned}&\big (\triangledown (J, G)(\bar{z}), \triangledown (H, \Phi )(\bar{z})\big )z + \big (d_{2}(J, G)(\bar{z}, d), d_{2}(H, \Phi )(\bar{z}, d)\big )\\&\quad \not \in -\mathrm{int \,\, cone}[\mathbb {R}_{+} + \triangledown J(\bar{z})d]\times IT^{2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\\&\qquad \times T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d)\times \{0\}. \end{aligned}$$

Combining condition (\(\hbox {CQ}_{{d}}\)) and the property of \(T^{i, 2, \sigma }\) (see Proposition 3.1 (b)), we deduce that

$$\begin{aligned} \triangledown H(\bar{z})\big (T(S, \bar{z})\big ) - T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d) = \Delta . \end{aligned}$$

From this and the surjectivity of \(\triangledown \Phi (\bar{z})\), it can be concluded that

$$\begin{aligned} \triangledown (H, \Phi )(\bar{z})Z - T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d)\times \{0\} = \Delta \times \Pi . \end{aligned}$$

Applying Lemma 3.1 with \(A = Z\), \(-B = -\)int cone\([\mathbb {R}_{+} + \triangledown J(\bar{z})d]\times IT^{2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\), \(R = T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d)\times \{0\}\), \(\overline{m} = d_{2}(J, G)(\bar{z}, d)\), \(\overline{n} = d_{2}(H, \Phi )(\bar{z}, d)\), \(\zeta = \triangledown (J, G)(\bar{z})\), \(\eta = \triangledown (H, \Phi )(\bar{z})\), and using Proposition 3.1 (a), one obtains \((v^*, \lambda ^*, \delta ^*, \pi ^*) \in \mathbb {R}\times \Lambda ^{*}\times \Delta ^{*}\times \Pi ^*\) with \((v^{*}, \lambda ^{*}) \ne (0, 0)\) such that we get (7) and for all \(v \in \) int cone\([\mathbb {R}_{+} + \triangledown J(\bar{z})d]\), \(\lambda \in T^{i, 2}(-Q, G(\bar{z}), \triangledown G(\bar{z})d)\) and \(\delta \in T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d)\),

$$\begin{aligned}&v^{*}d_{2}J(\bar{z}, d) + \langle \lambda ^{*}, d_{2}G(\bar{z}, d)\rangle + \langle \delta ^{*}, d_{2}H(\bar{z}, d)\rangle + \langle \pi ^*, d_{2}\Phi (\bar{z}, d)\rangle \\&\quad + v^{*}v \ge \langle \lambda ^{*}, \lambda \rangle + \langle \delta ^{*}, \delta \rangle . \end{aligned}$$

This implies that \(v^{*} \in \mathbb {R}_{+}\) with \(v^{*}\triangledown J(\bar{z})d = 0\). By this and Proposition 3.1 (c), we derive from the above inequality that \(\lambda ^{*} \in N(-Q, G(\bar{z}))\) with \(\langle \lambda ^{*}, \triangledown G(\bar{z})d\rangle = 0\), \(\delta ^{*} \in N(D, H(\bar{z}))\) satisfying \(\langle \delta ^{*}, \triangledown H(\bar{z})d\rangle = 0\), and the second-order necessary condition (8) is proved. \(\square \)

4 Proof of the Main Result

In this final section, we prove Theorem 2.1. To this end, we transform problem (OCP) into the form of problem (P) and then make use of Theorem 3.1. Let us set the spaces

$$\begin{aligned} U= & {} L^{q}(\Omega ), \, V = L^{r}(\Gamma ), \, Z = Y\times U\times V,\\ \Pi= & {} U\times V, \, \Lambda = C(\overline{\Omega }), \, \Delta = U\times V \end{aligned}$$

and define the mappings

$$\begin{aligned}&\Phi :Z \rightarrow \Pi , \, \Phi (y, u, v) = {\big (}Ay + \varphi (\cdot , y) - u, \dfrac{\partial y}{\partial n_{A}} + \psi (\cdot , y) - v{\big )},\\&G:Z \rightarrow \Lambda , \, G(y, u, v) = g(\cdot , y),\\&H:Z \rightarrow \Delta , \, H(y, u, v) = {\big (}h(\cdot , y) + u, k(\cdot , y) + v{\big )}. \end{aligned}$$

Then, the optimal control problem (OCP) can be written in the framework of problem (P) as below:

$$\begin{aligned}&\text{ min } \,\, J(y, u, v), \,\, \mathrm{subject \,\, to} \,\, \Phi (y, u, v) = 0, \, G(y, u, v) \in - Q, \, H(y, u, v)\\&\qquad \in D:= D_{\Omega }\times D_{\Gamma }, \end{aligned}$$

where the sets Q, \(D_{\Omega }\) and \(D_{\Gamma }\) are defined by relations (4), (5) and (6), resp.

For the later use, we define the set

$$\begin{aligned} S:= \{(y, u, v) \in Z:\, \Phi (y, u, v) = 0\} \end{aligned}$$

and the mapping

$$\begin{aligned} G_{1}:Y \rightarrow \Lambda , \, G_{1}(y) = g(\cdot , y). \end{aligned}$$

The following four propositions provide the derivatives and the second-order directional derivatives of the mappings J, \(\Phi \), G and H.

Proposition 4.1

Suppose that assumption (A1) is satisfied. Then, the mapping J is continuously differentiable around \(\bar{z}\) with

$$\begin{aligned} \triangledown J(\bar{z})d = \int _{\Omega }{\big (}K_{y}(\cdot , \bar{y}, \bar{u})y + K_{u}(\cdot , \bar{y}, \bar{u})u{\big )}dx + \int _{\Gamma }{\big (}L_{y}(\cdot , \bar{y}, \bar{v})y + L_{v}(\cdot , \bar{y}, \bar{v})v{\big )}ds \end{aligned}$$

for all \(d = (y, u, v) \in Z\), and the second-order directional derivative of J is given by

$$\begin{aligned} d_{2}J(\bar{z}, d)= & {} \int _{\Omega }d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{u}(x)), (y(x), u(x)){\big )}dx\\&+ \int _{\Gamma }d_{2}[L(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{v}(x)), (y(x), v(x)){\big )}ds. \end{aligned}$$

Proof

Let us put

$$\begin{aligned} \hat{K}(y, u, v) = \int _{\Omega }K(x, y(x), u(x))dx \, \, \mathrm{and} \, \, \hat{L}(y, u, v) = \int _{\Gamma }L(x, y(x), v(x))ds. \end{aligned}$$

Utilizing assumptions (A1)-(i.1) and (A1)-(ii.1), and arguing similarly as in Lemma 3.3 of [21], one can show the continuous differentiability of \(\hat{K}\) and \(\hat{L}\) with

$$\begin{aligned}&\triangledown \hat{K}(\bar{z})d = \int _{\Omega }{\big (}K_{y}(\cdot , \bar{y}, \bar{u})y + K_{u}(\cdot , \bar{y}, \bar{u})u{\big )}dx \, \, \mathrm{and} \\&\triangledown \hat{L}(\bar{z})d = \int _{\Gamma }{\big (}L_{y}(\cdot , \bar{y}, \bar{v})y + L_{v}(\cdot , \bar{y}, \bar{v})v{\big )}ds. \end{aligned}$$

From assumption (A1)-(i.2) and the definition of \(d_{2}\), we have for a.e. \(x \in \Omega \),

$$\begin{aligned} d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{u}(x)), (y(x), u(x)){\big )} = \lim _{t \rightarrow 0^+}\gamma _{K, t}(x), \end{aligned}$$

in which

$$\begin{aligned} \gamma _{K, t}(x):= & {} \frac{2}{t^{2}}{\big [}K(x, \bar{y}(x) + ty(x), \bar{u}(x) + tu(x)) - K(x, \bar{y}(x), \bar{u}(x))\\&- tK_{y}(x, \bar{y}(x), \bar{u}(x))y(x) - tK_{u}(x, \bar{y}(x), \bar{u}(x))u(x){\big ]}. \end{aligned}$$

Since the embedding of Y into \(C(\overline{\Omega })\) is continuous, both \(\bar{y}\) and y belong to \(C(\overline{\Omega })\), and one can choose \(M > 0\) such that \(\Vert \bar{y}\Vert _{C(\overline{\Omega })} + \Vert y\Vert _{C(\overline{\Omega })} < M\). Then, due to the Lipschitz-type continuity of \(K_{y}\) and \(K_{u}\) in assumption (A1)-(i.1), we get for a.e. \(x \in \Omega \) and for all \(t \in (0, 1)\) small enough,

$$\begin{aligned} |\gamma _{K, t}(x)|= & {} \frac{2}{t}\Big |\int _{0}^{1}{\big [}{\big (}K_{y}(x, \bar{y}(x) + tsy(x), \bar{u}(x) + tu(x))y(x)\\&\qquad - K_{y}(x, \bar{y}(x), \bar{u}(x))y(x){\big )}+ {\big (}K_{u}(x, \bar{y}(x), \bar{u}(x) + tsu(x))u(x)\\&\qquad - K_{u}(x, \bar{y}(x), \bar{u}(x))u(x){\big )}{\big ]}ds\Big |\le \frac{2}{t}[\eta (M)]^{2}\int _{0}^{1}{\big (}|tsy(x)|^{s_{1}}|y(x)|\\&\quad + m_{1}|tu(x)|^{s_{2}}|y(x)| + m_{1}|tsu(x)|^{s_{2}}|u(x)|{\big )}ds\\&\quad \le \frac{2}{t}[\eta (M)]^{2}{\big (}t^{s_{1}}|y(x)|^{s_{1} + 1} + m_{1}t^{s_{2}}|u(x)|^{s_{2}}|y(x)| + m_{1}t^{s_{2}}|u(x)|^{s_{2} + 1}{\big )}\\&\quad \le 2[\eta (M)]^{2}{\big (}M^{2s_{1}} + m_{1}M^{s_{2}}|u(x)|^{s_{2}} + m_{1}M^{s_{2} - 1}|u(x)|^{s_{2} + 1}{\big )}. \end{aligned}$$

It follows from the dominated convergence theorem that

$$\begin{aligned} d_{2}\hat{K}(\bar{z}, d)= & {} \lim _{t \rightarrow 0^+}\dfrac{\hat{K}(\bar{z} + td) - \hat{K}(\bar{z}) - t\triangledown \hat{K}(\bar{z})d}{t^{2}/2}\\= & {} \lim _{t \rightarrow 0^+}\int _{\Omega }\gamma _{K, t}(x)dx\\= & {} \int _{\Omega }d_{2}[K(x, \cdot , \cdot )]\left( (\bar{y}(x), \bar{u}(x)), (y(x), u(x))\right) dx. \end{aligned}$$

In a similar way, one also obtains

$$\begin{aligned} d_{2}\hat{L}(\bar{z}, d) = \int _{\Gamma }d_{2}[L(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{v}(x)), (y(x), v(x)){\big )}ds. \end{aligned}$$

\(\square \)

Proposition 4.2

Let assumptions (A2) and (A3) hold. Then, the mapping \(\Phi \) is continuously differentiable around \(\bar{z}\) with

$$\begin{aligned} \triangledown \Phi (\bar{z})d = \Big (Ay + \varphi _{y}(\cdot , \bar{y})y - u, \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y - v\Big ) \end{aligned}$$

for all \(d = (y, u, v) \in Z\), and the second-order directional derivative of \(\Phi \) is given as

$$\begin{aligned} d_{2}\Phi (\bar{z}, d) = \big (\gamma _{\varphi }(\cdot ), \gamma _{\psi }(\cdot )\big ), \end{aligned}$$

where \(\gamma _{\varphi }(x):= d_{2}[\varphi (x, \cdot )]\big (\bar{y}(x), y(x)\big )\) for a.e. \(x \in \Omega \) and \(\gamma _{\psi }(x):= d_{2}[\psi (x, \cdot )]\big (\bar{y}(x), y(x)\big )\) for a.e. \(x \in \Gamma \).

Proof

Due to assumptions (A2), (A3)-(i.1) and (A3)-(ii.1), and employing standard arguments, we can prove that \(\Phi \) is continuously differentiable around \(\bar{z}\). Let us set

$$\begin{aligned} \Phi _{1}(y, u, v) = Ay + \varphi (\cdot , y) - u \, \, \mathrm{and} \, \, \Phi _{2}(y, u, v) = \dfrac{\partial y}{\partial n_{A}} + \psi (\cdot , y) - v. \end{aligned}$$

By virtue of assumption (A3)-(i.2) and the definition of \(d_{2}\), we get for a.e. \(x \in \Omega \),

$$\begin{aligned} \gamma _{\varphi }(x) = \lim _{t \rightarrow 0^+}\dfrac{\varphi (x, \bar{y}(x) + ty(x)) - \varphi (x, \bar{y}(x)) - t\varphi _{y}(x, \bar{y}(x))y(x)}{t^{2}/2}. \end{aligned}$$

Let us pick \(M > 0\) such that \(\Vert \bar{y}\Vert _{C(\overline{\Omega })} + \Vert y\Vert _{C(\overline{\Omega })} < M\). Then, for a.e. \(x \in \Omega \) and for all \(t \in (0, 1)\) small enough, from the Lipschitz-type continuity of \(\varphi _{y}\) in assumption (A3)-(i.1) one has, for some \(\theta _{t}(x) \in (0, 1)\),

$$\begin{aligned}&\Big |\Big (\dfrac{\Phi _{1}(\bar{z} + td) - \Phi _{1}(\bar{z}) - t\triangledown \Phi _{1}(\bar{z})d}{t^{2}/2}\Big )(x)\Big |\\&\quad = \Big |\dfrac{\varphi (x, \bar{y}(x) + ty(x)) - \varphi (x, \bar{y}(x)) - t\varphi _{y}(x, \bar{y}(x))y(x)}{t^{2}/2}\Big |\\&\quad = \Big |\dfrac{[\varphi _{y}(x, \bar{y}(x) + t\theta _{t}(x)y(x)) - \varphi _{y}(x, \bar{y}(x))]y(x)}{t/2}\Big |\\&\quad \le \frac{2}{t}[\eta (M)]^{2}|t\theta _{t}(x)y(x)||y(x)| \le 2[\eta (M)]^{2}|y(x)|^{2}, \end{aligned}$$

where \(|y|^{2} \in L^{q}(\Omega )\). Applying the dominated convergence theorem, we deduce that

$$\begin{aligned} \dfrac{\Phi _{1}(\bar{z} + td) - \Phi _{1}(\bar{z}) - t\triangledown \Phi _{1}(\bar{z})d}{t^{2}/2} \rightarrow \gamma _{\varphi }(\cdot ) \end{aligned}$$

in \(L^{q}(\Omega )\) as \(t \rightarrow 0^+\). Reasoning analogously as above, one gets

$$\begin{aligned} \dfrac{\Phi _{2}(\bar{z} + td) - \Phi _{2}(\bar{z}) - t\triangledown \Phi _{2}(\bar{z})d}{t^{2}/2} \rightarrow \gamma _{\psi }(\cdot ) \end{aligned}$$

in \(L^{r}(\Gamma )\). Hence, \(d_{2}\Phi (\bar{z}, d) = \big (\gamma _{\varphi }(\cdot ), \gamma _{\psi }(\cdot )\big )\). \(\square \)

Proposition 4.3

Suppose that assumption (A4) is fulfilled. Then, the mapping G (resp, \(G_{1}\)) is continuously differentiable around \(\bar{z}\) (resp, \(\bar{y}\)) with

$$\begin{aligned} \triangledown G(\bar{z})d = \triangledown G_{1}(\bar{y})y = g_{y}(\cdot , \bar{y}(\cdot ))y(\cdot ) \end{aligned}$$

for all \(d = (y, u, v) \in Z\), and the second-order directional derivative of G (and \(G_{1}\)) is computed as

$$\begin{aligned} d_{2}G(\bar{z}, d) = d_{2}G_{1}(\bar{y}, y) = \gamma _{g}(\cdot ), \end{aligned}$$

where \(\gamma _{g}(x):= d_{2}[g(x, \cdot )]\big (\bar{y}(x), y(x)\big )\) for all \(x \in \overline{\Omega }\).

Proof

By virtue of assumption (A4)-(i) and using the same proof as in Lemmas 4.12 and 4.13 of [38], we can demonstrate that G (resp, \(G_{1}\)) is continuously differentiable around \(\bar{z}\) (resp, \(\bar{y}\)). Due to assumption (A4)-(ii) and the definition of \(d_{2}\), one gets for all \(x \in \overline{\Omega }\),

$$\begin{aligned} \gamma _{g}(x) = \lim _{t \rightarrow 0^+}\dfrac{g(x, \bar{y}(x) + ty(x)) - g(x, \bar{y}(x)) - tg_{y}(x, \bar{y}(x))y(x)}{t^{2}/2}. \end{aligned}$$

In addition, we have for all \(t \in (0, 1)\) small enough,

$$\begin{aligned}&\sup _{x \in \overline{\Omega }}\Big |\dfrac{g(x, \bar{y}(x) + ty(x)) - g(x, \bar{y}(x)) - tg_{y}(x, \bar{y}(x))y(x)}{t^{2}/2} - \gamma _{g}(x)\Big |\\&\quad = \sup _{x \in \overline{\Omega }}\Big |\frac{2}{t^{2}}\int _{0}^{1}[g_{y}(x, \bar{y}(x) + tsy(x)) - g_{y}(x, \bar{y}(x))]ty(x)ds - \gamma _{g}(x)\Big |\\&\quad = \frac{2}{t}\sup _{x \in \overline{\Omega }}\Big |\int _{0}^{1}\Big \{[g_{y}(x, \bar{y}(x) + tsy(x)) - g_{y}(x, \bar{y}(x))]y(x) - ts\gamma _{g}(x)\Big \}ds\Big |\\&\quad \le \frac{2}{t}\sup _{x \in \overline{\Omega }}\int _{0}^{1}\Big |[g_{y}(x, \bar{y}(x) + tsy(x)) - g_{y}(x, \bar{y}(x))]y(x) - ts\gamma _{g}(x)\Big |ds\\&\quad \le r(t):= \frac{2}{t}\sup _{x \in \overline{\Omega }, s \in [0, 1], |\hat{y}| \le M}\Big |[g_{y}(x, \bar{y}(x) + ts\hat{y}) - g_{y}(x, \bar{y}(x))]\hat{y}\\&\qquad - tsd_{2}[g(x, \cdot )](\bar{y}(x), \hat{y})\Big |. \end{aligned}$$

Here, M is a positive number such that \(\Vert y\Vert _{C(\overline{\Omega })} < M\). From condition (\(\hbox {C}_{{M}}\)), one observes that \(r(t) \rightarrow 0\) as \(t \rightarrow 0^+\), and thus,

$$\begin{aligned} \dfrac{g(\cdot , \bar{y}(\cdot ) + ty(\cdot )) - g(\cdot , \bar{y}(\cdot )) - tg_{y}(\cdot , \bar{y}(\cdot ))y(\cdot )}{t^{2}/2} \rightarrow \gamma _{g}(\cdot ) \end{aligned}$$

in \(C(\overline{\Omega })\). Therefore, \(d_{2}G(\bar{z}, d) = d_{2}G_{1}(\bar{y}, y) = \gamma _{g}(\cdot )\). \(\square \)

Proposition 4.4

Let assumption (A5) be valid. Then, the mapping H is continuously differentiable around \(\bar{z}\) with

$$\begin{aligned} \triangledown H(\bar{z})d = \big (h_{y}(\cdot , \bar{y})y + u, k_{y}(\cdot , \bar{y})y + v\big ) \end{aligned}$$

for all \(d = (y, u, v) \in Z\), and the second-order directional derivative of H is given by

$$\begin{aligned} d_{2}H(\bar{z}, d) = \big (\gamma _{h}(\cdot ), \gamma _{k}(\cdot )\big ), \end{aligned}$$

where \(\gamma _{h}(x):= d_{2}[h(x, \cdot )]\big (\bar{y}(x), y(x)\big )\) for a.e. \(x \in \Omega \) and \(\gamma _{k}(x):= d_{2}[k(x, \cdot )]\big (\bar{y}(x), y(x)\big )\) for a.e. \(x \in \Gamma \).

Proof

To obtain the conclusion, one can argue as in the proof of Proposition 4.2, with (A3), \(\Phi \), \(\varphi \), \(\psi \), \(\gamma _{\varphi }\) and \(\gamma _{\psi }\) replaced by (A5), H, h, k, \(\gamma _{h}\) and \(\gamma _{k}\), respectively. \(\square \)

The following regularity result will be used in the proof of Theorem 2.1.

Proposition 4.5

Suppose that assumptions (A2), (A3) and (A5) hold true. Then, the mapping \(\triangledown \Phi (\hat{z})\) is surjective for all \(\hat{z} \in Z\) and the regularity condition (RC) is verified.

Proof

For any fixed \((u_{0}, v_{0}) \in \Pi \), one considers the equation \(\triangledown \Phi (\hat{z})(y, u, v) = (u_{0}, v_{0})\), where \(\hat{z} = (\hat{y}, \hat{u}, \hat{v})\) and \((y, u, v) \in Z\). This means that

$$\begin{aligned} \left\{ \begin{array}{ll} Ay + \varphi _{y}(\cdot , \hat{y})y = u + u_{0} \,\, \text{ in } \,\, \Omega , \\ \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \hat{y})y = v + v_{0} \,\, \text{ on } \,\, \Gamma .\\ \end{array} \right. \end{aligned}$$

Due to assumptions (A2) and (A3) and the fact that \(u + u_{0} \in U\) and \(v + v_{0} \in V\), it follows from Theorem 5 of [1] that the above equation has a unique solution \(y \in Y\). Hence, the mapping \(\triangledown \Phi (\hat{z})\) is surjective.

Now, we prove that \(\triangledown H(\bar{z})\big (T(S, \hat{z})\big ) = \Delta \) for all \(\hat{z} = (\hat{y}, \hat{u}, \hat{v}) \in S\) and so, condition (RC) is satisfied. From Proposition 3.3, one obtains \(T(S, \hat{z}) = \{z \in Z :\, \triangledown \Phi (\hat{z})z = 0\}\). For arbitrary \((u, v) \in \Delta \), we examine the equation

$$\begin{aligned} \left\{ \begin{array}{ll} Ay + \varphi _{y}(\cdot , \hat{y})y + h_{y}(\cdot , \bar{y})y = u \,\, \text{ in } \,\, \Omega , \\ \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \hat{y})y + k_{y}(\cdot , \bar{y})y = v \,\, \text{ on } \,\, \Gamma .\\ \end{array} \right. \end{aligned}$$

By virtue of assumptions (A2), (A3) and (A5) and employing again Theorem 5 in [1], one deduces that the last equation has a unique solution \(y_{1} \in Y\). Setting \(u_{1}:= u - h_{y}(\cdot , \bar{y})y_{1}\) and \(v_{1}:= v - k_{y}(\cdot , \bar{y})y_{1}\), we see that \(y_{1}\) is a solution to the following system

$$\begin{aligned} \left\{ \begin{array}{ll} Ay + \varphi _{y}(\cdot , \hat{y})y = u_{1} \,\, \text{ in } \,\, \Omega , \\ \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \hat{y})y = v_{1} \,\, \text{ on } \,\, \Gamma ,\\ \end{array} \right. \end{aligned}$$

which implies that \(z_{1}:= (y_{1}, u_{1}, v_{1})\) satisfies \(\triangledown \Phi (\hat{z})z_{1} = 0\), that is, \(z_{1} \in T(S, \hat{z})\). Moreover, by the formula for \(\triangledown H(\bar{z})\) in Proposition 4.4 and the definition of \(u_{1}\) and \(v_{1}\), one gets \(\triangledown H(\bar{z})z_{1} = \big (h_{y}(\cdot , \bar{y})y_{1} + u_{1}, k_{y}(\cdot , \bar{y})y_{1} + v_{1}\big ) = (u, v)\), and the assertion is proved. \(\square \)

The next characterization of the tangent and normal cones of \(D_{\Omega }\) (resp. \(D_{\Gamma }\)) at \(h(\cdot , \bar{y}) + \bar{u}\) (resp. \(k(\cdot , \bar{y}) + \bar{v}\)) is an immediate consequence of Theorem 8.5.1 in [2] (see also Lemma 4.2 in [5] and Lemma 12 in [37]).

Proposition 4.6

The following statements are fulfilled.

  1. (a)
    $$\begin{aligned}&u \in T(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u}) \Leftrightarrow u \in L^{q}(\Omega ) \, \, such \, \, that \, \, u(x) \,\left\{ \begin{array}{ll} \ge 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{1},\\ \le 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{2},\\ \end{array} \right. \\&v \in T(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v}) \Leftrightarrow v \in L^{r}(\Gamma ) \, \, such \, \, that \, \, v(x) \,\left\{ \begin{array}{ll} \ge 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{1},\\ \le 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{2}.\\ \end{array} \right. \end{aligned}$$
  2. (b)
    $$\begin{aligned}&u^* \in N(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u}) \Leftrightarrow u^* \in L^{\hat{q}}(\Omega ) \, \, such \, \, that \, \, u^*(x) \,\left\{ \begin{array}{lll} \le 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{1},\\ \ge 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{2},\\ = 0 &{} \text{ otherwise },\\ \end{array} \right. \\&v^* \in N(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v}) \Leftrightarrow v^* \in L^{\hat{r}}(\Gamma ) \, \, such \, \, that \, \, v^*(x) \,\left\{ \begin{array}{lll} \le 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{1},\\ \ge 0 &{} \text{ a.e. }\,\,\, x \in \Gamma _{2},\\ = 0 &{} \text{ otherwise }.\\ \end{array} \right. \end{aligned}$$

    Here, the sets \(\Omega _{1}\), \(\Omega _{2}\), \(\Gamma _{1}\) and \(\Gamma _{2}\) are defined in Theorem 2.1 (c).

In the sequel, we will need the following fact, whose proof for the case \((X, D_{X}, p) = (\Omega , D_{\Omega }, q)\) is given in Corollary 3.2 of [29]; the proof for the other case proceeds analogously.

Proposition 4.7

Let \((X, D_{X}, p) \in \{(\Omega , D_{\Omega }, q), (\Gamma , D_{\Gamma }, r)\}\). Assume that \(\zeta , \eta \in L^{p}(X)\) and \(\sigma \in \Sigma \) are such that \(T^{i, 2, \sigma }(D_{X}, \zeta , \eta ) \ne \emptyset \). Suppose further that \(\psi \in L^{\hat{p}}(X)\) (with \(\hat{p} = p/(p - 1)\)) satisfies \(\psi (x) = 0\) for a.e. \(x \in X\) with \(\alpha _{X}(x)< \zeta (x) < \beta _{X}(x)\), and \(\psi (x)\eta (x) = 0\) a.e. \(x \in X\). If \(s\big (\psi , T^{i, 2, \sigma }(D_{X}, \zeta , \eta )\big ) < +\infty \), then \(s\big (\psi , T^{i, 2, \sigma }(D_{X}, \zeta , \eta )\big ) \ge 0.\)

Proof of Theorem 2.1

We see that conditions (a), (b), (c) and (d) of Definition 2.1 are equivalent to \(\triangledown J(\bar{z})d \le 0\), \(\triangledown \Phi (\bar{z})d = 0\), \(\triangledown G(\bar{z})d \in - \)cl \(Q(G(\bar{z}))\) and \(T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d) \ne \emptyset \) for some \(\sigma \in \Sigma \), respectively. With the help of Propositions 4.1–4.5, one checks that all assumptions of Theorem 3.1 hold true. Applying this theorem, we obtain \(\lambda \in \mathbb {R}_{+}\), \((p_{1}, p_{2}) \in \Pi ^{*} = L^{\hat{q}}(\Omega )\times L^{\hat{r}}(\Gamma )\), \(\mu \in \Lambda ^{*} = \mathcal {M}(\overline{\Omega })\), \((\phi , \chi ) \in \Delta ^{*} = L^{\hat{q}}(\Omega )\times L^{\hat{r}}(\Gamma )\) with \((\lambda , \mu ) \ne (0, 0)\) such that \(\mu \in N(-Q, G(\bar{z}))\), \((\phi , \chi ) \in N(D, H(\bar{z}))\),

$$\begin{aligned} \lambda \triangledown J(\bar{z}) + (p_{1}, p_{2}) \circ \triangledown \Phi (\bar{z}) + \mu \circ \triangledown G(\bar{z}) + (\phi , \chi ) \circ \triangledown H(\bar{z}) = 0, \end{aligned}$$
(9)

and the following inequality holds:

$$\begin{aligned}&\lambda \int _{\Omega }d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{u}(x)), (y(x), u(x)){\big )}dx\nonumber \\&\qquad + \lambda \int _{\Gamma }d_{2}[L(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{v}(x)), (y(x), v(x)){\big )}ds\nonumber \\&\qquad + \int _{\Omega }p_{1}(x)d_{2}[\varphi (x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}dx + \int _{\Gamma }p_{2}(x)d_{2}[\psi (x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}ds\nonumber \\&\qquad + \int _{\overline{\Omega }}d_{2}[g(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}d\mu \nonumber \\&\qquad + \int _{\Omega }\phi (x)d_{2}[h(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}dx + \int _{\Gamma }\chi (x)d_{2}[k(x, \cdot )]{\big (}\bar{y}(x), y(x){\big )}ds\nonumber \\&\quad \ge s{\big (}\mu , T^{i, 2}(-Q, g(\cdot , \bar{y}), g_{y}(\cdot , \bar{y})y){\big )} + s{\big (}(\phi , \chi ), T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d){\big )}.\nonumber \\ \end{aligned}$$
(10)

Notice that Eq. (9) is equivalent to the four next relations:

$$\begin{aligned}&\int _{\Omega }{\big [}\lambda K_{y}(\cdot , \bar{y}, \bar{u})y + p_{1}(Ay + \varphi _{y}(\cdot , \bar{y})y) + \phi h_{y}(\cdot , \bar{y})y{\big ]}dx\nonumber \\&\quad + \langle [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Omega }, y \rangle _{b, \Omega } = 0 \, \, \text{ for } \text{ every } \, \, y \in Y, \end{aligned}$$
(11)
$$\begin{aligned}&\int _{\Gamma }{\big [}\lambda L_{y}(\cdot , \bar{y}, \bar{v})y + p_{2}{\Big (}\dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y{\Big )} + \chi k_{y}(\cdot , \bar{y})y{\big ]}ds\nonumber \\&\quad + \langle [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Gamma }, y \rangle _{\Gamma } = 0 \, \, \text{ for } \text{ every } \, \, y \in Y, \end{aligned}$$
(12)
$$\begin{aligned}&\int _{\Omega }{\big [}\lambda K_{u}(\cdot , \bar{y}, \bar{u}) - p_{1} + \phi {\big ]}udx = 0 \, \, \text{ for } \text{ every } \, \, u \in L^{q}(\Omega ), \end{aligned}$$
(13)
$$\begin{aligned}&\int _{\Gamma }{\big [}\lambda L_{v}(\cdot , \bar{y}, \bar{v}) - p_{2} + \chi {\big ]}vds = 0 \, \, \text{ for } \text{ every } \, \, v \in L^{r}(\Gamma ). \end{aligned}$$
(14)

Here and in the sequel, \(\langle \cdot , \cdot \rangle _{\Gamma }\) denotes the canonical pairing on \(\mathcal {M}(\Gamma )\times C(\Gamma )\). By virtue of Theorem 4 in [1], there is a unique weak solution \(p \in \bigcap _{1 \le s < N/(N - 1)}W^{1, s}(\Omega )\) of the equation

$$\begin{aligned} \left\{ \begin{array}{ll} A^*p + \varphi _{y}(\cdot , \bar{y})p = - \lambda K_{y}(\cdot , \bar{y}, \bar{u}) - [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Omega } - h_{y}(\cdot , \bar{y})^{*}\phi \,\, \text{ in } \,\, \Omega \text{, } \\ \dfrac{\partial p}{\partial n_{A^*}} + \psi _{y}(\cdot , \bar{y})p = - \lambda L_{y}(\cdot , \bar{y}, \bar{v}) - [g_{y}(\cdot , \bar{y})^{*}\mu ]_{\Gamma } - k_{y}(\cdot , \bar{y})^{*}\chi \,\, \text{ on } \,\, \Gamma \text{, }\\ \end{array} \right. \end{aligned}$$
(15)

which fulfills

$$\begin{aligned} \int _{\Omega }pAydx + \int _{\Gamma }p\dfrac{\partial y}{\partial n_{A}}ds = \langle A^{*}p, y \rangle _{b, \Omega } + {\Big \langle } \dfrac{\partial p}{\partial n_{A^*}}, y {\Big \rangle }_{\Gamma } \, \, \text{ for } \text{ every } \, \, y \in Y. \end{aligned}$$
(16)

From (11), (12), (15) and (16), one has

$$\begin{aligned}&\int _{\Omega }pAydx + \int _{\Gamma }p\dfrac{\partial y}{\partial n_{A}}ds\\&\quad = \int _{\Omega }p_{1}(Ay + \varphi _{y}(\cdot , \bar{y})y)dx - \int _{\Omega }\varphi _{y}(\cdot , \bar{y})pydx\\&\qquad + \int _{\Gamma }p_{2}{\Big (}\dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y{\Big )}ds - \int _{\Gamma }\psi _{y}(\cdot , \bar{y})pyds \end{aligned}$$

which implies

$$\begin{aligned}&\int _{\Omega }(p - p_{1})(Ay + \varphi _{y}(\cdot , \bar{y})y)dx \nonumber \\&\quad + \int _{\Gamma }(p - p_{2}){\Big (}\dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y{\Big )}ds = 0 \, \, \text{ for } \text{ every } \, \, y \in Y. \end{aligned}$$
(17)

It follows from Theorem 2 of [1] that the mapping \(y \mapsto {\Big (}Ay + \varphi _{y}(\cdot , \bar{y})y, \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y{\Big )}\) is surjective from Y onto \(L^{q}(\Omega )\times L^{r}(\Gamma )\) and so, Eq. (17) yields that

$$\begin{aligned} p = p_{1} \, \, \text{ a.e. } \text{ in } \, \, \Omega \, \, \text{ and } \, \, p = p_{2} \, \, \text{ a.e. } \text{ on } \, \, \Gamma . \end{aligned}$$
(18)
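
Indeed, by the surjectivity just quoted, for every pair \((f, g) \in L^{q}(\Omega )\times L^{r}(\Gamma )\) one can choose \(y \in Y\) with \(Ay + \varphi _{y}(\cdot , \bar{y})y = f\) in \(\Omega \) and \(\dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y = g\) on \(\Gamma \); inserting such y into (17) gives

$$\begin{aligned} \int _{\Omega }(p - p_{1})f dx + \int _{\Gamma }(p - p_{2})g ds = 0 \, \, \text{ for } \text{ every } \, \, (f, g) \in L^{q}(\Omega )\times L^{r}(\Gamma ), \end{aligned}$$

which forces (18).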

Therefore, assertion (a) of Theorem 2.1 follows from (15), and assertion (b) from (13), (14) and (18). By Proposition 4.6 (b), the inclusion \((\phi , \chi ) \in N(D, H(\bar{z}))\), that is \(\phi \in N(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u})\) and \(\chi \in N(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v})\), implies assertion (c). Furthermore, the following equivalence holds:

$$\begin{aligned} \mu \in N(-Q, G(\bar{z})) \,\, \mathrm{iff} \,\, \mu \succeq 0 \,\, \mathrm{and} \,\, \text{ supp }(\mu ) \subset \{x \in \overline{\Omega } :\, g(x, \bar{y}(x)) = 0\} \end{aligned}$$

and thus, assertion (d) is satisfied. It is obvious that

$$\begin{aligned}&s{\big (}(\phi , \chi ), T^{i, 2, \sigma }(D, H(\bar{z}), \triangledown H(\bar{z})d){\big )}\\&\quad = s{\big (}\phi , T^{i, 2, \sigma }(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u}, h_{y}(\cdot , \bar{y})y + u){\big )}\\&\qquad + s{\big (}\chi , T^{i, 2, \sigma }(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v}, k_{y}(\cdot , \bar{y})y + v){\big )}. \end{aligned}$$

We show that

$$\begin{aligned} s{\big (}\phi , T^{i, 2, \sigma }(D_{\Omega }, h(\cdot , \bar{y}) + \bar{u}, h_{y}(\cdot , \bar{y})y + u){\big )} \ge 0. \end{aligned}$$

In fact, let us put \(\zeta := h(\cdot , \bar{y}) + \bar{u}\) and \(\eta := h_{y}(\cdot , \bar{y})y + u\). Then, from assertion (c) one observes that \(\phi (x) = 0\) for a.e. \(x \in \Omega \) with \(\alpha _{\Omega }(x)< \zeta (x) < \beta _{\Omega }(x)\). Due to Proposition 3.1 (c), we have \(\langle \phi , \eta \rangle = 0\). Since \(\eta \in T(D_{\Omega }, \zeta )\), it follows from Proposition 4.6 (a) that

$$\begin{aligned} \eta (x) \,\left\{ \begin{array}{ll} \ge 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{1},\\ \le 0 &{} \text{ a.e. }\,\,\, x \in \Omega _{2}.\\ \end{array} \right. \end{aligned}$$

Hence, \(\phi (x)\eta (x) \le 0\) for a.e. \(x \in \Omega \). This, together with \(\langle \phi , \eta \rangle = 0\), implies that \(\phi (x)\eta (x) = 0\) for a.e. \(x \in \Omega \). Applying Proposition 4.7 with \((X, D_{X}, p) = (\Omega , D_{\Omega }, q)\), one gets the desired inequality \(s{\big (}\phi , T^{i, 2, \sigma }(D_{\Omega }, \zeta , \eta ){\big )} \ge 0\). Similarly, we have \(s{\big (}\chi , T^{i, 2, \sigma }(D_{\Gamma }, k(\cdot , \bar{y}) + \bar{v}, k_{y}(\cdot , \bar{y})y + v){\big )} \ge 0\) and thus, assertion (e) of Theorem 2.1 is obtained from (10) and (18). The proof is finished. \(\square \)

In the following example, we show how a suitably chosen critical direction can be used to exclude a triple \(\bar{z} = (\bar{y}, \bar{u}, \bar{v})\) that satisfies the first-order necessary conditions from being a local optimal solution of (OCP).

Example 4.1

Consider the open unit ball \(\Omega \) in \(\mathbb {R}^{2}\) with the boundary \(\Gamma \). Suppose that \(q = r = 2\), and the data of problem (OCP) are given by

$$\begin{aligned}&K(x, y, u) = 2y|y| - u^{2}, L(x, y, v) = \,\left\{ \begin{array}{ll} \frac{1}{2}y^{2} + \frac{1}{2}v^{2} &{}\quad \text{ if }\,\,\, y \ge 0,\\ \frac{1}{3}y^{3} + \frac{1}{2}v^{2} &{}\quad \text{ if }\,\,\, y < 0,\\ \end{array} \right. \\&a_{ij}(x) = \,\left\{ \begin{array}{ll} 1 &{}\quad \text{ if }\,\,\, i = j,\\ 0 &{}\quad \text{ if }\,\,\, i \ne j,\\ \end{array} \right. \varphi (x, y) = y^{3}, \psi (x, y) = y,\\&g(x, y) = - x_{1}^{2} - x_{2}^{2} + \frac{1}{2}y|y|, h(x, y) = \frac{1}{3}y^{3} + y|y| + y, k(x, y) = y,\\&\alpha _{\Omega }(x) = - 2, \,\, \beta _{\Omega }(x) = 2, \, \, \alpha _{\Gamma }(x) = - 1, \,\, \beta _{\Gamma }(x) = 1, \end{aligned}$$

where \(x = (x_{1}, x_{2})\). Then, K, L, g and h are in \(C^{1, 1}\) but not in \(C^{2}\), \(\varphi \), \(\psi \) and k are \(C^{2}\) functions, and one can calculate

$$\begin{aligned}&K_{y}(x, y, u) = 4|y|, K_{u}(x, y, u) = - 2u, \\&L_{y}(x, y, v) = \,\left\{ \begin{array}{ll} y &{}\quad \text{ if }\,\,\, y \ge 0,\\ y^{2} &{}\quad \text{ if }\,\,\, y < 0,\\ \end{array} \right. L_{v}(x, y, v) = v,\\&\varphi _{y}(x, y) = 3y^{2}, \, \, \psi _{y}(x, y) = 1, \, \, g_{y}(x, y) = |y|, \\&h_{y}(x, y) = y^{2} + 2|y| + 1, \, \, k_{y}(x, y) = 1. \end{aligned}$$
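
All of these formulas follow by elementary one-variable differentiation; the only non-smooth ingredient is the term \(y|y|\), for which, writing \(y|y| = y^{2}\) if \(y \ge 0\) and \(y|y| = - y^{2}\) if \(y < 0\), one obtains

$$\begin{aligned} \dfrac{d}{dy}\big (y|y|\big ) = 2|y|, \qquad \dfrac{d}{dy}\Big (\frac{1}{2}y|y|\Big ) = |y|, \end{aligned}$$

which accounts for the expressions of \(K_{y}\), \(g_{y}\) and \(h_{y}\) above.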

Now, we demonstrate that the feasible triple \(\bar{z} = (\bar{y}, \bar{u}, \bar{v}) \equiv (0, 0, 0)\) is not a local optimal solution of our problem. Let \((\lambda , p, \mu , \phi , \chi )\) be a tuple of Lagrange multipliers such that \((\lambda , \mu ) \ne (0, 0)\), \(\lambda \ge 0\), \(\mu \succeq 0\), and assertions (a)-(d) in Theorem 2.1 are satisfied at \(\bar{z}\). Since \(h(\cdot , \bar{y}) + \bar{u} \equiv 0\) and \(k(\cdot , \bar{y}) + \bar{v} \equiv 0\) lie strictly between the corresponding bounds \(\alpha _{\Omega }, \beta _{\Omega }\) and \(\alpha _{\Gamma }, \beta _{\Gamma }\), assertion (c) gives \(\phi \equiv 0\) and \(\chi \equiv 0\); combined with \(K_{u}(\cdot , 0, 0) = 0\) and \(L_{v}(\cdot , 0, 0) = 0\), assertion (b) then yields \(p \equiv 0\). Moreover, since \(g(x, \bar{y}(x)) = - x_{1}^{2} - x_{2}^{2}\) vanishes only at \(x = 0\), assertion (d) shows that \(\mu \) is concentrated at the origin, that is, one has for some \(\tau \ge 0\),

$$\begin{aligned} \langle \mu , w\rangle = \tau w(0), \, \forall w \in C(\overline{\Omega }). \end{aligned}$$

Let us take \(d = (y, u, v)\) as a critical direction for the problem at \(\bar{z}\), where

$$\begin{aligned} y(x) = - 1, \,\, u(x) = 0, \, \, v(x) = - 1. \end{aligned}$$
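
That the conditions \(\triangledown J(\bar{z})d \le 0\), \(\triangledown \Phi (\bar{z})d = 0\) and \(\triangledown G(\bar{z})d \in - \)cl \(Q(G(\bar{z}))\) from the characterization recalled at the beginning of the proof of Theorem 2.1 are fulfilled can be checked directly from the derivatives listed above. Indeed, since A and the conormal derivative annihilate constants and \(\varphi _{y}(x, 0) = 0\), \(\psi _{y}(x, 0) = 1\), one has

$$\begin{aligned} Ay + \varphi _{y}(\cdot , \bar{y})y = 0 = u \,\, \text{ in } \,\, \Omega , \qquad \dfrac{\partial y}{\partial n_{A}} + \psi _{y}(\cdot , \bar{y})y = - 1 = v \,\, \text{ on } \,\, \Gamma , \end{aligned}$$

so that \(\triangledown \Phi (\bar{z})d = 0\); moreover, \(\triangledown J(\bar{z})d = 0\) and \(\triangledown G(\bar{z})d = g_{y}(\cdot , \bar{y})y \equiv 0\), since \(K_{y}\), \(K_{u}\), \(L_{y}\), \(L_{v}\) and \(g_{y}\) all vanish at \(\bar{z} = (0, 0, 0)\).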

Then, it is not hard to verify that all hypotheses (A1)-(A5) are fulfilled. In addition, we can show that for every \(x \in \mathbb {R}^{2}\) and \((\hat{y}, \hat{u}, \hat{v}) \in \mathbb {R}^{3}\),

$$\begin{aligned}&\displaystyle d_{2}[K(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{u}(x)), (\hat{y}, \hat{u}){\big )} = 4\hat{y}|\hat{y}| - 2\hat{u}^{2},\\&\displaystyle d_{2}[L(x, \cdot , \cdot )]{\big (}(\bar{y}(x), \bar{v}(x)), (\hat{y}, \hat{v}){\big )} = \,\left\{ \begin{array}{ll} \hat{y}^{2} + \hat{v}^{2} &{} \text{ if }\,\,\, \hat{y} \ge 0,\\ \hat{v}^{2} &{} \text{ if }\,\,\, \hat{y} < 0,\\ \end{array} \right. \\&\displaystyle d_{2}[\varphi (x, \cdot )]{\big (}\bar{y}(x), \hat{y}{\big )} {=} d_{2}[\psi (x, \cdot )]{\big (}\bar{y}(x), \hat{y}{\big )} {=} d_{2}[k(x, \cdot )]{\big (}\bar{y}(x), \hat{y}{\big )} {=} 0,\\&\displaystyle d_{2}[g(x, \cdot )]{\big (}\bar{y}(x), \hat{y}{\big )} = \hat{y}|\hat{y}|, \, \, d_{2}[h(x, \cdot )]{\big (}\bar{y}(x), \hat{y}{\big )} = 2\hat{y}|\hat{y}|,\\&\displaystyle \theta ^{g}_{\bar{y}, y}(x) = \,\left\{ \begin{array}{ll} + \infty &{}\quad \text{ if }\,\,\, x \ne 0,\\ 0 &{} \quad \text{ if }\,\,\, x = 0.\\ \end{array} \right. \end{aligned}$$
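
For instance, in the computation of \(d_{2}[L(x, \cdot , \cdot )]\) for \(\hat{y} < 0\), the cubic term is of order \(t^{3}\) and therefore disappears in the limit:

$$\begin{aligned} d_{2}[L(x, \cdot , \cdot )]{\big (}(0, 0), (\hat{y}, \hat{v}){\big )} = \lim _{t \rightarrow 0^+}\dfrac{\frac{1}{3}t^{3}\hat{y}^{3} + \frac{1}{2}t^{2}\hat{v}^{2}}{t^{2}/2} = \hat{v}^{2} \,\,\, \text{ for } \,\, \hat{y} < 0; \end{aligned}$$

the remaining expressions are obtained in the same way.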

Hence, for any tuple of Lagrange multipliers as above, one obtains with the quantity \(\mathcal {L}\) defined in Theorem 2.1 (e),

$$\begin{aligned}&\mathcal {L} = \lambda \int _{\Omega }{\big (}4y(x)|y(x)| - 2[u(x)]^{2}{\big )}dx + \lambda \int _{\Gamma }[v(x)]^{2}ds + \int _{\overline{\Omega }}y(x)|y(x)|d\mu \\&\quad = - 2\lambda \pi - \tau < 0 = \int _{\overline{\Omega }}\theta ^{g}_{\bar{y}, y}(x)d\mu . \end{aligned}$$
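
The numerical value of \(\mathcal {L}\) is obtained by elementary integration, using that the unit disc has area \(\pi \), its boundary circle has length \(2\pi \), and \(\langle \mu , w\rangle = \tau w(0)\):

$$\begin{aligned} \lambda \int _{\Omega }{\big (}4y(x)|y(x)| - 2[u(x)]^{2}{\big )}dx = - 4\lambda \pi , \quad \lambda \int _{\Gamma }[v(x)]^{2}ds = 2\lambda \pi , \quad \int _{\overline{\Omega }}y(x)|y(x)|d\mu = - \tau , \end{aligned}$$

so that \(\mathcal {L} = - 4\lambda \pi + 2\lambda \pi - \tau = - 2\lambda \pi - \tau < 0\), where the strict inequality holds because \(\lambda \ge 0\), \(\tau \ge 0\) and \((\lambda , \mu ) \ne (0, 0)\); moreover, \(\int _{\overline{\Omega }}\theta ^{g}_{\bar{y}, y}(x)d\mu = \tau \theta ^{g}_{\bar{y}, y}(0) = 0\) since \(\mu \) is concentrated at the origin.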

Since no tuple of Lagrange multipliers can fulfill assertion (e) of Theorem 2.1 at \(\bar{z}\), this theorem shows that \(\bar{z}\) is not a local optimal solution of (OCP), and the assertion is proved.