1 Introduction

Optimality conditions occupy a central position in mathematical optimization. For nonsmooth problems, to meet the increasing diversity of practical situations, a broad spectrum of generalized derivatives has been developed to play the role of the Fréchet and Gateaux derivatives. Each of them is suitable for certain classes of models, and none is universal. The reader is referred to the systematic and comprehensive expositions of generalized derivatives and nonsmooth optimization in the well-known books [4, 6, 22–26]. Note that the wide range of methods in nonsmooth optimization can be roughly separated into the primal and the dual space approaches. Almost all notions of generalized derivatives in the primal space approach are based on corresponding tangency concepts and hence carry only local information. In other words, such derivatives are local linear approximations of the map under consideration. Only a few concepts of such derivatives admit extensions to orders greater than two, which are naturally needed for higher-order optimality conditions; among them are the contingent, adjacent and Clarke derivatives (see, e.g., [5, 7–10, 12–14, 18–21]), variational sets (see [1, 16, 17]), etc. (The derivatives constructed in the dual space approach are hardly extendable to orders greater than two.)

The radial derivative, introduced in [27] and extended to higher orders in [2, 3], belongs to the primal space approach, but it combines the ideas of a conical hull and a contingent set: like a conical hull it carries global information about a map, and like a contingent set it is closed and admits extensions to higher orders. These properties help us to obtain higher-order optimality conditions without convexity assumptions. Furthermore, we can deal with global solutions. To fix the particular kind of radial objects used in this paper, observe that the higher-order radial derivative defined in [2] is different from that in [3]; the latter is more natural and has better geometric properties. Recall from [3] the following.

Definition 1.1

Let \(X,Y\) be normed spaces, \(S\subset X\), \(x_0\in S\), \((u_1, v_1),\ldots , (u_{m-1},v_{m-1})\in X\times Y\), \(F:X\rightarrow 2^Y\), and \((x_0,y_0)\in \mathrm{gr}F\).

  1. (i)

    The \(m\)th-order radial set of \(S\) at \(x_0\) with respect to (shortly, wrt) \(u_1,\ldots ,u_{m-1}\) is

    $$\begin{aligned} T^{r(m)}_S(x_0,u_1,\ldots ,u_{m-1}):&= \left\{ y\in X|\;\exists t_n >0, \exists y_n\rightarrow y, \forall n, x_0+t_nu_1\right. \\&\quad \left. +\ldots +t_n^{m-1}u_{m-1}+t_n^my_n\in S\right\} . \end{aligned}$$
  2. (ii)

    The \(m\)th-order radial derivative of \(F\) at \((x_0,y_0)\) wrt \((u_1,v_1) ,\ldots , (u_{m-1},v_{m-1})\) is the multimap \(D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1}): X\rightarrow 2^Y\) whose graph is

    $$\begin{aligned} \text{ gr }D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})=T^{r(m)}_{\mathrm{gr}F}(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1}). \end{aligned}$$

Remark 1.1

  1. (i)

    One sees that

    $$\begin{aligned}&D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(x) \!=\! \left\{ v\in Y|\; \exists t_n >0, \exists x_n\rightarrow x, \exists v_n\rightarrow v,\forall n, \right. \\&\quad y_0+t_nv_1+\cdots +t_n^{m-1}v_{m-1}+t_n^mv_n \left. \in F(x_0+t_nu_1+\cdots +t_n^{m-1}u_{m-1}+t_n^mx_n)\right\} . \end{aligned}$$

    Definition 1.1 corresponds to the following known definition of the contingent objects

    $$\begin{aligned} T^{m}_S(x_0,u_1,\ldots ,u_{m-1}):&= \left\{ y\in X|\;\exists t_n \rightarrow 0^{+}, \exists y_n\rightarrow y, \forall n, x_0+t_nu_1\right. \\&\left. +\cdots +t_n^{m-1}u_{m-1}+t_n^my_n\in S\right\} ,\\ \text{ gr }D^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})&= T^{m}_{\mathrm{gr}F}(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1}). \end{aligned}$$

    We also have the corresponding equivalent formulation

    $$\begin{aligned}&D^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(x) := \left\{ v\in Y|\; \exists t_n \!\rightarrow \!0^{+}, \exists x_n\rightarrow x, \exists v_n\!\rightarrow \!v,\forall n,\right. \\&\quad y_0+t_nv_{1}+\cdots +t_n^{m-1}v_{m-1}\!+\!{t}_n^mv_n \in \left. F(x_0+t_nu_{1}+\cdots +t_n^{m-1}u_{m-1}\!+\!{t}_n^mx_n)\right\} . \end{aligned}$$

    Observe another fact, which distinguishes the radial set and derivative from the contingent set and derivative, and hence also from other tangency notions and derivatives in variational analysis. For simplicity, let us explain this difference only for \(T^{r(2)}_S(x_0,u)\) and \(T^{2}_S(x_0,u)\). It is known that if \(u\not \in T_S (x_0)\), then \(T^{2}_S(x_0,u)=\emptyset \). But, the simple example with \(X=\mathbb{R }^2, S=\{0,e_1\}, x_0=0, e_1=(1,0), e_2=(0,1)\) shows that this is not valid for radial sets: \(u=e_1 - e_2\not \in T^{r}_S (x_0)\), but \(e_2\in T^{r(2)}_S(x_0,u)\), i.e., the latter set is nonempty (a small numerical illustration is sketched after this remark).

  2. (ii)

    In [3], the objects defined in Definition 1.1 are called upper radial sets and derivatives, respectively (shortly, resp). A similar notion corresponding to the adjacent set and derivative, obtained by replacing \(\exists t_n >0\) by \(\forall t_n >0\), is called a lower radial set (or derivative). However, although it can be applied to establish necessary optimality conditions in the same way as the upper radial concept, this lower radial object yields weaker results, and it is not convenient for dealing with sufficient optimality conditions. Therefore, in this paper we develop only the upper radial concepts and thus omit the term “upper”.
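
To make the difference noted in (i) concrete, the following small Python sketch (an illustration only, not part of the argument; the helper `in_S` and the sampled values of \(t\) are ad hoc choices) checks the two claims of the remark: the constant choice \(t_n\equiv 1\), \(y_n\equiv e_2\) already realizes \(e_2\in T^{r(2)}_S(x_0,u)\), whereas for the contingent set, where \(t_n\rightarrow 0^+\) is forced, the required \(y_n\) blows up, so \(T^{2}_S(x_0,u)=\emptyset\).

```python
import numpy as np

# Data of the remark: S = {0, e1} in R^2, x0 = 0, u = e1 - e2.
S = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
x0 = np.array([0.0, 0.0])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u = e1 - e2

def in_S(p, tol=1e-9):
    return any(np.linalg.norm(p - s) <= tol for s in S)

# Radial set: some t_n > 0 suffices; the constant choice t_n = 1, y_n = e2 gives
# x0 + t*u + t^2*e2 = e1, which lies in S, so e2 lies in T^{r(2)}_S(x0, u).
t = 1.0
print(in_S(x0 + t * u + t ** 2 * e2))                      # True

# Contingent set: t_n must tend to 0+. Hitting S then forces y_n = (s - t*u)/t^2
# for some s in S, and the smallest such y_n blows up as t -> 0+, so no convergent
# y_n exists and T^2_S(x0, u) is empty.
for t in [1e-1, 1e-2, 1e-3]:
    print(t, min(np.linalg.norm((s - t * u) / t ** 2) for s in S))
```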

In the following example, we compute a radial derivative and a contingent derivative in an infinite-dimensional setting.

Example 1.1

Let \(X =\mathbb{R }\) and \(Y = l^2\), the Hilbert space of real sequences \(x=(x_i )_{i\in \mathbb{N }}\) with \(\sum _{i=1}^{\infty }x_{i}^2\) convergent. By \((e_i )_{i\in \mathbb{N }}\) we denote the standard unit basis of \(l^2\). Consider \(F : X\rightarrow 2^Y\) defined by

$$\begin{aligned} F(x) := \left\{ \begin{array}{ll} \left\{ \frac{1}{n}(-e_1 + 2e_n)\right\} ,&{} \text{ if }\; x=\frac{1}{n} , \\ \left\{ y=(y_i )_{i=1}^{\infty }\in l^2|\;y_1 \ge 0, y^2_1\ge \sum _{i=2}^{\infty }y_i^2\right\} ,&{} \text{ if }\;x = n,\\ \left\{ 0\right\} ,&{} \text{ otherwise }. \\ \end{array} \right. \end{aligned}$$

It is easy to see that \(C:=\{y=(y_i )_{i=1}^{\infty }\in l^2|\; y_1 \ge 0, y^2_1\ge \sum _{i=2}^{\infty }y_i^2\}\) is a closed, convex, and pointed cone. For \((x_0,y_0) = (0,0)\), we compute \(T^{r(1)}_{\mathrm{gr}F}(x_0,y_0)\). If \((u,v)\in T^{r(1)}_{\mathrm{gr}F}(x_0,y_0)\), by definition, there exist \(t_n >0\) and \((u_n,v_n)\rightarrow (u,v)\) such that, for all \(n\),

$$\begin{aligned} t_nv_n\in F(t_nu_n). \end{aligned}$$
(1)

If \(t_nu_n \not \in \{1/n, n\}\), then from (1) a direct computation gives \(v=0\). Now assume that \(t_nu_n \in \{1/n, n\}\). Consider the first case with \(t_nu_n = 1/n\). From (1), one has

$$\begin{aligned} t_nv_n = \frac{1}{n}(-e_1 + 2e_n). \end{aligned}$$
(2)

We have two subcases. If \(u= 0\), then \(u_n = 1/(nt_n)\rightarrow 0\), and hence (2) implies that \( v_n = u_n(-e_1 + 2e_n)\rightarrow 0, \) i.e., \(v=0\). In the second subcase with \(u>0\), we claim that there is no \(v\) such that \((u,v)\in T^{r(1)}_{\mathrm{gr}F}(x_0,y_0)\). Suppose there exists such a \((u,v)\) (with \(u>0\)). Then, from (2), the sequence \((nt_n)^{-1}(-e_1 + 2e_n) = u_n(-e_1 + 2e_n)\) converges. Hence, as \(u_ne_1\rightarrow ue_1\), the sequence \(d_n := 2u_ne_n\) is also convergent and then \(\{e_n\}\) converges as well, contradicting the fact that \(\{e_n\}\) is not a Cauchy sequence (being the standard unit basis of \(l^2\)).

Now consider the second case with \(t_nu_n = n\). From (1) we get \(t_nv_n\in C\). Thus, \(v\in C\) since \(C\) is a closed cone. It follows from \(u_n =n/t_n\) that \(u \ge 0\).

Consequently, we have proved that

$$\begin{aligned} T^{r(1)}_{\mathrm{gr}F}(x_0,y_0)\subset ([0, + \infty )\times C)\cup ((-\infty ,0)\times \{0\}). \end{aligned}$$

We now show the reverse inclusion. Let \((u,v)\in ([0, + \infty )\times C)\cup ((-\infty ,0)\times \{0\})\). We prove that there exist \(t_n>0, u_n\rightarrow u\), and \(v_n\rightarrow v\) such that (1) holds for all \(n\). Indeed, depending on \(u\) and \(v\), such \(t_n, u_n\), and \(v_n\) can be chosen as follows (these choices are verified numerically in the sketch after this example).

  • For \((0,v)\) such that \(v\in C\), we take \(t_n=n^2, u_n = 1/n, v_n \equiv v\).

  • For \((u,v)\in (0, + \infty )\times C\), we take \(t_n = n/u, u_n\equiv u, v_n \equiv v\).

  • For \((u,v)\in (- \infty , 0)\times \{0\}\), we take \(t_n = n/|u|, u_n \equiv u, v_n \equiv 0\).

So,

$$\begin{aligned} T^{r(1)}_{\mathrm{gr}F}(x_0,y_0) = ([0, + \infty )\times C)\cup ((-\infty ,0)\times \{0\}). \end{aligned}$$

Therefore,

$$\begin{aligned} D^1_RF(x_0,y_0)(u) = \left\{ \begin{array}{ll} C,&{}\text{ if }\; u\ge 0,\\ \{0\}, &{}\text{ if }\; u<0. \end{array}\right. \end{aligned}$$

In a similar way, with simpler calculations, we get \(T^{1}_{\mathrm{gr}F}(x_0,y_0) = \mathbb{R }\times \{0\}\), and hence \(D^1F(x_0,y_0)(u)=\{0\}\) for all \(u\in \mathbb{R }\).
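
The three choices above can also be verified mechanically. The Python sketch below is a sanity check only (it is not part of the proof): it works in a finite truncation of \(l^2\), takes \(v=e_1\in C\) and \(u=\pm 3\) as sample data, and confirms the inclusion (1) for finitely many \(n\); the truncation dimension, the helper names, and the sample data are choices made solely for this illustration.

```python
import numpy as np

DIM = 50                          # finite truncation of l^2 (illustration only)

def e(i):                         # standard unit vector e_i (1-indexed, as in the text)
    v = np.zeros(DIM); v[i - 1] = 1.0
    return v

def in_C(y, tol=1e-9):            # y in C  <=>  y_1 >= 0 and y_1^2 >= sum_{i>=2} y_i^2
    return y[0] >= -tol and y[0] ** 2 >= np.sum(y[1:] ** 2) - tol

def in_F(x, y, tol=1e-9):         # membership test  y in F(x)  for the map of Example 1.1
    if x > 0 and abs(round(1.0 / x) - 1.0 / x) < tol and round(1.0 / x) >= 1:
        n = int(round(1.0 / x))   # x = 1/n
        return np.linalg.norm(y - (1.0 / n) * (-e(1) + 2 * e(n))) < tol
    if x > 0 and abs(x - round(x)) < tol and round(x) >= 1:
        return in_C(y, tol)       # x = n
    return np.linalg.norm(y) < tol

v = e(1)                          # a sample element of C
checks = []
for n in range(2, 20):
    # (0, v) with v in C:  t_n = n^2, u_n = 1/n, v_n = v   (so t_n * u_n = n)
    checks.append(in_F(n ** 2 * (1.0 / n), n ** 2 * v))
    # (u, v) with u > 0:   t_n = n/u, u_n = u, v_n = v,    sample u = 3
    checks.append(in_F((n / 3.0) * 3.0, (n / 3.0) * v))
    # (u, 0) with u < 0:   t_n = n/|u|, u_n = u, v_n = 0,  sample u = -3
    checks.append(in_F((n / 3.0) * (-3.0), (n / 3.0) * np.zeros(DIM)))
print(all(checks))                # True: (1) holds for every sampled n
```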

Recognizing these advantages of radial sets and derivatives, and observing that, among the various optimality notions, [3] investigates only weak efficiency, we aim to establish both necessary and sufficient higher-order conditions in terms of radial sets and derivatives for various proper efficiency concepts in set-valued vector optimization. We choose the \(Q\)-minimality defined in [11] to unify these concepts. Thus, we first discuss optimality conditions for \(Q\)-minimality and then rephrase the results for the other kinds of solutions. This arrangement considerably shortens and simplifies the arguments. The paper is organized as follows. Section 2 contains preliminary facts needed in the paper. In Sect. 3, we discuss properties of radial sets and derivatives, especially chain rules and sum rules, including some simple but direct applications to optimality conditions for several particular optimization problems. Higher-order optimality conditions for the following general set-valued vector optimization problem are established in Sect. 4.

Let \(X, Y\) and \(Z\) be normed spaces, \(C\subset Y\) and \(D\subset Z\) pointed closed convex cones, not the entire space, \(S\subset X\) nonempty, and \(F: S\rightarrow 2^Y, G: S\rightarrow 2^Z\). Our problem is

$$\begin{aligned} \text{ min } F(x), \;\text{ s.t. }\; x \in S,\, G(x)\cap -D\ne \emptyset . \end{aligned}$$
(P)

We denote \(A := \{x \in S |\; G(x)\cap -D \ne \emptyset \}\) (the feasible set).

2 Preliminaries

Throughout the paper, if not otherwise specified, let \(X, Y\) and \(Z\) be normed spaces, \(C\subset Y\) and \(D\subset Z\) pointed closed convex cones, different from the entire space, and \(F: S\rightarrow 2^Y\), \(G: S\rightarrow 2^Z\) with \(S\subset X\) being nonempty. \(B_Y\) is the closed unit ball in \(Y\). For \(S\subset X, \text{ int }S, \text{ cl }S\) and \(\text{ bd }S\) denote its interior, closure and boundary, respectively (shortly, resp). \(X^*\) is the dual space of \(X\) and \(\langle .,.\rangle \) is the canonical pairing. \(\mathbb{N }\) is the set of the natural numbers and \(\mathbb{R }_{+}\) that of the nonnegative real numbers. For \(S\subset X, C\) above, and \(u\in X\), denote

$$\begin{aligned} \mathrm{cone}_+S :&= \{{\lambda } a\;|\; \lambda > 0,\; a\in S\},\; S(u) := \mathrm{cone} (S + u),\\ C^* :&= \{y^*\in Y^*\;|\;\langle y^*,c\rangle \ge 0,\; \forall c\in C\},\;{C^{+i}}:=\{y^*\in Y^*\;|\;\langle y^*,c\rangle > 0,\; \\&\forall c\in C\setminus \{0\}\}. \end{aligned}$$

A convex set \(B\subset Y\) is called a base for \(C\) if \(0 \not \in \mathrm{cl}B\) and \(C =\{tb|\; t \in \mathbb{R }_+,b \in B\}\). For \(H : X \rightarrow 2^Y\), the domain, range, graph and epigraph of \(H\) are defined as

$$\begin{aligned}&\text{ dom }H := \{x \in X| \;H(x) \ne \emptyset \},\; \text{ im }H:=\{y\in Y| \;y\in H(X)\},\\&\text{ gr }H := \{(x, y) \in X\times Y| \;y \in H(x)\},\; \text{ epi }H := \{ (x, y) \in X\times Y| \;y \in H(x)+ C\}. \end{aligned}$$

The profile mapping of \(H\) is \(H_+\) defined by \(H_+(x) := H(x)+C\). The Painlevé–Kuratowski (sequential) upper and lower limits are defined by

$$\begin{aligned} \mathop {\mathrm{Limsup}}\limits _{x\rightarrow x_0}H(x)&:= \{y\in Y\;|\;\exists x_n\in \mathrm{dom}H:\; x_n\rightarrow x_0,\exists y_n\in H(x_n),\; y_n\rightarrow y\},\\ \mathop {\mathrm{Liminf}}\limits _{x\rightarrow x_0}H(x)&:= \{y\in Y\;|\;\forall x_n\in \mathrm{dom}H:\; x_n\rightarrow x_0,\exists y_n\in H(x_n),\; y_n\rightarrow y\}. \end{aligned}$$

In this paper we are concerned with the following concepts of proper efficiency for problem (P) (with the feasible set \(A\)), which are stronger than the Pareto and weak efficiency.

Definition 2.1

  1. (i)

    \(a_0\in A\) is a strong (or ideal) efficient point of \(A\) (\(a_0 \in \mathrm{StrMin}A\)), if \(A-a_0 \subset C\).

  2. (ii)

    Supposing \(C^{+i}\ne \emptyset \), \(a_0\in A\) is a positive-proper efficient point of \(A\) (\(a_0 \in \mathrm{Pos}A\)), if there exists \(\varphi \in C^{+i}\) such that \(\varphi (a) \ge \varphi (a_0)\) for all \(a \in A\).

  3. (iii)

    \(a_0\in A\) is a Geoffrion-proper efficient point of \(A\) (\(a_0 \in \mathrm{Ge}A\)), if \(a_0 \in \mathrm{Min}A\) (i.e., \((A-a_0)\cap -C = \{0\}\)) and there exists a constant \(M > 0\) such that, whenever there is \(\lambda \in C^*\) with norm one and \(\lambda (a_0-a) > 0\) for some \(a \in A\), one can find \(\mu \in C^{*}\) with norm one such that \( \left\langle {\lambda ,a_0-a} \right\rangle \le M\left\langle {\mu ,a-a_0} \right\rangle . \)

  4. (iv)

    \(a_0\in A\) is a Henig-proper efficient point of \(A\) (\(a_0 \in \mathrm{He}A\)), if there exists a convex pointed cone \(K\) with \(C \setminus \{0\}\subset \mathrm{int}K\) such that \((A-a_0)\cap (-\mathrm{int}K)= \emptyset \).

  5. (v)

    Supposing \(C\) has a base \(B\), \(a_0\in A\) is a strong Henig-proper efficient point of \(A\) (\(a_0 \in \mathrm{StrHe}A\)), if there is \(\epsilon > 0\) such that \( \mathrm{cl cone}(A-a_0)\cap (-\mathrm{cl cone}(B +\epsilon B_Y)) = \{0\}. \)

  6. (vi)

    \(a_0\in A\) is a super efficient point of \(A\) (\(a_0 \in \mathrm{Su}A\)), if there is \(\rho > 0\) such that

    $$\begin{aligned} \text{ cl } \text{ cone }(A-a_0)\cap (B_Y-C) \subset \rho B_Y. \end{aligned}$$

Note that Geoffrion originally defined the properness notion in (iii) for \(\mathbb{R }^n\) with the ordering cone \(\mathbb{R }^{n}_+\). The above general definition of Geoffrion properness is taken from [14]. For relations among the above properness concepts and also other kinds of efficiency, see, e.g., [10, 11, 14, 15, 21]. Some of these relations are collected in the statement below as examples.

Proposition 2.1

  1. (i)

    \(\text{ StrMin }(A) \subset \text{ Min }(A), \text{ Pos }(A) \subset \text{ He }(A)\subset \text{ Min }(A)\).

  2. (ii)

    \(\text{ Su }(A) \subset \text{ Ge }(A)\subset \text{ Min }(A)\), \(\text{ Su }(A) \subset \text{ He }(A)\).

  3. (iii)

    \(\text{ Su }(A) \subset \text{ StrHe }(A)\), and if \(C\) has a bounded base then \(\text{ Su }(A)=\text{ StrHe }(A)\).

Let \(Q \subset Y\) be a nonempty open cone, different from \(Y\), unless otherwise specified.

Definition 2.2

([11]) We say that \(a_0\) is a \(Q\)-minimal point of \(A\) (\(a_0 \in \mathrm{Qmin}(A)\)) if

$$\begin{aligned} (A-a_0)\cap (-Q)= \emptyset . \end{aligned}$$

Recall that an open cone in \(Y\) is said to be a dilating cone (or a dilation) of \(C\), or dilating \(C\), if it contains \(C\setminus \{0\}\). Let \(B\) be, as before, a base of \(C\). Setting \( \delta :=\mathrm{inf}\{\Vert b\Vert \;|\; b \in B\} > 0, \) for \(\epsilon \in (0,\delta )\), we associate to \(C\) a pointed convex cone \( C_\epsilon (B) := \mathrm{cone}(B + \epsilon B_Y ).\) For \(\epsilon > 0\), we also associate to \(C\) another open cone \( C(\epsilon ):= \{y \in Y |\; d_C(y) < \epsilon d_{-C}(y)\}. \)

Each kind of efficient point in Definition 2.1 is in fact a \(Q\)-minimal point with \(Q\) appropriately chosen, as follows.

Proposition 2.2

([11])

  1. (i)

    \(a_0 \in \text{ StrMin }(A)\) if and only if \(a_0 \in \text{ Qmin }(A)\) with \(Q =Y \setminus (-C)\).

  2. (ii)

    Supposing \(C^{+i}\ne \emptyset , a_0 \in \mathrm{Pos}(A)\) if and only if \(a_0 \in \mathrm{Qmin}(A)\) with \(Q = \{y \in Y |\; \varphi (y) > 0\}, \varphi \) being some functional in \(C^{+i}\).

  3. (iii)

    \(a_0 \in \text{ Ge }(A)\) if and only if \(a_0 \in \text{ Qmin }(A)\) with \(Q = C(\epsilon )\) for some \(\epsilon > 0\).

  4. (iv)

    \(a_0 \in \mathrm{He}(A)\) if and only if \(a_0 \in \mathrm{Qmin}(A)\) with \(Q\) being pointed, convex, and dilating \(C\).

  5. (v)

    \(a_0 \in \mathrm{StrHe}(A)\) if and only if \(a_0 \in \mathrm{Qmin}(A)\) with \(Q = \mathrm{int}C_\epsilon (B)\), \(\epsilon \) satisfying \(0 <\epsilon < \delta \).

  6. (vi)

    Supposing \(C\) has a bounded base, \(a_0 \in \mathrm{Su}(A)\) if and only if \(a_0 \in \mathrm{Qmin}(A)\) with \(Q = \mathrm{int}C_\epsilon (B)\), \(\epsilon \) satisfying \(0 < \epsilon < \delta \).

Proposition 2.3

Suppose that \(Q\) is any open cone given in Proposition 2.2. Then, \(Q + C \subset Q.\)

Proof

It is easy to prove the assertion, when \(Q=Y\setminus (-C), Q = \{y \in Y | \;\varphi (y) > 0\}\) for \(\varphi \in C^{+i}\), or \(Q\) is a pointed open convex cone dilating \(C\).

Now let \(Q = C(\epsilon )\) for some \(\epsilon > 0, y\in Q\) and \(c\in C\). We show that \(y+c \in Q\). It is easy to see that \(d_C(y+c) \le d_{C}(y)\) and \(d_{-C}(y)\le d_{-C}(y+c)\). Because \(y\in Q\), we have \(d_C(y) < \epsilon d_{-C}(y)\). Thus, \(d_C(y+c) < \epsilon d_{-C}(y+c)\) and hence \(y+c \in Q\).

For \(Q=\mathrm{int}C_\epsilon (B)\), it is easy to see that \(C\setminus \{0\}\subset Q\) for any \(\epsilon \) satisfying \(0 <\epsilon < \delta \). So, \(Q + C \subset Q + (Q\cup \{0\}) \subset Q\). \(\square \)
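
The distance-function argument for \(Q=C(\epsilon)\) can also be checked numerically. The sketch below is only an illustration under the assumptions \(Y=\mathbb{R}^2\), \(C=\mathbb{R}^2_+\), \(\epsilon=0.5\), and the Euclidean norm; it samples \(y\in C(\epsilon)\) and \(c\in C\) and confirms the inequalities \(d_C(y+c)\le d_C(y)\) and \(d_{-C}(y)\le d_{-C}(y+c)\) used above, and hence \(y+c\in C(\epsilon)\).

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5                        # a sample value of epsilon (illustration only)

def d_C(y):                      # distance from y to C = R^2_+ : norm of the negative part
    return np.linalg.norm(np.minimum(y, 0.0))

def d_negC(y):                   # distance from y to -C = R^2_- : norm of the positive part
    return np.linalg.norm(np.maximum(y, 0.0))

def in_Q(y):                     # Q = C(eps) = {y : d_C(y) < eps * d_{-C}(y)}
    return d_C(y) < eps * d_negC(y)

ok = True
for _ in range(10000):
    y = rng.uniform(-1.0, 1.0, size=2)
    c = rng.uniform(0.0, 1.0, size=2)          # c in C
    if in_Q(y):
        ok &= d_C(y + c) <= d_C(y) + 1e-12     # first inequality of the proof
        ok &= d_negC(y) <= d_negC(y + c) + 1e-12
        ok &= bool(in_Q(y + c))                # hence y + c lies in Q
print(ok)                        # True
```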

3 Radial sets and radial derivatives

We develop now calculus rules and properties of radial sets and radial derivatives.

Definition 3.1

Let \(x_0\in S\subset X, F: X\rightarrow 2^Y, (x_0,y_0)\in \mathrm{gr} F\), and \((u_i,v_i)\in X\times Y\) for \(i=1,\ldots ,m-1\).

  1. (i)

    The \(m\)th-order lower radial set of \(S\) at \(x_0\) wrt \(u_1,\ldots ,u_{m-1}\) is

    $$\begin{aligned} T^{r\flat (m)}_S(x_0,u_1,\ldots ,u_{m-1}):&= \left\{ y\in X|\;\forall t_n >0, \exists y_n\rightarrow y, \forall n,\right. \\&\left. x_0+t_nu_1+\cdots +t_n^{m-1}u_{m-1}+t_n^my_n\in S\right\} . \end{aligned}$$
  2. (ii)

    If \(T^{r(m)}_{S}(x_0,u_1,\ldots ,u_{m-1})\) = \(T^{r\flat (m)}_{S}(x_0,u_1,\ldots ,u_{m-1})\), then this set is called an \(m\)th-order proto-radial set of \(S\) at \(x_0\) wrt \(u_1, \ldots , u_{m-1}\).

  3. (iii)

    If \(D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(x)=\{v\in Y|\; \forall t_n > 0, \forall x_n\rightarrow x, \exists v_n\rightarrow v,\forall n, y_0+t_nv_1+\cdots +t_n^{m-1}v_{m-1}+t_n^mv_n\in F(x_0+t_nu_1+\cdots +t_n^{m-1}u_{m-1}+t_n^mx_n)\}\) for all \(x\in \mathrm{dom} D_R^{m}F(x_0,y_0, u_1,v_1,\ldots ,u_{m-1},v_{m-1})\), then this derivative is called an \(m\)th-order radial semiderivative of \(F\) at \((x_0,y_0)\) wrt \((u_1,v_1),\ldots , (u_{m-1},v_{m-1})\).

Note that, following Definition 1.1 strictly, we would define \(D_R^{m}F(x_0,y_0,u_1,v_1,\ldots , u_{m-1},v_{m-1})\) to be an \(m\)th-order radial semiderivative if

$$\begin{aligned} \mathrm{gr}D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})=T^{r\flat (m)}_{\mathrm{gr}F}(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1}). \end{aligned}$$

But, this last condition is equivalent to

$$\begin{aligned} D_R^{m}F(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(x)&= \left\{ v\in Y|\; \forall t_n > 0, \exists x_n\rightarrow x, \exists v_n\rightarrow v,\forall n,\right. \\&y_0+t_nv_1+\cdots +t_n^{m-1}v_{m-1}+t_n^mv_n\\&\left. \in F(x_0\!+\!t_nu_1\!+\!\cdots \!+\!t_n^{m-1}u_{m-1}+t_n^mx_n)\right\} , \end{aligned}$$

which is weaker than the condition in (iii). (This weaker condition was used to define proto-contingent derivatives in many papers in the literature.) Definition 3.1 (iii) is restrictive. However, it may be satisfied, as shown in the following example.

Example 3.1

Let \(X=Y=\mathbb{R }, F : X\rightarrow 2^Y\) be defined by \(F(x)=\{y\in Y|\; y \ge x\}\), and \((x_0,y_0)=(0,0)\). Then, direct calculations yield \( T^{r(1)}_{F(X)}(y_0)=T^{r\flat (1)}_{F(X)}(y_0)=\mathbb{R } \) and

$$\begin{aligned} D^1_{R}F(x_0,y_0)(x)&= \{v\in Y|\; \forall t_n >0, \forall x_n \rightarrow x, \exists v_n\rightarrow v, \forall n, y_0 + t_nv_n\in F(x_0 + t_nx_n)\}\\&= \{v\in Y|\; v\ge x\}. \end{aligned}$$

So, \( T^{r(1)}_{F(X)}(y_0)\) is a first-order proto-radial set of \(F(X)\) at \(y_0\), and \( D^1_{R}F(x_0,y_0)\) is a first-order radial semiderivative of \(F\) at \((x_0,y_0)\).

For second-order examples, with any \(v_1\in \mathbb{R }\), direct computations show that \(T^{r(2)}_{F(X)}(y_0,v_1)=T^{r\flat (2)}_{F(X)}(y_0,v_1)=\mathbb{R }\). Thus, \(T^{r(2)}_{F(X)}(y_0,v_1)\) is a second-order proto-radial set of \(F(X)\) at \(y_0\) wrt \(v_1\in \mathbb{R }\).

Passing to derivatives, let \((u_1,v_1)=(1,1)\). Direct computations indicate that both \(D^2_RF(x_0,y_0,u_1,v_1)(x)\) and the set on the right-hand side of the equality in Definition 3.1 (iii) are equal to \(\{v\in Y|\; v\ge x\}\). Thus, \(F\) has a second-order radial semiderivative at \((x_0,y_0)\) wrt \((1,1)\). However, for \((u_1,v_1)=(0,-1)\), the derivatives are “worse”: \(D^2_RF(x_0,y_0,u_1,v_1)(x)=\{v\in Y|\; v\ge x\}\) and the other mentioned set is empty. So, \(F\) does not have a second-order radial semiderivative at \((x_0,y_0)\) wrt \((0,-1)\).
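
The first-order assertion of this example can be checked directly from Definition 3.1 (iii): for every \(t>0\) and every \(x_n\rightarrow x\), the choice \(v_n=\max (v,x_n)\) satisfies \(y_0+tv_n\in F(x_0+tx_n)\) and \(v_n\rightarrow v\) whenever \(v\ge x\). The Python sketch below is only a numerical illustration of this observation; the sampled values of \(t\), \(x\), and \(v\) are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
x0, y0 = 0.0, 0.0

def in_F(x, y):                  # F(x) = {y : y >= x}
    return y >= x - 1e-12

ok = True
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0)
    v = rng.uniform(x, x + 2.0)              # any v >= x, i.e. v in D^1_R F(x0, y0)(x)
    t = rng.uniform(1e-3, 10.0)              # an arbitrary term of a sequence t_n > 0
    x_n = x + rng.uniform(-0.1, 0.1)         # an arbitrary term of a sequence x_n -> x
    v_n = max(v, x_n)                        # the recipe; note |v_n - v| <= |x_n - x|
    ok &= in_F(x0 + t * x_n, y0 + t * v_n)
print(ok)                        # True
```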

We will develop some major calculus rules for the above objects. But, first recall several results of [3] on this theme.

Proposition 3.1

(sum rule) ([3]) Let \(F_i: X\rightarrow 2^Y, x_0\in \Omega :=\mathrm{dom}F_1\cap \mathrm{dom}F_2 \), and \(y_i\in F_i(x_0)\) for \(i=1,2\).

  1. (i)

    If either \(F_1(\Omega )\) or \(F_2(\Omega )\) has an \(m\)th-order proto-radial set at \(y_1\) wrt \(v_{1,1},\ldots ,v_{1,m-1}\) or at \(y_2\) wrt \(v_{2,1},\ldots ,v_{2,m-1}\), respectively, then

    $$\begin{aligned}&T^{r(m)}_{F_1(\Omega )}(y_1,v_{1,1},\ldots ,v_{1,m-1})+T^{r(m)}_{F_2(\Omega )}(y_2,v_{2,1},\ldots ,v_{2,m-1})\\&\quad \subset T^{r(m)}_{(F_1+F_2)(\Omega )}(y_1+y_2,v_{1,1}+v_{2,1},\ldots ,v_{1,m-1}+v_{2,m-1}). \end{aligned}$$
  2. (ii)

    If \(F_2\) has an \(m\)th-order radial semiderivative at \((x_0,y_2)\) wrt \((u_1,v_{2,1}), \ldots , (u_{m-1},v_{2,m-1})\) and \(\mathrm{dom} F_1 \subset \mathrm{dom} F_2\), then, for any \(u\in X\),

    $$\begin{aligned}&D_R^{m}F_1(x_0,y_1,u_1,v_{1,1},\ldots ,u_{m-1},v_{1,m-1})(u) +D_R^{m}F_2(x_0,y_2,u_1,v_{2,1},\ldots ,u_{m-1},v_{2,m-1})(u)\\&\quad \subset D_R^{m}(F_1+F_2)(x_0,y_1+y_2,u_1,v_{1,1}+v_{2,1},\ldots ,u_{m-1},v_{1,m-1}+v_{2,m-1})(u). \end{aligned}$$

Proposition 3.2

(chain rule) ([3]) Let \(G: X\rightarrow 2^Y, F: Y\rightarrow 2^Z\) with \(\mathrm{Im}G\subset \mathrm{dom}F, (x_0,y_0)\in \mathrm{gr}G, (y_0,z_0)\in \mathrm{gr}F\), and \((u_1,v_1,w_1),\ldots ,(u_{m-1},v_{m-1},w_{m-1})\in X\times Y\times Z\). Suppose that \(F\) has an \(m\)th-order radial semiderivative at \((y_0,z_0) \text{ wrt } (v_1,w_1), \ldots , (v_{m-1},w_{m-1})\). Then,

  1. (i)

    \(D_R^{m}F(y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})[T^{r(m)}_{G(X)}(y_0,v_1,\ldots ,v_{m-1})]\subset T^{r(m)}_{(F\circ G)(X)}(z_0,w_1, \ldots ,w_{m-1});\)

  2. (ii)

    \(D_R^{m}F(y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})[D_R^{m}G(x_0,y_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(X)] \subset T^{r(m)}_{(F\circ G)(X)}(z_0,w_1,\ldots ,w_{m-1}).\)

These rules will be applied in the sequel since they are simple (at least their formulations are). However, being a proto-radial set or a radial semiderivative is a restrictive condition. Hence, we now develop another sum rule and another chain rule with a view to wider applicability. We express a sum \(M+N\) of two multimaps \(M,N: X\rightarrow 2^Y\) as a composition as follows. Define \(G:X\rightarrow 2^{X\times Y}\) and \(F: X\times Y\rightarrow 2^Y\) by, for the identity map \(I\) on \(X\),

$$\begin{aligned} G=I\times M\quad \quad \mathrm{and}\quad \quad F(x,y)=y+N(x). \end{aligned}$$
(3)

Then, clearly \(M+N=F\circ G\); indeed, \((F\circ G)(x)=\bigcup _{y\in M(x)}(y+N(x))=M(x)+N(x)\). So, we will apply a chain rule. The chain rule given in Proposition 3.2, though simple and relatively direct, is not suitable for dealing with this composition \(F\circ G\), since the intermediate space (\(Y\) there and \(X\times Y\) here) plays little role in it. We develop another chain rule as follows. Consider general multimaps \(G:X\rightarrow 2^Y\) and \(F:Y\rightarrow 2^Z\). The so-called resultant multimap \(C:X\times Z\rightarrow 2^Y\) is defined by \(C(x,z):=G(x)\cap F^{-1}(z).\) Then, \(\mathrm{dom}C=\mathrm{gr}(F\circ G).\)

We can obtain a general chain rule that is suitable for dealing with a sum expressed as a composition as above and requires no assumption on radial semiderivatives, as follows.

Proposition 3.3

Let \((x_0, z_0) \in \mathrm{gr}(F \circ G), y_0 \in C(x_0, z_0)\), and \((u_i,v_i,w_i)\in X\times Y\times Z\) for \(i=1,\ldots ,m-1\).

  1. (i)

    If, for all \(w\in Z\), one has

    $$\begin{aligned} T^{r(m)}_{G(X)}(y_0,v_1,\ldots ,v_{m-1})&\cap D^m_RF^{-1}(z_0,y_0,w_1, v_1,\ldots ,w_{m-1},v_{m-1})(w)\nonumber \\&\subset {D}^{m}_RC_{X}(z_0, y_0,w_1,v_1,\cdots , w_{m-1},v_{m-1})(w), \end{aligned}$$
    (4)

    where \(C_X : Z\rightarrow 2^Y\) is defined by \(C_X(z) : = C(X,z)\), then

    $$\begin{aligned}&D^m_RF(y_0, z_0, v_1, w_1\ldots ,v_{m-1},w_{m-1})[ T^{r(m)}_{G(X)}(y_0,v_1,\ldots ,v_{m-1})]\\&\quad \subset T^{r(m)}_{(F \circ G)(X)}(z_0,w_1,\ldots ,w_{m-1}). \end{aligned}$$
  2. (ii)

    If, for all \((u,w)\in X\times Z\), one has

    $$\begin{aligned}&D^m_RG(x_0,y_0,u_1, v_1,\ldots ,u_{m-1},v_{m-1})(u)\cap D^m_RF^{-1} (z_0,y_0,w_1, v_1,\ldots ,w_{m-1},v_{m-1})(w)\nonumber \\&\quad \subset D^m_RC((x_0,z_0),y_0,(u_1,w_{1}),v_1,\ldots ,(u_{m-1},w_{m-1}),v_{m-1})(u,w), \end{aligned}$$
    (5)

    then

    $$\begin{aligned}&D^m_RF(y_0, z_0, v_1, w_1\ldots ,v_{m-1},w_{m-1})[ D^m_RG(x_0,y_0,u_1, v_1,\ldots ,u_{m-1},v_{m-1})(u)]\\&\quad \subset D^m_R(F \circ G)(x_0,z_0,u_1,w_1,\ldots ,u_{m-1},w_{m-1})(u). \end{aligned}$$

Proof

By the similarity, we prove only (i). Let \(w\in D^m_RF(y_0, z_0, v_1, w_1,\ldots ,v_{m-1},w_{m-1}) [ T^{r(m)}_{G(X)}(y_0,v_1, \ldots ,v_{m-1})]\), i.e., there exists some \({y}\in T^{r(m)}_{G(X)}(y_0,v_1,\ldots ,v_{m-1})\) such that \({y}\in D^m_RF^{-1}(z_0,y_0,w_1, v_1,\ldots , w_{m-1},v_{m-1})(w)\). Then, (4) ensures that \({y}\in {D}^{m}_R{C_X}(z_0, y_0, w_1,v_1,\cdots , w_{m-1},v_{m-1})(w)\). This means the existence of \(t_n > 0\) and \(({y_n},w_n)\rightarrow ({y},w)\) such that, for all \(n \in \mathbb{N }\),

$$\begin{aligned} {y_0}+ t_nv_1 + \cdots + t_n^{m-1}v_{m-1} + t_n^m {y_n}\in C(X, z_0 + t_nw_1 + \cdots + t_n^{m-1}w_{m-1} + t_n^m w_n). \end{aligned}$$

From the definition of \(C\), we get \( z_0 + t_nw_1 + \cdots + t_n^{m-1}w_{m-1} + t_n^m w_n\in (F \circ G)(X). \) So, \(w \in T^{r(m)}_{(F \circ G)(X)}(z_0,w_1,\ldots ,w_{m-1})\) and we are done. \(\square \)

In the sequel, we will apply both chain rules, given in Propositions 3.2 and 3.3, in almost every result. But first, we show that assumption (5) in Proposition 3.3 is essential (the situation for (4) is similar) by the following example.

Example 3.2

Let \(X = Y = Z = \mathbb{R }, G: X\rightarrow 2^Y\) and \(F: Y\rightarrow 2^Z\) be defined by

$$\begin{aligned} G(x)=\left\{ \begin{array}{ll} \{1,2\}, &{}\text{ if }\; x=1,\\ \{0\}, &{}\text{ if }\; x=0, \end{array}\right. \;\; F(y)=\left\{ \begin{array}{ll} \{0\}, &{}\text{ if }\; y=1,\\ \{1\}, &{}\text{ if }\; y=0. \end{array}\right. \end{aligned}$$

Then,

$$\begin{aligned} (F\circ G)(x)&= \left\{ \begin{array}{ll} \{0\}, &{}\text{ if }\; x=1,\\ \{1\}, &{}\text{ if }\; x=0, \end{array}\right. \;\; F^{-1}(z)=\left\{ \begin{array}{ll} \{0\}, &{}\text{ if }\; z=1,\\ \{1\}, &{}\text{ if }\; z=0, \end{array}\right. \\ C(x,z)&= G(x)\cap F^{-1}(z)=\left\{ \begin{array}{ll} \{1\}, &{}\text{ if }\; (x,z)=(1,0),\\ \{0\}, &{}\text{ if }\; (x,z)=(0,1). \end{array}\right. \end{aligned}$$

Let \((x_0,z_0)=(0,1)\) and \(y_0=0\in C(x_0,z_0)\). Direct calculations give

$$\begin{aligned}&D^1_RG(x_0,y_0)(1/2)=\{1/2,1\}, \;\, D^1_RF(y_0,z_0)(1)=\{-1\},\\&D^1_RF(y_0,z_0)(1/2)=\{-1/2\},\;\, D^1_R(F\circ G)(x_0,z_0)(1/2)=\{-1/2\}. \end{aligned}$$

So, the conclusion of Proposition 3.3 (ii) does not hold. The reason is the violation of (5): for \((u,w)=(1/2,-1)\), \(D^1_R(F^{-1})(z_0,y_0)(-1)=\{1\}\), \(D^1_RC((x_0,z_0),y_0)(u,w)=\emptyset \), and hence

$$\begin{aligned} D^1_RG(x_0,y_0)(u)\cap D^1_R(F^{-1})(z_0,y_0)(w)\not \subset D^1_RC((x_0,z_0),y_0)(u,w). \end{aligned}$$
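
Since all graphs in this example are finite, each first-order radial set is simply the closure of the cone generated by the shifted graph, \(T^{r(1)}_S(s_0)=\mathrm{cl\,cone}_+(S-s_0)\) (this follows directly from Definition 1.1 with \(m=1\)), so the derivative values listed above can be checked mechanically. The following Python sketch is an illustration only; the helper `radial_derivative` is ad hoc for finite graphs with one-dimensional values. It reproduces the values above and the failure of (5) at \((u,w)=(1/2,-1)\).

```python
import numpy as np

def radial_derivative(graph, base, u, tol=1e-9):
    """First-order radial derivative of a finite-graph multimap at `base`,
    evaluated at the direction u: {v : (u, v) in cl cone_+(graph - base)}."""
    base_in, base_out = np.array(base[0], float), float(base[1])
    u = np.array(u, float)
    vals = set()
    if np.linalg.norm(u) < tol:
        vals.add(0.0)                        # the origin always lies in the closure
    for p_in, p_out in graph:
        q_in = np.array(p_in, float) - base_in
        q_out = float(p_out) - base_out
        if np.linalg.norm(q_in) < tol:
            continue                         # this is the base point itself
        lam = np.dot(u, q_in) / np.dot(q_in, q_in)   # candidate scaling lam > 0
        if lam > tol and np.linalg.norm(lam * q_in - u) < tol:
            vals.add(round(float(lam * q_out), 9))
    return vals

grG  = [((1,), 1), ((1,), 2), ((0,), 0)]     # gr G
grF  = [((1,), 0), ((0,), 1)]                # gr F
grFG = [((1,), 0), ((0,), 1)]                # gr (F o G)
grFi = [((0,), 1), ((1,), 0)]                # gr F^{-1}
grC  = [((1, 0), 1), ((0, 1), 0)]            # gr C,  C(x, z) = G(x) ∩ F^{-1}(z)

print(radial_derivative(grG,  ((0,), 0), (0.5,)))         # -> {0.5, 1.0}
print(radial_derivative(grF,  ((0,), 1), (1.0,)))         # -> {-1.0}
print(radial_derivative(grF,  ((0,), 1), (0.5,)))         # -> {-0.5}
print(radial_derivative(grFG, ((0,), 1), (0.5,)))         # -> {-0.5}
print(radial_derivative(grFi, ((1,), 0), (-1.0,)))        # -> {1.0}
print(radial_derivative(grC,  ((0, 1), 0), (0.5, -1.0)))  # -> set(): (5) is violated
```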

Now we apply the preceding composition rule to establish a sum rule for \(M, N : X\rightarrow 2^Y\). For this purpose, we use \(G : X\rightarrow 2^{X\times Y}\) and \(F: X\times Y \rightarrow 2^Y\) defined in (3). For \((x,z)\in X\times Y\), we set \(H(x,z) : = M(x)\cap (z -N(x)).\) Then, the resultant multimap \(C:X\times Y\rightarrow 2^{X\times Y}\) associated to these \(F\) and \(G\) is \( C(x,z) = \{x\}\times H(x,z). \)

Proposition 3.4

Let \((x_0, z_0) \in \mathrm{gr}(M + N), y_0 \in H(x_0, z_0)\) and \((u_i,v_i,w_i)\in X\times Y\times Y\).

  1. (i)

    If, for all \(w\in Y\), one has

    $$\begin{aligned}&T^{r(m)}_{M(X)}(y_0,v_1,\ldots ,v_{m-1})\cap [w - T^{r(m)}_{N(X)}(z_0-y_0,w_1,\ldots ,w_{m-1})]\nonumber \\&\quad \subset {D}^{m}_R{H_X}(z_0, y_0, v_1+w_1,v_1,\cdots , v_{m-1}+w_{m-1},v_{m-1})(w), \end{aligned}$$
    (6)

    where \(H_X : Y\rightarrow 2^Y\) is defined by \(H_X(y) : = H(X,y)\), then

    $$\begin{aligned}&T^{r(m)}_{M(X)}(y_0,v_1,\ldots ,v_{m-1}) + T^{r(m)}_{N(X)}(z_0-y_0,w_1,\ldots ,w_{m-1})\\&\quad \subset T^{r(m)}_{(M+N)(X)}(z_0,v_1+w_1,\cdots , v_{m-1}+w_{m-1}). \end{aligned}$$
  2. (ii)

    If, for all \((u,w)\in X\times Y\), one has

    $$\begin{aligned}&D^m_RM(x_0,y_0,u_1, v_1,\ldots ,u_{m-1},v_{m-1})(u) \cap [w - D^m_RN(x_0,z_0-y_0,u_1, w_1,\ldots ,u_{m-1},w_{m-1})(u)]\nonumber \\&\quad \subset D^m_RH((x_0,z_0),y_0,(u_1,v_1+w_{1}),v_1,\ldots ,(u_{m-1},v_{m-1}+w_{m-1}),v_{m-1})(u,w),\\ \nonumber \end{aligned}$$
    (7)

    then

    $$\begin{aligned}&D^m_RM(x_0,y_0,u_1, v_1,\ldots ,u_{m-1},v_{m-1})(u) + D^m_RN(x_0,z_0-y_0,u_1, w_1,\ldots ,u_{m-1},w_{m-1})(u)\\&\quad \subset D^m_R(M+N)(x_0,z_0,u_1,v_1+w_1,\ldots ,u_{m-1},v_{m-1}+w_{m-1})(u). \end{aligned}$$

Proof

By the similarity, we prove only (ii). Let \(w\in D^m_RM(x_0,y_0,u_1, v_1,\ldots , u_{m-1},v_{m-1}) (u) + D^m_RN(x_0,z_0-y_0,u_1, w_1,\ldots ,u_{m-1},w_{m-1})(u)\), i.e., there exists \({y}\in D^m_RM(x_0,y_0,u_1, v_1,\ldots ,u_{m-1}, v_{m-1})(u)\) such that \({y}\in w- D^m_RN(x_0,z_0-y_0,u_1, w_1, \ldots ,u_{m-1},w_{m-1})(u)\). Then, (7) ensures that \({y}\in D^m_RH((x_0,z_0),y_0,(u_1,v_1+w_{1}),v_1, \ldots ,(u_{m-1},v_{m-1}+w_{m-1}),v_{m-1})(u,w)\). This means the existence of \(t_n > 0\) and \((u_n,{y_n},w_n)\rightarrow (u,{y},w)\) such that, for all \(n \in \mathbb{N }\),

$$\begin{aligned}&{y_0}+ t_nv_1 + \cdots + t_n^{m-1}v_{m-1} + t_n^m {y_n} \in H\left( x_0 + t_nu_1 + \cdots + t_n^{m-1}u_{m-1} + t_n^m u_n, z_0\right. \\&\quad \left. +t_n(v_1+w_1) + \cdots + t_n^{m-1}(v_{m-1}+w_{m-1}) + t_n^m w_n\right) . \end{aligned}$$

From the definition of \(H\), we get

$$\begin{aligned}&z_0 + t_n(v_1+w_1) + \cdots + t_n^{m-1}(v_{m-1}+w_{m-1}) + t_n^m w_n\\&\quad \in (M+N)(x_0 + t_nu_1 + \cdots + t_n^{m-1}u_{m-1} + t_n^m u_n). \end{aligned}$$

So, \(w \in D^m_R(M+N)(x_0,z_0,u_1,v_1+w_1,\ldots ,u_{m-1},v_{m-1}+w_{m-1})(u)\) and we are done. \(\square \)

The following example shows that assumptions (6) and (7) cannot be dispensed with, and that they are not difficult to check.

Example 3.3

Let \(X=Y={\mathbb{R }}\) and \(M, N: X\rightarrow 2^Y\) be given by

$$\begin{aligned} M(x)=\left\{ \begin{array}{l} \{1\},\; \mathrm{if}\; x = \frac{1}{n}, \;n \in \mathbb{N },\\ \{0\},\;\mathrm{if}\;x = 0,\\ \end{array}\right. \;\; N(x)=\left\{ \begin{array}{l} \{0\},\; \mathrm{if}\; x=\frac{1}{n}, \;n \in \mathbb{N },\\ \{1\},\;\mathrm{if}\;x = 0.\\ \end{array}\right. \end{aligned}$$

Then,

$$\begin{aligned} H(x,z)&= M(x)\cap (z-N(x))=\left\{ \begin{array}{l} \{0\},\; \mathrm{if}\; (x,z) = (0,1),\\ \{1\},\;\mathrm{if}\;(x,z) = (\frac{1}{n},1),\;n \in \mathbb{N },\\ \emptyset ,\; \;\;\;\mathrm{otherwise},\\ \end{array}\right. \\ (M+N)(x)&= \left\{ \begin{array}{l} \{1\},\; \mathrm{if}\; x = \frac{1}{n},\;n \in \mathbb{N },\\ \{1\},\;\mathrm{if}\;x = 0.\\ \end{array}\right. \end{aligned}$$

Choose \(x_0=0, z_0=1, y_0=0\in H(x_0,z_0)\), and \(u=w=0\). Then, one can easily show

$$\begin{aligned} T_{M(X)}^{r(1)}(y_0)&= \mathbb{R }_+,\; T_{N(X)}^{r(1)}(z_0-y_0)=\mathbb{R }_-,\; D_R^1{H_X}(z_0,y_0)(w)=\{0\},\\ {D}_R^1M(x_0,y_0)(u)&= \mathbb{R }_+,\; {D}_R^1N(x_0,z_0-y_0)(u)=\mathbb{R }_-,\; {D}_R^1H((x_0,z_0),y_0)(u,w)=\{0\}. \end{aligned}$$

Thus, (6) and (7) are violated:

$$\begin{aligned}&T_{M(X)}^{r(1)}(y_0)\cap [w-T_{N(X)}^{r(1)}(z_0-y_0)]\not \subset D_R^1{H_X}(z_0,y_0)(w),\\&{D}_R^1M(x_0,y_0)(u)\cap [w-{D}_R^1N(x_0,z_0-y_0)(u)]\not \subset {D}_R^1H((x_0,z_0),y_0)(u,w). \end{aligned}$$

Direct computations show that conclusions of Proposition 3.4 do not hold:

$$\begin{aligned}&T_{M(X)}^{r(1)}(y_0)+ T_{N(X)}^{r(1)}(z_0-y_0)\not \subset T_{(M+N)(X)}^{r(1)}(z_0),\\&{D}_R^1M(x_0,y_0)(u)+ {D}_R^1N(x_0,z_0-y_0)(u)\not \subset {D}_R^1(M+N)(x_0,z_0)(u), \end{aligned}$$

since \(T_{(M+N)(X)}^{r(1)}(z_0)=\{0\}\) and \(D_R^1(M+N)(x_0,z_0)(u)=\{0\}\).

To illustrate the results in Propositions 3.1–3.4, we use them directly to get necessary optimality conditions for some kinds of proper efficient solutions to several particular optimization problems. Let \(F:X\rightarrow 2^Y\) and \(G:X\rightarrow 2^X\). Consider

$$\begin{aligned} (\text{ P }_1) \qquad \text{ min } F(x^{\prime }) \text{ s.t. } x\in X \text{ and } x^{\prime }\in G(x). \end{aligned}$$

This problem can be restated as the unconstrained problem: \(\text{ min }(F\circ G)(x)\). Recall that \((x_0,y_0)\) is called a \(Q\)-minimal solution if \(y_0\in (F\circ G)(x_0)\) and \(((F\circ G)(X)-y_0)\cap (-Q)=\emptyset \).

Proposition 3.5

Assume for \((\text{ P }_1)\) that \(\mathrm{im}G\subset \mathrm{dom}F, (x_0,z_0)\in \mathrm{gr}G, (z_0,y_0)\in \mathrm{gr}F\), and \((u_1,v_1,w_1),\ldots ,(u_{m-1},v_{m-1},w_{m-1})\in X\times X\times (-C)\). Suppose further that an open cone \(Q\) satisfies \(Q+C\subset Q\) and \((x_0,y_0)\) is a \(Q\)-minimal solution of \((\text{ P }_1).\)

  1. (i)

    If either \(F_+\) has an \(m\)th-order radial semiderivative at \((z_0, y_0) \text{ wrt } (v_1,w_1), \ldots , (v_{m-1},w_{m-1})\) or (4) holds for \(F_+\) and \(G\), then

    $$\begin{aligned} D_R^{m}F_+(z_0,y_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})[T^{r(m)}_{G(X)}(z_0,v_1,\ldots ,v_{m-1})]\cap (-Q)=\emptyset . \end{aligned}$$
  2. (ii)

    If either \(F_+\) has an \(m\)th-order radial semiderivative at \((z_0, y_0) \text{ wrt } (v_1,w_1), \ldots , (v_{m-1},w_{m-1})\) or (5) holds for \(F_+\) and \(G\), then

    $$\begin{aligned}&D_R^{m}F_+(z_0,y_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})[D_R^{m}G(x_0,z_0,u_1,v_1,\ldots ,u_{m-1},v_{m-1})(X)]\\&\quad \cap (-Q)=\emptyset . \end{aligned}$$

Proof

We prove (i). By Proposition 4.1 below (which is proved independently of this proposition), \(T^{r(m)}_{(F\circ G)_+(X)}(y_0,w_1,\ldots ,w_{m-1}) \cap (-Q)=\emptyset .\) On the other hand, Proposition 3.2(i) under the first assumption, or Proposition 3.3(i) under the second, gives

$$\begin{aligned}&D_R^{m}F_+(z_0,y_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})[T^{r(m)}_{G(X)}(z_0,v_1,\ldots ,v_{m-1})]\\&\quad \subset T^{r(m)}_{(F\circ G)_+(X)}(y_0,w_1,\ldots ,w_{m-1}). \end{aligned}$$

\(\square \)

From Propositions 3.5 and 2.2, we obtain immediately the following result for \((\text{ P }_1).\)

Theorem 3.6

Assume for \((\text{ P }_1)\) that \(\mathrm{im}G\subset \mathrm{dom}F, (x_0,z_0)\in \mathrm{gr}G, (z_0,y_0)\in \mathrm{gr}F\), and \((u_1,v_1,w_1),\ldots ,(u_{m-1},v_{m-1},w_{m-1})\in X\times X\times (-C)\). Then, assertions (i) and (ii) in Proposition 3.5 hold in each of the following cases

  1. (i)

    \((x_0,y_0)\) is a strong efficient solution of \((P_1)\) and \(Q =Y \setminus -C\);

  2. (ii)

    \((x_0,y_0)\) is a positive-proper efficient solution of \((P_1)\) and \(Q = \{y|\; \varphi (y) > 0\}\) for some functional \(\varphi \in C^{+i}\);

  3. (iii)

    \((x_0,y_0)\) is a Geoffrion-proper efficient solution of \((P_1)\) and \(Q = C(\epsilon )\) for \(\epsilon > 0\);

  4. (iv)

    \((x_0,y_0)\) is a Henig-proper efficient solution of \((P_1)\) and \(Q =K\) for some pointed open convex cone \(K\) dilating \(C\);

  5. (v)

    \((x_0,y_0)\) is a strong Henig-proper efficient solution of \((P_1)\) and \(Q =\mathrm{int}C_\epsilon (B)\) for \(\epsilon \) satisfying \(0 < \epsilon < \delta \);

  6. (vi)

    \((x_0,y_0)\) is a super efficient solution of \((P_1)\) and \(Q =\mathrm{int}C_\epsilon (B)\) for \(\epsilon \in (0, \delta )\).

Our sum rule can be applied directly to the following problem

$$\begin{aligned} (\text{ P }_2)\qquad \min F(x)\; \mathrm{s. t.}\; \; g(x)\le 0, \end{aligned}$$

where \(X, Y\) are as for problem \((\text{ P }_1), F:X\rightarrow 2^Y\) and \(g:X\rightarrow Y\). Denote \(A:=\{x\in X|\, g(x)\le 0\}\) (the feasible set). Define \(G:X\rightarrow 2^Y\) by \(G(x):=\{0\}\) if \(x\in A\) and \(G(x):= \{g(x)\}\) otherwise. Consider the following unconstrained set-valued optimization problem, for arbitrary \(s>0\),

$$\begin{aligned} (\text{ P }_3) \qquad \min (F+sG)(x). \end{aligned}$$

In the particular case when \(Y =\mathbb{R }\) and \(F\) is single-valued, \((\text{ P }_3)\) is used to approximate \((\text{ P }_2)\) in penalty methods (see [25]). We will apply our calculus rules for radial sets to get the following necessary condition for a \(Q\)-minimal solution of \((\text{ P }_3).\)

Proposition 3.7

Let \(y_0\in F(x_0), x_0\in \Omega =\mathrm{dom}F \cap \mathrm{dom}G\), and \((u_1,v_{i,1}),\ldots ,(u_{m-1}, v_{i,m-1})\in X\times (-C)\) for \(i= 1,2\). Suppose that an open cone \(Q\) satisfies \(Q+C\subset Q\) and \((x_0,y_0)\) is a \(Q\)-minimal solution of \((\text{ P }_3)\). Then,

  1. (i)

    if either \(F_+(\Omega )\) has an \(m\)th-order proto-radial set at \(y_0\) wrt \(v_{1,1},\ldots ,v_{1,m-1}\) or \(sG_+(\Omega )\) has one at \(0\) wrt \(v_{2,1},\ldots ,v_{2,m-1}\), or (6) holds for \(F_+\) and \(sG_+\), then

    $$\begin{aligned}&(T^{r(m)}_{F_+(\Omega )}(y_0,v_{1,1},\ldots ,v_{1,m-1})\\&\quad \quad +sT^{r(m)}_{G_+(\Omega )}(0,v_{2,1}/s,\ldots ,v_{2,m-1}/s))\cap (-Q)=\emptyset ; \end{aligned}$$
  2. (ii)

    if either \(sG_+\) has an \(m\)th-order radial semiderivative at \((x_0,0) \text{ wrt } (u_1,v_{2,1}), \ldots , (u_{m-1},v_{2,m-1})\) and \(\mathrm{dom} F \subset \mathrm{dom} G\), or (7) holds for \(F_+\) and \(sG_+\), then, for any \(u\in X\),

    $$\begin{aligned}&(D_R^{m}F_+(x_0,y_0,u_1,v_{1,1},\ldots ,u_{m-1},v_{1,m-1})(u)\\&\quad \quad +sD_R^{m}G_+(x_0,0,u_1,v_{2,1}/s,\ldots ,u_{m-1},v_{2,m-1}/s)(u)) \cap (-Q)=\emptyset . \end{aligned}$$

Proof

We prove (i). It follows from Propositions 3.1(i) and 3.4(i) that

$$\begin{aligned}&T^{r(m)}_{F_+(\Omega )}(y_0,v_{1,1},\ldots ,v_{1,m-1})+T^{r(m)}_{sG_+(\Omega )}(0,v_{2,1},\ldots ,v_{2,m-1})\\&\quad \quad \subset T^{r(m)}_{(F+sG)_+(\Omega )}(y_0,v_{1,1}+v_{2,1},\ldots ,v_{1,m-1}+v_{2,m-1}). \end{aligned}$$

It is easy to see that

$$\begin{aligned}&T^{r(m)}_{sG_+(\Omega )}(0,v_{2,1},\ldots ,v_{2,m-1})=sT^{r(m)}_{G_+(\Omega )}(0,v_{2,1}/s,\ldots ,v_{2,m-1}/s),\\&\quad T^{r(m)}_{F_+(\Omega )}(y_0,v_{1,1},\ldots ,v_{1,m-1})+sT^{r(m)}_{G_+(\Omega )}(0,v_{2,1}/s,\ldots ,v_{2,m-1}/s)\\&\quad \subset T^{r(m)}_{(F+sG)_+(\Omega )}(y_0,v_{1,1}+v_{2,1},\ldots ,v_{1,m-1}+v_{2,m-1}). \end{aligned}$$

By Proposition 4.1 (which is proved independently of this proposition), one gets

$$\begin{aligned} T^{r(m)}_{(F+sG)_+(\Omega )}(y_0,v_{1,1}+v_{2,1},\ldots ,v_{1,m-1}+v_{2,m-1})\cap ({-Q})=\emptyset , \end{aligned}$$

and hence the proof is complete. \(\square \)

From Propositions 3.7 and 2.2, we obtain immediately the following statement for \((\text{ P }_3).\)

Theorem 3.8

Let \(y_0\in F(x_0), x_0\in \Omega =\mathrm{dom}F \cap \mathrm{dom}G\), and \((u_1,v_{i,1}),\ldots ,(u_{m-1},v_{i,m-1}) \in X\times (-C)\) for \(i= 1,2\). Then, assertions (i) and (ii) in Proposition 3.7 hold in each of the cases (i)–(vi) of Theorem 3.6, stated now for problem \((\text{ P }_3).\)

4 Optimality conditions

In this section, we establish both necessary and sufficient optimality conditions for the proper solutions of problem (P) mentioned in Sect. 2.

Proposition 4.1

Let \((x_0, y_0)\in \mathrm{gr}F\) be a \(Q\)-minimal solution of (P), \((u_i,v_i,w_i) \in X\times (-C)\times (-D), i = 1,\ldots , m-1\), and \(z_0 \in G(x_0)\cap -D\). Suppose that the open cone \(Q\) satisfies \(Q+C\subset Q\). Then, the following separations hold

$$\begin{aligned}&T^{r(m)}_{(F,G)_+(S)}(y_0, z_0, (v_1, w_1), \ldots , (v_{m-1}, w_{m-1})) \cap (-Q\times -\mathrm{int}D)=\emptyset ,\end{aligned}$$
(8)
$$\begin{aligned}&D_R^{m}(F,G)_+(x_0,y_0, z_0, (u_1,v_1, w_1),\ldots , (u_{m-1},v_{m-1}, w_{m-1}))(X)\nonumber \\&\quad \cap (-Q\times -\mathrm{int}D)=\emptyset . \end{aligned}$$
(9)

Proof

Suppose (8) does not hold. Then, there exists \((y, z)\) such that

$$\begin{aligned}&(y,z) \in T^{r(m)}_{(F,G)_+(S)}(y_0, z_0, (v_1, w_1), \ldots , (v_{m-1}, w_{m-1})),\end{aligned}$$
(10)
$$\begin{aligned}&(y,z) \in (-Q\times -\mathrm{int}D). \end{aligned}$$
(11)

It follows from (10) and the definition of \(m\)th-order radial sets that there exist sequences \(t_n > 0, x_n \in S\), and \((y_n,z_n) \in (F,G)(x_n) + C\times D\) such that

$$\begin{aligned} \frac{(y_n,z_n) - (y_0,z_0) - t_n(v_1,w_1) - \cdots - t_n^{m-1}(v_{m-1},w_{m-1})}{t_n^m}\rightarrow (y,z). \end{aligned}$$
(12)

From (11) and (12), one has, for large \(n\),

$$\begin{aligned} y_n -y_0- t_nv_1 - \cdots -t_n^{m-1}v_{m-1}\in -Q,\;\, z_n\in -\mathrm{int}D. \end{aligned}$$
(13)

As \(v_1,\ldots , v_{m-1}\in -C\) and \(Q+C\subset Q\), (13) implies that, for large \(n\),

$$\begin{aligned} y_n -y_0\in -Q. \end{aligned}$$
(14)

Because \(z_n \in G(x_n) + D\), there exist \(\overline{z_n}\in G(x_n)\) and \(d_n\in D\) such that \(z_n = \overline{z_n} + d_n \). (13) implies also that \(\overline{z_n} \in G(x_n) \cap (-D)\) for large \(n\), and then \(x_n \in A\). Because \(y_n \in F(x_n) + C\), there exist \(\overline{y_n}\in F(x_n)\) and \(c_n\in C\) such that \(y_n = \overline{y_n} + c_n \). Then, (14) implies that \(\overline{y_n} - y_0\in -Q\) for large \(n\). Therefore, \( \overline{y_n} - y_0\in (F(A) -y_0)\cap (-Q), \) which contradicts the \(Q\)-minimality of \((x_0,y_0)\). Thus, (8) holds. (9) follows from (8) and the evident fact that

$$\begin{aligned} D_{R}^m F(x_0,y_0,u_1,v_1,\ldots , u_{m-1}, v_{m-1})(X)\subset T_{F(X)}^{r(m)}(y_0,v_1,\ldots ,v_{m-1}). \end{aligned}$$ \(\square \)

Propositions 4.1 and 2.2 together yield the following result.

Theorem 4.2

Let \((x_0, y_0)\in \mathrm{gr}F, (u_i,v_i,w_i) \in X\times (-C)\times (-D), i = 1,\ldots , m-1\), and \(z_0 \in G(x_0)\cap -D\). Then, (8) and (9) hold in each of the cases (i)–(vi) of Theorem 3.6, stated now for problem (P).

The next example illustrates Theorem 4.2(v).

Example 4.1

Let \(X = Y=Z = \mathbb{R }, S=X, C =D= \mathbb{R }_+, G(x)\equiv \mathbb{R }\), and \( F(x) = \{{y \in Y |\; y \ge |x| }\}. \) Choose the base \(B =\{1\}\). Then, \(\delta =1\) and \(C_\epsilon (B)=\mathbb{R }_+\) for all \(\epsilon \in (0,\delta )\). Let \((x_0, y_0) = (0, 0)\) and \(z_0=0\). It is easy to see that \((x_0, y_0)\) is a strong Henig-proper efficient solution. For any \((v_1,w_1)\in -(C\times D)\), direct computations give

$$\begin{aligned} T^{r(1)}_{(F,G)_+(S)}(y_0,z_0) = \mathbb{R }_+\times \mathbb{R },\;\; T^{r(2)}_{(F,G)_+(S)}(y_0,z_0,v_1,w_1) = \mathbb{R }_+\times \mathbb{R }, \end{aligned}$$

and hence

$$\begin{aligned}&T^{r(1)}_{(F,G)_{+}(S)}(y_0,z_0)\cap -\mathrm{int}(C_{\epsilon }(B)\times D) = \emptyset ,\; T^{r(2)}_{(F,G)_{+}(S)}(y_0,z_0,v_1,w_1)\\&\quad \cap -\mathrm{int}(C_{\epsilon }(B)\times D) = \emptyset , \end{aligned}$$

i.e., the necessary optimality condition of Theorem 4.2 (v) holds.

To compare Theorem 4.2 with some known results, let us recall the following notions.

Definition 4.1

([16]) Let \(F : X \rightarrow 2^Y, (x_0,y_0) \in \mathrm{gr} F\), and \(v_1,\ldots ,v_{m-1}\in Y\).

  1. (i)

    The \(m\)th-order variational set of type 1 of \(F\) at \((x_0,y_0)\) wrt \(v_1,\ldots ,v_{m-1}\) is defined as

    $$\begin{aligned} V^m(F,x_0,y_0,v_1,\ldots ,v_{m-1} ):=\mathop {\mathrm{Limsup}}\limits _{x\stackrel{F}{\rightarrow } x_0,\; t\rightarrow 0^+}\frac{1}{t^m}(F(x)-y_0 -tv_1-\cdots -t^{m-1}v_{m-1}). \end{aligned}$$
  2. (ii)

    The \(m\)th-order variational set of type 2 of \(F\) at \((x_0,y_0)\) wrt \(v_1,\ldots ,v_{m-1}\) is defined as

    $$\begin{aligned}&W^m(F,x_0,y_0,v_1,\ldots ,v_{m-1}):=\mathop {\mathrm{Limsup}}\limits _{x\stackrel{F}{\rightarrow } x_0\; t\rightarrow 0^+}\frac{1}{t^{m-1}}\\&\quad (\mathrm{cone}_+(F(x)-y_0)-v_1-\cdots -t^{m-2}v_{m-1}). \end{aligned}$$

    Here \(x\stackrel{F}{\rightarrow } x_0\) means that \(x\in \mathrm{dom}F\) and \(x\rightarrow x_0\). In [1, 16, 17], these sets played the role of generalized derivatives as tools for obtaining optimality conditions in nonsmooth optimization. Some of their useful properties are

    $$\begin{aligned} V^m(F,x_0,y_0,v_1,\ldots ,v_{m-1} )&= \{v\in Y|\; \exists t_n\rightarrow 0^+, \exists x_n\mathop {\rightarrow }\limits ^{F} x_0, \exists v_n\rightarrow v, \forall n, y_0+t_nv_1\\&\quad +\cdots +t_n^{m-1}v_{m-1}+t_n^mv_n \in F(x_n)\},\\ W^m(F,x_0,y_0,v_1,\ldots ,v_{m-1} )&= \{v\in Y|\; \exists t_n\rightarrow 0^+,\exists x_n\mathop {\rightarrow }\limits ^{F} x_0, \exists v_n\rightarrow v, \forall n, v_1\\&\quad +\cdots +t_n^{m-2}v_{m-1}+t_n^{m-1}v_n \in \mathrm{cone}_+(F(x_n)-y_0)\}. \end{aligned}$$

Because the variational sets are bigger than the images of \(X\) under most kinds of generalized derivatives (see Remark 2.1 and Proposition 4.1 in [16]), optimality conditions (obtained by separating sets, as usual) in terms of these variational sets are strong. However, these sets are incomparable with radial sets, and hence the latter may be more advantageous in some cases, as the following example shows.

Example 4.2

Let \(X=Y=Z=\mathbb{R }, S=\{0,1\}, C=D=\mathbb{R }_+, (x_0,y_0)=(0,0), G\) and \(F\) be \(\displaystyle G(x) = F(x)=\left\{ \begin{array}{l} \{0\},\quad \;\;\,\mathrm{if}\;\; x=0,\\ \{-1\},\quad \mathrm{if}\;\; x=1,\\ \emptyset ,\quad \quad \;\;\; \text{ otherwise }. \end{array}\right. \)

Choose the base \(B =\{1\}\). Then, \(\delta =1\) and \(C_\epsilon (B)=\mathbb{R }_+\) for all \(\epsilon \in (0,\delta )\). Let \(z_0=0\). Let us try to use optimality conditions given in [17] in terms of variational sets to eliminate \((x_0,y_0)\) as a candidate for a strong Henig-proper efficient solution. We can compute directly that \( V^1((F,G)_+,x_0,y_0,z_0)=\mathbb{R }_+\times \mathbb{R }_+. \) Let \((v_1,w_1) \in V^1((F,G)_+,x_0,y_0,z_0)\cap -\mathrm{bd}(C_\epsilon (B)\times D(z_0))\),..., \((v_{m-1},w_{m-1}) \in V^{m-1}((F,G)_+,x_0,y_0,z_0,v_1,w_1,\ldots ,v_{m-2},w_{m-2})\cap -\mathrm{bd}(C_\epsilon (B)\times D(z_0))\), for \(m\ge 2\). It is easy to check that \((v_1,w_1)= \cdots =(v_{m-1},w_{m-1})=(0,0)\) and

$$\begin{aligned} V^{m}((F,G)_+,x_0,y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})=\mathbb{R }_+\times \mathbb{R }_+. \end{aligned}$$

Thus, for all \(m\ge 1\), we get

$$\begin{aligned} V^{m}((F,G)_+,x_0,y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})\cap -\mathrm{int}(C_\epsilon (B)\times D(z_0))=\emptyset . \end{aligned}$$

For variational sets of type 2, by direct calculating we get

$$\begin{aligned}&W^1((F,G)_+,x_0,y_0,z_0)=\mathbb{R }_+\times \mathbb{R }_+,\\&W^1((F,G)_+,x_0,y_0,z_0)\cap -\mathrm{int}(C_\epsilon (B)\times D)=\emptyset . \end{aligned}$$

Let \((v_1,w_1) \in W^1((F,G)_+,x_0,y_0,z_0)\cap -\mathrm{bd}(C_\epsilon (B)\times D), (v_2,w_2) \in W^2((F,G)_+, x_0,y_0,z_0,v_1,w_1)\cap -\mathrm{bd}(C_\epsilon (B)\times D(w_1))\),..., \((v_{m-1},w_{m-1}) \in W^{m-1}((F,G)_+,x_0,y_0, z_0,v_1,w_1,\ldots ,v_{m-2},w_{m-2})\cap -\mathrm{bd}(C_\epsilon (B)\times D(w_1)), m\ge 3\). We have \((v_1,w_1)= \cdots =(v_{m-1},w_{m-1})=(0,0)\) and

$$\begin{aligned} W^{m}((F,G)_+,x_0,y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})=\mathbb{R }_+\times \mathbb{R }_+. \end{aligned}$$

Thus, for all \(m\ge 2\), we get

$$\begin{aligned} W^{m}((F,G)_+,x_0,y_0,z_0,v_1,w_1,\ldots ,v_{m-1},w_{m-1})\cap -\mathrm{int}(C_\epsilon (B)\times D(w_1))=\emptyset . \end{aligned}$$

So, Theorems 3.4 and 3.5 in [17] say nothing about \((x_0,y_0)\) being strong Henig-proper efficient or not. By virtue of Remark 3.3 in [17] and Proposition 2.2 in [17], we can see that Theorems 4.1, 4.2, 5.1, and 5.2 in [19], Theorem 3.1, Proposition 3.1 in [9] and Theorem 1 in [20] cannot be used to reject \((x_0,y_0)\) either. On the other hand, since \(T^{r(1)}_{(F,G)_+(S)}(y_0,z_0) = \mathbb{R }\times \mathbb{R }\), Theorem 4.2(v) rejects the candidate \((x_0,y_0)\).
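
The rejection in the last sentence can be witnessed by a single point: \((-1,-1)\) belongs to \((F,G)_+(S)\), so (with the constant sequence \(t_n\equiv 1\)) it belongs to \(T^{r(1)}_{(F,G)_+(S)}(y_0,z_0)\), while it also lies in \(-Q\times -\mathrm{int}D\) for \(Q=\mathrm{int}C_\epsilon(B)\); hence (8) fails for \(m=1\). A minimal check (an illustration only, with the data of this example hard-coded) follows.

```python
# Data of Example 4.2: S = {0, 1}, C = D = R_+, F = G (singleton values).
S = [0, 1]
F = {0: 0.0, 1: -1.0}
G = F

def in_FG_plus(y, z, tol=1e-9):
    # (y, z) in (F, G)_+(S) = union over x in S of (F(x) + C) x (G(x) + D)
    return any(y >= F[x] - tol and z >= G[x] - tol for x in S)

y0, z0 = 0.0, 0.0
p = (-1.0, -1.0)
t = 1.0                           # a constant sequence t_n = 1 is admissible for radial sets
print(in_FG_plus(y0 + t * p[0], z0 + t * p[1]))   # True: p lies in T^{r(1)}_{(F,G)_+(S)}(y0, z0)
print(p[0] < 0 and p[1] < 0)                      # True: p lies in -Q x -int D, Q = int C_eps(B)
```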

Finally we discuss sufficient conditions for proper efficient solutions of problem (P). We need the following properties of our radial objects.

Proposition 4.3

([3]) Let \(F:X\rightarrow 2^Y, (x_0,y_0)\in \mathrm{gr} F\), and \(x\in \mathrm{dom}F\). Then,

  1. (i)

    \(F(x)-y_0\subset D^1_RF(x_0,y_0)(x-x_0)\), in particular, \(0\in {D}_R^{1}F(x_0,y_0)(0)\);

  2. (ii)

    \(F(x)-y_0\subset T^{r(1)}_{F(S)}(y_0)\), in particular, \(0\in T^{r(1)}_{F(S)}(y_0)\).

Note that these assertions say, in particular, that radial sets and derivatives possess global properties without any (relaxed) convexity assumption. To make this clear, recall that \(F: X \rightarrow 2^{Y}\) is termed pseudoconvex at \((x_{0}, y_{0}) \in \) gr\(F\) if epi\(F - (x_{0}, y_{0})\subset T_{\mathrm{epi}F}(x_{0}, y_{0})\). Furthermore, if \(F\) is pseudoconvex at \((x_{0}, y_{0})\), then, \(\forall x \in \mathrm{dom}F, F(x) - y_{0} \subset V^{1}(F_{+}, x_{0}, y_{0})\) (see Proposition 2.1 in [16]). Roughly speaking, that is why, in the following sufficient condition, no convexity assumption is needed.

Proposition 4.4

Let \((x_0,y_0)\in \mathrm{gr}F\) and \(x_0\in A\), the feasible set. Suppose that there exists \(z_0 \in G(x_0) \cap (-D)\) such that, for all \((u_i, v_i , w_i ) \in X\times (- C) \times ( -D), i=1,\ldots ,m-1\), and all \(x\in S\), either of the following separations holds

$$\begin{aligned}&T^{r(m)}_{(F ,G)_+(S)}(( y_0, z_0), (v_1, w_1),\ldots , (v_{m-1}, w_{m-1}))\cap -(Q\times D(z_0))=\emptyset ,\end{aligned}$$
(15)
$$\begin{aligned}&D_R^{m}(F ,G)_+(x_0, y_0, z_0, u_1,v_1, w_1,\ldots , u_{m-1},v_{m-1}, w_{m-1})(x\!-\!x_0)\nonumber \\&\quad \quad \!\cap \! -(Q\times D(z_0))=\emptyset . \end{aligned}$$
(16)

Then, \((x_0 , y_0 )\) is a \(Q\)-minimal solution of (P); here \(Q\) may be any nonempty open cone.

Proof

By the similarity, we prove only the case of (15). Note that (15) is assumed to hold also for \(v_i = 0\in -C\) and \(w_i = 0\in -D, i=1,\ldots ,m-1\). Therefore, \( T^{r(1)}_{(F ,G)_+(S)}(y_0, z_0)\cap -(Q\times D(z_0))=\emptyset . \) It follows from Proposition 4.3 that \( (y - y_0, z - z_0) \in T^{r(1)}_{(F ,G)_+(S)}( y_0, z_0) \) for all \(x\in S\), \(y \in F(x)\), and \(z \in G(x)\). Then,

$$\begin{aligned} \left( (F,G)(S)- (y_0,z_0)\right) \cap \left( -(Q\times D(z_0))\right) =\emptyset . \end{aligned}$$

Suppose to the contrary that there exist \(x\in A\) and \(y\in F(x)\) such that \(y-y_0\in -Q\). Then, there exists \(z\in G(x)\cap -D\) such that \((y,z)-(y_0,z_0)\in -(Q\times D(z_0))\), a contradiction. \(\square \)

From Propositions 4.4 and 2.2, we obtain immediately the following result.

Theorem 4.5

Let \((x_0,y_0)\in \mathrm{gr}F\) and \(x_0\in A\). Suppose that there exists \(z_0 \in G(x_0) \cap (-D)\) such that, for all \((u_i, v_i , w_i ) \in X\times (- C) \times ( -D), i=1,\ldots ,m-1\), and all \(x\in S\), either (15) or (16) holds. Then, with \(Q\) chosen as in the corresponding case among (i)–(vi) of Theorem 3.6, \((x_0,y_0)\) is an efficient solution of (P) of the corresponding kind.

In the following example, Theorem 4.5(v) works, while several existing results do not.

Example 4.3

Let \(X=Y=Z=\mathbb{R }, C=D=\mathbb{R }_+, G(x)\equiv \{0\}\), and

$$\begin{aligned} F(x)=\left\{ \begin{array}{l} \{0\},\quad \;\;\text{ if }\;\; x=0,\\ \{\frac{1}{n^2}\},\quad \text{ if }\;\; x={n}, \;n\in \mathbb{N },\\ \emptyset ,\quad \quad \;\;\mathrm{otherwise.}\\ \end{array}\right. \end{aligned}$$

Choose the base \(B =\{1\}\). Then, \(\delta =1\) and \(C_\epsilon (B)=\mathbb{R }_+\) for all \(\epsilon \in (0,\delta )\). Let \((x_0,y_0)=(0,0)\) and \(z_0=0\). It is easy to see that \(A=\mathbb{N }\cup \{0\}\). Then, \(T^{r(1)}_{(F,G)_+(A)}(y_0,z_0)=\mathbb{R }_+\times \mathbb{R }_+.\) It follows from Theorem 4.5(v) that \((x_0,y_0)\) is a strong Henig-proper efficient solution. It is easy to see that \(\mathrm{dom}F = \mathbb{N }\cup \{0\}\) is not convex and \(F\) is not pseudoconvex at \((x_0,y_0)\). So, Theorem 3.6 in [17], Theorems 5.3, 5.4 in [19], and Theorem 2 in [20] cannot be applied.
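
To see how condition (15) is verified here for \(m=1\): every point of \((F,G)_+(A)\) has nonnegative coordinates, so \(T^{r(1)}_{(F,G)_+(A)}(y_0,z_0)\subset \mathbb{R }_+\times \mathbb{R }_+\), which cannot meet \(-(Q\times D(z_0))=(-\infty ,0)\times (-\infty ,0]\). The small check below is an illustration only; finitely many feasible points are sampled and the names are ad hoc.

```python
# Data of Example 4.3 (truncated): A = {0} ∪ {1, ..., N}, C = D = R_+, G(x) = {0}.
N = 1000
def F(x):                 # the singleton value of F at the feasible point x
    return 0.0 if x == 0 else 1.0 / x ** 2

y0, z0 = 0.0, 0.0
# Each set (F(x)+C) x (G(x)+D) is the quadrant with lower-left corner (F(x), 0).
# All corners are componentwise >= (y0, z0), so cone_+((F,G)_+(A) - (y0,z0)) lies
# in R_+ x R_+ and cannot meet -(Q x D(z0)); hence (15) holds for m = 1.
corners = [(F(x) - y0, 0.0 - z0) for x in [0] + list(range(1, N + 1))]
print(all(c[0] >= 0 and c[1] >= 0 for c in corners))   # True
```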

Remark 4.1

From Proposition 4.3, for problem (P) we always have \((0,0)\in T_{(F,G)_+(A)}^{r(1)}(y_0,z_0)\) and \((0,0)\in {D}^{1}_R(F,G)_+(x_0,y_0,z_0)(A-x_0)\). On the other hand, it is clear that \(T^{r(m)}_A(x_0,0_X,\ldots ,0_X) =T^{r(1)}_A(x_0)\) and \({D}_R^{m}F(x_0,y_0,0_X,0_Y,\ldots ,0_X,0_Y)={D}_R^{1}F(x_0,y_0)\). (Accordingly, observe that the empty intersections involved in Proposition 4.1 contain the expression \( -Q\times -\mathrm{int}D\).) Consequently, when writing dual forms of sufficient optimality conditions (i.e., Lagrange multiplier rules), as done for weak efficiency in [3], we must require the strict positivity of the Lagrangians only at nonzero points of these radial sets and radial derivatives. Observe that Theorems 4.5 and 4.6 in [3] about such dual forms (stated, without proof, as immediate consequences of the preceding theorem on a primal form analogous to Proposition 4.1 here) were not stated precisely enough, since zero was not explicitly excluded in these radial objects.