1 Introduction

It is well known that the subgradient plays an important role in optimization and duality theory. The concept of subgradients for convex functions was introduced by Rockafellar [1] in finite-dimensional spaces. In recent years, the concept of subgradients has been generalized to vector-valued mappings and set-valued mappings in abstract spaces by many authors; see [2–8]. In [9], Chen and Craven introduced the weak subgradient for a vector-valued mapping and discussed its existence. Yang [10] generalized the concept introduced by Chen and Craven [9] to set-valued mappings. Chen and Jahn [2] defined another weak subgradient, which is stronger than the one introduced by Yang [10], and proved its existence by the Eidelheit separation theorem. By the Hahn–Banach theorem, Peng et al. [11] proved the existence of the weak subgradient for set-valued mappings introduced by Yang [10]. Recently, Li and Guo [12] proved some existence theorems for two kinds of weak subgradients for set-valued mappings by virtue of a Hahn–Banach extension theorem obtained by Zǎlinescu [13]. Very recently, Hernandez and Rodriguez-Marin [14] considered the weak subgradient of set-valued mappings introduced by Chen and Jahn [2] and also presented a new notion of the strong subgradient for set-valued mappings; moreover, they obtained some existence theorems for both subgradients. Note that all the results mentioned above require the cone-convexity of the objective function and its upper semicontinuity at a given point. This paper is an effort to remove these restrictions.

Motivated by the work reported in [12, 14], in this paper, we consider the weak subdifferential for set-valued mappings, which was introduced by Chen and Jahn [2]. Without any convexity and upper semicontinuity assumptions on objective functions, we prove two existence theorems of weak subgradients for set-valued mappings. Moreover, we derive some properties of the weak subdifferential for set-valued mappings. Our results improve the corresponding ones in [12, 14].

2 Preliminaries

Throughout this paper, let X and Y be two real locally convex topological vector spaces, and let L(X,Y) be the set of all linear continuous operators from X into Y. Let \(X^{*}:=L(X,\mathbb{R})\) and \(Y^{*}:=L(Y,\mathbb{R})\) denote the topological dual spaces of X and Y, respectively, and let \(C\subset Y\) be a proper (i.e., \(C\neq\{0\}\) and \(C\neq Y\)), closed, convex and pointed cone with nonempty interior \(\operatorname{int}C\). The origins of X and Y are denoted by \(0_{X}\) and \(0_{Y}\), respectively. The dual cone of C is defined by

$$\begin{aligned} C^*:=\bigl\{ f\in Y^*:f(x)\geq0,\ \mbox{for all}\ x\in C\bigr\} . \end{aligned}$$

We denote by (Y,C) the ordered topological vector space, where the ordering is induced by C. For any \(y_{1},y_{2}\in Y\), we define the following ordering relations:

$$\begin{aligned} y_1< y_2\ \Leftrightarrow\ y_2-y_1 \in\operatorname{int}C,\\ y_1\nless y_2\ \Leftrightarrow\ y_2-y_1\notin\operatorname{int}C. \end{aligned}$$

The relations > and ≯ are defined similarly.

Let F:X→Y be a set-valued mapping. The domain, graph and epigraph of F are, respectively, defined by

$$ \begin{aligned} \operatorname{dom}F&:=\bigl\{ x\in X:\;F(x)\neq\emptyset\bigr\} , \\ \operatorname{Gr}F&:=\bigl\{ (x,y)\in X\times Y:\;x\in\operatorname{dom}F,\;y\in F(x)\bigr\} , \\ \operatorname{epi}F&:=\bigl\{ (x,y)\in X\times Y:\;x\in\operatorname{dom}F,\;y\in F(x)+C\bigr\} , \end{aligned} $$

where the symbol ∅ denotes the empty set.

Let K be a nonempty subset of X and F:K→Y be a set-valued mapping. In this paper, we consider the following set-valued optimization problem (in short, SVOP):

$$\begin{aligned} \operatorname{min}_C F(x),\quad\mbox{subject to}\quad x \in{K}. \end{aligned}$$

A pair \((x_{0},y_{0})\) with \(x_{0}\in K\) and \(y_{0}\in F(x_{0})\) is called a weak efficient solution of (SVOP) iff \((F(K)-y_{0})\cap(-\operatorname{int}C)=\emptyset\), where \(F(K):=\bigcup_{x\in K}F(x)\).

Let AY. We denote by \(\operatorname{WMin}A:=\{y\in A:(A-y)\cap- \operatorname{int}C=\emptyset\}\) the set of weak efficient elements of A.
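In a finite-dimensional discrete setting, this definition is straightforward to compute. The following sketch, assuming \(Y=\mathbb{R}^{2}\) and \(C=\mathbb{R}^{2}_{+}\) (so \(-\operatorname{int}C\) is the strictly negative orthant) and a hypothetical finite set A, picks out the weakly efficient elements:

```python
# Weakly efficient elements of a finite set A in R^2 under C = R_+^2.
# A point y is weakly efficient iff no a in A satisfies a - y in -int C,
# i.e. no a is strictly smaller than y in every component.

def wmin(A):
    """Return the weakly efficient elements of a finite A, preserving order."""
    return [y for y in A
            if not any(a[0] < y[0] and a[1] < y[1] for a in A)]

A = [(0.0, 3.0), (1.0, 1.0), (3.0, 0.0), (2.0, 2.0)]
print(wmin(A))  # (2.0, 2.0) is excluded: strictly dominated by (1.0, 1.0)
```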

Definition 2.1

[15]

Let K be a nonempty subset of X and \(x_{0}\in\operatorname{cl}K\). The contingent cone \(T(K,x_{0})\) to K at \(x_{0}\) is the set of all \(h\in X\) for which there exist a net \(\{t_{\alpha}:\alpha\in I\}\) of positive real numbers and a net \(\{x_{\alpha}:\alpha\in I\}\subset K\) such that

$$\begin{aligned} \lim_{\alpha}x_\alpha=x_0\quad\mbox{and}\quad \lim_{\alpha}t_\alpha (x_\alpha-x_0)=h. \end{aligned}$$

Remark 2.1

From Definition 2.1, we have \(T(K,x_{0})\subset\operatorname{cl}\operatorname{cone}(K-x_{0})\), and \(T(K,x_{0})\) is a closed cone. Moreover, if K is convex, then \(T(K,x_{0})\) is a closed convex cone.

Remark 2.2

It is not difficult to see that \(h\in T(K,x_{0})\) if and only if there exist a net \(\{t_{\alpha}:\alpha\in I\}\) of positive real numbers and a net \(\{h_{\alpha}:\alpha\in I\}\) with \(h_{\alpha}\rightarrow h\) such that \(t_{\alpha}h_{\alpha}\rightarrow0\) and \(x_{0}+t_{\alpha}h_{\alpha}\in K\).

Definition 2.2

[3]

Let F:X→Y be a set-valued mapping and let \((x_{0},y_{0})\in\operatorname{Gr}F\). The contingent derivative \(DF(x_{0},y_{0})\) of F at \((x_{0},y_{0})\) is the set-valued mapping from X to Y defined by

$$\begin{aligned} \operatorname{Gr}\bigl(DF(x_0,y_0)\bigr):=T\bigl( \operatorname{Gr}(F);(x_0,y_0)\bigr). \end{aligned}$$

Remark 2.3

Let \((x_{0},y_{0})\in\operatorname{Gr}F\). It is easy to see that

  1. (i)

    \(y\in DF(x_{0},y_{0})(x)\) if and only if there exist a net \(\{(x_{\alpha},y_{\alpha}):\alpha\in I\}\subset\operatorname{Gr}F\) and a net \(\{t_{\alpha}:\alpha\in I\}\) of positive real numbers such that

    $$\begin{aligned} \lim_{\alpha}(x_\alpha,y_\alpha)=(x_0,y_0) \quad\mbox{and}\quad\lim_{\alpha }t_\alpha(x_\alpha-x_0,y_\alpha-y_0)=(x,y); \end{aligned}$$
  2. (ii)

    the set-valued mapping \(DF(x_{0},y_{0})\) is positively homogeneous with closed graph;

  3. (iii)

    [16] \((0,0)\in\operatorname{Gr}(DF(x_{0},y_{0}))\).

Definition 2.3

[6]

Let K be a convex subset of X. A set-valued mapping F:X→Y is said to be C-convex on K iff, for any \(x_{1},x_{2}\in K\) and λ∈[0,1],

$$\begin{aligned} \lambda F(x_1)+(1-\lambda)F(x_2)\subset F\bigl(\lambda x_1+(1-\lambda)x_2\bigr)+C. \end{aligned}$$
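For \(X=Y=\mathbb{R}\) and \(C=\mathbb{R}_{+}\), the inclusion in Definition 2.3 says that every element of \(\lambda F(x_{1})+(1-\lambda)F(x_{2})\) lies above some element of \(F(\lambda x_{1}+(1-\lambda)x_{2})\). A minimal sampled check, using the hypothetical map \(F(x)=\{x^{2},x^{2}+1\}\) (not from the paper), might look like:

```python
# Sampled check of C-convexity (Definition 2.3) for the scalar cone
# C = R_+: every element of lam*F(x1) + (1-lam)*F(x2) must lie in
# F(lam*x1 + (1-lam)*x2) + R_+, i.e. be >= min of that finite value set.

def F(x):
    return {x**2, x**2 + 1.0}

def is_C_convex_on_samples(F, xs, lams, tol=1e-12):
    for x1 in xs:
        for x2 in xs:
            for lam in lams:
                z = lam * x1 + (1 - lam) * x2
                lhs = {lam * a + (1 - lam) * b for a in F(x1) for b in F(x2)}
                if any(v < min(F(z)) - tol for v in lhs):
                    return False
    return True

xs = [i / 4 for i in range(-8, 9)]          # grid on [-2, 2]
lams = [0.0, 0.25, 0.5, 0.75, 1.0]
print(is_C_convex_on_samples(F, xs, lams))  # True on this grid
```

A grid check of course only refutes, never proves, C-convexity; it is meant as an illustration of the definition.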

Remark 2.4

If the set-valued mapping F is C-convex on K, then F(K)+C is a convex set.

Definition 2.4

[17]

A set-valued mapping F:X→Y is said to be compactly approximable at \((x_{0},y_{0})\in\operatorname{Gr}F\) iff, for each \(v_{0}\in X\), there exist a set-valued mapping H from X into the set of all nonempty compact subsets of Y, a neighborhood V of \(v_{0}\) in X, and a function \(r:\;]0,1]\times X\rightarrow\;]0,+\infty[\) satisfying

  1. (i)

    \(\lim_{(t,v)\rightarrow(0^{+},v_{0})} r(t,v)=0\);

  2. (ii)

    for each vV and t∈]0,1],

    $$\begin{aligned} F(x_0+tv)\subset y_0+t\bigl(H(v_0)+r(t,v) B_Y\bigr), \end{aligned}$$

where B Y is the closed unit ball around the origin of Y.

The following lemma, which plays an important role in proving our main results, will be used in the sequel.

Lemma 2.1

[13]

Let X, Y be separated locally convex topological vector spaces, F:X→Y be a C-convex set-valued mapping, \(X_{0}\subset X\) be a linear subspace and \(T_{0}\in L(X_{0},Y)\). Suppose that \(\operatorname{int}(\operatorname{epi}F)\neq\emptyset\), \(X_{0}\cap\operatorname{int}(\operatorname{dom}F)\neq\emptyset\), and \(T_{0}(x)\ngtr y\) for all \((x,y)\in\operatorname{Gr}F\cap(X_{0}\times Y)\). If \(T_{0}(x)=\langle x, x^{*}_{0}\rangle y_{0}\) for every \(x\in X_{0}\) with fixed \(x^{*}_{0}\in X^{*}\) and \(y_{0}\in Y\), then there exists \(T\in L(X,Y)\) such that \(T\mid_{X_{0}}=T_{0}\) and \(T(x)\ngtr y\) for all \((x,y)\in\operatorname{Gr}F\).

By Lemma 2.5 in [18], it is easy to prove the following result.

Lemma 2.2

Let \(C\subset Y\) be a closed, convex and pointed cone with \(\operatorname{int}C\neq\emptyset\), and let S be a nonempty subset of Y. Then, for \(y\in Y\),

$$\begin{aligned} (S-y)\cap-\operatorname{int}C=\emptyset\quad \Leftrightarrow\quad (S+\operatorname{int}C-y)\cap - \operatorname{int}C=\emptyset. \end{aligned}$$

3 Existence of Weak Subgradients

In this section, we establish two existence theorems of weak subgradients for set-valued mappings. Denote \(W:=Y\backslash(-\operatorname {int}C)\).

Definition 3.1

[2]

Let K be a subset of X with \(x_{0}\in K\). Let F:K→Y be a set-valued mapping. \(T\in L(X,Y)\) is called a weak subgradient of F at \(x_{0}\) iff

$$\begin{aligned} F(x)-F(x_0)-T(x-x_0)\subset W,\quad\forall x\in K. \end{aligned}$$

The set of all weak subgradients of F at \(x_{0}\), denoted by \(\partial_{w}F(x_{0})\), is called the weak subdifferential of F at \(x_{0}\).
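In the scalar case \(X=Y=\mathbb{R}\) with \(C=\mathbb{R}_{+}\), we have \(W=[0,+\infty)\), and Definition 3.1 reduces to requiring \(f-g-t(x-x_{0})\geq0\) for all \(x\in K\), \(f\in F(x)\), \(g\in F(x_{0})\). A sampled check of this condition on a grid, using the hypothetical single-valued example \(F(x)=\{|x|\}\) at \(x_{0}=0\), can be sketched as:

```python
# Sampled test of Definition 3.1 with X = Y = R, C = R_+, so
# W = R \ (-int C) = [0, +inf): a scalar t is a weak subgradient of F
# at x0 iff every element of F(x) - F(x0) - t*(x - x0) is >= 0.

def is_weak_subgradient(F, x0, t, xs, tol=1e-12):
    for x in xs:
        for fx in F(x):
            for fx0 in F(x0):
                if fx - fx0 - t * (x - x0) < -tol:
                    return False
    return True

F = lambda x: {abs(x)}
xs = [i / 10 for i in range(-30, 31)]
print(is_weak_subgradient(F, 0.0, 0.5, xs))  # True: 0.5 is a weak subgradient
print(is_weak_subgradient(F, 0.0, 2.0, xs))  # False: |x| - 2x < 0 for x > 0
```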

Theorem 3.1

Let K be a convex subset of X with \(\operatorname{int}K\neq\emptyset\). Let F:K→Y be a set-valued mapping with F(x)≠∅ for any \(x\in K\). Let \(x_{0}\in\operatorname{int}K\) and \(y_{0}\in F(x_{0})\cap\operatorname{WMin}F(K)\). If the following conditions are satisfied:

  1. (i)

    \(DF(x_{0},y_{0})\) is C-convex on \(K-\{x_{0}\}\);

  2. (ii)

    there exists \(a\in Y\) such that \(DF(x_{0},y_{0})(K-x_{0})\subset a-\operatorname{int}C\);

  3. (iii)

    \(F(x)-F(x_{0})\subset DF(x_{0},y_{0})(x-x_{0})+C\), \(\forall x\in K\);

then \(\partial_{w}F(x_{0})\neq\emptyset\). Moreover, there exists \(T\in\partial_{w}F(x_{0})\) such that for every \(x\in K\),

$$\begin{aligned} T(x-x_0)\notin-\operatorname{int}C\quad \Leftrightarrow\quad T(x-x_0)\in C. \end{aligned}$$

Proof

We define the set-valued mapping G:K→Y by

$$\begin{aligned} G(x):=DF(x_0,y_0) (x-x_0). \end{aligned}$$

We now prove that G is a C-convex set-valued mapping. Indeed, for any \(x_{1},x_{2}\in K\) and λ∈[0,1], by the C-convexity of \(DF(x_{0},y_{0})\) on \(K-\{x_{0}\}\), we have

$$\begin{aligned} \lambda G(x_1)+(1-\lambda)G(x_2) =&\lambda DF(x_0,y_0) (x_1-x_0)+(1- \lambda)DF(x_0,y_0) (x_2-x_0) \\ \subset& DF(x_0,y_0) \bigl(\lambda( x_1-x_0)+(1-\lambda) (x_2-x_0) \bigr)+C \\ =& G\bigl(\lambda x_1+(1-\lambda)x_2\bigr)+C, \end{aligned}$$

which implies that G is a C-convex set-valued mapping.

Let

$$\begin{aligned} M:=\bigl\{ (x,y):x\in K,y\in G(x)+\operatorname{int}C\bigr\} . \end{aligned}$$

Since K is a nonempty convex set and G is C-convex, M is a nonempty convex set. The proof of the theorem is divided into the following three steps.

(I) We prove that \(\operatorname{int}M\neq\emptyset\).

By condition (ii), there exists \(a\in Y\) such that

$$\begin{aligned} G(x)\subset a-\operatorname{int}C,\quad \forall x\in K. \end{aligned}$$
(1)

Let \(c\in\operatorname{int}C\) and set \(\bar{y}:=a+c\). Then \(\bar{y}-a=c\in\operatorname{int}C\). It follows that there exists a neighborhood U of \(0_{Y}\) such that

$$ U+\bar{y}-a\subset C. $$
(2)

Since \(x_{0}\in\operatorname{int}K\), there exists a neighborhood V of \(0_{X}\) such that \(x_{0}+V\subset K\). From (1), for any \(x\in x_{0}+V\) and \(y_{x}\in G(x)\), there exists \(c_{x}\in\operatorname{int}C\) such that

$$\begin{aligned} y_x=a-c_x. \end{aligned}$$

This fact together with (2) yields

$$\begin{aligned} U+\bar{y}-y_x=U+\bar{y}-a+c_x\subset \operatorname{int}C, \end{aligned}$$

which implies that

$$ U+\bar{y}\subset y_x+\operatorname{int}C\subset G(x)+\operatorname{int}C. $$
(3)

On the other hand, for any \(x\in x_{0}+V\),

$$ \bar{y}=a+c=y_x+c_x+c\in G(x)+\operatorname{int}C+\operatorname{int}C\subset G(x)+\operatorname{int}C. $$
(4)

Combining (3) and (4) yields

$$\begin{aligned} (x,y)\in M,\quad \forall x\in x_0+V,\ \forall y\in U+\bar{y}. \end{aligned}$$

It follows that \(\operatorname{int}M\neq\emptyset\).

(II) We prove that \((x_{0},0)\notin M\).

Indeed, if \((x_{0},0)\in M\), then \(0\in G(x_{0})+\operatorname{int}C\), and so \(G(x_{0})\cap-\operatorname{int}C\neq\emptyset\). This implies that there exists \(c\in\operatorname{int}C\) such that \(-c\in DF(x_{0},y_{0})(0)\). It follows that there exist a net \(\{\lambda_{\alpha}:\alpha\in I\}\) of positive real numbers and a net \(\{(x_{\alpha},y_{\alpha}):\alpha\in I\}\subset\operatorname{Gr}F\) satisfying

$$\begin{aligned} \lim_\alpha(x_\alpha,y_\alpha)=(x_0,y_0) \quad\mbox{and}\quad\lim_\alpha \lambda_{\alpha} \bigl[(x_\alpha,y_\alpha)-(x_0,y_0) \bigr]=(0,-c). \end{aligned}$$

Therefore, there exists \(\alpha_{0}\in I\) such that

$$\begin{aligned} \lambda_{\alpha}(y_\alpha-y_0)\in-\operatorname{int}C, \quad \forall \alpha\geq \alpha_0 \end{aligned}$$

and so

$$\begin{aligned} y_\alpha-y_0\in-\operatorname{int}C,\quad\forall \alpha\geq \alpha_0, \end{aligned}$$

which contradicts the fact \(y_{0}\in\operatorname{WMin} F(K)\).

(III) There exists \(T\in L(X,Y)\) such that \(T\in\partial_{w}F(x_{0})\).

Since M is a nonempty convex set with \(\operatorname{int}M\neq\emptyset\) and \((x_{0},0)\notin M\), by the separation theorem of convex sets, there exists \((-x^{*},y^{*})\in X^{*}\times Y^{*}\setminus\{(0,0)\}\) such that

$$\begin{aligned} \bigl\langle -x^*,x\bigr\rangle +\bigl\langle y^*,y\bigr\rangle \geq\bigl\langle -x^*,x_0\bigr\rangle +\bigl\langle y^*,0\bigr\rangle ,\quad \forall (x,y)\in M, \end{aligned}$$

or equivalently,

$$ \bigl\langle -x^*,x\bigr\rangle +\bigl\langle y^*,y\bigr\rangle \geq\bigl\langle -x^*,x_0\bigr\rangle ,\quad \forall (x,y)\in M. $$
(5)

We claim that \(y^{*}\neq0\). In fact, if \(y^{*}=0\), then \(\langle x^{*},x\rangle\leq\langle x^{*},x_{0}\rangle\) for all \(x\in K\). Since \(x_{0}\in\operatorname{int}K\), there exists a symmetric neighborhood U of \(0_{X}\) such that \(x_{0}+U\subset K\). It follows that

$$\begin{aligned} \bigl\langle x^*,x_0\pm u\bigr\rangle \leq\bigl\langle x^*,x_0\bigr\rangle ,\quad\forall u\in U. \end{aligned}$$

This implies \(x^{*}=0\), which contradicts \((-x^{*},y^{*})\neq(0,0)\). Therefore, \(y^{*}\neq0\).

Note that \(0\in DF(x_{0},y_{0})(0)\). This fact together with (5) yields \(\langle y^{*},c\rangle\geq0\) for all \(c\in\operatorname{int}C\), and hence \(\langle y^{*},c\rangle\geq0\) for all \(c\in C\); that is, \(y^{*}\in C^{*}\). Since \(y^{*}\neq0\), we have \(\langle y^{*},c\rangle>0\) for all \(c\in\operatorname{int}C\). Then there exists some \(c_{0}\in\operatorname{int}C\) with \(\langle y^{*},c_{0}\rangle=1\). We now define a mapping T:X→Y by

$$\begin{aligned} T(x):=\bigl\langle x^*,x\bigr\rangle c_0,\quad\forall x\in X. \end{aligned}$$

Obviously, T is linear and continuous. Next, we prove that this mapping T satisfies

$$\begin{aligned} F(x)-F(x_0)-T(x-x_0)\subset W, \quad\forall x\in K. \end{aligned}$$

To this end, we first prove that

$$\begin{aligned} G(x)-T(x-x_0)\subset W, \quad\forall x\in K. \end{aligned}$$

By Lemma 2.2, we only need to prove that

$$\begin{aligned} \bigl(G(x)+\operatorname{int}C-T(x-x_0)\bigr)\cap-\operatorname{int}C=\emptyset, \quad\forall x\in K. \end{aligned}$$

Suppose by contradiction that there exist xK and \(y\in G(x)+\operatorname {int}C\) such that

$$\begin{aligned} y-T(x-x_0)\in-\operatorname{int}C. \end{aligned}$$

Since \(y^{*}\in C^{*}\setminus\{0\}\) and \(y-T(x-x_{0})\in-\operatorname{int}C\), we have

$$\begin{aligned} 0>\bigl\langle y^*,y-T(x-x_0)\bigr\rangle =\bigl\langle y^*,y\bigr\rangle -\bigl\langle x^*,x-x_0\bigr\rangle \bigl\langle y^*,c_0\bigr\rangle =\bigl\langle y^*,y\bigr\rangle -\bigl\langle x^*,x-x_0\bigr\rangle , \end{aligned}$$

which contradicts (5). Therefore, by condition (iii), \(T\in\partial_{w}F(x_{0})\). Finally, for every \(x\in K\), we have

$$\begin{aligned} T(x-x_0)\notin-\operatorname{int}C\ \Leftrightarrow\ \bigl\langle x^*,x-x_0\bigr\rangle c_0\notin-\operatorname{int}C \ \Leftrightarrow&\ \bigl\langle x^*,x-x_0\bigr\rangle \geq 0\\ \Leftrightarrow &\ T(x-x_0)\in C. \end{aligned}$$

This completes the proof. □

Remark 3.1

In [14], Hernandez and Rodriguez-Marin obtained an existence theorem of weak subgradients for set-valued mappings. The assumptions that \(F(x_{0})\) is upper bounded and F is upper semicontinuous at \(x_{0}\) are required in [14]. However, Theorem 3.1 does not require these assumptions. The following example illustrates a case in which Theorem 3.1 is applicable, but Theorem 4.1 of [14] is not.

Example 3.1

Let \(X=Y=\mathbb{R}\), \(K=\mathbb{R}\), \(C=\{y\in\mathbb{R}:y\geq0\}\), and let

$$\begin{aligned} F(x)=\left \{ \begin{array}{l@{\quad}l} \{0\},&\mbox{if}\ x\leq0, \\ \{0,1\},&\mbox{if}\ x>0. \end{array} \right . \end{aligned}$$

Let \((x_{0},y_{0})=(0,0)\). Then,

$$\begin{aligned} T\bigl(\operatorname{Gr}(F);(0,0)\bigr)=\bigl\{ (x,0):x\in\mathbb{R}\bigr\} . \end{aligned}$$

It is easy to see that the assumptions of Theorem 3.1 are satisfied. Obviously, \(0\in\partial_{w}F(0)\). However, Theorem 4.1 in [14] is not applicable because F is not upper semicontinuous at \(x_{0}\).
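The membership \(0\in\partial_{w}F(0)\) in Example 3.1 can be checked numerically on a grid: with T=0, every element of \(F(x)-F(0)\) must avoid \(-\operatorname{int}C=(-\infty,0)\). A minimal sketch:

```python
# Sampled verification of Example 3.1: with C = R_+ and T = 0, every
# element of F(x) - F(0) - 0*(x - 0) must be >= 0, i.e. lie in W.

def F(x):
    return {0.0} if x <= 0 else {0.0, 1.0}

xs = [i / 10 for i in range(-30, 31)]
ok = all(fx - f0 >= 0.0 for x in xs for fx in F(x) for f0 in F(0.0))
print(ok)  # True: 0 is a weak subgradient of F at 0
```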

We now give a sufficient condition which guarantees that assumption (i) of Theorem 3.1 holds.

Proposition 3.1

Let K be a convex subset of X and F:K→Y be a C-convex set-valued mapping. Let \((x_{0},y_{0})\in\operatorname{Gr}F\). If F is compactly approximable at \((x_{0},y_{0})\), then \(DF(x_{0},y_{0})\) is C-convex.

Proof

Since F is compactly approximable at \((x_{0},y_{0})\), by Proposition 2.2 in [19],

$$\begin{aligned} D(F+C)(x_0,y_0)(x)=DF(x_0,y_0)(x)+C,\quad\forall x\in X. \end{aligned}$$

Since F is C-convex, \(\operatorname{epi}F\) is a convex set. It follows that \(T(\operatorname{epi}F;(x_{0},y_{0}))\) is a convex set, and so \(D(F+C)(x_{0},y_{0})\) has a convex graph. Therefore, \(DF(x_{0},y_{0})\) is C-convex. This completes the proof. □

Theorem 3.2

Let X and Y be separated locally convex topological vector spaces, and K be a convex subset of X with \(\operatorname{int}K\neq\emptyset\). Let F:K→Y be a set-valued mapping with F(x)≠∅ for any \(x\in K\). Let \(x_{0}\in\operatorname{int}K\) and \(y_{0}\in F(x_{0})\). If the following conditions are satisfied:

  1. (i)

    \(DF(x_{0},y_{0})\) is C-convex on \(K-\{x_{0}\}\);

  2. (ii)

    \(DF(x_{0},y_{0})(0)\cap-\operatorname{int}C=\emptyset\);

  3. (iii)

    \(F(x)-F(x_{0})\subset DF(x_{0},y_{0})(x-x_{0})+C\), \(\forall x\in K\);

then \(\partial_{w}F(x_{0})\neq\emptyset\).

Proof

Let \(S=K-\{x_{0}\}\). We define the set-valued mapping G:S→Y by

$$\begin{aligned} G(x):=DF(x_0,y_0) (x),\quad\forall x\in S. \end{aligned}$$

Similarly to the proof of Theorem 3.1, we can prove that G is C-convex on S.

By condition (ii), \(G(0)\cap-\operatorname{int}C=\emptyset\). It follows that

$$ 0\ngtr y,\quad\forall y\in G(0). $$
(6)

We next consider the special subspace \(X_{0}=\{0\}\) and \(T_{0}(0):=0\). Since \(x_{0}\in\operatorname{int}K\) and F(x)≠∅ for any \(x\in K\), it is easy to see that

$$\begin{aligned} \operatorname{int}(\operatorname{epi}G)\neq\emptyset,\qquad X_0\cap\operatorname{int}(\operatorname{dom}G)\neq \emptyset. \end{aligned}$$

From (6), we have

$$\begin{aligned} T_0(x)\ngtr y,\quad\forall (x,y)\in\operatorname{Gr} G\cap(X_0 \times Y)=\bigl\{ (0,y): y\in G(0)\bigr\} . \end{aligned}$$

By Lemma 2.1, there exists \(T\in L(X,Y)\) such that

$$\begin{aligned} T(x)\ngtr y,\quad\forall (x,y)\in\operatorname{Gr} G \end{aligned}$$

and so

$$\begin{aligned} T(x)\notin DF(x_0,y_0) (x)+\operatorname{int} C,\quad\forall x \in S. \end{aligned}$$

It follows that

$$\begin{aligned} T(x-x_0)\notin DF(x_0,y_0) (x-x_0)+\operatorname{int} C,\quad\forall x\in K, \end{aligned}$$

or

$$\begin{aligned} DF(x_0,y_0) (x-x_0)-T(x-x_0) \subset W,\quad\forall x\in K, \end{aligned}$$

which, together with condition (iii), yields

$$\begin{aligned} F(x)-F(x_0)-T(x-x_0) \subset& DF(x_0,y_0) (x-x_0)-T(x-x_0)+C\\ \subset& W,\quad \forall x\in K. \end{aligned}$$

This implies \(T\in\partial_{w}F(x_{0})\). This completes the proof. □

Remark 3.2

In [12, Theorem 3.2], Li and Guo obtained an existence theorem of weak subgradients by using similar proof methods. It is important to note that our assumptions differ from those used in [12]. First, the condition that F is C-convex has been relaxed, because we assume C-convexity of the contingent derivative of F instead of F itself. Second, the assumptions that \(F(x_{0})-C\) is convex and \(F(x_{0})\cap(F(x_{0})-\operatorname{int}C)=\emptyset\) are required in [12], but Theorem 3.2 does not require them.

Remark 3.3

In [2, Theorem 7] and [11, Theorem 4.1], the authors derived some existence theorems of weak subgradients for set-valued mappings. The assumptions that \(-F(x_{0})\) is minorized and F is C-convex and upper semicontinuous at \(x_{0}\) are required in [2, 11]. However, Theorem 3.2 does not require these assumptions.

Now, we give an example to illustrate Theorem 3.2.

Example 3.2

Let \(X=Y=\mathbb{R}\), \(K=\mathbb{R}\), \(C=\{y\in\mathbb{R}:y\geq0\}\), and let

$$\begin{aligned} F(x)=\left \{ \begin{array}{l@{\quad}l} \{0\},&\mbox{if}\ x\leq0;\\ \{2,-x\},&\mbox{if}\ x>0. \end{array} \right . \end{aligned}$$

Let \((x_{0},y_{0})=(0,0)\). Then,

$$\begin{aligned} T\bigl(\operatorname{Gr}(F);(0,0)\bigr)=\bigl\{ (x,0):x\leq0\bigr\} \cup\bigl\{ (x,-x):x>0\bigr\} . \end{aligned}$$

It is easy to see that the assumptions of Theorem 3.2 are satisfied. Obviously, \(0\in\partial_{w}F(0)\). However, Theorem 3.2 in [12], Theorem 7 in [2] and Theorem 4.1 in [11] are not applicable, since F is not C-convex on K. Indeed, letting \(x_{1}=-4\), \(x_{2}=2\) and \(\lambda=\frac{1}{2}\), we have

$$\begin{aligned} \lambda F(x_1)+(1-\lambda)F(x_2)=\{-1,1\}\nsubseteq F \bigl(\lambda x_1+(1-\lambda)x_2\bigr)+\mathbb{R}_+= \mathbb{R}_+. \end{aligned}$$
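The displayed failure of C-convexity can be verified directly: with \(x_{1}=-4\), \(x_{2}=2\) and \(\lambda=\frac{1}{2}\), the set \(\lambda F(x_{1})+(1-\lambda)F(x_{2})\) equals \(\{-1,1\}\), and \(-1\notin F(-1)+\mathbb{R}_{+}=\mathbb{R}_{+}\). A minimal numerical check:

```python
# Numerical check of the failure of C-convexity in Example 3.2:
# lam*F(x1) + (1-lam)*F(x2) = {-1, 1} is not contained in F(-1) + R_+.

def F(x):
    return {0.0} if x <= 0 else {2.0, -float(x)}

x1, x2, lam = -4.0, 2.0, 0.5
z = lam * x1 + (1 - lam) * x2                      # z = -1
lhs = {lam * a + (1 - lam) * b for a in F(x1) for b in F(x2)}
# v lies in F(z) + R_+ iff v >= w for some w in the finite set F(z)
contained = all(any(v >= w for w in F(z)) for v in lhs)
print(sorted(lhs), contained)  # [-1.0, 1.0] False
```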

4 Properties of Weak Subgradients

In this section, we obtain some properties of weak subgradients for set-valued mappings.

Theorem 4.1

Let K be a convex subset of X and \(x_{0}\in K\). Let F:K→Y be a C-convex set-valued mapping with nonempty values such that \(F(x_{0})-C\) is convex. If \(T\in\partial_{w}F(x_{0})\), then there exists \(y^{*}\in C^{*}\setminus\{0\}\) such that

$$\begin{aligned} \bigl\langle y^*,y-y_1-T(x-x_0)\bigr\rangle \geq0,\quad \forall (x,y)\in\operatorname{Gr}F,\ \forall y_1\in F(x_0). \end{aligned}$$

Proof

Let \(T\in\partial_{w}F(x_{0})\). Then

$$\begin{aligned} \bigl(F(x)-F(x_0)-T(x-x_0)\bigr)\cap-\operatorname{int} C= \emptyset,\quad \forall x\in K. \end{aligned}$$

This implies that

$$\begin{aligned} \bigl(F(x)+C-F(x_0)-T(x-x_0)\bigr)\cap-\operatorname{int} C= \emptyset,\quad \forall x\in K. \end{aligned}$$

We define a set-valued mapping G:K→Y by

$$\begin{aligned} G(x):=F(x)-F(x_0). \end{aligned}$$

Since F is C-convex and \(F(x_{0})-C\) is convex, for any \(x_{1},x_{2}\in K\) and λ∈[0,1],

$$\begin{aligned} \lambda G(x_1)+(1-\lambda)G(x_2) =&\lambda F(x_1)-\lambda F(x_0)+(1-\lambda)F(x_2)-(1- \lambda)F(x_0) \\ \subset& F\bigl(\lambda x_1+(1-\lambda)x_2 \bigr)+C-F(x_0)+C \\ \subset& G\bigl(\lambda x_1+(1-\lambda)x_2\bigr)+C. \end{aligned}$$

It follows that G is a C-convex set-valued mapping. Since T is a linear operator, the set

$$\begin{aligned} \bigcup_{x\in K}\bigl(F(x)+C-F(x_0)-T(x-x_0) \bigr) \end{aligned}$$

is a convex set. By the separation theorem of convex sets, there exists \(y^{*}\in Y^{*}\setminus\{0\}\) such that

$$ \bigl\langle y^*,y+c-y_1-T(x-x_0)\bigr\rangle \geq0,\quad \forall x\in K,y\in F(x), y_1\in F(x_0), c\in C. $$
(7)

We claim that

$$\begin{aligned} \bigl\langle y^*,c\bigr\rangle \geq0,\quad\forall c\in C. \end{aligned}$$

In fact, if there exists \(c_{0}\in C\) such that \(\langle y^{*},c_{0}\rangle<0\), then, letting \(x=x_{0}\), \(y=y_{1}\) and \(c=c_{0}\) in (7), we have \(\langle y^{*},c_{0}\rangle\geq0\), a contradiction. Thus, \(y^{*}\in C^{*}\setminus\{0\}\). Letting c=0 in (7), we get the conclusion. This completes the proof. □

Theorem 4.2

Let K be a nonempty subset of X and \(x_{0}\in K\). Let F:K→Y be a set-valued mapping with nonempty values. If \(\partial_{w}F(x_{0})\neq\emptyset\), then \(\partial_{w}F(x_{0})\) is a closed set.

Proof

Suppose by contradiction that there exists a net \(\{T_{\alpha}:\alpha\in I\}\subset\partial_{w}F(x_{0})\) such that \(T_{\alpha}\rightarrow T\), but \(T\notin\partial_{w}F(x_{0})\). Thus, there exist \(\overline{x}\in K\), \(\overline{y}\in F(\overline{x})\) and \(y_{0}\in F(x_{0})\) such that

$$\begin{aligned} \overline{y}-y_0-T(\overline{x}-x_0)\in-\operatorname{int} C. \end{aligned}$$

Note that

$$\begin{aligned} \overline{y}-y_0-T_\alpha(\overline{x}-x_0) \rightarrow\overline {y}-y_0-T(\overline{x}-x_0). \end{aligned}$$

It follows that there exists \(\alpha_{0}\in I\) such that

$$\begin{aligned} \overline{y}-y_0-T_\alpha(\overline{x}-x_0)\in- \operatorname{int} C,\quad\forall \alpha\geq\alpha_0, \end{aligned}$$

which contradicts the fact that \(T_{\alpha}\in\partial_{w}F(x_{0})\). This completes the proof. □
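Theorem 4.2 can be illustrated in the scalar case \(X=Y=\mathbb{R}\), \(C=\mathbb{R}_{+}\), with the hypothetical map \(F(x)=\{|x|\}\) and \(x_{0}=0\): the condition \(|x|-tx\geq0\) for all x gives \(\partial_{w}F(0)=[-1,1]\), a closed set, and the boundary slopes \(t=\pm1\) indeed belong to it. A sampled sketch:

```python
# Closedness of the weak subdifferential (Theorem 4.2) in the scalar
# case: for F(x) = {|x|} and x0 = 0 with C = R_+, the weak
# subdifferential is the closed interval [-1, 1].

def in_subdifferential(t, xs, tol=1e-12):
    """Sampled test of |x| - t*x >= 0 over the grid xs."""
    return all(abs(x) - t * x >= -tol for x in xs)

xs = [i / 10 for i in range(-30, 31)]
print(in_subdifferential(1.0, xs))    # True: boundary slope belongs
print(in_subdifferential(-1.0, xs))   # True: boundary slope belongs
print(in_subdifferential(1.1, xs))    # False: outside the closed set
```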

Remark 4.1

Note that we prove Theorem 4.2 in locally convex topological vector spaces, whereas a similar result was proved by Li and Guo [12] in normed spaces.

5 Conclusions

In this paper, we have proved two existence theorems of weak subgradients for set-valued mappings. These results meaningfully improve the corresponding results obtained by Hernandez and Rodriguez-Marin [14] and by Li and Guo [12], respectively. Moreover, two properties of the weak subdifferential for set-valued mappings are derived. It would be interesting to consider calculus rules for sums and compositions of mappings for weak subdifferentials, as well as applications to set-valued optimization problems. This may be the topic of some of our forthcoming papers.