1 Introduction

A semi-infinite multiobjective optimization problem is the simultaneous minimization of a finite number of objective functions subject to an infinite number of inequality constraints. Recently, characterizations of the solution set, optimality conditions and duality for semi-infinite multiobjective optimization problems have been investigated by many authors. We refer the readers to the papers [7, 9, 10, 20, 24, 27, 30, 32, 36,37,38,39,40, 42, 45, 47, 50, 51, 54, 55] and the references therein. By using the Mordukhovich/limiting subdifferential, Chuong and Kim established optimality conditions and duality theorems for efficient solutions of a semi-infinite multiobjective optimization problem (SIMP) in [9]. Chuong and Yao obtained optimality conditions and duality for isolated solutions and positively properly efficient solutions of the problem (SIMP) in [10]. Optimality conditions and duality theorems for efficient solutions of a fractional semi-infinite multiobjective optimization problem were given in [7, 47]. Optimality conditions for a convex problem (SIMP) were obtained in [20]. In [24], the authors studied optimality conditions and mixed-type duality for a problem (SIMP). Khanh and Tung established Karush–Kuhn–Tucker optimality conditions for Borwein-proper/firm solutions of a problem (SIMP) with mixed constraints in [30]. The authors of [32] investigated approximate optimality conditions, approximate duality theorems and approximate saddle point theorems for a problem (SIMP). Optimality conditions for approximate solutions of a problem (SIMP) were studied in [50]. Karush–Kuhn–Tucker optimality conditions and duality for a problem (SIMP) were given in [54]. Strong Karush–Kuhn–Tucker optimality conditions for Borwein properly efficient solutions of a problem (SIMP) were obtained in [55].

On the other hand, robust optimization has emerged as a remarkable deterministic framework for studying optimization problems with uncertain data (see, e.g., [2, 3]). By using robust optimization, theoretical and applied aspects of optimization problems with data uncertainty have been investigated by many researchers (see, e.g., [4, 6, 8, 13,14,15,16, 21, 22, 29, 31, 33,34,35, 41, 52, 53] and the references therein). However, only a few publications focus on optimality conditions and duality theorems for semi-infinite optimization problems with data uncertainty. Robust strong duality theorems for a convex nonlinear semi-infinite optimization problem with data uncertainty in the constraints were given in [13]. In [14], the authors obtained necessary and sufficient conditions for stable robust strong duality of a robust linear semi-infinite programming problem. A duality theory for semi-infinite linear programming problems with data uncertainty in the constraints was introduced in [21]. In [22], the authors established dual characterizations of robust solutions for a multiobjective linear semi-infinite programming problem with data uncertainty in the constraints. Optimality conditions and duality theorems for semi-infinite multiobjective optimization problems with data uncertainty were studied in [33]. Approximate optimality conditions and approximate duality theorems for a convex semi-infinite programming problem with data uncertainty were considered in [34]. By using the Clarke subdifferential, approximate optimality conditions and approximate duality theorems for nonsmooth semi-infinite programming problems with data uncertainty were given in [31, 52]. On the other hand, local isolated efficient solutions were originally called “strongly unique solutions” in [12], and subsequently “strictly local (efficient solutions) Pareto minimums” in [25] and “strongly isolated solutions” in [10]. Furthermore, these solutions are not necessarily isolated points; see [28] (Example 1.1). Besides, there have been many studies on isolated efficient solutions and properly efficient solutions for multiobjective optimization problems (see, e.g., [5, 17, 19, 23, 26, 43, 44, 48, 49] and the references therein). Recently, the authors of [43, 44] have studied norm-based robustness for a general vector optimization problem and, in particular, for problems with conic constraints and semi-infinite optimization problems. In these papers, they addressed the relationship between norm-based robust efficiency and isolated/proper efficiency. However, relatively few papers consider isolated efficient solutions and properly efficient solutions for nonsmooth semi-infinite multiobjective optimization problems (see, e.g., [10, 30, 42, 45]). As far as we know, up to now, there is no paper devoted to isolated solutions and properly efficient solutions for nonsmooth robust semi-infinite multiobjective optimization problems.

Inspired by the above observations, we provide some new results on optimality conditions and duality theorems for isolated solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems in terms of the Clarke subdifferentials. The rest of the paper is organized as follows. Sections 1 and 2 present the introduction, notation and preliminaries. In Sect. 3, we establish optimality conditions for strongly isolated solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems. In Sect. 4, we study Mond–Weir-type dual problems with respect to nonsmooth robust semi-infinite multiobjective optimization problems. In Sect. 5, we provide applications to nonsmooth robust fractional semi-infinite multiobjective problems and nonsmooth robust semi-infinite minimax optimization problems. Finally, conclusions are given in Sect. 6.

2 Preliminaries

Throughout the paper, we use the standard notation of variational analysis in [11, 46]. Let us first recall some notation and preliminary results which will be used throughout this paper. Let \(\mathbb {R}^n\) denote the \(n\)-dimensional Euclidean space equipped with the usual Euclidean norm \(||\cdot ||\). The notation \(\left\langle {\cdot ,\cdot }\right\rangle \) signifies the inner product in the space \(\mathbb {R}^n\). Let D be a nonempty subset of \(\mathbb {R}^n\). The closure and the interior of D are denoted by \(\text{ cl }D\) and \(\text{ int }D\), respectively. The symbol \(\mathbb {B}\) stands for the closed unit ball in \(\mathbb {R}^n\). As usual, the polar cone of D is the set

$$\begin{aligned} D^\circ :=\left\{ y\in \mathbb {R}^n\mid \left\langle {y,x}\right\rangle \leqslant 0,\forall x\in D\right\} . \end{aligned}$$

Besides, the nonnegative (resp., nonpositive) orthant cone of Euclidean space \(\mathbb {R}^n\) is denoted by \(\mathbb {R}_+^n=\{(x_1,\cdots ,x_n)\mid x_i\geqslant 0,i=1,\cdots ,n\}\) (resp., \(\mathbb {R}_-^n\)) for \(n\in \mathbb {N}:=\{1,2,\cdots \}\), while \(\text{ int }\mathbb {R}_+^n\) is used to indicate the topological interior of \(\mathbb {R}_+^n\).

Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\). We say that f is a locally Lipschitz function if, for any \(\bar{x}\in \mathbb {R}^n\), there exist a constant \(L>0\) and a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that

$$\begin{aligned} |f(x_1)-f(x_2)|\leqslant L||x_1-x_2||,\forall x_1,x_2\in U. \end{aligned}$$

The Clarke generalized directional derivative of f at \(\bar{x}\in \mathbb {R}^n\) in the direction \(d\in \mathbb {R}^n\) is defined as follows:

$$\begin{aligned} f^C(\bar{x};d):=\limsup _{x\rightarrow \bar{x},t\downarrow 0}\dfrac{f(x+td)-f(x)}{t}. \end{aligned}$$

The Clarke subdifferential of f at \(\bar{x}\in \mathbb {R}^n\) is defined as follows:

$$\begin{aligned} \partial ^Cf(\bar{x}):=\{y\in \mathbb {R}^n\mid f^C(\bar{x};d)\geqslant \left\langle {y,d} \right\rangle ,\forall d\in \mathbb {R}^n \}. \end{aligned}$$
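
For illustration, the following Python sketch estimates the Clarke generalized directional derivative of the sample function \(f=|\cdot |\) at \(\bar{x}=0\) by random sampling and then keeps the candidate subgradients that satisfy the defining inequality; the test function, sampling radius, candidate grid and tolerance are illustrative choices only, so this is a heuristic approximation rather than an exact computation.

```python
import numpy as np

# Heuristic sketch: estimate f^C(xbar; d) for f(x) = |x| at xbar = 0 by sampling
# x near xbar and small t > 0, then keep the candidate subgradients y that satisfy
# <y, d> <= f^C(xbar; d) for the sampled directions d.

def f(x):
    return np.abs(x)

def clarke_dir_deriv(f, xbar, d, radius=1e-3, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    xs = xbar + radius * (2.0 * rng.random(n_samples) - 1.0)   # x -> xbar
    ts = radius * rng.random(n_samples) + 1e-9                 # t -> 0+
    return np.max((f(xs + ts * d) - f(xs)) / ts)

xbar = 0.0
candidates = np.linspace(-1.5, 1.5, 13)
directions = (-1.0, 1.0)
subdiff = [float(y) for y in candidates
           if all(y * d <= clarke_dir_deriv(f, xbar, d) + 1e-6 for d in directions)]
print(subdiff)   # the surviving candidates lie in [-1, 1], the Clarke subdifferential of |.| at 0
```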

Let S be a nonempty closed subset of \(\mathbb {R}^n\). The Clarke tangent cone to S at \(\bar{x}\in S\) is defined by

$$\begin{aligned} T^C(\bar{x};S):=\{v\in \mathbb {R}^n\mid d_S^C(\bar{x};v)=0\}, \end{aligned}$$

where \(d_S\) denotes the distance function to S. The Clarke normal cone to S at \(\bar{x}\in S\) is defined by

$$\begin{aligned} N^C(\bar{x};S):=T^C(\bar{x};S)^\circ . \end{aligned}$$

A function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) is said to be convex if, for all \(x,y\in \mathbb {R}^n\) and all \(\lambda \in [0,1]\),

$$\begin{aligned} f(\lambda x+(1-\lambda )y)\leqslant \lambda f(x)+(1-\lambda )f(y). \end{aligned}$$

If f is a convex function and \({\bar{x}}\in \mathbb {R}^n\), then the Clarke subdifferential reduces to the subdifferential of convex analysis, that is,

$$\begin{aligned} \partial ^Cf(\bar{x})=\{y\in \mathbb {R}^n\mid f(x)-f(\bar{x})\ge \left\langle {y,x-\bar{x}} \right\rangle ,\forall x\in \mathbb {R}^n\}. \end{aligned}$$

Suppose that S is a nonempty closed convex subset of \(\mathbb {R}^n\) and \(\bar{x}\in S\). Then \(N^C(\bar{x};S)\) coincides with the normal cone in the sense of convex analysis, namely

$$\begin{aligned} N(\bar{x};S):=\{z\in \mathbb {R}^n\mid \left\langle {z,y-\bar{x}} \right\rangle \leqslant 0,\forall y\in S\}. \end{aligned}$$

Let T be a nonempty infinite index set and let \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\), be convex compact sets. Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R}\) for all \(t\in T\). We say that \(g_t,t\in T\), are locally Lipschitz functions with respect to x uniformly in \(t\in T\) if, for any \(\bar{x}\in \mathbb {R}^n\), there exist a constant \(L>0\) and a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that

$$\begin{aligned} |g_t(x_1,v_t)-g_t(x_2,v_t)|\leqslant L||x_1-x_2||,\forall x_1,x_2\in U,\forall v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

The following lemmas will be used in the sequel.

Lemma 1

(See [11], Corollary page 52) Let S be a nonempty subset of \(\mathbb {R}^n\) and \(\bar{x}\in S\). Suppose that \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) is a locally Lipschitz function near \(\bar{x}\) and attains a minimum over S at \(\bar{x}\). Then,

$$\begin{aligned} 0\in \partial ^Cf(\bar{x})+N^C(\bar{x};S). \end{aligned}$$

Lemma 2

(See [11], Propositions 2.3.1 and 2.3.3) Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\). Then, the following assertions hold:

  1. (i)

    \(\partial ^C(\alpha f_k)(\bar{x})=\alpha \partial ^Cf_k(\bar{x}),\forall \alpha \in \mathbb {R}\).

  2. (ii)

    \(\partial ^C(f_1+\cdots +f_m)(\bar{x})\subset \partial ^Cf_1(\bar{x})+\cdots +\partial ^Cf_m(\bar{x})\).

Lemma 3

(See [11], Proposition 2.3.12 and [50], Lemma 2.3) Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\). Then, the function \(\varphi (\cdot ):=\max \{f_k(\cdot )\mid k=1,\cdots ,m\}\) is a locally Lipschitz function around \(\bar{x}\), and one has

$$\begin{aligned} \begin{array}{ll} \partial ^C\varphi (\bar{x})\subset &{}\bigcup \{\displaystyle \sum _{k=1}^{m}\alpha _k\partial ^Cf_k(\bar{x})\mid (\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m,\displaystyle \sum _{k=1}^{m}\alpha _k=1,\\ &{}\alpha _k\left[ f_k(\bar{x})-\varphi (\bar{x})\right] =0\}. \end{array} \end{aligned}$$

Lemma 4

(See [11], Proposition 2.3.14) Let \(f,g:\mathbb {R}^n\rightarrow \mathbb {R}\) be locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\), and suppose that \(g(\bar{x})\ne 0\). Then \(\dfrac{f}{g}\) is a locally Lipschitz function near \(\bar{x}\) and one has

$$\begin{aligned} \partial ^C\left( \dfrac{f}{g}\right) (\bar{x})\subset \dfrac{g(\bar{x})\partial ^Cf(\bar{x})-f(\bar{x})\partial ^Cg(\bar{x})}{g^2(\bar{x})}. \end{aligned}$$

In this paper, we consider a nonsmooth semi-infinite multiobjective optimization problem with data uncertainty in constraints:

$$\begin{aligned} \text{(USIMP) }\,\,\,\,\begin{array}{l} \min f(x):=\left( f_1(x),\cdots ,f_m(x)\right) ,\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall t\in T,\,x\in \Omega , \end{array} \end{aligned}$$

where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions with \(f:=(f_1,\cdots ,f_m)\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz functions with respect to x uniformly in \(t\in T\) and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are the convex compact sets.

The uncertainty set-valued mapping \(\mathcal {V}:T \rightrightarrows \mathbb {R}^q\) is defined as \(\mathcal {V}(t):=\mathcal {V}_t\) for all \(t\in T\). The notation \(v\in \mathcal {V}\) means that v is a selection of \(\mathcal {V}\), i.e., \(v:T\rightarrow \mathbb {R}^q\) and \(v_t\in \mathcal {V}_t\) for all \(t\in T\). So, the uncertainty set is the graph of \(\mathcal {V}\), that is, \(\text{ gph }\mathcal {V}:=\{(t,v_t)\mid v_t\in \mathcal {V}_t,t\in T\}\).

The robust counterpart of the problem (USIMP) is as follows:

$$\begin{aligned} \text{(RSIMP) }\,\,\,\,\begin{array}{l} \min f(x):=\left( f_1(x),\cdots ,f_m(x)\right) ,\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T,\,x\in \Omega . \end{array} \end{aligned}$$

The feasible set of the problem (RSIMP) is defined by

$$\begin{aligned} F:=\{x\in \Omega \mid g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T\}. \end{aligned}$$
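
As a small illustration, the Python sketch below checks membership in F approximately by discretizing T and each \(\mathcal {V}_t\); the data \(g_t(x,v_t)=v_tx^2\), \(\mathcal {V}_t=[2-t,2+t]\), \(T=[0,1]\) and \(\Omega =(-\infty ,0]\) are those of Example 1 below, and the grid sizes are arbitrary.

```python
import numpy as np

# Sketch of an approximate robust feasibility test for (RSIMP): x belongs to F
# when x is in Omega and g_t(x, v_t) <= 0 for every sampled t and v_t.
# Data as in Example 1 below; T and each V_t are discretized, so the test is heuristic.

def g(x, t, v):
    return v * x**2

def robust_feasible(x, n_t=50, n_v=50):
    if x > 0.0:                                   # Omega = (-inf, 0]
        return False
    for t in np.linspace(0.0, 1.0, n_t):
        for v in np.linspace(2.0 - t, 2.0 + t, n_v):
            if g(x, t, v) > 0.0:
                return False
    return True

print(robust_feasible(0.0))    # True:  every sampled constraint holds at x = 0
print(robust_feasible(-0.5))   # False: g_t(-0.5, v_t) = 0.25 * v_t > 0
```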

Definition 1

A point \(\bar{x}\in F\) is called

  1. (i)

    ([10]) a local efficient solution of the problem (RSIMP) if there exists a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that

    $$\begin{aligned} f(x)-f(\bar{x})\notin -\mathbb {R}_+^m\backslash \{0\},\forall x\in U\cap F. \end{aligned}$$
  2. (ii)

    ([18]) a local isolated efficient solution of the problem (RSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and a constant \(\nu >0\) such that

    $$\begin{aligned} \max _{1\leqslant k\leqslant m}\{f_k(x)-f_k(\bar{x})\}\geqslant \nu ||x-\bar{x}||,\forall x\in U\cap F. \end{aligned}$$
  3. (iii)

    ([10]) a local positively properly efficient solution of the problem (RSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta :=(\beta _1,\cdots ,\beta _m)\in \text{ int }\mathbb {R}_+^m\) such that

    $$\begin{aligned} \left\langle {\beta ,f(x)}\right\rangle \geqslant \left\langle {\beta ,f(\bar{x})}\right\rangle ,\forall x\in U\cap F. \end{aligned}$$

When \(U:=\mathbb {R}^n\), one has the concepts of a global efficient solution, a global isolated efficient solution and a global positively properly efficient solution for the problem (RSIMP).

Let \(\mathbb {R}^{(T)}\) be the linear space given below

$$\begin{aligned} \mathbb {R}^{(T)}:=\{\lambda =(\lambda _t)_{t\in T}\mid \lambda _t=0 \text{ for all but finitely many } t\in T\}. \end{aligned}$$

Let \(\mathbb {R}_+^{(T)}\) be the positive cone in \(\mathbb {R}^{(T)}\) defined by

$$\begin{aligned} \mathbb {R}_+^{(T)}:=\left\{ \lambda =(\lambda _t)_{t\in T}\in \mathbb {R}^{(T)}\mid \lambda _t\geqslant 0 \text{ for } \text{ all } t\in T\right\} . \end{aligned}$$

With \(\lambda \in \mathbb {R}^{(T)}\), its supporting set \(T(\lambda ):=\{t\in T\mid \lambda _t\ne 0\}\) is a finite subset of T. Given \(z_t\in Z,t\in T\), where Z is a real linear space, we understand that

$$\begin{aligned} \sum _{t\in T}\lambda _tz_t=\left\{ \begin{array}{cl} \displaystyle \sum _{t\in T(\lambda )}\lambda _tz_t,&{}\text{ if } \,\,\, T(\lambda )\ne \emptyset ,\\ 0,&{}\text{ if } \,\,\,T(\lambda )=\emptyset . \end{array} \right. \end{aligned}$$

For \(g_t,t\in T\),

$$\begin{aligned} \sum _{t\in T}\lambda _tg_t=\left\{ \begin{array}{cl} \displaystyle \sum _{t\in T(\lambda )}\lambda _tg_t,&{}\text{ if } \,\,\, T(\lambda )\ne \emptyset ,\\ 0,&{}\text{ if } \,\,\,T(\lambda )=\emptyset . \end{array} \right. \end{aligned}$$
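
In computations, a multiplier \(\lambda \in \mathbb {R}_+^{(T)}\) can be stored through its finite support only; the short Python sketch below (with an arbitrary choice of the \(z_t\)) illustrates the convention adopted above.

```python
# Sketch: store lambda in R^(T) as a sparse dictionary {t: lambda_t} and sum over
# the supporting set T(lambda) only, with the convention that an empty support gives 0.

def support(lam):
    return {t for t, lt in lam.items() if lt != 0.0}

def weighted_sum(lam, z):
    T_lam = support(lam)
    return sum(lam[t] * z(t) for t in T_lam) if T_lam else 0.0

lam = {0.25: 2.0, 0.5: 0.0, 0.75: 1.0}          # lambda_t = 0 except at t = 0.25, 0.75
print(weighted_sum(lam, z=lambda t: t**2))      # 2*(0.25)^2 + 1*(0.75)^2 = 0.6875
print(weighted_sum({}, z=lambda t: t**2))       # empty support: 0.0
```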

3 Robust Optimality Conditions for Isolated Efficient Solution and Properly Efficient Solution

In this section, we establish optimality conditions for local isolated efficient solutions and local positively properly efficient solutions of the problem (RSIMP).

The following constraint qualification is an extension of Definition 3.2 in [52].

Definition 2

Let \(\bar{x}\in F\). We say that the following robust constraint qualification (RCQ) is satisfied at \(\bar{x}\) if

$$\begin{aligned} N^C\left( \bar{x};F\right) \subseteq \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^Cg_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega ), \end{aligned}$$

where

$$\begin{aligned} A(\bar{x}):=\{\lambda \in \mathbb {R}_+^{(T)}\mid \lambda _tg_t(\bar{x},v_t)=0,\forall v_t\in \mathcal {V}_t,\forall t\in T\} \end{aligned}$$
(1)

is the set of active constraint multipliers at \(\bar{x}\).

Now, we propose a necessary optimality condition for a local isolated efficient solution of the problem (RSIMP) under the qualification condition (RCQ).

Theorem 1

Let \(\bar{x}\in F\) be a local isolated efficient solution of the problem (RSIMP) for some \(\nu >0\). Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are convex functions and the qualification condition (RCQ) holds at \(\bar{x}\). Then, there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1\), \(v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \nu \mathbb {B}\subset \sum _{k=1}^{m}{\alpha _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

Proof

For the given \(\nu >0\), let

$$\begin{aligned} \psi (x):=\max _{1\leqslant k\leqslant m}\left\{ f_k(x)-f_k(\bar{x})\right\} -\nu ||x-\bar{x}|| ,\forall x\in \mathbb {R}^n. \end{aligned}$$

Since \(\bar{x}\in F\) is a local isolated efficient solution of the problem (RSIMP), there exists a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that

$$\begin{aligned} \psi (x)\geqslant \psi (\bar{x})=0,\forall x\in U\cap F. \end{aligned}$$

It follows easily that \(\bar{x}\) is a local minimizer of the following scalar problem

$$\begin{aligned} \min _{x\in F}\psi (x). \end{aligned}$$

Note further that \(||\cdot -\bar{x}||\) is a convex function. Thus, using Corollary 1 in [1], we deduce that

$$\begin{aligned} \partial ^C\left( \nu ||\cdot -\bar{x}||\right) (\bar{x})\subset \partial ^C\left( \displaystyle \max _{1\leqslant k\leqslant m}\left\{ f_k(\cdot )-f_k(\bar{x}) \right\} \right) (\bar{x})+N^C(\bar{x};F). \end{aligned}$$

Since \(\partial ^C\left( \nu ||\cdot -\bar{x}||\right) (\bar{x})=\nu \mathbb {B}\) for all \(\nu >0\), it follows that

$$\begin{aligned} \nu \mathbb {B}\subset \partial ^C\left( \displaystyle \max _{1\leqslant k\leqslant m}\left\{ f_k(\cdot )-f_k(\bar{x}) \right\} \right) (\bar{x})+N^C(\bar{x};F). \end{aligned}$$
(2)

Thanks to Lemma 3, we have

$$\begin{aligned} \begin{array}{l} \partial ^C\left( \displaystyle \max _{1\leqslant k\leqslant m}\left\{ f_k(\cdot )-f_k(\bar{x})\right\} \right) (\bar{x})\\ \quad \subset \left\{ \displaystyle \sum _{k=1}^{m}\alpha _k\partial ^Cf_k(\bar{x})\mid \alpha _k\geqslant 0,k=1,\cdots ,m,\sum _{k=1}^{m}\alpha _k=1\right\} . \end{array} \end{aligned}$$
(3)

Since the qualification condition (RCQ) holds at \(\bar{x}\in F\), we have

$$\begin{aligned} N^C\left( \bar{x};F\right) \subseteq \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega ), \end{aligned}$$
(4)

where

$$\begin{aligned} A(\bar{x}):=\left\{ \lambda \in \mathbb {R}_+^{(T)}\mid \lambda _tg_t(\bar{x},v_t)=0,\forall v_t\in \mathcal {V}_t,\forall t\in T\right\} . \end{aligned}$$

It follows from (2)–(4) that

$$\begin{aligned} \begin{array}{ll} \nu \mathbb {B}\subset &{}\left\{ \displaystyle \sum _{k=1}^{m}\alpha _k\partial ^Cf_k(\bar{x})\mid \alpha _k\geqslant 0,k=1,\cdots ,m,\sum _{k=1}^{m}\alpha _k=1\right\} \\ &{}+\bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] +N^C(\bar{x};\Omega ). \end{array} \end{aligned}$$

Therefore, there exist \(\alpha \in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \nu \mathbb {B}\subset \sum _{k=1}^{m}{\alpha _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

The proof is complete. \(\square \)

The following simple example shows that the qualification condition (RCQ) is essential in Theorem 1.

Example 1

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=f_2(x)=x^2,x\in \mathbb {R}. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=v_tx^2, x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F\). Then, it is easy to see that \(\bar{x}\) is a global isolated efficient solution of the problem (RSIMP). Indeed, we have

$$\begin{aligned} \max _{1\leqslant k\leqslant 2}\{f_k(x)-f_k(\bar{x})\}=x^2\geqslant \nu |x|=\nu ||x-\bar{x}||,\forall \nu >0,\forall x\in F. \end{aligned}$$

Besides, take \(\nu =1\), \(\mathbb {B}=[-1,1]\) and \(\alpha =(\alpha _1,\alpha _2)\in \mathbb {R}_+^2\) with \(\alpha _1+\alpha _2=1\). We have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\), \(\partial ^Cf_k(\bar{x})=\{0\},k=1,2\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\) for any \(v_t\in \mathcal {V}_t,t\in T\). It is easy to see that

$$\begin{aligned} \nu \mathbb {B}=[-1,1]\not \subset [0,+\infty )=\displaystyle \sum _{k=1}^{2}\alpha _k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{aligned}$$

for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). The reason is that the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\). Indeed, one has

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^Cg_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega )=[0,+\infty ). \end{aligned}$$

However, \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Clearly, the qualification condition (RCQ) does not hold at \(\bar{x}\).
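
The failure of the inclusion in Example 1 can also be checked numerically; the following sketch merely samples \(\nu \mathbb {B}\) and tests membership in \([0,+\infty )\), the right-hand side computed above.

```python
import numpy as np

# Sketch for Example 1: the right-hand side of the necessary condition reduces to
# N^C(0; (-inf, 0]) = [0, +inf) because all subdifferentials at xbar = 0 are {0};
# sampling nu * B = [-nu, nu] shows the required inclusion fails for every nu > 0.

in_rhs = lambda z: z >= 0.0                    # membership in [0, +inf)

for nu in (0.5, 1.0, 2.0):
    ball = np.linspace(-nu, nu, 21)
    print(nu, all(in_rhs(z) for z in ball))    # False each time: e.g. -nu is not covered
```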

The following simple example proves that, in general, a feasible point may satisfy the qualification condition (RCQ), but if this point is not a global isolated efficient solution of the problem (RSIMP), then

$$\begin{aligned} \nu \mathbb {B}\subset \displaystyle \sum _{k=1}^{m}\alpha _k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ) \end{aligned}$$

does not hold.

Example 2

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=f_2(x)=x+1,x\in \mathbb {R}. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=-v_tx^2, x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), it is easy to see that \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\), \(\partial ^Cf_k(\bar{x})=\{1\},k=1,2\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},\forall v_t\in \mathcal {V}_t,t\in T\). Therefore, we have

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^Cg_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega )=[0,+\infty ). \end{aligned}$$

Moreover, we have \(N^C(\bar{x};F)=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\). Clearly, the qualification condition (RCQ) holds at \(\bar{x}=0\). Besides, take \(\nu =1\), \(\mathbb {B}=[-1,1]\) and \(\alpha =(\alpha _1,\alpha _2)\in \mathbb {R}_+^2\) with \(\alpha _1+\alpha _2=1\). Then \(\partial ^Cf_k(\bar{x})=\{1\},k=1,2\) and

$$\begin{aligned} \begin{array}{ll} \nu \mathbb {B}&{}=[-1,1]\not \subset [1,+\infty )=\{1\}+[0,+\infty )\\ &{}=\displaystyle \sum _{k=1}^{2}\alpha _k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{array} \end{aligned}$$

Hence, condition

$$\begin{aligned} \nu \mathbb {B}\subset \displaystyle \sum _{k=1}^{2}\alpha _k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ) \end{aligned}$$

does not hold. The reason is that \(\bar{x}=0\) is not a global isolated efficient solution of the problem (RSIMP). Indeed, we can choose \(x=-2\in F=(-\infty ,0]\). Clearly,

$$\begin{aligned} \max _{1\leqslant k\leqslant 2}\{f_k(x)-f_k(\bar{x})\}=x+1-1=-2<2\nu =\nu ||x-\bar{x}||,\forall \nu >0. \end{aligned}$$

Now, we propose a necessary optimality condition for a local positively properly efficient solution of the problem (RSIMP) under the qualification condition (RCQ).

Theorem 2

Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RSIMP). Suppose that the qualification condition (RCQ) holds at \(\bar{x}\). Then, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} 0\in \sum _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

Proof

Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RSIMP). Then there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta :=(\beta _1,\cdots ,\beta _m)\in \textrm{int}\mathbb {R}_+^m\) such that

$$\begin{aligned} \left\langle {\beta ,f(x)}\right\rangle \geqslant \left\langle {\beta ,f(\bar{x})}\right\rangle ,\forall x\in U\cap F. \end{aligned}$$

Then, for all \(x\in U\cap F\),

$$\begin{aligned} \sum _{k=1}^{m}\beta _kf_k(x)\geqslant \sum _{k=1}^{m}\beta _kf_k(\bar{x}). \end{aligned}$$
(5)

For any \(x\in \mathbb {R}^n\), set

$$\begin{aligned} \Phi (x):=\sum _{k=1}^{m}\beta _kf_k(x). \end{aligned}$$

Applying (5) we deduce that \(\bar{x}\) is a local minimizer of the following problem

$$\begin{aligned} \min _{x\in F}\Phi (x). \end{aligned}$$

Since the function \(\Phi \) is locally Lipschitz around \(\bar{x}\), we deduce from Lemma 1 that

$$\begin{aligned} 0\in \partial ^C\Phi (\bar{x})+N^C(\bar{x};F). \end{aligned}$$
(6)

According to Lemma 2, we have

$$\begin{aligned} \partial ^C\Phi (\bar{x})=\partial ^C\left( \sum _{k=1}^{m}\beta _kf_k(\cdot )\right) (\bar{x})\subset \sum _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x}). \end{aligned}$$
(7)

Since the qualification condition (RCQ) holds at \(\bar{x}\in F\), we have

$$\begin{aligned} N^C\left( \bar{x};F\right) \subseteq \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega ), \end{aligned}$$
(8)

where

$$\begin{aligned} A(\bar{x}):=\{\lambda \in \mathbb {R}_+^{(T)}\mid \lambda _tg_t(\bar{x},v_t)=0,\forall v_t\in \mathcal {V}_t,\forall t\in T\}. \end{aligned}$$

It follows from (6)–(8) that

$$\begin{aligned} 0\in \sum _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega ). \end{aligned}$$

Therefore, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} 0\in \sum _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

The proof is complete. \(\square \)

Now, we introduce a concept of the robust (KKT) condition for the problem (RSIMP).

Definition 3

A point \(\bar{x}\in F\) is said to satisfy the robust (KKT) condition with respect to the problem (RSIMP) if there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} 0\in \sum _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

The following simple example proves that a point satisfying the robust (KKT) condition is not necessarily a global positively properly efficient solution of the problem (RSIMP) even in the smooth case.

Example 3

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=f_2(x)=x^3,x\in \mathbb {R}. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=-v_tx^2, x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), we have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\), \(\partial ^Cf_k(\bar{x})=\{0\},k=1,2\), and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},v_t\in \mathcal {V}_t,t\in T\). On the other hand, taking \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\), it is easy to see that

$$\begin{aligned} 0\in [0,+\infty )=\displaystyle \sum _{k=1}^{2}\beta _k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{aligned}$$

for all \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). Thus, the robust (KKT) condition is satisfied at \(\bar{x}\). However, \(\bar{x}=0\in F\) is not a global positively properly efficient solution of the problem (RSIMP). To see this, we can choose \(x=-1\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\). Then, it is easy to see that

$$\begin{aligned} \sum _{k=1}^{2}\beta _kf_k(x)=-(\beta _1+\beta _2)<0=\sum _{k=1}^{2}\beta _kf_k(\bar{x}). \end{aligned}$$
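
The dominance argument at the end of Example 3 can be confirmed numerically; in the sketch below the grid of weight vectors \(\beta \) is an arbitrary sample of \(\text{ int }\mathbb {R}_+^2\).

```python
import numpy as np

# Sketch for Example 3: xbar = 0 satisfies the robust (KKT) condition, yet for every
# sampled strictly positive weight vector beta the weighted objective is smaller at x = -1.

f = lambda x: np.array([x**3, x**3])

xbar, x = 0.0, -1.0
for b1 in np.linspace(0.1, 0.9, 9):
    beta = np.array([b1, 1.0 - b1])            # beta in int R^2_+
    assert beta @ f(x) < beta @ f(xbar)        # -(beta_1 + beta_2) < 0
print("x = -1 dominates xbar = 0 for every sampled beta")
```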

Before we discuss a sufficient condition for a global positively properly efficient solution of the problem (RSIMP), we introduce the following concepts of generalized convexity, which are inspired by [37].

Definition 4

We say that \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) are quasiconvex on \(\Omega \) at \(\bar{x}\in \Omega \) if for all \(x\in \Omega \),

$$\begin{aligned} g_t(x,v_t)\leqslant g_t(\bar{x},v_t)\Rightarrow \left\langle {x_t,x-\bar{x}}\right\rangle \leqslant 0,\forall x_t\in \partial _x^Cg_t(\bar{x},v_t),\forall v_t\in \mathcal {V}_t,\forall t\in T. \end{aligned}$$

Definition 5

We say that \(f:=(f_1,\cdots ,f_m)\) is pseudoconvex on \(\Omega \) at \(\bar{x}\in \Omega \) if for all \(x\in \Omega \), there exist \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) such that

$$\begin{aligned} \left\langle {x_k,x-\bar{x}}\right\rangle \geqslant 0\Rightarrow f_k(x)\ge f_k(\bar{x}),k=1,\cdots ,m. \end{aligned}$$

Now, we will give a sufficient condition for a global positively properly efficient solution of the problem (RSIMP).

Theorem 3

Assume that \(\Omega \) is a convex set and \(\bar{x}\in F\) satisfies the robust (KKT) condition. If f is pseudoconvex on \(\Omega \) at \(\bar{x}\) and functions \(g_t,t\in T\) are quasiconvex on \(\Omega \) at \(\bar{x}\), then \(\bar{x}\in F\) is a global positively properly efficient solution of the problem (RSIMP).

Proof

Since \(\bar{x}\in F\) satisfies the robust (KKT) condition, there exist \(\beta \in \text{ int }\mathbb {R}_+^m\) and \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m,x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T,\lambda \in A(\bar{x})\) defined in (1), as well as \(w\in N^C(\bar{x};\Omega )\) such that

$$\begin{aligned} \sum _{k=1}^{m}\beta _kx_k+\sum _{t\in T}\lambda _tx_t+w=0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} \left\langle {\sum _{k=1}^{m}\beta _kx_k,x-\bar{x}}\right\rangle +\left\langle {\sum _{t\in T}\lambda _tx_t,x-\bar{x}}\right\rangle +\left\langle {w,x-\bar{x}}\right\rangle =0. \end{aligned}$$
(9)

Since \(\Omega \) is a convex set and \(w\in N^C(\bar{x};\Omega )\), it follows that, for any \(x\in \Omega \),

$$\begin{aligned} \left\langle {w,x-\bar{x}}\right\rangle \leqslant 0. \end{aligned}$$

From (9) it follows that

$$\begin{aligned} \left\langle {\sum _{k=1}^{m}\beta _kx_k,x-\bar{x}}\right\rangle +\left\langle {\sum _{t\in T}\lambda _tx_t,x-\bar{x}}\right\rangle \geqslant 0, \end{aligned}$$

which means that

$$\begin{aligned} \left\langle {\sum _{k=1}^{m}\beta _kx_k,x-\bar{x}}\right\rangle \geqslant -\left\langle {\sum _{t\in T}\lambda _tx_t,x-\bar{x}}\right\rangle . \end{aligned}$$
(10)

Moreover, since \(\lambda \in A(\bar{x})\), we have \(\lambda _tg_t(\bar{x},v_t)=0\) for all \(t\in T\). Note that, for any \(x\in F\), \(\lambda _tg_t(x,v_t)\leqslant 0\) for any \(v_t\in \mathcal {V}_t,t\in T\). It follows that

$$\begin{aligned} \lambda _tg_t(x,v_t)\leqslant 0=\lambda _tg_t(\bar{x},v_t),\forall t\in T. \end{aligned}$$

Since \(g_t\) is quasiconvex on \(\Omega \) at \(\bar{x}\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t\), for all \(t\in T\), we obtain \(\left\langle {\lambda _tx_t,x-\bar{x}}\right\rangle \leqslant 0\) for all \(t\in T\). It follows that

$$\begin{aligned} \left\langle {\sum _{t\in T}\lambda _tx_t,x-\bar{x}}\right\rangle \leqslant 0. \end{aligned}$$
(11)

Combining (10) and (11), we can assert that

$$\begin{aligned} \left\langle {\sum _{k=1}^{m}\beta _kx_k,x-\bar{x}}\right\rangle \ge 0. \end{aligned}$$

Since f is pseudoconvex on \(\Omega \) at \(\bar{x}\), it follows that

$$\begin{aligned} \sum _{k=1}^{m}\beta _kf_k(x)\geqslant \sum _{k=1}^{m}\beta _kf_k(\bar{x}). \end{aligned}$$

Therefore, \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP). The proof is complete. \(\square \)

Now, we present an example to show the importance of the pseudoconvexity in Theorem 3.

Example 4

Let \(x\in \mathbb {R}\) and \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_k(x)=\left\{ \begin{array}{cl} x^2\cos \dfrac{1}{x},&{} \text{ if } x\ne 0,\\ 0,&{} \text{ if } x=0, \end{array} \right. \end{aligned}$$

\(k=1,2\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=-v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =[0,+\infty )\subset \mathbb {R}\). By simple computation, one has \(F=[0,+\infty )\). By selecting \(\bar{x}=0\in F\), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};[0,+\infty ))=(-\infty ,0]\),

$$\begin{aligned} \partial ^Cf_k(\bar{x})=[-1,1],k=1,2 \text{ and } \partial _x^Cg_t(\bar{x},v_t)=\{0\},\forall v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

It can be verified that \(\bar{x}=0\in F\) satisfies the robust (KKT) condition. Indeed, selecting \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\), it is easy to verify that

$$\begin{aligned} \begin{array}{ll} 0\in (-\infty ,1]&{}=[-1,1]+(-\infty ,0]\\ &{}=\displaystyle \sum _{k=1}^{2}\beta _k\partial ^Cf_k(\bar{x})+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{array} \end{aligned}$$

for all \(\lambda \in A(\bar{x})\) and \(v_t\in \mathcal {V}_t,t\in T\). However, \(\bar{x}=0\) is not a global positively properly efficient solution of the problem (RSIMP). In order to see this, let us take \(\hat{x}=\dfrac{1}{\pi }\in F=[0,+\infty )\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then,

$$\begin{aligned} \sum _{k=1}^{2}\beta _kf_k(\hat{x})=-\dfrac{1}{\pi ^2}<0=\sum _{k=1}^{2}\beta _kf_k(\bar{x}). \end{aligned}$$

The reason is that f is not pseudoconvex on \(\Omega \) at \(\bar{x}=0\). Indeed, take \(x=\dfrac{1}{3\pi }\in \Omega =[0,+\infty )\) and \(x_k=0\in \partial ^Cf_k(\bar{x})=[-1,1],k=1,2\). Clearly,

$$\begin{aligned} \left\langle {x_k,x-\bar{x}}\right\rangle =0\geqslant 0,k=1,2. \end{aligned}$$

However,

$$\begin{aligned} f_k(x)=-\dfrac{1}{9\pi ^2}<0=f_k(\bar{x}),k=1,2. \end{aligned}$$
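
A direct numerical check of this failure of pseudoconvexity, with the particular subgradient \(x_k=0\) and the point \(x=\dfrac{1}{3\pi }\) used above, reads as follows (a sketch only).

```python
import math

# Sketch for Example 4: with the subgradient x_k = 0 of f_k at 0, the pseudoconvexity
# premise <x_k, x - 0> >= 0 holds, yet f_k(x) < f_k(0), so the implication fails.

f = lambda x: x**2 * math.cos(1.0 / x) if x != 0 else 0.0

x = 1.0 / (3.0 * math.pi)
print(0.0 * (x - 0.0) >= 0.0)   # True: the premise holds
print(f(x) < f(0.0))            # True: f(x) = -1/(9*pi^2) < 0 = f(0)
```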

4 Robust Duality for Properly Efficient Solution

In this section, we consider the Mond–Weir-type dual problem (MWD) with respect to the problem (RSIMP).

For \(x\in \mathbb {R}^n\), a nonempty closed set \(\Omega \subseteq \mathbb {R}^n\), \(\beta :=(\beta _1,\cdots ,\beta _m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\beta _k=1\), \(\lambda \in \mathbb {R}_+^{(T)}\), \(v_t\in \mathcal {V}_t,t\in T\), \(f:=(f_1,\cdots ,f_m)\) and \(g_T:=(g_t)_{t\in T}\), let us define a vector function \(L:=(L_1,\cdots ,L_m)\) by

$$\begin{aligned} L(x,\beta ,\lambda ):=f(x). \end{aligned}$$

We consider the Mond–Weir-type dual problem (MWD) with respect to the primal problem (RSIMP) as follows:

$$\begin{aligned} \text{(MWD) }\,\,\,\,\left\{ \begin{array}{ll} \max &{}L(y,\beta ,\lambda )\\ \text{ s.t. }&{}0\in \displaystyle \sum _{k=1}^{m}\beta _k\partial ^Cf_k(y)+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(y,v_t)+N^C(y;\Omega ),\\ &{}\displaystyle \sum _{t\in T}\lambda _tg_t(y,v_t)\geqslant 0,v_t\in \mathcal {V}_t,t\in T,\\ &{}y\in \Omega ,\beta \in \text{ int }\mathbb {R}_+^m,\displaystyle \sum _{k=1}^{m}\beta _k=1,\lambda \in \mathbb {R}_+^{(T)}. \end{array} \right. \end{aligned}$$

The feasible set of the problem (MWD) is defined by

$$\begin{aligned} \begin{array}{l} F_\textrm{MWD}=\left\{ (y,\beta ,\lambda )\in \Omega \times \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\mid 0\in \displaystyle \sum _{k=1}^{m}\beta _k\partial ^Cf_k(y)+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(y,v_t)\right. \\ \qquad \left. +N^C(y;\Omega ),\displaystyle \sum _{t\in T}\lambda _tg_t(y,v_t)\geqslant 0,v_t\in \mathcal {V}_t,t\in T,\displaystyle \sum _{k=1}^{m}\beta _k=1\right\} . \end{array} \end{aligned}$$

In what follows, we use the following notation for convenience:

$$\begin{aligned} u\preceq v\Leftrightarrow u-v\in -\mathbb {R}_+^m\backslash \{0\},\,\,\,\,u\not \preceq v \text{ is } \text{ the } \text{ negation } \text{ of } u\preceq v. \end{aligned}$$
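
For concreteness, the ordering \(\preceq \) corresponds to the following componentwise test (a small Python sketch; the sample vectors are those appearing in Example 7 below).

```python
import numpy as np

# Sketch of the ordering used below: u precedes v iff u - v is in -R^m_+ \ {0},
# i.e. u <= v componentwise and u != v.

def preceq(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return bool(np.all(u <= v) and np.any(u < v))

print(preceq([-1.0, -1.0], [0.0, 0.0]))   # True,  as in Example 7 below
print(preceq([0.0, 1.0], [0.0, 0.0]))     # False, i.e. the negation holds
```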

Now, we will introduce the following definitions for a global efficient solution and a global positively properly efficient solution of the problem (MWD).

Definition 6

A point \((\bar{y},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\) is called

  1. (i)

    a global efficient solution of the problem (MWD) if

    $$\begin{aligned} L(y,\beta ,\lambda )-L(\bar{y},\bar{\beta },\bar{\lambda })\notin \mathbb {R}_+^m\backslash \{0\},\forall (y,\beta ,\lambda )\in F_\textrm{MWD}. \end{aligned}$$
  2. (ii)

    a global positively properly efficient solution of the problem (MWD) if there exists \(\theta :=(\theta _1,\cdots ,\theta _m)\in -\text{ int }\mathbb {R}_+^m\) such that

    $$\begin{aligned} \left\langle {\theta ,L(y,\beta ,\lambda )}\right\rangle \geqslant \left\langle {\theta ,L(\bar{y},\bar{\beta },\bar{\lambda })}\right\rangle ,\forall (y,\beta ,\lambda )\in F_\textrm{MWD}. \end{aligned}$$

Motivated by the definition of generalized convexity in [9, 10], we introduce the following concept of generalized convexity:

Definition 7

We say that \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\in \Omega \), if for any \(x\in \Omega \), \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that

$$\begin{aligned}{} & {} f_k(x)-f_k(\bar{x})\geqslant \left\langle {x_k,w} \right\rangle ,k=1,\cdots ,m, \\{} & {} g_t(x,v_t)-g_t(\bar{x},v_t)\geqslant \left\langle {x_t,w} \right\rangle ,\forall t\in T. \end{aligned}$$

Remark 1

Note that, if \(\Omega \) is a convex set and \(f_k(\cdot ),k=1,\cdots ,m,g_t(\cdot ,v_t),v_t\in \mathcal {V}_t,t\in T\) are convex functions, then \((f,g_T)\) is generalized convex on \(\Omega \) at any \(\bar{x}\in \Omega \) with \(w:=x-\bar{x}\) for each \(x\in \Omega \).

The next example shows that the class of generalized convex functions is properly larger than that of convex functions.

Example 5

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=x^4,f_2(x)=x^2,x\in \mathbb {R}. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[0,2-t],\forall t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t\backslash \{0\},t\in T \text{ and } g_0(x,0)=\left\{ \begin{array}{ll} \dfrac{x}{3},&{} \text{ if } x\geqslant 0,\\ x,&{} \text{ if } x< 0. \end{array} \right. \end{aligned}$$

Consider \(\Omega =\mathbb {R}\) and \(\bar{x}=0\in \Omega \); one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};\mathbb {R})=\{0\}\) and \(T^C(\bar{x};\Omega )=T^C(\bar{x};\mathbb {R})=\mathbb {R}\). It is easy to see that \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\). However, \(g_0(\cdot ,0)\) is not a convex function. Indeed, let \(x_1=1,x_2=-1\in \mathbb {R}\) and choose \(\lambda =\dfrac{1}{2}\in [0,1]\); then we have

$$\begin{aligned} g_0(\lambda x_1+(1-\lambda )x_2,0)=0>-\dfrac{1}{3}=\lambda g_0(x_1,0)+(1-\lambda )g_0(x_2,0). \end{aligned}$$
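
The midpoint computation above can be reproduced directly; the following sketch only evaluates \(g_0(\cdot ,0)\) at the points chosen in Example 5.

```python
# Sketch for Example 5: the midpoint test at x1 = 1, x2 = -1, lambda = 1/2 shows that
# g_0(., 0) violates the convexity inequality.

def g0(x):
    return x / 3.0 if x >= 0 else x

x1, x2, lam = 1.0, -1.0, 0.5
lhs = g0(lam * x1 + (1.0 - lam) * x2)       # g_0(0, 0) = 0
rhs = lam * g0(x1) + (1.0 - lam) * g0(x2)   # 1/6 - 1/2 = -1/3
print(lhs > rhs)                             # True, so g_0(., 0) is not convex
```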

In line with [15], we introduce the following concept of generalized convexity:

Definition 8

We say that \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\in \Omega \), if for any \(x\in \Omega \), \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that

$$\begin{aligned}{} & {} \left\langle {x_k,w} \right\rangle \geqslant 0\Rightarrow f_k(x)\geqslant f_k(\bar{x}),k=1,\cdots ,m, \\{} & {} g_t(x,v_t)\leqslant g_t(\bar{x},v_t)\Rightarrow \left\langle {x_t,w} \right\rangle \leqslant 0,\forall t\in T. \end{aligned}$$

Remark 2

If \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\in \Omega \), then \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\in \Omega \).

The next example shows that the class of pseudogeneralized convex functions is properly larger than that of generalized convex functions.

Example 6

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=\left\{ \begin{array}{ll} \dfrac{x}{3},&{} \text{ if } x\geqslant 0,\\ x,&{} \text{ if } x<0, \end{array} \right. \,\,\ f_2(x)=\left\{ \begin{array}{ll} x,&{} \text{ if } x\geqslant 0,\\ 0,&{} \text{ if } x< 0. \end{array} \right. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[0,2-t],\forall t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T=[0,1]\) be given by

$$\begin{aligned} g_t(x,v_t)=v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t\backslash \{0\},t\in T \text{ and } g_0(x,0):=\left\{ \begin{array}{ll} -\dfrac{x}{3},&{} \text{ if } x<0,\\ -x,&{} \text{ if } x\geqslant 0. \end{array} \right. \end{aligned}$$

Consider \(\Omega =\mathbb {R}\) and \(\bar{x}=0\in \Omega \); one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};\mathbb {R})=\{0\}\) and \(T^C(\bar{x};\Omega )=T^C(\bar{x};\mathbb {R})=\mathbb {R}\). It is easy to see that \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\). Meanwhile, \((f,g_T)\) is not generalized convex on \(\Omega \) at \(\bar{x}\).

Now, we establish the following weak duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).

Theorem 4

Suppose that \(x\in F\) and \((y,\beta ,\lambda )\in F_\textrm{MWD}\). If \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at y, then

$$\begin{aligned} f(x)\not \preceq L(y,\beta ,\lambda ). \end{aligned}$$

Proof

Since \((y,\beta ,\lambda )\in F_\textrm{MWD}\), there exist \(x_k\in \partial ^Cf_k(y),k=1,\cdots ,m,\beta \in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}{\beta _k}=1\) and \(x_t\in \partial _x^Cg_t(y,v_t),v_t\in \mathcal {V}_t,t\in T,\lambda \in \mathbb {R}_+^{(T)}\) such that

$$\begin{aligned} -\left( \sum _{k=1}^{m}\beta _kx_k+\sum _{t\in T}\lambda _tx_t\right) \in N^C(y;\Omega ) \end{aligned}$$
(12)

and

$$\begin{aligned} \sum _{t\in T}\lambda _tg_t(y,v_t)\geqslant 0. \end{aligned}$$
(13)

Let \(x\in F\) and suppose to the contrary that

$$\begin{aligned} f(x)\preceq L(y,\beta ,\lambda ). \end{aligned}$$

Hence \(\left\langle {\beta ,f(x)-f(y)}\right\rangle <0\) due to \(\beta \in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\beta _k=1\). Thus,

$$\begin{aligned} \sum _{k=1}^{m}\beta _kf_k(x)<\sum _{k=1}^{m}\beta _kf_k(y). \end{aligned}$$
(14)

Note that, for \(x\in F\), we have \(g_t(x,v_t)\leqslant 0\) for any \(v_t\in \mathcal {V}_t\) and \(t\in T\). It follows that

$$\begin{aligned} \sum _{t\in T}\lambda _tg_t(x,v_t)\leqslant 0. \end{aligned}$$
(15)

Combining (13) and (15), we obtain

$$\begin{aligned} \sum _{t\in T}\lambda _tg_t(x,v_t)\leqslant \sum _{t\in T}\lambda _tg_t(y,v_t). \end{aligned}$$
(16)

By the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) at \(y\in \Omega \) and (14), (16), for such \(x\in F\subseteq \Omega ,x_k\in \partial ^Cf_k(y),k=1,\cdots ,m,x_t\in \partial _x^Cg_t(y,v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(y;\Omega )\) such that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\beta _k\left\langle {x_k,w}\right\rangle <0 \end{aligned}$$
(17)

and

$$\begin{aligned} \displaystyle \sum _{t\in T}\lambda _t\left\langle {x_t,w}\right\rangle \leqslant 0. \end{aligned}$$
(18)

Combining (17) with (18), we can assert that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\beta _k\left\langle {x_k,w}\right\rangle +\displaystyle \sum _{t\in T}\lambda _t\left\langle {x_t,w}\right\rangle <0. \end{aligned}$$
(19)

On the other hand, it follows from (12) and the relation \(w\in T^C(y;\Omega )\) that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\beta _k\left\langle {x_k,w}\right\rangle +\displaystyle \sum _{t\in T}\lambda _t\left\langle {x_t,w}\right\rangle \ge 0, \end{aligned}$$

which contradicts (19). Thus, \(f(x)\not \preceq L(y,\beta ,\lambda )\). The proof is complete. \(\square \)

The following example shows that the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) imposed in Theorem 4 cannot be removed.

Example 7

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by

$$\begin{aligned} f(x)=\left( f_1(x),f_2(x)\right) , \end{aligned}$$

where \(f_1(x)=f_2(x)=x^3,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=-v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). Let us select \(\bar{x}=-1\in F\). Now, consider the dual problem (MWD). By choosing \(\bar{y}=0\in \Omega ,\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1,\bar{\lambda }\in \mathbb {R}_+^{(T)}\), we have \(N^C(\bar{y};\Omega )=N^C(\bar{y};(-\infty ,0])=[0,+\infty )\) and \(\partial ^Cf_k(\bar{y})=\{0\},k=1,2,\partial _x^Cg_t(\bar{y},v_t)=\{0\},v_t\in \mathcal {V}_t,t\in T\). It is easy to see that

$$\begin{aligned} 0\in [0,+\infty )=\displaystyle \sum _{k=1}^{2}\bar{\beta }_k\partial ^Cf_k(\bar{y})+\displaystyle \sum _{t\in T}\bar{\lambda }_t\partial _x^Cg_t(\bar{y},v_t)+N^C(\bar{y};\Omega ) \end{aligned}$$

and \(\bar{\beta }_1+\bar{\beta }_2=1,\displaystyle \sum _{t\in T}\bar{\lambda }_tg_t(\bar{y},v_t)=0,v_t\in \mathcal {V}_t,t\in T\). Thus, \((\bar{y},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). However, if \(\bar{x}=-1\in F\), then

$$\begin{aligned} \begin{array}{ll} f(\bar{x})&{}=(f_1(\bar{x}),f_2(\bar{x}))=(-1,-1)\preceq (0,0)=(L_1(\bar{y},\bar{\beta },\bar{\lambda }),L_2(\bar{y},\bar{\beta },\bar{\lambda }))\\ &{}=L(\bar{y},\bar{\beta },\bar{\lambda }). \end{array} \end{aligned}$$

The reason is that \((f,g_T)\) is not pseudogeneralized convex on \(\Omega \) at \(\bar{y}=0\). To see this, we can choose \(y=-3\in \Omega \) and \(x_k\in \partial ^Cf_k(\bar{y})=\{0\},k=1,2\). Then, it is easy to see that \(T^C(\bar{y};\Omega )=T^C(\bar{y};(-\infty ,0])=(-\infty ,0]\) and

$$\begin{aligned} \left\langle {x_k,w} \right\rangle =0\geqslant 0,\forall w\in T^C(\bar{y};\Omega ),k=1,2. \end{aligned}$$

However,

$$\begin{aligned} f_k(y)=-27<0=f_k(\bar{y}),k=1,2. \end{aligned}$$

Now, we establish the following strong duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).

Theorem 5

Suppose that \(\bar{x}\in F\) is a local positively properly efficient solution of the problem (RSIMP) such that the qualification condition (RCQ) is satisfied at \(\bar{x}\). Then there exists \((\bar{\beta },\bar{\lambda })\in \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\) such that \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\) and \(f(\bar{x})=L(\bar{x},\bar{\beta },\bar{\lambda })\). If, in addition, \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at any \(y\in \Omega \), then \((\bar{x},\bar{\beta },\bar{\lambda })\) is a global efficient solution of the problem (MWD).

Proof

According to Theorem 2, there exist \(\beta \in \text{ int }\mathbb {R}_+^m\) and \(v_t\in \mathcal {V}_t,t\in T,\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} 0\in \sum _{k=1}^{m}{\beta _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$
(20)

Putting

$$\begin{aligned} \bar{\beta }_k:=\dfrac{\beta _k}{\displaystyle \sum _{k=1}^{m}\beta _k},k=1,\cdots ,m,\bar{\lambda }_t:=\dfrac{\lambda _t}{\displaystyle \sum _{k=1}^{m}\beta _k},t\in T, \end{aligned}$$

one has \(\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\bar{\beta }_k=1\) and \(\bar{\lambda }:=(\bar{\lambda }_t)_{t\in T}\in \mathbb {R}_+^{(T)}\). Furthermore, the assertion in (20) remains valid when the \(\beta _k\)’s and \(\lambda _t\)’s are replaced by the \(\bar{\beta }_k\)’s and \(\bar{\lambda }_t\)’s, respectively. Besides, since \(\lambda \in A(\bar{x})\) defined in (1), we have \(\lambda _tg_t(\bar{x},v_t)=0\) for all \(t\in T\), which implies that \(\displaystyle \sum _{t\in T}\bar{\lambda }_tg_t(\bar{x},v_t)=0\geqslant 0\). Therefore, one has \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). Clearly,

$$\begin{aligned} f(\bar{x})=L(\bar{x},\bar{\beta },\bar{\lambda }). \end{aligned}$$

Since \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at any \(y\in \Omega \), we can apply Theorem 4 to deduce that

$$\begin{aligned} L(\bar{x},\bar{\beta },\bar{\lambda })=f(\bar{x})\not \preceq L(y,\beta ,\lambda ), \end{aligned}$$

for any \((y,\beta ,\lambda )\in F_\textrm{MWD}\). Therefore, one has

$$\begin{aligned} L(y,\beta ,\lambda )-L(\bar{x},\bar{\beta },\bar{\lambda })\notin \mathbb {R}_+^m\backslash \{0\},\forall (y,\beta ,\lambda )\in F_\textrm{MWD}. \end{aligned}$$

This means that \((\bar{x},\bar{\beta },\bar{\lambda })\) is a global efficient solution of the problem (MWD). The proof is complete. \(\square \)

Remark 3

Note that the strong duality result in Theorem 5 is not of the usual form; that is, the solution of the dual problem is only guaranteed to be efficient, not positively properly efficient, although the solution of the primal problem is locally positively properly efficient. As shown in [5] (Example 4.6), in the case when \(\mathcal {V}_t,t\in T\), are singletons and T is a finite set, we cannot in general obtain a positively properly efficient solution of the dual problem, even in the convex framework.

The next example illustrates the importance of the qualification condition (RCQ) imposed in Theorem 5. More precisely, if \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP) at which the qualification condition (RCQ) is not satisfied, then we may not be able to find a pair \((\bar{\beta },\bar{\lambda })\in \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\) such that \((\bar{x},\bar{\beta },\bar{\lambda })\) belongs to the feasible set \(F_\textrm{MWD}\) of the dual problem (MWD).

Example 8

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where

$$\begin{aligned} f_1(x)=f_2(x)=x,x\in \mathbb {R}. \end{aligned}$$

Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F,\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1\). Then, it is easy to show that \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP). Indeed, we have

$$\begin{aligned} \sum _{k=1}^{2}\bar{\beta }_kf_k(x)=x\geqslant 0=\sum _{k=1}^{2}\bar{\beta }_kf_k(\bar{x}),\forall x\in F. \end{aligned}$$

Now, consider the dual problem (MWD). By choosing \(\bar{x}=0\in \Omega ,\bar{\lambda }\in \mathbb {R}_+^{(T)},\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1\), we have

$$\begin{aligned} N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty ) \end{aligned}$$

and \(\partial ^Cf_k(\bar{x})=\{1\},k=1,2,\partial _x^Cg_t(\bar{x},v_t)=\{0\},v_t\in \mathcal {V}_t,\forall t\in T\). It is easy to see that

$$\begin{aligned} \begin{array}{ll} 0&{}\notin \left[ 1,+\infty \right) =\{1\}+[0,+\infty )\\ &{}=\displaystyle \sum _{k=1}^{2}\bar{\beta }_k\partial ^Cf_k(\bar{x})+\displaystyle \sum _{t\in T}\bar{\lambda }_t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{array} \end{aligned}$$

Thus, \((\bar{x},\bar{\beta },\bar{\lambda })\notin F_\textrm{MWD}\). The reason is that the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\in F\). Indeed, we have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\) for any \(v_t\in \mathcal {V}_t,t\in T\),

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega )=[0,+\infty ). \end{aligned}$$

Besides, one has \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Therefore, the qualification condition (RCQ) is not satisfied at \(\bar{x}\).

Finally, we establish the following converse duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).

Theorem 6

Assume that \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). If \(\bar{x}\in F\) and \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\), then \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP).

Proof

Since \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\), there exist \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m,\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}{\bar{\beta }_k}=1\) and \(x_t\in \partial _x^Cg_t(\bar{x},\bar{v}_t),\bar{v}_t\in \mathcal {V}_t,t\in T\), \(\bar{\lambda }:=(\bar{\lambda }_t)_{t\in T}\in \mathbb {R}_+^{(T)}\) such that

$$\begin{aligned} -\left( \sum _{k=1}^{m}\bar{\beta }_kx_k+\sum _{t\in T}\bar{\lambda }_tx_t\right) \in N^C(\bar{x};\Omega ) \end{aligned}$$
(21)

and

$$\begin{aligned} \sum _{t\in T}\bar{\lambda }_tg_t(\bar{x},\bar{v}_t)\geqslant 0. \end{aligned}$$
(22)

Let \(\bar{x}\in F\) and suppose to the contrary that \(\bar{x}\) is not a global positively properly efficient solution of the problem (RSIMP). For such \(\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\), it then follows that there exists \(\hat{x}\in F\) such that

$$\begin{aligned} \left\langle {\bar{\beta },f(\hat{x})}\right\rangle <\left\langle {\bar{\beta },f(\bar{x})}\right\rangle . \end{aligned}$$

Thus,

$$\begin{aligned} \sum _{k=1}^{m}\bar{\beta }_kf_k(\hat{x})<\sum _{k=1}^{m}\bar{\beta }_kf_k(\bar{x}). \end{aligned}$$
(23)

Note that, since \(\hat{x}\in F\), we have \(g_t(\hat{x},\bar{v}_t)\leqslant 0\) for any \(t\in T\). It yields that

$$\begin{aligned} \sum _{t\in T}\bar{\lambda }_tg_t(\hat{x},\bar{v}_t)\leqslant 0. \end{aligned}$$
(24)

From (22) together with (24), we obtain

$$\begin{aligned} \sum _{t\in T}\bar{\lambda }_tg_t(\hat{x},\bar{v}_t)\leqslant \sum _{t\in T}\bar{\lambda }_tg_t(\bar{x},\bar{v}_t). \end{aligned}$$
(25)

By the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) at \(\bar{x}\in \Omega \) together with (23) and (25), for such \(\hat{x}\in F\subseteq \Omega \), \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\), and \(x_t\in \partial _x^Cg_t(\bar{x},\bar{v}_t),\bar{v}_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\bar{\beta }_k\left\langle {x_k,w}\right\rangle <0 \end{aligned}$$
(26)

and

$$\begin{aligned} \displaystyle \sum _{t\in T}\bar{\lambda }_t\left\langle {x_t,w}\right\rangle \leqslant 0. \end{aligned}$$
(27)

Combining (26) with (27), we can assert that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\bar{\beta }_k\left\langle {x_k,w}\right\rangle +\displaystyle \sum _{t\in T}\bar{\lambda }_t\left\langle {x_t,w}\right\rangle <0. \end{aligned}$$
(28)

On the other hand, we deduce from (21) and the relation \(w\in T^C(\bar{x};\Omega )\) that

$$\begin{aligned} \displaystyle \sum _{k=1}^{m}\bar{\beta }_k\left\langle {x_k,w}\right\rangle +\displaystyle \sum _{t\in T}\bar{\lambda }_t\left\langle {x_t,w}\right\rangle \ge 0, \end{aligned}$$

which contradicts (28). This means that \(\bar{x}\in F\) is a global positively properly efficient solution of the problem (RSIMP). The proof is complete. \(\square \)

5 Application

5.1 Application to Semi-infinite Multiobjective Fractional Problem

In this section, we consider a nonsmooth fractional semi-infinite multiobjective optimization problem with data uncertainty in the constraints:

$$\begin{aligned} \text{(UFSIMP) }\,\,\,\,\begin{array}{l} \min f(x):=\left( \dfrac{p_1(x)}{q_1(x)},\cdots ,\dfrac{p_m(x)}{q_m(x)}\right) ,\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall t\in T,\forall x\in \Omega , \end{array} \end{aligned}$$

where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), and \(p_k,q_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions. For the sake of convenience, we further assume that \(q_k(x)>0,k=1,\cdots ,m\) for all \(x\in \Omega \) and that \(p_k(\bar{x})\leqslant 0,k=1,\cdots ,m\) for the reference point \(\bar{x}\in \Omega \). In what follows, we also use the notation \(f:=(f_1,\cdots ,f_m)\), where \(f_k:=\dfrac{p_k}{q_k},k=1,\cdots ,m\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz functions with respect to x uniformly in \(t\in T\) and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are convex compact sets.

The robust counterpart of the problem (UFSIMP) is as follows:

$$\begin{aligned} \text{(RFSIMP) }\,\,\,\,\begin{array}{l} \min f(x):=\left( \dfrac{p_1(x)}{q_1(x)},\cdots ,\dfrac{p_m(x)}{q_m(x)}\right) ,\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T,\forall x\in \Omega . \end{array} \end{aligned}$$

The feasible set of the problem (RFSIMP) is defined by

$$\begin{aligned} F:=\{x\in \Omega \mid g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T\}. \end{aligned}$$

Definition 9

A point \(\bar{x}\in F\) is called a local positively properly efficient solution of the problem (RFSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta \in \text{ int }\mathbb {R}_+^m\) such that

$$\begin{aligned} \left\langle {\beta ,f(x)}\right\rangle \geqslant \left\langle {\beta ,f(\bar{x})}\right\rangle ,\forall x\in U\cap F. \end{aligned}$$

When \(U:=\mathbb {R}^n\), one has the concept of a global positively properly efficient solution for the problem (RFSIMP).

Theorem 7

Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RFSIMP). Suppose that the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \begin{array}{l} 0\in \displaystyle \sum _{k=1}^{m}{\mu _k\left( \partial ^C p_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^C q_k(\bar{x})\right) }+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ),\\ \mu _k:=\dfrac{\beta _k}{q_k(\bar{x})},k=1,\cdots ,m. \end{array} \end{aligned}$$
(29)

Proof

Since \(\bar{x}\in F\) is a local positively properly efficient solution of the problem (RFSIMP), it is also a local positively properly efficient solution of the problem (RSIMP) with \(f_k:=\dfrac{p_k}{q_k},k=1,\cdots ,m\). According to Theorem 2, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} 0\in \sum _{k=1}^{m}{\beta _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$
(30)

Thanks to Lemma 4, for \(k=1,\cdots ,m\), one has

$$\begin{aligned} \begin{array}{ll} \partial ^Cf_k(\bar{x})=\partial ^C\left( \dfrac{p_k}{q_k}\right) (\bar{x})&{}\subset \dfrac{q_k(\bar{x})\partial ^Cp_k(\bar{x})-p_k(\bar{x})\partial ^Cq_k(\bar{x})}{\left[ q_k(\bar{x})\right] ^2}\\ &{}=\dfrac{1}{q_k(\bar{x})}\left( \partial ^Cp_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^Cq_k(\bar{x})\right) . \end{array} \end{aligned}$$
(31)

Combining (30) with (31), we can assert that

$$\begin{aligned} 0\in \displaystyle \sum _{k=1}^{m}\dfrac{\beta _k}{q_k(\bar{x})}\left( \partial ^Cp_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^Cq_k(\bar{x})\right) +\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$

Now, by letting \(\mu _k:=\dfrac{\beta _k}{q_k(\bar{x})}\) for \(k=1,\cdots ,m\), we get

$$\begin{aligned} 0\in \displaystyle \sum _{k=1}^{m}{\mu _k\left( \partial ^C p_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^C q_k(\bar{x})\right) }+\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{aligned}$$

where \(\lambda \in A(\bar{x})\) is as defined in (1). The proof of Theorem 7 is complete. \(\square \)
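The quotient rule (31) is the key calculus step above. As an illustrative sanity check only, and under the simplifying assumption that the data are smooth (so Clarke subdifferentials reduce to gradients), the following Python sketch compares a finite-difference gradient of \(p/q\) with the right-hand side of (31) for hypothetical functions p and q of our own choosing.

```python
# Sketch of the quotient rule used in (31), assuming smooth data:
#   grad (p/q)(x) = (q(x) grad p(x) - p(x) grad q(x)) / q(x)^2.
import numpy as np

def p(x):  return x[0] + 2.0 * x[1]              # hypothetical numerator
def q(x):  return x[0]**2 + x[1]**2 + 1.0        # hypothetical denominator > 0

def num_grad(fun, x, h=1e-6):
    """Central finite-difference gradient."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (fun(x + e) - fun(x - e)) / (2.0 * h)
    return g

x_bar = np.array([0.5, -1.0])
lhs = num_grad(lambda x: p(x) / q(x), x_bar)     # gradient of the quotient
rhs = (q(x_bar) * num_grad(p, x_bar) - p(x_bar) * num_grad(q, x_bar)) / q(x_bar)**2
print(np.allclose(lhs, rhs, atol=1e-5))          # expect True
```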

The following simple example shows that the qualification condition (RCQ) is essential in Theorem 7.

Example 9

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by

$$\begin{aligned} f(x)=\left( \dfrac{p_1(x)}{q_1(x)},\dfrac{p_2(x)}{q_2(x)}\right) , \end{aligned}$$

where \(p_1(x)=p_2(x)=x,q_1(x)=q_2(x)=x^2+1,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\). Let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RFSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then, it is easy to show that \(\bar{x}=0\) is a global positively properly efficient solution of the problem (RFSIMP). Indeed, we have

$$\begin{aligned} \sum _{k=1}^{2}\beta _kf_k(x)=\dfrac{x}{x^2+1}\ge 0=\sum _{k=1}^{2}\beta _kf_k(\bar{x}),\forall x\in F. \end{aligned}$$

Since \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\) at \(\bar{x}=0\) for any \(v_t\in \mathcal {V}_t,t\in T\), one has

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega )=[0,+\infty ). \end{aligned}$$

Moreover, \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Therefore, the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\). Furthermore, for \(\bar{x}=0\) and any \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\), it is easy to see that \(\partial ^Cp_k(\bar{x})=\{1\},\partial ^Cq_k(\bar{x})=\{0\},\mu _k=\dfrac{\beta _k}{q_k(\bar{x})}=\beta _k,k=1,2\), and

$$\begin{aligned} \begin{array}{ll} 0\notin [1,+\infty )&{}=\{1\}+[0,+\infty )\\ &{}=\displaystyle \sum _{k=1}^{2}{\mu _k\left( \partial ^C p_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^Cq_k(\bar{x})\right) }\\ &{}\quad +\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{array} \end{aligned}$$

for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). This means that (29) does not hold, and hence the conclusion of Theorem 7 fails without the qualification condition (RCQ).

The following simple example shows that a feasible point may satisfy the qualification condition (RCQ) and yet fail to satisfy (29) when it is not a global positively properly efficient solution of the problem (RFSIMP).

Example 10

Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by

$$\begin{aligned} f(x)=\left( \dfrac{p_1(x)}{q_1(x)},\dfrac{p_2(x)}{q_2(x)}\right) , \end{aligned}$$

where \(p_1(x)=p_2(x)=x,q_1(x)=q_2(x)=x^2+1,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\). Let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by

$$\begin{aligned} g_t(x,v_t)=-v_tx^2,x\in \mathbb {R},v_t\in \mathcal {V}_t,t\in T. \end{aligned}$$

We consider the problem (RFSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},\forall v_t\in \mathcal {V}_t,t\in T\). Therefore, we have

$$\begin{aligned} \bigcup \limits _{\begin{array}{c} \lambda \in A(\bar{x})\\ v_t\in \mathcal {V}_t \end{array}}{\left[ {\sum \limits _{t\in T} \lambda _t\partial _x^C g_t(\bar{x},v_t) }\right] }+N^C(\bar{x};\Omega )=[0,+\infty ). \end{aligned}$$

Moreover, we have \(N^C(\bar{x};F)=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\). Clearly, the qualification condition (RCQ) holds at \(\bar{x}=0\). Now, for \(\bar{x}=0\in F\) and any \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\), it is easy to see that \(\partial ^Cp_k(\bar{x})=\{1\},\partial ^Cq_k(\bar{x})=\{0\},\mu _k=\dfrac{\beta _k}{q_k(\bar{x})}=\beta _k,k=1,2\), and

$$\begin{aligned} \begin{array}{ll} 0\notin [1,+\infty )&{}=\{1\}+[0,+\infty )\\ &{}=\displaystyle \sum _{k=1}^{2}{\mu _k\left( \partial ^C p_k(\bar{x})-\dfrac{p_k(\bar{x})}{q_k(\bar{x})}\partial ^Cq_k(\bar{x})\right) }\\ &{}\quad +\displaystyle \sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{array} \end{aligned}$$

for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). Hence, condition (29) does not hold. The reason is that \(\bar{x}=0\) is not a global positively properly efficient solution of the problem (RFSIMP). Indeed, for any \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\), we can choose \(x=-1\in F=(-\infty ,0]\). Clearly,

$$\begin{aligned} \sum _{k=1}^{2}\beta _kf_k(x)=\dfrac{x}{x^2+1}=-\dfrac{1}{2}< 0=\sum _{k=1}^{2}\beta _kf_k(\bar{x}). \end{aligned}$$
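The failure of (29) at \(\bar{x}=0\) in Example 10 can also be observed numerically. The following Python sketch (illustrative only) evaluates the weighted objective \(x/(x^2+1)\) on a grid in \(F=(-\infty ,0]\) and locates its minimizer near \(x=-1\) with value \(-1/2\), confirming that \(\bar{x}=0\) is not a global positively properly efficient solution.

```python
# Illustrative grid search for Example 10: the weighted objective
# sum_k beta_k f_k(x) = x / (x^2 + 1) (with beta_1 + beta_2 = 1) is minimized
# on F = (-infinity, 0] at x = -1, not at x_bar = 0, even though (RCQ) holds.
import numpy as np

xs = np.linspace(-5.0, 0.0, 5001)
weighted = xs / (xs**2 + 1.0)
x_min = xs[np.argmin(weighted)]
print(f"grid minimizer: {x_min:.3f}, value: {weighted.min():.3f}")  # about -1.000, -0.500
print("value at x_bar = 0:", 0.0 / (0.0**2 + 1.0))                  # 0.0 > -0.5
```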

5.2 Application to Semi-infinite Minimax Problem

In this section, we consider a nonsmooth semi-infinite minimax optimization problem with data uncertainty in the constraints:

$$\begin{aligned} \text{(UMMP) }\,\,\,\,\begin{array}{l} \displaystyle \min \max _{1\leqslant k\leqslant m} f_k(x),\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall t\in T,\forall x\in \Omega , \end{array} \end{aligned}$$

where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), and \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions with \(f:=(f_1,\cdots ,f_m)\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz functions with respect to x uniformly in \(t\in T\) and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are convex compact sets.

The robust counterpart of the problem (UMMP) is as follows:

$$\begin{aligned} \text{(RMMP) }\,\,\,\,\begin{array}{l} \displaystyle \min \max _{1\leqslant k\leqslant m} f_k(x),\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T,\forall x\in \Omega . \end{array} \end{aligned}$$

The feasible set of the problem (RMMP) is defined by

$$\begin{aligned} F:=\{x\in \Omega \mid g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T\}. \end{aligned}$$

Definition 10

Let \(\varphi (x):=\displaystyle \max _{1\leqslant k\leqslant m}f_k(x),x\in \mathbb {R}^n\). A point \(\bar{x}\in F\) is called a local isolated efficient solution of the problem (RMMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and a constant \(\nu >0\) such that

$$\begin{aligned} \varphi (x)-\varphi (\bar{x})\geqslant \nu ||x-\bar{x}||,\forall x\in U\cap F\backslash \{\bar{x}\}. \end{aligned}$$

When \(U:=\mathbb {R}^n\), one has the concept of a global isolated efficient solution for the problem (RMMP).
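To make Definition 10 concrete, the following Python sketch checks the isolated-efficiency inequality on a toy unconstrained instance of our own choosing (not taken from the paper): \(f_1(x)=|x|\) and \(f_2(x)=2|x|\), so that \(\varphi (x)=2|x|\) and \(\bar{x}=0\) is a global isolated efficient solution with \(\nu =2\).

```python
# Toy illustration of Definition 10 (hypothetical data): for f_1 = |x| and
# f_2 = 2|x|, phi(x) = max(|x|, 2|x|) = 2|x|, and x_bar = 0 satisfies
# phi(x) - phi(0) = 2|x| >= 2 * |x - 0| for all x, i.e. nu = 2 works.
import numpy as np

def phi(x):
    return max(abs(x), 2.0 * abs(x))

nu = 2.0
xs = np.linspace(-3.0, 3.0, 601)
ok = all(phi(x) - phi(0.0) >= nu * abs(x) - 1e-12 for x in xs if x != 0.0)
print("isolated efficiency inequality holds with nu = 2:", ok)   # True
```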

Theorem 8

Let \(\bar{x}\in F\) be a local isolated efficient solution of the problem (RMMP) for some \(\nu >0\). Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are convex functions and the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \nu \mathbb {B}\subset \sum _{k=1}^{m}{\alpha _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{aligned}$$
$$\begin{aligned} \alpha _k\left( f_k(\bar{x})-\max _{1\leqslant k\leqslant m}f_k(\bar{x})\right) =0,k=1,\cdots ,m. \end{aligned}$$

Proof

If \(\bar{x}\in F\) is a local isolated efficient solution of the problem (RMMP), then it is also a local isolated efficient solution of the following problem

$$\begin{aligned} \text{(RSIMP) }\,\,\,\,\begin{array}{l} \displaystyle \min \varphi (x),\\ \text{ s.t. } g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,\forall t\in T,\forall x\in \Omega . \end{array} \end{aligned}$$

By Theorem 1, there exist \(\nu >0\), \(v_t\in \mathcal {V}_t,t\in T\), and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \nu \mathbb {B}\subset \partial ^C\varphi (\bar{x})+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$
(32)

According to Lemma 3, one has

$$\begin{aligned} \begin{array}{ll} \partial ^C\varphi (\bar{x})=&{}\partial ^C\left( \displaystyle \max _{1\leqslant k\leqslant m}f_k\right) (\bar{x})\\ &{}\subset \left\{ \displaystyle \sum _{k=1}^{m}\gamma _k\partial ^Cf_k(\bar{x})\mid (\gamma _1,\cdots ,\gamma _m)\in \mathbb {R}_+^m,\displaystyle \sum _{k=1}^{m}\gamma _k=1,\right. \\ &{}\quad \left. \gamma _k\left( f_k(\bar{x})-\displaystyle \max _{1\leqslant k\leqslant m}f_k(\bar{x})\right) =0\right\} . \end{array} \end{aligned}$$
(33)

By setting \(\alpha _k:=\gamma _k,k=1,\cdots ,m\), we deduce from (32) and (33) that there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that

$$\begin{aligned} \nu \mathbb {B}\subset \sum _{k=1}^{m}{\alpha _k\partial ^Cf_k(\bar{x})}+\sum _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ), \end{aligned}$$
$$\begin{aligned} \alpha _k\left( f_k(\bar{x})-\max _{1\leqslant k\leqslant m}f_k(\bar{x})\right) =0,k=1,\cdots ,m. \end{aligned}$$

The proof is complete. \(\square \)
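The max rule (33) can also be checked numerically in the smooth convex case, where the right-hand side reduces to the convex hull of the gradients of the active functions. The Python sketch below uses hypothetical functions \(f_1,f_2\) (not taken from the paper) and verifies that the directional derivative of \(\varphi =\max \{f_1,f_2\}\) at a point where both functions are active equals the maximum of \(\left\langle {\nabla f_k,d}\right\rangle \) over the active indices.

```python
# Sketch of the max rule behind (33) for smooth convex data: at a point where
# both f_1 and f_2 are active, the directional derivative of phi = max(f_1, f_2)
# in a direction d equals the maximum of <grad f_k, d> over the active k.
import numpy as np

def f1(x): return x[0]**2 + x[1]          # hypothetical convex objectives
def f2(x): return x[0] - x[1]

x_bar = np.array([1.0, 0.0])              # f1(x_bar) = f2(x_bar) = 1: both active
grads = [np.array([2.0 * x_bar[0], 1.0]),  # grad f1(x_bar)
         np.array([1.0, -1.0])]            # grad f2(x_bar)

def phi(x): return max(f1(x), f2(x))

d = np.array([0.5, 1.0])
h = 1e-7
dir_deriv = (phi(x_bar + h * d) - phi(x_bar)) / h     # one-sided difference
max_rule  = max(float(g @ d) for g in grads)          # max over active gradients
print(abs(dir_deriv - max_rule) < 1e-4)               # expect True
```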

6 Conclusion

In this paper, we obtained new robust optimality conditions and robust duality theorems for isolated efficient solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems in terms of Clarke subdifferentials. In addition, some of these results were applied to derive robust optimality conditions for nonsmooth robust fractional semi-infinite multiobjective problems and nonsmooth robust semi-infinite minimax optimization problems. The results obtained in this paper improve the corresponding results in the recent literature.