Abstract
In this paper, we deal with nonsmooth robust semi-infinite multiobjective optimization problems. Both necessary and sufficient optimality conditions are established. We also investigate Mond–Weir-type dual problems under assumptions of generalized convexity. Applications to nonsmooth robust fractional semi-infinite multiobjective optimization problems and nonsmooth robust semi-infinite minimax optimization problems are provided, and several remarks and examples illustrate our results.
1 Introduction
A semi-infinite multiobjective optimization problem is the simultaneous minimization of a finite number of objective functions subject to an infinite number of inequality constraints. Recently, characterizations of the solution set, optimality conditions and duality for semi-infinite multiobjective optimization problems have been investigated by many authors. We refer the readers to the papers [7, 9, 10, 20, 24, 27, 30, 32, 36,37,38,39,40, 42, 45, 47, 50, 51, 54, 55] and the references therein. By using the Mordukhovich/limiting subdifferential, Chuong and Kim established optimality conditions and duality theorems for efficient solutions of a semi-infinite multiobjective optimization problem (SIMP) in [9]. Chuong and Yao obtained optimality conditions and duality for isolated solutions and positively properly efficient solutions of the problem (SIMP) in [10]. Optimality conditions and duality theorems for efficient solutions of a fractional semi-infinite multiobjective optimization problem were given in [7, 47]. Optimality conditions for a convex problem (SIMP) were obtained in [20]. In [24], the authors studied optimality conditions and mixed-type duality for a problem (SIMP). Khanh and Tung established Karush–Kuhn–Tucker optimality conditions for Borwein-proper/firm solutions of a problem (SIMP) with mixed constraints in [30]. In [32], the authors investigated approximate optimality conditions, approximate duality theorems and approximate saddle point theorems for a problem (SIMP). Optimality conditions for approximate solutions of a problem (SIMP) were studied in [50]. The Karush–Kuhn–Tucker optimality conditions and duality for a problem (SIMP) were given in [54]. The strong Karush–Kuhn–Tucker optimality conditions for a Borwein properly efficient solution of a problem (SIMP) were obtained in [55].
On the other hand, robust optimization has emerged as a remarkable deterministic framework for studying optimization problems with uncertain data (see, e.g., [2, 3]). By using robust optimization, theoretical and applied aspects of optimization problems with data uncertainty have been investigated by many researchers (see, e.g., [4, 6, 8, 13,14,15,16, 21, 22, 29, 31, 33,34,35, 41, 52, 53] and the references therein). However, only a few publications focus on optimality conditions and duality theorems for semi-infinite optimization problems with data uncertainty. Robust strong duality theorems for a convex nonlinear semi-infinite optimization problem with data uncertainty in constraints were given in [13]. In [14], the authors obtained necessary and sufficient conditions for stable robust strong duality of a robust linear semi-infinite programming problem. A duality theory for semi-infinite linear programming problems with data uncertainty in constraints was introduced in [21]. In [22], the authors established dual characterizations of robust solutions for a multiobjective linear semi-infinite programming problem with data uncertainty in constraints. Optimality conditions and duality theorems for semi-infinite multiobjective optimization problems with data uncertainty were studied in [33]. Approximate optimality conditions and approximate duality theorems for a convex semi-infinite programming problem with data uncertainty were considered in [34]. By using the Clarke subdifferential, approximate optimality conditions and approximate duality theorems for nonsmooth semi-infinite programming problems with data uncertainty were given in [31, 52]. On the other hand, local isolated efficient solutions were originally called “strongly unique solutions” in [12], later “strictly local Pareto minimums” in [25] and “strongly isolated solutions” in [10].
Furthermore, these solutions are not necessarily isolated points (see Example 1.1 in [28]). Besides, there have been many studies on isolated efficient solutions and properly efficient solutions for multiobjective optimization problems (see, e.g., [5, 17, 19, 23, 26, 43, 44, 48, 49] and the references therein). Recently, the authors of [43, 44] studied norm-based robustness for a general vector optimization problem and, in particular, for problems with conic constraints and semi-infinite optimization problems. In these papers, they addressed the relationship between norm-based robust efficiency and isolated/proper efficiency. However, only a few papers consider isolated efficient solutions and properly efficient solutions for nonsmooth semi-infinite multiobjective optimization problems (see, e.g., [10, 30, 42, 45]). As far as we know, up to now, there has been no paper devoted to isolated solutions and properly efficient solutions for nonsmooth robust semi-infinite multiobjective optimization problems.
Inspired by the above observations, we provide some new results on optimality conditions and duality theorems for isolated solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems in terms of the Clarke subdifferential. The rest of the paper is organized as follows. Sections 1 and 2 present the introduction, notation and preliminaries. In Sect. 3, we establish optimality conditions for strongly isolated solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems. In Sect. 4, we study Mond–Weir-type dual problems with respect to nonsmooth robust semi-infinite multiobjective optimization problems. In Sect. 5, we provide applications to nonsmooth robust fractional semi-infinite multiobjective problems and nonsmooth robust semi-infinite minimax optimization problems. Finally, conclusions are given in Sect. 6.
2 Preliminaries
Throughout the paper, we use the standard notation of variational analysis in [11, 46]. Let us first recall some notation and preliminary results which will be used throughout this paper. Let \(\mathbb {R}^n\) denote the \(n\)-dimensional Euclidean space equipped with the usual Euclidean norm \(||\cdot ||\). The notation \(\left\langle {\cdot ,\cdot }\right\rangle \) signifies the inner product in the space \(\mathbb {R}^n\). Let D be a nonempty subset of \(\mathbb {R}^n\). The closure and the interior of D are denoted by \(\text{ cl }D\) and \(\text{ int }D\), respectively. The symbol \(\mathbb {B}\) stands for the closed unit ball in \(\mathbb {R}^n\). As usual, the polar cone of D is the set
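$$\begin{aligned} D^{\circ }:=\left\{ \xi \in \mathbb {R}^n\mid \left\langle {\xi ,x}\right\rangle \leqslant 0,\forall x\in D\right\} . \end{aligned}$$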
Besides, the nonnegative (resp., nonpositive) orthant cone of Euclidean space \(\mathbb {R}^n\) is denoted by \(\mathbb {R}_+^n=\{(x_1,\cdots ,x_n)\mid x_i\geqslant 0,i=1,\cdots ,n\}\) (resp., \(\mathbb {R}_-^n\)) for \(n\in \mathbb {N}:=\{1,2,\cdots \}\), while \(\text{ int }\mathbb {R}_+^n\) is used to indicate the topological interior of \(\mathbb {R}_+^n\).
Let \(f:\mathbb {R}^n\rightarrow \mathbb {R}\). We say that f is a locally Lipschitz function if for any \(\bar{x}\in \mathbb {R}^n\), there exist a constant \(L>0\) and a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that
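$$\begin{aligned} |f(x)-f(y)|\leqslant L||x-y||,\quad \forall x,y\in U. \end{aligned}$$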
The Clarke generalized directional derivative of f at \(\bar{x}\in \mathbb {R}^n\) in the direction \(d\in \mathbb {R}^n\) is defined as follows:
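$$\begin{aligned} f^{\circ }(\bar{x};d):=\limsup \limits _{y\rightarrow \bar{x},\,s\downarrow 0}\dfrac{f(y+sd)-f(y)}{s}. \end{aligned}$$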
The Clarke subdifferential of f at \(\bar{x}\in \mathbb {R}^n\) is defined as follows:
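$$\begin{aligned} \partial ^Cf(\bar{x}):=\left\{ \xi \in \mathbb {R}^n\mid \left\langle {\xi ,d}\right\rangle \leqslant f^{\circ }(\bar{x};d),\forall d\in \mathbb {R}^n\right\} . \end{aligned}$$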
Let S be a nonempty closed subset of \(\mathbb {R}^n\). The Clarke tangent cone to S at \(\bar{x}\in S\) is defined by
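$$\begin{aligned} T^C(\bar{x};S):=\left\{ d\in \mathbb {R}^n\mid d_S^{\circ }(\bar{x};d)=0\right\} , \end{aligned}$$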
where \(d_S\) denotes the distance function to S. The Clarke normal cone to S at \(\bar{x}\in S\) is defined by
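$$\begin{aligned} N^C(\bar{x};S):=\left\{ \xi \in \mathbb {R}^n\mid \left\langle {\xi ,d}\right\rangle \leqslant 0,\forall d\in T^C(\bar{x};S)\right\} , \end{aligned}$$

that is, the polar cone of the Clarke tangent cone.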
A function \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) is said to be convex if for all \(x,y\in \mathbb {R}^n\) and all \(\lambda \in [0,1]\),
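$$\begin{aligned} f(\lambda x+(1-\lambda )y)\leqslant \lambda f(x)+(1-\lambda )f(y). \end{aligned}$$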
If f is a convex function and \({\bar{x}}\in \mathbb {R}^n\), then \(\partial ^Cf(\bar{x})\) coincides with the subdifferential of f at \(\bar{x}\) in the sense of convex analysis.
Suppose that S is a nonempty closed convex subset of \(\mathbb {R}^n\) and \(\bar{x}\in S\). Then, \(N^C(\bar{x};S)\) coincides with the cone of normals in the sense of convex analysis, that is,
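$$\begin{aligned} N^C(\bar{x};S)=\left\{ \xi \in \mathbb {R}^n\mid \left\langle {\xi ,x-\bar{x}}\right\rangle \leqslant 0,\forall x\in S\right\} . \end{aligned}$$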
Let T be a nonempty infinite index set and let \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\), be convex compact sets. Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R}\) for all \(t\in T\). We say that \(g_t,t\in T\), are locally Lipschitz functions with respect to x uniformly in \(t\in T\) if for any \(\bar{x}\in \mathbb {R}^n\), there exist a constant \(L>0\) and a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that, for all \(v_t\in \mathcal {V}_t\) and \(t\in T\),
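$$\begin{aligned} |g_t(x,v_t)-g_t(y,v_t)|\leqslant L||x-y||,\quad \forall x,y\in U, \end{aligned}$$

where the constant L is independent of the index t.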
The following lemmas will be used in the sequel.
Lemma 1
(See [11], Corollary on page 52) Let S be a nonempty subset of \(\mathbb {R}^n\) and \(\bar{x}\in S\). Suppose that \(f:\mathbb {R}^n\rightarrow \mathbb {R}\) is locally Lipschitz near \(\bar{x}\) and attains a minimum over S at \(\bar{x}\). Then,
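$$\begin{aligned} 0\in \partial ^Cf(\bar{x})+N^C(\bar{x};S). \end{aligned}$$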
Lemma 2
(See [11], Propositions 2.3.1 and 2.3.3) Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\), are locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\). Then, the following assertions hold:
(i)
\(\partial ^C(\alpha f_k)(\bar{x})=\alpha \partial ^Cf_k(\bar{x}),\forall \alpha \in \mathbb {R}\).
(ii)
\(\partial ^C(f_1+\cdots +f_m)(\bar{x})\subset \partial ^Cf_1(\bar{x})+\cdots +\partial ^Cf_m(\bar{x})\).
Lemma 3
(See [11], Proposition 2.3.12 and [50], Lemma 2.3) Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\), are locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\). Then, the function \(\varphi (\cdot ):=\max \{f_k(\cdot )\mid k=1,\cdots ,m\}\) is locally Lipschitz around \(\bar{x}\) and one has
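$$\begin{aligned} \partial ^C\varphi (\bar{x})\subset \text{ conv }\left\{ \partial ^Cf_k(\bar{x})\mid k\in K(\bar{x})\right\} , \end{aligned}$$

where \(K(\bar{x}):=\{k\in \{1,\cdots ,m\}\mid f_k(\bar{x})=\varphi (\bar{x})\}\) denotes the active index set at \(\bar{x}\).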
Lemma 4
(See [11], Proposition 2.3.14) Let \(f,g:\mathbb {R}^n\rightarrow \mathbb {R}\) be locally Lipschitz functions around \(\bar{x}\in \mathbb {R}^n\), and suppose that \(g(\bar{x})\ne 0\). Then \(\dfrac{f}{g}\) is locally Lipschitz near \(\bar{x}\) and one has
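$$\begin{aligned} \partial ^C\Big (\dfrac{f}{g}\Big )(\bar{x})\subset \dfrac{g(\bar{x})\partial ^Cf(\bar{x})-f(\bar{x})\partial ^Cg(\bar{x})}{g^2(\bar{x})}. \end{aligned}$$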
In this paper, we consider a nonsmooth semi-infinite multiobjective optimization problem with data uncertainty in constraints:
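$$\begin{aligned} \text{(USIMP)}\qquad \min \limits _{x\in \Omega }\; f(x):=(f_1(x),\cdots ,f_m(x))\quad \text{ subject } \text{ to } \quad g_t(x,v_t)\leqslant 0,\ t\in T, \end{aligned}$$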
where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions with \(f:=(f_1,\cdots ,f_m)\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz functions with respect to x uniformly in \(t\in T\) and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are the convex compact sets.
The uncertainty set-valued mapping \(\mathcal {V}:T \rightrightarrows \mathbb {R}^q\) is defined as \(\mathcal {V}(t):=\mathcal {V}_t\) for all \(t\in T\). The notation \(v\in \mathcal {V}\) means that v is a selection of \(\mathcal {V}\), i.e., \(v:T\rightarrow \mathbb {R}^q\) and \(v_t\in \mathcal {V}_t\) for all \(t\in T\). So, the uncertainty set is the graph of \(\mathcal {V}\), that is, \(\text{ gph }\mathcal {V}:=\{(t,v_t)\mid v_t\in \mathcal {V}_t,t\in T\}\).
The robust counterpart of the problem (USIMP) is as follows:
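$$\begin{aligned} \text{(RSIMP)}\qquad \min \limits _{x\in \Omega }\; f(x):=(f_1(x),\cdots ,f_m(x))\quad \text{ subject } \text{ to } \quad g_t(x,v_t)\leqslant 0,\ \forall v_t\in \mathcal {V}_t,\ t\in T. \end{aligned}$$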
The feasible set of the problem (RSIMP) is defined by
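$$\begin{aligned} F:=\left\{ x\in \Omega \mid g_t(x,v_t)\leqslant 0,\forall v_t\in \mathcal {V}_t,t\in T\right\} . \end{aligned}$$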
Definition 1
A point \(\bar{x}\in F\) is called
(i)
([10]) a local efficient solution of the problem (RSIMP) if there exists a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that
$$\begin{aligned} f(x)-f(\bar{x})\notin -\mathbb {R}_+^m\backslash \{0\},\forall x\in U\cap F. \end{aligned}$$
(ii)
([18]) a local isolated efficient solution of the problem (RSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and a constant \(\nu >0\) such that
$$\begin{aligned} \max _{1\leqslant k\leqslant m}\{f_k(x)-f_k(\bar{x})\}\geqslant \nu ||x-\bar{x}||,\forall x\in U\cap F. \end{aligned}$$
(iii)
([10]) a local positively properly efficient solution of the problem (RSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta :=(\beta _1,\cdots ,\beta _m)\in \text{ int }\mathbb {R}_+^m\) such that
$$\begin{aligned} \left\langle {\beta ,f(x)}\right\rangle \geqslant \left\langle {\beta ,f(\bar{x})}\right\rangle ,\forall x\in U\cap F. \end{aligned}$$
When \(U:=\mathbb {R}^n\), one has the concepts of a global efficient solution, a global isolated efficient solution and a global positively properly efficient solution for the problem (RSIMP).
Let \(\mathbb {R}^{(T)}\) be the linear space given below
Let \(\mathbb {R}_+^{(T)}\) be the positive cone in \(\mathbb {R}^{(T)}\) defined by
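As is customary in semi-infinite programming (see, e.g., [9]), these are

$$\begin{aligned} \mathbb {R}^{(T)}:=\left\{ \lambda =(\lambda _t)_{t\in T}\mid \lambda _t=0 \text{ for } \text{ all } t\in T \text{ except } \text{ finitely } \text{ many }\right\} ,\qquad \mathbb {R}_+^{(T)}:=\left\{ \lambda \in \mathbb {R}^{(T)}\mid \lambda _t\geqslant 0,\forall t\in T\right\} . \end{aligned}$$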
With \(\lambda \in \mathbb {R}^{(T)}\), its supporting set, \(T(\lambda ):=\{t\in T\mid \lambda _t\ne 0\}\), is a finite subset of T. Given \(\{z_t\}\subset Z,t\in T,Z\) being a real linear space, we understand that
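$$\begin{aligned} \sum \limits _{t\in T}\lambda _tz_t:=\left\{ \begin{array}{ll} \sum \limits _{t\in T(\lambda )}\lambda _tz_t,&{}\quad \text{ if } T(\lambda )\ne \emptyset ,\\ 0,&{}\quad \text{ if } T(\lambda )=\emptyset . \end{array}\right. \end{aligned}$$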
For \(g_t,t\in T\),
3 Robust Optimality Conditions for Isolated Efficient Solution and Properly Efficient Solution
In this section, we establish optimality conditions for local isolated efficient solutions and local positively properly efficient solutions of the problem (RSIMP).
The following constraint qualification is an extension of Definition 3.2 in [52].
Definition 2
Let \(\bar{x}\in F\). We say that the following robust constraint qualification (RCQ) is satisfied at \(\bar{x}\) if
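$$\begin{aligned} N^C(\bar{x};F)\subset \bigcup \limits _{\lambda \in A(\bar{x}),\,v_t\in \mathcal {V}_t}\left[ \sum \limits _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega )\right] , \end{aligned}$$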
where
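$$\begin{aligned} A(\bar{x}):=\left\{ \lambda \in \mathbb {R}_+^{(T)}\mid \lambda _tg_t(\bar{x},v_t)=0,\forall t\in T\right\} \end{aligned}$$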
is the set of active constraint multipliers at \(\bar{x}\in \Omega \).
Now, we propose a necessary optimality condition for a local isolated efficient solution of the problem (RSIMP) under the qualification condition (RCQ).
Theorem 1
Let \(\bar{x}\in F\) be a local isolated efficient solution of the problem (RSIMP) for some \(\nu >0\). Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are convex functions and the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
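$$\begin{aligned} \nu \mathbb {B}\subset \sum \limits _{k=1}^{m}\alpha _k\partial ^Cf_k(\bar{x})+\sum \limits _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$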
Proof
Suppose that \(\bar{x}\in F\) is a local isolated efficient solution of the problem (RSIMP). Let
Since \(\bar{x}\in F\) is a local isolated efficient solution of the problem (RSIMP), there exists a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) such that
It follows easily that \(\bar{x}\) is a local minimizer of the following scalar problem
Note further that \(||\cdot -\bar{x}||\) is a convex function. Thus, using Corollary 1 in [1], we deduce that
From \(\partial ^C\left( \nu ||\cdot -\bar{x}||\right) (\bar{x})=\nu \mathbb {B},\forall \nu >0\), it follows that
Thanks to Lemma 3, we have
Since the qualification condition (RCQ) holds at \(\bar{x}\in F\), one has
where
It follows from (2)–(4) that
Therefore, there exist \(\alpha \in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
The proof is complete. \(\square \)
The following simple example shows that the qualification condition (RCQ) is essential in Theorem 1.
Example 1
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F\). Then, it is easy to see that \(\bar{x}\) is a global isolated efficient solution of the problem (RSIMP). Indeed, we have
Besides, take \(\bar{x}=0,\mathbb {B}=[-1,1],\nu =1>0,\alpha =(\alpha _1,\alpha _2)\in \mathbb {R}_+^2\) with \(\alpha _1+\alpha _2=1\), we have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\), \(\partial ^Cf_k(\bar{x})=\{0\},k=1,2\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\), for any \(v_t\in \mathcal {V}_t,t\in T\). It is easy to see that
for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). The reason is that the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\). Indeed, one has
However, \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Clearly, the qualification condition (RCQ) does not hold at \(\bar{x}\).
The following simple example shows that, in general, a feasible point may satisfy the qualification condition (RCQ); however, if this point is not a global isolated efficient solution of the problem (RSIMP), then the necessary condition of Theorem 1
does not hold.
Example 2
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), it is easy to see that \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\), \(\partial ^Cf_k(\bar{x})=\{0\},k=1,2\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},\forall v_t\in \mathcal {V}_t,t\in T\). Therefore, we have
Moreover, we have \(N^C(\bar{x};F)=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\). Clearly, the qualification condition (RCQ) holds at \(\bar{x}=0\). Besides, take \(\bar{x}=0,\mathbb {B}=[-1,1],\nu =1>0,\alpha =(\alpha _1,\alpha _2)\in \mathbb {R}_+^2\) with \(\alpha _1+\alpha _2=1\), one implies \(\partial ^Cf_k(\bar{x})=\{1\},k=1,2\) and
Hence, condition
does not hold. The reason is that \(\bar{x}=0\) is not a global isolated efficient solution of the problem (RSIMP). Indeed, we can choose \(x=-2\in F=(-\infty ,0]\). Clearly,
Now, we propose a necessary optimality condition for a local positively properly efficient solution of the problem (RSIMP) under the qualification condition (RCQ).
Theorem 2
Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RSIMP). Suppose that the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
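$$\begin{aligned} 0\in \sum \limits _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\sum \limits _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$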
Proof
Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RSIMP). Then there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta :=(\beta _1,\cdots ,\beta _m)\in \textrm{int}\mathbb {R}_+^m\) such that
Then, \(\forall x\in U\cap F\),
For any \(x\in \mathbb {R}^n\), set
Applying (5) we deduce that \(\bar{x}\) is a local minimizer of the following problem
Since the function \(\Phi \) is locally Lipschitz near \(\bar{x}\), we deduce from Lemma 1 that
According to Lemma 2, we have
Since the qualification condition (RCQ) holds at \(\bar{x}\in F\), one has
where
It follows from (6)–(8) that
Therefore, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
The proof is complete. \(\square \)
Now, we introduce a concept of the robust (KKT) condition for the problem (RSIMP).
Definition 3
A point \(\bar{x}\in F\) is said to satisfy the robust (KKT) condition with respect to the problem (RSIMP) if there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
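$$\begin{aligned} 0\in \sum \limits _{k=1}^{m}\beta _k\partial ^Cf_k(\bar{x})+\sum \limits _{t\in T}\lambda _t\partial _x^Cg_t(\bar{x},v_t)+N^C(\bar{x};\Omega ). \end{aligned}$$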
The following simple example shows that a point satisfying the robust (KKT) condition is not necessarily a global positively properly efficient solution of the problem (RSIMP), even in the smooth case.
Example 3
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), we have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial ^Cf_k(\bar{x})=\{0\},k=1,2\), \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},v_t\in \mathcal {V}_t,t\in T\). On the other hand, take \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\), it is easy to see that
for all \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). Thus, the robust (KKT) condition is satisfied at \(\bar{x}\). However, \(\bar{x}=0\in F\) is not a global positively properly efficient solution of the problem (RSIMP). To see this, we can choose \(x=-1\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\). Then, it is easy to see that
Before discussing a sufficient condition for a global positively properly efficient solution of the problem (RSIMP), we introduce the following concepts of generalized convexity, which are inspired by [37].
Definition 4
We say that \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) are quasiconvex on \(\Omega \) at \(\bar{x}\in \Omega \) if for all \(x\in \Omega \),
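the following implication holds:

$$\begin{aligned} g_t(x,v_t)\leqslant g_t(\bar{x},v_t)\Longrightarrow \left\langle {x_t,x-\bar{x}}\right\rangle \leqslant 0,\quad \forall x_t\in \partial _x^Cg_t(\bar{x},v_t),\forall v_t\in \mathcal {V}_t,\forall t\in T. \end{aligned}$$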
Definition 5
We say that \(f:=(f_1,\cdots ,f_m)\) is pseudoconvex on \(\Omega \) at \(\bar{x}\in \Omega \) if for all \(x\in \Omega \), there exist \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) such that
Now, we will give a sufficient condition for a global positively properly efficient solution of the problem (RSIMP).
Theorem 3
Assume that \(\Omega \) is a convex set and \(\bar{x}\in F\) satisfies the robust (KKT) condition. If f is pseudoconvex on \(\Omega \) at \(\bar{x}\) and functions \(g_t,t\in T\) are quasiconvex on \(\Omega \) at \(\bar{x}\), then \(\bar{x}\in F\) is a global positively properly efficient solution of the problem (RSIMP).
Proof
Since \(\bar{x}\in F\) satisfies the robust (KKT) condition, there exist \(\beta \in \text{ int }\mathbb {R}_+^m\) and \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m,x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T,\lambda \in A(\bar{x})\) defined in (1), as well as \(w\in N^C(\bar{x};\Omega )\) such that
which is equivalent to
Since \(\Omega \) is a convex set and \(w\in N^C(\bar{x};\Omega )\), it follows that, for any \(x\in \Omega \),
From (9) it follows that
which means that
Moreover, for any \(\lambda \in A(\bar{x})\), one has \(\lambda _tg_t(\bar{x},v_t)=0,\forall t\in T\). Note that for any \(x\in F\), \(\lambda _tg_t(x,v_t)\leqslant 0\) for any \(v_t\in \mathcal {V}_t,t\in T\). It follows that
Since \(g_t\) is quasiconvex on \(\Omega \) at \(\bar{x}\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t\), for all \(t\in T\), we obtain \(\left\langle {\lambda _tx_t,x-\bar{x}}\right\rangle \leqslant 0,\forall t\in T\). It follows that
Combining (10) and (11), we can assert that
Since f is pseudoconvex on \(\Omega \) at \(\bar{x}\), it follows that
Therefore, \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP). The proof is complete. \(\square \)
Now, we present an example to show the importance of the pseudoconvexity in Theorem 3.
Example 4
Let \(x\in \mathbb {R}\) and \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
\(k=1,2\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t],t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =[0,+\infty )\subset \mathbb {R}\). By simple computation, one has \(F=[0,+\infty )\). By selecting \(\bar{x}=0\in F\), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};[0,+\infty ))=(-\infty ,0]\),
It can be verified that \(\bar{x}=0\in F\) satisfies the robust (KKT) condition. Indeed, selecting \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\), it is easy to check that
for all \(\lambda \in A(\bar{x})\) and \(v_t\in \mathcal {V}_t,t\in T\). However, \(\bar{x}=0\) is not a global positively properly efficient solution of the problem (RSIMP). In order to see this, let us take \(\hat{x}=\dfrac{1}{\pi }\in F=[0,+\infty )\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then,
The reason is that f is not pseudoconvex on \(\Omega \) at \(\bar{x}=0\). Indeed, take \(x=\dfrac{1}{3\pi }\in \Omega =[0,+\infty )\) and \(x_k=0\in \partial ^Cf_k(\bar{x})=[-1,1],k=1,2\). Clearly,
However,
4 Robust Duality for Properly Efficient Solution
In this section, we consider the Mond–Weir-type dual problem (MWD) with respect to the problem (RSIMP).
For \(x\in \mathbb {R}^n\), a nonempty and closed set \(\Omega \subseteq \mathbb {R}^n\), \(\beta :=(\beta _1,\cdots ,\beta _m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\beta _k=1\) and \(\lambda \in \mathbb {R}_+^{(T)},v_t\in \mathcal {V}_t,t\in T,f:=(f_1,\cdots ,f_m),g_T:=(g_t)_{t\in T}\), let us denote a vector function \(L:=(L_1,\cdots ,L_m)\) by
We consider the Mond–Weir-type dual problem (MWD) with respect to the primal problem (RSIMP) as follows:
The feasible set of the problem (MWD) is defined by
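$$\begin{aligned} F_\textrm{MWD}:=\Big \{(y,\beta ,\lambda )\in \Omega \times \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\;\Big |\;&0\in \sum \limits _{k=1}^{m}\beta _k\partial ^Cf_k(y)+\sum \limits _{t\in T}\lambda _t\partial _x^Cg_t(y,v_t)+N^C(y;\Omega ),\\&\sum \limits _{k=1}^{m}\beta _k=1,\ \sum \limits _{t\in T}\lambda _tg_t(y,v_t)\geqslant 0,\ v_t\in \mathcal {V}_t,t\in T\Big \}. \end{aligned}$$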
In what follows, we use the following notation for convenience:
Now, we will introduce the following definitions for a global efficient solution and a global positively properly efficient solution of the problem (MWD).
Definition 6
A point \((\bar{y},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\) is called
(i)
A global efficient solution of the problem (MWD) if
$$\begin{aligned} L(y,\beta ,\lambda )-L(\bar{y},\bar{\beta },\bar{\lambda })\notin \mathbb {R}_+^m\backslash \{0\},\forall (y,\beta ,\lambda )\in F_\textrm{MWD}. \end{aligned}$$
(ii)
A global positively properly efficient solution of the problem (MWD) if there exists \(\theta :=(\theta _1,\cdots ,\theta _m)\in -\text{ int }\mathbb {R}_+^m\) such that
$$\begin{aligned} \left\langle {\theta ,L(y,\beta ,\lambda )}\right\rangle \geqslant \left\langle {\theta ,L(\bar{y},\bar{\beta },\bar{\lambda })}\right\rangle ,\forall (y,\beta ,\lambda )\in F_\textrm{MWD}. \end{aligned}$$
Motivated by the definition of the generalized convexity due to [9, 10], we will introduce a concept of the generalized convexity as follows:
Definition 7
We say that \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\in \Omega \), if for any \(x\in \Omega \), \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that
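$$\begin{aligned} f_k(x)-f_k(\bar{x})\geqslant \left\langle {x_k,w}\right\rangle ,\ \forall k=1,\cdots ,m,\qquad g_t(x,v_t)-g_t(\bar{x},v_t)\geqslant \left\langle {x_t,w}\right\rangle ,\ \forall t\in T. \end{aligned}$$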
Remark 1
Note that, if \(\Omega \) is a convex set and \(f_k(\cdot ),k=1,\cdots ,m,g_t(\cdot ,v_t),v_t\in \mathcal {V}_t,t\in T\) are convex functions, then \((f,g_T)\) is generalized convex on \(\Omega \) at any \(\bar{x}\in \Omega \) with \(w:=x-\bar{x}\) for each \(x\in \Omega \).
The next example shows that the class of the generalized convex functions is properly larger than that of the convex functions.
Example 5
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[0,2-t],\forall t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
Consider \(\Omega =\mathbb {R},\bar{x}=0\in \Omega \), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};\mathbb {R})=\{0\},T^C(\bar{x};\Omega )=T^C(\bar{x};\mathbb {R})=\mathbb {R}\). It is easy to see that \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\). However, \(g_0(\cdot ,0)\) is not a convex function. Indeed, let \(x_1=1,x_2=-1\in \mathbb {R}\), and choose \(\lambda =\dfrac{1}{2}\in [0,1]\), we have
In line with [15], we introduce a further concept of generalized convexity as follows:
Definition 8
We say that \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\in \Omega \), if for any \(x\in \Omega \), \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m\) and \(x_t\in \partial _x^Cg_t(\bar{x},v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that
Remark 2
If \((f,g_T)\) is generalized convex on \(\Omega \) at \(\bar{x}\in \Omega \), then \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\in \Omega \).
The next example shows that the class of the pseudogeneralized convex functions is properly larger than that of the generalized convex functions.
Example 6
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[0,2-t],\forall t\in T\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T=[0,1]\) be given by
Consider \(\Omega =\mathbb {R},\bar{x}=0\in \Omega \), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};\mathbb {R})=\{0\},T^C(\bar{x};\Omega )=T^C(\bar{x};\mathbb {R})=\mathbb {R}\). It is easy to see that \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\). Meanwhile, \((f,g_T)\) is not generalized convex function on \(\Omega \) at \(\bar{x}\).
Now, we establish the following weak duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).
Theorem 4
Suppose that \(x\in F\) and \((y,\beta ,\lambda )\in F_\textrm{MWD}\). If \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at y, then
Proof
Since \((y,\beta ,\lambda )\in F_\textrm{MWD}\), there exist \(x_k\in \partial ^Cf_k(y),k=1,\cdots ,m,\beta \in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}{\beta _k}=1\) and \(x_t\in \partial _x^Cg_t(y,v_t),v_t\in \mathcal {V}_t,t\in T,\lambda \in \mathbb {R}_+^{(T)}\) such that
and
Let \(x\in F\). Suppose on the contrary that
Hence \(\left\langle {\beta ,f(x)-f(y)}\right\rangle <0\) due to \(\beta \in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\beta _k=1\). Thus,
Note that, for \(x\in F\), we have \(g_t(x,v_t)\leqslant 0\) for any \(t\in T\). It follows that
By the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) at \(y\in \Omega \) and (14), (16), for such \(x\in F\subseteq \Omega ,x_k\in \partial ^Cf_k(y),k=1,\cdots ,m,x_t\in \partial _x^Cg_t(y,v_t),v_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(y;\Omega )\) such that
and
Combining (17) with (18), we can assert that
On the other hand, it follows from (12) and the relation \(w\in T^C(y;\Omega )\) that
which contradicts (19). Thus, \(f(x)\not \preceq L(y,\beta ,\lambda )\). The proof is complete. \(\square \)
The following example shows that the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) imposed in Theorem 4 cannot be removed.
Example 7
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by
where \(f_1(x)=f_2(x)=x^3,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). Let us select \(\bar{x}=-1\in F\). Now, consider the dual problem (MWD). By choosing \(\bar{y}=0\in \Omega ,\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1,\bar{\lambda }\in \mathbb {R}_+^{(T)}\), we have \(N^C(\bar{y};\Omega )=N^C(\bar{y};(-\infty ,0])=[0,+\infty )\) and \(\partial ^Cf_k(\bar{y})=\{0\},k=1,2,\partial _x^Cg_t(\bar{y},v_t)=\{0\},v_t\in \mathcal {V}_t,t\in T\). It is easy to see that
and \(\bar{\beta }_1+\bar{\beta }_2=1,\displaystyle \sum _{t\in T}\bar{\lambda }_tg_t(\bar{y},v_t)=0,v_t\in \mathcal {V}_t,t\in T\). Thus, \((\bar{y},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). However, if \(\bar{x}=-1\in F\), then
The reason is that \((f,g_T)\) is not pseudogeneralized convex on \(\Omega \) at \(\bar{y}=0\). To see this, we can choose \(y=-3\in \Omega \) and \(x_k\in \partial ^Cf_k(\bar{y})=\{0\},k=1,2\). Then, it is easy to see that \(T^C(\bar{y};\Omega )=T^C(\bar{y};(-\infty ,0])=(-\infty ,0]\) and
However,
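Although the displayed formulas of this example are not reproduced above, the subdifferential and cone computations it relies on can be checked directly; a sketch (for the objective side only, since the formula of \(g_t\) is omitted here):

```latex
% Each f_k(x) = x^3 is continuously differentiable, so its Clarke
% subdifferential reduces to the singleton of the derivative:
\partial^C f_k(0) = \{\,3x^2\big|_{x=0}\,\} = \{0\}, \qquad k = 1,2.
% For the closed convex set \Omega = (-\infty,0], at the boundary point 0:
N^C(0;\Omega) = [0,+\infty), \qquad T^C(0;\Omega) = (-\infty,0].
```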
Now, we establish the following strong duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).
Theorem 5
Suppose that \(\bar{x}\in F\) is a local positively properly efficient solution of the problem (RSIMP) such that the qualification condition (RCQ) is satisfied at \(\bar{x}\). Then there exists \((\bar{\beta },\bar{\lambda })\in \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\) such that \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\) and \(f(\bar{x})=L(\bar{x},\bar{\beta },\bar{\lambda })\). If in addition \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at every \(y\in \Omega \), then \((\bar{x},\bar{\beta },\bar{\lambda })\) is a global efficient solution of the problem (MWD).
Proof
According to Theorem 2, there exist \(\beta \in \text{ int }\mathbb {R}_+^m\) and \(v_t\in \mathcal {V}_t,t\in T,\lambda \in A(\bar{x})\) defined in (1) such that
Putting
one has \(\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\bar{\beta }_k=1\) and \(\bar{\lambda }:=(\bar{\lambda }_t)_{t\in T}\in \mathbb {R}_+^{(T)}\). Furthermore, the assertion in (20) remains valid when the \(\beta _k\)’s and \(\lambda _t\)’s are replaced by the \(\bar{\beta }_k\)’s and \(\bar{\lambda }_t\)’s, respectively. Besides, since \(\lambda \in A(\bar{x})\) as defined in (1), we have \(\lambda _tg_t(\bar{x},v_t)=0\) for all \(t\in T\), which implies that \(\displaystyle \sum _{t\in T}\bar{\lambda }_tg_t(\bar{x},v_t)=0\ge 0\). Therefore, one has \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). It follows easily that
Because \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at any \(y\in \Omega \), we can apply Theorem 4 to deduce that
for any \((y,\beta ,\lambda )\in F_\textrm{MWD}\). Therefore, one has
This means that \((\bar{x},\bar{\beta },\bar{\lambda })\) is a global efficient solution of the problem (MWD). The proof is complete. \(\square \)
Remark 3
Note that the strong duality result in Theorem 5 is not of the usual type; that is, the solution of the dual problem is only guaranteed to be efficient, not positively properly efficient, although the solution of the primal problem is locally positively properly efficient. As shown in [5] (Example 4.6), for the case when \(\mathcal {V}_t,t\in T\) are singletons and T is a finite set, we cannot in general obtain a positively properly efficient solution of the dual problem, even in the convex framework.
The next example asserts the importance of the qualification condition (RCQ) imposed in Theorem 5. More precisely, if \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP) at which the qualification condition (RCQ) is not satisfied, then we may not be able to find a pair \((\bar{\beta },\bar{\lambda })\in \text{ int }\mathbb {R}_+^m\times \mathbb {R}_+^{(T)}\) such that \((\bar{x},\bar{\beta },\bar{\lambda })\) belongs to the feasible set \(F_\textrm{MWD}\) of the dual problem (MWD).
Example 8
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by \(f(x)=\left( f_1(x),f_2(x)\right) \), where
Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\) and let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F,\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1\). Then, it is easy to show that \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP). Indeed, we have
Now, consider the dual problem (MWD). By choosing \(\bar{x}=0\in \Omega ,\bar{\lambda }\in \mathbb {R}_+^{(T)},\bar{\beta }=(\bar{\beta }_1,\bar{\beta }_2)\in \text{ int }\mathbb {R}_+^2\) with \(\bar{\beta }_1+\bar{\beta }_2=1\), we have
and \(\partial ^Cf_k(\bar{x})=\{1\},k=1,2,\partial _x^Cg_t(\bar{x},v_t)=\{0\},v_t\in \mathcal {V}_t,\forall t\in T\). It is easy to see that
Thus, \((\bar{x},\bar{\beta },\bar{\lambda })\notin F_\textrm{MWD}\). The reason is that the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\in F\). Indeed, we have \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\) for any \(v_t\in \mathcal {V}_t,t\in T\),
Besides, one has \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Therefore, the qualification condition (RCQ) is not satisfied at \(\bar{x}\).
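A sketch of the computation behind this failure (the precise statement of (RCQ) is the one given earlier in the paper; the set inclusion below is our paraphrase of it): the constraint data can only generate the ray \([0,+\infty)\), which cannot cover the normal cone to \(F=\{0\}\).

```latex
% Constraint side: every \partial_x^C g_t(\bar{x},v_t) = \{0\}, so
\partial_x^C g_t(\bar{x},v_t) + N^C(\bar{x};\Omega)
  = \{0\} + [0,+\infty) = [0,+\infty)
  \quad \text{for all } v_t \in \mathcal{V}_t,\ t \in T,
% while the feasible set F = \{0\} gives
N^C(\bar{x};F) = \mathbb{R} \not\subseteq [0,+\infty).
```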
Finally, we establish the following converse duality theorem, which describes the relation between the problem (RSIMP) and the problem (MWD).
Theorem 6
Assume that \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\). If \(\bar{x}\in F\) and \((f,g_T)\) is pseudogeneralized convex on \(\Omega \) at \(\bar{x}\), then \(\bar{x}\) is a global positively properly efficient solution of the problem (RSIMP).
Proof
Since \((\bar{x},\bar{\beta },\bar{\lambda })\in F_\textrm{MWD}\), there exist \(x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m,\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}{\bar{\beta }_k}=1\) and \(x_t\in \partial _x^Cg_t(\bar{x},\bar{v}_t),\bar{v}_t\in \mathcal {V}_t,t\in T\), \(\bar{\lambda }:=(\bar{\lambda }_t)_{t\in T}\in \mathbb {R}_+^{(T)}\) such that
and
Suppose on the contrary that \(\bar{x}\in F\) is not a global positively properly efficient solution of the problem (RSIMP). For such \(\bar{\beta }:=(\bar{\beta }_1,\cdots ,\bar{\beta }_m)\in \text{ int }\mathbb {R}_+^m\), it then follows that there exists \(\hat{x}\in F\) such that
Thus,
Note that, for \(\hat{x}\in F\), we have \(g_t(\hat{x},\bar{v}_t)\leqslant 0\) for any \(t\in T\). This yields
By the pseudogeneralized convexity of \((f,g_T)\) on \(\Omega \) at \(\bar{x}\in \Omega \) and (23), (25), for such \(\hat{x}\in F\subseteq \Omega ,x_k\in \partial ^Cf_k(\bar{x}),k=1,\cdots ,m,x_t\in \partial _x^Cg_t(\bar{x},\bar{v}_t),\bar{v}_t\in \mathcal {V}_t,t\in T\), there exists \(w\in T^C(\bar{x};\Omega )\) such that
and
Combining (26) with (27), we can assert that
On the other hand, we deduce from (21) and the relation \(w\in T^C(\bar{x};\Omega )\) that
which contradicts (28). This means that \(\bar{x}\in F\) is a global positively properly efficient solution of the problem (RSIMP). The proof is complete. \(\square \)
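Since the displayed relations (21)–(28) are not reproduced above, it may help to summarize the contradiction scheme of this proof in compact form (a sketch; the precise relations are those referenced in the text):

```latex
% Stationarity encoded in (\bar{x},\bar\beta,\bar\lambda) \in F_{\mathrm{MWD}}:
0 \in \sum_{k=1}^{m}\bar\beta_k x_k + \sum_{t\in T}\bar\lambda_t x_t
      + N^C(\bar{x};\Omega).
% If \bar{x} were not globally positively properly efficient, some
% \hat{x} \in F would satisfy
\sum_{k=1}^{m}\bar\beta_k f_k(\hat{x}) < \sum_{k=1}^{m}\bar\beta_k f_k(\bar{x}),
\qquad
\sum_{t\in T}\bar\lambda_t g_t(\hat{x},\bar{v}_t) \le 0
  \le \sum_{t\in T}\bar\lambda_t g_t(\bar{x},\bar{v}_t).
% Pseudogeneralized convexity then yields w \in T^C(\bar{x};\Omega) with
\Bigl\langle \sum_{k=1}^{m}\bar\beta_k x_k
  + \sum_{t\in T}\bar\lambda_t x_t,\; w \Bigr\rangle < 0,
% whereas stationarity and the polarity \langle \xi, w \rangle \le 0
% (for \xi \in N^C(\bar{x};\Omega), w \in T^C(\bar{x};\Omega)) force this
% inner product to be nonnegative, a contradiction.
```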
5 Application
5.1 Application to Semi-infinite Multiobjective Fractional Problem
In this section, we consider a nonsmooth fractional semi-infinite multiobjective optimization problem with data uncertainty in the constraints:
where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), and \(p_k,q_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions. For the sake of convenience, we further assume that \(q_k(x)>0,k=1,\cdots ,m\) for all \(x\in \Omega \) and that \(p_k(\bar{x})\leqslant 0,k=1,\cdots ,m\) for the reference point \(\bar{x}\in \Omega \). In what follows, we also use the notation \(f:=(f_1,\cdots ,f_m)\), where \(f_k:=\dfrac{p_k}{q_k},k=1,\cdots ,m\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz with respect to x uniformly in \(t\in T\), and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are convex and compact sets.
The robust counterpart of the problem (UFSIMP) is as follows:
The feasible set of the problem (RFSIMP) is defined by
Definition 9
A point \(\bar{x}\in F\) is called a local positively properly efficient solution of the problem (RFSIMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and \(\beta \in \text{ int }\mathbb {R}_+^m\) such that
When \(U:=\mathbb {R}^n\), one has the concept of a global positively properly efficient solution of the problem (RFSIMP).
Theorem 7
Let \(\bar{x}\in F\) be a local positively properly efficient solution of the problem (RFSIMP). Suppose that the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
Proof
Since \(\bar{x}\in F\) is a local positively properly efficient solution of the problem (RFSIMP), it is also a local positively properly efficient solution of the problem (RSIMP) with \(f_k:=\dfrac{p_k}{q_k},k=1,\cdots ,m\). According to Theorem 2, there exist \(\beta \in \text{ int }\mathbb {R}_+^m,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
Thanks to Lemma 4, for \(k=1,\cdots ,m\), one has
Combining (30) with (31), we can assert that
Now, by letting \(\mu _k:=\dfrac{\beta _k}{q_k(\bar{x})}\) for \(k=1,\cdots ,m\), we get
where \(\lambda \in A(\bar{x})\) is defined in (1). The proof of Theorem 7 is complete. \(\square \)
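The role of Lemma 4 and of the multipliers \(\mu_k:=\beta_k/q_k(\bar{x})\) is most transparent in the smooth case; the following identity is a sketch under the additional assumption that \(p_k,q_k\) are differentiable at \(\bar{x}\) (Lemma 4 supplies the nonsmooth analogue):

```latex
% Quotient rule at \bar{x}, using q_k(\bar{x}) > 0:
\nabla f_k(\bar{x})
 = \frac{q_k(\bar{x})\,\nabla p_k(\bar{x}) - p_k(\bar{x})\,\nabla q_k(\bar{x})}
        {q_k(\bar{x})^2}
 = \frac{1}{q_k(\bar{x})}
   \Bigl(\nabla p_k(\bar{x}) - f_k(\bar{x})\,\nabla q_k(\bar{x})\Bigr),
% hence, with \mu_k := \beta_k / q_k(\bar{x}),
\beta_k\,\nabla f_k(\bar{x})
 = \mu_k\Bigl(\nabla p_k(\bar{x}) - f_k(\bar{x})\,\nabla q_k(\bar{x})\Bigr),
\qquad k = 1,\dots,m.
```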
The following simple example shows that the qualification condition (RCQ) is essential in Theorem 7.
Example 9
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by
where \(p_1(x)=p_2(x)=x,q_1(x)=q_2(x)=x^2+1,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\). Let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RFSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=\{0\}\). Now, take \(\bar{x}=0\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then, it is easy to show that \(\bar{x}=0\) is a global positively properly efficient solution of the problem (RFSIMP). Indeed, we have
Since \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\}\) at \(\bar{x}=0\) for any \(v_t\in \mathcal {V}_t,t\in T\), one has
Moreover, \(N^C(\bar{x};F)=N^C(\bar{x};\{0\})=\mathbb {R}\). Therefore, the qualification condition (RCQ) is not satisfied at \(\bar{x}=0\). Now, take \(\bar{x}=0\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then, it is easy to see that \(\partial ^Cp_k(\bar{x})=\{1\},\partial ^Cq_k(\bar{x})=\{0\},\mu _k=\dfrac{\beta _k}{q_k(\bar{x})}=\beta _k,k=1,2\),
for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). This means that (29) does not hold. Hence, the conclusion of Theorem 7 fails when the qualification condition (RCQ) is violated.
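The computations behind this example can be verified directly; a sketch (we use the smooth specialization of the quantities entering (29), since all data here are \(C^1\); the exact displayed form of (29) is the one in the text):

```latex
% f_k(x) = x/(x^2+1); by the quotient rule,
f_k'(x) = \frac{(x^2+1)\cdot 1 - x\cdot 2x}{(x^2+1)^2}
        = \frac{1 - x^2}{(1 + x^2)^2},
\qquad f_k'(0) = 1,
% so \partial^C f_k(0) = \{1\}; moreover p_k'(0) = 1, q_k'(0) = 0,
% q_k(0) = 1, and hence \mu_k = \beta_k and
\sum_{k=1}^{2} \mu_k \bigl( q_k(0)\,p_k'(0) - p_k(0)\,q_k'(0) \bigr)
 = \beta_1 + \beta_2 = 1.
% Adding the constraint terms \{0\} and N^C(0;\Omega) = [0,+\infty)
% produces [1,+\infty), which does not contain 0.
```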
The following simple example shows that, in general, a feasible point may satisfy the qualification condition (RCQ); however, if this point is not a global positively properly efficient solution of the problem (RFSIMP), then (29) need not hold.
Example 10
Let \(f:\mathbb {R}\rightarrow \mathbb {R}^2\) be defined by
where \(p_1(x)=p_2(x)=x,q_1(x)=q_2(x)=x^2+1,x\in \mathbb {R}\). Take \(T=[0,1],v_t\in \mathcal {V}_t=[2-t,2+t]\). Let \(g_t:\mathbb {R}\times \mathcal {V}_t\rightarrow \mathbb {R}\) be given by
We consider the problem (RFSIMP) with \(m=2\) and \(\Omega =(-\infty ,0]\subset \mathbb {R}\). By simple computation, one has \(F=(-\infty ,0]\). By choosing \(\bar{x}=0\in F\), one has \(N^C(\bar{x};\Omega )=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\) and \(\partial _x^Cg_t(\bar{x},v_t)=\{0\},\forall v_t\in \mathcal {V}_t,t\in T\). Therefore, we have
Moreover, we have \(N^C(\bar{x};F)=N^C(\bar{x};(-\infty ,0])=[0,+\infty )\). Clearly, the qualification condition (RCQ) holds at \(\bar{x}=0\). Now, take \(\bar{x}=0\in F\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Then, it is easy to see that \(\partial ^Cp_k(\bar{x})=\{1\},\partial ^Cq_k(\bar{x})=\{0\},\mu _k=\dfrac{\beta _k}{q_k(\bar{x})}=\beta _k,k=1,2\),
for any \(\lambda \in A(\bar{x}),v_t\in \mathcal {V}_t,t\in T\). Hence, condition (29) is not true. The reason is that \(\bar{x}=0\) is not a global positively properly efficient solution of the problem (RFSIMP). Indeed, we can choose \(\hat{x}=-1\in F=(-\infty ,0]\) and \(\beta =(\beta _1,\beta _2)\in \text{ int }\mathbb {R}_+^2\) with \(\beta _1+\beta _2=1\). Clearly,
5.2 Application to Semi-infinite Minimax Problem
In this section, we consider a nonsmooth semi-infinite minimax optimization problem with data uncertainty in the constraints:
where T is a nonempty infinite index set, \(\Omega \) is a nonempty closed subset of \(\mathbb {R}^n\), and \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are locally Lipschitz functions with \(f:=(f_1,\cdots ,f_m)\). Let \(g_t:\mathbb {R}^n\times \mathcal {V}_t\rightarrow \mathbb {R},t\in T\) be locally Lipschitz with respect to x uniformly in \(t\in T\), and let \(v_t\in \mathcal {V}_t,t\in T\) be uncertain parameters, where \(\mathcal {V}_t\subseteq \mathbb {R}^q,t\in T\) are convex and compact sets.
The robust counterpart of the problem (UMMP) is as follows:
The feasible set of the problem (RMMP) is defined by
Definition 10
Let \(\varphi (x):=\displaystyle \max _{1\leqslant k\leqslant m}f_k(x),x\in \mathbb {R}^n\). A point \(\bar{x}\in F\) is called a local isolated efficient solution of the problem (RMMP) if there exist a neighborhood \(U\subseteq \mathbb {R}^n\) of \(\bar{x}\) and a constant \(\nu >0\) such that
When \(U:=\mathbb {R}^n\), one has the concept of a global isolated efficient solution for the problem (RMMP).
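The displayed inequality of Definition 10 is not reproduced above; for the reader's convenience, the standard growth condition defining isolated (strict) efficiency for the minimax function \(\varphi\), which we presume is what Definition 10 displays, reads:

```latex
\varphi(x) \;\ge\; \varphi(\bar{x}) + \nu\,\|x - \bar{x}\|
\qquad \text{for all } x \in F \cap U .
```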
Theorem 8
Let \(\bar{x}\in F\) be a local isolated efficient solution of the problem (RMMP) for some \(\nu >0\). Suppose that \(f_k:\mathbb {R}^n\rightarrow \mathbb {R},k=1,\cdots ,m\) are convex functions and the qualification condition (RCQ) at \(\bar{x}\) holds. Then, there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1,v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
Proof
If \(\bar{x}\in F\) is a local isolated efficient solution of the problem (RMMP), then it is also a local isolated efficient solution of the following problem
By Theorem 1, there exist \(\nu >0\), \(v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
According to Lemma 3, one has
Setting \(\alpha _k:=\gamma _k,k=1,\cdots ,m\), we deduce from (32) and (33) that there exist \(\alpha :=(\alpha _1,\cdots ,\alpha _m)\in \mathbb {R}_+^m\) with \(\displaystyle \sum _{k=1}^{m}\alpha _k=1\), \(v_t\in \mathcal {V}_t,t\in T\) and \(\lambda \in A(\bar{x})\) defined in (1) such that
The proof is complete. \(\square \)
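The passage from (32) to (33) rests on the convex-hull formula for the subdifferential of a finite maximum, which is presumably what Lemma 3 records (the rule is exact here because the \(f_k\) are assumed convex):

```latex
% Subdifferential of \varphi = \max_{1\le k\le m} f_k at \bar{x}:
\partial \varphi(\bar{x})
 = \mathrm{co}\,\bigcup_{k \in K(\bar{x})} \partial f_k(\bar{x}),
\qquad
K(\bar{x}) := \{\, k : f_k(\bar{x}) = \varphi(\bar{x}) \,\},
% so every \xi \in \partial\varphi(\bar{x}) can be written as
\xi = \sum_{k=1}^{m} \gamma_k\, \xi_k,
\quad \xi_k \in \partial f_k(\bar{x}),\ \
\gamma_k \ge 0,\ \ \sum_{k=1}^{m} \gamma_k = 1,\ \
\gamma_k = 0 \ \text{for } k \notin K(\bar{x}).
```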
6 Conclusion
In this paper, we obtained new robust optimality conditions and robust duality theorems for isolated efficient solutions and positively properly efficient solutions of nonsmooth robust semi-infinite multiobjective optimization problems in terms of Clarke subdifferentials. In addition, some of these results were applied to derive robust optimality conditions for nonsmooth robust fractional semi-infinite multiobjective problems and nonsmooth robust semi-infinite minimax optimization problems. The results obtained in this paper improve the corresponding results in the recent literature.
References
Amahroq, T., Penot, J.-P., Syam, A.: On the subdifferentiability of difference of two functions and local minimization. Set Valued Anal. 16, 413–427 (2008)
Ben-Tal, A., Ghaoui, L.E., Nemirovski, A.: Robust Optimization. Princeton Series in Applied Mathematics, Princeton University Press, Princeton (2009)
Bertsimas, D., Brown, D., Caramanis, C.: Theory and applications of robust optimization. SIAM Rev. 53, 464–501 (2011)
Chen, J.W., Köbis, E., Yao, J.C.: Optimality conditions and duality for robust nonsmooth multiobjective optimization problems with constraints. J. Optim. Theory Appl. 181, 411–436 (2019)
Chuong, T.D.: Optimality and duality for proper and isolated efficiencies in multiobjective optimization. Nonlinear Anal. 76, 93–104 (2013)
Chuong, T.D.: Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 134, 127–143 (2016)
Chuong, T.D.: Nondifferentiable fractional semi-infinite multiobjective optimization problems. Oper. Res. Lett. 44, 260–266 (2016)
Chuong, T.D.: Robust optimality and duality in multiobjective optimization problems under data uncertainty. SIAM J. Optim. 30, 1501–1526 (2020)
Chuong, T.D., Kim, D.S.: Nonsmooth semi-infinite multiobjective optimization problems. J. Optim. Theory Appl. 160, 748–762 (2014)
Chuong, T.D., Yao, J.-C.: Isolated and proper efficiencies in semi-infinite vector optimization problems. J. Optim. Theory Appl. 162, 447–462 (2014)
Clarke, F.H.: Optimization and Nonsmooth Analysis. Wiley-Interscience, New York (1983)
Cromme, L.: Strong uniqueness. Numer. Math. 29, 179–193 (1978)
Dinh, N., Goberna, M.A., Lopez, M.A., Volle, M.: A unifying approach to robust convex infinite optimization duality. J. Optim. Theory Appl. 174, 650–685 (2017)
Dinh, N., Long, D.H., Yao, J.C.: Duality for robust linear infinite programming problems revisited. Vietnam J. Math. 46, 293–328 (2020)
Fakhar, M., Mahyarinia, M.R., Zafarani, J.: On nonsmooth robust multiobjective optimization under generalized convexity with applications to portfolio optimization. Eur. J. Oper. Res. 265, 39–48 (2018)
Fakhar, M., Mahyarinia, M.R., Zafarani, J.: On approximate solutions for nonsmooth robust multiobjective optimization problems. Optimization 68, 1653–1683 (2019)
Ginchev, I., Guerraggio, A., Rocca, M.: Isolated minimizers and proper efficiency for \(C^{0,1}\) constrained vector optimization problems. J. Math. Anal. Appl. 309, 353–368 (2005)
Ginchev, I., Guerraggio, A., Rocca, M.: From scalar to vector optimization. Appl. Math. 51, 5–36 (2006)
Ginchev, I., Guerraggio, A., Rocca, M.: Stability of property efficient points and isolated minimizers of constrained vector optimization problems. Rend. Circ. Mat. Palermo 56, 137–156 (2007)
Goberna, M.A., Kanzi, N.: Optimality conditions in convex multiobjective SIP. Math. Program. Ser. A 164, 167–191 (2017)
Goberna, M.A., Jeyakumar, V., Li, G., López, M.: Robust linear semi-infinite programming duality. Math. Program Ser. B 139, 185–203 (2013)
Goberna, M.A., Jeyakumar, V., Li, G., Vicente-Pèrez, J.: Robust solutions of multiobjective linear semi-infinite programs under constraint data uncertainty. SIAM J. Optim. 24, 1402–1419 (2014)
Guerraggio, A., Molho, E., Zaffaroni, A.: On the notion of proper efficiency in vector optimization. J. Optim. Theory Appl. 82, 1–21 (1994)
Jiao, L.G., Dinh, B.V., Kim, D.S., Yoon, M.: Mixed type duality for a class of multiple objective optimization problems with an infinite number of constraints. J. Nonlinear Convex Anal. 21, 49–61 (2020)
Jimenez, B.: Strict efficiency in vector optimization. J. Math. Anal. Appl. 265, 264–284 (2002)
Jimenez, B., Novo, V., Sama, M.: Scalarization and optimality conditions for strict minimizers in multiobjective optimization via contingent epiderivatives. J. Math. Anal. Appl. 352, 788–798 (2009)
Kabgani, A., Soleimani-damaneh, M.: Characterization of (weakly/properly/robust) efficient solutions in nonsmooth semi-infinite multiobjective optimization using convexificators. Optimization 67, 217–235 (2018)
Kanzi, N., Shaker Ardekani, J., Caristi, G.: Optimality, scalarization and duality in linear vector semi-infinite programming. Optimization 67, 523–536 (2018)
Kerdkaew, J., Wangkeeree, R., Lee, G.M.: On optimality conditions for robust weak sharp solution in uncertain optimizations. Carpathian J. Math. 36, 443–452 (2020)
Khanh, P.Q., Tung, N.M.: On the Mangasarian-Fromovitz constraint qualification and Karush-Kuhn-Tucker conditions in nonsmooth semi-infinite multiobjective programming. Optim. Lett. 14, 2055–2072 (2020)
Khantree, C., Wangkeeree, R.: On quasi approximate solutions for nonsmooth robust semiinfinite optimization problems. Carpathian J. Math. 35, 417–426 (2019)
Kim, D.S., Son, T.Q.: An approach to \(\varepsilon -\)duality theorems for nonconvex semi-infinite multiobjective optimization problems. Taiwan. J. Math. 22, 1261–1287 (2018)
Lee, J.H., Lee, G.M.: On optimality conditions and duality theorems for robust semi-infinite multiobjective optimization problems. Ann. Oper. Res. 269, 419–438 (2018)
Lee, J.H., Lee, G.M.: On \(\varepsilon -\)solutions for robust semi-infinite optimization problems. Positivity 23, 651–669 (2019)
Liu, J., Long, X.J., Sun, X.K.: Characterizing robust optimal solution sets for nonconvex uncertain semi-infinite programming problems involving tangential subdifferentials. J. Glob. Optim. (2022). https://doi.org/10.1007/s10898-022-01134-2
Long, X.J., Peng, Z.Y., Wang, X.F.: Characterizations of the solution set for nonconvex semi-infinite programming problems. J. Nonlinear Convex Anal. 17, 251–265 (2016)
Long, X.J., Xiao, Y.B., Huang, N.J.: Optimality conditions of approximate solutions for nonsmooth semi-infinite programming problems. J. Oper. Res. Soc. China 6, 289–299 (2018)
Long, X.J., Peng, Z.Y., Wang, X.: Stable Farkas lemmas and duality for nonconvex composite semi-infinite programming problems. Pac. J. Optim. 15, 295–315 (2019)
Long, X.J., Tang, L.P., Peng, J.W.: Optimality conditions for semi-infinite programming problems under relaxed quasiconvexity assumptions. Pac. J. Optim. 15, 519–528 (2019)
Long, X.J., Liu, J., Huang, N.J.: Characterizing the solution set for nonconvex semiinfinite programs involving tangential subdifferentials. Numer. Funct. Anal. Opt. 42, 279–297 (2021)
Mashkoorzadeh, F., Movahedian, N., Nobakhtian, S.: Robustness in nonsmooth nonconvex optimization problems. Positivity 25, 701–729 (2021)
Rahimi, M., Soleimani-damaneh, M.: Isolated efficiency in nonsmooth semi-infinite multi-objective programming. Optimization 67, 1923–1947 (2018)
Rahimi, M., Soleimani-damaneh, M.: Robustness in deterministic vector optimization. J. Optim. Theory Appl. 179(1), 137–162 (2018)
Rahimi, M., Soleimani-damaneh, M.: Characterization of norm-based robust solutions in vector optimization. J. Optim. Theory Appl. 185(2), 554–573 (2020)
Rezayi, A.: Characterization of isolated efficient solutions in nonsmooth multiobjective semi-infinite programming. Iran. J. Sci. Technol. Trans. Sci. 43, 1835–1839 (2019)
Rockafellar, R.T.: Convex Analysis. Princeton Landmarks in Mathematics, Princeton University Press, Princeton (1997)
Shitkovskaya, T., Hong, Z., Kim, D.S., Piao, G.R.: Approximate necessary optimality in fractional semi-infinite multiobjective optimization. J. Nonlinear Convex Anal. 21, 195–204 (2020)
Soleimani-damaneh, M.: Multiple-objective programs in Banach spaces: sufficiency for (proper) optimality. Nonlinear Anal. 67, 958–962 (2007)
Soleimani-damaneh, M.: Nonsmooth optimization using Mordukhovich’s subdifferential. SIAM J. Control Optim. 48, 3403–3432 (2010)
Son, T.Q., Tuyen, N.V., Wen, C.F.: Optimality conditions for approximate Pareto solutions of a nonsmooth vector optimization problem with an infinite number of constraints. Acta Math. Vietnam 45, 435–448 (2020)
Su, T.V., Luu, D.V.: Higher-order Karush-Kuhn-Tucker optimality conditions for Borwein properly efficient solutions of multiobjective semi-infinite programming. Optimization 71, 1749–1775 (2022)
Sun, X.K., Teo, K.L., Zheng, J., Liu, L.: Robust approximate optimal solutions for nonlinear semi-infinite programming with uncertainty. Optimization 69, 2109–2020 (2020)
Sun, X.K., Teo, K.L., Long, X.J.: Characterizations of robust \(\varepsilon -\)quasi optimal solutions for nonsmooth optimization problems with uncertain data. Optimization 70, 847–870 (2021)
Tung, L.T.: Karush-Kuhn-Tucker optimality conditions and duality for multiobjective semi-infinite programming via tangential subdifferentials. Numer. Funct. Anal. Optim. 41, 659–684 (2020)
Tung, L.T.: Strong Karush-Kuhn-Tucker optimality conditions for Borwein properly efficient solutions of multiobjective semi-infinite programming. Bull. Braz. Math. Soc. 52, 1–22 (2021)
Acknowledgements
The author would like to thank the editors for the help in the processing of the article. The author is very grateful to the two anonymous referees for many valuable comments and suggestions, which helped to improve the quality of the article.
Communicated by Anton Abdulbasah Kamil.
Pham, TH. On Isolated/Properly Efficient Solutions in Nonsmooth Robust Semi-infinite Multiobjective Optimization. Bull. Malays. Math. Sci. Soc. 46, 73 (2023). https://doi.org/10.1007/s40840-023-01466-6
Keywords
- Optimality condition
- Duality theorem
- Positively properly efficient solution
- Robust semi-infinite multiobjective optimization
- Generalized convexity