Abstract
In this paper, we derive sufficient and necessary conditions for an optimal solution of special classes of programming problems involving generalized \(E\mbox{-}[0,1]\) convex functions. A characterization of efficient solutions for \(E\mbox{-}[0,1]\) convex multi-objective programming problems is obtained. Finally, sufficient and necessary conditions are derived for a feasible solution to be an efficient or properly efficient solution.
1 Introduction
The study of multi-objective programming problems has been very active in recent years. The weak minimum (weakly efficient, weak Pareto) solution is an important concept in mathematical models, economics, decision theory, optimal control, and game theory (see, for example, [1–3]). In most works, the objective functions were assumed to be convex. Extending convexity is an area of active current research in optimization theory: various relaxations of convexity are possible, and the resulting functions are called generalized convex functions. The definition of generalized convex functions has occupied the attention of a number of mathematicians; for an overview of generalized convex functions we refer to [4–6]. A significant generalization of convexity is the concept of \(E\mbox{-}[0,1]\) convexity [7], which depends on the effect of an operator E on the range of the function and on the closed unit interval \([0,1]\). Motivated by the above considerations, the purpose of this paper is to formulate problems which involve generalized \(E\mbox{-}[0,1]\) convex functions. The paper is organized as follows. In Section 2, we define generalized \(E\mbox{-}[0,1]\) convex functions, which are called pseudo \(E\mbox{-}[0,1]\) convex functions, and obtain sufficient and necessary conditions for an optimal solution of \(E\mbox{-}[0,1]\) convex programming problems. In Section 3, we consider the Wolfe type dual and generalize its results under the \(E\mbox{-}[0,1]\) convexity assumptions. In Section 4, we formulate the multi-objective programming problem which involves \(E\mbox{-}[0,1]\) convex functions. An efficient solution for the problem considered is characterized by the weighting and ε-constraint approaches. At the end of this paper, we obtain sufficient and necessary conditions for a feasible solution to be an efficient or properly efficient solution for problems involving generalized \(E\mbox{-}[0,1]\) convex functions.
Let us survey, briefly, the definitions and some results as regards \(E\mbox{-}[0,1]\) convexity.
Definition 1
[7]
A real valued function \(f:M\subseteq R^{n} \to R\) is said to be an \(E\mbox{-}[0,1]\) convex function on M with respect to \(E:R\times[0,1]\to R\) if M is a convex set and, for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),
$$f(\lambda_{1} x+\lambda_{2} y) \le E\bigl(f(x),\lambda_{1} \bigr)+E\bigl(f(y) ,\lambda_{2} \bigr).$$
If \(f(\lambda_{1} x+\lambda_{2} y) \ge E(f(x),\lambda_{1} )+E(f(y) ,\lambda_{2} )\), then f is called an \(E\mbox{-}[0,1]\) concave function on M. If the inequality signs in the previous two inequalities are strict, then f is called strictly \(E\mbox{-}[0,1]\) convex and strictly \(E\mbox{-}[0,1]\) concave, respectively.
Every \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times [0,1]\to R\) is a convex function if \(E(f(x),\lambda)= \lambda f(x)\). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=(1+\lambda) t\), \(t \in R\), \(\lambda\in[0,1]\), then the function \(h(x)=\sum_{i=1}^{k}a_{i} f_{i} (x)\) is \(E\mbox{-}[0,1]\) convex on M for \(a_{i} \ge0\), \(i=1,2,\ldots,k\), if the functions \(f_{i} :R^{n} \to R\) are all \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda t,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then a numerical function \(f:M\subset R^{n} \to R^{+} \) defined on a convex set \(M\subseteq R^{n} \) is \(E\mbox{-}[0,1]\) convex if and only if its \(\operatorname{epi}(f)\) is convex. Let B be an open convex subset of \(R^{n} \) and let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then f is continuous on B if f is \(E\mbox{-}[0,1]\) convex on B. If \(f:R^{n} \to R\) is a differentiable \(E\mbox{-}[0,1]\) convex function at \(y\in M\) with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min\{ \lambda t,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then, for each \(x\in M\), we have \((x-y)\nabla f(y)\le f(x)-f(y)\). For more details as regards \(E\mbox{-}[0,1]\) convex functions, see [7].
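The defining inequality of Definition 1 can be checked numerically for a concrete mapping E. The following sketch (our own illustration, not part of the formal development; the choices \(E(t,\lambda)=(1+\lambda)t\) and \(f(x)=x^{2}\) are assumptions) samples random points and verifies \(f(\lambda_{1} x+\lambda_{2} y) \le E(f(x),\lambda_{1} )+E(f(y),\lambda_{2} )\):

```python
import random

# Illustrative choices (assumptions): E(t, lam) = (1 + lam) * t and the
# nonnegative convex function f(x) = x**2.  For these, the E-[0,1]
# convexity inequality follows from ordinary convexity of f, since
# l1*f(x) + l2*f(y) <= (1 + l1)*f(x) + (1 + l2)*f(y) when f >= 0.

def E(t, lam):
    return (1 + lam) * t

def f(x):
    return x * x

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    l1 = random.uniform(0, 1)
    l2 = 1 - l1
    # defining inequality of an E-[0,1] convex function
    assert f(l1 * x + l2 * y) <= E(f(x), l1) + E(f(y), l2) + 1e-12
print("E-[0,1] convexity inequality verified on random samples")
```

The inequality holds here because a nonnegative convex f satisfies \(f(\lambda_{1} x+\lambda_{2} y)\le\lambda_{1} f(x)+\lambda_{2} f(y)\le(1+\lambda_{1} )f(x)+(1+\lambda_{2} )f(y)\).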
Definition 2
[8]
A real valued function \(f:M\subseteq R^{n} \to R\) is said to be a quasi \(E\mbox{-}[0,1]\) convex function on M with respect to \(E:R\times[0,1]\to R\) if M is a convex set and, for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),
$$f(\lambda_{1} x+\lambda_{2} y) \le \max\bigl\{ E\bigl(f(x),\lambda_{1} \bigr),E\bigl(f(y) ,\lambda_{2} \bigr)\bigr\}.$$
If \(f(\lambda_{1} x+\lambda_{2} y) \ge \min\{ E(f(x),\lambda _{1} ),E(f(y) ,\lambda_{2} )\} \), then f is called a quasi \(E\mbox{-}[0,1]\) concave function on M. If the inequality signs in the previous two inequalities are strict, then f is called strictly quasi \(E\mbox{-}[0,1]\) convex and strictly quasi \(E\mbox{-}[0,1]\) concave, respectively.
Every quasi \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times[0,1]\to R\) is a convex function if \(E(f(x),\lambda)= \lambda f(x)\). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(f(x),\lambda)=f(\lambda x) \) for each \(x\in M\), \(\lambda\in [0,1]\), then \(f(\sum_{i=1}^{n}\lambda_{i} x_{i} )\le {\max } _{1\le i\le n} E(f(x_{i} ), \lambda_{i} )\) for each \(x_{i} \in M\), \(\lambda_{i} \ge0\), \(\sum_{i=1}^{n}\lambda_{i}=1\), if \(f:R^{n} \to R\) is \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min\{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then the level set \(L_{\alpha}^{E\mbox{-}[0,1]} \) is a convex set for each \(\alpha\in R\) if \(f:R^{n} \to R\) is quasi \(E\mbox{-}[0,1]\) convex on a convex set \(M\subseteq R^{n} \). Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\max \{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), and let \(\alpha= \min_{x} \min_{\lambda} E(f(x),\lambda)\), then the level set \(L_{\alpha}^{E\mbox{-}[0,1]} \) is a convex set if and only if f is quasi \(E\mbox{-}[0,1]\) convex. If \(f:R^{n} \to R\) is a differentiable quasi \(E\mbox{-}[0,1]\) convex function at \(y\in M\) with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min\{ \lambda ,t \} \), \(t \in R\), \(\lambda\in[0,1]\), then, for each \(x\in M\), we have \((x-y)\nabla f(y)\le0\). For more details as regards quasi \(E\mbox{-}[0,1]\) convex functions, see [8].
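The quasi \(E\mbox{-}[0,1]\) convexity inequality \(f(\lambda_{1} x+\lambda_{2} y) \le\max\{ E(f(x),\lambda_{1} ),E(f(y),\lambda_{2} )\} \) also admits a quick numerical check; the mapping \(E(t,\lambda)=\max\{ t,\lambda\} \) and the quasiconvex function \(f(x)=|x|\) below are illustrative assumptions of ours:

```python
import random

# Illustrative choices (assumptions): E(t, lam) = max(t, lam) and the
# quasiconvex function f(x) = |x|.  The right-hand side then dominates
# max(f(x), f(y)), which in turn bounds f at any convex combination.

def E(t, lam):
    return max(t, lam)

f = abs

random.seed(1)
for _ in range(10_000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    l1 = random.uniform(0, 1)
    l2 = 1 - l1
    # defining inequality of a quasi E-[0,1] convex function
    assert f(l1 * x + l2 * y) <= max(E(f(x), l1), E(f(y), l2)) + 1e-12
print("quasi E-[0,1] convexity inequality verified on random samples")
```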
2 Generalized \(E\mbox{-}[0,1]\) convex programming problems
In this section, we define generalized \(E\mbox{-}[0,1]\) convex functions, which are called pseudo \(E\mbox{-}[0,1]\) convex functions, and obtain sufficient and necessary conditions for an optimal solution for problems involving generalized \(E\mbox{-}[0,1]\) convex functions.
Definition 3
A real valued function \(f:M\subseteq R^{n} \to R\) is said to be a pseudo \(E\mbox{-}[0,1]\) convex function on a convex set \(M\subseteq R^{n} \) if there exists a strictly positive function \(b:R^{n} \times R^{n} \to R\) such that
for all \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\).
Remark 1
Every pseudo \(E\mbox{-}[0,1]\) convex function with respect to \(E:R\times[0,1]\to R\) is a convex function if \(E(t,\lambda)=\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\).
Proposition 1
Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)= \max\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). A convex function \(f:R^{n} \to R\) on a convex set \(M\subseteq R^{n} \) is a pseudo \(E\mbox{-}[0,1]\) convex function on M.
Proof
Let \(E(f(x),\lambda_{1} )< E(f(y) ,\lambda_{2} )\). Since f is a convex function on a convex set \(M\subseteq R^{n} \), for all \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\), we have
That is,
since \(b(x,y)=E(f(y) ,\lambda_{2} )-E(f(x) ,\lambda_{1} )>0\). This is the required result. □
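Proposition 1 can be illustrated numerically: whenever the premise \(E(f(x),\lambda_{1} )< E(f(y),\lambda_{2} )\) holds for \(E(t,\lambda)=\max\{ t,\lambda\} \) and a convex f, the value at the convex combination stays below \(E(f(x),\lambda_{1} )+b(x,y)\), where \(b(x,y)=E(f(y),\lambda_{2} )-E(f(x),\lambda_{1} )>0\) as in the proof. The convex function \(f(x)=(x-1)^{2}\) is an illustrative assumption:

```python
import random

def E(t, lam):
    return max(t, lam)

def f(x):  # a convex function, chosen for illustration
    return (x - 1) ** 2

random.seed(2)
checked = 0
for _ in range(20_000):
    x, y = random.uniform(-3, 3), random.uniform(-3, 3)
    l1 = random.uniform(0, 1)
    l2 = 1 - l1
    a, b = E(f(x), l1), E(f(y), l2)
    if a < b:  # premise of the pseudo E-[0,1] convexity conclusion
        checked += 1
        # conclusion: f at the combination is at most a + b(x,y) = b,
        # with b(x,y) = E(f(y), l2) - E(f(x), l1) > 0 as in the proof
        assert f(l1 * x + l2 * y) <= a + (b - a) + 1e-12
assert checked > 0
print(f"pseudo E-[0,1] convexity premise/conclusion checked on {checked} pairs")
```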
Theorem 1
Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\) and \(M\subseteq R^{n} \) be a convex set. If \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\), then \((x-y)\nabla f(y) <0\), for each \(x\in M\).
Proof
Since \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\),
for each \(x\in M \) and \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). That is,
Dividing the above inequality by \(\lambda_{1} >0\) and letting \(\lambda _{1} \to0\), we get
for each \(x\in M\). □
Remark 2
Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\), and \(M\subseteq R^{n} \) be a convex set. If \(f:R^{n} \to R\) is a differentiable pseudo \(E\mbox{-}[0,1]\) convex function at \(y\in M\), then \((x-y)\nabla f(y) \ge0 \Rightarrow E(f(x),\lambda_{1} )\ge E(f(y) ,\lambda_{2} )\), for each \(x\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\).
Lemma 1
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). If \(g_{i} :R^{n} \to R\) is an \(E\mbox{-}[0,1]\) convex function on \(R^{n} \), \(i=1,2,\ldots,m\), then the set \(M=\{ x\in R^{n} : g_{i} (x)\le 0,i=1,2,\ldots,m\}\) is a convex set.
Proof
Since \(g_{i} (x)\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E(t,\lambda)=\lambda\min \{ t,\lambda\} \), for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),
hence \(\lambda_{1} x+\lambda_{2} y\in M\) for all \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). This means that M is a convex set. □
Lemma 2
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\min \{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). If \(g_{i} :R^{n} \to R\) is a quasi \(E\mbox{-}[0,1]\) convex function on \(R^{n} \), \(i=1,2,\ldots,m\), then the set \(M=\{ x\in R^{n} : g_{i} (x)\le0,i=1,2,\ldots,m\}\) is a convex set.
Proof
Since \(g_{i} (x)\), \(i=1,2,\ldots,m\), are quasi \(E\mbox{-}[0,1]\) convex functions with respect to \(E(t,\lambda)=\min \{ t,\lambda\} \), for each \(x,y\in M\) and \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\),
hence \(\lambda_{1} x+\lambda_{2} y\in M\) for all \(\lambda_{1} ,\lambda_{2} \in[0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\). This means that M is a convex set. □
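Lemmas 1 and 2 can be illustrated on a concrete feasible set; the constraints below (a disk and a half-plane, convex and hence \(E\mbox{-}[0,1]\) convex for the mappings considered) are assumptions chosen for illustration:

```python
import random

# Illustrative feasible set M = {x : g1(x) <= 0, g2(x) <= 0} with convex
# constraint functions; the lemmas assert that M is convex, so convex
# combinations of feasible points must remain feasible.

def g1(x):  # x = (x1, x2): disk of radius 2
    return x[0] ** 2 + x[1] ** 2 - 4.0

def g2(x):  # half-plane x1 >= 0
    return -x[0]

def feasible(x):
    return g1(x) <= 1e-9 and g2(x) <= 1e-9

random.seed(3)
points = []
while len(points) < 200:  # rejection-sample feasible points
    p = (random.uniform(-2, 2), random.uniform(-2, 2))
    if feasible(p):
        points.append(p)

for _ in range(5_000):
    u, v = random.choice(points), random.choice(points)
    l = random.uniform(0, 1)
    w = (l * u[0] + (1 - l) * v[0], l * u[1] + (1 - l) * v[1])
    assert feasible(w)  # convexity of M
print("convexity of the feasible set M verified on samples")
```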
Now, we discuss the necessary and sufficient conditions for a feasible solution to be an optimal solution for \(E\mbox{-}[0,1]\) convex programming problems. Consider the following \(E\mbox{-}[0,1]\) convex programming problem:
Here \(f:R^{n} \to R\) and \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\).
Theorem 2
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exists a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)), and f, g are differentiable \(E\mbox{-}[0,1]\) convex functions with respect to the same E at \(x^{*} \). If there is \(u\in R^{m}\) and \(u\ge0\) such that \((x^{*} ,u)\) satisfies the following conditions:
then \(x^{*} \) is an optimal solution for problem (\(\bar{\mathrm{P}}\)).
Proof
For each \(x\in M\), we have
where the above inequalities hold because f, g are \(E\mbox{-}[0,1]\) convex at \(x^{*}\) with respect to the same E (see Theorem 4.1 in [7]). Thus, \(x^{*} \) is the minimizer of \(f(x)\) under the constraint \(g(x)\le0\), which implies that \(x^{*} \) is an optimal solution for problem (\(\bar{\mathrm{P}}\)). □
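A one-dimensional instance illustrates the sufficiency in Theorem 2; the functions \(f(x)=x^{2}\) and \(g(x)=1-x\) and the multiplier \(u=2\) are our own illustrative choices, not the paper's:

```python
# Toy instance (an assumption of ours): minimize f(x) = x**2 subject to
# g(x) = 1 - x <= 0.  The pair (x*, u) = (1, 2) satisfies stationarity
# f'(x*) + u*g'(x*) = 2 - 2 = 0 and complementary slackness u*g(x*) = 0,
# and a grid search confirms x* minimizes f over the feasible region.

def f(x):
    return x * x

def g(x):
    return 1.0 - x

x_star, u = 1.0, 2.0
assert abs(2 * x_star + u * (-1.0)) < 1e-12   # stationarity condition
assert abs(u * g(x_star)) < 1e-12             # complementary slackness
assert u >= 0 and g(x_star) <= 0              # feasibility, u >= 0

# feasible region is x >= 1; compare f(x*) with a feasible grid
best = min(f(1.0 + 0.001 * k) for k in range(5000))
assert f(x_star) <= best + 1e-12
print("the point satisfying the conditions of Theorem 2 is optimal:", x_star)
```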
Remark 3
[9]
In Theorem 2 above, since \(u\ge0\), \(g(x^{*} )\le0\), and \(u^{T} \nabla g(x^{*} )=0\), we have
If \(I(x^{*})=\{ i: g_{i} (x^{*} )=0\} \) and \(J=\{ i: g_{i} (x^{*} )<0\} \), then \(I\cup J=\{ 1,2,\ldots,m\}\), and (2) gives \(u_{i} =0\) for \(i\in J\). It is obvious then, from the proof of Theorem 2, that \(E\mbox{-}[0,1]\) convexity of \({g}_{I}\) at \(x^{*} \) is all that is needed instead of the \(E\mbox{-}[0,1]\) convexity of g at \(x^{*} \) as was assumed in the theorem above.
Theorem 3
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and scalars, \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (1) of Theorem 2 holds. If f is pseudo \(E\mbox{-}[0,1]\) convex, and \(g_{I} \) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in[0,1]\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).
Proof
Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ) ,\lambda_{2} )=0\), \(u_{i} \ge0\), \(\lambda_{1}, \lambda_{2} \in [0,1]\), \(\lambda_{1} +\lambda_{2} =1\), and \(g_{I} \) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \), we have
by using the above inequality in (1), and pseudo \(E\mbox{-}[0,1]\) convexity of f at \(x^{*} \), we obtain
Hence, \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □
The next two theorems use the idea proposed by Mahajan and Vartak [10].
Theorem 4
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and scalars, \(u_{i} \ge 0\), \(i\in I(x^{*} )\), such that (1) of Theorem 2 holds. If f is pseudo \(E\mbox{-}[0,1]\) convex, and \(u_{I}^{T} g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in [0,1]\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).
Proof
The proof of this theorem is similar to the proof of Theorem 3 except that the argument to get the inequality (3) is as follows: Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ) ,\lambda_{2} )\), \(u_{I} \ge0\), \(\lambda_{1}, \lambda_{2} \in [0,1]\), \(\lambda_{1} +\lambda_{2} =1\), we obtain
for all \(x\in M\). Quasi \(E\mbox{-}[0,1]\) convexity of \(u_{I}^{T} g_{I}\) at \(x^{*} \) yields
We can proceed as in the above theorem to prove that \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □
Theorem 5
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in[0,1]\). Suppose that there exists a feasible point \(x^{*} \) for (\(\bar{\mathrm{P}}\)) and the numerical function \(f+u_{I}^{T} g_{I} \) is pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \). If there is a scalar \(u\in R^{m} \) such that \((x^{*}, u)\) satisfies the conditions (1) of Theorem 2, then \(E(f(x^{*} ) ,\lambda_{2} )\), \(\lambda_{2} \in[0,1]\), is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)).
Proof
The proof of this theorem is similar to the proof of Theorem 4 except that the arguments are as follows: (1) can be written as
This can be rewritten in the form
which gives
because \(f+ u_{I}^{T} g_{I} \) is pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \), i.e.,
It follows, by using the definition of I, that
Hence, \(E(f(x^{*} ) ,\lambda_{2} )\) is an optimal solution in the objective space of problem (\(\bar{\mathrm{P}}\)). □
Theorem 6
(Necessary optimality criteria)
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\}\), \(t\in R\), \(\lambda\in [0,1]\). Assume that \(x^{*}\) is an optimal solution for problem (\(\bar {\mathrm{P}}\)) and there exists a feasible point x for (\(\bar{\mathrm{P}}\)) such that \(g_{i} (x)<0\), \(i=1,2,\ldots,m\). If \(g_{i} \), \(i\in I(x^{*} )\), is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then there exist scalars \(u_{i} \ge 0\), \(i \in I(x^{*} )\), such that \((x^{*} , u_{i} )\) satisfies
Proof
We wish to show that
The result will follow as in [11] by applying Farkas’ lemma. Assume (5) does not hold, i.e., there exists \(x\in R^{n} \) such that
Since by the assumed Slater-type condition,
and from \(E\mbox{-}[0,1]\) convexity of \(g_{i}\) at \(x^{*} \), we get
Hence for some positive β small enough
Similarly, for \(i\notin I(x^{*} )\), \(g_{i} (x^{*} )<0\), and for \(\beta >0\) small enough,
Thus, for β sufficiently small and all \(\rho>0\), \(x^{*} +\beta [(x-x^{*} )+\rho(\tilde{x}-x^{*} )]\) is feasible for problem (\(\bar {\mathrm{P}}\)). For sufficiently small \(\rho>0\) (6) gives
which contradicts the optimality of \(x^{* }\) for (\(\bar{\mathrm{P}}\)). Hence, the system (6) has no solution. The result then follows from an application of Farkas’ lemma, namely
□
3 Duality in \(E\mbox{-}[0,1]\) convexity
We consider the Wolfe type dual and generalize its results under the \(E\mbox{-}[0,1]\) convexity assumptions. Consider the following Wolfe type dual of problem (\(\bar{\mathrm{P}}\)):
where f, g are differentiable functions defined on \(R^{n} \). We now prove the following duality theorems, relating problem (\(\bar {\mathrm{P}}\)) and (\(\bar{\mathrm{D}}\)).
Theorem 7
(Weak duality)
Let \(E:R\times[0,1]\to R\) be a map such that \(E(t,\lambda)=\lambda \min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), and let there exist a feasible solution x for (\(\bar{\mathrm{P}}\)) and \((y,u)\), a feasible solution for (\(\bar{\mathrm{D}}\)). If f, g are \(E\mbox{-}[0,1]\) convex functions at y, then \(f(x)\nless f(y)+u^{T} g(y)\).
Proof
Let x be a feasible solution for (\(\bar{\mathrm{P}}\)) and \((y,u)\) be a feasible solution for (\(\bar{\mathrm{D}}\)). Suppose, contrary to the result, that \(f(x)< f(y)+u^{T} g(y)\); then
\(E\mbox{-}[0,1]\) convexity of f, g at y, implies that
and combining the above two inequalities gives
and by using inequality (9), we get
which contradicts the constraint \(\nabla f(y)+u^{T} \nabla g(y)=0\) of (\(\bar{\mathrm{D}}\)). □
Theorem 8
(Strong duality)
Let \(x^{*}\) be an optimal solution for (\(\bar{\mathrm{P}}\)) and g satisfy the Kuhn-Tucker constraint qualification at \(x^{*}\). Then, there exists \(u^{*} \in R^{m} \), such that \((x^{*} ,u^{*} )\) is a feasible solution for (\(\bar{\mathrm{D}}\)) and the (\(\bar{\mathrm{P}}\))-objective at \(x^{*} \) equals the (\(\bar{\mathrm{D}}\))-objective at \((x^{*} ,u^{*} )\). If f, g are \(E\mbox{-}[0,1]\) convex functions at \(x^{*}\) with respect to \(E:R\times [0,1]\to R\) such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), then \((x^{*} ,u^{*})\) is an optimal solution for problem (\(\bar{\mathrm{D}}\)).
Proof
Since g satisfies the Kuhn-Tucker constraint qualification at \(x^{*}\), there exists \(u^{*} \in R^{m} \), such that the following Kuhn-Tucker conditions are satisfied:
(10) and (13) show that \((x^{*} ,u^{*} )\) is a feasible solution for (\(\bar{\mathrm{D}}\)). Also, (11) shows that the (\(\bar{\mathrm{P}}\))-objective at \(x^{*} \) is equal to the (\(\bar{\mathrm{D}}\))-objective at \((x^{*} ,u^{*} )\). Now, from \(E\mbox{-}[0,1]\) convexity of f and g, we have
for each feasible point \((x,u)\) of (\(\bar{\mathrm{D}}\)). Hence, \((x^{*} ,u^{*} )\) is an optimal solution for problem (\(\bar{\mathrm{D}}\)). □
Example 1
Let \(E:R\times[0,1] \to R \) be defined as \(E(t,\lambda)=\lambda\sqrt[3]{t}\), where \(t \in R\), and \(\lambda\in[0,1]\). Consider the problem (\(\bar{\mathrm{P}}\))
where f is an \(E\mbox{-}[0,1]\) convex function on a convex set M. Formulate the dual problem (\(\bar{\mathrm{D}}\)) as follows:
From the system (10)-(13), we have
where \(u_{i}^{*} \ge0\), \(i=1,2,3,4\). By solving this system, we conclude that \((x^{*},u^{*})\) is the optimal solution of the dual problem (\(\bar{\mathrm{D}}\)) such that \(x^{*}=(2,1)\) and \(u^{*}=(3,0,6,0)\). Hence, \(x^{*}=(2,1)\) is the optimal solution of (\(\bar{\mathrm{P}}\)).
4 Generalized \(E\mbox{-}[0,1]\) convex multi-objective programming problems
In this section, we formulate a multi-objective programming problem which involves \(E\mbox{-}[0,1]\) convex functions. An efficient solution for the considered problem is characterized by weighting and ε-constraint approaches. At the end of this section, we obtain sufficient and necessary conditions for a feasible solution to be an efficient or properly efficient solution for this kind of problems. An \(E\mbox{-}[0,1]\) convex multi-objective programming problem is formulated as follows:
where \(f_{j} :R^{n} \to R \), \(j=1,2,\ldots,k\), and \(g_{i} :R^{n} \to R\), \(i=1,2,\ldots,m\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\).
Definition 4
[12]
A feasible solution \(x^{*} \) for (P) is said to be an efficient solution for (P) if and only if there is no other feasible x for (P) such that, for some \(i\in\{ 1,2,\ldots,k\} \),
$$f_{i} (x)< f_{i} \bigl(x^{*} \bigr)\quad\mbox{and}\quad f_{j} (x)\le f_{j} \bigl(x^{*} \bigr)\quad\mbox{for all } j\ne i.$$
Definition 5
[12]
An efficient solution \(x^{*} \in M\) for (P) is a properly efficient solution for (P) if there exists a scalar \(\mu>0\) such that for each i, \(i=1,2,\ldots,k\), and each \(x\in M\) satisfying \(f_{i} (x)< f_{i} (x^{*} )\), there exists at least one \(j\ne i\) with \(f_{j} (x)>f_{j} (x^{*} )\), and \([f_{i} (x)-f_{i} (x^{*} )]/[f_{j} (x^{*} )-f_{j}(x)]\le\mu\).
Lemma 3
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). If \(f:R^{n} \to R^{k} \) is an \(E\mbox{-}[0,1]\) convex function on a convex set \(M\subseteq R^{n} \), then the set \(A= \bigcup_{x\in M} A(x)\) is a convex set, where
Proof
Let \(z^{1} ,z^{2} \in A\), then for all \(x^{1} ,x^{2} \in M\) and \(\lambda_{1}, \lambda_{2} \in[0, 1]\), \(\lambda _{1} +\lambda_{2} = 1\), we have
since f is an \(E\mbox{-}[0,1]\) convex function on the convex set M. Then \(\lambda_{1} z^{1} +\lambda_{2} z^{2} \in A \), and hence A is a convex set. □
4.1 Characterizing efficient solutions by weighting approach
To characterize an efficient solution for problem (P) by the weighting approach [12], let us scalarize problem (P) to get the form
where \(w_{j} \ge0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), and \(f_{j} \), \(j=1,2,\ldots,k\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in [0,1]\), on convex set M.
Theorem 9
If \(\bar{x}\in M\) is an efficient solution for problem (P), then there exists \(w_{j} \ge0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), such that \(\bar{x}\) is an optimal solution for problem (\(\mathrm{P}_{w}\)).
Proof
Let \(\bar{x}\in M\) be an efficient solution for problem (P); then the system \(f_{j} (x)-f_{j} (\bar{x})<0\), \(j=1,2,\ldots,k\), has no solution \(x\in M\). By Lemma 3 and the generalized Gordan theorem [13], there exist \(p_{j} \ge0\), \(j=1,2,\ldots,k\), not all zero, such that \(\sum_{j=1}^{k}p_{j} [f_{j} (x)-f_{j} (\bar{x})]\ge0\) for all \(x\in M\), that is, \(\sum_{j=1}^{k}\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } f_{j} (x)\ge\sum_{j=1}^{k}\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } f_{j} (\bar{x})\).
Denote \(w_{j} =\frac{p_{j} }{\sum_{j=1}^{k}p_{j} } \), then \(w_{j} \ge 0\), \(j=1,2,\ldots,k\), \(\sum_{j=1}^{k}w_{j} =1\), and \(\sum_{j=1}^{k}w_{j} f_{j} (\bar{x}) \le\sum_{j=1}^{k}w_{j} f_{j} (x) \). Hence \(\bar{x}\) is an optimal solution for problem (\(\mathrm{P}_{w}\)). □
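The weighting characterization of Theorems 9 and 10 can be sketched numerically; the bi-objective pair \(f_{1} (x)=(x-1)^{2}\), \(f_{2} (x)=(x+1)^{2}\) on \(M=[-2,2]\) and the positive weights below are illustrative assumptions:

```python
# Weighting approach sketch (toy bi-objective instance of our own):
# minimize w1*f1 + w2*f2 over a grid of M and check that the minimizer
# is efficient, i.e. no grid point weakly dominates it with at least
# one strict improvement (Theorem 10, case of positive weights).

def f1(x):
    return (x - 1) ** 2

def f2(x):
    return (x + 1) ** 2

grid = [-2 + 0.001 * k for k in range(4001)]  # discretization of M = [-2, 2]
w1, w2 = 0.3, 0.7                             # positive weights summing to 1
x_bar = min(grid, key=lambda x: w1 * f1(x) + w2 * f2(x))

dominated = any(
    f1(x) <= f1(x_bar) and f2(x) <= f2(x_bar)
    and (f1(x) < f1(x_bar) or f2(x) < f2(x_bar))
    for x in grid
)
assert not dominated
print("weighted-sum minimizer is efficient:", round(x_bar, 3))
```

With positive weights, any point dominating \(\bar{x}\) would strictly decrease the weighted sum, contradicting optimality; this is exactly condition (i) of Theorem 10.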
Theorem 10
If \(\bar{x}\in M\) is an optimal solution for (\(\mathrm{P}_{\bar{w}}\)) corresponding to \(\bar{w}_{j} \), then \(\bar{x}\) is an efficient solution for problem (P) if one of the following two conditions holds:
-
(i)
\(\bar{w}_{j} >0\), \(\forall j=1,2,\ldots,k\);
-
(ii)
\(\bar{x}\) is the unique solution of (\(\mathrm{P}_{\bar{w}}\)).
Proof
For the proof see Chankong and Haimes [12]. □
4.2 Characterizing efficient solutions by ε-constraint approach
The ε-constraint approach is one of the common approaches for characterizing efficient solutions of multi-objective programming problems [12]. In the following we shall characterize an efficient solution for the multi-objective \(E\mbox{-}[0,1]\) convex programming problem (P) in terms of an optimal solution of the following scalar problem:
Here \(f_{j} \), \(j=1,2,\ldots,k\), are \(E\mbox{-}[0,1]\) convex functions with respect to \(E:R\times[0,1]\to R\) such that \(E(t,\lambda)=\min \{ t ,\lambda \} \), \(t \in R\), \(\lambda\in[0,1]\), on the convex set M.
Theorem 11
If \(\bar{x}\in M\) is an efficient solution for problem (P), then \(\bar{x}\) is an optimal solution for problem \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\) and \(\bar{\varepsilon}_{j} =f_{j} (\bar{x})\).
Proof
Suppose that \(\bar{x}\) is not an optimal solution for \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E})\), where \(\bar{\varepsilon}_{j} =f_{j} (\bar{x})\), \(j=1,2,\ldots,k\). Then there exists \(x\in M\) such that
by \(\bar{E}(\bar{\varepsilon}_{j},\bar{\lambda}_{j} )=\min(\bar {\varepsilon}_{j},\bar{\lambda}_{j} )\) and the convexity of M. This implies that the system \(f_{j} (x)-f_{j} (\bar{x})<0 \), \(j=1,2,\ldots,k\), has a solution \(x\in M\). Thus \(\bar{x}\) is not an efficient solution for problem (P), which is a contradiction. Hence \(\bar{x}\) is an optimal solution for problem \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\). □
Theorem 12
Let \(\bar{x}\in M\) be an optimal solution of \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E} )\) for all q, where \(\bar{\varepsilon}_{j} =f_{j} ( \bar{x})\), \(j=1,2,\ldots,k\). Then \(\bar{x}\) is an efficient solution for problem (P).
Proof
Since \(\bar{x}\in M\) is an optimal solution for \(\mathrm{P}_{q} (\bar{\varepsilon},\bar{E})\), where \(\bar {\varepsilon}_{j} =f_{j} (\bar{x})\), \(j=1,2,\ldots,k\), for each \(x\in M\), we get
where \(\bar{E}(\bar{\varepsilon}_{j},\bar{\lambda}_{j} )=\min(\bar {\varepsilon}_{j},\bar{\lambda}_{j} )\). This implies the system \(f_{j} (x)-f_{j} (\bar{x})<0\), \(j=1,2,\ldots,k\), has no solution \(x\in M\), i.e., \(\bar{x}\) is an efficient solution for problem (P). □
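The ε-constraint characterization of Theorems 11 and 12 admits a similar numerical sketch; the bi-objective instance below is an illustrative assumption of ours, and for simplicity the check uses the raw objectives rather than the map Ē:

```python
# Epsilon-constraint sketch (toy instance of our own): fix
# eps_j = f_j(x_bar) for j != q and minimize f_q subject to
# f_j(x) <= eps_j.  Theorem 11 says an efficient x_bar solves this
# constrained problem; we verify that on a grid.

def f1(x):
    return (x - 1) ** 2

def f2(x):
    return (x + 1) ** 2

grid = [-2 + 0.001 * k for k in range(4001)]  # discretization of M = [-2, 2]
x_bar = -0.5                                  # an efficient point in [-1, 1]
eps2 = f2(x_bar)                              # epsilon level for the j != q objective

# P_1(eps): minimize f1 subject to f2(x) <= eps2
candidates = [x for x in grid if f2(x) <= eps2 + 1e-12]
x_opt = min(candidates, key=f1)

# the constrained optimum attains the same objective value as x_bar
assert abs(f1(x_opt) - f1(x_bar)) < 1e-6
print("epsilon-constraint optimum matches the efficient point")
```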
4.3 Sufficient and necessary conditions for efficiency
In this section, we discuss the sufficient and necessary conditions for a feasible solution \(x^{*}\) to be efficient or properly efficient for problem (P) in the form of the following theorems.
Theorem 13
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in [0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that
If \(f_{i} \), \(i=1,2,\ldots,k\), and \(g_{i} \), \(i\in I(x^{*} )\), are differentiable \(E\mbox{-}[0,1]\) convex functions at \(x^{*} \in M\), then \(x^{*} \) is a properly efficient solution for problem (P).
Proof
Since \(f_{i} \), \(i=1,2,\ldots,k\), and \(g_{i}\), \(i\in I(x^{*} )\), are differentiable \(E\mbox{-}[0,1]\) convex functions at \(x^{*} \in M\), for any \(x\in M\), we have
Thus, \(\sum_{i=1}^{k}\gamma_{i} f_{i} (x) \ge\sum_{i=1}^{k}\gamma _{i} f_{i} (x^{*} ) \), for all \(x\in M\), which implies that \(x^{*} \) is the minimizer of \(\sum_{i=1}^{k}\gamma_{i} f_{i} (x) \) under the constraint \(g(x)\le0\). Hence, from Theorem 4.11 of [12], \(x^{*} \) is a properly efficient solution for problem (P). □
Theorem 14
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda )=\lambda\min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1 \), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that the triplet \((x^{*} ,\gamma_{i} ,u_{i} )\) satisfies (14) of Theorem 13. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is strictly \(E\mbox{-}[0,1]\) convex, and \(g_{I} \) is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(x^{*} \) is an efficient solution for problem (P).
Proof
Suppose that \(x^{*} \) is not an efficient solution for (P), then there exist a feasible \(x\in M\) and an index r such that
Since \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) is strictly \(E\mbox{-}[0,1]\) convex at \(x^{*} \), the previous two inequalities lead to
Also, \(E\mbox{-}[0,1]\) convexity of \(g_{i} \), \(i\in I(x^{*} )\), at \(x^{*} \) implies
and, for \(u_{i} \ge0\), \(i\in I(x^{*} )\), we get
Adding (15) and (16) contradicts (14). Hence, \(x^{*} \) is an efficient solution for problem (P). □
Remark 4
Similarly to Theorem 13, it can easily be seen that \(x^{*} \) becomes a properly efficient solution for (P), in the above theorem, if \(\gamma _{i} >0\), for all \(i=1,2,\ldots,k\).
Theorem 15
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma _{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem 13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is pseudo \(E\mbox{-}[0,1]\) convex, and \(g_{I}\) are quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2} )\), \(\lambda_{2} \in[0,1]\) is a properly nondominated solution in the objective space of problem (P).
Proof
Since \(E(g_{I} (x),\lambda_{1} )\le E(g_{I} (x^{*} ),\lambda_{2} )=0\), \(\lambda_{1} ,\lambda_{2} \in [0, 1]\), \(\lambda_{1} +\lambda_{2} = 1\), and from quasi \(E\mbox{-}[0,1]\) convexity of \(g_{I} \) at \(x^{*} \), \(u_{I} \ge0\), we get
by using the above inequality in (14), and from pseudo \(E\mbox{-}[0,1]\) convexity of \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) at \(x^{*} \), we get
Hence, \(E(f(x^{*} ),\lambda_{2} )\) is a properly nondominated solution in the objective space of problem (P). □
Theorem 16
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min \{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma _{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1 \), \(u_{i} \ge 0\), \(i\in I(x^{*} )\), such that (14) of Theorem 13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is strictly pseudo \(E\mbox{-}[0,1]\) convex and \(g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\) is a nondominated solution in the objective space of problem (P).
Proof
Suppose that \(E(f(x^{*} ),\lambda_{2} )\) is a dominated solution for (P); then there exist a feasible x for (P) and an index r such that
Since \(E(t,\lambda_{1})=\min\{ t,\lambda_{1} \} \), \(t \in R\), \(\lambda_{1} \in[0,1]\), we have
The strictly pseudo \(E\mbox{-}[0,1]\) convexity of \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) at \(x^{*} \) implies that
Also, quasi \(E\mbox{-}[0,1]\) convexity of \(g_{I} \) at \(x^{*} \) implies that
The proof now follows along lines similar to Theorem 14. □
Remark 5
Similarly to Theorem 15, it can easily be seen that \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), becomes a properly nondominated solution for (P), in the above theorem, if \(\gamma_{i} >0\), for all \(i=1,2,\ldots,k\).
Theorem 17
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem 13 holds. If \(\sum_{i=1}^{k}\gamma_{i} f_{i} \) is pseudo \(E\mbox{-}[0,1]\) convex and \(u_{I} g_{I} \) is quasi \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\) is a properly nondominated solution in the objective space of problem (P).
Proof
The proof is similar to the proof of Theorem 15. □
Theorem 18
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)= \min\{ t,\lambda\}\), \(t \in R\), \(\lambda\in[0,1]\). Suppose that there exist a feasible solution \(x^{*} \) for (P) and scalars \(\gamma_{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1\), \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that (14) of Theorem 13 holds. If \(I(x^{*} )\ne\emptyset\), \(\sum_{i=1}^{k}\gamma_{i} f_{i}\) is quasi \(E\mbox{-}[0,1]\) convex and \(u_{I} g_{I} \) is strictly pseudo \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\), then \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), is a nondominated solution in the objective space of problem (P).
Proof
The proof is similar to the proof of Theorem 16. □
Remark 6
Similarly to Theorem 15, it can easily be seen that \(E(f(x^{*} ),\lambda_{2})\), \(\lambda_{2} \in[0,1]\), becomes a properly nondominated solution for (P), in the above theorem, if \(\gamma_{i} >0\), for all \(i=1,2,\ldots,k\).
Theorem 19
(Necessary efficiency criteria)
Let \(E:R\times[0,1]\to R\) be a mapping such that \(E(t,\lambda)=\lambda\min\{ t,\lambda\} \), \(t \in R\), \(\lambda\in[0,1]\), and \(x^{*}\) be a properly efficient solution for problem (P). Assume that there exists a feasible point x for (P) such that \(g_{i} (x)<0\), \(i=1,2,\ldots,m\), and each \(g_{i} \), \(i\in I(x^{*} )\), is \(E\mbox{-}[0,1]\) convex at \(x^{*} \in M\). Then there exist scalars \(\gamma_{i} >0\), \(i=1,2,\ldots,k\) and \(u_{i} \ge0\), \(i\in I(x^{*} )\), such that the triplet \((x^{*} ,\gamma_{i} ,u_{i} )\) satisfies
Proof
Let the system
have a solution for some \(q\in\{1,2,\ldots,k\}\). Since, by the assumed Slater-type condition,
and from \(E\mbox{-}[0,1]\) convexity of \(g_{i} \) at \(x^{*} \), we get
Hence for some positive β small enough
Similarly, for \(i\notin I(x^{*} )\), \(g_{i} (x^{*} )<0\) and for \(\beta>0\) small enough
Thus, for β sufficiently small and all \(\rho>0\), \(x^{*} +\beta [(x-x^{*} )+\rho(\tilde{x}-x^{*} )]\) is feasible for problem (P). For sufficiently small \(\rho>0\), (18) gives
Now, for all \(j\ne q\) such that
consider the ratio
From (18), \(N(\beta,\rho)\to-(x-x^{*} )^{T} \nabla f_{q} (x^{*} )>0\). Similarly, \(D(\beta,\rho)\to(x-x^{*} )^{T} \nabla f_{j} (x^{*} )\le0\); but, by (21), \(D(\beta,\rho)>0\), so \(D(\beta,\rho)\to0\). Thus, the ratio in (22) becomes unbounded, contradicting the proper efficiency of \(x^{*}\) for (P). Hence, for each \(q=1,2,\ldots,k\), the system (18) has no solution. The result then follows from an application of Farkas’ lemma, namely
□
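The concluding step appeals to Farkas' lemma. Since the precise alternative system used is not displayed above, we recall one standard form of the lemma (see, e.g., Mangasarian [13]); this is offered for orientation, not as the paper's exact display:

```latex
% Farkas' lemma (standard form): for a matrix A in R^{m x n} and a
% vector b in R^m, exactly ONE of the two systems has a solution.
\[
\text{(I)}\quad Ax = b,\ \ x \ge 0,
\qquad\text{or}\qquad
\text{(II)}\quad A^{T}y \ge 0,\ \ b^{T}y < 0 .
\]
```

Applied with the gradients of the objectives and the active constraints at \(x^{*}\), the infeasibility of system (18) for every \(q=1,2,\ldots,k\) produces the asserted multipliers \(\gamma_{i}>0\) and \(u_{i}\ge0\).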
Theorem 20
Assume that \(x^{*} \) is an efficient solution for problem (P) at which the Kuhn-Tucker constraint qualification is satisfied. Then, there exist scalars \(\gamma_{i} \ge0\), \(i=1,2,\ldots,k\), \(\sum_{i=1}^{k}\gamma_{i} =1\), \(u_{j} \ge0\), \(j=1,2,\ldots,m\), such that
Proof
Since every efficient solution is a weak minimum, by applying Theorem 2.2 of Weir and Mond [14] to \(x^{*} \), we see that there exist \(\gamma\in R^{k} \), \(u\in R^{m} \) such that
where \(e=(1,1,\ldots,1)\in R^{k} \). □
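For orientation, since the multiplier system of Theorem 20 is not reproduced above, we record the generic weighted Kuhn-Tucker conditions for a problem of the form (P); this is a sketch of the standard form, not the paper's exact display:

```latex
% Standard weighted Kuhn-Tucker (stationarity and complementary
% slackness) conditions at x* for a k-objective, m-constraint problem.
\[
\sum_{i=1}^{k}\gamma_{i}\,\nabla f_{i}(x^{*})
  +\sum_{j=1}^{m}u_{j}\,\nabla g_{j}(x^{*}) = 0,
\qquad
u_{j}\,g_{j}(x^{*}) = 0,\quad j=1,2,\ldots,m .
\]
```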
Example 2
Let \(E:R\times[0,1] \to R \) be defined as \(E(t,\lambda)=\lambda\sqrt[3]{t}\), where \(t \in R\) and \(\lambda\in[0,1]\). Consider the problem:
where \(f_{1} \) and \(f_{2} \) are \(E\mbox{-}[0,1]\) convex functions on the convex set M. It is clear that \(f(M)\) is an \(R_{\geqq}^{2}\)-nonconvex set (see Figure 1(a)), but the image of the objective space \(f(M)\) under the map E is an \(R_{\geqq}^{2}\)-convex set (see Figure 1(b)).
(i) Formulate the weighting problem (\(\mathrm{P}_{w}\)) as
where \(w_{1} ,w_{2} \ge0\), \(w_{1} +w_{2} =1\).
It is clear that a point \((0,y)\in M\), \(1 \le y \le3\), is an optimal solution for (\(\mathrm{P}_{w} \)) corresponding to \(w=(w_{1} ,0)\), \(0 < w_{1} \le1\), and a point \((x,1)\in M\), \(0 \le x \le2\), is an optimal solution for (\(\mathrm{P}_{w} \)) corresponding to \(w=(0,w_{2} )\), \(0 < w_{2} \le1\). Hence the set of efficient solutions of problem (P) can be described as
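In general, for a problem with k objectives, the weighting approach scalarizes (P) as follows (the two-objective problem above is the case \(k=2\)):

```latex
% Generic weighting problem associated with the multi-objective
% problem (P): nonnegative weights summing to one.
\[
(\mathrm{P}_{w}):\quad
\min_{x\in M}\ \sum_{i=1}^{k} w_{i}\,f_{i}(x),
\qquad
w_{i}\ge 0,\ i=1,2,\ldots,k,\quad \sum_{i=1}^{k} w_{i}=1 .
\]
```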
(ii) Formulate the problem \(\mathrm{P}_{q} (\varepsilon )\) as
and
It is easy to see that the points \(\{ (x,1)\in M: 0 \le x \le2\} \cup\{ (0,y)\in M: 1 \le y \le3\} \) are optimal solutions corresponding to
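For context, the ε-constraint approach retains one objective and converts the remaining objectives into constraints; in its standard form (cf. Chankong and Haimes [12]) the problem \(\mathrm{P}_{q}(\varepsilon)\) reads:

```latex
% Generic epsilon-constraint problem associated with (P): minimize
% the q-th objective subject to bounds on the other objectives.
\[
\mathrm{P}_{q}(\varepsilon):\quad
\min_{x\in M}\ f_{q}(x)
\quad\text{subject to}\quad
f_{j}(x)\le \varepsilon_{j},\ \ j=1,2,\ldots,k,\ j\ne q .
\]
```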
(iii) Applying the Kuhn-Tucker conditions yields
and
where \(\gamma_{i} \ge0\), \(i=1,2\), \(\gamma_{1} +\gamma _{2}=1\), and \(u_{i} \ge0\), \(i=1,2,3,4\). From this system we conclude that the set of efficient solutions can be described as
References
Chen, J-W, Li, J, Wang, J-N: Optimality conditions and duality for nonsmooth multiobjective optimization problems with cone constraints. Acta Math. Sci. 32(1), 1-12 (2012)
Youness, EA, Emam, T: Characterization of efficient solutions of multi-objective optimization problems involving semi-strongly and generalized semi-strongly E-convexity. Acta Math. Sci. Ser. B 28(1), 7-16 (2008)
Mishra, SK, Wang, SY, Lai, KK: Generalized Convexity and Vector Optimization. Nonconvex Optimization and Its Applications, vol. 90. Springer, Berlin (2009)
Mishra, SK, Giorgi, G: Invexity and Optimization. Nonconvex Optimization and Its Applications, vol. 88. Springer, Berlin (2008)
Emam, T: Roughly B-invex programming problems. Calcolo 48(2), 173-188 (2011)
Komlósi, S, Rapcsák, T, Schaible, S: Generalized Convexity. Springer, Berlin (1994)
Youness, EA, El-Banna, AZ, Zorba, S: \(E\mbox{-}[0, 1]\) Convex functions. Nuovo Cimento 120(4), 397-406 (2005)
Youness, EA, El-Banna, AZ, Zorba, S: Quasi \(E\mbox{-}[0, 1]\) convex functions. In: 8th International Conference on Parametric Optimization, 27 November - 1 December (2005)
Kaul, RN, Kaur, S: Optimality criteria in nonlinear programming involving nonconvex functions. J. Math. Anal. Appl. 105, 104-112 (1985)
Mahajan, DG, Vartak, MN: Generalizations of some duality theorems in nonlinear programming. Math. Program. 12, 293-317 (1977)
Bazaraa, MS, Shetty, CM: Nonlinear Programming: Theory and Algorithms. Wiley, New York (1979)
Chankong, V, Haimes, YY: Multiobjective Decision Making: Theory and Methodology. North-Holland, Amsterdam (1983)
Mangasarian, OL: Nonlinear Programming. McGraw-Hill, New York (1969)
Weir, T, Mond, B: Generalized convexity and duality in multiple objective programming. Bull. Aust. Math. Soc. 39, 287-299 (1989)
Acknowledgements
The author is grateful to everyone who contributed, in one way or another, to the success of this paper. Also, UOH funds are gratefully acknowledged for a research grant that has been given so that this research could be conducted.
Competing interests
The author declares that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Emam, T. Optimality for \(E\mbox{-}[0,1]\) convex multi-objective programming problems. J Inequal Appl 2015, 160 (2015). https://doi.org/10.1186/s13660-015-0675-7