Abstract
We consider nonsmooth multiobjective fractional programming on normed spaces. Using first- and second-order approximations as generalized derivatives, first- and second-order optimality conditions are established. Unlike the existing results, we completely avoid convexity assumptions. Our results can be applied even in infinite-dimensional cases involving non-Lipschitz maps.
1 Introduction
Fractional programming has been an intensively developed topic in optimization; see, e.g., the research papers (Borwein 1976; Schaible 1982; Singh 1981, 1986), a basic presentation in a handbook (Schaible 1995), and the bibliographies (Schaible 1982; Stancu-Minasian 2006). Along with numerous contributions to multiobjective optimization, a very important area with significant practical applications in science, economics and engineering, multiobjective problems of fractional programming have also become attractive to many researchers; see, e.g., (Bector et al. 1993; Kim et al. 2005; Kuk et al. 2001; Liang et al. 2001; Lyall et al. 1997; Nobakhtian 2008; Reedy and Mukherjee 2001; Soleimani-Damaneh 2008; Zalmai 2006). In these papers, increasing efforts to deal with nonsmooth problems, relying on various generalized derivatives, can be recognized. Severe convexity requirements, especially in sufficient optimality conditions, have been gradually reduced using relaxed convexity notions. However, we observe that there have been almost no contributions to problems in infinite-dimensional spaces and that convexity assumptions have not been completely removed so far.
Inspired by these observations, in this paper we consider a nonsmooth multiobjective fractional programming problem in normed spaces. To completely avoid convexity restrictions, we employ first- and second-order approximations as generalized derivatives. This kind of derivative has proved effective in problems with a high level of nonsmoothness and without convexity of the data; see Khanh and Tuan (2006, 2008, 2009, 2011, 2014).
The organization of this paper is as follows. In Sect. 2, we state our fractional problem and recall notions needed in the sequel. Section 3 is devoted to properties and calculus rules for first- and second-order approximations for later use. First-order optimality conditions are discussed in Sect. 4. The last Sect. 5 deals with second-order optimality conditions in both first-order differentiable cases and completely nonsmooth cases.
2 Preliminaries
Throughout the paper, if not otherwise specified, let the spaces under consideration, such as \(X, Y,\) and \(Y_i\) (for \(i\) in a given index set), be normed spaces, and let \(K\subseteq Y\) and \(C\subseteq \mathbb {R}^m\) be proper closed convex cones with nonempty interior, with \(C\) pointed. For \(A\subseteq X\), int\(A\), cl\(A\), bd\(A\), \(A_\infty \) and cone\(A\) denote its interior, closure, boundary, recession cone (i.e., the cone \(\{\lim t_na_n\;|\; a_n\in A, t_n\downarrow 0\}\)) and the cone generated by \(A\) (i.e., \(\{tx\;|\; x\in A,\; t\ge 0\}\)), respectively. \(X^*\) is the dual space of \(X\), \(B_X\) stands for the closed unit ball in \(X\), and \(B(x_0,\epsilon )\) is the open ball with center \(x_0\) and radius \(\epsilon \). We consider the following multiobjective fractional programming problem
$$\begin{aligned} \mathrm{(P)}\qquad \min \; \varphi (x):=\left( \frac{f_1(x)}{g_1(x)},\ldots ,\frac{f_m(x)}{g_m(x)}\right) \quad \text{ subject } \text{ to }\quad h(x)\in -K, \end{aligned}$$
where \(f_i,g_i:X\rightarrow \mathbb {R}\), \(h:X\rightarrow Y\) with \(g_i\) being continuous and nonzero-valued for \(i=1,\ldots ,m\).
Set \(f(x):=(f_1(x),\ldots ,f_m(x))\), \(g(x):=(g_1(x),\ldots ,g_m(x))\) and \(S:=\{x\in X\;|\; h(x)\in -K \}\) (the feasible set).
Definition 2.1
(e.g., Khanh and Tuan 2008)
-
(i)
A point \(x_0\in S\) is called a local weak solution (local Pareto solution) of (P) if there exists a neighborhood \(U\) of \(x_0\) such that, for every \(x\in U\cap S\),
$$\begin{aligned} \varphi (x)-\varphi (x_0)\not \in -\mathrm{int} C \; (\varphi (x)-\varphi (x_0)\not \in -C{\setminus }\{0\}, \text{ respectively }). \end{aligned}$$The set of all local weak (local Pareto, respectively) solutions of (P) is denoted by LWE\((\varphi , S)\) (LE\((\varphi , S)\), respectively).
-
(ii)
For \(k\in {\mathbb {N}}\), \(x_0 \in S\) is called a local firm solution of order \(k\) of (P), denoted by \(x_0\in \mathrm{LFE(}k,\varphi ,S)\), if there are \(\gamma >0\) and a neighborhood \(U\) of \(x_0\) such that, for all \(x\in U\cap S{\setminus } \{x_0\}\),
$$\begin{aligned} (\varphi (x)+C)\cap B_{{\mathbb {R}}^m}(\varphi (x_0),\gamma \Vert x-x_0\Vert ^k)=\emptyset . \end{aligned}$$
Note that a firm solution is also known in the literature as an isolated solution or strict solution. Observe that, for \(p \ge m\),
$$\begin{aligned} \mathrm{LFE}(m,\varphi ,S)\subseteq \mathrm{LFE}(p,\varphi ,S)\subseteq \mathrm{LE}(\varphi ,S)\subseteq \mathrm{LWE}(\varphi ,S). \end{aligned}$$
So, necessary conditions for the right-most term hold true also for the others and sufficient conditions for the left-most term are valid for the others as well.
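As a small numerical sanity check (our own illustration, not part of the paper), Definition 2.1(ii) can be tested in the simplest scalar setting \(m=1\), \(C=[0,\infty )\), \(\varphi (x)=x^2\), \(x_0=0\): the emptiness of the intersection \((\varphi (x)+C)\cap B(\varphi (x_0),\gamma \Vert x-x_0\Vert ^2)\) amounts to \(\mathrm{dist}(\varphi (x_0),\varphi (x)+C)\ge \gamma |x-x_0|^2\). The choice \(\gamma =0.5\) below is an assumption made only for this sketch.

```python
# Numerical sketch (illustration only): x0 = 0 is a local firm solution
# of order 2 for phi(x) = x**2 with C = [0, inf) in R (case m = 1).
# Emptiness of (phi(x) + C) n B(phi(x0), gamma*|x - x0|**2) is equivalent
# to dist(phi(x0), phi(x) + C) >= gamma*|x - x0|**2.

def phi(x):
    return x * x

gamma = 0.5   # any gamma in (0, 1] works here; 0.5 is an arbitrary choice
x0 = 0.0

def dist_to_phi_plus_C(x):
    # distance from phi(x0) = 0 to the ray [phi(x), +inf)
    return max(phi(x) - phi(x0), 0.0)

firm = all(
    dist_to_phi_plus_C(x) >= gamma * abs(x - x0) ** 2
    for x in [t / 1000 for t in range(-1000, 1001) if t != 0]
)
print(firm)  # the intersection is empty for every tested x != x0
```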
Let \(L(X, Y )\) be the space of continuous linear mappings from \(X\) into \(Y\) and \(L(X,X,Y)\) that of continuous bilinear mappings from \(X\times X\) into \(Y\). For \(A \subseteq L(X, Y )\) and \(x\in X\) (\(B\subseteq L(X,X,Y)\) and \(x,z\in X\)), denote \(A(x) := \{ M(x)\;|\; M \in A\} \;\; (B(x,z):=\{N(x,z)\;|\; N\in B\})\). \(o(t^k)\), for \(t > 0\) and \(k \in {\mathbb {N}}\), stands for a moving point such that \(o(t^k)/t^k \rightarrow 0\) as \(t\downarrow 0\). For a cone \(K\subseteq Y\), the positive polar cone of \(K\) is
$$\begin{aligned} K^*:=\{y^*\in Y^*\;|\; y^*(k)\ge 0 \;\text{ for } \text{ all } k\in K\}. \end{aligned}$$
Definition 2.2
(Classic) Let \(x_0, v \in X\) and \(S \subseteq X\).
-
(i)
The contingent (or Bouligand) cone of \(S\) at \(x_0\) is
$$\begin{aligned} T(S,x_0):=\{v\in X\,|\,\exists t_n\downarrow 0, \, \exists v_n\rightarrow v,\forall n\in {\mathbb {N}}, x_0+t_nv_n\in S\}. \end{aligned}$$ -
(ii)
The second-order contingent set of \(S\) at \((x_0, v)\) is
$$\begin{aligned} T^2(S,x_0,v):=\left\{ w\in X\,|\,\exists t_n\downarrow 0, \, \exists w_n\rightarrow w,\forall n\in {\mathbb {N}}, x_0+t_nv+\frac{1}{2}t_n^2w_n\in S\right\} . \end{aligned}$$ -
(iii)
The asymptotic second-order tangent cone of \(S\) at \((x_0, v)\), see Penot (2000), is
$$\begin{aligned} T''(S,x_0,v)&:= \left\{ w\in X\,|\,\exists (t_n,r_n)\downarrow (0,0):\frac{t_n}{r_n}\rightarrow 0, \, \exists w_n\rightarrow w, \right. \\&\quad \quad \left. \forall n\in {\mathbb {N}}, x_0+t_nv+\frac{1}{2}t_nr_nw_n\in S\ \right\} . \end{aligned}$$
A subset \(S\subseteq X\) is said to be polyhedral if it is the intersection of a finite number of closed half-spaces.
Definition 2.3
(Jourani and Thibault 1993) Let \(f:X\rightarrow Y\).
-
(i)
A set \(A_f(x_0) \subseteq L(X, Y )\) is said to be a first-order approximation of \(f\) at \(x_0\in X\) if there exists a neighborhood \(U\) of \(x_0\) such that, for all \(x \in U\),
$$\begin{aligned} f(x) - f(x_0) \in A_f(x_0)(x - x_0) + o(\Vert x - x_0\Vert ); \end{aligned}$$ -
(ii)
A set \((A_f(x_0),B_f(x_0))\subseteq L(X,Y)\times L(X,X,Y)\) is called a second-order approximation of \(f\) at \(x_0\) if
-
(a)
\(A_f(x_0)\) is a first-order approximation of \(f\) at \(x_0\);
-
(b)
\(f(x) - f(x_0) \in A_f(x_0)(x - x_0) + B_f(x_0)(x-x_0,x-x_0)+o(\Vert x - x_0\Vert ^2)\) for all \(x\) in some neighborhood of \(x_0\).
This kind of generalized derivative contains a major part of the known notions of derivatives as special cases (see Khanh and Tuan 2006, 2008). Furthermore, it is advantageous that even an infinitely discontinuous map may have approximations, as is shown by the following example.
Example 2.1
Let \(X=Y={\mathbb {R}}\), \(x_0=0\), and
$$\begin{aligned} f(x)= {\left\{ \begin{array}{ll} -x &{} \text{ if } x\le 0,\\ -1/x &{} \text{ if } x> 0. \end{array}\right. } \end{aligned}$$
Then, \(f\) is infinitely discontinuous at \(x_0\), but it admits \(A_f(x_0)= ]-\infty ,\alpha [\) with \(-1<\alpha <0\) as an approximation. Indeed, consider \(x\) close to \(x_0\). If \(x\le 0\), then \(f(x)=-x=(-1)(x-0)+o(|x|)\) with \(o(|x|)=0\) and \(-1\in A_f(x_0)=(-\infty ,\alpha )\). Now consider \(x>0\). We need to find \(\beta _x\in A_f(x_0)\) such that \(f(x)=-1/x=\beta _x(x-0)+o(|x|)\). Choose \(\delta >0\) with \(-1/\delta ^2<\alpha \). Then, for \(0<x<\delta \), one has \(-1/x^2<-1/\delta ^2<\alpha \), so \(\beta _x:=-1/x^2\) belongs to \(A_f(x_0)=(-\infty ,\alpha )\) and \(f(x)=\beta _x x\) with \(o(|x|)=0\), and we are done.
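The argument of Example 2.1 can be checked numerically. The sketch below (our own illustration; the value \(\alpha =-0.5\) and the grid of test points are arbitrary choices) takes \(f(x)=-x\) for \(x\le 0\) and \(f(x)=-1/x\) for \(x>0\), picks \(\beta _x\) as in the example, verifies \(\beta _x\in (-\infty ,\alpha )\), and confirms that the first-order residual divided by \(|x|\) is negligible.

```python
# Numerical check of Example 2.1 (illustration only): f is unbounded near
# x0 = 0, yet A_f(x0) = (-inf, alpha) with -1 < alpha < 0 serves as a
# first-order approximation, because for each x near 0 some beta_x in
# (-inf, alpha) makes the residual |f(x) - beta_x*x| of order o(|x|).

alpha = -0.5  # arbitrary value in (-1, 0)

def f(x):
    return -x if x <= 0 else -1.0 / x

def beta(x):
    # beta_x = -1 for x <= 0; beta_x = -1/x**2 for small x > 0
    # (note -1/x**2 < alpha once x is small, so beta_x lies in A_f(x0))
    return -1.0 if x <= 0 else -1.0 / (x * x)

xs = [t / 10000 for t in range(-100, 101) if t != 0]  # x in [-0.01, 0.01]
ok_membership = all(beta(x) < alpha for x in xs)
max_residual = max(abs(f(x) - beta(x) * x) / abs(x) for x in xs)
print(ok_membership, max_residual)  # residual vanishes up to rounding
```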
For \(m\in {\mathbb {N}}\), \(f:X\rightarrow Y\) is said to be \(m\)-calm at \(x_0\) if there exist \(L>0\) and a neighborhood \(U\) of \(x_0\) such that, for all \(x\in U\),
$$\begin{aligned} \Vert f(x)-f(x_0)\Vert \le L\Vert x-x_0\Vert ^m. \end{aligned}$$
In this case, \(L\) is called the coefficient of calmness of \(f\). (1-calmness is called simply calmness). Of course, if \(f\) is \(m\)-calm at \(x_0\), then \(f\) is continuous at \(x_0\), for any \(m\in \mathbb {N}\).
Let \(M_{\alpha }\) and \(M\) be in \(L(X,Y)\). The net \(\{M_{\alpha }\}\) is said to converge pointwise to \(M\), written \(M_{\alpha }\mathop {\rightarrow }\limits ^{p}M\) or \(M=\) p-lim\(M_{\alpha }\), if \(\lim M_{\alpha }(x)=M(x)\) for all \(x\in X\). A similar definition is adopted for \(N_{\alpha }, N\in L(X,X,Y)\). Note that the pointwise convergence topology is not metrizable. If \(Y=\mathbb {R}\), this topology coincides with the weak\(^*\) topology. A subset \(A \subseteq L(X, Y )\) (\(B\subseteq L(X,X,Y)\)) is called asymptotically pointwise compact (shortly, asymptotically p-compact) if (see Bector et al. 1993; Kim et al. 2005)
-
(a)
each bounded net \(\{M_{\alpha }\}\subseteq A\) (\(\subseteq B\), respectively) has a subnet \(\{M_{\beta }\}\) and \(M\in L(X,Y)\) (\(M\in L(X,X,Y)\)) such that \(M=\) p-lim\(M_{\beta }\);
-
(b)
for each net \(\{M_{\alpha }\}\subseteq A\) (\(\subseteq B\), respectively) with \(\lim \Vert M_{\alpha }\Vert = \infty \), the net \(\{M_{\alpha }/\Vert M_{\alpha }\Vert \}\) has a subnet converging pointwise to some \(M\in L(X,Y){\setminus } \{0\}\) (\(M\in L(X,X,Y){\setminus } \{0\}\)).
If pointwise convergence is replaced by convergence in the norm topology, the term “asymptotic compactness” is used. If \(X\) and \(Y\) are finite dimensional, every subset is asymptotically p-compact and asymptotically compact. In infinite dimensions, however, asymptotic p-compactness is weaker than asymptotic compactness.
For \(A\subseteq L(X,Y)\) and \(B\subseteq L(X,X,Y)\), we adopt the notations
$$\begin{aligned} \text{ p-cl }A&:=\{M\in L(X,Y)\;|\;\exists \{M_{\alpha }\}\subseteq A:\; M_{\alpha }\mathop {\rightarrow }\limits ^{p}M\},\\ A_{\infty }&:=\{M\in L(X,Y)\;|\;\exists t_{\alpha }\downarrow 0,\;\exists \{M_{\alpha }\}\subseteq A:\; t_{\alpha }M_{\alpha }\rightarrow M\},\\ \text{ p- }A_{\infty }&:=\{M\in L(X,Y)\;|\;\exists t_{\alpha }\downarrow 0,\;\exists \{M_{\alpha }\}\subseteq A:\; t_{\alpha }M_{\alpha }\mathop {\rightarrow }\limits ^{p}M\}, \end{aligned}$$and similarly for p-cl\(B\) and p-\(B_{\infty }\).
Observe that, p-cl\(A\), p-cl\(B\) are the pointwise closures, \(A_{\infty }\) is the recession cone and p-\(A_{\infty }\), p-\(B_{\infty }\) are the pointwise recession cones of the given sets.
3 Properties and calculus rules of approximations
First, some properties of approximations of maps with regular characters are collected in the following proposition.
Proposition 3.1
Let \(f:X\rightarrow Y\).
-
(i)
Suppose \((\{0\},B_f(x_0))\) is a second-order approximation of \(f\) at \(x_0\) and \(B_f(x_0)\) is bounded. Then, \(f\) is 2-calm at \(x_0\).
-
(ii)
Let \(Y={\mathbb {R}}\). If the Fréchet derivative \(f'\) exists in a convex neighborhood \(U\) of \(x_0\) and is calm at \(x_0\) with coefficient \(L\), and \(f'(x_0)=0\), then \(f\) is 2-calm at \(x_0\) with the same coefficient \(L\).
-
(iii)
If \(f\) is 2-calm at \(x_0\), then \(f\) is Fréchet differentiable at \(x_0\) and \(f'(x_0)=0\).
Proof
-
(i)
By the assumption, there exists \(L>0\) such that \(\Vert M\Vert \le L\) for all \(M\in B_f(x_0)\). Furthermore, there is a neighborhood \(U\) of \(x_0\) such that, for all \(x\in U\), there exists \(M_x\in B_f(x_0)\) with
$$\begin{aligned} \Vert f(x)-f(x_0)\Vert =\Vert M_{x}(x-x_0,x-x_0)+o(\Vert x-x_0\Vert ^2)\Vert \\ \le L.\Vert x-x_0\Vert .\Vert x-x_0\Vert +\Vert o(\Vert x-x_0\Vert ^2)\Vert \le (L+\epsilon )\Vert x-x_0\Vert ^2, \end{aligned}$$for some \(\epsilon >0\). Hence, \(f\) is 2-calm at \(x_0\).
-
(ii)
From the mean value theorem, for \(x\in U\), there exists \(c:=\alpha x_0+(1-\alpha )x\) with \(\alpha \in [0,1]\) such that \(f(x)-f(x_0)=f'(c)(x-x_0)\). Hence,
$$\begin{aligned} |f(x)-f(x_0)|&\le \Vert f'(c)\Vert .\Vert x-x_0\Vert =\Vert f'(c)-f'(x_0)\Vert .\Vert x-x_0\Vert \\ {}&\le L\Vert c-x_0\Vert .\Vert x-x_0\Vert \\&= L\Vert \alpha x_0+(1-\alpha )x-x_0\Vert .\Vert x-x_0\Vert =L(1-\alpha )\Vert x-x_0\Vert ^2\\ {}&\le L\Vert x-x_0\Vert ^2. \end{aligned}$$ -
(iii)
We have
$$\begin{aligned} \lim \limits _{x\rightarrow x_0}\frac{\Vert f(x)-f(x_0)-0.(x-x_0)\Vert }{\Vert x-x_0\Vert }&= \lim \limits _{x\rightarrow x_0}\frac{\Vert f(x)-f(x_0)\Vert }{\Vert x-x_0\Vert } \\&= \lim \limits _{x\rightarrow x_0}\frac{\Vert f(x)-f(x_0)\Vert }{\Vert x-x_0\Vert ^2}.\Vert x-x_0\Vert \\&\le \lim \limits _{x\rightarrow x_0}L\Vert x-x_0\Vert =0. \end{aligned}$$
Therefore, \(f\) is Fréchet differentiable at \(x_0\) and \(f'(x_0)=0\). \(\square \)
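Proposition 3.1(ii) can be illustrated numerically. In the sketch below (our own example, not from the paper), \(f(x)=\sin (x^2)\) has \(f'(x)=2x\cos (x^2)\), which is calm at \(x_0=0\) with coefficient \(L=2\) and satisfies \(f'(0)=0\); the conclusion is the 2-calmness estimate \(|f(x)-f(0)|\le 2x^2\).

```python
# Numerical illustration of Proposition 3.1(ii) with f(x) = sin(x**2):
# the derivative f'(x) = 2x*cos(x**2) is calm at x0 = 0 with L = 2 and
# f'(0) = 0, so f should be 2-calm at 0 with the same coefficient.
import math

L = 2.0
xs = [t / 1000 for t in range(-1000, 1001) if t != 0]
# calmness of f' at 0: |f'(x) - f'(0)| <= L*|x|
calm_derivative = all(abs(2 * x * math.cos(x * x)) <= L * abs(x) for x in xs)
# 2-calmness of f at 0: |f(x) - f(0)| <= L*|x|**2
two_calm = all(abs(math.sin(x * x)) <= L * x * x for x in xs)
print(calm_derivative, two_calm)
```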
In the following two propositions, some simple calculus rules, needed in establishing optimality conditions for problem (P), are developed.
Proposition 3.2
Let \(f_i:X\rightarrow Y_i\) and \(\lambda _i\in \mathbb {R}\) for \(i=1,\ldots ,k\) (in assertion (i), all \(Y_i\) equal \(Y\)), and let \(A_{f_i}(x_0)\) be a first-order approximation of \(f_i\) at \(x_0\). Then, the following assertions hold.
-
(i)
\(\sum _{i=1}^k\lambda _iA_{f_i}(x_0)\) is a first-order approximation of \(\sum _{i=1}^k\lambda _i{f_i}\) at \(x_0\).
-
(ii)
Let \(f=(f_1,f_2,\ldots ,f_k)\) and \(A_{f_1}(x_0),\ldots ,A_{f_k}(x_0)\) be first-order approximations of \(f_1,\ldots ,f_k\), respectively, at \(x_0\). Then, \(A_{f_1}(x_0)\times \ldots \times A_{f_k}(x_0)\) is a first-order approximation of \(f\) at that point.
-
(iii)
Let \(Y\) be a Hilbert space, \(f,g: X\rightarrow Y\) and \(\langle f,g\rangle (x):=\langle f(x),g(x) \rangle \). If \(A_f(x_0), A_g(x_0)\) are first-order bounded approximations of \(f\) and \(g\) at \(x_0\), then \(\langle g(x_0),A_f(x_0)\rangle +\langle f(x_0),A_g(x_0)\rangle \) is a first-order approximation of \(\langle f,g\rangle \) at \(x_0\).
-
(iv)
Let \(f:X\rightarrow Y\) and \(g: Y\rightarrow Z\). If \(A_f(x_0), A_g(f(x_0))\) are bounded approximations, then \(A_g(f(x_0))\circ A_f(x_0)\) is a first-order approximation of \(g\circ f\) at \(x_0\).
Proof
-
(i)
For each \(i=1,\ldots ,k\), there exists a neighborhood \(U_i\) of \(x_0\) such that, for all \(x\in U_i\),
$$\begin{aligned} f_i(x)-f_i(x_0)\in A_{f_i}(x_0)(x-x_0)+o_i(\Vert x-x_0\Vert ). \end{aligned}$$Hence, for all \(x\in U:=\bigcap _{i=1}^kU_i\),
$$\begin{aligned} \sum \limits _{i=1}^kf_i(x)-\sum \limits _{i=1}^kf_i(x_0)\in \sum \limits _{i=1}^k A_{f_i}(x_0)(x-x_0)+o(\Vert x-x_0\Vert ), \end{aligned}$$where \(o(\Vert x-x_0\Vert )=\sum _{i=1}^ko_i(\Vert x-x_0\Vert )\).
-
(ii)
This is immediate.
-
(iii)
There exists a neighborhood \(U\) of \(x_0\) such that, for all \(x \in U\),
$$\begin{aligned}&\langle f,g\rangle (x)- \langle f,g\rangle (x_0)\\&\quad =(\langle f(x),g(x)\rangle -\langle f(x),g(x_0)\rangle )+(\langle f(x),g(x_0)\rangle -\langle f(x_0),g(x_0)\rangle ) \\&\quad =\langle f(x_0),g(x)-g(x_0)\rangle +\langle f(x)-f(x_0),g(x)- g(x_0)\rangle \\&\qquad +\langle g(x_0),f(x)-f(x_0)\rangle \in \langle f(x_0),A_g(x_0)(x-x_0)\\&\qquad +\,o_1(\Vert x-x_0\Vert )\rangle + \langle g(x_0),A_f(x_0)(x-x_0)\\&\qquad +\,o_2(\Vert x-x_0\Vert )\rangle +\langle f(x)-f(x_0),g(x)-g(x_0)\rangle \\&\quad =(\langle f(x_0),A_g(x_0)\rangle +\langle g(x_0),A_f(x_0)\rangle ) (x-x_0)+ \langle f(x_0),o_1(\Vert x-x_0\Vert )\rangle \\&\qquad +\,\langle g(x_0),o_2(\Vert x-x_0\Vert )\rangle +\langle f(x)-f(x_0),g(x)-g(x_0)\rangle . \end{aligned}$$By the boundedness of \(A_f(x_0),A_g(x_0)\), we have \(L_1\) and \(L_2\) such that, for \(x\) close to \(x_0\), \(\Vert f(x)-f(x_0)\Vert \le L_1\Vert x-x_0\Vert \) and \(\Vert g(x)-g(x_0)\Vert \le L_2\Vert x-x_0\Vert \). Hence,
$$\begin{aligned} \Vert \langle f(x)-f(x_0),g(x)-g(x_0)\rangle \Vert&\le \Vert f(x)-f(x_0)\Vert .\Vert g(x)-g(x_0)\Vert \\&\le L_1L_2 \Vert x-x_0\Vert ^2. \end{aligned}$$Summarizing the above estimates we get, for some \(o(\Vert x-x_0\Vert )\),
$$\begin{aligned} \langle f,g\rangle (x)- \langle f,g\rangle (x_0)&= (\langle f(x_0),A_g(x_0)\rangle +\langle g(x_0),A_f(x_0)\rangle ) (x-x_0)\\&+\,o(\Vert x-x_0\Vert ). \end{aligned}$$ -
(iv)
There exist neighborhoods \(U\) of \(x_0\) and \(V\) of \(f(x_0)\) such that, for all \(x \in U\cap f^{-1}(V)\),
$$\begin{aligned}&(g\circ f)(x)-(g\circ f)(x_0)\in A_g(f(x_0))(f(x)-f(x_0))+o_2(\Vert f(x)-f(x_0)\Vert ) \\&\quad \subseteq A_g(f(x_0))[A_f(x_0)(x-x_0)+o_1(\Vert x-x_0\Vert )]+o_2(\Vert f(x)-f(x_0)\Vert ) \\&\quad =[A_g(f(x_0))\circ A_f(x_0)](x-x_0)+A_g(f(x_0))o_1(\Vert x-x_0\Vert )\\&\qquad +\,o_2(\Vert f(x)-f(x_0)\Vert ). \end{aligned}$$Let \(u\in A_g(f(x_0))o_1(\Vert x-x_0\Vert )+o_2(\Vert f(x)-f(x_0)\Vert )\). We need to prove that \(u\Vert x-x_0\Vert ^{-1}\rightarrow 0\). Indeed, by the boundedness of \(A_f(x_0)\) and \(A_g(f(x_0))\), there exist \(L_1\) and \(L_2\) such that \(\Vert M\Vert \le L_1\) for all \(M\in A_g(f(x_0))\) and, for \(x\) close to \(x_0\), \(\Vert f(x)-f(x_0)\Vert \le L_2\Vert x-x_0\Vert \). Hence, with \(M_u\in A_g(f(x_0))\),
$$\begin{aligned} \frac{\Vert u\Vert }{\Vert x-x_0\Vert }&= \left\| M_u\frac{o_1(\Vert x-x_0\Vert )}{\Vert x-x_0\Vert }+ \frac{o_2(\Vert f(x)-f(x_0)\Vert )}{\Vert f(x)-f(x_0)\Vert }.\frac{\Vert f(x)-f(x_0)\Vert }{\Vert x-x_0\Vert }\right\| \\&\le L_1\frac{\Vert o_1(\Vert x-x_0\Vert )\Vert }{\Vert x-x_0\Vert }+L_2\frac{\Vert o_2(\Vert f(x)-f(x_0)\Vert )\Vert }{\Vert f(x)-f(x_0)\Vert }\rightarrow 0. \end{aligned}$$\(\square \)
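Although Proposition 3.2 is stated for set-valued approximations on normed spaces, the chain rule of assertion (iv) can be sanity-checked in the simplest case of singleton approximations on \(\mathbb {R}\), where \(A_f(x_0)=\{f'(x_0)\}\) and \(A_g(f(x_0))=\{g'(f(x_0))\}\). The functions and step sizes below are our own choices for illustration.

```python
# Sketch of the chain rule in Proposition 3.2(iv) for singleton
# approximations on R: the composite set A_g(f(x0)) o A_f(x0) reduces to
# the classical product g'(f(x0)) * f'(x0), and the first-order residual
# of g o f shrinks like o(|h|).
import math

x0 = 1.0
f = lambda x: x * x            # A_f(x0) = {2.0}
g = lambda y: math.sin(y)      # A_g(f(x0)) = {cos(1.0)}
slope = math.cos(1.0) * 2.0    # the single element of A_g(f(x0)) o A_f(x0)

def residual(h):
    # first-order residual of g(f(.)) at x0, divided by |h|
    return abs(g(f(x0 + h)) - g(f(x0)) - slope * h) / abs(h)

r_coarse, r_fine = residual(1e-2), residual(1e-5)
print(r_coarse, r_fine)  # the normalized residual shrinks with h
```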
For \(f,g:X\rightarrow \mathbb {R}\), we define \(f.g\) and \(f/g\) as usual: \((f.g)(x):=f(x).g(x)\) and \((f/g)(x):=f(x).(g(x))^{-1}\) for \(x\in X\).
Proposition 3.3
Let \(f,g:X\rightarrow \mathbb {R}\) and \(A_f(x_0)\), \(A_g(x_0)\) be first-order approximations of \(f\) and \(g\), respectively, at \(x_0\). Then, the following assertions hold.
-
(i)
If \(g\) is continuous at \(x_0\) and \(A_f(x_0)\) is bounded, then \(g(x_0)A_f(x_0)+f(x_0)A_g(x_0)\) is a first-order approximation of \(f.g\) at \(x_0\).
-
(ii)
If \(A_f(x_0), A_g(x_0)\) are bounded and \(g(x_0)\ne 0\), then \([g(x_0)A_f(x_0)-f(x_0)A_g(x_0)]/g^2(x_0)\) is a first-order approximation of \(f/g\) at \(x_0\).
Proof
-
(i)
From the assumptions, there exists a neighborhood \(U\) of \(x_0\) such that, for all \(x\in U\),
$$\begin{aligned}&(f.g)(x)-(f.g)(x_0)=g(x)[f(x)-f(x_0)]+f(x_0)[g(x)-g(x_0)] \\&\quad \in g(x)[A_f(x_0)(x-x_0)+o_1(\Vert x-x_0\Vert )]+f(x_0)[A_g(x_0)(x-x_0)\\&\qquad +o_2(\Vert x-x_0\Vert )]\\&\quad =[g(x_0)A_f(x_0)+f(x_0)A_g(x_0)](x-x_0)+(g(x)-g(x_0))A_f(x_0)(x-x_0)\\&\qquad +g(x)o_1(\Vert x-x_0\Vert )+f(x_0)o_2(\Vert x-x_0\Vert ). \end{aligned}$$We have to show that, for any \(u\) in the sum of the last three terms above, \(u\Vert x-x_0\Vert ^{-1}\rightarrow 0\) as \(x\rightarrow x_0\). Indeed, there is \(x^*\in A_f(x_0)\) (depending on \(x\)) such that
$$\begin{aligned} u\Vert x-x_0\Vert ^{-1}&= [(g(x)-g(x_0))\langle x^*,x-x_0\rangle +g(x)o_1(\Vert x-x_0\Vert )\\&+f(x_0)o_2(\Vert x-x_0\Vert )].\Vert x-x_0\Vert ^{-1}. \end{aligned}$$Because of the boundedness of \(A_f(x_0)\), there exists \(L>0\) such that \(\Vert x^*\Vert \le L, \forall x^*\in A_f(x_0)\). Passing to limit, one has \(u\Vert x-x_0\Vert ^{-1}\rightarrow 0\), since
$$\begin{aligned} \lim \limits _{x\rightarrow x_0}|g(x)-g(x_0)|.\frac{|\langle x^*,x-x_0\rangle |}{\Vert x-x_0\Vert }&\le \lim \limits _{x\rightarrow x_0} |g(x)-g(x_0)|\frac{\Vert x^*\Vert .\Vert x-x_0\Vert }{\Vert x-x_0\Vert }\\&\le \lim \limits _{x\rightarrow x_0} L.| g(x)-g(x_0)|=0. \end{aligned}$$ -
(ii)
We have
$$\begin{aligned} \left( \frac{f}{g}\right) (x)-\left( \frac{f}{g}\right) (x_0)&= \frac{g(x_0)(f(x)-f(x_0))-f(x_0)(g(x)-g(x_0))}{g(x)g(x_0)} \\&\in \frac{1}{g(x)g(x_0)}[ g(x_0)(A_f(x_0)(x\!-\!x_0)+o_1(\Vert x\!-\!x_0\Vert ))\\ {}&- f(x_0)(A_g(x_0)(x\!-\!x_0) +o_2(\Vert x-x_0\Vert ))] \\&= \frac{g(x_0)A_f(x_0)-f(x_0)A_g(x_0)}{g(x)g(x_0)}(x-x_0) \\&+ \frac{g(x_0)o_1(\Vert x-x_0\Vert )-f(x_0)o_2(\Vert x-x_0\Vert )}{g(x)g(x_0)} \\&= \frac{g(x_0)A_f(x_0)\!-\!f(x_0)A_g(x_0)}{g^2(x_0)}(x-x_0)\!-\! [(g(x_0)A_f(x_0)\\&- f(x_0)A_g(x_0))(\frac{g(x)-g(x_0)}{g(x)g^2(x_0)})](x-x_0)\\&+ \frac{g(x_0)o_1(\Vert x-x_0\Vert )-f(x_0)o_2(\Vert x-x_0\Vert )}{g(x)g(x_0)} \\&\in \frac{g(x_0)A_f(x_0)\!-\!f(x_0)A_g(x_0)}{g^2(x_0)}(x-x_0)\!-\!(g(x_0)A_f(x_0)\\&- f(x_0)A_g(x_0))(x-x_0) \frac{g(x)-g(x_0)}{g(x)g^2(x_0)} \\&+ \frac{g(x_0)o_1(\Vert x-x_0\Vert )-f(x_0)o_2(\Vert x-x_0\Vert )}{g(x)g(x_0)}. \end{aligned}$$Similarly as in (i), one gets the required conclusion, by the boundedness of \(A_f(x_0), A_g(x_0)\).\(\square \)
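The quotient rule of Proposition 3.3(ii) can also be checked numerically for singleton approximations on \(\mathbb {R}\) (our own example; the functions and step sizes are arbitrary illustrative choices): the element \([g(x_0)f'(x_0)-f(x_0)g'(x_0)]/g^2(x_0)\) should make the first-order residual of \(f/g\) of order \(o(|h|)\).

```python
# Numerical sketch of Proposition 3.3(ii) with singleton approximations
# A_f(0) = {f'(0)} = {1}, A_g(0) = {g'(0)} = {0} for f(x) = exp(x),
# g(x) = 1 + x**2 at x0 = 0.
import math

f = lambda x: math.exp(x)
g = lambda x: 1.0 + x * x
x0 = 0.0
# [g(x0)*A_f(x0) - f(x0)*A_g(x0)] / g(x0)**2 with f'(0) = 1, g'(0) = 0
slope = (g(x0) * 1.0 - f(x0) * 0.0) / g(x0) ** 2

def residual(h):
    # first-order residual of f/g at x0, divided by |h|
    return abs(f(x0 + h) / g(x0 + h) - f(x0) / g(x0) - slope * h) / abs(h)

r_coarse, r_fine = residual(1e-2), residual(1e-5)
print(slope, r_coarse, r_fine)
```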
Proposition 3.4
-
(i)
Let \(\lambda _1,\lambda _2\ \in \mathbb {R}\), \(\lambda _1\ne 0\). If \(A_1,A_2\subseteq L(X,Y)\) are asymptotically p-compact sets with \(A_2\) being bounded, then \(\lambda _1 A_1+\lambda _2 A_2\) is an asymptotically p-compact set.
-
(ii)
For asymptotically p-compact sets \(A_i\subseteq L(X,Y_i), i=1,\ldots ,k\), \(\prod _{i=1}^kA_i\subseteq L(X,\prod _{i=1}^kY_i)\) is also asymptotically p-compact.
Proof
-
(i)
Let \(\{\lambda _1 M_{\alpha }+\lambda _2 N_{\alpha }\}\) be a net in \(\lambda _1 A_1+\lambda _2 A_2\). Since \(\{N_{\alpha }\}\) is bounded, we assume that \(N_{\alpha }\mathop {\rightarrow }\limits ^{p} N\). If \(\{M_{\alpha }\}\) is bounded, \(\{\lambda _1 M_{\alpha }+\lambda _2 N_{\alpha }\}\) admits also a pointwise convergent subnet. If \(\{M_{\alpha }\}\) is unbounded, we may assume that \(\Vert M_{\alpha }\Vert \rightarrow \infty \) and \(M_{\alpha }\Vert M_{\alpha }\Vert ^{-1}\mathop {\rightarrow }\limits ^{p} M\) with \(M\in L(X,Y){\setminus } \{0\}\). Since \(\displaystyle N_{\alpha }/\Vert M_{\alpha }\Vert \mathop {\rightarrow }\limits ^{p} 0\), one has
$$\begin{aligned} \frac{\lambda _1 M_{\alpha }+\lambda _2 N_{\alpha }}{\Vert \lambda _1 M_{\alpha }+\lambda _2 N_{\alpha }\Vert } \mathop {\rightarrow }\limits ^{p} \frac{\lambda _1 M}{\Vert \lambda _1 M\Vert }\in L(X,Y){\setminus } \{0\}. \end{aligned}$$Consequently, \(\lambda _1 A_1+\lambda _2 A_2\) is an asymptotically p-compact set.
-
(ii)
Let \(\{(M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })\}\) be a net in \(A_1\times A_2\times \cdots \times A_k\). If all \(\{M^i_{\alpha }\}, i=1,\ldots ,k\), are bounded, then clearly \(\{(M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })\}\) has a pointwise convergent subnet. If at least one of \(\{ M^i_{\alpha }\}, i=1,\ldots ,k\), is unbounded, say all are unbounded (the general case is similar), we may assume that \(\Vert M^i_{\alpha }\Vert \rightarrow \infty , i=1,\ldots ,k\), and \(M^i_{\alpha }/\Vert M^i_{\alpha }\Vert \mathop {\rightarrow }\limits ^{p} M^i, i=1,\ldots ,k\), with \(M^i\in L(X,Y_i){\setminus } \{0\}\). Since the \(\{\Vert M^i_{\alpha }\Vert \}, i=1,\ldots ,k\), are nonnegative nets in \(\mathbb {R}\), only three cases can occur (after passing to subnets). Case 1. There exists \(i_0\) such that \( \Vert M^{i}_{\alpha }\Vert /\Vert M^{i_0}_{\alpha }\Vert \rightarrow 0\) for all \(i\in \{1,\ldots ,k\}{\setminus } \{i_0\}\). One has
$$\begin{aligned}&\frac{(M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })}{\Vert (M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })\Vert } =\frac{(M^1_{\alpha }/ \Vert M^{i_0}_{\alpha }\Vert ,\ldots ,M^k_{\alpha }/ \Vert M^{i_0}_{\alpha }\Vert )}{\Vert (M^1_{\alpha }/ \Vert M^{i_0}_{\alpha }\Vert ,\ldots ,M^k_{\alpha }/ \Vert M^{i_0}_{\alpha }\Vert )\Vert }\\&\mathop {\rightarrow }\limits ^{p}\frac{(0,\ldots ,M^{i_0},\ldots ,0)}{\Vert (0,\ldots ,M^{i_0},\ldots ,0)\Vert }\in L(X,\prod \limits _{i=1}^kY_i){\setminus } \{0\}. \end{aligned}$$
Case 2. There exists \(i_0\) such that \( \Vert M^{i}_{\alpha }\Vert /\Vert M^{i_0}_{\alpha }\Vert \rightarrow a_i>0\) for all \(i\in \{1,\ldots ,k\}{\setminus } \{i_0\}\). Since \(M^i_{\alpha }/\Vert M^{i_0}_{\alpha }\Vert \mathop {\rightarrow }\limits ^{p} a_iM^{i}\), setting \(a_{i_0}:=1\), one gets
$$\begin{aligned} \frac{(M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })}{\Vert (M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })\Vert } \mathop {\rightarrow }\limits ^{p}\frac{(a_1M^1,\ldots ,a_kM^k)}{\Vert (a_1M^1,\ldots ,a_kM^k)\Vert }\in L(X,\prod \limits _{i=1}^kY_i){\setminus } \{0\}. \end{aligned}$$
Case 3. There exists \(i_0\) such that \( \Vert M^{i}_{\alpha }\Vert /\Vert M^{i_0}_{\alpha }\Vert \rightarrow 0\) for all \(i\in I_1\), and \( \Vert M^{i}_{\alpha }\Vert /\Vert M^{i_0}_{\alpha }\Vert \rightarrow a_i>0\) for all \(i \in I_2\), with \(I_1\cup I_2=\{1,\ldots ,k\}{\setminus } \{i_0\}\), \(I_1\cap I_2=\emptyset \). Then, combining the two preceding cases, there exists \(N\in L(X,\prod \nolimits _{i=1}^kY_i){\setminus } \{0\}\) such that
$$\begin{aligned} \frac{(M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })}{\Vert (M^1_{\alpha },M^2_{\alpha },\ldots ,M^k_{\alpha })\Vert } \mathop {\rightarrow }\limits ^{p} N. \end{aligned}$$
Therefore, \(\prod _{i=1}^kA_i\) is asymptotically p-compact. \(\square \)
Now, we pass to calculus rules for second-order approximations. In the following proposition, when \(Y\) is a Hilbert space, \(y\in Y\), \(A_1,A_2\subseteq L(X,Y)\), and \(B\subseteq L(X,X,Y)\), we denote \(\langle y,A_1\rangle (.):=\langle y,A_1(.)\rangle \), \(\langle y,B\rangle (.,.):=\langle y,B(.,.)\rangle \) and \(\langle A_1,A_2\rangle (.,.):=\langle A_1(.),A_2(.)\rangle \).
Proposition 3.5
-
(i)
Let \(f_i:X\rightarrow Y\), \(\lambda _i\in \mathbb {R}\), and \((A_{f_i}(x_0),B_{f_i}(x_0))\) be a second-order approximation of \(f_i\) at \(x_0\) for \(i\!=\!1,\ldots ,k\). Then, \(\big (\sum _{i=1}^k\lambda _iA_{f_i}(x_0),\sum _{i=1}^k\lambda _iB_{f_i}(x_0)\big )\) is a second-order approximation of \(\sum _{i=1}^k\lambda _i{f_i}\) at \(x_0\).
-
(ii)
Let \(f_i: X\rightarrow Y_i\), \(i=1,\ldots ,k\), \(f:=(f_1,f_2,\ldots ,f_k)\), and \((A_{f_1}(x_0),B_{f_1}(x_0)),\) \(\ldots ,(A_{f_k}(x_0),B_{f_k}(x_0))\) be second-order approximations of \(f_1,\ldots ,f_k\), respectively, at \(x_0\). Then, \((A_{f_1}(x_0)\times \cdots \times A_{f_k}(x_0), B_{f_1}(x_0)\times \cdots \times B_{f_k}(x_0))\) is a second-order approximation of \(f\) at that point.
-
(iii)
Let \(Y\) be a Hilbert space and \(f,g: X\rightarrow Y\). If \((A_f(x_0), B_f(x_0))\) and \((A_g(x_0), B_g(x_0))\) are second-order approximations of \(f\) and \(g\), respectively, at \(x_0\) and \(A_f(x_0), A_g(x_0)\) are bounded, then \((\langle g(x_0),A_f(x_0)\rangle +\langle f(x_0),A_g(x_0)\rangle ,\langle g(x_0),B_f(x_0)\rangle + \langle f(x_0),B_g(x_0)\rangle + \langle A_f(x_0),A_g(x_0)\rangle )\) is a second-order approximation of \(\langle f,g\rangle \) at \(x_0\).
Proof
(i) and (ii) are easy consequences of Proposition 3.2.
(iii) Also by Proposition 3.2, \( \langle g(x_0),A_f(x_0)\rangle +\langle f(x_0),A_g(x_0)\rangle \) is a first-order approximation of \(\langle f,g\rangle \) at \(x_0\). Furthermore, from the boundedness of \(A_f(x_0), A_g(x_0)\), one gets
$$\begin{aligned} \langle f,g\rangle (x)-\langle f,g\rangle (x_0)&\in (\langle g(x_0),A_f(x_0)\rangle +\langle f(x_0),A_g(x_0)\rangle )(x-x_0)\\&\quad +(\langle g(x_0),B_f(x_0)\rangle +\langle f(x_0),B_g(x_0)\rangle \\&\quad +\langle A_f(x_0),A_g(x_0)\rangle )(x-x_0,x-x_0)+o(\Vert x-x_0\Vert ^2). \end{aligned}$$
By the definition of a second-order approximation, the proof is complete. \(\square \)
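For \(Y=\mathbb {R}\), where \(\langle f,g\rangle =f\cdot g\), Proposition 3.5(iii) can be checked numerically with singleton Taylor approximations \(A_f=\{f'(0)\}\), \(B_f=\{f''(0)/2\}\) (the cross term \(\langle A_f,A_g\rangle \) becomes \(f'(0)g'(0)\)). The functions \(f=\exp \), \(g=\cos \) below are our own illustrative choices.

```python
# Numerical sketch of Proposition 3.5(iii) for Y = R, where <f,g> = f*g.
# With singleton Taylor approximations at x0 = 0, the second-order part
# of f*g is g(0)*f''(0)/2 + f(0)*g''(0)/2 plus the cross term f'(0)*g'(0).
import math

f, g = math.exp, math.cos
x0 = 0.0
af, bf = 1.0, 0.5      # exp: f'(0), f''(0)/2
ag, bg = 0.0, -0.5     # cos: g'(0), g''(0)/2
A = g(x0) * af + f(x0) * ag                 # first-order coefficient
B = g(x0) * bf + f(x0) * bg + af * ag       # includes the cross term af*ag

def residual(h):
    # second-order residual of f*g at x0, divided by h**2
    return abs(f(h) * g(h) - f(x0) * g(x0) - A * h - B * h * h) / (h * h)

r_coarse, r_fine = residual(1e-2), residual(1e-4)
print(A, B, r_coarse, r_fine)  # the normalized residual shrinks with h
```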
Proposition 3.6
Let \(f,g:X\rightarrow \mathbb {R}\), \(g\) be 2-calm at \(x_0\), and \((A_f(x_0),B_f(x_0))\), \((0,B_g(x_0))\) be second-order approximations of \(f\) and \(g\), respectively, at \(x_0\). Then,
-
(i)
if \(A_f(x_0),B_f(x_0)\) are bounded, then \((f(x_0)A_g(x_0)+g(x_0)A_f(x_0), g(x_0)B_f(x_0)+f(x_0)B_g(x_0))\) is a second-order approximation of \(f.g\) at \(x_0\);
-
(ii)
if \(A_f(x_0),B_f(x_0),B_g(x_0)\) are bounded and \(g(x_0)\ne 0\), then
$$\begin{aligned} \left( \frac{A_f(x_0)}{g(x_0)}, \frac{g(x_0)B_f(x_0)-f(x_0)B_g(x_0)}{g^2(x_0)}\right) \end{aligned}$$is a second-order approximation of \(f/g\) at \(x_0\).
Proof
-
(i)
Proposition 3.3 implies that \(g(x_0)A_f(x_0)\) is a first-order approximation of \(f.g\) at \(x_0\). On the other hand,
$$\begin{aligned}&\!\!\! f(x)g(x)-f(x_0)g(x_0)=g(x)[f(x)-f(x_0)]+f(x_0)[g(x)-g(x_0)]\\&\in g(x)[A_f(x_0)(x-x_0)+B_f(x_0)(x-x_0,x-x_0)+o_1(\Vert x-x_0\Vert ^2)]\\&\quad +f(x_0)[B_g(x_0)(x-x_0,x-x_0)+o_2(\Vert x-x_0\Vert ^2)]\\&=g(x_0)A_f(x_0)(x-x_0)+[g(x_0)B_f(x_0)+f(x_0)B_g(x_0)](x-x_0,x-x_0)\\&\quad +[(g(x)-g(x_0))(A_f(x_0)(x-x_0)+B_f(x_0)(x-x_0,x-x_0))\\&\quad +g(x)o_1(\Vert x-x_0\Vert ^2)+f(x_0)o_2(\Vert x-x_0\Vert ^2)]. \end{aligned}$$We have to show that \(u\Vert x-x_0\Vert ^{-2}\rightarrow 0\) as \(x\rightarrow x_0\), for every \(u\) in the last bracketed term above. For such a \(u\), there are \(M_u\in A_f(x_0)\) and \(N_u\in B_f(x_0)\) such that
$$\begin{aligned} u\Vert x-x_0\Vert ^{-2}&= [(g(x)-g(x_0))(\langle M_u,x-x_0\rangle +N_u(x-x_0,x-x_0))\\&+ g(x)o_1(\Vert x-x_0\Vert ^2)+f(x_0)o_2(\Vert x-x_0\Vert ^2)].\Vert x-x_0\Vert ^{-2}. \end{aligned}$$Clearly, this element tends to 0 as \(x\rightarrow x_0\), since \(g\) is 2-calm at \(x_0\) and \(A_f(x_0),B_f(x_0)\) are bounded.
-
(ii)
We have
$$\begin{aligned}&\left( \frac{f}{g}\right) (x)-\left( \frac{f}{g}\right) (x_0)= \frac{g(x_0)(f(x)-f(x_0))-f(x_0)(g(x)-g(x_0))}{g(x)g(x_0)}\\&\quad \in \frac{1}{g(x)g(x_0)}[g(x_0)(A_f(x_0)(x-x_0)+B_f(x_0)(x-x_0,x-x_0)\\&\qquad +o_1(\Vert x-x_0\Vert ^2))-f(x_0)(B_g(x_0)(x-x_0,x-x_0)+o_2(\Vert x-x_0\Vert ^2))]\\&\quad =\frac{A_f(x_0)}{g(x)}(x-x_0) +\frac{g(x_0)B_f(x_0)-f(x_0)B_g(x_0)}{g(x)g(x_0)}(x-x_0,x-x_0)\\&\qquad +\frac{g(x_0)o_1(\Vert x-x_0\Vert ^2)-f(x_0)o_2(\Vert x-x_0\Vert ^2)}{g(x)g(x_0)}\\&\quad =\frac{A_f(x_0)}{g(x_0)}(x-x_0)+ g(x_0)A_f(x_0)(x-x_0)\left[ \frac{1}{g(x)g(x_0)}-\frac{1}{g^2(x_0)}\right] \\&\qquad +\frac{g(x_0)B_f(x_0)-f(x_0)B_g(x_0)}{g^2(x_0)}(x-x_0,x-x_0)\\&\qquad +(g(x_0)B_f(x_0)-f(x_0)B_g(x_0))(x-x_0,x-x_0)\left[ \frac{1}{g(x)g(x_0)}-\frac{1}{g^2(x_0)}\right] \\&\qquad +\frac{g(x_0)o_1(\Vert x-x_0\Vert ^2)-f(x_0)o_2(\Vert x-x_0\Vert ^2)}{g(x)g(x_0)}\\&\quad =\frac{A_f(x_0)}{g(x_0)}(x-x_0)+\frac{g(x_0)B_f(x_0) -f(x_0)B_g(x_0)}{g^2(x_0)}(x-x_0,x-x_0)\\&\qquad +[g(x_0)A_f(x_0)(x-x_0)+(g(x_0)B_f(x_0)\\&\qquad -f(x_0)B_g(x_0))(x-x_0,x-x_0)] \left[ \frac{g(x_0)-g(x)}{g(x)g^2(x_0)}\right] \\&\qquad +\frac{g(x_0)o_1(\Vert x-x_0\Vert ^2) -f(x_0)o_2(\Vert x-x_0\Vert ^2)}{g(x)g(x_0)}. \end{aligned}$$It remains to prove that the last two terms of the last side above are of the form \(o(\Vert x-x_0\Vert ^2)\). This proof is similar to the corresponding one in (i). \(\square \)
The assumption that \(g\) is 2-calm at \(x_0\) in Proposition 3.6 (ii) cannot be dispensed with, as shown by the following example.
Example 3.1
Let \(f,g:\mathbb {R}\rightarrow \mathbb {R}\) be defined by \(f(x)=x^3+1\), \(g(x)=x+1\), and \(x_0=0\). Then, \(\displaystyle \varphi (x):=(\frac{f}{g})(x)=x^2-x+1,\; x\ne -1\). We can check that \(g\) is calm at \(x_0\), but not 2-calm at this point. By direct calculations, we have \(f(x_0)=1, g(x_0)=1\), and
$$\begin{aligned} \varphi (x)-\varphi (x_0)=x^2-x=(-1)(x-x_0)+1\cdot (x-x_0)^2. \end{aligned}$$
So, \(\{(-1,1)\}\) is a second-order approximation of \(\varphi (x)=x^2-x+1\) at \(x_0\). On the other hand, since \(f(x)-f(x_0)=x^3=o(\Vert x-x_0\Vert ^2)\), one has \((A_f(x_0),B_f(x_0))=(\{0\},\{0\})\). Hence, the pair
$$\begin{aligned} \left( \frac{A_f(x_0)}{g(x_0)}, \frac{g(x_0)B_f(x_0)-f(x_0)B_g(x_0)}{g^2(x_0)}\right) \end{aligned}$$is not a second-order approximation of \(\varphi (x)=x^2-x+1\) at \(x_0\), because its first-order part \(A_f(x_0)/g(x_0)=\{0\}\) is not even a first-order approximation of \(\varphi \).
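A numerical companion to Example 3.1 (our own sketch): the ratio \(|g(x)-g(x_0)|/x^2\) blows up near \(0\) (so \(g\) is not 2-calm), while the pair \((\{-1\},\{1\})\) reproduces \(\varphi (x)-\varphi (x_0)=-x+x^2\) exactly.

```python
# Numerical companion to Example 3.1: g(x) = x + 1 is calm but not
# 2-calm at x0 = 0, and ({-1}, {1}) is an (exact) second-order
# approximation of phi(x) = x**2 - x + 1 at 0.

phi = lambda x: x * x - x + 1.0
g = lambda x: x + 1.0
x0 = 0.0

xs = [t / 1000 for t in range(1, 1001)]
# 2-calmness would require |g(x) - g(0)| <= L*x**2 near 0; the ratio
# |g(x) - g(0)|/x**2 = 1/x instead blows up as x -> 0:
ratios = [abs(g(x) - g(x0)) / (x * x) for x in xs]
not_two_calm = ratios[0] > ratios[-1] and ratios[0] > 100

# ({-1}, {1}) reproduces phi exactly: phi(x) - phi(0) = -x + x**2
exact = all(abs(phi(x) - phi(x0) - (-1.0) * x - 1.0 * x * x) < 1e-12
            for x in xs + [-x for x in xs])
print(not_two_calm, exact)
```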
4 First-order optimality conditions
To establish necessary optimality conditions for our fractional problem, we need Lemma 4.1 below on such conditions for local weak solutions of the constrained vector minimization problem (P\(_1\)) below. Let \(X, Y\) and \(Z\) be normed spaces, and let \(C\) and \(K\) be proper closed convex cones with nonempty interior in \(Z\) and \(Y\), respectively (note that, for (P\(_1\)), \(C\) does not need to be pointed). Let \(F: X \rightarrow Z\) and \(h: X \rightarrow Y\). Consider the vector minimization problem
$$\begin{aligned} \mathrm{(P_1)}\qquad \min \; F(x)\quad \text{ subject } \text{ to }\quad h(x)\in -K. \end{aligned}$$
Denote \(\Omega :=\{x\in X|\; h(x)\in -K\}\) (the feasible set). We recall that the cone of weak feasible directions to \(S\subseteq X\) at \(x_0\in S\) is
$$\begin{aligned} W_f(S,x_0):=\{u\in X\;|\;\exists t_n\downarrow 0, \forall n\in {\mathbb {N}},\; x_0+t_nu\in S\}. \end{aligned}$$
Lemma 4.1
Assume that \(A_F(x_0)\) and \(A_h(x_0)\) are asymptotically p-compact first-order approximations of \(F\) and \(h\), respectively, at \(x_0\).
If \(x_0\) is a local weak solution of (P\(_1\)), then, \(\forall u\in X, \exists P\in \) p-cl\(A_{F}(x_0)\cup (\)p-\(A_{F}(x_0)_{\infty }{\setminus }\{0\})\), \(\exists Q\in A_h(x_0)\), \(\exists (c^*,d^*)\in C^*\times K^*{\setminus } \{(0,0)\}\),
$$\begin{aligned} c^*(P(u))+d^*(Q(u)+h(x_0))\ge 0. \end{aligned}$$Furthermore, for \(u\) satisfying \(0\in \mathrm{int}(Q(u)+h(x_0)+K)\) for all \(Q\in A_h(x_0)\), we have \(c^*\ne 0\).
Proof
Let \(x_0\) be a local weak solution of (P\(_1\)). For \(u\in X\), there are two cases.
Case 1. \(u\in W_f(\Omega ,x_0)\). Then, there exists \(P\in \) p-cl\(A_{F}(x_0)\cup (\)p-\(A_{F}(x_0)_{\infty }{\setminus }\{0\})\) such that \(P(u)\not \in \mathrm{-int}C\). Indeed, for the sequence \(t_n\downarrow 0\) associated with \(u\) (in the definition of \(W_f(\Omega ,x_0)\)) and \(n\) large enough, we have \(F(x_0+t_nu) -F(x_0)\not \in \mathrm{-int}C.\) Therefore, since \(-\mathrm{int}C\) is a cone, there is \(P_n\in A_{F}(x_0)\) such that
$$\begin{aligned} P_n(u)+\frac{o(t_n)}{t_n}\not \in \mathrm{-int}C. \end{aligned}$$ (1)
We have two possibilities. If \(\{P_n\}\) is bounded, we can assume that \(P_n\mathop {\rightarrow }\limits ^{p}P\in \) p-cl\(A_{F}(x_0)\). Passing (1) to limit, we obtain \(Pu\not \in \mathrm{-int}C\) as required. If \(\{P_n\}\) is unbounded, we can assume that \(P_n/\Vert P_n\Vert \mathop {\rightarrow }\limits ^{p}P\in \) p-\(A_{F}(x_0)_{\infty }{\setminus } \{0\}\). Dividing (1) by \(\Vert P_n\Vert \) and letting \(n\rightarrow \infty \), we also obtain \(P(u)\not \in \mathrm{-int}C\).
Case 2. \(u\not \in W_f(\Omega ,x_0)\). Then, for every \(t_n\downarrow 0\), there exists \(n\) such that \(h(x_0+t_nu)\not \in -K\). We claim the existence of \(Q\in A_h(x_0)\) such that \(Q(u)\not \in \mathrm{-int}K-h(x_0).\) Indeed, suppose to the contrary that \(A_h(x_0)(u)\subseteq -\mathrm{int}K-h(x_0)\). Then, for \(n\) large enough,
where \(nr_n\rightarrow 0\), and \(B_Y\) is the closed unit ball of \(Y\). Hence,
Since \(A_h(x_0)(u)\subseteq -\mathrm{int}K-h(x_0)\) by the contradiction assumption, one gets \(h(x_0)+A_h(x_0)(u)+nr_nB_Y\subseteq - K\) for \(n\) large enough. Hence, \(h(x_0+\frac{1}{n}u)\in -K\) for all such \(n\), which contradicts \(u\not \in W_f(\Omega ,x_0)\).
Now, in both cases, one has some \(P\in \) p-cl\(A_{F}(x_0)\cup (\)p-\(A_{F}(x_0)_{\infty }{\setminus } \{0\})\) and \(Q\in A_h(x_0)\) with \((P(u),Q(u))\not \in \mathrm{-int}[C\times (K+h(x_0))].\) By a classical separation theorem of convex analysis, there exists \((c^*,d^*)\in C^*\times K^*{\setminus } \{(0,0)\}\) such that, for all \(u\in X\),
Now let \(u\) satisfy \(0\in \mathrm{int}(Q(u)+h(x_0)+K)\) for all \(Q\in A_h(x_0)\), and suppose to the contrary that \(c^*=0\). Then, the separation result collapses to
This implies that \(\langle d^*,Q(u)+h(x_0)+d\rangle \ge 0\) for all \(d\in K\). Since \(d^*\ne 0\), it follows that \(0\not \in \mathrm{int}(Q(u)+h(x_0)+K)\), contradicting the assumption. \(\square \)
Note that Lemma 4.1 sharpens Theorem 3.1 of Khanh and Tuan (2009) by removing the assumed boundedness of \(A_h(x_0)\) and adding the assertion that \(c^*\ne 0\). This removal is important, since a map with a bounded first-order approximation at a point must be continuous (even calm) at this point. Now, we pass to our fractional programming. Denote
Theorem 4.1
(Necessary condition) For problem (P), let \(A_{f_i} (x_0)\), \(A_{g_i}(x_0)\), \(A_h(x_0)\) be asymptotically p-compact first-order approximations of \(f_i,g_i\) and \(h\), respectively, at \(x_0\), with \(A_{f_i}(x_0), A_{g_i}(x_0)\) being bounded, for \(i=1,\ldots ,m\). If \(x_0\) is a local weak solution of (P), then, \(\forall u\in X\), \(\exists P\in \) p-cl\(A_{\varphi }(x_0)\cup (\)p-\(A_{\varphi }(x_0)_{\infty }{\setminus }\{0\})\), \(\exists Q\in \mathrm{cl}A_h(x_0)\), \(\exists (c^*,d^*)\in C^*\times K^*{\setminus }\{(0,0)\}\),
Furthermore, for \(u\) satisfying \(0\in \mathrm{int}(Q(u)+h(x_0)+K)\) for all \(Q\in A_h(x_0)\), we have \(c^*\ne 0\).
Proof
Since \(A_{g_i}(x_0), i=1,\ldots ,m\), are bounded, all \(g_i\) are calm at \(x_0\). By Propositions 3.2 and 3.3, \(A_{\varphi }(x_0)\) is a first-order approximation of \(\varphi \) at \(x_0\). As \(A_{f_i}(x_0)\), \(A_{g_i}(x_0)\) are bounded and \(g_i(x_0)\ne 0,i=1,\ldots ,m\), from Proposition 3.4, we see that \(A_{\varphi }(x_0)\) is asymptotically p-compact. To complete the proof, invoke Lemma 4.1 for \(F=\varphi \). \(\square \)
Note that in most of the known optimality conditions for fractional problems, \(X\) is assumed to be finite dimensional. Furthermore, even in the finite-dimensional case, Theorem 4.1 is advantageous, since \(f\) is not required to be Lipschitz continuous. In the following example, the results assuming Lipschitz continuity in Kim et al. (2005), Kuk et al. (2001), Nobakhtian (2008), Reedy and Mukherjee (2001), Soleimani-Damaneh (2008), Bao et al. (2007), Chinchuluun et al. (2007), and Liu and Feng (2007), or continuous differentiability in Singh (1981), Liang et al. (2001), Zalmai (2006), Mishra (1997), Cambini et al. (2005), and Husain and Jabeen (2005), are not applicable, while Theorem 4.1 works well.
Example 4.1
Let \(X={\mathbb {R}}, m=1,Y={\mathbb {R}}\), \(C=K=\mathbb {R}_+\), \(x_0=0\),
\(g(x)=x^2+1\), and \(h(x)=-\root 3 \of {x}+x^2\). We can take the approximations \(A_g(x_0)=\{0\}\) and \(A_h(x_0)=]-\infty ,\beta [\), with \(\beta <0\) arbitrary and fixed. Since \(f\) is not Lipschitz at \(x_0=0\), the aforementioned known results [e.g., Theorem 2.1 of Kim et al. (2005), Theorem 4.2 of Bao et al. (2007)] are not applicable. Since \(g(x_0)=1\), \(f(x_0)=0\), and \(A_f(x_0)=[-2,-1]\), one has \(A_{\varphi }(x_0)=A_f(x_0)\) and \(\mathrm{cl}A_{\varphi }(x_0)\cup (A_{\varphi }(x_0)_{\infty }{\setminus } \{0\})=[-2,-1]\). For \(u=1\), we see that, \(\forall P\in \mathrm{cl}A_{\varphi }(x_0)\cup (A_{\varphi }(x_0)_{\infty }{\setminus } \{0\})\), \(\forall Q\in \mathrm{cl} A_h(x_0)\), \(\forall (c^*,d^*)\in C^*\times K^*{\setminus } \{(0,0)\}=\mathbb {R}^2_+{\setminus } \{(0,0)\}\) with \(\langle d^*,h(x_0)\rangle =0\),
According to Theorem 4.1, \(x_0\) is not a local weak solution of (P).
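The definition of \(f\) appears in a display not reproduced above; the stated data only require \(f(0)=0\), \(A_f(x_0)=[-2,-1]\), and \(f\) non-Lipschitz at \(0\). The following sketch checks the conclusion numerically for one hypothetical \(f\) consistent with these data (an assumption of ours, not the paper's \(f\)): arbitrarily small feasible \(t>0\) yield \(\varphi(t)<\varphi(0)\), so \(x_0=0\) is indeed not a local weak solution.

```python
import math

def f(x):
    # hypothetical choice: f(x)/x oscillates in [-2, -1], matching
    # A_f(0) = [-2, -1]; its derivative is unbounded near 0, so f is
    # calm but not Lipschitz at x_0 = 0
    return 0.0 if x == 0 else x * (-1.5 + 0.5 * math.sin(1.0 / x))

def g(x):
    return x * x + 1.0

def h(x):
    return -(x ** (1.0 / 3.0)) + x * x  # -cube_root(x) + x^2, for x >= 0

phi0 = f(0) / g(0)  # = 0
for t in [10.0 ** (-k) for k in range(1, 8)]:
    assert h(t) <= 0            # t is feasible: h(t) lies in -K = -R_+
    assert f(t) / g(t) < phi0   # a strictly better feasible value
```

Since such \(t\) exist in every neighborhood of \(0\), the direct check agrees with the conclusion drawn from Theorem 4.1.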
Theorem 4.2
(Sufficient condition) Let \(X={\mathbb {R}}^n\) and \(x_0\in h^{-1}(-K)\). Assume that, for \(i=1,\ldots ,m\), \(A_{f_i} (x_0), A_{g_i}(x_0), A_h(x_0)\) are asymptotically p-compact first-order approximations of \(f_i,g_i\) and \(h\), respectively, at \(x_0\), with all \(A_{f_i}(x_0),A_{g_i}(x_0)\) being bounded. Suppose that, for all \(u\in T(h^{-1}(-K),x_0)\) with norm one, \(P\in \mathrm{cl}A_{\varphi }(x_0)\cup (A_{\varphi }(x_0)_{\infty }{\setminus } \{0\})\), and \(Q\in \) p-cl\(A_h(x_0)\cup (\)p-\(A_h(x_0)_{\infty }{\setminus } \{0\})\), there exists \((y^*,z^*)\in C^*\times K^*{\setminus } \{(0,0)\}\) such that
Then, \(x_0\) is a local firm solution of order 1 of (P).
Proof
Since \(A_{g_i}(x_0)\) is bounded, \(g_i\) is calm at \(x_0\) for \(i=1,\ldots ,m\). By Propositions 3.2 and 3.3, \(A_{\varphi }(x_0)\) is a first-order approximation of \(\varphi \) at \(x_0\). Since \(A_{\varphi }(x_0)\) is finite dimensional, it is asymptotically p-compact. Now, apply Theorem 3.3 of Khanh and Tuan (2009) to complete the proof. \(\square \)
Example 4.2
Let \(n=1, m=2,Y={\mathbb {R}}, C={\mathbb {R}}_+^2, K={\mathbb {R}}_+, x_0=0, f(x)=(x,f_1(x)),\)
\( g(x)=(e^{-x},1)\), and \(h(x)=x^2-2x\). Then, \(T(h^{-1}(-K),x_0)=[0,\infty [\), and \(f,g\) and \(h\) admit first-order approximations \(A_f(x_0)=\{(1,\alpha )\in {\mathbb {R}}^2|\; \alpha \in [1,2]\}\), \(A_g(x_0)=\{(-1,0)\}\), and \(A_h(x_0)=\{-2\}\), respectively, for any fixed \(\alpha > 0\). Hence,
Choosing \((y^*,z^*)=((0,1),0)\in C^*\times K^*{\setminus } \{(0,0)\}\), one sees that, for all \(u\in T(h^{-1}(-K),x_0)\) with norm one, \(P\in \mathrm{cl}A_{\varphi }(x_0)\cup (A_{\varphi }(x_0)_{\infty }{\setminus } \{0\})\), and \(Q\in \mathrm{cl}A_h(x_0)\cup (A_h(x_0)_{\infty }{\setminus } \{0\})\),
In view of Theorem 4.2, \(x_0\) is a local firm solution of order 1 of (P).
Now, we check directly, with \(\gamma =1\) and an arbitrary neighborhood \(U\) of \(x_0=0\), that, for all \(x\in U\cap S{\setminus } \{x_0\}\),
Indeed, since the feasible set is \(S=[0,2]\), we have \(x>0\) for all \(x\in U\cap S{\setminus } \{x_0\}\), and hence
Hence, for all \(x\in U\cap S{\setminus } \{x_0\}\),
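The direct check can also be run numerically. Since \(\varphi_1(x)=x/e^{-x}=xe^x\) and every point of \(\varphi(x_0)-C\) has first coordinate at most \(0\), the distance from \(\varphi(x)\) to \(\varphi(x_0)-C\) is at least \(xe^x\ge x=\gamma |x-x_0|\) on \(]0,2]\); the component \(f_1\) plays no role in this bound. A minimal sketch:

```python
import math

def h(x):                        # constraint: x is feasible iff h(x) <= 0,
    return x * x - 2 * x         # so S = h^{-1}(-K) = [0, 2]

def phi1(x):                     # first component of phi = f/g:
    return x * math.exp(x)       # x / e^{-x} = x * e^x

gamma = 1.0
for x in [0.001, 0.01, 0.1, 0.5, 1.0, 2.0]:
    assert h(x) <= 0                          # x is feasible
    assert phi1(x) - phi1(0.0) >= gamma * x   # firm growth of order 1
```

The inequality holds because \(e^x\ge 1\) for \(x\ge 0\), matching the displayed estimate.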
5 Second-order optimality conditions
By the calculus rules obtained in Sect. 3, we can easily apply the second-order optimality conditions in Khanh and Tuan (2009, 2011) for vector optimization to multiobjective fractional programming; hence, deriving the results here is immediate. However, since these optimality conditions have advantages in applications, we present them, together with illustrative applications, for the sake of completeness. For necessary conditions, we adopt the following notation for problem (P), with \(z^*\in K^*\),
5.1 The case \(f\) and \(h\) are first-order differentiable
In this subsection, assume that \(f_i\) and \(h\) are Fréchet differentiable at \(x_0\) for \(i=1,\ldots ,m\), and set
Theorem 5.1
(Necessary condition) Assume that \(C\) is polyhedral, \(g_i\) are 2-calm at \(x_0\) for \(i=1,\ldots ,m\), and \(z^*\in K^*\) with \(\langle z^*,h(x_0)\rangle =0\). Impose further that \((f'_i(x_0),B_{f_i}(x_0))\), \((0,B_{g_i}(x_0))\) and \((h'(x_0),B_h(x_0))\) are bounded asymptotically p-compact second-order approximations of \(f_i,g_i\) and \(h\), respectively, at \(x_0\), for \(i=1,\ldots ,m\).
If \(x_0\) is a local weak solution of (P), then, for any \(v\in T(H(z^*),x_0)\), there exists \(y^*\in B\), where \(B\) is finite and \(\mathrm{cone(co}B)=C^*\), such that \(\langle y^*,A_{\varphi }(x_0)v\rangle +\langle z^*,h'(x_0)v\rangle \ge 0\).
If, furthermore, \(y^*\circ A_{\varphi }(x_0)+z^*\circ h'(x_0)=0\), then either there exist \(M\in \) p-cl\(B_{\varphi }(x_0)\) and \(N\in \) p-cl\(B_h(x_0)\) such that \(\langle y^*,M(v,v)\rangle +\langle z^*,N(v,v)\rangle \ge 0,\) or there exists \(M\in \) p-\(B_{\varphi }(x_0)_{\infty }{\setminus } \{0\}\) such that \(\langle y^*,M(v,v)\rangle \ge 0.\)
Proof
By Propositions 3.1, 3.5 and 3.6, \((A_{\varphi }(x_0),B_{\varphi }(x_0))\) is a second-order approximation of \(\varphi \) at \(x_0\). Furthermore, since \((f'_i(x_0),B_{f_i}(x_0))\), \((0,B_{g_i}(x_0))\) are asymptotically p-compact, \(g_i(x_0)\ne 0\), and \(B_{f_i}(x_0)\), \(B_{g_i}(x_0)\) are bounded, for \(i=1,\ldots ,m\), Proposition 3.4 implies that \((A_{\varphi }(x_0),B_{\varphi }(x_0))\) is asymptotically p-compact. Now, applying Theorem 4.1 of Khanh and Tuan (2009, 2011) ends the proof. \(\square \)
Note that in this statement and Theorem 5.2 below, the assumptions imposed on \(g_i\) are restrictive. However, in applications we can always rewrite the fractions involved in the problem, with new \(f_i\) and \(g_i\), so that the new \(g_i\) satisfy these assumptions. We illustrate Theorem 5.1 by the following example.
Example 5.1
Let \(X=l^2\), \(m=1\), \(Y=\mathbb {R}\), \(C=K=\mathbb {R}_+\), \(B=\{1\}\), and \(x_0=0\). Let \( f(x)=-\Vert x\Vert ^2=-\sum _{i=1}^{\infty }x_i^2\), \(g(x)=\Vert x\Vert ^{4/3}+1 =\big (\sum _{i=1}^{\infty }x_i^2\big )^{2/3}+1\), and \(h(x)=x_1^2-x_1\). Then, \(g(x_0)=1\ne 0\) and \(g'(x_0)=0\), i.e., \(g\) is Fréchet differentiable at \(x_0\) but not 2-calm at \(x_0\), and \(B_g(x_0)=\{N_{\lambda }\in B(l^2,l^2,\mathbb {R})\;|\; \lambda >1\},\) where \(N_{\lambda }(x,y)=\lambda \sum _{i=1}^{\infty }x_iy_i\) for \(x,y\in l^2\). We have \(f(x_0)=0\), \(f'(x_0)=0\), \(B_f(x_0)=\{-1\}\), \(h'(x_0)=(-1,0,0,\ldots )\), and \(B_h(x_0)=\{N\in B(l^2,l^2,\mathbb {R})\;|\; N(x,y)=x_1y_1 \}.\) Therefore, \(A_{\varphi }(x_0)=A_f(x_0)\) and \(B_{\varphi }(x_0)=B_f(x_0)\), and \((A_{\varphi }(x_0),B_{\varphi }(x_0))\) is an asymptotically p-compact second-order approximation of \(\varphi \) at \(x_0\). We have p-cl\(B_{\varphi }(x_0)=\{-1\}\) and p-\(B_{\varphi }(x_0)_{\infty }=\{0\}\). Choose \(z^*=0\in K^*=\mathbb {R}_+\) and
Then, for any \(y^*\in B\), i.e., \(y^*=1\), we have \(y^*\circ A_{\varphi }(x_0)+z^*\circ h'(x_0)=0\) and
for all \(M\in \) p-cl\(B_{\varphi }(x_0)\). Due to Theorem 5.1, \(x_0\) is not a local weak solution of problem (P). But, since \(X=l^2\) is infinite dimensional, Theorem 4.1 of Reedy and Mukherjee (2001) cannot be applied.
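In this scalar case (\(m=1\)), the conclusion can also be verified directly: the points \(te_1\), \(t\in ]0,1]\), are feasible, since \(h(te_1)=t^2-t\le 0\), and \(\varphi(te_1)=-t^2/(t^{4/3}+1)<0=\varphi(x_0)\). A numerical sketch with finitely supported elements of \(l^2\):

```python
import math

def norm(x):                     # l^2 norm of a finitely supported sequence
    return math.sqrt(sum(xi * xi for xi in x))

def f(x): return -norm(x) ** 2
def g(x): return norm(x) ** (4.0 / 3.0) + 1.0
def h(x): return x[0] ** 2 - x[0]

phi0 = f((0.0,)) / g((0.0,))     # phi(x_0) = 0
for t in [10.0 ** (-k) for k in range(1, 8)]:
    x = (t, 0.0, 0.0)            # the point t * e_1
    assert h(x) <= 0             # feasible: h(x) lies in -K = -R_+
    assert f(x) / g(x) < phi0    # strictly better than phi(x_0)
```

Since such points lie in every neighborhood of \(x_0=0\), \(x_0\) is not a local weak solution, in accordance with the conclusion above.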
Theorem 5.2
(Sufficient condition) Assume that \(X\) is finite dimensional, \(x_0\in h^{-1}(-K)\), \(g_i\) is 2-calm at \(x_0\) and \((f'_i(x_0)\), \(B_{f_i}(x_0))\), \((0,B_{g_i}(x_0))\), and \((h'(x_0),B_h(x_0))\) are bounded asymptotically p-compact second-order approximations of \(f_i\), \(g_i\) and \(h\), respectively, at \(x_0\), for \(i=1,\ldots ,m\). Set
Impose further the existence of \((y^*,z^*)\in C_0^*\times K_0^*\) such that, for all \(v\in T(h^{-1}(-K),x_0)\) with \(\Vert v\Vert =1\) and \(\langle y^*,A_{\varphi }(x_0)v\rangle =\langle z^*,h'(x_0)v\rangle =0,\) one has
(i) for each \(M\in \) cl\(B_{\varphi }(x_0)\) and \(N\in \) p-cl\(B_h(x_0),\) \(\langle y^*,M(v,v)\rangle +\langle z^*,N(v,v)\rangle >0;\)
(ii) for each \(M\in \) p-\(B_{\varphi }(x_0)_{\infty }{\setminus } \{0\},\) \(\langle y^*,M(v,v)\rangle >0.\)
Then, \(x_0\) is a local firm solution of order 2.
Proof
Propositions 3.1, 3.5 and 3.6 together imply that the finite dimensional set \((A_{\varphi }(x_0),B_{\varphi }(x_0))\) is an asymptotically p-compact second-order approximation of \(\varphi \) at \(x_0\). Applying Theorem 4.5 of Khanh and Tuan (2009), the conclusion is obtained. \(\square \)
Example 5.2
Let \(n=1, m=2\), \(Y={\mathbb {R}}^2\), \(C={\mathbb {R}}^2_+,K={\mathbb {R}}_+, x_0=0, f(x)=(f_1(x),x^2)\) with
\( g(x)=(-x^2+1,\cos ^2 x)\), and \(h(x)=-x+x^2\). Then, we have \(g(x_0)=(1,1),g'(x_0)=(0,0)\), \(B_g(x_0)=\{(-1,-1)\}\) (\(g\) is 2-calm at \(x_0\)), \(f(x_0)=0\), \(f'(x_0)=(0,0),h'(x_0)=-1,\)
\(B_f(x_0)=\{(0,1)\}\), \(B_h(x_0)=\{1\}, h^{-1}(-K)=[0,1]\), and \(T(h^{-1}(-K),x_0)=[0,\infty [\). Hence, \(A_{\varphi }(x_0)=\{(0,0)\}\), \(B_{\varphi }(x_0)=\mathrm{cl}B_{\varphi }(x_0)=\{(0,1)\}\), \(B_{\varphi }(x_0)_{\infty }=\{(0,0)\}\), and \(C_0^*\times K_0^*=\{(y^*,0)\;|\;y^*\in {\mathbb {R}}_+^2{\setminus }\{0\}\}\).
Choose \((y^*,z^*)=((1,0),0)\in C^*_0\times K_0^*\). Then, for all \(v\in T(h^{-1}(-K),x_0)\) with norm one, i.e., \(v=1\), we see that, for each \(M\in \mathrm{cl}B_{\varphi }(x_0)\) and \(N\in \mathrm{cl}B_h(x_0)\),
According to Theorem 5.2, \(x_0\) is a local firm solution of order 2 of (P).
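The firm growth of order 2 can also be observed directly on the second component: \(\varphi_2(x)=x^2/\cos^2 x\ge x^2\) on \(h^{-1}(-K)=[0,1]\) (since \(\cos^2 x\le 1\)), and every point of \(\varphi(x_0)-C\) has second coordinate at most \(\varphi_2(x_0)=0\), so \(d(\varphi(x),\varphi(x_0)-C)\ge \Vert x-x_0\Vert^2\); the component \(f_1\), whose definition is in a display not reproduced above, is not needed for this bound. A minimal sketch:

```python
import math

def h(x):                        # constraint: x is feasible iff h(x) <= 0,
    return -x + x * x            # so h^{-1}(-K) = [0, 1] near x_0 = 0

def phi2(x):                     # second component of phi = f/g:
    return x * x / math.cos(x) ** 2   # x^2 / cos^2(x)

gamma = 1.0
for x in [0.001, 0.01, 0.1, 0.5, 1.0]:
    assert h(x) <= 0                              # x is feasible
    assert phi2(x) - phi2(0.0) >= gamma * x * x   # firm growth of order 2
```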
5.2 The case \(f\) and \(h\) are not differentiable
In this general case, we set
Theorem 5.3
(Necessary condition) Let \(C\) be polyhedral, \(g_i\) be 2-calm at \(x_0\) for \(i=1,\ldots ,m\), and \(z^*\in K^*\) with \(\langle z^*,h(x_0)\rangle =0\). Suppose \((A_{f_i}(x_0),B_{f_i}(x_0))\), \((A_{g_i}(x_0),B_{g_i}(x_0))\), and \((A_h(x_0),B_h(x_0))\) are bounded asymptotically p-compact second-order approximations of \(f_i,\;g_i,\) \( i=1,\ldots ,m\), and \(h\), respectively, at \(x_0\).
If \(x_0\) is a local weak solution of (P), then, for any \(v\in T(H(z^*),x_0)\),
(i) for all \(w\in T^2(H(z^*),x_0,v)\), there exist \(y^*\in B\), with \(B\) being finite and \(\mathrm{cone(co}B)=C^*\), \(P\in \) p-cl\(A_{\varphi }(x_0)\), and \(Q\in \) p-cl\(A_h(x_0)\) such that \(\langle y^*,Pv\rangle +\langle z^*,Qv\rangle \ge 0\). If, in addition, \(v\in P(x_0,y^*,z^*)\), then either there are \(P\in \) p-cl\(A_{\varphi }(x_0)\), \(Q\in \) p-cl\(A_h(x_0)\), \(M\in \) p-cl\(B_{\varphi }(x_0)\), and \(N\in \) p-cl\(B_h(x_0)\) such that
$$\begin{aligned} \langle y^*,Pw\rangle +\langle z^*,Qw\rangle +2\langle y^*,M(v,v)\rangle +2\langle z^*,N(v,v)\rangle \ge 0, \end{aligned}$$
or there exists \(M\in \) p-\(B_{\varphi }(x_0)_{\infty }{\setminus } \{0\}\) with \(\langle y^*,M(v,v)\rangle \ge 0;\)
(ii) for all \(w\in T''(G(z^*),x_0,v)\), there exist \(P\in \) p-cl\(A_{\varphi }(x_0)\) and \(Q\in \) p-cl\(A_h(x_0)\) such that \(\langle y^*,Pv\rangle +\langle z^*,Qv\rangle \ge 0\). If, in addition, \(v\in P(x_0,y^*,z^*)\), then either there exist \(P\in \) p-cl\(A_{\varphi }(x_0)\), \(Q\in \) p-cl\(A_h(x_0)\), and \(M\in \) p-cl\(B_{\varphi }(x_0)_{\infty }\) such that
$$\begin{aligned} \langle y^*,Pw\rangle +\langle z^*,Qw\rangle +2\langle y^*,M(v,v)\rangle \ge 0, \end{aligned}$$
or some \(M\in \) p-\(B_{\varphi }(x_0)_{\infty }{\setminus } \{0\}\) exists with \(\langle y^*,M(v,v)\rangle \ge 0.\)
Proof
The proof is similar to that of Theorem 5.1, but now we apply Theorem 4.7 of Khanh and Tuan (2009, 2011) and we do not need Proposition 3.1. \(\square \)
Theorem 5.3 rejects a candidate for a weak solution in the following illustrative example.
Example 5.3
Let \(X={\mathbb {R}}^2\), \(Y={\mathbb {R}}\), \(m=2\), \(C={\mathbb {R}}_+^2\), \(B=\{y_1^*=(1,0), y_2^*=(0,1)\}\), \(K=\{0\}\), \(x_0=(0,0)\), \(f(x,y)=(-y,x+|y|)\), \(g(x,y)=(x^2+1,y^2+1)\), and \(h(x,y)=-x^3+y^2\). Then, we have \(g(x_0)=(1,1)\), \(A_g(x_0)=\{0\}\), and \(B_g(x_0)=\left\{ \begin{pmatrix} 1&{}0&{}0&{}0\\ 0&{}0&{}0&{}1 \end{pmatrix} \right\} \). So, \(g\) is 2-calm at \(x_0\). We have the following approximations
Therefore, \(A_{\varphi }(x_0)=A_f(x_0), B_{\varphi }(x_0)=\{0\}\). Let \(z^*=0\). Then, \(H(z^*)=\{(x,y)\in {\mathbb {R}}^2|\) \(-x^3+y^2=0\}, T(H(z^*),x_0)={\mathbb {R}}_+\times \{0\}\).
Choosing \(v=(1,0)\in T(H(z^*),x_0)\), we have
Now, let \(w=(0,1)\in T''(H(z^*),x_0,v)\). For \(y_1^*=(1,0)\in B\), \(\forall P\in \mathrm{cl}A_1(x_0)\), \(\forall Q\in \mathrm{cl}A_h(x_0)\), one gets \(\langle y_1^*,Pv\rangle +\langle z^*,Qv\rangle \ge 0,\) and \(v\in P(x_0,y_1^*,z^*)=\{(v_1,v_2)\in {\mathbb {R}}^2|v_2=0\}.\) Hence, for all \(P\in \mathrm{cl}A_{\varphi }(x_0)\), \(Q\in \mathrm{cl}A_h(x_0)\) and \(M\in B_{\varphi }(x_0)_{\infty }\), one has
For \(y_2^*=(0,1)\in B\) and for all \(P\in \mathrm{cl}A_{\varphi }(x_0)\) and \(Q\in \mathrm{cl}A_h(x_0)\), one obtains \(\langle y_2^*,Pv\rangle +\langle z^*,Qv\rangle =1> 0\) and \(v\not \in P(x_0,y_2^*,z^*).\)
Taking into account Theorem 5.3, one sees that \(x_0\) is not a local weak solution of (P).
We pass finally to sufficient conditions.
Theorem 5.4
(Sufficient condition) Let \(X={\mathbb {R}}^n\), \(x_0\in h^{-1}(-K)\), \(g_i\) be 2-calm at \(x_0\), \((y^*,z^*)\in C^*\times K^*\) with \(\langle z^*,h(x_0)\rangle =0\), and \((A_{f_i}(x_0),B_{f_i}(x_0))\), \((A_{g_i}(x_0),B_{g_i}(x_0))\), \((A_h(x_0),B_h(x_0))\) be bounded asymptotically p-compact second-order approximations of \(f_i,g_i,\) and \(h\), respectively, at \(x_0\), for \(i=1,\ldots ,m\). Then, \(x_0\) is a local firm solution of order 2 of \(\mathrm{(P)}\) if the following conditions hold:
(i) for all \(v\in T(h^{-1}(-K),x_0)\), \(P\in A_1(x_0)\), and \(Q\in A_h(x_0)\), one has \(\langle y^*,Pv\rangle +\langle z^*,Qv\rangle =0;\)
(ii) \(\forall w\in T^2(h^{-1}(-K),x_0,v)\) with \(\Vert w\Vert =1\), \(\exists \overline{P}\in \mathrm{cl}A_{\varphi }(x_0)\) with \(\overline{P}w\in -C\), \(\exists \overline{Q}\in \) p-cl\(A_h(x_0)\) with \(\overline{Q}w\in -K(h(x_0))\), and \(\forall M\in B_{\varphi }(x_0)_{\infty }{\setminus } \{0\}\), one has \(\langle y^*,M(v,v)\rangle >0;\)
\(\mathrm{(ii}_1\mathrm{)}\) for all \(w\in T^2(h^{-1}(-K),x_0,v)\cap v^{\perp }\), \(P\in \mathrm{cl}A_{\varphi }(x_0)\), \(Q\in \) cl\(A_h(x_0)\), \(M\in \mathrm{cl}B_{\varphi }(x_0)\), and \(N\in \) p-cl\(B_h(x_0)\), one has
$$\begin{aligned} \langle y^*,Pw\rangle +\langle z^*,Qw\rangle +2\langle y^*,M(v,v)\rangle +2\langle z^*,N(v,v)\rangle > 0; \end{aligned}$$
\(\mathrm{(ii}_2\mathrm{)}\) for all \(w\in T''(h^{-1}(-K),x_0,v)\cap v^{\perp }{\setminus } \{0\}\), \(P\in \mathrm{cl}A_{\varphi }(x_0)\), \(Q\in \) p-cl\(A_h(x_0)\), and \(M\in \) p-cl\(B_{\varphi }(x_0)\), one has
$$\begin{aligned} \langle y^*,Pw\rangle +\langle z^*,Qw\rangle +\langle y^*,M(w,w)\rangle > 0. \end{aligned}$$
Proof
By Propositions 3.5 and 3.6, we can apply Theorem 4.9 of Khanh and Tuan (2009) to complete the proof. \(\square \)
6 Conclusion
Multiobjective fractional programming on finite-dimensional spaces has been intensively investigated recently. In this paper, we consider this problem with nonsmooth data in infinite-dimensional normed spaces. We develop first- and second-order optimality conditions in terms of generalized derivatives called approximations. Unlike the existing papers on fractional programming, which could only weaken convexity assumptions, we avoid such assumptions completely. Furthermore, the maps in our problem may not be Lipschitz, a condition usually imposed in earlier results. Our necessary optimality conditions are established for local weak solutions, and our sufficient conditions for local firm solutions. The obtained conditions of orders 1 and 2 are expressed in terms of approximations of orders 1 and 2, respectively. We also provide a number of examples to illustrate the results in detail.
References
Bao TQ, Gupta P, Mordukhovich BS (2007) Necessary conditions in multiobjective optimization with equilibrium constraints. J Optim Theory Appl 135:179–203
Bector CR, Chandra S, Husain I (1993) Optimality conditions and duality in subdifferentiable multiobjective fractional programming. J Optim Theory Appl 79:105–125
Borwein JM (1976) Fractional programming without differentiability. Math Prog 11:283–290
Cambini R, Carosi L, Schaible S (2005) Duality in fractional programming problems with set constraints. In: Eberhard A, Hadjisavvas N, Luc DT (eds) Nonconvex optimization and its applications. Springer, Berlin, pp 147–160
Chinchuluun A, Yuan DH, Pardalos PM (2007) Optimality conditions and duality for nondifferentiable multiobjective fractional programming with generalized convexity. Ann Oper Res 154:133–147
Husain I, Jabeen Z (2005) On fractional programming containing support functions. J Appl Math Comp 18:361–376
Jourani A, Thibault L (1993) Approximations and metric regularity in mathematical programming in Banach spaces. Math Oper Res 18:390–400
Khanh PQ, Tung NM (2014) First and second-order optimality conditions without differentiability in multivalued vector optimization, submitted for publication
Khanh PQ, Tuan ND (2006) First and second-order optimality conditions using approximations for nonsmooth vector optimization in Banach spaces. J Optim Theory Appl 136:238–265
Khanh PQ, Tuan ND (2008) First and second-order approximations as derivatives of mappings in optimality conditions for nonsmooth vector optimization. Appl Math Optim 58:147–166
Khanh PQ, Tuan ND (2009) Optimality conditions using approximations for nonsmooth vector optimization problems under general inequality constraints. J Convex Anal 16:169–186
Khanh PQ, Tuan ND (2011) Corrigendum to “Optimality conditions using approximations for nonsmooth vector optimization problems under general inequality constraints”. J Convex Anal 18:897–901
Kim DS, Kim MH, Lee GM (2005) On optimality and duality for nonsmooth multiobjective fractional optimization problems. Nonlinear Anal 63:1867–1876
Kuk H, Lee GM, Tanino T (2001) Optimality and duality for nonsmooth multiobjective fractional programming with generalized invexity. J Math Anal Appl 262:365–375
Liang ZA, Huang HX, Pardalos PM (2001) Optimality conditions and duality for a class of nonlinear fractional programming problems. J Optim Theory Appl 110:611–619
Liu S, Feng E (2007) Optimality conditions and duality for a class of nondifferentiable multi-objective fractional programming problems. J Global Optim 38:653–666
Lyall V, Suneja S, Agarwal S (1997) Optimality and duality in fractional programming involving semilocally convex and related functions. Optimization 41:237–255
Mishra SK (1997) Second order generalized invexity and duality in mathematical programming. Optimization 42:51–69
Nobakhtian S (2008) Optimality and duality for nonsmooth multiobjective fractional programming with mixed constraints. J Global Optim 41:103–115
Penot JP (2000) Recent advances on second-order optimality conditions. In: Nguyen VH, Strodiot JJ, Tossings P (eds) Optimization. Springer, Berlin, pp 357–380
Reedy LV, Mukherjee RN (2001) Second order necessary conditions for fractional programming. Indian J Pure Appl Math 32:485–491
Schaible S (1982) Fractional programming. Z Oper Res 27:39–45
Schaible S (1982) Bibliography in fractional programming. Z Oper Res 26:211–241
Schaible S (1995) Fractional programming. In: Horst R, Pardalos PM (eds) Handbook of global optimization. Kluwer Academic, Dordrecht, pp 495–608
Singh C (1981) Optimality conditions in fractional programming. J Optim Theory Appl 33:287–294
Singh C (1986) Nondifferentiable fractional programming with Hanson–Mond classes of functions. J Optim Theory Appl 49:431–447
Soleimani-Damaneh M (2008) Optimality conditions for nonsmooth fractional multiple objective programming. Nonlinear Anal 68:2873–2878
Stancu-Minasian IM (2006) A sixth bibliography of fractional programming. Optimization 55:405–428
Zalmai GJ (2006) Generalized \((\eta, \rho )\)-invex functions and global semiparametric sufficient efficiency conditions for multiobjective fractional programming problems containing arbitrary norms. J Global Optim 36:237–282
Acknowledgments
This work was supported by National Foundation for Science and Technology Development (NAFOSTED). A part of it was completed when the authors stayed as research visitors at Vietnam Institute for Advanced Study in Mathematics (VIASM), whose hospitality is gratefully acknowledged. The second author is supported partially also by Cantho University. The authors are much indebted to the anonymous referees for their valuable remarks and suggestions.
Khanh, P.Q., Tung, L.T. First- and second-order optimality conditions for multiobjective fractional programming. TOP 23, 419–440 (2015). https://doi.org/10.1007/s11750-014-0347-7
Keywords
- Multiobjective fractional programming
- First and second-order approximations
- Weak solutions
- Firm solutions
- Optimality conditions
- Asymptotical pointwise compactness