1 Introduction

The study of fractional programming problems is important, since many optimization problems arising from practical applications turn out to be of fractional type. Many important results have been established for fractional programming problems without data uncertainty over the last decades; see [1–8] and the references therein.

As we know, most practical optimization problems are affected by data uncertainty due to modelling or prediction errors; see [9–11] and the references therein. Consequently, a great deal of attention has been focused on optimization under uncertainty. Robust optimization methodology is a powerful approach for examining and solving optimization problems under data uncertainty. It treats data uncertainty as deterministic via bounded uncertainty sets and does not limit the data values to point estimates. Recently, many authors have studied convex optimization problems with uncertain data by using robust optimization methodology; see [12–21] and the references therein.

In this paper, we consider the following fractional programming problem:

$$\begin{aligned} \min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x)\in -S\right\} , \end{aligned}$$
(P)

where \(X\) and \(Y\) are locally convex vector spaces, \(S\) is a nonempty closed convex cone of \(Y\), \(f : X \rightarrow R \cup \{+\infty \}\) is a convex function with \(f\ge 0\), \(g : X \rightarrow R\cup \{+\infty \} \) is a concave function with \(g>0\), and \(h:X\rightarrow Y\) is an \(S\)-convex function.
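
For instance, when \(Y=R^m\) and \(S=R^m_+\) (the setting used in Sect. 4), the cone constraint \(h(x)\in -S\) amounts to finitely many convex inequality constraints, and \(\text{(P)}\) takes the familiar form

$$\begin{aligned} \min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h_i(x)\le 0,\ i=1,\ldots ,m\right\} , \quad \text{ where}\ h=(h_1,\ldots ,h_m). \end{aligned}$$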

In the face of data uncertainty in the constraints, problem \((\text{ P})\) can be captured by the following uncertain fractional programming problem:

$$\begin{aligned} \min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S\right\} , \end{aligned}$$
(UP)

where \(Z\) is a locally convex vector space, \(h:X\times Z\rightarrow Y\) with \(h(\cdot , v)\) \(S\)-convex for each \(v\), and \(v\) is an uncertain parameter which belongs to the uncertainty set \(\mathcal V \subseteq Z\). Robust optimization associates with the uncertain program \(\text{(UP)}\) its robust counterpart,

$$\begin{aligned} \min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} , \end{aligned}$$
(RUP)

where the uncertain constraints are enforced for every possible value of the parameters within their prescribed uncertainty set \(\mathcal V \).
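
As a simple one-dimensional illustration (the data here are ours, chosen only for illustration), take \(X=Z=R\), \(S=R_+\) and \(h(x,v)=vx-\beta \) with interval uncertainty \(v\in \mathcal V =[\underline{v},\bar{v}]\). The robust constraint then enforces the worst case over the interval,

$$\begin{aligned} vx\le \beta \quad \forall v\in [\underline{v},\bar{v}] \quad \Longleftrightarrow \quad \max \{\underline{v}x, \bar{v}x\}\le \beta , \end{aligned}$$

so a single uncertain linear inequality is replaced by finitely many deterministic ones; the constraint-wise interval model of Sect. 4.2 is the multi-constraint version of this situation.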

On the other hand, for each fixed \(v\in \mathcal V \), the uncertain Wolfe dual program of \(\text{(UP)}\) is given by

$$\begin{aligned} \mathop {\max }\limits _{{\mathop {\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)-r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0,\ f(x)-r g(x)+(\lambda h)(x,v)\ge 0\right\} . \end{aligned}$$
(DP)

The optimistic counterpart of the uncertain dual \((\text{ DP})\) is a deterministic optimization problem which is given by

$$\begin{aligned} \mathop {\max }\limits _{{\mathop {v\in \mathcal V , \lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)\!-\!r\nabla g(x)\!+\!\nabla _x (\lambda h)(x,v)\!=\!0, f(x)-rg(x)\!+\!(\lambda h)(x,v)\!\ge \! 0\right\} , \end{aligned}$$
(ODP)

where the maximization is also over all the parameters \(v\in \mathcal V \). Robust strong duality holds between \(\text{(UP)}\) and \(\text{(DP)}\) when

$$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\!\in \!-S,\quad \forall v\!\in \! \mathcal V \right\} \\&\!=\!\mathop {\max }\limits _{{\mathop {v\in \mathcal V , \lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)\!-\!r\nabla g(x)\!+\!\nabla _x (\lambda h)(x,v)\!=\!0, f(x)\!-\!rg(x)\!+\!(\lambda h)(x,v)\!\ge \! 0\right\} . \end{aligned}$$

The significance of this robust duality is that, for some classes of robust convex problems, the dual problem can be solved easily. Throughout this paper, we also assume that the primal problem \(\text{(UP)}\) admits a minimizer.

The purpose of this paper is to establish duality results for fractional programming problems with data uncertainty. We make three key contributions to the study of fractional programming under data uncertainty. First, we establish a robust form of the Farkas lemma for a general uncertain conical convex system, which provides a new generalization of the celebrated Farkas lemma for cone convex systems [22–24] to uncertain cone convex systems. Then, we characterize robust duality for an uncertain fractional programming problem by showing that strong duality holds between its robust counterpart and an optimistic counterpart of the uncertain Wolfe dual problem under a robust characteristic cone constraint qualification. For related conditions, we refer the reader to [25–31]. We also prove that robust strong duality always holds for linear fractional programming problems under scenario data uncertainty or constraint-wise interval uncertainty. In this case, we also show that the optimistic counterpart of the dual is computationally tractable.

The paper is organized as follows. In Sect. 2, we recall some notions and give some preliminary results. In Sect. 3, we first present a robust Farkas lemma for an uncertain cone convex system. We also introduce a robust characteristic cone constraint qualification. By using the robust Farkas lemma and the robust characteristic cone, we completely characterize robust duality for an uncertain fractional programming problem and its uncertain Wolfe dual problem. In Sect. 4, we show that robust strong duality always holds for linear fractional programming problems under scenario data uncertainty or constraint-wise interval uncertainty, and that the optimistic counterpart of the dual is computationally tractable.

2 Mathematical preliminaries

Throughout this paper, let \(X\) and \(Y\) be two separated locally convex vector spaces with their topological dual spaces \(X^*\) and \(Y^*\), endowed with the weak\(^*\) topologies \(w(X^*,X)\) and \(w(Y^*,Y)\), respectively. Let \(S\subseteq Y\) be a nonempty convex cone which defines a partial order on \(Y\), namely, for any \(y_1, y_2\in Y\), \(y_1\le _S y_2\Longleftrightarrow y_2-y_1\in S.\) Let \(\langle x^*,x\rangle =x^*(x)\) be the value of a functional \(x^*\in X^*\) at \(x\in X\). The dual cone of \(S\) is defined by \(S^*=\{y^*\in Y^*:\langle y^*,y\rangle \ge 0,\forall y\in S\}.\) Let \(D\) be a nonempty subset of \(X\). The closure (resp. interior, convex hull) of \(D\) is denoted by \(\text{ cl} D (\text{ resp.}~\text{ int} D, \text{ co} D)\). If \(D\subseteq X^*\), then the weak\(^*\) closure of \(D\) is denoted by \(\text{ cl}^{w^*}D \). The indicator function \(\delta _D:X\rightarrow {R}\cup \{+\infty \}\) of \(D\) is defined by

$$\begin{aligned} \delta _D(x)= \left\{ \begin{array}{l@{\quad }l} 0,&\text{ if} x\in D,\\ +\infty ,&\text{ if} x\not \in D. \end{array} \right. \end{aligned}$$

The support function \(\sigma _{D}:X^*\rightarrow {R}\cup \{+\infty \}\) of \(D\) is defined by

$$\begin{aligned} \sigma _{D}(x^*)=\sup _{x\in D}\;\langle x^*, x\rangle . \end{aligned}$$

Let \(f:X\rightarrow R\cup \{+\infty \}\) be an extended real-valued function. The effective domain and the epigraph of \(f\) are defined by

$$\begin{aligned} \text{ dom}f=\{x\in X:f(x)<+\infty \} \end{aligned}$$

and

$$\begin{aligned} \text{ epi}{f}=\{(x,r)\in X\times R:f(x)\le r\}, \end{aligned}$$

respectively. We say that \(f\) is proper, if \(\text{ dom} f\not =\emptyset \). Moreover, if \(\text{ epi}f\) is closed, we say that \(f\) is lower semicontinuous. A function \(f:X\rightarrow R\cup \{+\infty \}\) is said to be convex, if for any \(\mu \in [0,1]\) and \(x,y\in X\),

$$\begin{aligned} f(\mu x+(1-\mu )y)\le \mu f(x)+(1-\mu )f(y). \end{aligned}$$

Moreover, we say that \(f\) is concave, if \(-f\) is convex. As usual, the conjugate function \(f^*:X^*\rightarrow R\cup \{+\infty \}\) of \(f\) is defined by

$$\begin{aligned} f^*(x^*)=\sup _{x\in X}\;\{\langle x^*, x\rangle -f(x)\}. \end{aligned}$$

Clearly, \(f^*\) is a proper lower semicontinuous convex function and for any \(\alpha > 0,\)

$$\begin{aligned} \alpha \text{ epi}f^* = \text{ epi}(\alpha f )^*. \end{aligned}$$

For details, see [30, 32].
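
The identity \(\alpha \text{ epi}f^* = \text{ epi}(\alpha f )^*\) follows from a direct computation with the definition of the conjugate: for any \(\alpha >0\) and \(x^*\in X^*\),

$$\begin{aligned} (\alpha f)^*(x^*)=\sup _{x\in X}\;\{\langle x^*, x\rangle -\alpha f(x)\}=\alpha \sup _{x\in X}\;\left\{ \left\langle \frac{x^*}{\alpha }, x\right\rangle -f(x)\right\} =\alpha f^*\left(\frac{x^*}{\alpha }\right), \end{aligned}$$

and so \((x^*,r)\in \text{ epi}(\alpha f)^*\) if and only if \(f^*(x^*/\alpha )\le r/\alpha \), that is, \((x^*,r)\in \alpha \text{ epi}f^*\).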

The notions given above for extended real-valued functions can also be formulated for functions having their ranges in infinite-dimensional spaces. To this end, we attach a greatest element \(+\infty \) with respect to “\( \le _S \)” and let \({Y}^{\bullet }=Y\cup \{+\infty \}\). The following operations are defined on \({Y}^{\bullet }\):

$$\begin{aligned} y+(+\infty )=(+\infty )+y=+\infty \quad \text{ and} \quad t(+\infty )=+\infty , \quad \text{ for} \text{ any} y\in Y \text{ and} t\ge 0. \end{aligned}$$

Let \(h:X\rightarrow Y^\bullet \) be an extended vector-valued function. The domain and the \(S\)-epigraph of \(h\) are defined by

$$\begin{aligned} \text{ dom} h =\{x\in X:h(x)\in Y\} \end{aligned}$$

and

$$\begin{aligned} \text{ epi}_S h = \{(x,y)\in X\times Y:y\in h(x)+S\}, \end{aligned}$$

respectively. We say that \(h\) is proper, if \(\text{ dom} h \not =\emptyset \). We say that \(h\) is \(S\)-convex, if for any \(x, y\in X\) and \( \mu \in [0,1]\),

$$\begin{aligned} h(\mu x+(1-\mu )y)\le _S \mu h(x)+(1-\mu )h(y). \end{aligned}$$

Moreover, let \(\lambda \in S^*\). The function \((\lambda h): X\rightarrow {R}\cup \{+\infty \}\) is defined by

$$\begin{aligned} (\lambda h)(x)= \left\{ \begin{array}{l@{\quad }l} \langle \lambda , h(x)\rangle ,&\text{ if} x\in \text{ dom} h , \\ +\infty ,&\text{ otherwise}. \end{array} \right. \end{aligned}$$

It is easy to see that \(h\) is \(S\)-convex if and only if \((\lambda h)(\cdot ) : X \rightarrow R \cup \{+\infty \}\) is a convex function for each \(\lambda \in S^*\).
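
For example, in the finite-dimensional setting \(Y=R^m\) and \(S=R^m_+\) used in Sect. 4, we have \(S^*=R^m_+\), and for \(\lambda =(\lambda _1,\ldots ,\lambda _m)\in R^m_+\) and \(h=(h_1,\ldots ,h_m)\),

$$\begin{aligned} (\lambda h)(x)=\sum _{i=1}^m\lambda _ih_i(x), \quad x\in \text{ dom} h , \end{aligned}$$

so \(h\) is \(R^m_+\)-convex if and only if each component \(h_i\) is convex.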

In this paper, we endow \( X^* \times R\) with the product topology of \(w(X^*,X)\) and the usual Euclidean topology. Now, we give the following important results concerning epigraphs of conjugate functions.

Lemma 2.1

[25, 29, 31] Let \(f_1, f_2: X\rightarrow {R}\cup \{+\infty \}\) be proper convex functions such that \(\text{ dom} f_1 \; \cap \;\text{ dom} f_2 \not =\emptyset .\)

  1. (i)

    If \(f_1\) and \(f_2\) are lower semicontinuous, then,

    $$\begin{aligned} \text{ epi}{ (f_1+f_2)^* }=\text{ cl}(\text{ epi}{ f^*_1 }+\text{ epi}{ f^*_2 }). \end{aligned}$$
  2. (ii)

    If one of \(f_1\) and \(f_2\) is continuous at some \(\bar{x}\in \text{ dom} f_1 \;\cap \;\text{ dom} f_2 \), then,

    $$\begin{aligned} \text{ epi}{ (f_1+f_2)^* }= \text{ epi}{ f^*_1 }+\text{ epi}{ f^*_2 }. \end{aligned}$$

Lemma 2.2

[23] Let \(I\) be an arbitrary index set and let \(f_i\) , \(i \in I \), be proper lower semicontinuous convex functions on \(X\). Suppose that there exists \(x_0\in X\) such that \(\sup _{i\in I}f_i(x_0)<\infty \). Then,

$$\begin{aligned} \text{ epi}(\sup _{i\in I}f_i)^*= \text{ cl}^{w^*} \left(\text{ co}\bigcup _{i\in I}\text{ epi}f_i^*\right), \end{aligned}$$

where \(\sup _{i\in I}f_i:X\rightarrow R\cup \{+\infty \}\) is defined by \((\sup _{i\in I}f_i)(x)=\sup _{i\in I}f_i(x)\) for all \(x\in X\).

3 Robust strong duality: complete characterizations

In this section, we present a robust Farkas lemma for an uncertain cone convex system. We also introduce a robust characteristic cone constraint qualification. By using the robust Farkas lemma and the robust characteristic cone constraint qualification, we completely characterize the robust duality between \(\text{(UP)}\) and \(\text{(DP)}\) by characterizing strong duality between their robust counterpart \(\text{(RUP)}\) and the optimistic counterpart \(\text{(ODP)}\).

Now, we present a robust Farkas lemma, which provides a new generalization of the celebrated Farkas lemma for cone convex systems to uncertain cone convex systems. Related versions of the Farkas lemma for convex systems can be found in [14, 21, 23, 24, 30].

Theorem 3.1

(Robust Farkas Lemma) Let \(f : X\rightarrow {R}\) be a convex function and let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is an \(S\)-convex function. Let \(\mathcal V \subseteq Z\) and let \(A:=\{x\in X:h(x,v)\in - S, \forall v\in \mathcal V \}\not =\emptyset \). Then, the following statements are equivalent:

  1. (i)

    \(h(x,v)\in - S, v\in \mathcal V , x\in X (i.e., x\in A)\Longrightarrow f(x)\ge 0.\)

  2. (ii)

    \((0,0)\in \text{ epi}f^*+\text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right)\).

Proof

By using the method of Lemma \(2.8\) in [28], we can easily get

$$\begin{aligned} \delta _A(x)=\sup _{v\in \mathcal V ,\lambda \in S^* }(\lambda h)(x,v). \end{aligned}$$

By Lemma 2.2, we get

$$\begin{aligned} \text{ epi}\delta ^*_A=\text{ epi}\left(\sup _{ v\in \mathcal V ,\lambda \in S^*}(\lambda h)(\cdot ,v)\right)^*=\text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right). \end{aligned}$$

Thus,

$$\begin{aligned} \text{(ii)}&\Longleftrightarrow&(0,0)\in \text{ epi}f^*+\text{ epi}\delta ^*_A\\&\Longleftrightarrow&(0,0)\in \text{ epi}(f + \delta _A)^*\\&\Longleftrightarrow&(f + \delta _A)(x)\ge 0, \text{ for} \text{ any} x\in X\\&\Longleftrightarrow&f(x)\ge 0, \text{ for} \text{ any} x\in A\\&\Longleftrightarrow&\text{(i)}. \end{aligned}$$

This completes the proof. \(\Box \)

Remark 3.1

A special case of Theorem 3.1, where \(\mathcal V \) is a singleton, can be found in [24, 30]. In this case, the set \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is convex, and so the convex hull in Theorem 3.1 \(\text{(ii)}\) is superfluous. On the other hand, by slightly modifying Example 2.1 in [14], one can see that \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is, in general, not necessarily convex.

Note that for a system of cone convex functions \(h(\cdot ,v), v\in \mathcal V \), it is easy to see that the set

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^* \end{aligned}$$

is a cone, called the robust characteristic cone.
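
To illustrate, consider a single uncertain linear constraint \(h(x,v)=\langle v, x\rangle -\beta \) with \(Y=R\), \(S=R_+\) and \(v\in \mathcal V \subseteq X^*\) (a toy instance of the linear constraints studied in Sect. 4). Since the conjugate of the affine function \(x\mapsto \lambda \langle v, x\rangle -\lambda \beta \) is \(\delta _{\{\lambda v\}}+\lambda \beta \), the robust characteristic cone reduces to

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \ge 0}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^* =\bigcup _{v\in \mathcal V , \lambda \ge 0}\left(\{\lambda v\}\times [\lambda \beta ,+\infty )\right), \end{aligned}$$

which is convex whenever \(\mathcal V \) is convex, but may fail to be convex for a nonconvex uncertainty set.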

Now, we give some characterizations of the robust characteristic cone which will play an important role in characterizing robust duality later in this paper.

Proposition 3.1

Let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is an \(S\)-convex function and let \(\mathcal V \subseteq Z\). Then,

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^* \end{aligned}$$

is a cone.

Proof

Obviously, \((0,0)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*.\) Let \((x^*,r)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) and \(\alpha >0\). Then, there exist \(\bar{v}\in \mathcal V \) and \(\bar{\lambda }\in S^*\) such that \((x^*,r)\in \text{ epi}((\bar{\lambda } h)(\cdot ,\bar{v}))^*\). Hence, \(((\bar{\lambda } h)(\cdot ,\bar{v}))^*(x^*)\le r\), which means that

$$\begin{aligned} \sup _{x\in X}\;\{\langle x^*, x\rangle -(\bar{\lambda } h)(x,\bar{v})\}\le r. \end{aligned}$$

So,

$$\begin{aligned} \sup _{x\in X}\;\{\langle \alpha x^*, x\rangle -(\alpha \bar{\lambda } h)(x,\bar{v})\}\le \alpha r. \end{aligned}$$

Then,

$$\begin{aligned} \alpha (x^*,r)\in \text{ epi}((\alpha \bar{\lambda } h)(\cdot ,\bar{v}))^*\subseteq \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*. \end{aligned}$$

Thus, \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is a cone and the proof is complete. \(\Box \)

Proposition 3.2

Let \(h : X\times Z\rightarrow Y\) be a continuous function such that, for each \(x\in X\), the mapping \((\lambda , v)\mapsto (\lambda h)(x,v)\) is concave on \(S^*\times \mathcal V \), and let \(\mathcal V \subseteq Z\) be a compact convex set. Then,

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^* \end{aligned}$$

is convex.

Proof

Let \((x_i^*,r_i)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\), \(i=1,2,\) and \(t\in (0,1)\). Then, there exist \({v}_i\in \mathcal V \) and \({\lambda }_i\in S^*\) such that \((x_i^*,r_i)\in \text{ epi}((\lambda _i h)(\cdot ,v_i))^*\), \(i=1,2\). So, for any \(x\in X\), we have

$$\begin{aligned} \langle x_1^*, x\rangle -( \lambda _1 h)(x,v_1) \le r_1 \quad \text{ and} \quad \langle x_2^*, x\rangle -( \lambda _2 h)(x,v_2) \le r_2. \end{aligned}$$

Since \((\lambda , v)\mapsto (\lambda h)(x,v)\) is a concave function for any \(x\in X\), we have

$$\begin{aligned}&\langle t x_1^*+(1-t)x_2^*, x\rangle -\left(( t\lambda _1+(1-t)\lambda _2) h\right)(x,tv_1+(1-t)v_2) \\&\quad \le \langle t x_1^*+(1-t)x_2^*, x\rangle -\left( t(\lambda _1 h)(x,v_1)+(1-t)(\lambda _2 h)(x,v_2)\right)\\&\quad \le tr_1+(1-t)r_2. \end{aligned}$$

Then,

$$\begin{aligned} t (x_1^*,r_1)+(1-t) (x_2^*,r_2)&\in\text{ epi}\left((( t\lambda _1+(1-t)\lambda _2) h)(\cdot ,tv_1+(1-t)v_2)\right)^*\\&\subseteq\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*. \end{aligned}$$

Thus, \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is a convex set and the proof is complete. \(\Box \)

Proposition 3.3

Let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is an \(S\)-convex function and let \(\mathcal V \subseteq Z\) be a compact set. Suppose that \(\text{ int} S\not =\emptyset \) and there exists \(x_0\in X\) such that \(h(x_0,v)\in -\text{ int}S, \text{ for} \text{ any} v\in \mathcal V .\) Then,

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^* \end{aligned}$$

is weak\(^*\) closed.

Proof

Let

$$\begin{aligned} (x^*_n, r_n)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^* \end{aligned}$$

with \((x^*_n, r_n)\rightarrow (x^*,r)\). It suffices to show that

$$\begin{aligned} (x^*, r)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*. \end{aligned}$$

As \((x^*_n, r_n)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\), there exist \(v_n\in \mathcal V \) and \(\lambda _n\in S^*\) such that \((x^*_n, r_n)\in \text{ epi}((\lambda _n h)(\cdot ,v_n))^*.\) Then,

$$\begin{aligned} ((\lambda _n h)(\cdot ,v_n))^*(x_n^*)\le r_n. \end{aligned}$$

Since \(\text{ int} S\not =\emptyset \), there exists a weak\(^*\) compact convex base \(B\subseteq S^*\) with \(0\not \in B\) and \(S^*= \text{ cone} B\). Since \(\lambda _n\in S^*\), there exist \(t_n>0\) and \(b_n\in B\) such that \(\lambda _n= t_n b_n.\) Since \(B\) is compact, we may assume, without loss of generality, that \(b_n\rightarrow b\). Moreover, since \(\mathcal V \) is compact, we may assume that \(v_n\rightarrow v\) for some \(v\in \mathcal V .\)

Now, we first show that \(\{t_n\}\) is bounded. Suppose, on the contrary, that \(t_n\rightarrow +\infty .\) As \(\lambda _n= t_n b_n\) and \(((\lambda _n h)(\cdot ,v_n))^*(x_n^*)\le r_n \), we have

$$\begin{aligned} t_n\left(( b_n h)(\cdot ,v_n)\right)^*\left(\frac{x_n^*}{t_n}\right)&= ((t_n b_n h)(\cdot ,v_n))^*(x_n^*)\\&= ((\lambda _n h)(\cdot ,v_n))^*(x_n^*)\\&\le r_n. \end{aligned}$$

It follows that

$$\begin{aligned} \left(( b_n h)(\cdot ,v_n)\right)^*\left(\frac{x_n^*}{t_n}\right) \le \frac{r_n}{t_n}. \end{aligned}$$
(1)

Since \(b_n h\rightarrow bh\), \(\frac{x_n^*}{t_n}\rightarrow 0\) and \(\frac{r_n}{t_n}\rightarrow 0,\) we have \(\left(( b h)(\cdot ,v)\right)^*\left(0\right) \le 0. \) By the definition of the conjugate function, we get

$$\begin{aligned} \inf _{x\in X}\{(bh)(x,v)\}\ge 0, \end{aligned}$$

which means that for any \(x\in X\),

$$\begin{aligned} (bh)(x,v)\ge 0. \end{aligned}$$

However, since \(h(x_0,v)\in -\text{ int}S\) and \(b\not =0,\) we get \((bh)(x_0,v)<0\), which is a contradiction.

Thus, \(\{t_n\}\) is bounded, and we may assume that \(t_n\rightarrow t\). We consider two cases:

  1. (i)

    \(t>0\). As \(h\) is continuous, \(v_n\rightarrow v,\) \(b_n h\rightarrow bh\), \(\frac{x_n^*}{t_n}\rightarrow \frac{x ^*}{t }\) and \(\frac{r_n}{t_n}\rightarrow \frac{r }{t }\), it follows from (1) that

    $$\begin{aligned} \left(( b h)(\cdot ,v )\right)^*\left(\frac{x ^*}{t }\right) \le \frac{r}{t}, \end{aligned}$$

    which means that \(\left( \frac{x ^*}{t }, \frac{r}{t}\right)\in \text{ epi}\left(( b h)(\cdot ,v )\right)^*\) and then

    $$\begin{aligned} \left( x ^*, r\right)\in \text{ epi}\left(( tb h)(\cdot ,v )\right)^*\subseteq \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*. \end{aligned}$$
  2. (ii)

    \(t=0\). Then, \(\lambda _n\rightarrow 0\) and \(\lambda _n h\rightarrow 0.\) It follows from \(((\lambda _n h)(\cdot ,v_n))^*(x_n^*)\le r_n \) that

    $$\begin{aligned} (0(\cdot ,v ))^*(x ^*)\le r. \end{aligned}$$

Moreover, since

$$\begin{aligned} (0(\cdot ,v ))^*(x ^*)= \left\{ \begin{array}{l@{\quad }l} 0,&\text{ if} x^*=0,\\ +\infty ,&\text{ otherwise,} \end{array} \right. \end{aligned}$$

we get \(x^*=0\) and \(r\ge 0\). Thus,

$$\begin{aligned} (0,r)\in \text{ epi}(0(\cdot ,v ))^*\subseteq \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*. \end{aligned}$$

Hence, the cone \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is weak\(^*\) closed and the proof is complete. \(\Box \)

In the following theorem, we show that the convexity and weak\(^*\) closedness of the robust characteristic cone \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) give a complete characterization of robust duality, in the sense that this condition holds if and only if robust strong duality holds between \(\text{(UP)}\) and \(\text{(DP)}\).

Theorem 3.2

(Robust Strong Duality: Complete Characterization) Let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is a differentiable \(S\)-convex function and let \(\mathcal V \subseteq Z\). Then, the following statements are equivalent:

  1. (i)

    \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is convex and weak\(^*\) closed.

  2. (ii)

    For each differentiable convex function \(f: X \rightarrow R \) with \(f\ge 0,\) and differentiable concave function \(g : X \rightarrow R \) with \(g>0\),

    $$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} \\&\quad =\mathop {\max }\limits _{{\mathop {v\in \mathcal V ,\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)-r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0,\ f(x)-rg(x)+(\lambda h)(x,v)\ge 0\right\} . \end{aligned}$$

Proof

\(\text{(i)}\Rightarrow \text{(ii)}\!\!:\) Let \(x_0\in \text{ argmin}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\forall v\in \mathcal V \right\} \). Then,

$$\begin{aligned} h(x,v)\in -S, v\in \mathcal V , x\in X\Longrightarrow \frac{f(x)}{g(x)}\ge \frac{f(x_0)}{g(x_0)}. \end{aligned}$$

For any \(x\in X\), set

$$\begin{aligned} \phi (x):=f(x)-\frac{f(x_0)}{g(x_0)}g(x). \end{aligned}$$

Then,

$$\begin{aligned} h(x,v)\in -S, v\in \mathcal V , x\in X\Longrightarrow \phi (x)\ge \phi (x_0)=0. \end{aligned}$$

By Theorem 3.1, we get

$$\begin{aligned} (0,0)\in \text{ epi}\phi ^*+\text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right). \end{aligned}$$

As \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is convex and weak\(^*\) closed, we get

$$\begin{aligned} (0,0)\in \text{ epi}\phi ^*+ \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*. \end{aligned}$$

Then, there exist \((x_1^*, r_1)\in \text{ epi}\phi ^*\) and \((x^*_2, r_2)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\) such that

$$\begin{aligned} (x^*_1, r_1)+(x^*_2, r_2)=(0,0). \end{aligned}$$

So, there exist \(\bar{v}\in \mathcal V \) and \(\bar{\lambda }\in S^*\) such that \((x^*_2, r_2)\in \text{ epi}\left((\bar{\lambda } h)(\cdot ,\bar{v})\right)^*\) and

$$\begin{aligned} (x^*_1, r_1)+(x^*_2, r_2)=(0,0). \end{aligned}$$

This gives us that for each \(x\in X\),

$$\begin{aligned} -\phi (x)-(\bar{\lambda }h)(x,\bar{v})&= \langle x^*_1,x\rangle -\phi (x)+\langle x^*_2,x\rangle -(\bar{\lambda }h)(x,\bar{v})\\&\le \phi ^*(x_1^*)+((\bar{\lambda } h)(\cdot ,\bar{v}))^*(x_2^*)\\&\le r_1+r_2\\&= 0\\&= -\phi (x_0). \end{aligned}$$

In particular, letting \(x=x_0\), we get \((\bar{\lambda }h)(x_0,\bar{v})\ge 0.\) Moreover, by \(\bar{\lambda }\in S^*\) and \(h(x_0, \bar{v})\in -S\), we get \((\bar{\lambda }h)(x_0,\bar{v})\le 0.\) Then, \((\bar{\lambda }h)(x_0,\bar{v})=0.\) Thus,

$$\begin{aligned} \phi (x)+(\bar{\lambda }h)(x,\bar{v})\ge \phi (x_0)+(\bar{\lambda }h)(x_0,\bar{v}), \end{aligned}$$

which means that \(x_0\) is a global minimizer of the differentiable convex function \(\phi +(\bar{\lambda }h)(\cdot ,\bar{v})\). By [33, Corollary 25.5.1], this in turn gives

$$\begin{aligned} \nabla \phi (x_0)+\nabla _x(\bar{\lambda } h)(x_0,\bar{v})=0. \end{aligned}$$

Then,

$$\begin{aligned} \nabla f(x_0)-\frac{f(x_0)}{g(x_0)}\nabla g(x_0)+\nabla _x(\bar{\lambda } h)(x_0,\bar{v})=0. \end{aligned}$$

Let

$$\begin{aligned} \varphi :=f-\frac{f(x_0)}{g(x_0)}g+(\bar{\lambda } h)(\cdot ,\bar{v}). \end{aligned}$$

Since \(f\) is convex with \(f\ge 0\), \(g\) is concave with \(g>0\), and \(h(\cdot ,v)\) is \(S\)-convex, we get \(\frac{f(x_0)}{g(x_0)}\ge 0\), and hence \(\varphi \) is a convex function. Together with \(\varphi (x_0)=0\) and \(\nabla \varphi (x_0)=0\), this gives

$$\begin{aligned} f(x)-\frac{f(x_0)}{g(x_0)}g (x)+(\bar{\lambda } h)(x,\bar{v})=\varphi (x)\ge \varphi (x_0)=0, \end{aligned}$$

for any \(x\in X\). Thus,

$$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} \\&\le \mathop {\max }\limits _{{\mathop {v\in \mathcal V ,\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)- r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0,\ f(x)-rg(x)+(\lambda h)(x,v)\ge 0\right\} . \end{aligned}$$

Now, we prove that

$$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} \\&\ge \mathop {\max }\limits _{{\mathop {v\in \mathcal V ,\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)- r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0,\ f(x)-rg(x)+(\lambda h)(x,v)\ge 0\right\} . \end{aligned}$$

In fact, let \(\psi :=f-rg+(\lambda h)(\cdot ,v)\) for any \(v\in \mathcal V \), \(\lambda \in S^*\) and \(r\in R_+\). It is easy to see that \(\psi \) is a convex function. Then, for any \(v\in \mathcal V \), \(\lambda \in S^*\) and \((x,r)\in X\times R_+\) with

$$\begin{aligned} \nabla f(x)- r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0 \end{aligned}$$

and

$$\begin{aligned} f(x)-rg(x)+(\lambda h)(x,v)\ge 0, \end{aligned}$$

we see that \(x\) is a minimizer of \(\psi \). Thus,

$$\begin{aligned} 0\le \psi (x)&\le \psi (x_0)\\&= f(x_0)-rg(x_0)+(\lambda h)(x_0,v)\\&\le f(x_0)-rg(x_0), \end{aligned}$$

which implies that

$$\begin{aligned} r\le \frac{f(x_0)}{g(x_0)}. \end{aligned}$$

\(\text{(ii)}\Rightarrow \text{(i)}\!\!:\) Let

$$\begin{aligned} (x^*, r)\in \text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right). \end{aligned}$$

For any \(v\in \mathcal V \) and \(\lambda \in S^*\), we have \((\lambda h)(\cdot ,v) \le \delta _{A},\) and so,

$$\begin{aligned} \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\subseteq \text{ epi}\delta ^*_{A}. \end{aligned}$$

Moreover, since \(\text{ epi}\delta ^*_{A}\) is weak\(^*\) closed and convex, we have

$$\begin{aligned} \text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right)\subseteq \text{ epi}\delta ^*_{A}. \end{aligned}$$

Hence, \((x^*, r)\in \text{ epi}\delta ^*_{A}, \) and so \(\langle x^*,x\rangle \le r\) for any \(x\in A.\) Let \(f(x)=-\langle x^*,x\rangle +r\) and \(g(x)=1\). By \(\text{(ii)}\), there exist \(\bar{v}\in \mathcal V \), \(\bar{\lambda }\in S^*\) and \(\bar{x}\in X\) such that

$$\begin{aligned} -\langle x^*,\bar{x}\rangle +r+(\bar{\lambda }h)(\bar{x},\bar{v})\ge 0 \quad \text{ and} \quad -x^*+\nabla _x(\bar{\lambda } h)(\bar{x},\bar{v})=0. \end{aligned}$$

Set \(\eta (x):=-\langle x^*,x\rangle +r+(\bar{\lambda } h)(x,\bar{v})\). Obviously, \(\eta \) is a convex function. This, together with \(\eta (\bar{x})\ge 0\) and \(\nabla \eta (\bar{x})=0\), gives \(\eta (x)\ge 0\) for any \(x\in X\), and hence,

$$\begin{aligned} (0,0)\in \text{ epi}\eta ^*&= \left(\{-x^*\}\times [-r,+\infty )\right)+\text{ epi}\left((\bar{\lambda } h)(\cdot ,\bar{v})\right)^*\\&= (-x^*,-r)+\text{ epi}\left((\bar{\lambda } h)(\cdot ,\bar{v})\right)^*, \end{aligned}$$

which means that

$$\begin{aligned} (x^*,r)\in \bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*. \end{aligned}$$

Then,

$$\begin{aligned} \text{ cl}^{w^*}\left(\text{ co}\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*\right)=\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}\left((\lambda h)(\cdot ,v)\right)^*. \end{aligned}$$

Then, \(\text{(i)}\) holds and the proof is complete. \(\Box \)

In the special case when \(\mathcal V \) is a singleton, we obtain the following Wolfe duality established in [13].

Corollary 3.1

Let \(h : X\rightarrow Y\) be a differentiable \(S\)-convex function. Then, the following statements are equivalent:

  1. (i)

    \(\bigcup _{\lambda \in S^*}\text{ epi}(\lambda h)^*\) is weak\(^*\) closed.

  2. (ii)

    For each differentiable convex function \(f: X \rightarrow R \) with \(f\ge 0,\) and differentiable concave function \(g : X \rightarrow R \) with \(g>0\),

    $$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x)\in -S\right\} \\&\quad =\mathop {\max }\limits _{{\mathop {\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)-r\nabla g(x)+\nabla (\lambda h)(x)=0,\ f(x)-rg(x)+(\lambda h)(x)\ge 0\right\} . \end{aligned}$$

Proof

By using Lemma 6.1 in [22], we see that \(\bigcup _{ \lambda \in S^*}\text{ epi} (\lambda h)^*\) is convex. Then, the conclusion follows from Theorem 3.2 by letting \(\mathcal V =\{v\}\). \(\Box \)

The following theorem shows that the non-negativity of \(f\) can be dropped whenever \(g\) is an affine function.

Theorem 3.3

Let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is a differentiable \(S\)-convex function and let \(\mathcal V \subseteq Z\). Then, the following statements are equivalent:

  1. (i)

    \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is convex and weak\(^*\) closed.

  2. (ii)

    For each differentiable convex function \(f: X \rightarrow R \), and affine function \(g : X \rightarrow R \) such that \(g\) is positive over the feasible set,

    $$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} \\&\quad =\mathop {\max }\limits _{{\mathop {v\in \mathcal V ,\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)-r\nabla g(x)+\nabla _x (\lambda h)(x,v)=0,\ f(x)-rg(x)+(\lambda h)(x,v)\ge 0\right\} . \end{aligned}$$

Proof

The proof follows the same line of arguments as the proof of Theorem 3.2, except that, in the case of an affine \(g\), the term \(-\frac{f(x_0)}{g(x_0)}g\) is affine, and so the function \(\varphi :=f-\frac{f(x_0)}{g(x_0)}g+(\bar{\lambda } h)(\cdot ,\bar{v})\) is convex without the non-negativity of \(f\). \(\Box \)

Similarly, we can get the following result.

Corollary 3.2

Let \(h : X\rightarrow Y\) be a differentiable \(S\)-convex function. Then, the following statements are equivalent:

  1. (i)

    \(\bigcup _{\lambda \in S^*}\text{ epi}(\lambda h)^*\) is weak\(^*\) closed.

  2. (ii)

    For each differentiable convex function \(f: X \rightarrow R \), and affine function \(g : X \rightarrow R \) such that \(g\) is positive over the feasible set,

    $$\begin{aligned}&\min _{x\in X}\left\{ \frac{f(x)}{g(x)}: h(x)\in -S\right\} \\&\quad =\mathop {\max }\limits _{{\mathop {\lambda \in S^*,}\limits _{(x,r)\in X\times R_+}}}\left\{ r: \nabla f(x)-r\nabla g(x)+\nabla (\lambda h)(x)=0,\ f(x)-rg(x)+(\lambda h)(x)\ge 0\right\} . \end{aligned}$$

As a direct corollary of Theorem 3.3, we obtain robust duality for convex programming problems under uncertainty. Related results can be found in [14, 16, 20].

Theorem 3.4

Let \(h : X\times Z\rightarrow Y\) be a continuous function such that for any \(v\in Z\), \(h(\cdot ,v)\) is a differentiable \(S\)-convex function and let \(\mathcal V \subseteq Z\). Then, the following statements are equivalent:

  1. (i)

    \(\bigcup _{v\in \mathcal V , \lambda \in S^*}\text{ epi}((\lambda h)(\cdot ,v))^*\) is convex and weak\(^*\) closed.

  2. (ii)

    For each differentiable convex function \(f: X \rightarrow R \),

    $$\begin{aligned}&\min _{x\in X}\left\{ {f(x)} : h(x,v)\in -S,\quad \forall v\in \mathcal V \right\} \\&\quad =\mathop {\max }\limits _{{\mathop {v\in \mathcal V ,\lambda \in S^*,}\limits _{x\in X}}}\left\{ f(x)+(\lambda h)(x,v): \nabla f(x)+\nabla _x (\lambda h)(x,v)=0\right\} . \end{aligned}$$

Proof

Let \(g(x) = 1\) for each \(x\in X\). Then, the conclusion follows from Theorem 3.3. \(\Box \)

4 Tractable linear fractional programming under uncertainty

In this section, we focus on some classes of uncertain fractional programming problems. We show that the robust counterpart of linear fractional programming under componentwise scenario uncertainty or interval uncertainty is computationally tractable, by establishing that robust strong duality always holds and that the optimistic counterpart of its Wolfe dual is a linear programming problem.

4.1 Componentwise scenario uncertainty

Let \(Y=R^m\) and \(S^*= R^m_+\). Consider the following linear fractional programming problem under componentwise scenario uncertainty:

$$\begin{aligned} \min _{x\in X}\frac{\langle a, x\rangle +\alpha }{\langle b, x\rangle +\beta }\quad \text{ s.t.}\quad \langle a_i, x\rangle \le \beta _i,i=1,\ldots ,m, \end{aligned}$$
(ULP)

where \(a,b\in X^*\) and \(\alpha ,\beta \in R\); the data \((a_i,\beta _i)\in X^*\times R\), \(i=1,\ldots ,m,\) are uncertain, and each \((a_i,\beta _i)\) belongs to the scenario data uncertainty set \(\mathcal V _i\subseteq X^*\times R\), which is defined by

$$\begin{aligned} \mathcal V _i:=\left\{ \left(a_i^{(0)},\beta _i^{(0)}\right)+\sum ^{k}_{l=1}w_i^{(l)}\left(a_i^{(l)}, \beta _i^{(l)}\right):\left(w_i^{(1)},\ldots ,w_i^{(k)} \right)\in Z_i\right\} \end{aligned}$$

with \(Z_i:=\text{ co}\{v_{i1},\ldots ,v_{iq}\}\) for some \(v_{i1},\ldots ,v_{iq}\in R^k.\) We assume that \(\langle b, x\rangle +\beta >0\) for all feasible points \(x\in X\). The robust counterpart of \((\text{ ULP})\) is

$$\begin{aligned} \min _{x\in X}\frac{\langle a, x\rangle +\alpha }{\langle b, x\rangle +\beta }\quad \text{ s.t.}\quad \langle a_i, x\rangle \le \beta _i, \quad \forall (a_i,\beta _i)\in \mathcal V _i, i=1,\ldots ,m. \end{aligned}$$
(RULP)

We assume that the feasible set of (RULP) is always nonempty. The optimistic counterpart of its Wolfe dual is given by

$$\begin{aligned}&\max _{ \lambda _i\in R,\;r\in R_+} r\\ s.t.\quad&a-rb+\sum _{i=1}^m\lambda _ia_i=0,\lambda _i\ge 0,\\&\alpha -r\beta -\sum _{i=1}^m\lambda _i\beta _i\ge 0, (a_i,\beta _i)\in \mathcal V _i,i=1,\ldots ,m. \end{aligned}$$
(ODLP)

Note that this dual problem can be rewritten as

$$\begin{aligned}&\max _{ \lambda _i\in R,\;r\in R_+} r\\ s.t.\,&a-rb+\sum _{i=1}^m\lambda _ia_i=0, \quad \alpha -r\beta -\sum _{i=1}^m\lambda _i\beta _i\ge 0, \\&a_i= a_i^{(0)}+\sum ^{k}_{l=1}\sum ^{q}_{j=1}\mu _{ij}v_{ij}^{(l)} a_i^{(l)}, \beta _i=\beta _i^{(0)} +\sum ^{k}_{l=1}\sum ^{q}_{j=1}\mu _{ij}v_{ij}^{(l)} \beta _i^{(l)},\\&\lambda _i\ge 0, i=1,\ldots ,m, \sum ^{q}_{j=1}\mu _{ij}=1 \text{ and} \mu _{ij}\ge 0, j=1,\ldots ,q, \end{aligned}$$

where \((a_i^{(l)}, \beta _i^{(l)})\in X^*\times R\), \(i=1,\ldots ,m,\) \(l=0,1,\ldots ,k,\) and \(\left(v_{ij}^{(1)},\ldots ,v_{ij}^{(k)}\right)\in R^k\), \(i=1,\ldots ,m,\) \(j=1,\ldots ,q.\) Let \(\lambda _{i0}=\lambda _i\) and \(\lambda _{ij}=\lambda _i\mu _{ij}\). Then, this problem can be simplified to

$$\begin{aligned}&\max _{ \lambda _{ij}\in R,\;r\in R_+} r\\ s.t.\,&a-rb+\sum _{i=1}^m\left(\lambda _{i0}a_i^{(0)}+\sum ^{k}_{l=1}\sum ^{q}_{j=1}\lambda _{ij}v_{ij}^{(l)} a_i^{(l)}\right)=0,\\&\alpha -r\beta -\sum _{i=1}^m\left(\lambda _{i0}\beta _i^{(0)}+\sum ^{k}_{l=1}\sum ^{q}_{j=1}\lambda _{ij}v_{ij}^{(l)} \beta _i^{(l)}\right)\ge 0, \\&\lambda _{ij}\ge 0, i=1,\ldots ,m, j=0,1,\ldots ,q, \end{aligned}$$

which is a simple inequality constrained linear programming problem.
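
To illustrate the computational tractability, here is a minimal numerical sketch (not part of the paper) under the additional assumption \(X=R^n\): it assembles the simplified linear program above and solves it with SciPy. The helper name `solve_odlp` and the data layout are hypothetical, chosen only for illustration.

```python
# A minimal sketch, assuming X = R^n; the function and data layout are illustrative only.
import numpy as np
from scipy.optimize import linprog


def solve_odlp(a, b, alpha, beta, a_data, beta_data, v_data):
    """Decision vector y = (r, lambda_{10}, lambda_{11}, ..., lambda_{mq}), all >= 0.

    a, b            : arrays of shape (n,); alpha, beta: scalars.
    a_data[i][l]    : a_i^{(l)} in R^n for l = 0, ..., k.
    beta_data[i][l] : beta_i^{(l)} for l = 0, ..., k.
    v_data[i][j]    : scenario vertex (v_{ij}^{(1)}, ..., v_{ij}^{(k)}) in R^k.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    n, m = a.size, len(a_data)
    q, k = len(v_data[0]), len(v_data[0][0])
    nvar = 1 + m * (1 + q)

    # Equality block: -r*b + sum_i (lam_{i0} a_i^{(0)}
    #                 + sum_j lam_{ij} sum_l v_{ij}^{(l)} a_i^{(l)}) = -a.
    A_eq = np.zeros((n, nvar))
    A_eq[:, 0] = -b
    # Single inequality, written as "<=":  r*beta + sum_i (...) <= alpha.
    A_ub = np.zeros((1, nvar))
    A_ub[0, 0] = beta

    col = 1
    for i in range(m):
        A_eq[:, col] = np.asarray(a_data[i][0], dtype=float)
        A_ub[0, col] = beta_data[i][0]
        for j in range(q):
            coeff_a = sum(v_data[i][j][l] * np.asarray(a_data[i][l + 1], dtype=float)
                          for l in range(k))
            coeff_b = sum(v_data[i][j][l] * beta_data[i][l + 1] for l in range(k))
            A_eq[:, col + 1 + j] = coeff_a
            A_ub[0, col + 1 + j] = coeff_b
        col += 1 + q

    c = np.zeros(nvar)
    c[0] = -1.0  # maximize r  <=>  minimize -r
    return linprog(c, A_ub=A_ub, b_ub=np.array([alpha], dtype=float),
                   A_eq=A_eq, b_eq=-a, bounds=[(0.0, None)] * nvar)
```

Since the solver minimizes \(-r\), the dual best value is recovered as the negative of the returned objective value.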

In the following theorem, we show that the primal worst value equals the dual best value for uncertain linear fractional programming under componentwise scenario uncertainty, without any constraint qualification.

Theorem 4.1

Strong duality between \((\text{ RULP})\) and \((\text{ ODLP})\) holds.

Proof

Let \(v_i=(a_i,\beta _i)\in X^*\times R\) and \(h_i:X\times X^* \times R\rightarrow R\) be defined by \(h_i(x, v_i)=\langle a_i, x\rangle -\beta _i\), for any \(i=1,\ldots ,m.\) Moreover, let \(\mathcal V =\Pi _{i=1}^{m}\mathcal V _i\subseteq \Pi _{i=1}^{m}(X^*\times R):=Z\) and let \(h:X\times Z\rightarrow R^m\) be defined by \(h(x,v)=(h_1(x, v_1),\ldots ,h_m(x, v_m))\). Obviously, \(h \) is a continuous function and, for any \(v\in Z\), \(h(\cdot ,v)\) is a differentiable \(S\)-convex function. Then, by Theorem 3.3, we only need to prove that \(\bigcup _{v_i\in \mathcal V _i, \lambda _i\ge 0}\text{ epi}\left(\sum _{i=1}^m\lambda _i h_i(\cdot ,v_i)\right)^*\) is convex and weak\(^*\) closed. By using the same method as in [16, Theorem 4.1], we can show that \(\bigcup _{v_i\in \mathcal V _i, \lambda _i\ge 0}\text{ epi}\left(\sum _{i=1}^m\lambda _i h_i(\cdot ,v_i)\right)^*\) is indeed convex and weak\(^*\) closed. This completes the proof. \(\Box \)

4.2 Interval uncertainty

Let \(Y=R^m\) and \(S^*= R^m_+\). Consider the following linear fractional programming problem under interval uncertainty:

$$\begin{aligned} \min _{x\in X}\frac{\langle a, x\rangle +\alpha }{\langle b, x\rangle +\beta }\quad \text{ s.t.}\quad \langle v_i, x\rangle \le \beta _i,\quad i=1,\ldots ,m, \end{aligned}$$
(ULP\(_1\))

where \(a,b\in X^*\) and \(\alpha ,\beta , \beta _i\in R\); the data \(v_i \in X^*\), \(i=1,\ldots ,m,\) are uncertain, and each \(v_i\) belongs to the interval data uncertainty set \([\underline{v}_i,\bar{v}_i]\subseteq X^*\), where \(\underline{v}_i,\bar{v}_i\in X^*\) with \(\underline{v}_i\le \bar{v}_i\). We also assume that \(\langle b, x\rangle +\beta >0\) for all feasible points \(x\in X\). The robust counterpart of \((\text{ ULP}_1)\) is

$$\begin{aligned} \min _{x\in X}\frac{\langle a, x\rangle +\alpha }{\langle b, x\rangle +\beta }\quad \text{ s.t.}\quad \langle v_i, x\rangle \le \beta _i, \quad \forall v_i\in [\underline{v}_i,\bar{v}_i],\ i=1,\ldots ,m. \end{aligned}$$
(RULP\(_1\))

We assume that the feasible set of \((\text{ RULP}_1)\) is always nonempty. The optimistic counterpart of its Wolfe dual is defined by

$$\begin{aligned}&\max _{ \lambda _i\in R,\;r\in R_+} r\\ s.t.\quad&a-rb+\sum _{i=1}^m\lambda _iv_i=0,\quad \lambda _i\ge 0,\\&\alpha -r\beta -\sum _{i=1}^m\lambda _i\beta _i\ge 0, \quad v_i\in [\underline{v}_i,\bar{v}_i],\ i=1,\ldots ,m. \end{aligned}$$
(ODLP\(_1\))

Note that this dual problem can be rewritten as the following linear programming problem:

$$\begin{aligned}&\max _{ \lambda _i\in R,\;z_i\in X^*,\;r\in R_+} r\\ s.t.\quad&a-rb+\sum _{i=1}^m z_i=0, \quad \alpha -r\beta -\sum _{i=1}^m\lambda _i\beta _i\ge 0,\\&\lambda _i\underline{v}_i\le z_i\le \lambda _i\bar{v}_i,\quad \lambda _i\ge 0,\ i=1,\ldots ,m. \end{aligned}$$

In the following theorem, we show that the primal worst value equals the dual best value for uncertain linear fractional programming under interval uncertainty, without any constraint qualification.

Theorem 4.2

Strong duality between \((\text{ RULP}_1)\) and \((\text{ ODLP}_1)\) holds.

Proof

Let \(\mathcal V _{i}:=[\underline{v}_i,\bar{v}_i]\), \(i=1,\ldots ,m\), \(f(x)=\langle a, x\rangle +\alpha \), \(g(x)=\langle b, x\rangle +\beta \), and let \(h_{i}:X\times X^* \rightarrow R\) be defined by \(h_{i}(x, v_i)=\langle v_i, x\rangle -\beta _i\), for any \(i=1,\ldots ,m.\) Moreover, let \(\mathcal V =\Pi _{i=1}^{m}\mathcal V _i\subseteq \Pi _{i=1}^{m}X^*:=Z\) and let \(h:X\times Z\rightarrow R^m\) be defined by \(h(x,v)=(h_1(x, v_1),\ldots ,h_m(x, v_m))\). Obviously, \(h \) is a continuous function and, for any \(v\in Z\), \(h(\cdot ,v)\) is a differentiable \(S\)-convex function. Then, by Theorem 3.3, we only need to prove that \(\bigcup _{v_i\in \mathcal V _i, \lambda _i\ge 0}\text{ epi}\left(\sum _{i=1}^m\lambda _i h_i(\cdot ,v_i)\right)^*\) is convex and weak\(^*\) closed. By using methods similar to those of [15, Theorem 4.1 and Corollary 4.2], we can show that \(\bigcup _{v_i\in \mathcal V _i, \lambda _i\ge 0}\text{ epi}\left(\sum _{i=1}^m\lambda _i h_i(\cdot ,v_i)\right)^*\) is indeed convex and weak\(^*\) closed. This completes the proof. \(\Box \)

5 Conclusions

In this paper, we present a robust Farkas lemma for an uncertain cone convex system. We also introduce a robust characteristic cone constraint qualification. By using the robust Farkas lemma and the robust characteristic cone constraint qualification, we characterize robust strong duality for an uncertain fractional programming problem and its uncertain Wolfe dual problem by showing strong duality between the deterministic counterparts: the robust counterpart of the primal model and the optimistic counterpart of its dual problem. Moreover, we show that the robust counterpart of linear fractional programming under componentwise scenario uncertainty or interval uncertainty is computationally tractable, by establishing that robust strong duality always holds and that the optimistic counterpart of its Wolfe dual is a linear programming problem.

Our study raises some questions for further research. We have provided some special classes of uncertain linear fractional programming problems for which strong duality holds and the optimistic counterparts are computationally tractable mathematical programs. It would be interesting to examine this tractability for other specially structured fractional programming problems with broad classes of uncertainty sets. On the other hand, since second-order cone programming and semidefinite programming are two important classes of conic programming problems that have attracted a great deal of attention in the past decades, it would also be interesting to specialize our results to the second-order cone and semidefinite cone cases. These may be the topics of some of our forthcoming papers.