1 Introduction

A linear fractional programming problem is a special case of a fractional programming problem in which the objective function is a ratio of two linear functions and the constraints are linear. Linear fractional programming has attracted researchers for many years [2, 4, 15,16,17, 22, 28,29,30]. In particular, many results in this field are collected in the two well-known books by Erik B. Bazalinov [3] and I.M. Stancu-Minasian [26]. We also refer the readers to [27] for recent developments and applications of this field.

Numerous optimization problems, with applications in the physical sciences, in the mechanical stress of materials, in statistical design, etc., are described as semi-infinite problems [13]. Linear semi-infinite programming is an extension of linear programming in which the problems under consideration have infinite-dimensional horizon spaces or an infinite number of constraints [9,10,11, 14, 23]. Many results on this topic have appeared in the mathematical literature; for details, we refer the readers to the book of M.A. Goberna and M.A. López [12] and the book of Edward J. Anderson and Peter Nash [1]. It is also worth noting that several fundamental properties of finite-dimensional linear programming may fail when extended to infinite-dimensional spaces. Such challenges, concerning, e.g., basic feasible solutions and duality theory, are discussed in [19]. These challenges remain unresolved in the context of linear semi-infinite fractional programming.

To the best of our knowledge, although semi-infinite fractional programming has attracted many researchers over the years, few articles have dealt with linear semi-infinite fractional programming [31]. Our aim in this paper is to study a solution method for the following linear countable semi-infinite fractional programming problem:

$$\begin{aligned} \begin{array}{lll} \mathrm{(P)}&{}\mathrm{Max}&{}\displaystyle {\frac{c^Tx+c_0}{d^Tx+d_0}}\\ &{}\mathrm{s.t.}&{} a_i^Tx \le b_i,\ i \in \mathbb {N}:=\{1,2, \ldots \},\\ &{}&{} x \ge 0, \end{array} \end{aligned}$$

where \(c, d, a_i, i \in \mathbb {N},\) are given column vectors in \(\mathbb {R}^n\); \( c_0, d_0, b_i, i\in \mathbb {N},\) are given real numbers; and x stands for the unknown variable in \(\mathbb {R}^n\).

To solve a linear fractional programming problem with finitely many constraints, one can use the simplex method for fractional programming. Another approach is to transform the problem into a linear one by a linearization method, e.g., the Charnes-Cooper or Dinkelbach method [5, 8]. Optimal solutions of linear fractional programming problems can then be obtained by finding optimal solutions of the corresponding linear problems. Furthermore, every mathematical programming problem has an associated dual problem, and the relationships between a primal-dual pair help to investigate the existence of optimal solutions of both problems. It is therefore useful to study dual methods for solving linear fractional programming [7, 18, 20, 24, 25].
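For completeness, we recall the Charnes-Cooper transformation in the finite setting: assuming \(d^Tx+d_0>0\) on the feasible set, the substitution \(t:=1/(d^Tx+d_0)\), \(y:=tx\) turns \(\mathrm{Max}\{(c^Tx+c_0)/(d^Tx+d_0) \mid Ax\le b,\ x \ge 0\}\) into the linear problem

$$\begin{aligned} \begin{array}{lll} &{}\mathrm{Max}&{} c^Ty+c_0t\\ &{}\mathrm{s.t.}&{} Ay-bt \le 0,\\ &{}&{} d^Ty+d_0t=1,\\ &{}&{} y \ge 0,\ t \ge 0, \end{array} \end{aligned}$$

and an optimal solution of the fractional problem is recovered as \(x=y/t\) whenever \(t>0\) at the optimum.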

For the problem (P), applying the Charnes-Cooper or Dinkelbach transformation turns (P) into a linear semi-infinite problem. This implies that optimal solutions of (P) can be found by solving the corresponding linear semi-infinite problem. At this point, if we formulate the dual of the linearized problem, we obtain a linear problem. Unfortunately, this can be inconvenient because the fractional objective function disappears: its role in the dual problem becomes unclear, or is lost completely. Consequently, such dual linear problems have very few connections with the primal problems. We note that various optimization problems in economics are formulated as minimizations of ratios of two functions, such as cost/time, cost/volume, or cost/profit (see [26, p. 9]). When solving such problems by numerical methods, it is better if we can observe the variation in the values of the numerators and denominators.

From a structural standpoint, one naturally wants the dual problem of a linear fractional problem to also be a linear fractional problem. The same is desired in semi-infinite linear fractional programming. One of the dual schemes satisfying this requirement is given by Seshan [18] and is presented in [26]. In this dual scheme, the objective functions of the primal and the dual problem are the same, and the constraints of the dual problem can be obtained from the primal problem by linear transformations. A detailed discussion of this scheme is given in [25]. The scheme was later refined by T.Q. Son, together with a scheme for finding optimal solutions of the primal problem via its dual problem [24].

Motivated by this observation and based on the dual scheme proposed in [24], we propose a dual scheme for a linear countable semi-infinite fractional problem such that its dual problem is also a linear countable semi-infinite fractional problem. Developing the method introduced in [24], we propose a way to find optimal solutions of linear semi-infinite fractional problems based on sequences of approximate problems. Concretely, a maximizing sequence for (P) is built up, where each element of the sequence is an optimal solution of a corresponding subproblem, and each such optimal solution can be found by the dual scheme introduced in [24]. To the best of our knowledge, this scheme for solving linear semi-infinite fractional programming problems is new. By this approach, we can also build up a minimizing sequence for the dual problem of (P). Strong duality theorems between (P) and its dual problem are then established.

We now describe the contents of the next sections. Section 2 is devoted to notations and basic results. Section 3 presents an approach to solve the problem (P) via maximizing sequences of (P). Section 4 focuses on the optimizing sequences, duality theorems and a dual scheme for solving linear semi-infinite fractional problems. Some examples are given in this section. The last part is devoted to the appendix.

2 Notations and preliminaries

2.1 Notations

An infinite matrix, denoted by \(\mathcal {A}\), is a doubly infinite array of real or complex numbers. It is defined by

$$\begin{aligned} \mathcal {A}:=(a_{ij}), \quad i,j=1,2,\ldots . \end{aligned}$$

Let \(\mathcal {M}\) be the set of all infinite matrices. For \(\mathcal {A}, \mathcal {B} \in \mathcal {M}\) and \(\lambda \in \mathbb {R}\), the following operations can be found in the book of Richard G. Cooke [6]:

  • \(\mathcal {A}+ \mathcal {B}:=(a_{ij}+b_{ij}),\ i,j=1,2,\ldots ;\)

  • \(\lambda \mathcal {A}:=(\lambda a_{ij}),\ i,j=1,2,\ldots ;\)

  • \(\mathcal {A} \mathcal {B}:=(c_{ij}),\; c_{ij}:=\sum _{k=1}^\infty a_{ik}b_{kj},\ i,j=1,2,\ldots ,\) provided each sum \(c_{ij}\) exists;

  • \(\mathcal {A}^T\) is the transpose of \(\mathcal {A}\);

  • \((\mathcal {A} \mathcal {B})^T=\mathcal {B}^T \mathcal {A}^T.\)

The infinite matrix \(\mathcal {A}\) is said to be a row-finite matrix if each row contains only a finite number of non-zero elements, and is said to be a column-finite matrix if every column contains only a finite number of non-zero elements.

In this paper, the notation \(\mathcal {A}\) stands for a row-finite matrix, i.e.,

$$\begin{aligned} \mathcal {A}:=\left( \begin{array}{cccc} a_{11}&{}a_{12}&{}\ldots &{}a_{1n}\\ a_{21}&{}a_{22}&{}\ldots &{}a_{2n}\\ \vdots &{} \vdots &{}\vdots &{}\vdots \end{array} \right) , \end{aligned}$$

and the notation \(\mathcal {A}^T\) stands for the transpose of \(\mathcal {A}\).
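For computational purposes, a row-finite matrix admits a finite representation. The following minimal sketch is our own illustrative convention (not from [6]): each row is stored as a finitely supported map, so each entry of a matrix-vector product \(\mathcal {A}x\) reduces to a finite sum.

```python
# A minimal sketch (our own convention, not from [6]) of a row-finite
# infinite matrix: row i is a finitely supported map {j: a_ij}, so every
# entry of Ax is a finite sum and no convergence questions arise.
from typing import Callable, Dict

Row = Dict[int, float]                    # finitely many nonzero entries
RowFiniteMatrix = Callable[[int], Row]    # i -> i-th row

def matvec_entry(A: RowFiniteMatrix, x: Dict[int, float], i: int) -> float:
    """(Ax)_i = sum_j a_ij x_j, a finite sum since row i is finite."""
    return sum(a * x.get(j, 0.0) for j, a in A(i).items())

# Illustration: the rows a_i = -(i+1) of the one-dimensional constraint
# matrix appearing in Example 1 of Sect. 4 (hypothetical encoding, j from 0).
A = lambda i: {0: -(i + 1.0)}
print(matvec_entry(A, {0: 0.9}, i=3))     # -(3+1) * 0.9 = -3.6
```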

We need the following notations:

$$\begin{aligned} \begin{array}{ll} \mathbb {R}^n=\{(x_1, x_2, \ldots , x_n)\mid x_i \in \mathbb {R}, i=\overline{1,n} \}, &{}\mathbb {R}^{\mathbb {N}}:=\{(x_1, x_2, \ldots ) \mid x_i \in \mathbb {R}, i=1,2,\ldots \},\\ \mathbb {R}^n_+=\{(x_1, x_2, \ldots , x_n)\mid x_i \ge 0, i=\overline{1,n} \},&{} \mathbb {R}^{\mathbb {N}}_+:=\{(x_1, x_2, \ldots ) \mid x_i \ge 0, i=1,2,\ldots \}, \end{array} \end{aligned}$$

\(\mathbb {R}^{[\mathbb {N}]}\) stands for the set of all sequences with only a finite number of nonzero terms;

\(\mathbb {R}_+^{[\mathbb {N}]}\) is the set of all nonnegative sequences in \(\mathbb {R}^{[\mathbb {N}]}\). Obviously, it is a convex cone in \(\mathbb {R}^{[\mathbb {N}]}\).

Let us denote

$$\begin{aligned} b:=\left( \begin{array}{l} b_1\\ b_2\\ \vdots \\ \end{array}\right) , \quad a_i:=\left( \begin{array}{l} a_{i1}\\ a_{i2}\\ \vdots \\ a_{in}\\ \end{array}\right) , \quad i=1,2,\ldots . \end{aligned}$$

The problem (P) can be written in the form

$$\begin{aligned} \begin{array}{lll} \mathrm{(P)}&{}\mathrm{Max}&{}F(x)=\displaystyle {\frac{c^Tx+c_0}{d^Tx+d_0}}\\ &{}\mathrm{s.t.}&{} \mathcal {A}x\le b,\\ &{}&{} x \in \mathbb {R}^n_+, \end{array} \end{aligned}$$
(1)

where \(\mathcal {A}\) is a row-finite matrix. The optimal value of (P) is denoted by \(z_P\) and its feasible set is denoted by X,

$$\begin{aligned} X = \{ x \in \mathbb {R}^n_+ \mid \mathcal {A}x \le b \}. \end{aligned}$$

Throughout the paper, we assume that \(D(x):=d^Tx+d_0 >0\) for all \(x \in X\), the feasible set X is bounded, the optimal value of (P) is finite, and the function F does not reduce to a constant on X.

3 Solving linear semi-infinite fractional problems

In this part, inspired by the method for solving linear semi-infinite problems introduced in [21] (see Theorem 7.3.1), we find the optimal value of (P) by using a maximizing sequence. Recall that a sequence \(\{x_k\}\subset \mathbb {R}^n\) is said to be a maximizing sequence for (P) if

$$\begin{aligned} \lim _{k\rightarrow \infty } F(x_k)=z_P. \end{aligned}$$

We need the following notations:

$$\begin{aligned} \mathcal {A}^{(k)}:=\left( \begin{array}{cccc} a_{11}&{}a_{12}&{}\ldots &{}a_{1n}\\ a_{21}&{}a_{22}&{}\ldots &{}a_{2n}\\ \vdots &{} \vdots &{}\vdots &{}\vdots \\ a_{k1}&{}a_{k2}&{}\ldots &{}a_{kn} \end{array} \right) , \quad b^{(k)}:=\left( \begin{array}{c} b_1\\ b_2\\ \vdots \\ b_k \end{array} \right) . \end{aligned}$$

For each \(k\in \mathbb {N}\), let us consider the following problem:

$$\begin{aligned} \begin{array}{lll} \mathrm{(P_k)}&{}\mathrm{Max}&{}F(x)=\displaystyle {\frac{c^Tx+c_0}{d^Tx+d_0}}\\ &{}\mathrm{s.t.}&{} \mathcal {A}^{(k)} x\le b^{(k)},\\ &{}&{} x \in \mathbb {R}^n_+. \end{array} \end{aligned}$$

Assume that the feasible set \(X_k\) of \(\mathrm{(P_k)}\) is a regular set, i.e., a nonempty and bounded set, that D(x) is positive for all \(x \in X_k\), and that the function F does not reduce to a constant on \(X_k\). In this case, the problem \(\mathrm{(P_k)}\) attains its optimal value over \(X_k\) [3, Theorem 4.3]. As \(k \rightarrow \infty \), the problems \(\mathrm{(P_k)}\) approximate the problem (P).

Theorem 3.1

Suppose that \(x_k \in X_k\) is an optimal solution of \(\mathrm{(P_k)}\) for each \(k \in \mathbb {N}\). Then the sequence \(\{x_k\}_k\) is a maximizing sequence for \(\mathrm{(P)}.\) Moreover, if there exists \(k_0\) such that \(X_{k_0}\) is compact, then there exists \({\bar{x}} \in X_{k_0}\) such that \( F({\bar{x}})=z_P.\)

Proof

Let \(\alpha =z_P\) and, for each \(k \in \mathbb {N}\), let \(\bar{x}_k\in X_k\) be an optimal solution of \(\mathrm{(P_k)}\). It is obvious that

$$\begin{aligned}&X\subset \ldots \subset X_{k+1}\subset X_k\subset \ldots \subset X_1, \nonumber \\&X= \bigcap _{k=1}^\infty X_k. \end{aligned}$$
(2)

Since \(\bar{x}_k\in X_k\) is an optimal solution of \(\mathrm{(P_k)}\) for each \(k \in \mathbb {N}\) and \(X \subset X_k\) for every k, from (2) we get

$$\begin{aligned} F(\bar{x}_k) \ge F(\bar{x}_{k+1})\ge \ldots \ge \alpha . \end{aligned}$$
(3)

The sequence \(\{F(\bar{x}_k)\}_k\) is a decreasing sequence and is bounded below. Hence,

$$\begin{aligned} \{F(\bar{x}_k)\}_k \rightarrow \inf _k \{F(\bar{x}_k)\}_k. \end{aligned}$$

We claim that \(\alpha =\inf _k \{F(\bar{x}_k)\}_k\). Suppose to the contrary that \(\alpha <\inf _k \{F(\bar{x}_k)\}_k\), i.e.,

$$\begin{aligned} F(\bar{x}_k) > \alpha , \quad \forall k =1,2,\ldots . \end{aligned}$$

Since \(\alpha \) is the optimal value of (P), no \(\bar{x}_k\) can belong to X; that is, \(\bar{x}_k \in X_k\setminus X\) for all \(k=1,2, \ldots .\) It yields \(\bigcap _{k=1}^\infty X_k \ne X\), a contradiction.

On the other hand, if there exists \(k_0\) such that \(X_{k_0}\) is compact, then, by (2), the sequence \(\{\bar{x}_k\}_k\) is bounded. Hence, there exists a subsequence of \(\{\bar{x}_k\}_k\), denoted by \(\{\bar{x}_{k_n}\}_n\), which converges to some \(\bar{x}\in X_{k_0}\). By the continuity of the function F, we get

$$\begin{aligned} F(\bar{x}_{k_n}) \rightarrow F(\bar{x}). \end{aligned}$$

Since \(\{F(\bar{x}_{k_n})\}_n\) is a subsequence of the convergent sequence \(\{F(\bar{x}_k)\}_k\), the uniqueness of limits implies that \(\{F(\bar{x}_k)\} \rightarrow F(\bar{x})\). We obtain \(F(\bar{x})=\alpha .\) This completes the proof. \(\square \)

Remark 3.1

For k large enough, \(x_k\) can be regarded as an approximate solution of (P).
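As a numerical illustration of Theorem 3.1 and Remark 3.1, the sketch below (our own, using `scipy.optimize.linprog` and the Charnes-Cooper linearization recalled in Sect. 1) solves the truncations of Example 1 of Sect. 4, namely \(\mathrm{Max}\{1/x \mid (i+1)x \ge i,\ i=1,\ldots ,k,\ x\ge 0\}\), and prints the values \(F(x_k)=(k+1)/k \rightarrow z_P=1\). The helper `solve_Pk` is a hypothetical name of ours.

```python
# Sketch: build the maximizing sequence {x_k} of Theorem 3.1 for Example 1
# of Sect. 4 by solving each truncation (P_k).  solve_Pk is our own helper:
# it applies the Charnes-Cooper substitution y = t*x, t = 1/(d^T x + d0).
import numpy as np
from scipy.optimize import linprog

def solve_Pk(c, c0, d, d0, A, b):
    """max (c^T x + c0)/(d^T x + d0)  s.t.  A x <= b, x >= 0  (A finite)."""
    n = len(c)
    res = linprog(-np.concatenate([c, [c0]]),                   # max -> min
                  A_ub=np.hstack([A, -b.reshape(-1, 1)]), b_ub=np.zeros(len(b)),
                  A_eq=np.concatenate([d, [d0]]).reshape(1, -1), b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1), method="highs")
    y, t = res.x[:n], res.x[n]
    return y / t                                                # recover x

for k in (1, 5, 50, 500):
    # (P_k): max 1/x  s.t.  (i+1)x >= i, i.e. -(i+1)x <= -i,  i = 1..k
    A = np.array([[-(i + 1.0)] for i in range(1, k + 1)])
    b = np.array([-float(i) for i in range(1, k + 1)])
    xk = solve_Pk(np.array([0.0]), 1.0, np.array([1.0]), 0.0, A, b)[0]
    print(k, xk, 1.0 / xk)    # x_k = k/(k+1),  F(x_k) = (k+1)/k -> z_P = 1
```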

4 Duality theorems and dual scheme for finding optimal solutions of (P)

In [24], with some modifications of Seshan's dual scheme [18], a dual scheme for linear fractional programming problems was introduced. According to that scheme, the dual problem of \(\mathrm{(P_k)}\) is formulated as

$$\begin{aligned} \begin{array}{lll} \mathrm{(D_k)}&{} \mathrm{Min}&{} \displaystyle { I(u,v):= \frac{c^Tu+c_0}{d^Tu+d_0}},\\ &{}\mathrm{s.t.}&{} (u,v) \in Y_k, \end{array} \end{aligned}$$

where \( Y_k\) consists of all \((u,v)\in \mathbb {R}^n\times \mathbb {R}^k\) satisfying the constraints

$$\begin{aligned}&(d^Tu)c-(c^Tu)d-{\mathcal {A}^{(k)}}^Tv \le c_0d-d_0c, \end{aligned}$$
(4)
$$\begin{aligned}&c_0d^Tu-d_0c^Tu+(b^{(k)})^Tv \le 0, \end{aligned}$$
(5)
$$\begin{aligned}&\mathcal {A}^{(k)}u\le b^{(k)},\end{aligned}$$
(6)
$$\begin{aligned}&u \in \mathbb {R}^n_+,\ v \in \mathbb {R}^k_+. \end{aligned}$$
(7)

We assume that \(d^Tu+d_0 >0\) for every \((u,v) \in Y_k\).

The latter dual scheme has a specific property: if we project the feasible set of the dual problem, formulated according to the scheme introduced in [24], onto the horizon space of the primal problem, we obtain the solution set of the primal problem (see Theorem A.4). Hence, that dual scheme is useful for studying duality theorems for (P).

For convenience, weak, strong and converse duality theorems for \(\mathrm{(D_k)}\) will be introduced in the appendix of this article.

4.1 Duality theorems for (P)

In Sect. 3, it was shown that there is a close connection between (P) and the problems \(\mathrm{(P_k)}\). Motivated by the construction of the pairs \(\mathrm{((P_k), (D_k))}\), we propose the dual problem of (P) as follows:

$$\begin{aligned} \begin{array}{lll} \mathrm{(D)}&{} \mathrm{Min}&{} \displaystyle { I(u,v):= \frac{c^Tu+c_0}{d^Tu+d_0}},\\ &{}\mathrm{s.t.}&{} (u,v) \in Y, \end{array} \end{aligned}$$

where the feasible set Y consists of all \((u,v)\in \mathbb {R}^n\times \mathbb {R}^{[\mathbb {N}]}\) satisfying the following constraints:

$$\begin{aligned}&(d^Tu)c-(c^Tu)d-\mathcal {A}^Tv- (c_0d-d_0c)\le 0, \end{aligned}$$
(8)
$$\begin{aligned}&c_0d^Tu-d_0c^Tu+b^Tv \le 0, \end{aligned}$$
(9)
$$\begin{aligned}&\mathcal {A}u \le b, \end{aligned}$$
(10)
$$\begin{aligned}&u \in \mathbb {R}^n_+, v \in \mathbb {R}^{[\mathbb {N}]}_+. \end{aligned}$$
(11)

Since the primal problem (P) has an infinite number of constraints, the dual variable \(v \in \mathbb {R}^{[\mathbb {N}]}_+\) is used in the construction of (D). This choice was inspired by the dual variable space for linear semi-infinite programs introduced in [1, p. 66].

Assume that \(d^Tu+d_0>0\) for all \((u,v) \in Y\). We also assume that the optimal value of (D), denoted by \(z_D\), is finite. Note that, since each \(v \in \mathbb {R}^{[\mathbb {N}]}_+\) has only finitely many nonzero terms, the infinite sums in the constraints of (D) are well defined.

Theorem 4.1

(Weak duality) For any \(x \in X\) and any \((u,v) \in Y\), one has

$$\begin{aligned} F(x) \le I(u,v). \end{aligned}$$

Proof

Let \(x \in X\) and \((u,v) \in Y\). Since \(v \in \mathbb {R}^{[\mathbb {N}]}_+\), the sums appearing in the constraints of Y are finite. Using a technique similar to that in the proof of Theorem 2.1 in [24], we obtain the desired result. \(\square \)

We are now in a position to study strong duality between (P) and (D). A sequence \(\{(u_k,v_k)\}_k\) is said to be a minimizing sequence for \(\mathrm{(D)}\) if \((u_k,v_k)\in \mathbb {R}^n_+\times \mathbb {R}^k_+\) for each \(k\in \mathbb {N}\) (with \(\mathbb {R}^k_+\) embedded in \(\mathbb {R}^{[\mathbb {N}]}_+\) by appending zeros) and

$$\begin{aligned} \lim _{k \rightarrow \infty } I(u_k,v_k) =z_D. \end{aligned}$$

Theorem 4.2

(Strong duality) If \({\bar{x}}_k\in X_k\) is an optimal solution of \(\mathrm{(P_k)}\) for each \(k \in \mathbb {N}\), then there exist \({\bar{v}}_k \in \mathbb {R}^k_+\), \(k \in \mathbb {N}\), such that \(\{({\bar{x}}_k,{\bar{v}}_k)\}_k\) is a minimizing sequence for \(\mathrm{(D)}\) and

$$\begin{aligned} z_D=z_P. \end{aligned}$$

Proof

For each optimal solution \({\bar{x}}_k\) of \(\mathrm{(P_k)}\), \(k \in \mathbb {N}\), Theorem A.2 provides \(({\bar{u}}_k, \bar{v}_k) \in {Y}_k\) which is an optimal solution of \(\mathrm{({D}_k)}\), where \({\bar{u}}_k:={\bar{x}}_k\) and

$$\begin{aligned} F({\bar{x}}_k)=\max \{F(x) \mid x \in X_k\}=\min \{I(u,v)\mid (u,v) \in Y_k\}=I({\bar{u}}_k, {\bar{v}}_k). \end{aligned}$$
(12)

Combining this and (3), we get

$$\begin{aligned} I({\bar{u}}_k, {\bar{v}}_k)=F({\bar{x}}_k)\ge I({\bar{u}}_{k+1}, \bar{v}_{k+1})=F({\bar{x}}_{k+1})\ge \ldots \ge \alpha , k=1,2, \ldots \end{aligned}$$

where \(\alpha = z_P\). The sequence \(\{I({\bar{u}}_k, {\bar{v}}_k)\}_k\) is decreasing and bounded below; hence, it is convergent. Moreover,

$$\begin{aligned} I({\bar{u}}_k, {\bar{v}}_k)\ge z_D, \forall k \in \mathbb {N}, \end{aligned}$$
(13)

where \(z_D\) is the optimal value of (D). Note that, by Theorem 3.1, we have

$$\begin{aligned} \lim _{k \rightarrow \infty }F({\bar{x}}_k)=z_P. \end{aligned}$$
(14)

Combining (13), (14) and the weak duality, we get

$$\begin{aligned} \lim _{k \rightarrow \infty }F({\bar{x}}_k)= z_P \le z_D \le I({\bar{u}}_k, {\bar{v}}_k), \forall k \in \mathbb {N}. \end{aligned}$$

Hence, from (12), we obtain

$$\begin{aligned} \lim _{k \rightarrow \infty }F({\bar{x}}_k)= z_P= z_D=\lim _{k \rightarrow \infty } I({\bar{u}}_k, {\bar{v}}_k). \end{aligned}$$

The proof is completed. \(\square \)

Corollary 4.1

Suppose that, for each \(k \in \mathbb {N}\), the pair \(({\bar{u}}_k,\bar{v}_k)\) is an optimal solution for \(\mathrm{({D}_k)}\). Then, the sequence \(\{({\bar{u}}_k,{\bar{v}}_k)\}_k\) is a minimizing sequence for \(\mathrm{({D})}.\) Furthermore, strong duality between (P) and (D) holds.

Proof

Suppose that, for each \(k \in \mathbb {N}\), \((\bar{u}_k, \bar{v}_k)\) is an optimal solution of \(\mathrm{({D}_k)}\). By Theorem A.3, if we set \({\bar{x}}_k={\bar{u}}_k\), then \({\bar{x}}_k\) is an optimal solution of \(\mathrm{(P_k)}\). Hence, by Theorem 3.1, \(\{{\bar{x}}_k\}_k\) is a maximizing sequence for (P). Using an argument analogous to that in the proof of Theorem 4.2, we deduce the desired results. \(\square \)

Remark 4.1

From Theorem A.4, if we pick a point from the feasible set of \(\mathrm{({D}_k)}\) and project it onto the horizon space of the problem \(\mathrm{(P_k)}\), we obtain an optimal solution of \(\mathrm{(P_k)}\), i.e.,

$$\begin{aligned} \mathrm{Sol(P_k)}=\mathrm{Pr}_{\mathbb {R}^n}{({Y}_k)} \end{aligned}$$

where \(\mathrm{Pr}_{\mathbb {R}^n}{({Y}_k)}:= \{ u \in \mathbb {R}^n_+\mid (u,v)\in Y_k \hbox { for some } v\}\) and \(\mathrm{Sol(P_k)}\) is the optimal solution set of \(\mathrm{(P_k)}.\) Moreover, by Theorem A.2, for \(\bar{x}_k \in \mathrm{Sol (P_k)}\), we can choose a pair \((\bar{u}_k, \bar{v}_k) \in Y_k\) that is an optimal solution of \(\mathrm{(D_k)}\), where \(\bar{u}_k=\bar{x}_k \), \(\bar{v}_k=(d^T\bar{x}_k+d_0) \bar{v}^*\), and \(\bar{v}^*\) is an optimal solution of the linear problem \(\mathrm{(LD_k)}\) given in the proof of Theorem A.2.

4.2 Dual scheme for solving (P)

From Remark 4.1, we can see that the dual problem of (P) does not need to be solved, because a maximizing sequence for the primal problem can be obtained by taking the first component of feasible points of the problems \(\mathrm{(D_k)}\). Concretely, for each \(k \in \mathbb {N}\), if \((u_k,v_k)\) is a feasible point of \(\mathrm{(D_k)}\), then \(u_k\) is an optimal solution of \(\mathrm{(P_k)}\). Hence, the sequence \(\{u_k\}_k\) is a maximizing sequence for \(\mathrm{(P)}\). The dual scheme for solving the problem (P) is proposed below.

  1. Step 1:

    From the problem (P), construct the problem \(\mathrm{(P_k)}\). Then, formulate the dual problem \(\mathrm{(D_k)}\) of \(\mathrm{(P_k)}\).

  2. Step 2:

    Determine the feasible set \({Y}_k\) of \(\mathrm{(D_k)}\). For each \(k=1,2, \ldots \), pick a point \((\bar{u}_k,\bar{v}_k) \in {Y}_k\).

  3. Step 3:

    Build up the sequence \(\{\bar{u}_k\}_k\). It is a maximizing sequence for \(\mathrm{(P)}\). The optimal value of (P) follows.

Remark 4.2

The point \((\bar{u}_k,\bar{v}_k) \in {Y}_k\) can be found by using the simplex method.

The following examples are given in detail to illustrate the dual method for solving linear semi-infinite fractional problems.

Example 1

$$\begin{aligned} \mathrm{(P_1) \quad Max} \left\{ \displaystyle F(x)=\frac{1}{x}\, \Big | \, (k+1)x \ge k, k=1,2,\ldots \right\} . \end{aligned}$$

The feasible set of the above problem is \(\{x \in \mathbb {R} \mid x \ge \frac{k}{k+1} \hbox { for all } k=1,2,\ldots \}=[1,+\infty )\). It follows that \(\max F(x)=F(1)=1\).

This result can be checked by the dual scheme below. Let us consider the problem \(\mathrm{(P_{1k})}\) associated with \(\mathrm{(P_1)}\):

$$\begin{aligned} \begin{array}{rl} \mathrm{(P_{1k})\quad Max}&{} F(x)\\ \mathrm{s.t.}&{} Ax \le b,\\ &{}x \ge 0, \end{array} \end{aligned}$$

where \(A=\left( \begin{array}{c} -2\\ -3\\ \vdots \\ -(k+1) \end{array} \right) \) and \(b=\left( \begin{array}{c} -1\\ -2\\ \vdots \\ -k \end{array} \right) . \)

The dual problem of \(\mathrm{(P_{1k})}\) is

$$\begin{aligned} \mathrm{(D_{1k}) \quad Min} \left\{ \displaystyle I(u,v)=\frac{1}{u} \; \Big | \;(u,v) \in Y_{1k}\right\} , \end{aligned}$$

where

$$\begin{aligned} Y_{1k}=\left\{ (u,v) \left| \begin{array}{l} u \in \mathbb {R}_+, v \in \mathbb {R}^k_+,\\ 2v_1+3v_2+\ldots +(k+1)v_k \le 1,\\ u \le v_1+2v_2+\ldots +kv_k,\\ u \ge \frac{i}{i+1}, i=1, \ldots ,k. \end{array}\right. \right\} . \end{aligned}$$

Pick a point \((u,v) \in Y_{1k}\):

$$\begin{aligned} u=\frac{k}{k+1}, v= \left( 0,\ldots ,0, \frac{1}{k+1}\right) . \end{aligned}$$

According to the scheme, \(x_k=u= \frac{k}{k+1}\) is an optimal solution of \(\mathrm{(P_{1k})}\). The maximizing sequence for the primal problem is \(\{x_k\}=\{k/(k+1)\}\). Hence, the optimal value is

$$\begin{aligned} \lim _{k \rightarrow \infty }F(x_k)=1. \end{aligned}$$
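This computation can be reproduced numerically. In the sketch below (our own illustration), a point of \(Y_{1k}\) is picked with `scipy.optimize.linprog` under a zero objective, in the spirit of Remark 4.2; by Theorem A.4, the u-component of any feasible point of \(\mathrm{(D_{1k})}\) is an optimal solution of \(\mathrm{(P_{1k})}\).

```python
# Sketch: pick a feasible point of Y_{1k} with a zero objective (Remark 4.2)
# and read off its u-component, which solves (P_{1k}) by Theorem A.4.
import numpy as np
from scipy.optimize import linprog

def pick_from_Y1k(k):
    # variables z = (u, v_1, ..., v_k) >= 0
    A_ub = [[0.0] + [i + 1.0 for i in range(1, k + 1)],    # sum (i+1)v_i <= 1
            [1.0] + [-float(i) for i in range(1, k + 1)]]  # u <= sum i v_i
    b_ub = [1.0, 0.0]
    for i in range(1, k + 1):                              # u >= i/(i+1)
        A_ub.append([-1.0] + [0.0] * k)
        b_ub.append(-i / (i + 1.0))
    res = linprog(np.zeros(k + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (k + 1), method="highs")
    return res.x

for k in (1, 10, 100):
    u = pick_from_Y1k(k)[0]    # Step 3: u is an optimal solution of (P_{1k})
    print(k, u, 1.0 / u)       # u = k/(k+1),  F(u) = (k+1)/k -> z_P = 1
```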

Example 2

$$\begin{aligned} \mathrm{(P_2)\quad Max} \left\{ F(x)=\frac{x_2-1}{x_1+1}\, \Big | \, x_1\ge 0, x_2 \ge 0, (k+1)x_1+kx_2 \le k+1, k=1,2,\ldots \right\} . \end{aligned}$$

The problem \((\mathrm{P}_{2\mathrm{k}})\) associated with \((\mathrm{P}_{2})\) is

$$\begin{aligned} \begin{array}{rl} (\mathrm{P}_{2\mathrm{k}})\quad \mathrm{Max}&{} F(x)\\ \mathrm{s.t.}&{} Ax \le b,\\ &{}x \in \mathbb {R}^2_+, \end{array} \end{aligned}$$

where \(A=\left( \begin{array}{cc} 2&{}1\\ 3&{}2\\ \vdots &{}\vdots \\ k+1&{} k \end{array} \right) \) and \(b=\left( \begin{array}{c} 2\\ 3\\ \vdots \\ k+1 \end{array} \right) . \)

The dual problem of \(\mathrm{(P_{2k})}\) is

$$\begin{aligned} \mathrm{(D_{2k}) \quad Min} \left\{ \displaystyle I(u,v)=\frac{u_2-1}{u_1+1} \; \Big | \;(u,v) \in Y_{2k}\right\} , \end{aligned}$$

where

$$\begin{aligned} Y_{2k}=\left\{ (u,v) \left| \begin{array}{l} \; \; u \in \mathbb {R}^2_+, v \in \mathbb {R}^k_+,\\ -u_2-[2v_1+3v_2+\ldots +(k+1)v_k] +1\le 0,\\ u_1-(v_1+2v_2+\ldots +kv_k)+1\le 0,\\ -u_1-u_2+2v_1+3v_2+\ldots +(k+1)v_k\le 0,\\ (i+1)u_1+iu_2\le i+1, i=1, \ldots ,k. \end{array}\right. \right\} \end{aligned}$$

or,

$$\begin{aligned} Y_{2k}=\left\{ (u,v) \left| \begin{array}{l} \; \; u \in \mathbb {R}^2_+, v \in \mathbb {R}^k_+,\\ u_2\ge 1-[2v_1+3v_2+\ldots +(k+1)v_k],\\ u_1\le (v_1+2v_2+\ldots +kv_k)-1,\\ u_1+u_2\ge 2v_1+3v_2+\ldots +(k+1)v_k,\\ u_1+\frac{i}{i+1}u_2\le 1, i=1, \ldots ,k. \end{array}\right. \right\} \end{aligned}$$

If we choose \((u,v)=(u_1,u_2,v_1,v_2, \ldots , v_k)\) where

$$\begin{aligned} u_1=0, \quad u_2=\frac{k+1}{k}, \quad v_1=\ldots =v_{k-1}=0, \quad v_k=\frac{1}{k}, \end{aligned}$$

then \((u,v) \in Y_{2k}\). Hence, \(\{(0, \frac{k+1}{k})\}_k\) is a maximizing sequence for \(\mathrm{(P_2)}\), and the optimal value of \(\mathrm{(P_2)}\) is \(\lim _{k\rightarrow \infty } F(0, \frac{k+1}{k})=\lim _{k\rightarrow \infty } \frac{1}{k}=0.\)
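A quick numerical check (our own sketch) confirms that the chosen point belongs to \(Y_{2k}\) and that \(F(0,\frac{k+1}{k})=\frac{1}{k} \rightarrow 0\).

```python
# Sketch: verify that the chosen point lies in Y_{2k} and evaluate F(u_k).
# The membership test encodes the constraints of Y_{2k} listed above;
# tol absorbs floating-point error.
import numpy as np

def in_Y2k(u, v, k, tol=1e-12):
    i = np.arange(1.0, k + 1.0)
    s1 = np.dot(i + 1.0, v)    # 2v_1 + 3v_2 + ... + (k+1)v_k
    s2 = np.dot(i, v)          # v_1 + 2v_2 + ... + k v_k
    return bool(u[1] >= 1.0 - s1 - tol
                and u[0] <= s2 - 1.0 + tol
                and u[0] + u[1] >= s1 - tol
                and np.all((i + 1.0) * u[0] + i * u[1] <= i + 1.0 + tol)
                and np.all(u >= 0.0) and np.all(v >= 0.0))

for k in (1, 10, 100):
    u = np.array([0.0, (k + 1.0) / k])
    v = np.zeros(k); v[-1] = 1.0 / k
    print(k, in_Y2k(u, v, k), (u[1] - 1.0) / (u[0] + 1.0))   # True, F = 1/k
```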

5 Conclusion

We proposed a way to solve a linear countable semi-infinite fractional programming problem via a dual scheme. Under this scheme, the dual problem does not need to be solved, and the optimal value of the primal problem can be determined via optimizing sequences.