1 Introduction

Many problems arising in the field of nonlinear dynamic systems, such as obtaining the numerical solution of optimal control problems [1, 2], belong to semi-infinite programming (SIP) [3, 4]. In recent years, much effort has been devoted to the study of SIP (see e.g. [5, 6]). A simple form is given by

$$\mathrm{SIP}\qquad \begin{array}{@{}ll} \min& f(x)\cr\noalign{\vspace{3pt}} \mathrm{s.t.}& g(x,\omega)\leq0,\quad\forall\omega\in \varOmega=[a,b], \end{array} $$

where f:R^n→R is continuously differentiable and g:R^n×[a,b]→R is continuously differentiable with respect to x.

A common approach for solving SIP is the discretization method (see e.g. [7]). Generally, given any positive integer q, the interval [a,b] can be discretized into the following finite set:

$$ \varOmega_q=\biggl\{\omega_i=a+\frac{i(b-a)}{q},\ i=0,1,\ldots,q\biggr\}. $$
(1)

Solving the SIP problem then boils down to solving a discretized SIP problem of the form

$$ {\mathrm{SIP}_q}\qquad \begin{array}{@{}ll} \min& f(x) \cr\noalign{\vspace{3pt}} \mathrm{s.t.}& g(x,\omega)\leq0,\quad\forall\omega\in \varOmega_q. \end{array} $$
(2)

When the Hausdorff distance between Ω and Ω_q goes to zero, the solution of SIP_q approaches the solution of SIP. For the sake of discussion, in this work we consider only the discretized semi-infinite optimization problem SIP_q with a fixed discretization level q. Note that if q becomes very large, the number of constraints of SIP_q will be very large, which causes difficulties in the numerical solution. Thus, the design of effective numerical methods for solving SIP_q has attracted the attention of many scholars.
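For illustration, the discretization step can be realized in a few lines. The sketch below assumes the uniform grid commonly used for (1); the function names and the toy constraint are ours:

```python
import numpy as np

def discretize(a, b, q):
    """Uniform grid Omega_q = {a + i*(b-a)/q : i = 0,...,q} with q+1 points."""
    return a + (b - a) * np.arange(q + 1) / q

def max_violation(g, x, omegas):
    """Largest constraint value max_omega g(x, omega); x is feasible iff <= 0."""
    return max(g(x, w) for w in omegas)

# Toy instance: g(x, omega) = x[0]*omega - 1 on [0, 2]
omegas = discretize(0.0, 2.0, 100)
g = lambda x, w: x[0] * w - 1.0
print(len(omegas))                                      # 101 grid points
print(max_violation(g, np.array([0.4]), omegas) <= 0.0)
```

With q=100 this reproduces the discretization level |Ω_q|=101 used in the experiments of Sect. 4.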

In 1998, Lawrence and Tits [8] proposed a feasible SQP algorithm for SIP_q; however, in it two QP subproblems need to be solved to obtain the main search direction at each iteration. Later, combining the technique of updating the constraint index set in the QP subproblems [8] with the idea of the norm-relaxed method [9], Jian, Xu and Han [10] presented a norm-relaxed method of feasible directions for SIP_q, in which only one QP subproblem is solved at each iteration to yield the main search direction. However, in order to achieve superlinear convergence, the parameter η_k in the QP subproblem needs to be updated at each iteration.

On the other hand, in recent years, with the spread of tools for computing second derivatives, sequential quadratically constrained quadratic programming (SQCQP) methods have developed rapidly [11, 12, 17]. However, these methods must compute exact or approximate second derivatives of the functions. Based on the norm-relaxed technique and the idea of SQCQP methods [13], Jian, Hu, Tang and Zheng [14] proposed a new feasible SQCQP algorithm, in which the main search direction is obtained by solving one QCQP subproblem with simple quadratic constraints. This algorithm has the advantage of low requirements on the parameters, which do not need to be updated. This overcomes the drawback of updating the parameter η_k to some extent; however, its drawback is the high computational cost and numerical instability brought by the pivoting operation used to calculate the approximate active set.

In order to resolve these problems while retaining the advantages of the SQCQP algorithm [14], we design a sequential simple QCQP algorithm for SIP_q based on the technique of updating the constraint index set [8, 10]. The proposed algorithm has the following features: (i) the initial point is feasible; (ii) at each iteration, only one simple QCQP subproblem is solved to obtain the search direction; (iii) only a few of the constraints are used in the QCQP subproblem, which greatly reduces the computational cost; (iv) under mild conditions, global convergence is proved.

This paper is organized as follows. In the next section, we present our algorithm and discuss its properties. In Sect. 3, the global convergence is proved. Finally, some preliminary numerical results are reported in Sect. 4.

2 Description of algorithm

For the sake of analysis, denote the feasible set and the active set of problem SIP_q by

$$\everymath{\displaystyle }\begin{array}{ll} X_q=\bigl\{x\in R^n|\ g(x,\omega)\leq0, \omega\in \varOmega_q\bigr\}, \cr\noalign{\vspace{3pt}} \varOmega_0(x)=\bigl\{\omega\in\varOmega_q|\ g(x,\omega)=0\bigr\}. \end{array} $$
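Numerically, the active set Ω_0(x) is detected only up to a small tolerance; a minimal sketch (the tolerance value and the names are our assumptions):

```python
import numpy as np

def active_set(g, x, omegas, tol=1e-8):
    """Indices omega in Omega_q at which the constraint is (numerically)
    active, i.e. g(x, omega) is within tol of zero."""
    return [w for w in omegas if abs(g(x, w)) <= tol]

# Example: g(x, omega) = omega^2 - x is active exactly at omega = +-sqrt(x)
omegas = np.linspace(-1.0, 1.0, 201)
g = lambda x, w: w * w - x
act = active_set(g, 0.25, omegas)
print(act)   # the two grid points at -0.5 and 0.5
```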

For a point x^k∈X_q and an index set Ω_k⊆Ω_q, we consider the following search direction subproblem QCQP(x^k,H_k,Ω_k):

$$ \everymath{\displaystyle }\begin{array}{@{}l@{\quad }l@{\quad }l} \min_{(d,z)}&\gamma_0z +\frac{1}{2}d^{T}H_{k}d \cr\noalign{\vspace{3pt}} \mathrm{s.t.}& \nabla f\bigl(x^{k}\bigr)^{T}d\leq\gamma_0z, \cr\noalign{\vspace{3pt}} & g\bigl(x^k,\omega\bigr)+\nabla_x g\bigl(x^k,\omega\bigr)^Td \cr\noalign{\vspace{3pt}} &\quad-\gamma_{\omega}\sigma_{k}z+\varepsilon_kz^2\leq0,\quad \omega\in\varOmega_k, \end{array} $$
(3)

where γ_ω (ω∈{0}∪Ω_k) are all positive constants, H_k is a symmetric positive definite matrix, and σ_k, ε_k are nonnegative constants such that (σ_k,ε_k)≠(0,0).
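Subproblem (3) is a small convex QCQP in the variables (d,z). In the experiments of Sect. 4 it is solved by MOSEK, but any NLP solver can handle it; the sketch below uses SciPy's SLSQP as a stand-in, and all names and the toy data are ours:

```python
import numpy as np
from scipy.optimize import minimize

def solve_subproblem(grad_f, g_vals, grad_g, H, gamma0, gamma_w, sigma, eps):
    """Solve QCQP (3) in the variables (d, z) with a generic NLP solver.

    g_vals[i] = g(x^k, omega_i) and grad_g[i] = grad_x g(x^k, omega_i)
    for the indices omega_i in Omega_k."""
    n = len(grad_f)
    def obj(v):
        d, z = v[:n], v[n]
        return gamma0 * z + 0.5 * d @ H @ d
    cons = [{"type": "ineq",   # gamma0*z - grad_f^T d >= 0
             "fun": lambda v: gamma0 * v[n] - grad_f @ v[:n]}]
    for gi, ggi, gw in zip(g_vals, grad_g, gamma_w):
        cons.append({"type": "ineq",  # -(g + grad_g^T d - gw*sigma*z + eps*z^2) >= 0
                     "fun": lambda v, gi=gi, ggi=ggi, gw=gw:
                         -(gi + ggi @ v[:n] - gw * sigma * v[n] + eps * v[n] ** 2)})
    res = minimize(obj, np.zeros(n + 1), constraints=cons, method="SLSQP")
    return res.x[:n], res.x[n]

# Tiny instance with one constraint index
d, z = solve_subproblem(np.array([1.0]), [-0.5], [np.array([0.3])],
                        np.eye(1), gamma0=2.0, gamma_w=[1.0], sigma=0.5, eps=0.1)
print(z <= 1e-8)   # z_k <= 0, consistent with Lemma 2 (i)
```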

Assumption 1

The functions f and g(⋅,ω):R^n→R, ω∈Ω_q, are all at least first-order continuously differentiable.

Lemma 1

Suppose that Assumption 1 holds and x kX q . Then (i) subproblem (3) has a unique optimal solution; and (ii) (d k,z k ) is an optimal solution of (3) if and only if it is a KKT point of (3).

Proof

(i) Referring to the proof of [14, Lemma 2.1(i)], we can conclude that (3) has an optimal solution. Next, we need only prove uniqueness. Assume that (z_1,d^1) and (z_2,d^2) are both optimal solutions of (3); then

$$ \everymath{\displaystyle }\begin{array}{@{}lll} a\stackrel{\triangle}{=}\gamma_0 z_1+\frac{1}{2} \bigl(d^1\bigr)^TH_kd^1, \cr\noalign{\vspace{3pt}} a= \gamma_0 z_2+\frac{1}{2} \bigl(d^2 \bigr)^TH_kd^2 \end{array} $$
(4)

Note that (3) is a convex program, so the convex combination (λz_1+(1−λ)z_2, λd^1+(1−λ)d^2) with λ∈(0,1) is also an optimal solution of (3). So \(a=\gamma_{0}(\lambda z_{1}+(1-\lambda)z_{2}) +\frac{1}{2}(\lambda d^{1}+(1-\lambda)d^{2})^{T}H_{k}(\lambda d^{1}+(1-\lambda )d^{2})\). Further, we have

$$\everymath{\displaystyle }\begin{array}{@{}ll}\lambda\biggl(\gamma_0z_1+\frac{1}{2} \bigl(d^1\bigr)^TH_kd^1\biggr) \cr\noalign{\vspace{3pt}} \qquad{}+(1-\lambda)\biggl(\gamma_0z_2+\frac{1}{2}\bigl(d^2\bigr)^TH_kd^2\biggr) \cr\noalign{\vspace{3pt}} \quad{} =\gamma_0\bigl(\lambda z_1+(1-\lambda)z_2\bigr) \cr\noalign{\vspace{3pt}} \qquad{} +\frac{1}{2}\bigl(\lambda d^1+(1-\lambda)d^2\bigr)^TH_k\bigl(\lambda d^1 \cr\noalign{\vspace{3pt}} \qquad{}+(1-\lambda)d^2\bigr). \end{array} $$

Since λ∈(0,1), expanding and simplifying the relation above gives (d^1−d^2)^T H_k (d^1−d^2)=0. This, together with the positive definiteness of H_k, shows that d^1=d^2. Finally, z_1=z_2 follows immediately from the first equality of (4). Thus, conclusion (i) is at hand.

(ii) The proof is similar to that of [14, Lemma 2.1(ii)].

 □

If (d k,z k ) is a KKT point for (3), then there exist multipliers μ k and \(u^{k}=(u_{\omega}^{k}, \omega\in\varOmega_{k})\) such that

$$ H_kd^k+\mu_k\nabla f \bigl(x^k\bigr)+\sum_{\omega\in\varOmega_k}u_{\omega}^k \nabla_x g\bigl(x^k,\omega\bigr)=0, $$
(5)
$$ \gamma_0=\gamma_0\mu_k+\sum _{\omega \in\varOmega_k} u_{\omega}^k( \gamma_{\omega}\sigma_k-2z_k\varepsilon_k), $$
(6)
$$ 0\leq\mu_k\bot\bigl(\gamma_0z_k- \nabla f\bigl(x^k\bigr)^Td^k \bigr)\geq0, $$
(7)
$$ \begin{array}{@{}lll}0&\leq& u_{\omega}^k\bot\bigl(\gamma_{\omega}\sigma_kz_k-\varepsilon_kz_k^2-g \bigl(x^k,\omega\bigr) \cr\noalign{\vspace{3pt}} && {}-\nabla_x g\bigl(x^k,\omega\bigr)^Td^k \bigr) \cr\noalign{\vspace{3pt}} &\geq& 0,\quad\omega\in\varOmega_k. \end{array} $$
(8)
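The stationarity conditions (5)–(6) are easy to check numerically for a computed primal-dual pair. A minimal residual check, with hand-built example data (all names are ours):

```python
import numpy as np

def kkt_residuals(d, z, mu, u, grad_f, grad_g, H, gamma0, gamma_w, sigma, eps):
    """Residuals of the stationarity conditions (5) and (6)."""
    r5 = H @ d + mu * grad_f + sum(ui * gi for ui, gi in zip(u, grad_g))
    r6 = gamma0 - gamma0 * mu - sum(ui * (gw * sigma - 2 * z * eps)
                                    for ui, gw in zip(u, gamma_w))
    return np.linalg.norm(r5), abs(r6)

# Hand-built data with H = I, d = (-1,), grad_f = (1,), mu = 1, u = (0,):
# (5) reads d + grad_f = 0 and (6) reads gamma0 = gamma0*mu, both exact.
r5, r6 = kkt_residuals(np.array([-1.0]), -0.5, 1.0, [0.0],
                       np.array([1.0]), [np.array([0.0])],
                       np.eye(1), 2.0, [1.0], 0.5, 0.1)
print(r5, r6)   # both residuals are 0 for this construction
```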

Of course, if x k is a KKT point for problem SIP q (2), then there exists a KKT multiplier vector \(\lambda^{k}=(\lambda_{\omega}^{k},\omega\in\varOmega_{q})\) such that

$$ \everymath{\displaystyle }\begin{array}{@{}ll}\nabla f\bigl(x^k\bigr)+\sum_{\omega\in\varOmega_q}\lambda_{\omega }^k\nabla_x g\bigl(x^k,\omega\bigr)=0, \cr\noalign{\vspace{3pt}} 0\leq\lambda_{\omega}^k\bot\ g\bigl(x^k,\omega\bigr)\leq0, \quad \forall\omega\in\varOmega_q. \end{array} $$
(9)

Assumption 2

Suppose that the Mangasarian–Fromovitz constraint qualification (MFCQ) is satisfied by problem (2) at every x∈X_q, i.e., there exists a vector d∈R^n such that ∇_x g(x,ω)^T d<0, ∀ω∈Ω_0(x).

Lemma 2

Suppose that Assumptions 1–2 hold and Ω_k⊆Ω_q is such that Ω_0(x^k)⊆Ω_k. Let (d^k,z_k) be the optimal solution of (3). Then (i) γ_0 z_k+(d^k)^T H_k d^k≤0 and z_k≤0;

(ii) z_k=0 ⇔ d^k=0 ⇔ x^k is a KKT point of problem (2); and

(iii) if x^k is not a KKT point of (2), then z_k<0, ∇f(x^k)^T d^k<0 and ∇_x g(x^k,ω)^T d^k<0, ∀ω∈Ω_0(x^k).

Proof

(i) From (5) and (7)–(8), one has

$$ \gamma_0z_k+\bigl(d^k\bigr)^TH_kd^k\leq0. $$

Then, in view of the positive definite property of H k , one has \(z_{k}\leq-\frac{1}{\gamma_{0}}(d^{k})^{T}H_{k}d^{k}\leq0\).

(ii)–(iii) The proof is similar to that of [10, Lemma 2.2]. □

Algorithm A

Discretization: given an appropriately large positive integer q (generally, q≥100), the interval [a,b] is discretized into the finite set Ω_q by formula (1).

Parameters: \(\alpha\in(0,\frac{1}{2})\), β∈(0,1), 0<ϱ≪1,γ 0, γ ω >0, ωΩ q .

Data: x^0∈X_q; choose Ω_0 by the formula Ω_0=Ω_0(x^0)∪{a}∪{b}; H_0∈R^{n×n} is symmetric and positive definite; set k=0.

Step 1. Computation of search direction. Select parameters σ k and ε k such that 0≤(ε k ,σ k )≠(0,0), solve (3) to obtain a KKT pair \((d^{k}, z_{k}, u_{\varOmega_{k}}^{k}, \mu_{k})\). If z k =0, then x k is a KKT point of (2) and stop.

Step 2. Line search. Compute t_k, the first value t of the sequence {1,β,β^2,…} that satisfies

$$ f\bigl(x^k+td^k\bigr)\leq f\bigl(x^k\bigr)+\alpha t\gamma_0z_k, $$
(10a)
$$ g\bigl(x^k+td^k,\omega\bigr)\leq0,\quad\forall\omega\in\varOmega_q. $$
(10b)

Step 3. Updates. (i) Set x k+1=x k+t k d k.

(ii) Pick Ω_{k+1} such that

$$ \varOmega_{k+1}\supseteq\varOmega_k^b\cup\varOmega_0\bigl(x^{k+1}\bigr)\cup\{\bar{\omega}_k\}, $$

where \(\varOmega_{k}^{b}=\{\omega\in\varOmega_{k}|\ u_{\omega}^{k}>0\}\), and \(\bar{\omega}_{k}\) is the first index in Ω q such that (10b) does not hold.

(iii) If t_k<ϱ and \(\bar{\omega}_{k}\notin\varOmega_{k}\), then set H_{k+1}=H_k. Otherwise, compute a new symmetric positive definite approximation H_{k+1} by some suitable technique.

(iv) Set k:=k+1 and go back to Step 1.
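For concreteness, the control flow of Algorithm A can be sketched as follows. This is a simplified Python sketch under our own naming: the QCQP subproblem solver is abstracted as a callable, the line search is taken to be a standard feasible Armijo-type test (an assumption), and the index-set update keeps only positive-multiplier and active indices:

```python
import numpy as np

def algorithm_a(x0, f, grad_f, g, omegas, solve_qcqp,
                alpha=0.4, beta=0.8, tol=1e-6, max_iter=100):
    """Skeleton of Algorithm A.  `solve_qcqp(x, Omega_k)` must return a
    triple (d, z, u) for the current point and working index set."""
    x = np.asarray(x0, dtype=float)
    act = lambda xx: {w for w in omegas if abs(g(xx, w)) <= 1e-8}
    Omega_k = sorted({omegas[0], omegas[-1]} | act(x))   # Omega_0 with end-points
    for _ in range(max_iter):
        d, z, u = solve_qcqp(x, Omega_k)                 # Step 1
        if abs(z) <= tol:
            break                                        # x is (approximately) KKT
        t = 1.0                                          # Step 2: backtracking
        while t > 1e-14 and (
                f(x + t * d) > f(x) + alpha * t * (grad_f(x) @ d)
                or any(g(x + t * d, w) > 0 for w in omegas)):
            t *= beta
        if t <= 1e-14:
            break                                        # step has collapsed
        x = x + t * d                                    # Step 3(i)
        Omega_k = sorted({w for w, uw in zip(Omega_k, u) if uw > 0} | act(x))
    return x

# Toy run: min (x-2)^2 s.t. x*omega - 1 <= 0 on [0,1] (solution x = 1),
# with a steepest-descent stub standing in for the QCQP subproblem.
f = lambda x: float((x[0] - 2.0) ** 2)
grad_f = lambda x: np.array([2.0 * (x[0] - 2.0)])
g = lambda x, w: float(x[0] * w - 1.0)
omegas = list(np.linspace(0.0, 1.0, 101))
stub = lambda x, Om: (-grad_f(x), -float(grad_f(x) @ grad_f(x)), [0.0] * len(Om))
x_out = algorithm_a(np.array([0.0]), f, grad_f, g, omegas, stub)
print(x_out)   # close to the solution x = 1, and feasible
```

The stub direction does not satisfy the subproblem constraints of (3); it only serves to exercise the feasible line search and the index-set bookkeeping.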

Remark 1

(i) Including the end-points in Ω_0 at the first iteration often leads to a better initial search direction; see Ref. [8]. As to the update of Ω_{k+1} in Step 3, we allow additional constraint indices to be added to Ω_{k+1}. Since adjacent constraint functions of SIP_q are "sequentially related" when the discretization is fine, Refs. [8, 15, 16] add the approximate active set of "left local maximizers" at x^k to Ω_{k+1}.

(ii) When t_k<ϱ and \(\bar{\omega}_{k}\notin\varOmega_{k}\), the matrix H_k is not updated, which is key to proving Lemma 8 below.

Using Taylor expansions of f(x^k+td^k) and g(x^k+td^k,ω), as well as g(x^k,ω)<0, ∀ω∈Ω_q∖Ω_0(x^k), by Lemma 2 (iii) and Assumption 1 we obtain the following lemma, which shows that our algorithm is well defined.

Lemma 3

For each k, the inequalities (10a)–(10b) are satisfied for all sufficiently small t>0, so the line search in Step 2 yields a step size t_k=β^j for some j=j(k).

3 Global convergence

In this section, we analyze the global convergence of Algorithm A; for this, the following basic assumptions are necessary.

Assumption 3

(i) The iteration sequence {x^k} is bounded; (ii) the matrices H_k are symmetric and uniformly positive definite, i.e., there exist two positive constants a and b such that a∥d∥^2≤d^T H_k d≤b∥d∥^2, ∀d∈R^n, ∀k; and (iii) the parameter sequences {σ_k} and {ε_k} are bounded, and there exists a constant \(\underline{c}>0\) such that \(\sigma_{k}+\varepsilon_{k}\geq\underline{c}\) holds for all k large enough.
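For a single matrix, the uniform bounds in (ii) are just the extreme eigenvalues; the Rayleigh-quotient inequality can be checked directly (illustrative values, our own example):

```python
import numpy as np

# A symmetric positive definite H and its eigenvalue bounds a, b
H = np.array([[2.0, 0.5],
              [0.5, 1.0]])
eigs = np.linalg.eigvalsh(H)       # sorted ascending
a, b = eigs[0], eigs[-1]

d = np.array([1.0, -2.0])
quad = d @ H @ d                   # d^T H d

# Rayleigh quotient bounds: a*||d||^2 <= d^T H d <= b*||d||^2
print(a * (d @ d) <= quad <= b * (d @ d))   # True
```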

Lemma 4

The sequence {(d^k,z_k)} is bounded, and lim_{k→∞} t_k d^k=0.

Proof

(i) Boundedness of {d^k} follows from the first constraint of (3), Lemma 2 (i), and Assumptions 1 and 3. Further, boundedness of {z_k} follows from \(0\geq z_{k}\geq \frac{1}{\gamma_{0}}\nabla f(x^{k})^{T}d^{k}\).

(ii) From Assumption 3 and the line search (10a), it follows that {f(x^k)} is monotonically decreasing and convergent. On the other hand, from (10a), the first constraint of (3), Lemma 2 (i) and Assumption 3 (ii), we can easily see that lim_{k→∞} t_k d^k=0. □

Lemma 5

The multiplier sequences {μ k } and \(\{u^{k}\stackrel{\triangle }{=}(u_{\varOmega_{k}}^{k}, 0_{\varOmega_{q}\backslash \varOmega_{k}})\}\) are bounded.

Proof

From the KKT condition (6) and z_k≤0, one has \(\gamma_{0}=\gamma_{0}\mu_{k}+\sum_{\omega\in\varOmega _{k}}u_{\omega}^{k}(\gamma_{\omega}\sigma_{k}-2\varepsilon_{k} z_{k})\geq\gamma_{0}\mu_{k}\), which shows that 0≤μ_k≤1. Thus {μ_k} is bounded. Inspired by the proof of [10, Lemma 3.2], we can conclude the boundedness of {u^k}. □

For the sake of convenience, denote the negative of the optimal value of (3) by

$$ v_k=-\gamma_0z_k-\frac{1}{2}\bigl(d^k\bigr)^TH_kd^k. $$
(11)

Lemma 6

Let K be a given infinite iteration index set. Then (i) \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(0,0)\) if and only if \(v_{k}\stackrel{k\in K}{\longrightarrow}0\); and (ii) if \(d^{k}\stackrel{k\in K}{\longrightarrow}0\), then every accumulation point of {x^k}_K is a KKT point of SIP_q (2). Further, for a given limit point x^* of {x^k}_K, there exists an infinite index set K′⊆K such that {(x^k,u^k/μ_k)}_{K′} converges to the KKT pair (x^*,λ^*), with lim_{k∈K′} μ_k=μ_*>0.

Proof

(i) If \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(0,0)\), then \(v_{k}\stackrel{k\in K}{\longrightarrow}0\) follows from (11). Conversely, combining (11), Lemma 2 (i) and Assumption 3 (ii), one has \(v_{k}\geq\frac{1}{2}a\|d^{k}\|^{2}\geq0\) for kK. Thus, if \(v_{k}\stackrel{k\in K}{\longrightarrow}0\), then \(d^{k}\stackrel{k\in K}{\longrightarrow}0\). Further, \(z_{k}\stackrel{k\in K}{\longrightarrow}0\) follows from ∇f(x k)T d kγ 0 z k ≤0. (ii) The proof is similar to that of [10, Lemma 3.4]. □

Lemma 7

Let (d k,z k ) be the unique solution of QCQP (x k,H k ,Ω k ), and let K be an infinite index set such that \((x^{k},H_{k},d^{k},z_{k})\stackrel {k\in K}{\longrightarrow}(x^{*},H_{*},d^{*},z_{*}), \varOmega_{k}\equiv\hat{\varOmega},\ \forall k\in K\). Then (i) (d ,z ) is the unique solution of QCQP \((x^{*}, H_{*}, \hat{\varOmega})\); and (ii) If d ≠0, then there exists a constant \(\underline{t}>0\) such that (10a) and \(g(x^{k}+td^{k}, \omega)\leq0, \omega\in \hat{\varOmega}\) hold for all \(t\in [0,\underline{t}]\) and all kK large enough.

Proof

(i) Using the boundedness of the multiplier sequence {(μ_k,u^k)}_K and passing to the limit in the KKT conditions (5)–(8), the conclusion of this lemma is at hand.

(ii) In view of lim kK d k=d ≠0, there exists a constant δ 0>0 such that ∥d k∥≥δ 0,∀kK. Therefore, by Lemma 2 (i) and Assumption 3 (ii), one has

$$ \everymath{\displaystyle }\begin{array}[b]{@{}ll}\gamma_0z_k\leq- \bigl(d^k\bigr)^TH_kd^k\leq -a\big\|d^k\big\|^2\leq-a\delta_0^2, \cr\noalign{\vspace{3pt}} \quad z_k\leq-\frac{1}{\gamma_0}a\delta_0^2,\quad \forall k\in K. \end{array} $$
(12)

Further, using Taylor expansion of f(x k+td k), we can easily conclude that (10a) holds for kK large enough and sufficiently small t>0.

From (8), (12) and Assumption 3, it follows that

where \(\eta=\min\{\gamma_{\omega}, \frac{a\delta_{0}^{2}}{\gamma_{0}}, \omega\in \hat{\varOmega}\}>0\). These inequalities show that there exists a constant \(\underline{t}>0\) such that (10a) and \(g(x^{k}+td^{k}, \omega)\leq0, \omega\in \hat{\varOmega}\) hold for all \(t\in[0,\underline{t}]\) and all kK large enough. □

Lemma 8

liminf k v k =0, where v k is defined by formula (11).

Proof

Suppose by contradiction that liminf k v k ≠0. From (11) and Lemma 2 (i), we have v k>0 and

$$ v_*\stackrel{\triangle}{=}\lim\inf _{k}v_k>0. $$
(13)

Again, by Lemma 4 and Assumption 3, one can assume, without loss of generality, that there exists an infinite index set K such that

$$\everymath{\displaystyle }\begin{array}{@{}ll}\bigl(x^k, d^k, z_k, H_k, v_k; d^{k+1}, z_{k+1}, v_{k+1}\bigr) \cr\noalign{\vspace{3pt}} \quad{}\stackrel{K}{\rightarrow}\bigl(x^*, d^*, z_*, H_*, v_*; d_+^*, z_+^*, v_+^*\bigr), \cr\noalign{\vspace{3pt}} \quad \varOmega_k\equiv\hat{\varOmega},\quad\varOmega_k^b\equiv \hat{\varOmega}^b, \quad \varOmega_{k+1}\equiv\hat{\varOmega}_+,\quad k\in K. \end{array} $$

Since \(\varOmega_{k}^{b}\subseteq\varOmega_{k}\), the solution (d k,z k ) of QCQP(x k,H k ,Ω k ) is also the KKT point and unique solution of QCQP(x k, \(H_{k},\varOmega_{k}^{b}\)) from the KKT conditions (5)–(8) and Lemma 1. Thus, by Lemma 7, one knows that (d ,z ) is the unique solution of QCQP\((x^{*}, H_{*},\hat{\varOmega}^{b})\).

Note that \(v_{k}\stackrel{k\in K}{\longrightarrow}v_{*}>0\) and \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(d^{*},z_{*})\), which together with (11) give d^*≠0 and z_*<0; further, \(t_{k}\stackrel{k\in K}{\longrightarrow}0\) by Lemma 4. Without loss of generality, we assume that \(t_{k}<\beta\min\{\varrho,\underline{t}\}\) for all kK, where \(\underline{t}\) is as given by Lemma 7 and ϱ, β are as given in the algorithm. The fact that t_k<ϱ<1 implies that, for all kK, the line search criterion in Step 2 is not satisfied at the point \(\bar{x}^{k+1}=x^{k}+\frac{t_{k}}{\beta}d^{k}\). By Lemma 7 (ii), (10a) always holds and (10b) is violated at \(\bar{x}^{k+1}\) for kK large enough. Without loss of generality, passing to a subsequence of K if necessary, assume that

$$ g\bigl(\bar{x}^{k+1},\bar{\omega}_k \bigr)>0,\quad\bar{\omega}_k\equiv\bar{\omega},\ \forall k\in K. $$
(14)

Since d^*≠0 and \(t_{k}/\beta<\underline{t}\), by Lemma 7 (ii) we can conclude that \(\bar{\omega}\notin\hat{\varOmega}\). Again, combining t_k<ϱ with Step 3(iii), one gets H_{k+1}=H_k for all kK. Thus, (d^{k+1},z_{k+1}) solves QCQP(x^{k+1},H_k,Ω_{k+1}) for all kK. Note that \(\hat{\varOmega}_{+}\supseteq\hat{\varOmega}^{b}\cup\{\bar{\omega}\} \supset \hat{\varOmega}^{b}\) from Step 3(ii); together with \(x^{k+1}\stackrel{k\in K}{\longrightarrow}x^{*}\) from Lemma 4, we conclude that the limit \((d^{*}_{+}, z^{*}_{+})\) is an optimal solution of QCQP\((x^{*}, H_{*}, \hat{\varOmega}_{+})\). Since \(g(\bar{x}^{k+1}, \bar {\omega })>0\) and \(g(x^{k+1},\bar{\omega})\leq0\) for all kK, using the Taylor expansion we obtain

This also implies that \(\nabla_{x} g(x^{*},\bar{\omega})^{T}d^{*}\geq0\). Again, by Lemma 4, one has \(\bar{x}^{k+1}\stackrel{k\in K}{\longrightarrow}x^{*}\); combining \(g(\bar {x}^{k+1},\bar{\omega})>0\) with \(g(x^{k+1},\bar{\omega})\leq0\) yields \(g(x^{*}, \bar{\omega})=0\). Therefore, (d^*,z_*) is infeasible for QCQP\((x^{*}, H_{*}, \hat{\varOmega}_{+})\). Note that (d^*,z_*) and \((d_{+}^{*}, z_{+}^{*})\) are optimal solutions of QCQP\((x^{*}, H_{*},\hat{\varOmega}^{b})\) and QCQP\((x^{*}, H_{*}, \hat{\varOmega}_{+})\), respectively. According to Lemma 1 (i) and the definition of v_k, we get \(v_{+}^{*}<v_{*}\), which contradicts (13) and \(v_{k+1}\stackrel{k\in K}{\longrightarrow}v_{+}^{*}\). The proof of Lemma 8 is complete. □

Theorem 1

Suppose that Assumptions 1–3 hold. Then there exists an accumulation point x^* of the iteration sequence {x^k} such that x^* is a KKT point of problem SIP_q (2). In this sense, Algorithm A is said to possess weak global convergence.

Proof

The result follows immediately from Lemma 8, Lemma 6 and Assumption 3 (i). □

4 Numerical experiments

In this section, we test the effectiveness of the proposed algorithm (Algorithm A). The numerical experiments are implemented in MATLAB 7.0, and the QCQP subproblem (3) is solved by MOSEK 6.0. For the update of H_k, we adopt the BFGS formula of [14], a slight modification of the BFGS formula in [18], and let H_0 be the identity matrix. For the sake of comparison, we test five problems, which can be found in Refs. [19, 20].
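The exact H_k update follows [14]; as an illustration of how such modified updates preserve positive definiteness, here is the classical Powell-damped BFGS update (not necessarily the precise variant used in the paper):

```python
import numpy as np

def damped_bfgs(H, s, y, theta_bound=0.2):
    """Powell-damped BFGS update: replaces y by the convex combination
    y_bar = theta*y + (1-theta)*H s so that s^T y_bar > 0, which keeps
    the updated matrix symmetric positive definite.  Illustrative only."""
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    theta = 1.0 if sy >= theta_bound * sHs else (1 - theta_bound) * sHs / (sHs - sy)
    y_bar = theta * y + (1 - theta) * Hs
    return H - np.outer(Hs, Hs) / sHs + np.outer(y_bar, y_bar) / (s @ y_bar)

# Even with negative curvature s^T y < 0, the damped update stays SPD:
H1 = damped_bfgs(np.eye(2), np.array([1.0, 0.0]), np.array([-0.5, 0.0]))
print(np.all(np.linalg.eigvalsh(H1) > 0))   # True
```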

In the experiments, we select q=100, so the discretization level of the tested SIP problems is |Ω_q|=101. The parameters used in Algorithm A are given as follows:

$$\everymath{\displaystyle }\begin{array}{@{}ll}\alpha=0.4,\quad \beta=0.8, \quad \gamma_0=2, \quad \gamma_\omega=0.5\gamma_0, \cr\noalign{\vspace{3pt}} \forall\omega\in \varOmega_q, \quad \varrho=1\mathrm{E}-3, \quad \varepsilon_k=0.1+\frac{1}{(k+1)^3}, \cr\noalign{\vspace{3pt}} \sigma_k=\frac {1}{(k+1)^3}. \end{array} $$
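Since ε_k≥0.1 for every k, these schedules satisfy Assumption 3 (iii) with \(\underline{c}=0.1\); a quick check (our own sketch):

```python
# Parameter schedules from the experiments: eps_k = 0.1 + 1/(k+1)^3 and
# sigma_k = 1/(k+1)^3, so sigma_k + eps_k >= 0.1 for all k (Assumption 3(iii)).
eps = [0.1 + 1.0 / (k + 1) ** 3 for k in range(50)]
sigma = [1.0 / (k + 1) ** 3 for k in range(50)]
print(all(s + e >= 0.1 for s, e in zip(sigma, eps)))   # True
```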

Moreover, we use |z_k|≤10^{−6} or ∥d^k∥≤10^{−6} as the stopping criterion for Algorithm A.

To judge the effectiveness of Algorithm A, we report and compare the numerical results of our algorithm with the NFSQP algorithm in [10], the SQP algorithm in [16] and the HSQP algorithm in [19], respectively. For the sake of comparison, we use the same initial point x^0 as given in the above numerical problems, except for cw3. For cw3, the HSQP algorithm uses x^0=(1,1,1)^T, while our algorithm chooses x^0=(−1,−1.5,2)^T as a feasible initial point. The comparison results are reported in Table 1. The detailed iterations of Algorithm A for problems cw2, cw3 and cw14 are given in Table 2. The columns of Tables 1–2 have the following meaning:

Table 1 Comparison results for problems with discretization |Ω q |=101
Table 2 Detailed iteration for Algorithm A

The columns Ni and Nf indicate the numbers of iterations and of objective function evaluations; Ng is the number of "scalar" constraint function evaluations (g(x,ω) for a given x and ω); ∑|Ω_k| is the sum over all iterations of the size of Ω_k; \(|\bar{\varOmega}|\) is the average size of Ω_k, i.e., \(|\bar{\varOmega}|=\sum|\varOmega_{k}|/\)Ni; |Ω_*| is the size of Ω_k at the final iterate; Objective is the objective function value at the final iterate. \(\varOmega_{k}^{\mathrm{index}}\) represents the ordinal indices of Ω_k in Ω_q with |Ω_q|=101.

The column \(|\bar{\varOmega}|\) in Table 1 shows that the average number of constraints used per iteration in the QCQP subproblem is very small relative to the discretization level |Ω_q|=101; this can also be seen from the \(\varOmega_{k}^{\mathrm{index}}\) column in Table 2. The numerical results show that the technique of updating Ω_k reduces the computational cost of Algorithm A to a great extent. From Tables 1 and 2, we find that Algorithm A is effective on the test problems.