Abstract
In this paper, we present a sequential simple quadratically constrained quadratic programming (QCQP) norm-relaxed method for finely discretized semi-infinite optimization problems. At each iteration, the iterate is feasible, and an improved search direction is obtained by solving only one simple QCQP subproblem, in which only a few of the constraints are included. Under some weak conditions, the proposed algorithm possesses weak global convergence. Finally, numerical results show that the proposed method is effective.
1 Introduction
Many problems arising in nonlinear dynamic systems, such as the numerical solution of optimal control problems [1, 2], belong to semi-infinite programming (SIP) [3, 4]. In recent years, many efforts have been devoted to the study of SIP (see, e.g., [5, 6]). A simple form is given by

\[\mathrm{(SIP)}\qquad \min_{x\in R^{n}}\ f(x)\quad \mathrm{s.t.}\ g(x,\omega)\leq 0,\ \forall \omega\in[a,b],\]
where f:R n→R is continuously differentiable and g:R n×[a,b]→R is continuously differentiable with respect to x.
A common approach to solving SIP is the discretization method (see, e.g., [7]). Generally, given any positive integer q, the interval [a,b] can be discretized into the following finite set:

\[\varOmega_{q}=\Bigl\{\omega_{i}=a+\frac{i(b-a)}{q},\ i=0,1,\ldots,q\Bigr\}.\tag{1}\]
Solving the SIP problem then boils down to solving a discretized SIP problem of the form

\[\mathrm{(SIP}_{q})\qquad \min_{x\in R^{n}}\ f(x)\quad \mathrm{s.t.}\ g(x,\omega)\leq 0,\ \forall \omega\in\varOmega_{q}.\tag{2}\]
As the Hausdorff distance between Ω and Ω q goes to zero, the solution of SIP q approaches the solution of SIP. For ease of discussion, in this work we consider only the discretized semi-infinite optimization problem SIP q with a fixed discretization level q. Note that if q becomes very large, the number of constraints of SIP q will be very large, which causes difficulties in the numerical solution. Thus, how to design effective numerical methods for SIP q has attracted the attention of many scholars.
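As a concrete illustration of the discretization just described, the following sketch builds a uniform grid Ω q on [a,b] and checks feasibility of a point by enumerating the grid constraints. The functions `f` and `g` here are our own illustrative stand-ins, not a test problem from the paper.

```python
import numpy as np

# Hypothetical instance: minimize f(x) subject to g(x, w) <= 0 for all w in [a, b].
# f and g below are illustrative stand-ins chosen for this sketch.
def f(x):
    return 0.5 * np.dot(x, x)

def g(x, w):
    return np.sin(w) * x[0] + np.cos(w) * x[1] - 1.0

def discretize(a, b, q):
    """Return the grid Omega_q = {a + i*(b - a)/q : i = 0, ..., q} (q + 1 points)."""
    return a + (b - a) * np.arange(q + 1) / q

def is_feasible(x, omega_q, tol=0.0):
    """x is feasible for SIP_q iff g(x, w) <= tol for every grid point w."""
    return all(g(x, w) <= tol for w in omega_q)

omega_q = discretize(0.0, np.pi, 100)     # |Omega_q| = 101, as in Sect. 4
print(len(omega_q))                       # 101
print(is_feasible(np.zeros(2), omega_q))  # True: g(0, w) = -1 < 0 for every w
```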
In 1998, Lawrence and Tits [8] proposed a feasible SQP algorithm for SIP q , in which, however, two QP subproblems need to be solved at each iteration to obtain the main search direction. Later, combining the technique of updating the constraint index set in QP subproblems [8] with the idea of the norm-relaxed method [9], Jian, Xu and Han [10] presented a norm-relaxed method of feasible directions for SIP q . At each iteration, only one QP subproblem is solved to yield the main search direction. However, in order to achieve superlinear convergence, the parameter η k in the QP subproblem needs to be updated at each iteration.
On the other hand, in recent years, with the spread of tools for computing second derivatives, sequential quadratically constrained quadratic programming (SQCQP) methods have developed rapidly [11, 12, 17]. However, these methods have to compute exact or approximate second derivatives of the problem functions. Based on the norm-relaxed technique and the idea of SQCQP methods [13], Jian, Hu, Tang and Zheng [14] proposed a new feasible SQCQP algorithm, in which the main search direction is obtained from one QCQP subproblem with simple quadratic constraints. This algorithm has the advantage of lower requirements on the parameters, which do not need to be updated. This overcomes, to some extent, the drawback of updating the parameter η k ; its own drawback, however, is the high computational cost and numerical instability brought by the pivoting operation used to calculate the approximate active set.
In order to resolve these problems while retaining the advantages of the SQCQP algorithm [14], we design a sequential simple QCQP algorithm for SIP q with the technique of updating the constraint index set [8, 10]. The proposed algorithm has the following features: (i) the initial point is feasible; (ii) at each iteration, only one simple QCQP subproblem is solved to obtain the search direction; (iii) only a few of the constraints appear in the QCQP subproblem, which greatly reduces the computational cost; (iv) under some mild conditions, global convergence is proved.
The paper is organized as follows: in the next section, we present our algorithm and discuss its properties. In Sect. 3, global convergence is proved. Finally, some preliminary numerical results are reported in Sect. 4.
2 Description of algorithm
For the sake of analysis, denote the feasible set and active set of problem SIP q by

\[X_{q}=\{x\in R^{n}:\ g(x,\omega)\leq 0,\ \forall \omega\in\varOmega_{q}\},\qquad \varOmega_{0}(x)=\{\omega\in\varOmega_{q}:\ g(x,\omega)=0\}.\]
For a point x k∈X q and Ω k ⊆Ω q , we consider the following search-direction subproblem QCQP(x k,H k ,Ω k ):
where γ ω (ω∈{0}∪Ω k ) are all positive constants, H k is a symmetric positive definite matrix, and σ k and ε k are nonnegative constants such that (σ k ,ε k )≠(0,0).
Assumption 1
The functions f and g(⋅,ω):R n→R, ω∈Ω q , are all at least first-order continuously differentiable.
Lemma 1
Suppose that Assumption 1 holds and x k∈X q . Then (i) subproblem (3) has a unique optimal solution; and (ii) (d k,z k ) is an optimal solution of (3) if and only if it is a KKT point of (3).
Proof
(i) Referring to the proof of [14, Lemma 2.1(i)], we conclude that (3) has an optimal solution. Next, we need only prove uniqueness. Assume that (z 1,d 1) and (z 2,d 2) are both optimal solutions to (3); then
These imply that
Note that (3) is a convex program, so the convex combination (λz 1+(1−λ)z 2,λd 1+(1−λ)d 2) with λ∈(0,1) is also an optimal solution to (3). So \(a=\gamma_{0}(\lambda z_{1}+(1-\lambda)z_{2}) +\frac{1}{2}(\lambda d^{1}+(1-\lambda)d^{2})^{T}H_{k}(\lambda d^{1}+(1-\lambda )d^{2})\). Further, we have
Since λ∈(0,1), expanding and simplifying the relation above gives (d 1−d 2)T H k (d 1−d 2)=0. This, together with the positive definiteness of H k , shows that d 1=d 2. Finally, z 1=z 2 follows immediately from the first equality of (4). Thus conclusion (i) is at hand.
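For completeness, the algebra behind the expansion step is the standard identity for the quadratic part of the objective of (3) (which, by the display above, is \(\gamma_{0}z+\frac{1}{2}d^{T}H_{k}d\)):

```latex
\lambda (d^{1})^{T}H_{k}d^{1}+(1-\lambda)(d^{2})^{T}H_{k}d^{2}
-\bigl(\lambda d^{1}+(1-\lambda)d^{2}\bigr)^{T}H_{k}\bigl(\lambda d^{1}+(1-\lambda)d^{2}\bigr)
=\lambda(1-\lambda)\,(d^{1}-d^{2})^{T}H_{k}(d^{1}-d^{2}).
```

Equality of the optimal values forces the left-hand side to vanish, and since λ(1−λ)>0 for λ∈(0,1), this yields (d 1−d 2)T H k (d 1−d 2)=0.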
(ii) The proof is similar to that of [14, Lemma 2.1(ii)].
□
If (d k,z k ) is a KKT point for (3), then there exist multipliers μ k and \(u^{k}=(u_{\omega}^{k}, \omega\in\varOmega_{k})\) such that
Of course, if x k is a KKT point for problem SIP q (2), then there exists a KKT multiplier vector \(\lambda^{k}=(\lambda_{\omega}^{k},\omega\in\varOmega_{q})\) such that
Assumption 2
Suppose that the Mangasarian–Fromovitz constraint qualification (MFCQ) is satisfied by problem (2) at any x∈X q , i.e., there exists a vector d∈R n such that ∇ x g(x,ω)T d<0, ∀ω∈Ω 0(x).
Lemma 2
Suppose that Assumptions 1–2 hold and Ω k ⊆Ω q such that Ω 0(x k)⊆Ω k . Let (d k,z k ) be the optimal solution of (3). Then (i) γ 0 z k +(d k)T H k d k≤0, z k ≤0;
(ii) z k =0⇔d k=0⇔x k is a KKT point for problem (2); and
(iii) If x k is not a KKT point for (2), then z k <0 and ∇f(x k)T d k<0,∇ x g(x k,ω)T d k<0,∀ω∈Ω 0(x k).
Proof
(i) From (5) and (7)–(8), one has
Then, in view of the positive definite property of H k , one has \(z_{k}\leq-\frac{1}{\gamma_{0}}(d^{k})^{T}H_{k}d^{k}\leq0\).
(ii)–(iii) The proof is similar to that of [10, Lemma 2.2]. □
Algorithm A
Discretization: given an appropriately large positive integer q (generally, ≥100), the interval [a,b] is discretized into finite set Ω q by formula (1).
Parameters: \(\alpha\in(0,\frac{1}{2})\), β∈(0,1), 0<ϱ≪1,γ 0, γ ω >0, ω∈Ω q .
Data: x 0∈X q ; choose Ω 0 by the formula Ω 0=Ω 0(x 0)∪{a}∪{b}; H 0∈R n×n is symmetric and positive definite; set k=0.
Step 1. Computation of search direction. Select parameters σ k and ε k such that 0≤(ε k ,σ k )≠(0,0), and solve (3) to obtain a KKT pair \((d^{k}, z_{k}, u_{\varOmega_{k}}^{k}, \mu_{k})\). If z k =0, then x k is a KKT point of (2); stop.
Step 2. Line search. Compute t k , the first value t of the sequence {1,β,β 2,…} that satisfies
Step 3. Updates. (i) Set x k+1=x k+t k d k.
(ii) Pick Ω k+1 such that
where \(\varOmega_{k}^{b}=\{\omega\in\varOmega_{k}|\ u_{\omega}^{k}>0\}\), and \(\bar{\omega}_{k}\) is the first index in Ω q such that (10b) does not hold.
(iii) If t k ≤ϱ and \(\bar{\omega}_{k}\notin\varOmega_{k}\), then set H k+1=H k . Otherwise, compute a new symmetric positive definite approximation H k+1 by some suitable technique.
(iv) Set k:=k+1 and go back to Step 1.
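To fix ideas, the overall control flow of Algorithm A can be sketched as follows. This is a simplified, runnable sketch under loudly stated assumptions: the QCQP subproblem (3) is replaced by a crude stand-in direction (an H-scaled descent step) rather than a call to a real QCQP solver, the working-set and H k updates of Step 3 are omitted, and the line-search test is our Armijo-type reading of (10a); all function names are ours.

```python
import numpy as np

# Stand-in for QCQP(x^k, H_k, Omega_k): returns a pair (d^k, z_k).
# A real implementation would pass subproblem (3) to a QCQP solver.
def solve_subproblem(grad_f, H, gamma0):
    d = -np.linalg.solve(H, grad_f)
    z = grad_f @ d / gamma0          # then gamma0 * z = grad_f^T d <= 0
    return d, z

def algorithm_a(f, grad_f, x0, beta=0.5, alpha=0.25, gamma0=1.0,
                tol=1e-6, max_iter=200):
    x, H = np.asarray(x0, float), np.eye(len(x0))
    for k in range(max_iter):
        # Step 1: search direction from the (stand-in) subproblem.
        d, z = solve_subproblem(grad_f(x), H, gamma0)
        if abs(z) <= tol:            # stopping test |z_k| <= 1e-6, cf. Sect. 4
            return x, k
        # Step 2: backtracking over {1, beta, beta^2, ...} with an
        # Armijo-type decrease test (our stand-in for (10a)).
        t = 1.0
        while f(x + t * d) > f(x) + alpha * t * gamma0 * z:
            t *= beta
        # Step 3: update the iterate (Omega_k and H_k updates omitted here).
        x = x + t * d
    return x, max_iter

f = lambda x: float(x @ x)
grad_f = lambda x: 2.0 * x
x_star, iters = algorithm_a(f, grad_f, [1.0, -2.0])
print(np.linalg.norm(x_star))  # ~0 on this unconstrained toy instance
```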
Remark 1
(i) Including the end-points in Ω 0 during the first iteration often leads to a better initial search direction; see Ref. [8]. As to the update of Ω k+1 in Step 3, we allow additional constraint indices to be added to Ω k+1. Since adjacent constraint functions of SIP q are “sequentially related” when the discretization is fine, Refs. [8, 15, 16] add the approximate active set of “left local maximizers” at x k to Ω k+1.
(ii) When t k ≤ϱ and \(\bar{\omega}_{k}\notin\varOmega_{k}\), the matrix H k is not updated, which is key to the proof of Lemma 8 below.
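Under our reading of Step 3(ii) and Remark 1, the working-set update can be sketched as follows: keep the indices with positive multipliers, add the active set at the new iterate, and add the first grid index at which (10b) failed, if any. The function `g`, the tolerance `eps_active`, and the toy data are illustrative assumptions, not the paper's.

```python
import numpy as np

def update_working_set(g, x_next, omega_q, omega_k, u_k, omega_bar,
                       eps_active=1e-8):
    # Omega_0(x^{k+1}): (approximately) active grid constraints at the new point.
    active = {w for w in omega_q if g(x_next, w) >= -eps_active}
    # Omega_k^b: indices of the old working set with positive multipliers.
    binding = {w for w in omega_k if u_k.get(w, 0.0) > 0.0}
    new_set = active | binding
    if omega_bar is not None:        # first index where the test (10b) failed
        new_set.add(omega_bar)
    return new_set

g = lambda x, w: w * x - 1.0         # toy scalar constraint g(x, w) = w*x - 1
omega_q = [0.0, 0.5, 1.0]
out = update_working_set(g, 1.0, omega_q, {0.5}, {0.5: 0.2}, 0.0)
print(sorted(out))  # [0.0, 0.5, 1.0]: w=1 active, w=0.5 binding, w=0 added
```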
Using Taylor expansions of f(x k+td k) and g(x k+td k,ω), as well as g(x k,ω)<0,∀ω∈Ω q ∖Ω 0(x k), by Lemma 2 (iii) and Assumption 1 we obtain the following lemma, which shows that our algorithm is well defined.
Lemma 3
For each k, the inequalities (10a)–(10b) are always satisfied for all positive and sufficiently small t, so the line search at Step 2 yields a step size t k =β j for some j=j(k).
3 Global convergence
In this section, we analyze the global convergence of Algorithm A. For this, the following basic assumption is necessary.
Assumption 3
(i) The iteration sequence {x k} is bounded; (ii) the sequence {H k } of matrices is symmetric and uniformly positive definite, i.e., there exist two positive constants a and b such that a∥d∥2≤d T H k d≤b∥d∥2, ∀d∈R n, ∀k; and (iii) parameter sequences {σ k } and {ε k } are bounded, and there exists a constant \(\underline{c}>0\) such that \(\sigma_{k}+\varepsilon_{k}\geq\underline{c}\) holds for all k large enough.
Lemma 4
The sequence {(d k,z k )} is bounded and lim k→∞ t k d k=0.
Proof
(i) The boundedness of {d k} follows from the first constraint of (3), Lemma 2 (i), and Assumptions 1 and 3. Further, the boundedness of {z k } follows from \(0\geq z_{k}\geq \frac{1}{\gamma_{0}}\nabla f(x^{k})^{T}d^{k}\).
(ii) From Assumption 3 and the line search (10a), it follows that {f(x k)} is monotonically decreasing and hence converges. On the other hand, from (10a), the first constraint of (3), Lemma 2 (i) and Assumption 3(ii), we easily see that lim k→∞ t k d k=0. □
Lemma 5
The multiplier sequences {μ k } and \(\{u^{k}\stackrel{\triangle }{=}(u_{\varOmega_{k}}^{k}, 0_{\varOmega_{q}\backslash \varOmega_{k}})\}\) are bounded.
Proof
From the KKT condition (6) and z k ≤0, one has \(\gamma_{0}=\gamma_{0}\mu_{k}+\sum_{\omega\in\varOmega _{k}}u_{\omega}^{k}(\gamma_{\omega}\sigma_{k}-2\varepsilon_{k} z_{k})\geq\gamma_{0}\mu_{k}\), which shows that 0≤μ k ≤1. Thus {μ k } is bounded. Following the proof of [10, Lemma 3.2], we conclude the boundedness of {u k}. □
For the sake of convenience, denote the negative of the optimal value of (3) by
Lemma 6
Let K be a given infinite iterative index set. Then (i) \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(0,0)\) if and only if \(v_{k}\stackrel{k\in K}{\longrightarrow}0\); and (ii) if \(d^{k}\stackrel{k\in K}{\longrightarrow}0\), then all accumulation points of {x k} K are KKT points for SIP q (2). Further, for a given limit point x ∗ of {x k} K , there exists an infinite index set K′⊆K such that {(x k,u k/μ k )} K′ converges to the KKT pair (x ∗,λ ∗) with lim k∈K′ μ k =μ ∗>0.
Proof
(i) If \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(0,0)\), then \(v_{k}\stackrel{k\in K}{\longrightarrow}0\) follows from (11). Conversely, combining (11), Lemma 2 (i) and Assumption 3 (ii), one has \(v_{k}\geq\frac{1}{2}a\|d^{k}\|^{2}\geq0\) for k∈K. Thus, if \(v_{k}\stackrel{k\in K}{\longrightarrow}0\), then \(d^{k}\stackrel{k\in K}{\longrightarrow}0\). Further, \(z_{k}\stackrel{k\in K}{\longrightarrow}0\) follows from ∇f(x k)T d k≤γ 0 z k ≤0. (ii) The proof is similar to that of [10, Lemma 3.4]. □
Lemma 7
Let (d k,z k ) be the unique solution of QCQP (x k,H k ,Ω k ), and let K be an infinite index set such that \((x^{k},H_{k},d^{k},z_{k})\stackrel {k\in K}{\longrightarrow}(x^{*},H_{*},d^{*},z_{*}), \varOmega_{k}\equiv\hat{\varOmega},\ \forall k\in K\). Then (i) (d ∗,z ∗) is the unique solution of QCQP \((x^{*}, H_{*}, \hat{\varOmega})\); and (ii) If d ∗≠0, then there exists a constant \(\underline{t}>0\) such that (10a) and \(g(x^{k}+td^{k}, \omega)\leq0, \omega\in \hat{\varOmega}\) hold for all \(t\in [0,\underline{t}]\) and all k∈K large enough.
Proof
(i) Using the boundedness of the multiplier sequence {μ k ,u k} K , by passing to the limit in the KKT conditions (5)–(8), the conclusion of this lemma is at hand.
(ii) In view of lim k∈K d k=d ∗≠0, there exists a constant δ 0>0 such that ∥d k∥≥δ 0,∀k∈K. Therefore, by Lemma 2 (i) and Assumption 3 (ii), one has
Further, using Taylor expansion of f(x k+td k), we can easily conclude that (10a) holds for k∈K large enough and sufficiently small t>0.
From (8), (12) and Assumption 3, it follows that
where \(\eta=\min\{\gamma_{\omega}, \frac{a\delta_{0}^{2}}{\gamma_{0}}, \omega\in \hat{\varOmega}\}>0\). These inequalities show that there exists a constant \(\underline{t}>0\) such that (10a) and \(g(x^{k}+td^{k}, \omega)\leq0, \omega\in \hat{\varOmega}\) hold for all \(t\in[0,\underline{t}]\) and all k∈K large enough. □
Lemma 8
liminf k v k =0, where v k is defined by formula (11).
Proof
Suppose by contradiction that liminf k v k ≠0. From (11) and Lemma 2 (i), we have v k>0 and
Again, by Lemma 4 and Assumption 3, one can assume, without loss of generality, that there exists an infinite index set K such that
Since \(\varOmega_{k}^{b}\subseteq\varOmega_{k}\), the solution (d k,z k ) of QCQP(x k,H k ,Ω k ) is also a KKT point and the unique solution of QCQP(x k, \(H_{k},\varOmega_{k}^{b}\)), by the KKT conditions (5)–(8) and Lemma 1. Thus, by Lemma 7, (d ∗,z ∗) is the unique solution of QCQP\((x^{*}, H_{*},\hat{\varOmega}^{b})\).
Noting that \(v_{k}\stackrel{k\in K}{\longrightarrow}v_{*}>0\) and \((d^{k},z_{k})\stackrel{k\in K}{\longrightarrow}(d^{*},z_{*})\), together with (11), we have d ∗≠0 and z ∗<0; further, \(t_{k}\stackrel{k\in K}{\longrightarrow}0\) from Lemma 4. Without loss of generality, we assume that \(t_{k}<\beta\min\{\varrho,\underline{t}\}\) for all k∈K, where \(\underline{t}\) is as given by Lemma 7 and ϱ,β>0 are as given in the algorithm. The fact that t k <ϱ<1 implies that for all k∈K the line search criterion at Step 2 is not satisfied at the point \(\bar{x}^{k+1}=x^{k}+\frac{t_{k}}{\beta}d^{k}\). By Lemma 7 (ii), one knows that (10a) always holds and (10b) is violated at \(\bar{x}^{k+1}\) for k∈K large enough. Without loss of generality, passing to a subsequence of K if necessary, assume that
Since d ∗≠0 and \(t_{k}/\beta<\underline{t}\), by Lemma 7 (ii), we can conclude that \(\bar{\omega}\notin\hat{\varOmega}\). Again, combining t k <ϱ and Step 3(iii), one gets H k+1=H k for all k∈K. Thus, (d k+1,z k+1) solves QCQP(x k+1,H k ,Ω k+1) for all k∈K. Note that \(\hat{\varOmega}_{+}\supseteq\hat{\varOmega}^{b}\cup\{\bar{\omega}\} \supset \hat{\varOmega}^{b}\) from Step 3(ii); together with \(x^{k+1}\stackrel{k\in K}{\longrightarrow}x^{*}\) from Lemma 4, we can also conclude that the limit \((d^{*}_{+}, z^{*}_{+})\) is an optimal solution of QCQP\((x^{*}, H_{*}, \hat{\varOmega}_{+})\). Since \(g(\bar{x}^{k+1}, \bar {\omega })>0\) and \(g(x^{k+1},\bar{\omega})\leq0\) for all k∈K, using the Taylor expansion, we obtain
This also implies that \(\nabla_{x} g(x^{*},\bar{\omega})^{T}d^{*}\geq0\). Again, by Lemma 4, one has \(\bar{x}^{k+1}\stackrel{k\in K}{\longrightarrow}x^{*}\). Combining \(g(\bar {x}^{k+1},\bar{\omega})>0\) with \(g(x^{k+1},\bar{\omega})\leq0\), one has \(g(x^{*}, \bar{\omega})=0\). Therefore, (d ∗,z ∗) is infeasible for QCQP \((x^{*}, H_{*}, \hat{\varOmega}_{+})\). Note that (d ∗,z ∗) and \((d_{+}^{*}, z_{+}^{*})\) are optimal solutions for QCQP\((x^{*}, H_{*},\hat{\varOmega}^{b})\) and QCQP\((x^{*}, H_{*}, \hat{\varOmega}_{+})\), respectively. According to Lemma 1 (i) and the definition of v k , we get \(v_{+}^{*}<v_{*}\), which contradicts (13) and \(v_{k+1}\stackrel{k\in K}{\longrightarrow}v_{+}^{*}\). The proof of Lemma 8 is complete. □
Theorem 1
Suppose that Assumptions 1–3 hold. Then there exists an accumulation point x ∗ of the iteration sequence {x k} such that x ∗ is a KKT point for problem SIP q (2). In this sense, Algorithm A is said to possess weak global convergence.
Proof
The result follows immediately from Lemma 8, Lemma 6 and Assumption 3 (i). □
4 Numerical experiments
In this section, we test the effectiveness of the proposed algorithm (Algorithm A). The numerical experiments are implemented in MATLAB 7.0, and the QCQP subproblem (3) is solved with MOSEK 6.0. For the update of H k , we adopt the BFGS formula in [14], which is a slight modification of the BFGS formula in [18], and let H 0 be the identity matrix. For the sake of comparison, we test five problems, which can be found in Refs. [19, 20].
In the experiments, we select q=100, which means the discretization level of the tested SIP problems is |Ω q |=101. The parameters used in Algorithm A are given as follows:
Moreover, we use |z k |≤10−6 or ∥d k∥≤10−6 as the stopping criterion for Algorithm A.
To judge the effectiveness of Algorithm A, we report and compare the numerical results of our algorithm with the NFSQP algorithm in [10], the SQP algorithm in [16], and the HSQP algorithm in [19]. For the sake of comparison, we use the same initial point x 0 as given in the above numerical problems, except for cw3. For cw3, the HSQP algorithm uses x 0=(1,1,1)T, while our algorithm chooses x 0=(−1,−1.5,2)T as a feasible initial point. The comparison results are reported in Table 1, and the detailed iterations of Algorithm A for problems cw2, cw3 and cw14 are given in Table 2. The columns of Tables 1–2 have the following meaning:
The columns Ni and Nf indicate the numbers of iterations and objective function evaluations; Ng is the number of “scalar” constraint function evaluations (g(x,ω) for a given x and ω); ∑|Ω k | is the sum over all iterations of the size of Ω k ; \(|\bar{\varOmega}|\) is the average size of Ω k , i.e., \(|\bar{\varOmega}|=\sum|\varOmega_{k}|/\)Ni; |Ω ∗| is the size of Ω k at the final iterate; Objective is the objective function value at the final iterate. \(\varOmega_{k}^{\mathrm{index}}\) lists the ordinal numbers of the indices in Ω k with |Ω q |=101.
The column \(|\bar{\varOmega}|\) in Table 1 shows that the average number of constraints used per iteration in the QCQP subproblem is very small relative to the discretization level |Ω q |=101; this can also be seen from the \(\varOmega_{k}^{\mathrm{index}}\) column in Table 2. The numerical results show that the technique of updating Ω k reduces the computational cost of Algorithm A to a great extent. From Tables 1 and 2, we find that Algorithm A is effective for the test problems.
References
Balaram, B., Narayanan, M.D., Rajendrakumar, P.K.: Optimal design of multi-parametric nonlinear systems using a parametric continuation based genetic algorithm approach. Nonlinear Dyn. 67(2), 1669–1681 (2012)
Zhang, L.M., Gao, H.T., Chen, Z.Q.: Multi-objective global optimal parafoil homing trajectory optimization via Gauss pseudospectral method. Nonlinear Dyn. (2012). doi:10.1007/s11071-012-0586-9
Xia, Y., Chan, K.W.: Dynamic constrained optimal power flow using semi-infinite programming. IEEE Trans. Power Syst. 21(3), 1455–1457 (2006)
Körkel, S., Kostina, E., Bock, H.G., Schlöder, J.P.: Numerical methods for optimal control problems in design of robust optimal experiments for nonlinear dynamic processes. Optim. Methods Softw. 19(3–4) (2004)
Hettich, R., Kortanek, K.O.: Semi-infinite programming: theory, methods, and applications. SIAM Rev. 35, 380–429 (1993)
Lopez, M., Still, G.: Semi-infinite programming (invited review). Eur. J. Oper. Res. 180, 491–518 (2007)
Hettich, R.: Discretization methods for the solution of semi-infinite programming problems. J. Optim. Theory Appl. 71, 85–103 (1991)
Lawrence, C.T., Tits, A.L.: Feasible sequential quadratic programming for finely discretized problems from SIP. In: Reemtsen, R., Rückmann, J.-J. (eds.) Semi-Infinite Programming. Kluwer Academic, Norwell (1998)
Cawood, M., Kostreva, M.: Norm-relaxed method of feasible directions for solving nonlinear programming problems. J. Optim. Theory Appl. 83, 311–320 (1994)
Jian, J.-B., Xu, Q.-J., Han, D.-L.: A norm-relaxed method of feasible directions for finely discretized problems from semi-infinite programming. Eur. J. Oper. Res. 186, 41–62 (2008)
Fukushima, M., Luo, Z., Tseng, P.: A sequential quadratically constrained quadratic programming method for differentiable convex minimization. SIAM J. Optim. 13, 1098–1119 (2003)
Solodov, M.: On the sequential quadratically constrained quadratic programming methods. Math. Oper. Res. 29, 64–79 (2004)
Jian, J.-B.: New sequential quadratically constrained quadratic programming method of feasible directions and its convergence rate. J. Optim. Theory Appl. 129, 109–130 (2006)
Jian, J.-B., Hu, Q.-J., Tang, C.-M., Zheng, H.-Y.: A sequential quadratically constrained quadratic programming method of feasible directions. Appl. Math. Optim. 56, 343–363 (2007)
Panier, E.R., Tits, A.L.: A globally convergent algorithm with adaptively refined discretization for semi-infinite optimization problems arising in engineering design. IEEE Trans. Autom. Control 34, 903–908 (1989)
Zhou, J.L., Tits, A.L.: An SQP algorithm for finely discretized continuous minimax problems and other minimax problems with many objective functions. SIAM J. Optim. 6, 461–487 (1996)
Jian, J.-B.: Fast Algorithm for Smooth Constrained Optimization–Theoretical Analysis and Numerical Experiment. Science Press, Beijing (2010)
Pantoja, J.F.A. de O., Mayne, D.Q.: Exact penalty function algorithm with simple updating of the penalty parameter. J. Optim. Theory Appl. 69(3), 441–467 (1991)
Tan, Y.-Y.: Researches on hybrid method for semi-infinite nonlinear programming. Ph.D. Thesis, Institute of Computational Mathematics, Chinese Academy of Sciences (2000)
Coope, I.D., Watson, G.A.: A projected Lagrangian algorithm for semi-infinite programming. Math. Program. 32, 337–356 (1985)
Acknowledgements
The authors thank the anonymous reviewers for their valuable comments and suggestions on this manuscript. This work is supported by the National Natural Science Foundation of China (No. 11271086), the Guangxi Natural Science Foundation of China (No. 2011GXNSFD018002), the Guangxi Education Office Research Project (201106LX322), and the Innovation Group of Talents Highland of Guangxi higher schools.
Xu, QJ., Jian, JB. A nonlinear norm-relaxed method for finely discretized semi-infinite optimization problems. Nonlinear Dyn 73, 85–92 (2013). https://doi.org/10.1007/s11071-013-0768-0