Abstract
In this paper, we use an interior point method of projective type, namely the Ye-Lustig variant applied to linear programming, to compute an initial dual solution of a convex nonlinear program. We then propose a modification of this algorithm that reduces its number of iterations and its computing time. The numerical tests confirm that the modified algorithm is robust.
1 Introduction
Nonlinear programming \((\textit{NLP})\) is a model that describes many real applications; it can be found, for instance, in control theory and combinatorial optimization. As a research subject it is intensively studied, in particular the problem of initialization in optimization [1–8].
The choice of the starting primal and dual point is an important practical issue with a significant effect on the robustness of the algorithm. A poor choice is a point \((x^{0},y^{0},s^{0})\) satisfying only the minimal conditions \(x^{0}>0\), \(s^{0}>0\). In interior point methods, the successive iterates must be strictly feasible. Consequently, a major concern is to find an initial primal feasible solution \(x^{0}\) [7, 9]. The object of this paper is the dual counterpart: finding an initial dual feasible point \((y^{0},s^{0})\).
A nonlinear program in its standard form is of type:
where \(f(x)\) is a real nonlinear function, \(A\in \mathbb {R}^{m\times n}\) with \(rg(A)=m<n\), \(c,\ x\in \mathbb {R}^{n}\) and \(y,~b\in \mathbb {R}^{m}\), and its dual problem
where \(L(x,y,s)\) is the Lagrangian function and \(s\in \mathbb {R}^{n}\), \(y\in \mathbb {R}^{m}\) are the dual vectors.
Some notations are used throughout the paper. \(\mathfrak {R}^{n}\), \(\mathfrak {R}_{+}^{n}\) and \(\mathfrak {R}_{++}^{n}\) denote the set of vectors with \(n\) components, the set of nonnegative vectors and the set of positive vectors, respectively. \(\mathfrak {R}^{n\times n}\) denotes the set of \(n\times n\) real matrices, \(\left\| \cdot \right\| _{2}\) denotes the Euclidean norm, and \(e=(1,\ldots ,1)^{t}\) is the vector of ones in \(\mathfrak {R}^{n}\).
2 Presentation of the problem
The strict dual feasibility problem associated with the problem \((\textit{CNLD})\) is to find a vector \((y,s)\) such that:
We set \(y=y^{+}-y^{-}\), where \(y^{+}>0,~y^{-}>0\).
The problem \((DF)\) is equivalent to the following problem
One way to solve the strict dual feasibility problem is to introduce an additional variable \(\xi \) as follows:
where \(D\) is the \(n\times (n+2m)\) matrix defined by \(D=(A^{t},~-A^{t},~I)\), \(a\in \mathfrak {R}_{++}^{n+2m}\) is arbitrary and \(d=(y^{+},~y^{-},~s)\in \mathfrak {R}_{++}^{n+2m}\).
The problem \((P_{\xi })\) can be written as the following linear program:
where \(c{^{\prime \prime }}\) \(\in \mathfrak {R}^{n+2m+1}\) is the vector defined by:
\(D{^{\prime }}\) is the \(n\times (n+2m+1)\) matrix defined by \(D{^{\prime }}=\left( D,~\nabla f(x)-Da\right) \) and \(l\in \mathfrak {R}^{n+2m+1}\) is the vector \(l=\left( d,~\xi \right) \).
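As an illustration of this construction, the following sketch builds \(D\), \(D^{\prime }\) and a starting point of \((LP_{\xi })\) on hypothetical data (`A` and `grad_f`, the latter standing for the fixed vector \(\nabla f(x)\)). With \(a=e\), the point \(l^{0}=(a,1)\) satisfies the equality constraint \(D^{\prime }l^{0}=\nabla f(x)\) by construction, which is the point of adding the variable \(\xi \).

```python
# Sketch of Sec. 2 on a hypothetical instance (A, grad_f are made up).
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                       # rg(A) = m < n
A = rng.standard_normal((m, n))
grad_f = rng.standard_normal(n)   # stands for the fixed vector grad f(x)

# D is the n x (n+2m) matrix (A^t, -A^t, I)
D = np.hstack([A.T, -A.T, np.eye(n)])

a = np.ones(n + 2 * m)            # here a = e (a is arbitrary in R_{++})
# D' = (D, grad_f - D a) is n x (n+2m+1)
Dp = np.hstack([D, (grad_f - D @ a)[:, None]])

# l0 = (a, 1), i.e. d = a and xi = 1, satisfies D' l0 = grad f(x):
l0 = np.append(a, 1.0)
print(np.allclose(Dp @ l0, grad_f))  # True
```

This is why the algorithm below can start from \(d^{0}=e\), \(\xi _{0}=1\) without any preliminary phase.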
To solve the problem \((LP_{\xi })\), we use the Ye-Lustig variant of the projective interior point method [5].
Lemma 1
[5] \(d^{*}\) is a solution of problem \((DF)\) if and only if \((d^{*},~\xi )\) is an optimal solution of problem \((LP_{\xi })\) with \(d^{*}\in \mathfrak {R}_{++}^{n+2m}\) and \(\xi \) sufficiently small.
To compute the optimal solution of problem \((LP_{\xi })\), we use only the second phase of the projective interior point method. The corresponding algorithm is:
3 Algorithm for solving (\(LP_{_{\xi }}\))
Description of the algorithm
(a) Initialization: \(\varepsilon >0\) fixed, \(d^{0}=e\), \(\xi _{0}=1\) and \(k=0\).
If \(\left\| Dd^{0}-\nabla f(x)\right\| _{2}<\varepsilon \), stop: \(d^{0}\) is an \(\varepsilon \)-approximate solution of \((DF)\).
If not, go to (b).
(b) If \(\xi _{k}<\varepsilon \), stop: \(d^{k}\) is an \(\varepsilon \)-approximate solution of \((DF)\).
If not, go to (c).
(c) Step k
Determine:

- \(G_{k}=diag(l^{k}),\quad D^{\prime }=\left[ D,~\nabla f(x)-Da\right] ,\quad r=\frac{1}{\sqrt{(n+2m+1)(n+2m+2)}},\)
- \(C_{k}=\left[ D^{\prime }G_{k},~-\nabla f(x)\right] ,\)
- \(P_{k}=\left[ I-C_{k}^{t}(C_{k}C_{k}^{t})^{-1}C_{k}\right] \left[ G_{k}c^{\prime \prime },~-c^{\prime \prime t}l^{k}\right] ^{t}.\)

Compute:

- \(z^{k+1}=\frac{e_{n+2m+2}}{n+2m+2}-\alpha _{k}r\frac{P_{k}}{\left\| P_{k}\right\| _{2}},\)
- \(l^{k+1}=\frac{1}{z^{k+1}\left[ n+2m+2\right] }G_{k}\,z^{k+1}\left[ 1:n+2m+1\right] .\)

Take:

- \(\xi _{k+1}=l^{k+1}\left[ n+2m+1\right] ,\) and go to (b).
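The projection step (c) above can be sketched numerically as follows. This is a minimal sketch on hypothetical data (`A`, `grad_f`), under assumptions not fixed by the text: \(a=e\), a constant damping \(\alpha _{k}=0.9\), and \(c^{\prime \prime }=(0,\ldots ,0,1)^{t}\), i.e. the objective of \((LP_{\xi })\) is taken to be the minimization of \(\xi \).

```python
# One iteration of step (c), Ye-Lustig variant; hypothetical data.
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 4
A = rng.standard_normal((m, n))
grad_f = rng.standard_normal(n)

D = np.hstack([A.T, -A.T, np.eye(n)])           # n x (n+2m)
a = np.ones(n + 2 * m)                          # assumed a = e
Dp = np.hstack([D, (grad_f - D @ a)[:, None]])  # D' : n x (n+2m+1)

N = n + 2 * m + 1                               # dimension of l = (d, xi)
c2 = np.zeros(N); c2[-1] = 1.0                  # assumed c'': minimize xi

l = np.ones(N)                                  # l^0 = (e, 1)
alpha = 0.9                                     # assumed damping alpha_k
r = 1.0 / np.sqrt(N * (N + 1))                  # radius of the step

G = np.diag(l)                                  # G_k = diag(l^k)
C = np.hstack([Dp @ G, -grad_f[:, None]])       # C_k : n x (n+2m+2)
h = np.append(G @ c2, -c2 @ l)                  # [G_k c'', -c''^t l^k]^t
# P_k: projection of h onto the null space of C_k
P = h - C.T @ np.linalg.solve(C @ C.T, C @ h)

z = np.ones(N + 1) / (N + 1) - alpha * r * P / np.linalg.norm(P)
l_next = (G @ z[:N]) / z[N]                     # back to the original space
xi_next = l_next[-1]                            # new value of xi
```

By construction \(C_{k}P_{k}=0\), and the choice of \(r\) keeps \(z^{k+1}\), hence \(l^{k+1}\), strictly positive.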
4 Modified algorithm
At iteration \(k\), the original algorithm gives a strictly feasible solution \((d^{k},\xi _{k})\). To find the next iterate \((d^{k+1},\xi _{k+1})\), we search for a vector \(w^{k}\) such that \(d^{k}+w^{k}\) is a solution of problem \((DF)\), i.e.
and
\((1)\) is equivalent to \(Dd^{k}=\nabla f(x)-\xi _{k}q,\) where
\((3)\) is equivalent to the following convex quadratic optimization problem
By the optimality conditions, we obtain the following system
we obtain
\(\theta =-(DD^{t})^{-1}\xi _{k}q\)
and we deduce
\(w^{k}=-D^{t}\theta =D^{t}(DD^{t})^{-1}\xi _{k}q\).
The computation of \(w^{k}\) requires the inverse of the matrix \((DD^{t})\), which is undesirable in practice. To avoid it, we proceed as follows: let
which is equivalent to the following symmetric positive definite linear system:
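As a numerical sketch of this step (hypothetical data; `grad_f` stands for \(\nabla f(x)\)): since \(rg(D)=n\), the matrix \(DD^{t}\) is symmetric positive definite, so the system can be solved by a Cholesky factorization instead of forming \((DD^{t})^{-1}\).

```python
# Sketch of Sec. 4: compute w^k = D^t (D D^t)^{-1} xi_k q by one
# Cholesky solve of the SPD system (D D^t) u0 = q. Hypothetical data.
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 4
A = rng.standard_normal((m, n))
D = np.hstack([A.T, -A.T, np.eye(n)])    # n x (n+2m), full row rank

grad_f = rng.standard_normal(n)
d = np.ones(n + 2 * m)                   # current iterate d^k > 0
xi = 0.5                                 # current xi_k
q = (grad_f - D @ d) / xi                # from D d^k = grad f(x) - xi_k q

M = D @ D.T                              # SPD since rg(D) = n
L = np.linalg.cholesky(M)                # M = L L^t
u0 = np.linalg.solve(L.T, np.linalg.solve(L, q))   # (D D^t) u0 = q
w = D.T @ (xi * u0)                      # w^k = D^t u^k with u^k = xi_k u0

print(np.allclose(D @ (d + w), grad_f))  # True: feasibility is restored
```

The factorization of \(DD^{t}\) is done once; only the triangular solves are repeated across iterations.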
Proposition 1
Let \((d^{k},\xi _{k})\) be a strictly feasible solution of \((P_{\xi })\). If \(\max \left| v_{i}^{k}\right| <1\), then:
where
Proof
1. \(\ D(d^{k}+w^{k})=\nabla f(x)\)
since \(w^{k}\) is a solution of \((\textit{COP})\), we have
\(w^{k}=D^{t}u^{k}\) and \(u^{k}=\xi _{k}u^{0}\)
2. \(d^{k}+w^{k}>0\)
we have
so, \(D(\textit{diag}(d^{k})e_{n+2m}+w^{k})=\nabla f(x)\)
then,
as \(d^{k}>0\) then,
for \(d^{k}+w^{k}>0,\)
let \(v^{k}=-\textit{diag}(d^{k})^{-1}w^{k}\). The assumption \(\max \left| v_{i}^{k}\right| <1\) gives \(e_{n+2m}-v^{k}>0\), that is, \(e_{n+2m}+\textit{diag}(d^{k})^{-1}w^{k}>0\). Multiplying by \(\textit{diag}(d^{k})>0\), we obtain \(d^{k}+w^{k}>0\). \(\square \)
4.1 The modified algorithm
Description of the algorithm
(a \({^{\prime }}\) ) Initialization: \(\varepsilon >0\) is fixed, \(d^{0}=e,~\xi _{0}\) \(=1\) and \(k=0\)
If \(\left\| Dd^{0}-\nabla f(x)\right\| _{2}<\varepsilon \), Stop: \(d^{0}\) is an \(\varepsilon \)-approximate solution of \((DF)\).
else
compute \(u^{0}\), the solution of the linear system
(b \({^{\prime }}\) )
If \(\xi _{k}<\varepsilon \)
Stop: \(d^{k}\) is an \(\varepsilon \)-approximate solution of \((DF)\).
else
Compute
If \(\max \left| v_{i}^{k}\right| <1,\) stop: \( d^{k}+D^{t}u^{k}\) is an \(\varepsilon \)-approximate solution of \((DF)\).
else
go to (c \({^{\prime }})\)
(c \({^{\prime }}\) ) is identical to step (c) of the original algorithm.
(d \({^{\prime }}\) ) Take \(k=k+1\) and go to (b \({^{\prime }}\) ).
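A minimal sketch of the cheap test performed in (b\({^{\prime }}\)), on hypothetical data, with `u0` the precomputed solution of the linear system of (a\({^{\prime }}\)) (obtained here with a dense solve for simplicity):

```python
# Sketch of step (b') of the modified algorithm; hypothetical data.
import numpy as np

def modified_step(D, d, xi, u0):
    """Try the cheap corrected point; return it if Proposition 1 applies."""
    w = D.T @ (xi * u0)            # w^k = D^t u^k with u^k = xi_k u^0
    v = -w / d                     # v^k = -diag(d^k)^{-1} w^k
    if np.max(np.abs(v)) < 1.0:    # test of Proposition 1: d^k + w^k > 0
        return d + w
    return None                    # otherwise perform a projective step (c')

rng = np.random.default_rng(3)
m, n = 2, 4
A = rng.standard_normal((m, n))
D = np.hstack([A.T, -A.T, np.eye(n)])
d = np.ones(n + 2 * m)                         # current iterate d^k > 0
grad_f = D @ d + 0.1 * rng.standard_normal(n)  # d is nearly feasible
xi = 1.0
q = (grad_f - D @ d) / xi                      # D d^k = grad f(x) - xi_k q
u0 = np.linalg.solve(D @ D.T, q)               # precomputed in step (a')

cand = modified_step(D, d, xi, u0)
if cand is not None:
    # candidate is strictly feasible: D(d^k + w^k) = grad f(x), d^k + w^k > 0
    print((cand > 0).all() and np.allclose(D @ cand, grad_f))  # True
```

When the test fails, the modified algorithm simply falls back to the projective step, so it is never slower than the original in terms of iterations.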
5 Numerical tests
The algorithm has been tested on some benchmark problems taken from [7, 9]. The implementation was written in Dev-C++ and we took \(\varepsilon =10^{-8}\).
Example 1 [9]
where
In this example, we find the dual initial point \((s^{0},y^{0})\)
\(s^{0}=\left( 29.824284~\ 10.732446~\ 0.544773~\ 1.013929~\ 0.488309\right) ^{t}\)
\(y^{0}=\left( 20.822601~\ 5.717161~\ 3.343051\right) \)
The experimental results are reported in Table 1.
Example 2 [7]
In this example, we find the dual initial point \((s^{0},y^{0})\)
The experimental results are reported in Table 2.
Example 3 [9]
where
In this example, we find the dual initial point \((s^{0},y^{0})\)
The experimental results are reported in Table 3.
Example 4 (Erikson’s problem 1980)
We consider the following convex problem
where \(a_{i}\in \mathfrak {R}_{++}\) and \(b_{i}\in \mathfrak {R}\) are fixed. We have tested this example for different values of \(n\), \(a_{i}\) and \(b_{i}\).
| Algorithm | Number of iterations | Time (s) |
|---|---|---|
| \(a_{i}=1,~b_{i}=6\) | | |
| \(\quad n=10\) | | |
| Ye-Lustig variant algorithm | 32 | 0.03 |
| Modified algorithm | 23 | 0.02 |
| \(\quad n=100\) | | |
| Ye-Lustig variant algorithm | 29 | 0.02 |
| Modified algorithm | 21 | 0.02 |
| \(\quad n=300\) | | |
| Ye-Lustig variant algorithm | 27 | 0.02 |
| Modified algorithm | 20 | 0.01 |
| \(\quad n=500\) | | |
| Ye-Lustig variant algorithm | 26 | 0.01 |
| Modified algorithm | 17 | 0.01 |
| \(a_{i}=2,~b_{i}=4\) | | |
| \(\quad n=10\) | | |
| Ye-Lustig variant algorithm | 28 | 0.03 |
| Modified algorithm | 17 | 0.02 |
| \(\quad n=100\) | | |
| Ye-Lustig variant algorithm | 25 | 0.03 |
| Modified algorithm | 15 | 0.01 |
| \(\quad n=300\) | | |
| Ye-Lustig variant algorithm | 23 | 0.02 |
| Modified algorithm | 13 | 0.01 |
| \(\quad n=500\) | | |
| Ye-Lustig variant algorithm | 21 | 0.02 |
| Modified algorithm | 11 | 0.01 |
| \(a_{i}=10,~b_{i}=1\) | | |
| \(\quad n=10\) | | |
| Ye-Lustig variant algorithm | 32 | 0.04 |
| Modified algorithm | 25 | 0.02 |
| \(\quad n=100\) | | |
| Ye-Lustig variant algorithm | 31 | 0.03 |
| Modified algorithm | 23 | 0.01 |
| \(\quad n=300\) | | |
| Ye-Lustig variant algorithm | 31 | 0.03 |
| Modified algorithm | 23 | 0.01 |
| \(\quad n=500\) | | |
| Ye-Lustig variant algorithm | 28 | 0.02 |
| Modified algorithm | 16 | 0.01 |
6 Conclusion
With our modification, we have improved the numerical behavior of the algorithm by reducing the number of iterations needed to find an initial strictly feasible dual solution of the problem \((\textit{CNLP})\). It is also possible to apply this idea to the search for an optimal solution of the \((\textit{CNLP})\) problem.
References
Benterki, D.: Résolution des problèmes de programmation semi-définie par la méthode de réduction du potentiel. Université Ferhat Abbas, Sétif, Thèse de Doctorat (2004)
Djeffal, E.A., Djeffal, L.: Feasible short-step interior point algorithm for linear complementarity problem based on kernel function. Adv. Model. Optim. 15(2), 157–172 (2013)
Djeffal, E.A.: Etude numérique d’une méthode de trajectoire centrale pour la programmation convexe sous contraintes linéaires. Université Hadj Lakhdar, Batna, Thèse de magister (2009)
Benterki, D., Merikhi, B.: A modified algorithm for the strict feasibility. RAIRO Oper. Res. 35, 395–399 (2001)
Lustig, I.J.: Feasibility issues in a primal-dual interior point method for linear programming. Math. Program. 49, 145–162 (1991)
Benterki, D., Crouzeix, J.P., Merikhi, B.: A numerical implementation of an interior point method for semidefinite programming. Pesqui. Operacional 23(1), 49–59 (2003)
Fan, X., Bo, Y.: A polynomial path following algorithm for convex programming. Appl. Math. Comput. 196, 866–878 (2008)
Benterki, D., Keraghel, A.: Finding a strict feasible solution of a linear semidefinite program. Appl. Math. Comput. 217, 6437–6440 (2011)
Roumili, H.: Méthodes de points intérieurs non réalisables en optimisation. Théorie, algorithmes et applications, Thèse de Doctorat, Université Ferhat Abbas, Sétif (2007)
Acknowledgments
The authors thank the referees for their careful reading and their precious comments. Their help is much appreciated.
Djeffal, E.A., Djeffal, L. & Benoumelaz, F. Finding a strict feasible dual solution of a convex optimization problem. Afr. Mat. 27, 13–21 (2016). https://doi.org/10.1007/s13370-014-0315-2