1 Introduction

In this paper, we study the following quadratically constrained quadratic programming (QCQP) problem:

$$({\text{QCQP}})\quad \begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{{x \in R^{n} }} f(x)} \hfill \\ {g(x) \le 0} \hfill \\ {h_{j} (x) \le 0} \quad j = 1,2, \ldots ,m. \hfill \\ \end{array}$$
(1.1)

where \(f(x) = x^{T} Qx + 2p^{T} x + r\) is a coercive function, \(g(x) = x^{T} Q_{0} x + 2p_{0}^{T} x + r_{0}\), and \(h_{j} (x) = x^{T} Q_{j} x + 2p_{j}^{T} x + r_{j}\) for \(j = 1,2, \ldots ,m\). Here \(Q\), \(Q_{0}\) and \(Q_{j}\) are n × n real symmetric matrices, \(p\), \(p_{0}\) and \(p_{j}\) are vectors in \({\mathbb{R}}^{n}\), and \(r, r_{0}, r_{j} \in {\mathbb{R}}\) are real numbers. QCQP appears in many areas of science and engineering, such as wireless communications and networking [7], radar [17], and signal processing [10]. Important subclasses of the QCQP problem include the trust region problem [1], the Max-Cut problem, 0–1 quadratic programming, and box-constrained quadratic programming [13]. It is well known that QCQP is NP-hard, which means that it cannot be solved in polynomial time unless P = NP [6].

If all of the matrices \(Q,Q_{0} ,Q_{1} , \ldots ,Q_{m}\) are positive semidefinite, problem (1.1) is a convex programming problem, which is solvable in polynomial time (to within a given precision) using semidefinite programming methods [13]. If at least one of them is not positive semidefinite, then finding a global optimal solution of QCQP is much more difficult, and in general this is an open problem. In this article we consider the case of QCQP problem (1.1) in which one non-convex constraint \(g(x) \le 0\) exists. In this case the feasible set may be non-convex.

Optimization with a non-convex (reverse convex) constraint was first studied by Rosen [18], who considered linear programs with an additional reverse convex constraint; Hillestad and Jacobsen [9] gave characterizations of optimal solutions and proposed an algorithm based on optimality properties (see also [11, 12]).

A non-convex quadratic programming problem with two quadratic constraints is studied in [21], where a family of solvable subclasses of non-convex quadratic programming problems is identified using the semidefinite programming (SDP) approach. In [20], a non-convex quadratic programming problem is reformulated as a linear conic programming problem, and, based on the matrix decomposition method and SDP techniques, some polynomial-time solvable subclasses of QCQP are identified.

Lu et al. [14] proposed a new method that reformulates QCQP as a relaxed linear conic program and solves the problem using the SDP approach. To find a global optimal solution, they investigated the relationship between its Lagrangian multipliers and the related linear conic programming problem. To apply this method, certain conditions, such as a semidefiniteness condition, must be imposed on the QCQP problem, so the method is not applicable to every QCQP problem.

The copositive representation of a quadratic programming problem is another approach to finding an approximate solution [14]. In [22], Shi and Jin presented optimality and copositivity conditions for determining whether a given KKT solution is globally optimal, and then proposed a local-search-based scheme to find the global optimal solution. The limitation of this method is that it too is not applicable to every QCQP problem, and it finds only a lower bound on the optimal value of the objective function.

The main contribution of this paper is a numerical procedure for finding a global optimal solution of the non-convex QCQP problem (1.1) with one non-convex constraint and possibly many convex constraints. First, problem (1.1) is transformed into an equivalent parametric problem. Then, by finding the minimizer of the parametric problem and updating the parameter, an iterative method called the Parametric QCQP method (PQCQP) is proposed.

This work is intended to motivate the use of an adaptive ellipsoid based method for solving a class of non-convex QCQP problems. One advantage of the method is its ability to detect infeasibility of problem (1.1). The new method is iterative and combines a bisection scheme, an adaptive ellipsoid based method, and SDP relaxation.

The rest of this paper is arranged as follows. In Sect. 2 we describe the PQCQP method in detail and present a new algorithm. Section 3 establishes the convergence of the new method and further theoretical results. Computational results on randomly generated problems are reported in Sect. 4. Conclusions are given in Sect. 5.

The following notation is adopted in this paper. Let \(S^{n}\) denote the set of real symmetric matrices of size n, \(S_{+}^{n}\) the set of positive semidefinite matrices of size n, and \(S_{++}^{n}\) the set of positive definite matrices. Given a vector \(x \in {\mathbb{R}}^{n}\), \(x_{i}\) denotes the \(i\)th entry of \(x\). For a matrix \(Y\), \(Y_{ij}\) denotes the \((i,j)\)th entry of \(Y\). For any two matrices \(M = (M_{ij} )\) and \(N = (N_{ij} )\) in \(S^{n}\), their inner product is defined by \(M*N = \mathop \sum \nolimits_{i = 1}^{n} \mathop \sum \nolimits_{j = 1}^{n} M_{ij} N_{ij} .\)
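For matrices in \(S^{n}\) this inner product coincides with \({\text{trace}}(MN)\). The following snippet (an illustrative check of ours, not part of the original paper) verifies this numerically:

```python
import numpy as np

M = np.array([[1.0, 2.0], [2.0, 3.0]])  # symmetric
N = np.array([[0.0, 1.0], [1.0, 4.0]])  # symmetric

inner = np.sum(M * N)                       # the paper's M*N: entrywise products, summed
assert np.isclose(inner, np.trace(M @ N))   # equals trace(MN) for symmetric matrices
```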

2 New method for solving QCQP

Consider problem (1.1). If the objective function and all constraints are convex, then the problem can be solved efficiently using second-order cone programming [8]. If at least one of the constraints is non-convex, then solving problem (1.1) is difficult, because the feasible set may be non-convex. We consider the case of QCQP problems with one non-convex constraint \(g(x) \le 0\). The epigraph form of problem (1.1) is as follows:

$$\begin{array}{*{20}l} {\alpha^{*} = \mathop {\hbox{min} }\limits_{x,\alpha } \alpha } \hfill \\\quad\quad\quad {f(x) \le \alpha } \hfill \\\quad\quad\quad {g(x) \le 0} \hfill \\ \quad\quad\quad{h_{j} (x) \le 0}\quad j = 1,2, \ldots ,m. \hfill \\ \end{array}$$
(2.1)

Let \(\varLambda (\alpha ) = \left\{ {x\left| {f(x) \le \alpha ,\,\,g(x) \le 0,\,\,h_{j} (x) \le 0,\,\,j = 1,2, \ldots ,m } \right.} \right\}\). The following lemma concerns the relationship between problems (1.1) and (2.1).

Lemma 1

Consider problem (1.1) and (2.1), we have:

  1.

    Problem (1.1) is feasible if and only if \(\varLambda (\alpha )\) is a nonempty set for some \(\alpha \in {\mathbb{R}}.\)

  2.

    Problem (1.1) is solvable if \(\varLambda (\alpha )\) is a nonempty compact set for some \(\alpha \in {\mathbb{R}}.\)

Proof

  1.

    Let \(\varLambda = \left\{ {x\left| {g(x) \le 0,\,\,h_{j} (x) \le 0,\,\,j = 1,2, \ldots ,m } \right.} \right\}.\) First suppose that \(\varLambda (\alpha )\) is a nonempty set for some \(\alpha \in {\mathbb{R}}.\) Since \(\varLambda (\alpha ) \subseteq \varLambda ,\) the proof is evident.

    Conversely, assume that problem (1.1) is feasible and \(\bar{x}\) is a feasible solution of problem (1.1). We take \(\alpha = f(\bar{x})\); then \(\bar{x} \in \varLambda(\alpha)\), so \(\varLambda (\alpha )\) is a nonempty set.

  2.

    Take \(\alpha \in {\mathbb{R}}\) such that \(\varLambda (\alpha )\) is a nonempty compact set. Since \(\varLambda (\alpha )\) is compact and the objective function f is continuous, by the Weierstrass theorem there exists a minimizer \(\bar{x} \in \varLambda (\alpha )\) of the problem \(\min_{x \in \varLambda (\alpha )} f(x).\) We claim that \(\bar{x}\) is an optimal solution of problem (1.1), which establishes solvability. To prove this claim, assume, on the contrary, that there exists a point \(\hat{x} \in \varLambda \backslash \varLambda (\alpha )\) such that \(f(\hat{x}) < f(\bar{x}).\) Since \(f(\bar{x}) \le \alpha ,\) we have \(f(\hat{x}) < \alpha ,\) so \(\hat{x} \in \varLambda (\alpha ),\) contradicting the assumption that \(\hat{x} \notin \varLambda (\alpha ).\)\(\square\)

Now we introduce a new function:

$$\begin{array}{*{20}l} {F(\alpha ) = \mathop {\hbox{min} }\limits_{{x \in R^{n} }} g(x)} \hfill \\ {\qquad {\text{s}}.{\text{t}}.\quad f(x) \le \alpha } \hfill \\ {\qquad \quad \;\; h_{j} (x) \le 0\quad j = 1,2, \ldots ,m.} \hfill \\ \end{array}$$
(2.2)

It can be seen that F(α) is computed by solving a QCQP problem whose objective function is non-convex and whose constraints are convex.

One can see that \({\text{F(}}\alpha )\le 0\) if and only if \(\varLambda (\alpha ) \ne \emptyset\). Therefore we have:

$$\begin{aligned} & \alpha^{*} = \mathop {\hbox{min} }\limits_{\alpha } \alpha \\ & \quad\quad \quad F(\alpha ) \le 0 \\ \end{aligned}$$

Let an interval \([\alpha_{1} ,\alpha_{2} ]\) containing the optimal value \(\alpha^{*}\) be given. We call \([\alpha_{1} ,\alpha_{2} ]\) a trust interval. A question arises: how can a trust interval be determined? To answer it, we compute the endpoints of a trust interval as follows.

Right endpoint To find the right endpoint, we need a feasible solution \(x_{0}\). Set \(\alpha_{2} = f(x_{0} )\) as the right endpoint of the trust interval. Since \(x_{0}\) is a feasible solution of problem (1.1), \(\alpha^{*} \le \alpha_{2} .\)

To obtain a feasible solution \(x_{0} ,\) consider the following problem:

$$\begin{array}{*{20}l}g^{*} = {\mathop {\hbox{min} }\limits_{{x \in R^{n} }} g(x)} \hfill \\ {~~\qquad h_{j} (x) \le 0}\quad j = 1,2, \ldots ,m. \hfill \\ \end{array}$$
(2.3)

To solve the non-convex problem (2.3), we use the AEA method, which will be discussed later. We take \(x_{0}\) to be an optimal solution of problem (2.3). In the next theorem, we state the relationship between problems (1.1) and (2.3).

Theorem 2

The problem (1.1) is feasible if and only if \(g^{*} \le 0.\)

Proof

Suppose that problem (1.1) is feasible. Then

$$\exists \bar{x} \in {\mathbb{R}}^{n} : g(\bar{x}) \le 0 , \quad h_{j} (\bar{x}) \le 0 \quad for \,\,j = 1,2, \ldots ,m.$$

Therefore we have \(g^{*} \le g(\bar{x}) \le 0.\)

Conversely, let \(x^{*}\) be an optimal solution of problem (2.3) and \(g^{*} \le 0\). We have

$$g(x^{*} ) \le 0 ,\quad h_{j} (x^{*} ) \le 0\quad for\,\,j = 1,2, \ldots ,m.$$

So the problem (1.1) is feasible.\(\square\)

Corollary 3

The problem (1.1) is infeasible if and only if \(g^{*} > 0.\)

Left endpoint Consider the following problem equivalent to problem (1.1):

$$\begin{array}{*{20}l} {\hbox{min} \,Q*X + 2p^{T} x + r} \hfill \\ {\quad Q_{0} *X + 2p_{0}^{T} x + r_{0} \le 0} \hfill \\ {\quad Q_{j} *X + 2p_{j}^{T} x + r_{j} \le 0} \hfill \\ {\qquad X = xx^{T} \succeq 0} \hfill \\ \end{array} \quad j = 1,2, \ldots ,m$$
(2.4)

The last constraint, \(X = xx^{T}\), is non-convex. We can relax it directly to the convex constraint \(X \succeq xx^{T}\), which, by the Schur complement, is equivalent to the linear matrix inequality \(\left[ {\begin{array}{*{20}c} 1 & {x^{T} } \\ x & X \\ \end{array} } \right] \succeq 0\); therefore the convex relaxation of problem (2.4) is as follows:

$$\begin{aligned} & \hbox{min} \,Q*X + 2p^{T} x + r \\ & ({\text{SDPR}})\quad \begin{array}{*{20}l} {Q_{0} *X + 2p_{0}^{T} x + r_{0} \le 0} \hfill \\ {Q_{j} *X + 2p_{j}^{T} x + r_{j} \le 0} \hfill \\ {\quad X \succeq xx^{T} } \hfill \\ {\quad X \succeq 0} \hfill \\ \end{array} \quad j = 1,2, \ldots ,m \\ \end{aligned}$$
(2.5)

Problem (2.5) is called SDP Relaxation problem (SDPR).

Lemma 4

([4]) Let \(V(QCQP)\) and \(V(SDPR)\) denote the optimal values of optimization problems (1.1) and (2.5) respectively. Then we have \(V(SDPR) \le V(QCQP)\).

Let \((x^{*}, X^{*})\) be an optimal solution of problem (2.5) and set \(Y^{*} = \left[ {\begin{array}{*{20}c} 1 & {(x^{*})^{T} } \\ {x^{*} } & {X^{*} } \\ \end{array} } \right]\); then \(\alpha_{1} = H_{0} *Y^{*}\), where \(H_{0} = \left[ {\begin{array}{*{20}c} r & {p^{T} } \\ p & Q \\ \end{array} } \right]\), is a left endpoint of the trust interval (i.e., \(\alpha_{1}\) is the optimal value of problem (2.5)). According to Lemma 4, we have \(\alpha_{1} \le \alpha^{*} .\)
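The left endpoint computation can be sketched as follows. The sketch is in Python with CVXPY (the paper's experiments use MATLAB/CVX); it encodes \(X \succeq xx^{T}\) through the equivalent LMI \(Y = \left[ {\begin{array}{*{20}c} 1 & {x^{T} } \\ x & X \\ \end{array} } \right] \succeq 0\), and the function name and data layout are our own assumptions:

```python
import cvxpy as cp
import numpy as np

def sdpr_left_endpoint(Q, p, r, constraints):
    """Solve the SDP relaxation (2.5); return the left endpoint alpha_1.

    constraints: list of triples (Q_i, p_i, r_i) for g (i = 0) and the
    h_j (j = 1, ..., m).  A sketch, assuming CVXPY with an SDP-capable
    solver (e.g. SCS) is installed."""
    n = Q.shape[0]
    # Lifted variable Y = [[1, x^T], [x, X]]; Y >> 0 encodes X >= x x^T.
    Y = cp.Variable((n + 1, n + 1), symmetric=True)
    x, X = Y[0, 1:], Y[1:, 1:]
    cons = [Y >> 0, Y[0, 0] == 1]
    for Qi, pi, ri in constraints:
        cons.append(cp.trace(Qi @ X) + 2 * pi @ x + ri <= 0)
    prob = cp.Problem(cp.Minimize(cp.trace(Q @ X) + 2 * p @ x + r), cons)
    prob.solve()
    return prob.value  # alpha_1 <= alpha^* by Lemma 4
```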

Thus we have found a trust interval \([\alpha_{1} ,\alpha_{2} ].\) Now a question arises: how can this trust interval be tightened? To answer it, we propose a method similar to the bisection method. At each step, the method divides the interval in two by computing the midpoint \(\alpha_{3} = \frac{{(\alpha_{1} + \alpha_{2} )}}{2}\) of the interval and the value \(F(\alpha_{3} ).\) This gives two subintervals, \([\alpha_{1} ,\alpha_{3} ]\) and \([\alpha_{3} ,\alpha_{2} ]\). If \(F(\alpha_{3} ) \le 0\), we select the subinterval \([\alpha_{1} ,\alpha_{3} ]\); otherwise we select \([\alpha_{3} ,\alpha_{2} ].\)

Based on the above discussion, the proposed method (Algorithm 1) is as follows:

[Algorithm 1: the PQCQP method (pseudocode figure not reproduced here)]
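Since the algorithm figure is not reproduced here, the following minimal Python sketch shows the bisection loop of Algorithm 1, with the evaluation of \(F\) abstracted as a callable (in the actual method, \(F(\alpha_{3})\) is computed by the AEA described below); the one-dimensional demo problem is an illustrative stand-in of ours:

```python
def pqcqp_bisection(F, alpha1, alpha2, eps=1e-5):
    """Bisection on a trust interval [alpha1, alpha2] containing alpha^*.

    F(alpha) <= 0 iff Lambda(alpha) is nonempty, so the sign change of F
    brackets the optimal value of problem (1.1)."""
    while alpha2 - alpha1 > eps:
        alpha3 = 0.5 * (alpha1 + alpha2)
        if F(alpha3) <= 0:      # Lambda(alpha3) nonempty: keep [alpha1, alpha3]
            alpha2 = alpha3
        else:                   # level alpha3 infeasible: keep [alpha3, alpha2]
            alpha1 = alpha3
    return 0.5 * (alpha1 + alpha2)

# Demo with f(x) = x^2 and g(x) = 1 - x in one dimension:
# F(alpha) = min {1 - x : x^2 <= alpha} = 1 - sqrt(alpha), so alpha^* = 1.
print(pqcqp_bisection(lambda a: 1.0 - a ** 0.5, 0.0, 4.0))  # prints ~1.0
```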

In Step 4, to calculate \(F(\alpha_{3} )\) we must solve problem (2.2). We use the adaptive ellipsoid based method (AEA) [4], whose details are given in the following.

We now take \(\varLambda (\alpha ) = \left\{ {x \in R^{n} \left| {f(x) \le \alpha ,\,\,h_{j} (x) \le 0} \right.,\,\,j = 1, \ldots ,m} \right\}\) to be the feasible set of problem (2.2), where \(\alpha\) is obtained from Algorithm 1 (the constraint \(g(x) \le 0\) is dropped here, since \(g\) is now the objective). For the remainder of this section, we assume that the parameter α is given. For simplicity of notation, we denote \(\varLambda (\alpha )\) briefly by \(\varLambda\). According to [16], problem (2.2) is equivalent to the following linear conic programming problem:

$$({\text{CP}})\begin{array}{*{20}l} {\hbox{min} \,H^{{\prime }} *Y} \hfill \\ {\quad Y_{11} = 1} \hfill \\ {\quad Y \in D_{\varLambda }^{*} } \hfill \\ \end{array}$$
(2.6)

where \(H^{\prime} = \left[ {\begin{array}{*{20}c} {r_{0} } & {p_{0}^{T} } \\ {p_{0} } & {Q_{0} } \\ \end{array} } \right]\) is the homogenized matrix of the objective \(g\) of problem (2.2) and \(D_{\varLambda }^{*} = cone\left\{ {Y \in S^{n + 1} \left| {Y = \left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right]^{T} } \right.\quad {\text{for }}\,\,{\text{some}}\,\,x \in \varLambda } \right\}.\)

The cone of nonnegative quadratic functions \(D_{\xi }\) over a union of ellipsoids \(\xi \subseteq R^{n}\) and its dual cone \(D_{\xi }^{*}\) are introduced in [4]; they satisfy \(S_{ + }^{n + 1} \subseteq D_{\xi } \subseteq D_{\varLambda }\) and \(D_{\varLambda }^{*} \subseteq D_{\xi }^{*} \subseteq S_{ + }^{n + 1} .\) A tighter lower bound can be obtained by replacing the intractable cone \(D_{\varLambda }^{*}\) in problem (CP) with the tractable cone \(D_{\xi }^{*}\) rather than with \(S_{ + }^{n + 1} .\)

Let \({\mathcal{F}} = \left\{ {{\mathcal{F}}_{e}^{1} , \ldots ,{\mathcal{F}}_{e}^{k} } \right\}\) be a collection of full-dimensional ellipsoids, where

$${\mathcal{F}}_{e}^{j} = \left\{ {x \in R^{n} \left| {x^{T} A_{j} x + 2b_{j}^{T} x + c_{j} \le 0} \right.} \right\}$$
(2.7)

where \(A_{j} \in S_{ + + }^{n} ,\) \(b_{j} \in R^{n}\) and \(c_{j} \in R\), for \(j = 1, \ldots ,k.\) Let \(F = \bigcup\nolimits_{j = 1}^{k} {{\mathcal{F}}_{e}^{j} }\). Then \(F\) is called an ellipsoidal cover of \(\varLambda\) if \(\varLambda \subseteq F.\) From [2], each cone \(D_{{{\mathcal{F}}_{e}^{j} }}\) has an LMI representation. The cone of nonnegative quadratic functions over \(F\) and its dual cone are defined as follows:

$$D_{F} = \left\{ {U \in S^{n + 1} \left| {\left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right]^{T} U\left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right] \ge 0\quad {\text{for}}\,\,{\text{all}}\,\, x \in F} \right.} \right\}$$
(2.8)
$$D_{F}^{*} = cone\left\{ {Y \in S^{n + 1} \left| {Y = \left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} 1 \\ x \\ \end{array} } \right]^{T} } \right. \quad for\,\, some\,\,x \in F} \right\}$$
(2.9)

Theorem 5

([15])

  1.

    If \(\varLambda \subseteq F\), for some \(\alpha \in {\mathbb{R}}\), then \(D_{F} \subseteq D_{\varLambda }\) and \(D_{\varLambda }^{*} \subseteq D_{F}^{*}\).

  2.

    If \(F = \mathop {\bigcup }\nolimits_{j = 1}^{k} {\mathcal{F}}_{e}^{j}\), then \(D_{F}^{ } = \mathop {\bigcap }\nolimits_{j = 1}^{k} D_{{{\mathcal{F}}_{e}^{j} }}^{ }\) and \(D_{F}^{*} = \mathop \sum \nolimits_{j = 1}^{k} D_{{{\mathcal{F}}_{e}^{j} }}^{*}\).

From Theorem 5, an LMI representation of \(D_{F}^{*}\) is obtained as follows.

Corollary 6

([15]) Let the sets \({\mathcal{F}}_{e}^{j} ,F,\) \(D_{F}^{ }\) and \(D_{F}^{*}\) be defined as in (2.7)–(2.9). Then, for any matrix \(Y \in S^{n + 1}\), \(Y \in D_{F}^{*}\) if and only if

$$Y = Y^{1} + Y^{2} + \cdots + Y^{k} ,\quad \left[ {\begin{array}{*{20}l} {c_{j} } \hfill & {b_{j}^{T} } \hfill \\ {b_{j} } \hfill & {A_{j} } \hfill \\ \end{array} } \right]*Y^{j} \le 0, Y^{j} \in S_{ + }^{n + 1} \quad for\,\, j = 1, \ldots ,k.$$

Now we can relax problem (CP) to the following linear conic problem:

$$\begin{aligned} & \min \,H^{{\prime }} *Y \\ & \quad Y_{{11}} = 1 \\ & \quad Y \in D_{F}^{*} \\ \end{aligned}$$
(2.10)

According to Corollary 6, problem (2.10) can be specifically rewritten as

$$({\text{RCP}})\quad \begin{array}{*{20}l} {\hbox{min} \,H^{{\prime }} *Y} \hfill \\ {\quad Y = Y^{1} + Y^{2} + \cdots + Y^{k} ,\quad Y_{11} = 1} \hfill \\ {\quad \left[ {\begin{array}{*{20}l} {c_{j} } \hfill & {b_{j}^{T} } \hfill \\ {b_{j} } \hfill & {A_{j} } \hfill \\ \end{array} } \right]*Y^{j} \le 0,\quad Y^{j} \in S_{ + }^{n + 1} , \quad j = 1, \ldots ,k.} \hfill \\ \end{array}$$
(2.11)

By applying the reformulation–linearization technique (RLT) to problem (2.11), a tighter lower bound for problem (2.2) can be obtained. Problem (2.11) then becomes:

$$({\text{RCP-RLT}})\quad \begin{array}{*{20}l} {\hbox{min} \,H^{{\prime }} *Y} \hfill \\ {\quad Y = Y^{1} + Y^{2} + \cdots + Y^{k} ,\quad Y_{11} = 1} \hfill \\ {\quad \left[ {\begin{array}{*{20}c} {r_{i} } & {p_{i}^{T} } \\ {p_{i} } & {Q_{i} } \\ \end{array} } \right]*Y^{j} \le 0,\quad i = 0,1, \ldots ,m , \quad j = 1, \ldots ,k.} \hfill \\ {\quad \left[ {\begin{array}{*{20}c} {c_{j} } & {b_{j}^{T} } \\ {b_{j} } & {A_{j} } \\ \end{array} } \right]*Y^{j} \le 0,\quad Y^{j} \in S_{ + }^{n + 1} , \quad j = 1, \ldots ,k.} \hfill \\ \end{array}$$
(2.12)
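A compact CVXPY sketch of problem (RCP-RLT) follows (Python rather than the MATLAB/CVX used in the paper); each \(Y^{j}\) is declared positive semidefinite, and both the RLT rows and the ellipsoid rows are imposed on every \(Y^{j}\). The helper names and data layout are illustrative assumptions:

```python
import cvxpy as cp
import numpy as np

def homogenize(Q, p, r):
    """Return the (n+1) x (n+1) matrix [[r, p^T], [p, Q]]."""
    p = np.asarray(p, float).reshape(-1, 1)
    return np.block([[np.array([[float(r)]]), p.T], [p, np.asarray(Q, float)]])

def solve_rcp_rlt(H, ellipsoids, quad_constraints):
    """Sketch of problem (RCP-RLT).

    H                : homogenized objective matrix H' of problem (CP);
    ellipsoids       : list of (A_j, b_j, c_j) describing the cover F;
    quad_constraints : list of (Q_i, p_i, r_i), i = 0, 1, ..., m."""
    n1 = H.shape[0]
    Ys = [cp.Variable((n1, n1), PSD=True) for _ in ellipsoids]
    Y = sum(Ys)                              # Y = Y^1 + ... + Y^k
    cons = [Y[0, 0] == 1]
    for (A, b, c), Yj in zip(ellipsoids, Ys):
        cons.append(cp.trace(homogenize(A, b, c) @ Yj) <= 0)   # ellipsoid row
        for Qi, pi, ri in quad_constraints:                    # RLT rows
            cons.append(cp.trace(homogenize(Qi, pi, ri) @ Yj) <= 0)
    prob = cp.Problem(cp.Minimize(cp.trace(H @ Y)), cons)
    prob.solve()
    return prob.value, [Yj.value for Yj in Ys]
```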

The next theorem shows that problem (RCP-RLT) indeed provides a lower bound for problem (2.2) if \(\varLambda \subseteq F\); below, \(V(\cdot)\) denotes the optimal value of a problem and \((P)\) refers to problem (2.2).

Theorem 7

([4]) Let \(F\) and \({\mathcal{F}}_{e}^{j}\) be defined as in (2.7). If \(\varLambda \subseteq F\), then

$$V(RCP) \le V(RCP - RLT) \le V(CP) = V(P).$$

Theorem 8

([4]) Let \({\mathcal{F}}_{e}^{j}\) be defined as in (2.7). If the set \({\mathcal{F}}_{e}^{j} \cap \varLambda\) has a nonempty interior for \(j = 1, \ldots ,k\), then problem (RCP-RLT) is strongly feasible.

Therefore, the ellipsoids \({ \mathcal{F}}_{e}^{j}\) need to be arranged efficiently to cover \(\varLambda\). To detect which ellipsoid \({ \mathcal{F}}_{e}^{j}\) in \(F\) should be refined, the concepts of the “most sensitive point” and the “most sensitive ellipsoid” are introduced. To identify the sensitive regions, the following theorem is needed.

Theorem 9

([4]) If \(Y^{*} = (Y^{1} )^{*} + (Y^{2} )^{*} + \cdots + (Y^{k} )^{*}\) is an optimal solution to problem (RCP-RLT), then, for \((Y^{i} )^{*} \ne 0\) with \(i \in \left\{ {1, \ldots ,k} \right\}\), we have

$$(Y^{i} )^{*} = \mathop \sum \limits_{s = 1}^{{n_{i} }} \alpha_{is} \left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]^{T}$$

for some \(n_{i} \in \{ 1, \ldots ,n + 1\}\), \(\alpha_{is} > 0\) and \(x^{is} \in { \mathcal{F}}_{e}^{i}\). In this case, \(Y^{*}\) can be decomposed into

$$Y^{*} = \mathop \sum \limits_{{i:(Y^{i} )^{*} \ne 0}} \mathop \sum \limits_{s = 1}^{{n_{i} }} \alpha_{is} \left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]^{T}$$

with \(\mathop \sum \nolimits_{{i:(Y^{i} )^{*} \ne 0}} \mathop \sum \nolimits_{s = 1}^{{n_{i} }} \alpha_{is} = 1.\)

Definition 1

([4]). For the decomposition

$$Y^{*} = \mathop \sum \limits_{{i:(Y^{i} )^{*} \ne 0}} \mathop \sum \limits_{s = 1}^{{n_{i} }} \alpha_{is} \left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{is} } \hfill \\ \end{array} } \right]^{T}$$
(2.13)

\(x^{*}\) is called the most sensitive point if

$$x^{*} = \mathop {\text{argmin}}\limits_{{\left\{ {x^{is} :(Y^{i} )^{*} \ne 0;\,\,s = 1,2, \ldots ,n_{i} } \right\}}} \left\{ {(x^{is} )^{T} Q_{0} x^{is} + 2(p_{0} )^{T} x^{is} + r_{0} } \right\}$$
(2.14)

That is, \(x^{*}\) is the decomposition point with the smallest value of the non-convex constraint function \(g\). Note that if multiple points attain this minimum value, the sensitive point \(x^{*}\) is selected as the one having the smallest index \(i\), with the smallest index \(s\) as a tie-breaker. Denoting the corresponding index \(i\) by \(t\), the ellipsoid \({ \mathcal{F}}_{e}^{t}\) containing the sensitive point \(x^{*}\) is called the most sensitive ellipsoid. The next theorem establishes the connection between the sensitive point \(x^{*}\) and an optimal solution of problem (2.2).

Theorem 10

([4]) Assume \(Y^{*}\) is an optimal solution to problem (RCP-RLT) with the most sensitive point \(x^{*}\). Then

$$\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{*} } \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{*} } \hfill \\ \end{array} } \right]^{T} *H^{{\prime }} \le V(P)$$
(2.15)

Moreover, if \(x^{*} \in \varLambda\), then the matrix \(\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{*} } \hfill \\ \end{array} } \right]\left[ {\begin{array}{*{20}l} 1 \hfill \\ {x^{*} } \hfill \\ \end{array} } \right]^{T}\) is optimal to problem (CP) and \(x^{*}\) is optimal to problem (2.2).

According to Theorem 10, if \(x^{*} \in \varLambda\), then \(x^{*}\) is an optimal solution of problem (2.2). Otherwise, the value in (2.15) provides only a lower bound for problem (2.2). In this case, to get a better lower bound we have to refine the ellipsoid cover \(F\), but \(F\) need not be refined everywhere. A question that arises is how to determine which ellipsoids need refining. To answer this question, we keep the number of ellipsoids involved in problem (RCP-RLT) small: since the most sensitive point \(x^{*}\) in the most sensitive ellipsoid \({ \mathcal{F}}_{e}^{t}\) of \(F\) has the lowest objective value, only this ellipsoid needs to be refined.

The lower bound for problem (2.2) is improved by refining ellipsoids around the region of \(x^{*}\). The following definition makes it easy to manage the ellipsoids in the set \(F\).

Definition 2

([4]) For a given rectangle set \(T = [u,v] = \left\{ {x \in R^{n} \left| {u_{i} \le x_{i} \le v_{i} } \right.} \right\}\), define the corresponding ellipsoid \({ \mathcal{F}}_{e}^{T}\) generated by T as

$${ \mathcal{F}}_{e}^{T} = \left\{ {x \in R^{n} \left| {\mathop \sum \limits_{i = 1}^{n} \frac{{(2x_{i} - v_{i} - u_{i} )^{2} }}{{(v_{i} - u_{i} )^{2} }} \le n} \right.} \right\}$$
(2.16)

It is easy to verify that \(T \subseteq {\mathcal{F}}_{e}^{T}\).
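Expanding the sum in (2.16) puts \({\mathcal{F}}_{e}^{T}\) into the standard form \(x^{T} Ax + 2b^{T} x + c \le 0\) of (2.7). A small sketch of this conversion (the function name is ours):

```python
import numpy as np

def ellipsoid_from_rectangle(u, v):
    """Return (A, b, c) such that {x : x^T A x + 2 b^T x + c <= 0} is the
    ellipsoid (2.16) generated by the rectangle T = [u, v]."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    d = v - u                          # edge lengths (assumed positive)
    A = np.diag(4.0 / d ** 2)
    b = -2.0 * (u + v) / d ** 2
    c = np.sum(((u + v) / d) ** 2) - u.size
    return A, b, c

# Every corner of T lies on the boundary of F_e^T, so T is contained in it:
A, b, c = ellipsoid_from_rectangle(np.zeros(2), np.ones(2))
corner = np.array([1.0, 1.0])
assert abs(corner @ A @ corner + 2 * b @ corner + c) < 1e-12
```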

Let \(T = \{ T_{1} , \ldots ,T_{k} \}\) be a collection of rectangle sets and let \({\mathfrak{T}}\) be the union of the rectangle sets in \(T\). Suppose \({\mathfrak{T}}\) is a rectangle cover of \(\varLambda\), i.e.,

$$\varLambda \subseteq {\mathfrak{T}} = \bigcup\nolimits_{i = 1}^{k} {T_{i} } .$$

Then the set \(F = \bigcup\nolimits_{i = 1}^{k} {{\mathcal{F}}_{e}^{i} }\), whose members \({\mathcal{F}}_{e}^{i}\) are the full-dimensional ellipsoids generated by the rectangle sets \(T_{i}\), is an ellipsoid cover of \(\varLambda\). To make this method converge quickly, a good initial rectangle cover is necessary. To this end, consider the following problems:

$$\left( {I_{min}^{i} } \right)\quad \begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{{x \in R^{n} }} \, x_{i} } \hfill \\ {\quad f(x) \le \alpha } \hfill \\ {\quad h_{j} (x) \le 0 \quad j = 1,2, \ldots ,m} \hfill \\ \end{array}$$
(2.17)

and

$$\left( {I_{max}^{i} } \right)\quad \begin{array}{*{20}l} {\mathop {\hbox{max} }\limits_{{x \in R^{n} }} x_{i} } \hfill \\ {\quad f(x) \le \alpha } \hfill \\ { \quad h_{j} (x) \le 0 \quad j = 1,2, \ldots ,m} \hfill \\ \end{array}$$
(2.18)

For \(i = 1,2, \ldots ,n\), problems (2.17) and (2.18) are convex programs, hence they can be solved efficiently. Since the feasible region \(\varLambda\) is closed and bounded, problems (2.17) and (2.18) have finite optimal values; denote them by \(u_{i}^{1}\) and \(v_{i}^{1}\), respectively. The rectangle set \(T_{1} = [u^{1} ,v^{1} ]\) is chosen as the initial rectangle set covering the feasible region \(\varLambda\), and the ellipsoid \({\mathcal{F}}_{e}^{1}\) generated from \(T_{1}\) is the initial ellipsoid cover \(F\) of \(\varLambda\). The most sensitive point \(x^{*}\), the most sensitive ellipsoid \({\mathcal{F}}_{e}^{t}\) and the rectangle set \(T_{t}\) generating \({\mathcal{F}}_{e}^{t}\) are then detected, and this rectangle set is split in half. Let \(id = \arg \mathop {\hbox{max} }\limits_{{\left\{ {i = 1, \ldots ,n} \right\}}} \{ v_{i}^{t} - u_{i}^{t} \}\); then \(T_{t}\) is split into \(T_{t1} = [u^{t1} ,v^{t1} ]\) and \(T_{t2} = [u^{t2} ,v^{t2} ]\), where \(u^{t1} = u^{t}\), \(v^{t2} = v^{t}\), \(v_{i}^{t1} = v_{i}^{t}\), \(u_{i}^{t2} = u_{i}^{t}\) for \(i \ne id\), and \(v_{id}^{t1} = u_{id}^{t2} = \frac{{u_{id}^{t} + v_{id}^{t} }}{2}\). Two ellipsoids \({\mathcal{F}}_{e}^{t1}\) and \({\mathcal{F}}_{e}^{t2}\) are generated from \(T_{t1}\) and \(T_{t2}\) according to

$${\mathcal{F}}_{e}^{t1} = \left\{ {x \in R^{n} \left| {\mathop \sum \limits_{i = 1}^{n} \frac{{\left( {2x_{i} - v_{i}^{t1} - u_{i}^{t1} } \right)^{2} }}{{\left( {v_{i}^{t1} - u_{i}^{t1} } \right)^{2} }} \le n} \right.} \right\}$$
(2.19)

and

$${\mathcal{F}}_{e}^{t2} = \left\{ {x \in R^{n} \left| {\mathop \sum \limits_{i = 1}^{n} \frac{{\left( {2x_{i} - v_{i}^{t2} - u_{i}^{t2} } \right)^{2} }}{{\left( {v_{i}^{t2} - u_{i}^{t2} } \right)^{2} }} \le n} \right.} \right\}.$$
(2.20)
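The splitting rule just described can be sketched as follows (helper name ours):

```python
import numpy as np

def split_rectangle(u, v):
    """Split T_t = [u, v] in half along its longest edge (the index 'id'
    above), returning the two halves T_t1 and T_t2."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    idx = int(np.argmax(v - u))        # id = argmax_i (v_i - u_i)
    mid = 0.5 * (u[idx] + v[idx])
    v1 = v.copy(); v1[idx] = mid       # T_t1 = [u, v1]
    u2 = u.copy(); u2[idx] = mid       # T_t2 = [u2, v]
    return (u, v1), (u2, v)
```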

Suppose \({\text{Int}}(T_{i} \cap \varLambda ) \ne \emptyset\) for every rectangle set \(T_{i}\) in \(T\). Then, after splitting, one of the following cases occurs:

  1. \({\text{Int}}\left( {T_{t1} \cap \varLambda } \right) \ne \emptyset\) and \({\text{Int}}\left( {T_{t2} \cap \varLambda } \right) = \emptyset\);
  2. \({\text{Int}}\left( {T_{t2} \cap \varLambda } \right) \ne \emptyset\) and \({\text{Int}}\left( {T_{t1} \cap \varLambda } \right) = \emptyset\);
  3. \({\text{Int}}\left( {T_{t1} \cap \varLambda } \right) \ne \emptyset\) and \({\text{Int}}\left( {T_{t2} \cap \varLambda } \right) \ne \emptyset\).

In case 1, since \(T_{t2}\) and \(\varLambda\) have no common interior, the rectangle set \(T_{t2}\) is eliminated from the rectangle cover \({\mathfrak{T}}\) and excluded from further consideration; case 2 is handled symmetrically. To determine which rectangle set should be eliminated, consider the following problems:

$$\left( {I_{min}^{id} } \right)\quad \begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{{x \in R^{n} }} x_{id} } \hfill \\ {\quad f(x) \le \alpha } \hfill \\ { \quad h_{j} (x) \le 0 \quad j = 1,2, \ldots ,m} \hfill \\ \end{array}$$
(2.21)

and

$$\left( {I_{max}^{id} } \right)\begin{array}{*{20}l} {\mathop {\hbox{max} }\limits_{{x \in R^{n} }} \quad x_{id} } \hfill \\ {\quad f(x) \le \alpha } \hfill \\ {\quad h_{j} (x) \le 0 \quad j = 1,2, \ldots ,m} \hfill \\ \end{array}$$
(2.22)

Problems (2.21) and (2.22) are also convex programs, so they can be solved efficiently. Denote the optimal values of problems (2.21) and (2.22) by \(\varphi\) and \(\psi\), respectively. The rectangle cover is then updated as follows:

$${\mathfrak{T}} = {\mathfrak{T}}\backslash \{ T_{t} \} \cup \{ T_{t2} \} ,\quad if\quad \varphi \ge \frac{{u_{id}^{t} + v_{id}^{t} }}{2}$$
(2.23)
$${\mathfrak{T}} = {\mathfrak{T}}\backslash \{ T_{t} \} \cup \{ T_{t1} \} ,\quad if\quad \psi \le \frac{{u_{id}^{t} + v_{id}^{t} }}{2}$$
(2.24)
$${\mathfrak{T}} = {\mathfrak{T}}\backslash \{ T_{t} \} \cup \{ T_{t1} \} \cup \{ T_{t2} \} ,\quad otherwise$$
(2.25)
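The update rules (2.23)–(2.25) amount to replacing the split rectangle by one or both of its halves; a sketch (function name and data layout ours), where `phi`, `psi` and `mid` denote the optimal values of (2.21) and (2.22) and the split point \((u_{id}^{t} + v_{id}^{t})/2\):

```python
def update_cover(cover, t, T_t1, T_t2, phi, psi, mid):
    """Apply (2.23)-(2.25): cover is the current list of rectangles and
    t is the index of the rectangle T_t that was split."""
    new_cover = [R for i, R in enumerate(cover) if i != t]
    if phi >= mid:          # (2.23): Lambda lies in the upper half, keep T_t2
        new_cover.append(T_t2)
    elif psi <= mid:        # (2.24): Lambda lies in the lower half, keep T_t1
        new_cover.append(T_t1)
    else:                   # (2.25): Lambda meets both halves, keep both
        new_cover.extend([T_t1, T_t2])
    return new_cover
```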

Theorem 11

([3]) The set \(T_{i} \cap \varLambda\) has a nonempty interior for each rectangle set \(T_{i}\) in \(T\) if the rectangle sets are added into \(T\) according to (2.23)–(2.25).

Since each \(T_{i}\) has a common interior with \(\varLambda\), each ellipsoid \({ \mathcal{F}}_{e}^{i}\), generated from \(T_{i}\), also has a common interior with \(\varLambda\). Consider the following corollary:

Corollary 12

([3]) The set \({ \mathcal{F}}_{e}^{i} \cap \varLambda\) has a nonempty interior for each \({ \mathcal{F}}_{e}^{i}\) generated by the rectangle set \(T_{i}\) in T.

Consider the following problem \(\left( {I_{ }^{c} } \right)\):

$$(I^{c} )\quad \begin{array}{*{20}l} {\mathop {\hbox{min} }\limits_{{x \in R^{n} }} \quad \left\| {x - x^{*} } \right\|_{\infty } } \hfill \\ {\quad f(x) \le \alpha } \hfill \\ {\quad h_{j} (x) \le 0 \qquad j = 1,2, \ldots ,m} \hfill \\ \end{array}$$
(2.26)

The optimal value of problem \((I^{c} )\) indicates the distance of the current sensitive point \(x^{*}\) to the feasible region \(\varLambda\). Clearly, problem \((I^{c} )\) is a convex programming problem. We denote its optimal solution by \(\bar{x}^{1} .\)
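Problem \((I^{c})\) is an infinity-norm minimization over convex quadratic constraints and can be sketched in CVXPY as follows (assuming, as in our setting, that the matrices of \(f\) and the \(h_{j}\) are positive semidefinite; names are ours):

```python
import cvxpy as cp
import numpy as np

def distance_to_feasible(x_star, Q, p, r, alpha, h_constraints):
    """Solve problem (I^c): min ||x - x*||_inf over Lambda(alpha).

    h_constraints: list of (Q_j, p_j, r_j) for the convex h_j."""
    x = cp.Variable(x_star.size)
    cons = [cp.quad_form(x, Q) + 2 * p @ x + r <= alpha]
    cons += [cp.quad_form(x, Qj) + 2 * pj @ x + rj <= 0
             for Qj, pj, rj in h_constraints]
    prob = cp.Problem(cp.Minimize(cp.norm(x - x_star, 'inf')), cons)
    prob.solve()
    return prob.value, x.value  # the distance and the point x_bar^1
```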

[Algorithm 2: the AEA method (pseudocode figure not reproduced here)]

3 Convergence of PQCQP method

In this section we prove that the PQCQP method converges to the optimal value. First, we investigate some properties of the function \(F(\alpha )\).

Let us rewrite \(F(\alpha )\) as follows:

$$F(\alpha ) = { \hbox{min} }\left\{ {g(x)\left| {x \in K(\alpha ) \cap \left( {\mathop {\bigcap }\limits_{j = 1}^{m} T_{j} } \right)} \right.} \right\}$$
(3.1)

where \(K(\alpha ) = \left\{ {x\left| {f(x) \le \alpha } \right.} \right\}\) and \(T_{j} = \left\{ {x\left| {h_{j} (x) \le 0} \right.} \right\}\) for \(j = 1,2, \ldots ,m.\)

Lemma 13

\(F(\alpha )\) is a non-increasing function of \(\alpha .\)

Proof

If \(\alpha_{n} \le \alpha_{n + 1}\), then \(K(\alpha_{n} ) \subseteq K(\alpha_{n + 1} )\), so \(F(\alpha_{n + 1} ) \le F(\alpha_{n} ).\) \(\square\)

Some properties of \(K(\alpha )\) are established in the following lemma.

Lemma 14

For any \(\alpha \in {\mathbb{R}}\) , we have:

  1.

    \(K(\alpha )\) is a convex set.

  2.

    \(K(\alpha )\) is a compact set.

Proof

  1.

    Suppose \(x_{1} ,x_{2} \in K(\alpha )\) and let \(\lambda \in \left[ {0,1} \right]\) be an arbitrary constant. So we have

    $$f\left( {x_{1} } \right) \le \alpha , f\left( {x_{2} } \right) \le \alpha .$$

    Since the function \(f(x)\) is convex, we have

    $$f(\lambda x_{1} + (1 - \lambda )x_{2} ) \le \lambda f(x_{1} ) + (1 - \lambda )f(x_{2} ) \le \alpha$$

    Thus \(\lambda x_{1} + (1 - \lambda )x_{2} \in K(\alpha ).\)

  2.

    We notice that the continuity of \(f\) implies the closedness of the set \(K(\alpha )\). Thus, it remains only to show that the set \(K(\alpha )\) is bounded. We prove this by contradiction. Suppose that there is an \(\alpha \in {\mathbb{R}}\) such that the set \(K(\alpha )\) is unbounded. Then there must exist a sequence \(\{ x^{p} \} \subset K(\alpha )\) with \(\left\| {x^{p} } \right\| \to \infty\). But then by the coercivity of \(f\), we must also have \(f(x^{p} ) \to \infty\). This contradicts the fact that \(f(x^{p} ) \le \alpha\) for all \(p = 1,2, \ldots\). Therefore the set \(K(\alpha )\) must be bounded. \(\square\)

Theorem 15

Let \(K(\alpha )\) be a nonempty set. Then \(F(\alpha )\) is well-defined.

Proof

According to Lemma 14, \(K(\alpha )\) is a compact set. It suffices to prove that \(T = \bigcap\nolimits_{j = 1}^{m} {T_{j} }\) is closed, i.e., that every \(T_{j}\) (\(j = 1,2, \ldots ,m\)) is closed. Recall that \(T_{j} = \left\{ {x\left| {h_{j} (x) \le 0} \right.} \right\}\). Since \(T_{j} = h_{j}^{-1}\left( ( - \infty ,0] \right)\) is the preimage of a closed set under the continuous function \(h_{j}\), it is a closed set. Therefore \(K\left( \alpha \right) \cap \left( {\bigcap\nolimits_{j = 1}^{m} {T_{j} } } \right)\) is a compact set.

Since \(g(x)\) is a continuous function, by the Weierstrass theorem \(\hbox{min} \,g(x)\) over \(K\left( \alpha \right) \cap \left( {\bigcap\nolimits_{j = 1}^{m} {T_{j} } } \right)\) always exists. Therefore \(F(\alpha )\) is well-defined. \(\square\)

Lemma 16

Let \(\{ \alpha_{n} \}_{n = 1}^{\infty }\) be a sequence generated by the PQCQP method, then \(K(\alpha_{n} )\) is a nonempty set for any \(n \in {\mathbb{N}}\).

Proof

According to Algorithm 1, the proof is obvious.

Theorem 17

Let \(\{ \alpha_{n} \}_{n = 1}^{\infty }\) be a sequence generated by the PQCQP method. Then we have:

  1.

    \(\{ F(\alpha_{n} )\}_{n = 1}^{\infty }\) is convergent.

  2.

    \(\{ \alpha_{n} \}_{n = 1}^{\infty }\) is convergent.

  3.

    If \(z^{*}\) is the optimal value of problem (1.1) then \(\alpha_{n} \to z^{*} .\)

Proof

  1.

    Theorem 15 and Lemma 16 show that \(F(\alpha_{n})\) always exists and is well-defined. It remains to prove convergence. According to Lemma 13, the sequence \(\{ F(\alpha_{n} )\}_{n = 1}^{\infty }\) is non-increasing. Since it is bounded from below by \(F(\alpha_{2} )\), \(\{ F(\alpha_{n} )\}_{n = 1}^{\infty }\) is convergent.

  2.

    Suppose \(l > 0\) is such that \(\left| {\alpha_{2} - \alpha_{1} } \right| < l\), and recall that \(\alpha_{n + 1} = (\alpha_{n} + \alpha_{n - 1} )/2\). Then \(\left| {\alpha_{n + 1} - \alpha_{n} } \right| < \frac{l}{{2^{n - 1} }}\), so \(\left| {\alpha_{n} - \alpha_{n - 1} } \right| \to 0\) as \(n \to \infty\); since this bound is geometric, \(\{ \alpha_{n} \}\) is a Cauchy sequence and hence convergent.

  3.

    Since \(z^{*} \in [\alpha_{n - 1} ,\alpha_{n} ]\), we have

    $$\alpha_{n - 1} \le z^{*} \le \alpha_{n} .$$
    (3.2)

According to Step 2 of Algorithm 1, we have \(\alpha_{n + 1} = (\alpha_{n} + \alpha_{n - 1} )/2\). Therefore

$$\alpha_{n - 1} \le \alpha_{n + 1} \le \alpha_{n}$$
(3.3)

From (3.2), (3.3) and the proof of part 2, we conclude

$$|\alpha_{n} - z^{*} | \le |\alpha_{n} - \alpha_{n - 1} | < \frac{l}{{2^{n - 2} }}.$$

Hence \(|\alpha_{n} - z^{*} | \to 0\) as \(n \to \infty .\)

4 Computational results

In this section, we report some computational results for the PQCQP method. The new method was implemented in MATLAB R2013a on a PC with an Intel Core i7 CPU and 16 GB of memory. In Algorithm 1 and the AEA, CVX [8] is used to solve all convex programming problems. The test problems are as follows [23]:

$$\begin{aligned} & \mathop {\hbox{min} }\limits_{{x \in R^{n} }} x^{T} Q^{m + 1} x + 2(p^{m + 1})^{T} x + r^{m + 1} \hfill \\ & \quad x^{T} Q^{0} x + 2(p^{0})^{T} x + r^{0} \le 0 \hfill \\ &\quad x^{T} Q^{j} x + 2(p^{j})^{T} x + r^{j} \le 0 \quad j = 1,2, \ldots ,m \hfill \\ \end{aligned}$$
(4.1)

where the matrices \(Q^{j} = O^{j} D^{j} (O^{j} )^{T}\) for some orthogonal matrices \(O^{j}\) and diagonal matrices \(D^{j}\), with \(D^{j} \in S_{ + + }^{n}\) for \(j = 1,2, \ldots ,m+1\). The parameters of the test problems are randomly generated as follows.

The matrix \(O^{j} = P_{1}^{j} P_{2}^{j} P_{3}^{j}\), in which \(P_{i}^{j} = I - 2\frac{{\omega_{i} \omega_{i}^{T} }}{{\left\| {\omega_{i} } \right\|^{2} }}\), i = 1, 2, 3. The components of the vectors \(\omega_{i} \in R^{n}\) are random numbers in [− 1, 1] and I is the n × n identity matrix. The matrices \(D^{j}\) are constructed in the following way:

\(D^{0} = Diag\left( {D_{1}^{0} , \ldots ,D_{n}^{0} } \right)\) with \(D_{i}^{0} \in \left[ { - \,50,0} \right]\) for \(i = 1, \ldots ,\left\lfloor {\frac{n}{2}} \right\rfloor\) and \(D_{i}^{0} \in \left[ {0,50} \right]\) for \(i = \left\lfloor {\frac{n}{2}} \right\rfloor + 1, \ldots ,n.\)

\(D^{j} = Diag\left( {D_{1}^{j} , \ldots ,D_{n}^{j} } \right)\) with \(D_{i}^{j} \in \left[ {0,50} \right]\) for \(j = 1, \ldots ,m + 1\).

Also, we set \(p^{0} = \left( {p_{1}^{0} , \ldots ,p_{n}^{0} } \right)\) with \(p_{i}^{0} \in \left[ { - 10,10} \right]\), \(p^{j} = \left( {p_{1}^{j} , \ldots ,p_{n}^{j} } \right)\) with \(p_{i}^{j} \in \left[ { - 50,0} \right]\) and \(r^{j} \in \left[ { - \,5,0} \right]\) for \(j = 1, \ldots ,m + 1\), and \(r^{0} \in \left[ { - \,5,0} \right]\).
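For reproducibility, the generation scheme can be sketched as follows (our own Python mirror of the description above; the paper's MATLAB implementation is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    """O^j = P_1 P_2 P_3: a product of three Householder reflections."""
    O = np.eye(n)
    for _ in range(3):
        w = rng.uniform(-1.0, 1.0, n)
        O = O @ (np.eye(n) - 2.0 * np.outer(w, w) / (w @ w))
    return O

def random_instance(n, m):
    """Return [(Q^0, p^0, r^0), ..., (Q^{m+1}, p^{m+1}, r^{m+1})] for (4.1)."""
    data = []
    for j in range(m + 2):
        O = random_orthogonal(n)
        if j == 0:   # indefinite D^0: the non-convex constraint g
            diag = np.concatenate([rng.uniform(-50.0, 0.0, n // 2),
                                   rng.uniform(0.0, 50.0, n - n // 2)])
            p = rng.uniform(-10.0, 10.0, n)
        else:        # positive semidefinite D^j: convex constraints, objective
            diag = rng.uniform(0.0, 50.0, n)
            p = rng.uniform(-50.0, 0.0, n)
        Q = O @ np.diag(diag) @ O.T
        data.append((Q, p, rng.uniform(-5.0, 0.0)))
    return data
```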

To demonstrate the validity of the PQCQP method, we use the global optimization package BARON [19] to obtain reference optimal values.

The termination condition was \(\left| {\alpha_{2} - \alpha_{1} } \right| < \varepsilon\) with \(\varepsilon = 10^{ - 5}\). Since the academic version of BARON can only solve problems with at most m = n = 10, 40 random test problems with m = n = 10 were generated. The results are listed in Table 1, including the optimal value of the SDP relaxation of problem (1.1) (to indicate the quality of the obtained solution), the optimal values (labeled “OPT”), and the CPU time (in seconds) required to solve the problems with BARON and PQCQP-AEA.

Table 1 Numerical results for 40 test problems

We summarize the results of Table 1 in Fig. 1 using the performance-profile method proposed by Dolan and Moré [5]. Figure 1 plots the function

$$\pi_{s} (\tau ) = \frac{1}{|P|}|\{ p \in P:r_{p,s} \le \tau \} |.$$

where P denotes the set of problems used in the experiment and \(r_{p,s}\) denotes the ratio between the CPU time needed to solve problem \(p\) with method \(s\) and the least CPU time needed to solve problem \(p\) by any of the methods.
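Computing the profile from a table of CPU times is straightforward; an illustrative sketch:

```python
import numpy as np

def performance_profile(times, taus):
    """times: array of shape (num_problems, num_solvers) of CPU times.
    Returns pi_s(tau) for each solver s and each tau in taus."""
    times = np.asarray(times, float)
    ratios = times / times.min(axis=1, keepdims=True)   # r_{p,s}
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(times.shape[1])])
```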

Fig. 1 Performance profile for CPU time (figure not reproduced here)

The value of \(\pi_{s} (1)\) is the probability that method \(s\) wins over the set of test problems. In Fig. 1, we see that the PQCQP-AEA method wins on about 85% of the problems; that is, for those problems its CPU time is lower than BARON's. Table 1 indicates that PQCQP-AEA obtains solutions of accuracy comparable to BARON's in an efficient manner.

We report numerical results of PQCQP-AEA for several values of m and n in Table 2.

Table 2 summarizes the average number of iterations and the average CPU time over 50 test problems for the cases (n, m) = (15, 10), (15, 15), (30, 20), (50, 10), (50, 30), and (50, 50). From Tables 1 and 2, we conclude that PQCQP-AEA can be successfully applied to problem (1.1).

Table 2 Numerical results for random test problems

5 Conclusion

In this paper, we have proposed a numerical procedure for finding a global optimal solution of the non-convex QCQP problem (1.1) with one non-convex constraint. To solve this problem, we transform problem (1.1) into an equivalent parametric problem (PQCQP) in which all constraints of the subproblem (2.2) are convex. The convergence of the PQCQP method is established. Computational results show that PQCQP-AEA can be successfully applied to problem (1.1).

As future work, the PQCQP method could be extended to instances of problem (1.1) in which both the objective function and several constraints are non-convex.