1 Introduction

In this paper, we deal with nonlinear optimization problems with bound constraints. In the literature, different approaches have been proposed for solving such problems. Among them, we recall trust-region methods (see, e.g., [1, 2]), interior-point methods (see, e.g., [3–5]), active-set methods (see, e.g., [6–11]) and second-order methods (see, e.g., [12, 13]).

Even though a large number of different methods are available, there is still strong interest in developing efficient methods for solving box-constrained problems. This is mainly due to the fact that many real-world applications can be modeled as large-scale problems with bound constraints. Furthermore, such methods are used as building blocks in many algorithmic frameworks for nonlinearly constrained problems (e.g., in penalty-based approaches).

Recently, an active-set method, namely the NMBC algorithm, was proposed in [14]. The NMBC algorithm has three main features: it makes use of the technique described in [15] to identify the active constraints; it builds up the search direction by combining a truncated-Newton strategy (used in the subspace of the non-active constraints) with a Barzilai–Borwein strategy [16] (used in the subspace of the active constraints); and it generates a new iterate by means of a non-monotone line search procedure with backtracking.

Even though the numerical results reported in [14] were promising, the method has a drawback that might affect its performance in some cases. Since the search direction is obtained by juxtaposing two different subvectors (the one generated by the truncated-Newton strategy and the one obtained by the Barzilai–Borwein strategy), the resulting direction may be badly scaled. With such a direction, finding a good starting stepsize can become quite difficult.

In this paper, we make a twofold contribution. On the one hand, we describe and analyze an important theoretical feature of the active-set estimate proposed by Facchinei and Lucidi in [15]. In particular, we prove that, under suitable assumptions, a significant reduction in the objective function can be obtained by setting to the bounds all the variables estimated active. In this way, we extend to box-constrained nonlinear problems a similar result already proved in [17] for \(\ell _1\)-regularized least squares problems and in [18] for quadratic problems with non-negativity constraints.

On the other hand, thanks to the descent property of the active-set estimate, we are able to define a new algorithmic scheme that overcomes the issues described above for the NMBC algorithm. More specifically, we define a two-stage algorithmic framework that suitably combines the active-set estimate proposed in [15] with the non-monotone line search procedure described in [14]. In the first stage of our framework, we set the estimated active variables to the corresponding bounds. Then, in the second stage, we generate a search direction in the subspace of the non-active variables (employing a suitably modified truncated-Newton step) to get a new iterate.

There are three main differences between the method we propose here and the one in [14]:

  1. thanks to the two stages, we can get rid of the Barzilai–Borwein step for the active variables, thus avoiding the generation of badly scaled search directions;

  2. the search direction is computed only in the subspace of the non-active variables, allowing savings in terms of CPU time, especially when dealing with large-scale problems;

  3. a specific proximity check is included in order to guarantee global convergence of the method. This is crucial, from a theoretical point of view, since we embed the two stages described above within a non-monotone stabilization framework.

Regarding the theoretical properties of the algorithm, we prove that a non-monotone strategy is able to guarantee global convergence to stationary points even if at each iteration a gradient-related direction is generated only in the subspace of the non-active variables. Furthermore, we prove that under standard additional assumptions, the algorithm converges at a superlinear rate.

The paper is organized as follows. In Sect. 2, we formalize the problem and introduce the notation that will be used throughout the paper. In Sect. 3, we present our active-set estimate, stating some theoretical results, proved in “Appendix A.” In Sect. 4, we describe our two-stage active-set algorithm (a formal description of the algorithm can be found in “Appendix B”) and report the theorems related to the convergence. The detailed convergence analysis of the algorithm is reported in “Appendix C.” Finally, our numerical experience is presented in Sect. 5, and some conclusions are drawn in Sect. 6.

2 Problem Definition and Notation

We address the solution of bound-constrained problems of the form:

$$\begin{aligned} \min \ \{f(x): l\le x\le u\}, \end{aligned}$$
(1)

where \(f \in C^{2}({\mathbb {R}}^n)\); \(x,l,u \in {\mathbb {R}}^n\), and \(l<u\).

In the following, we denote by g(x) and H(x) the gradient vector and the \(n\times n\) Hessian matrix of f(x), respectively. We also indicate with \(\Vert \cdot \Vert \) the Euclidean norm. Given a vector \(v\in {\mathbb {R}}^{n}\) and an index set \(I \subseteq \{1,\ldots ,n\}\), we denote by \(v_I\) the subvector with components \(v_i\), \(i\in I\). Given a matrix \(H \in {{{\mathbb {R}}}}^{n\times n}\), we denote by \(H_{I \,I}\) the submatrix with components \(h_{ij}\), \(i,j \in I\), and by \(\lambda _{\text {max}}(H)\) its largest eigenvalue. The open ball with center x and radius \(\rho >0\) is denoted by \(\mathcal{B}(x,\rho )\). Finally, given \(x \in {\mathbb {R}}^n\), we indicate with \([x]^{\sharp }\) the projection of x onto \([l,u]\), where \(l,u \in {\mathbb {R}}^n\) define the feasible region of problem (1).
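
For concreteness, \([x]^{\sharp }\) is the componentwise clipping of x to the box; a minimal NumPy sketch (function names are illustrative, not taken from the authors' code):

```python
import numpy as np

def project(x, l, u):
    """Componentwise projection of x onto the box [l, u], i.e., [x]^#."""
    return np.minimum(np.maximum(x, l), u)

# Example: a point violating both bounds gets clipped back into [0, 1]^3.
l, u = np.zeros(3), np.ones(3)
print(project(np.array([-0.5, 0.3, 2.0]), l, u))  # [0.  0.3 1. ]
```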

Now, we give the formal definition of stationary points for problem (1).

Definition 2.1

A point \(x^{*}\in [l,u]\) is called a stationary point of problem (1) iff it satisfies the following first-order necessary optimality conditions:

$$\begin{aligned} g_i(x^{*}) \ge 0, \quad&\text { if } x^{*}_{i}=l_{i}, \end{aligned}$$
(2)
$$\begin{aligned} g_i(x^{*}) \le 0, \quad&\text { if } x^{*}_{i}=u_{i}, \end{aligned}$$
(3)
$$\begin{aligned} g_i(x^{*}) = 0, \quad&\text { if } l_{i}< x^{*}_{i}< u_{i}. \end{aligned}$$
(4)

These conditions can be equivalently written as:

$$\begin{aligned}&g(x^*) - \lambda ^* + \mu ^* = 0, \end{aligned}$$
(5)
$$\begin{aligned}&(l_i-x^*_i)\,\lambda ^*_i = 0, \quad \qquad \,\,\,i=1,\ldots ,n, \end{aligned}$$
(6)
$$\begin{aligned}&(x^*_i-u_i)\,\mu ^*_i = 0, \quad \qquad \,i=1,\ldots ,n, \end{aligned}$$
(7)
$$\begin{aligned}&\lambda _i^* \ge 0, \quad \mu _i^* \ge 0, \quad \qquad i=1,\ldots ,n, \end{aligned}$$
(8)

where \(\lambda ^*, \mu ^* \in {\mathbb {R}}^{n}\) are the KKT multipliers.
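
As an illustration, conditions (2)–(4) can be checked numerically up to a tolerance; a minimal sketch (the tolerance handling is an assumption of ours, not part of the paper):

```python
import numpy as np

def is_stationary(x, g, l, u, tol=1e-8):
    """Check the first-order conditions (2)-(4) at a feasible x, up to tol."""
    at_lower = x <= l + tol          # variables (numerically) at the lower bound
    at_upper = x >= u - tol          # variables (numerically) at the upper bound
    free = ~(at_lower | at_upper)    # strictly interior variables
    return (np.all(g[at_lower] >= -tol) and
            np.all(g[at_upper] <= tol) and
            np.all(np.abs(g[free]) <= tol))
```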

3 Active-Set Estimate: Preliminary Results and Properties

As we will see later on, the use of a technique to estimate the active constraints plays a crucial role in the development of a theoretically sound and computationally efficient algorithmic framework. The active-set estimation we consider for box-constrained nonlinear problems takes inspiration from the approach first proposed in [19], and further studied in [15], based on the use of some approximations of KKT multipliers.

Let x be any feasible point, and \(\lambda (x)\), \(\mu (x)\) be some appropriate approximations of the KKT multipliers \(\lambda ^*\) and \(\mu ^*\). We define the following index subsets:

$$\begin{aligned} A_l(x) := & {} \left\{ i\in \{1,\ldots ,n \} \, :\, l_i \le x_i\le l_i + \epsilon \lambda _i(x), \, g_i(x)>0 \right\} , \end{aligned}$$
(9)
$$\begin{aligned} A_u(x) := & {} \left\{ i\in \{1,\ldots ,n \} \, :\, u_i - \epsilon \mu _i(x) \le x_i \le u_i, \, g_i(x)<0 \right\} , \end{aligned}$$
(10)
$$\begin{aligned} N(x) := & {} \left\{ i \in \{1,\dots ,n\} \, :\, i \notin A_l(x) \cup A_u(x)\right\} , \end{aligned}$$
(11)

where \(\epsilon > 0\).

In particular, \(A_l(x)\) and \(A_u(x)\) contain the indices of the variables estimated active at the lower bound and the upper bound, respectively. The set N(x) includes the indices of the variables estimated non-active.

In this paper, \(\lambda (x)\) and \(\mu (x)\) are defined as the multiplier functions introduced in [20]: starting from the solution of (5) at x, and then minimizing the error over (6)–(8), it is possible to compute the functions \(\lambda :{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) and \(\mu :{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) as:

$$\begin{aligned}&\lambda _i(x) := \frac{(u_i-x_i)^2}{(l_i-x_i)^2+(u_i-x_i)^2} g_i(x), \qquad i = 1,\dots ,n, \end{aligned}$$
(12)
$$\begin{aligned}&\mu _i(x) := -\frac{(l_i-x_i)^2}{(l_i-x_i)^2+(u_i-x_i)^2} g_i(x), \, \quad i = 1,\dots ,n. \end{aligned}$$
(13)
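
The estimates (9)–(11), together with the multiplier functions (12)–(13), are straightforward to compute; a minimal NumPy sketch returning the index sets as Boolean masks (names are illustrative):

```python
import numpy as np

def multiplier_functions(x, g, l, u):
    """Multiplier functions lambda(x) and mu(x) as in (12)-(13)."""
    dl2, du2 = (l - x) ** 2, (u - x) ** 2
    lam = du2 / (dl2 + du2) * g
    mu = -dl2 / (dl2 + du2) * g
    return lam, mu

def active_set_estimate(x, g, l, u, eps):
    """Active-set estimates A_l(x), A_u(x) and N(x) as in (9)-(11)."""
    lam, mu = multiplier_functions(x, g, l, u)
    A_l = (l <= x) & (x <= l + eps * lam) & (g > 0)
    A_u = (u - eps * mu <= x) & (x <= u) & (g < 0)
    N = ~(A_l | A_u)
    return A_l, A_u, N
```

Note that the denominators in (12)–(13) never vanish on [l, u], since \(l < u\).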

By adapting the results shown in [15], we can state the following proposition.

Proposition 3.1

If \((x^*,\lambda ^*,\mu ^*)\) satisfies KKT conditions for problem (1), then there exists a neighborhood \(\mathcal{B}(x^*,\rho )\) such that

  • \(\{i \, :\, x^*_i = l_i, \, \lambda ^*_i > 0 \} \subseteq A_l(x) \subseteq \{i \, :\, x^*_i = l_i\}\),

  • \(\{i \, :\, x^*_i = u_i, \, \mu ^*_i > 0 \} \subseteq A_u(x) \subseteq \{i \, :\, x^*_i = u_i\}\),

for each \(x \in \mathcal{B}(x^*,\rho )\).

Furthermore, if strict complementarity holds, then

  • \(\{i \, :\, x^*_i = l_i, \, \lambda ^*_i > 0 \} = A_l(x) = \{i \, :\, x^*_i = l_i\}\),

  • \(\{i \, :\, x^*_i = u_i, \, \mu ^*_i > 0 \} = A_u(x) = \{i \, :\, x^*_i = u_i\}\),

for each \(x \in \mathcal{B}(x^*,\rho )\).

We notice that stationary points can be characterized by using the active-set estimate, as shown in the next propositions.

Proposition 3.2

A point \(\bar{x} \in [l,u]\) is a stationary point of problem (1) iff the following conditions hold:

$$\begin{aligned}&\max \ \{l_i - \bar{x}_i, -g_i(\bar{x})\} = 0, \quad i \in A_l(\bar{x}), \end{aligned}$$
(14)
$$\begin{aligned}&\max \ \{\bar{x}_i - u_i, g_i(\bar{x})\} = 0, \quad i \in A_u(\bar{x}), \end{aligned}$$
(15)
$$\begin{aligned}&g_i(\bar{x}) = 0, \quad i \in N(\bar{x}). \end{aligned}$$
(16)

Proof

See “Appendix A.” \(\square \)

Proposition 3.3

Given \(\bar{x} \in [l,u]\), assume that

$$\begin{aligned} \{i \in A_l(\bar{x}) \, :\, \bar{x}_i>l_i\} \cup \{i \in A_u(\bar{x}) \, :\, \bar{x}_i < u_i\} = \emptyset . \end{aligned}$$
(17)

Then, \(\bar{x}\) is a stationary point of problem (1) iff

$$\begin{aligned} g_i(\bar{x}) = 0 \text { for all } i \in N(\bar{x}). \end{aligned}$$

Proof

See “Appendix A.” \(\square \)

Proposition 3.4

Given \(\bar{x} \in [l,u]\), assume that

$$\begin{aligned} g_i(\bar{x}) = 0 \text { for all } i \in N(\bar{x}). \end{aligned}$$
(18)

Then, \(\bar{x}\) is a stationary point of problem (1) iff

$$\begin{aligned} \{i \in A_l(\bar{x}) \, :\, \bar{x}_i > l_i\} \cup \{i \in A_u(\bar{x}) \, :\, \bar{x}_i < u_i\} = \emptyset . \end{aligned}$$

Proof

See “Appendix A.” \(\square \)

3.1 Descent Property of the Active-Set

In this subsection, we show that the active-set estimate can be used for computing a point that ensures a sufficient decrease in the objective function simply by fixing the estimated active variables at the bounds.

First, we give an assumption on the parameter \(\epsilon \) appearing in the definition of the active-set estimates \(A_l(x)\) and \(A_u(x)\) that will be used to prove the main result in this subsection.

Assumption 3.1

Assume that the parameter \(\epsilon \) appearing in (9) and (10) satisfies the following conditions:

$$\begin{aligned} \left\{ \begin{array}{ll} 0 < \epsilon \le \dfrac{1}{\bar{\lambda }}, &{} \quad \text{ if } \ \bar{\lambda }> 0, \\ \epsilon > 0, &{} \quad \text{ otherwise, } \end{array} \right. \end{aligned}$$
(19)

where

$$\begin{aligned} \bar{\lambda } := \max _{x \in [l,u]} \lambda _{\text {max}} (H(x)). \end{aligned}$$

Now, we state the main result of the subsection.

Proposition 3.5

Let Assumption 3.1 hold. Let \(x\in [l,u]\) be such that

$$\begin{aligned} A_l(x) \cup A_u(x) \ne \emptyset , \end{aligned}$$

and let \(\tilde{x}\) be the point defined as

$$\begin{aligned} \tilde{x}_i := l_i, \quad&i \in A_l(x), \\ \tilde{x}_i := u_i, \quad&i \in A_u(x), \\ \tilde{x}_i := x_i, \quad&i \in N(x), \end{aligned}$$

where \(A_l(x)\), \(A_u(x)\) and N(x) are the index subsets defined as in (9), (10) and (11), respectively.

Then,

$$\begin{aligned} f(\tilde{x})-f(x) \le - \dfrac{1}{2\epsilon }\Vert x-\tilde{x}\Vert ^2. \end{aligned}$$

Proof

See “Appendix A.” \(\square \)

As we already highlighted in the Introduction, Proposition 3.5 is a non-trivial extension of similar results already proved in the literature.

In particular, here we deal with problems having a general non-convex objective function, while in [17, 18], where a similar analysis was carried out, the authors only considered convex quadratic optimization problems.
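
As a small numerical illustration of Proposition 3.5 (with made-up data, a convex quadratic objective and \(\epsilon \) chosen according to Assumption 3.1), fixing the estimated active variables at the bounds yields at least the predicted decrease:

```python
import numpy as np

# Illustrative convex quadratic f(x) = 0.5 x'Qx - b'x on the box [0, 1]^3.
Q = np.diag([2.0, 1.0, 4.0])
b = np.array([-1.0, 0.5, 5.0])
l, u = np.zeros(3), np.ones(3)
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad = lambda x: Q @ x - b

eps = 0.25                      # Assumption 3.1: eps <= 1/lambda_bar = 1/4 here
x = np.array([0.1, 0.5, 0.9])
g = grad(x)

# Multiplier functions (12)-(13) and active-set estimates (9)-(10).
lam = (u - x) ** 2 / ((l - x) ** 2 + (u - x) ** 2) * g
mu = -(l - x) ** 2 / ((l - x) ** 2 + (u - x) ** 2) * g
A_l = (l <= x) & (x <= l + eps * lam) & (g > 0)   # here {0}
A_u = (u - eps * mu <= x) & (x <= u) & (g < 0)    # here {2}

# First stage: fix the estimated active variables at the corresponding bounds.
x_t = x.copy()
x_t[A_l] = l[A_l]
x_t[A_u] = u[A_u]

decrease = f(x_t) - f(x)                            # about -0.23
bound = -np.linalg.norm(x - x_t) ** 2 / (2 * eps)   # -0.04
print(decrease <= bound)                            # True
```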

3.2 Descent Property of the Non-active Set

In this subsection, we show that, thanks to the theoretical properties of the active-set estimate, a sufficient decrease in the objective function can also be obtained by suitably choosing a direction in the subspace of the non-active variables only. Let us consider a search direction satisfying the following conditions:

$$\begin{aligned}&d_i = 0, \quad \forall i\in A_l(x) \cup A_u(x), \end{aligned}$$
(20)
$$\begin{aligned}&d_{N(x)}^Tg_{N(x)}(x) \le -\sigma _1 \Vert g_{N(x)}(x)\Vert ^{2}, \end{aligned}$$
(21)
$$\begin{aligned}&\Vert d_{N(x)}\Vert \le \sigma _{2}\Vert g_{N(x)}(x)\Vert , \end{aligned}$$
(22)

where \(\sigma _1, \sigma _2 > 0\). Condition (20) ensures that the estimated active variables are not updated when moving along such a direction, while (21) and (22) imply that d is gradient-related with respect to only the estimated non-active variables.
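
For instance, the direction with \(d_{A_l(x) \cup A_u(x)} = 0\) and \(d_{N(x)} = -g_{N(x)}(x)\) satisfies (20)–(22) with \(\sigma _1 = \sigma _2 = 1\); a minimal sketch (Boolean masks as in the earlier snippets):

```python
import numpy as np

def simple_gradient_related_direction(g, N_mask):
    """Zero on the estimated active set, negative gradient on N(x);
    satisfies (20)-(22) with sigma_1 = sigma_2 = 1."""
    d = np.zeros_like(g)
    d[N_mask] = -g[N_mask]
    return d
```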

Given a direction d satisfying (20)–(22), the following proposition shows that a sufficient decrease in the objective function can be guaranteed by projecting suitable points obtained along d.

Proposition 3.6

Given \(\bar{x} \in [l,u]\), let us assume that \(N(\bar{x}) \ne \emptyset \) and that \(g_{N(\bar{x})}(\bar{x}) \ne 0\). Let \(\gamma \in ]0,1[\). Then, there exists \(\bar{\alpha }>0\) such that

$$\begin{aligned} f(\bar{x}(\alpha ))-f(\bar{x})\le \gamma \alpha g(\bar{x})^T d, \quad \forall \alpha \in ]0,\bar{\alpha }], \end{aligned}$$
(23)

where \(\bar{x}(\alpha ):=[\bar{x}+\alpha d]^{\sharp }\), and d satisfies (20)–(22) in \(\bar{x}\).

Proof

See “Appendix A.” \(\square \)

4 A New Active-Set Algorithm for Box-Constrained Problems

In this section, we describe a new algorithmic framework for box-constrained problems. Its distinguishing feature is the presence of two different stages that enable us to separately handle active and non-active variables.

In “Appendix B,” we report the formal scheme of our Active-Set Algorithm for Box-Constrained Problems (ASA-BCP). In the following, we only give a sketch of it, indicating with \(f_R\) a reference value of the objective function that is updated throughout the procedure. Different criteria were proposed in the literature to choose this value (see, e.g., [21]). Here, we take \(f_R\) as the maximum among the last M function evaluations, where M is a nonnegative parameter.

  • At every iteration k, starting from the non-stationary point \(x^k\), the algorithm fixes the estimated active variables at the corresponding bounds, thus producing the new point \(\tilde{x}^k\). In particular, the sets

    $$\begin{aligned} A_l^k:=A_l(x^k), \quad A_u^k:=A_u(x^k)\quad \text{ and } \quad N^k:=N(x^k) \end{aligned}$$
    (24)

    are computed and the point \(\tilde{x}^k\) is produced by setting

    $$\begin{aligned} \tilde{x}^{k}_{A_l^k} := l_{A_l^k},\quad \tilde{x}^{k}_{A_u^k} := u_{A_u^k}\quad \text{ and } \quad \tilde{x}^{k}_{N^k} := x^k_{N^k}. \end{aligned}$$
  • Afterward, a check is executed to verify if the new point \(\tilde{x}^k\) is sufficiently close to \(x^k\). If this is the case, the point \(\tilde{x}^k\) is accepted. Otherwise, an objective function check is executed and two further cases are possible: if the objective function is lower than the reference value \(f_R\), then we accept the point \(\tilde{x}^k\); otherwise the algorithm sets \(\tilde{x}^k\) by backtracking to the last good point (i.e., the point \(\tilde{x}\) that produced the last \(f_R\)).

  • At this point, the active and non-active sets are updated considering the information related to \(\tilde{x}^k\), i.e., we build

    $$\begin{aligned} \tilde{A}_l^k:=A_l(\tilde{x}^k), \quad \tilde{A}_u^k:=A_u(\tilde{x}^k)\quad \text{ and } \quad \tilde{N}^k:=N(\tilde{x}^k). \end{aligned}$$
    (25)

    A search direction \(d^k\) is then computed: we set \(d_{\tilde{A}^k}^k :=0 \), with \(\tilde{A}^k := \tilde{A}_l^k \cup \tilde{A}_u^k\), and calculate \(d^{k}_{\tilde{N}^k}\) by means of a modified truncated-Newton step (see, e.g., [22] for further details on truncated-Newton approaches); a schematic conjugate gradient computation of such a reduced-space step is sketched after this list.

  • Once \(d^k\) is computed, a non-monotone stabilization strategy, inspired by the one proposed in [23], is used to generate the new iterate. In particular, the algorithm first checks if \(\Vert d^k\Vert \) is sufficiently small. If this is the case, the unitary stepsize is accepted, and we set

    $$\begin{aligned} x^{k+1} := [\tilde{x}^k + d^k]^{\sharp } \end{aligned}$$

    without computing the related objective function value and start a new iteration. Otherwise, an objective function check is executed and two further cases are possible: if the objective function is greater than or equal to the reference value \(f_R\), then we backtrack to the last good point and take the related search direction; otherwise we continue with the current point. Finally, a non-monotone line search is performed in order to get a stepsize \(\alpha ^k\) and generate

    $$\begin{aligned} x^{k+1} := [\tilde{x}^k+\alpha ^k d^k]^{\sharp }. \end{aligned}$$
  • After a prespecified number of iterations without computing the objective function, a check is executed to verify whether the objective function is lower than the reference value \(f_R\). If this is not the case, a backtracking to the last good point and a non-monotone line search are executed.
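
As anticipated above, the reduced-space direction \(d^{k}_{\tilde{N}^k}\) can be obtained, for instance, by applying conjugate gradient iterations to the Newton system restricted to the estimated non-active variables and truncating on (near-)negative curvature. The following is only a schematic sketch, assuming the Hessian block is available as a dense matrix; the actual modified truncated-Newton procedure of ASA-BCP is detailed in "Appendix B."

```python
import numpy as np

def truncated_newton_direction(H_NN, g_N, tol=1e-6, max_iter=100):
    """Approximately solve H_NN d = -g_N by conjugate gradients,
    truncating on (near-)negative curvature."""
    d = np.zeros_like(g_N)
    r = -g_N.copy()                 # residual of H_NN d = -g_N at d = 0
    p = r.copy()
    rs_old = r @ r
    for _ in range(max_iter):
        Hp = H_NN @ p
        curv = p @ Hp
        if curv <= 1e-12 * (p @ p):         # nonpositive curvature: truncate
            if not d.any():
                d = -g_N                    # fall back to a steepest descent step
            break
        alpha = rs_old / curv
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) <= tol * np.linalg.norm(g_N):
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return d
```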

The non-monotone line search used in the algorithm is the same as the one described in, e.g., [14]. It sets \(\alpha ^k := \delta ^{\nu }\), where \(\nu \) is the smallest nonnegative integer for which

$$\begin{aligned} f([\tilde{x}^k+\delta ^{\nu } d^k]^{\sharp }) \le f_R + \gamma \delta ^{\nu } g(\tilde{x}^k)^T d^k, \end{aligned}$$
(26)

with \(\delta \in ]0,1[\) and \(\gamma \in ]0,\frac{1}{2}[\).
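
A minimal sketch of this backtracking rule, assuming a function handle f, the projection onto the box and a given reference value \(f_R\) (all names are illustrative):

```python
import numpy as np

def nonmonotone_line_search(f, x_t, d, g, f_ref, l, u,
                            delta=0.5, gamma=1e-3, max_backtracks=50):
    """Find alpha = delta**nu, with nu the smallest nonnegative integer
    satisfying condition (26)."""
    project = lambda z: np.minimum(np.maximum(z, l), u)
    gtd = g @ d                     # directional derivative g(x_t)' d
    alpha = 1.0
    for _ in range(max_backtracks):
        x_new = project(x_t + alpha * d)
        if f(x_new) <= f_ref + gamma * alpha * gtd:
            return alpha, x_new
        alpha *= delta
    return alpha, project(x_t + alpha * d)   # safeguard: iteration cap reached
```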

Remark 4.1

From Propositions 3.3 and 3.4, it follows that ASA-BCP is well defined, in the sense that at the kth iteration it produces a new point \(x^{k+1} \ne x^k\) if and only if \(x^k\) is non-stationary.

Hereinafter, we indicate the active-set estimates in \(x^k\) and \(\tilde{x}^k\) with the notation used in (24) and in (25), respectively.

Now, we state the main theoretical result ensuring the global convergence of ASA-BCP.

Theorem 4.1

Let Assumption 3.1 hold. Then, ASA-BCP either produces a stationary point for problem (1) in a finite number of iterations, or produces an infinite sequence \(\{x^k\}\) and every limit point \(x^*\) of the sequence is a stationary point for problem (1).

Proof

See “Appendix C.” \(\square \)

Finally, under standard additional assumptions, superlinear convergence of the method can be proved.

Theorem 4.2

Assume that \(\{x^k\}\) is a sequence generated by ASA-BCP converging to a point \(x^*\) satisfying the strict complementarity condition and such that \(H_{N^* N^*} (x^*) \succ 0\), where \(N^* := \{i :l_i< x^*_i < u_i\}\). Assume that the sequence \(\{d^k\}\) of directions satisfies the following condition:

$$\begin{aligned} \lim _{k \rightarrow \infty } \frac{\Vert H_{\tilde{N}^k \tilde{N}^k}(\tilde{x}^k) d^k_{\tilde{N}^k} + g_{\tilde{N}^k}(\tilde{x}^k) \Vert }{\Vert g_{\tilde{N}^k}(\tilde{x}^k) \Vert } = 0. \end{aligned}$$
(27)

Then, the sequence \(\{x^k\}\) converges to \(x^*\) superlinearly.

Proof

See “Appendix C.” \(\square \)

5 Numerical Experience

In this section, we describe the details of our computational experience.

In Sect. 5.1, we compare ASA-BCP with the following codes:

  • NMBC [14] (in particular, we considered the version named NMBC\(_2\) in [14]);

  • ALGENCAN [13]: an active-set method using spectral projected gradient steps for leaving faces, downloaded from the TANGO web page (http://www.ime.usp.br/~egbirgin/tango);

  • LANCELOT B [24]: a Newton method based on a trust-region strategy, downloaded from the GALAHAD web page (http://www.galahad.rl.ac.uk).

All computations were run on an Intel Xeon(R) CPU E5-1650 v2, 3.50 GHz. The test set consisted of 140 bound-constrained problems from the CUTEst collection [25], with dimension up to \(10^5\). The stopping condition for all codes was

$$\begin{aligned} \Vert x - [x - g(x)]^\sharp \Vert _\infty < 10^{-5}, \end{aligned}$$

where \(\Vert \cdot \Vert _\infty \) denotes the sup-norm of a vector.
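
In code, this check amounts to the following (a small sketch; names illustrative):

```python
import numpy as np

def projected_gradient_norm(x, g, l, u):
    """Sup-norm of x - [x - g(x)]^#, used as the stopping measure."""
    return np.linalg.norm(x - np.minimum(np.maximum(x - g, l), u), ord=np.inf)

# The runs were stopped as soon as projected_gradient_norm(...) < 1e-5.
```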

In order to compare the performances of the algorithms, we make use of the performance profiles proposed in [26].
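
For completeness, performance profiles in the sense of [26] can be computed from a problems-by-solvers matrix of costs (e.g., CPU times, with failures encoded as infinity); a minimal sketch, not the exact script used for the figures:

```python
import numpy as np

def performance_profile(T, taus):
    """T[p, s]: cost of solver s on problem p (np.inf on failure).
    Returns rho[s, k]: fraction of problems for which the performance ratio
    T[p, s] / min_s T[p, :] does not exceed taus[k]."""
    ratios = T / T.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, s] <= tau) for tau in taus]
                     for s in range(T.shape[1])])
```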

Following the analysis suggested in [27], we preliminarily checked whether the codes find different stationary points: The comparison is thus restricted to problems for which all codes find the same stationary point (with a tolerance of \(10^{-3}\)). Furthermore, we do not consider in the analysis those problems for which all methods find a stationary point in less than 1 second.

In ASA-BCP, we set the algorithm parameters to the following values: \(Z := 20\) and \(M := 99\) (so that the last 100 objective function values are included in the computation of the reference value).

In running the other methods considered in the comparisons, default values were used for all parameters (but those related to the stopping condition).

C\(++\) and Fortran 90 implementations (with CUTEst interface) of ASA-BCP, together with details related to the experiments and the implementation, can be found at the following web page: https://sites.google.com/a/dis.uniroma1.it/asa-bcp.

5.1 Comparison on CUTEst Problems

In this subsection, we first compare ASA-BCP with the NMBC algorithm presented in [14]. Then, we report the comparison of ASA-BCP with two other solvers for bound-constrained problems, namely ALGENCAN [13] and LANCELOT B [24]. All the codes are implemented in Fortran 90.

Recalling how we selected the relevant test problems, the analysis was restricted to 43 problems for the comparison between ASA-BCP and NMBC, and to 62 problems for the comparison among ASA-BCP, ALGENCAN and LANCELOT B.

In particular, in the comparison between ASA-BCP and NMBC, 96 problems were discarded because they were solved in less than 1 second by both algorithms. A further problem (namely SCOND1LS with 5002 variables) was removed because ASA-BCP and NMBC found two different stationary points (NMBC found the worse one).

In the comparison among ASA-BCP, ALGENCAN and LANCELOT B, 75 problems were discarded because they were solved in less than 1 second by all the considered algorithms. Three further problems were removed because the methods stopped at different stationary points: namely, NCVXBQP3 with \(10^5\) variables, POWELLBC with \(10^3\) variables and SCOND1LS with 5002 variables. The worst stationary points were found by ASA-BCP, LANCELOT B and ASA-BCP, respectively.

In Fig. 1, we report the performance profiles of ASA-BCP and NMBC. These profiles show that ASA-BCP outperforms NMBC in terms of CPU time, number of objective function evaluations and number of conjugate gradient iterations. This confirms the effectiveness of our two-stage approach when compared to the NMBC algorithm.

Fig. 1

Comparison between ASA-BCP and NMBC: performance profiles on CPU time, number of objective function evaluations and number of conjugate gradient iterations. The x axis is in linear scale in the left panel and in logarithmic scale in the right panel

Fig. 2

Comparison among ASA-BCP, ALGENCAN and LANCELOT B: performance profiles on CPU time, number of objective function evaluations and number of conjugate gradient iterations. The x axis is in linear scale in the left panel and in logarithmic scale in the right panel

These results seem to confirm that on the one hand, computing the search direction only in the subspace of the non-active variables guarantees some savings in terms of CPU time, and on the other hand, getting rid of the Barzilai–Borwein step (used in NMBC) avoids the generation of badly scaled search directions.

In Fig. 2, we show the performance profiles of ASA-BCP, ALGENCAN and LANCELOT B. Looking at the performance profiles related to CPU time, we can see that ASA-BCP and ALGENCAN are comparable in terms of efficiency and are both better than LANCELOT B. As regards robustness, ASA-BCP outperforms both ALGENCAN and LANCELOT B. More specifically, when \(\tau \) is equal to 2, ASA-BCP solves about \(95\,\%\) of the problems, while ALGENCAN and LANCELOT B solve about 90 and \(30\,\%\) of the problems, respectively. Furthermore, ASA-BCP is able to solve all the problems when \(\tau \) is about 70, while ALGENCAN and LANCELOT B solve all the problems only for significantly larger values of \(\tau \).

As concerns the number of objective function evaluations, ASA-BCP is the best in terms of efficiency and is competitive with LANCELOT B in terms of robustness. In particular, when \(\tau \) is equal to 2, ASA-BCP solves about \(85\,\%\) of the problems, while ALGENCAN and LANCELOT B solve about 15 and \(30\,\%\) of the problems, respectively. Moreover, ASA-BCP and LANCELOT B solve all the problems when \(\tau \) is about 60 and 50, respectively, while ALGENCAN solves all the problems when \(\tau \) is about 600.

Finally, as regards the number of conjugate gradient iterations, ASA-BCP outperforms the other two codes in terms of efficiency, while LANCELOT B is the best in terms of robustness. More in detail, when \(\tau \) is equal to 2, ASA-BCP solves about \(85\,\%\) of the problems, while ALGENCAN and LANCELOT B solve about 20 and \(35\,\%\) of the problems, respectively. LANCELOT B is able to solve all the problems when \(\tau \) is about 200, while ASA-BCP and ALGENCAN need larger values of \(\tau \).

6 Conclusions

In this paper, a two-stage active-set algorithm for box-constrained nonlinear programming problems is devised. In the first stage, we get a significant reduction in the objective function simply by setting to the bounds the estimated active variables. In the second stage, we employ a truncated-Newton direction computed in the subspace of the estimated non-active variables. These two stages are inserted in a non-monotone framework, and the convergence of the resulting algorithm ASA-BCP is proved. Experimental results show that our implementation of ASA-BCP is competitive with other widely used codes for bound-constrained minimization problems.