1 Introduction

Multiobjective optimization plays an important role in solving real-world problems. Most engineering problems require the designer to optimize several conflicting objectives. Two objectives are in conflict with each other if an improvement in one leads to a deterioration in the other. In multiobjective optimization problems, several objective functions have to be minimized simultaneously. In general, no single point minimizes all objective functions at once, so the concept of optimality is replaced with Pareto optimality or efficiency [1]. A point is called Pareto optimal or efficient if there does not exist a different point whose objective function values are all less than or equal to those of the given point, with a strict decrease in at least one of them. Multiobjective unconstrained optimization problems have applications in engineering design [2, 3], design [4,5,6], location science [7], statistics [8], medicine [9,10,11], and cancer treatment planning [12], among others. There are many recent studies in this field on solving multiobjective unconstrained optimization problems [13,14,15,16].

A general solution approach for the multiobjective optimization problem is the scalarization technique, which is widely used for computing properly efficient solutions [17]. This method is free from a priori chosen weighting factors or any other form of prior ranking or ordering information for the different objective functions [18, 19]. Several parameter dependent scalarization approaches for solving nonlinear multiobjective optimization problems are discussed in [20]. Scalarization techniques convert the original multiobjective optimization problem into a new single objective optimization problem in such a way that the optimal solution of the new problem is also optimal for the original one. From a practical point of view, the main advantage of this approach is that several fast and reliable methods developed for solving single objective optimization problems can be used to solve multiobjective optimization problems. One of the most widely used scalarization techniques in multiobjective optimization is the weighting method, which consists of minimizing a weighted sum of the different objectives [21]. In general, the weights, which are critical for these methods, are not known in advance, so the computational implementation of this technique is not straightforward; a random choice of the weighting vector does not necessarily yield an optimal solution. An extension of the weighting method to vector optimization is given in [22].
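For concreteness (in the notation of Sect. 2), the weighting method replaces the vector-valued problem by the scalar problem

$$\begin{aligned} \min _{x\in \mathbb {R}^{n}}\;\sum _{j=1}^{m}w_{j}f_{j}(x),\qquad w_{j}\ge 0,\;j=1,\ldots ,m,\quad \sum _{j=1}^{m}w_{j}=1, \end{aligned}$$

so that each admissible weight vector w yields one candidate (weakly) efficient point; this is the scalarization used for comparison in Sect. 5.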

It is well known that the objective functions decrease rapidly along a descent direction. The q-calculus was first developed by Jackson [23], and the results obtained in [24] give rise to generalizations of series, functions and special numbers within the context of the q-calculus [25]. The q-calculus has been one of the research interests in the fields of mathematics, physics, and signal processing over the last few decades [24, 26]. The q-Newton-Kantorovich method [27] has been developed to solve systems of nonlinear equations such as:

$$\begin{aligned} \vert x_{1}^2-4\vert +e^{7x_{2}-36}-2=0, \end{aligned}$$
(1)
$$\begin{aligned} \log _{10}\bigg (\frac{12x_{1}^{2}}{x_{2}}-6\bigg )+x_{1}^{4}-9=0. \end{aligned}$$
(2)

With the starting point \(x^{0}=(2, 5)^{T}\), the solution \(x^{*}=(\sqrt{3}, \frac{36}{7})^{T}\) is obtained; one may check that \(x^{*}\) indeed satisfies (1)–(2), since \(\vert 3-4\vert +e^{0}-2=0\) and \(\log _{10}(7-6)+9-9=0\). However, the classical Newton-Kantorovich method with the same starting point fails because the partial derivative of (1) with respect to the first variable does not exist at the starting point. This is one of the motivations for using the q-derivative instead of the classical derivative. The q-derivative has been used in the steepest descent method to solve single objective unconstrained optimization problems [28], where it was shown that the generated points escape from many local minima and reach the global minimum. Global optimization using the q-gradient was further studied in [29], where the parameter q is a dilation used to control the degree of localness of the search. The q-derivative concept has also been used to develop the q-least mean squares algorithm given in [30], which shows that the q-derivative takes larger steps toward the optimal solution for \(q\in (0,1)\) when compared to the conventional derivative. Recently, the q-derivative was used in the gradient of the given function to obtain a locally convergent scheme, and this idea was then extended to establish a global convergence property for single objective unconstrained optimization [31]. The advantages of applying the q-derivative to multiobjective unconstrained optimization problems are given as follows:

  1.

    When \(q\ne 1\), the q-gradient vector can make any angle with the classical gradient vector, and the search direction can point in any direction. For example, in the steepest descent method for single objective optimization problems, this property can reduce the zigzag movement toward the optimal solution [28].

  2.

    It reduces the cost of solving multiobjective optimization problems because the q-gradient takes larger steps in the search direction, since it evaluates the secant of the function rather than the tangent used by the classical derivative [30].

To the best of our knowledge, the q-derivative has not yet been applied in Newton’s method for solving multiobjective unconstrained optimization problems. In this paper, we apply the q-derivative to compute the q-Hessian, which is used to obtain Newton’s search direction; we generalize the algorithm given in [32] and prove a convergence theorem.

The outline of this paper is as follows: in Sect. 2, some prerequisites related to multiobjective optimization problems are discussed. In Sect. 3, we present the first-order optimality condition for multiobjective unconstrained optimization using the q-derivative and derive Newton’s descent direction. In Sect. 4, we give the algorithm with a convergence theorem, and numerical examples are given in Sect. 5. The last section concludes the paper.

2 Preliminaries

We address the following multiobjective unconstrained optimization problem (MOUP):

$$\begin{aligned} \min \;\;f(x),\;\;x\in \mathbb {R}^{n}, \end{aligned}$$
(3)

where \(f:\mathbb {R}^{n}\rightarrow \mathbb {R}^{m},\)\(f(x)=(f_{1}(x), f_{2}(x), \ldots , f_{m}(x)),\) and \(f_{j}:\mathbb {R}^{n}\rightarrow \mathbb {R}\) is continuously differentiable for \(j=1,\ldots ,m.\) For \(x, y\in \mathbb {R}^{n}\) we denote vector inequalities as

$$\begin{aligned} x=y&\iff x_{i} = y_{i}\;\;\text {for all}\;\;i=1,\ldots ,n, \\ x\geqq y&\iff x_{i}\ge y_{i}\;\;\text {for all}\;\;i=1,\ldots , n,\\ x\ge y&\iff x\geqq y\;\; \text {and}\;\;x\ne y,\\ x>y&\iff x_{i}>y_{i}\;\; \text {for all}\;\;i=1,\ldots ,n. \end{aligned}$$

For a function of a single real variable x, the q-derivative (\(q\ne 1\)) of \(f_{j}\), \(j=1,\ldots ,m\), is defined as

$$\begin{aligned} D_{q, x}f_{j}(x)={\left\{ \begin{array}{ll} \frac{f_{j}(x)-f_{j}(qx)}{x(1-q)}, \;\;x\ne 0, \\ f_{j}'(x),\;\; x=0. \end{array}\right. } \end{aligned}$$
(4)
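As a simple one-variable illustration of (4) (our example, not from the paper), take \(f_{j}(x)=x^{2}\) and \(x\ne 0\):

$$\begin{aligned} D_{q,x}\,x^{2}=\frac{x^{2}-(qx)^{2}}{(1-q)x}=\frac{(1-q^{2})x^{2}}{(1-q)x}=(1+q)x, \end{aligned}$$

which tends to the classical derivative \(2x\) as \(q\rightarrow 1\).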

In the limit as \(q\rightarrow 1\), the q-derivative reduces to the classical derivative (and at \(x=0\) it is defined to be the classical derivative). Now suppose the partial derivatives of \(f_{j}:\mathbb {R}^{n}\rightarrow \mathbb {R}\) exist for \(j=1,\ldots ,m\). For \(x\in \mathbb {R}^{n},\) consider the operator \(\epsilon _{q,i}\) acting on \(f_{j}\) as

$$\begin{aligned} (\epsilon _{q,i}f_{j})(x)=f_{j}(x_{1}, x_{2},\ldots , x_{i-1}, qx_{i}, x_{i+1},\ldots , x_{n}). \end{aligned}$$
(5)

The q-partial derivative (\(q\ne 1\)) of \(f_{j}\) for \(j=1,\ldots ,m\) at x with respect to \(x_{i}\) for \(i=1,\ldots ,n\) is

$$\begin{aligned} D_{q, x_{i}}f_{j}(x)={\left\{ \begin{array}{ll} \frac{f_{j}(x)-(\epsilon _{q,i}f_{j})(x)}{(1-q)(x_{i})},\;\;\;x_{i}\ne 0,\\ \frac{\partial f_{j}}{\partial x_{i}},\;\;x_{i}=0. \end{array}\right. } \end{aligned}$$
(6)
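For readers who wish to experiment numerically, the following MATLAB sketch evaluates (6) entrywise for a map \(g:\mathbb {R}^{n}\rightarrow \mathbb {R}^{p}\) supplied as a function handle; the name qjac and the central-difference fallback at \(x_{i}=0\) are our own illustrative choices, not part of the paper. With \(g=f_{j}\) (so \(p=1\)) it returns the q-gradient of \(f_{j}\), and with \(g=\nabla f_{j}\) it returns \(D_{q}\nabla f_{j}(x)\) used in (7) below.

```matlab
% Illustrative sketch (not the authors' code): entrywise q-partial derivatives (6).
function J = qjac(g, x, q)
    x = x(:);  gx = g(x);  gx = gx(:);
    n = numel(x);  p = numel(gx);
    J = zeros(p, n);
    for i = 1:n
        if x(i) ~= 0
            xq = x;  xq(i) = q * x(i);                 % operator epsilon_{q,i} of (5)
            gq = g(xq);
            J(:, i) = (gx - gq(:)) / ((1 - q) * x(i)); % first branch of (6)
        else
            h = 1e-6;  e = zeros(n, 1);  e(i) = h;     % x_i = 0: classical partial
            gp = g(x + e);  gm = g(x - e);             % derivative, approximated by
            J(:, i) = (gp(:) - gm(:)) / (2 * h);       % a central difference
        end
    end
end
```

For instance, applying qjac to the classical gradient of the function in (8) below reproduces the q-Hessian displayed there.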

We denote

$$\begin{aligned} g_{j}(x)=\nabla f_{j}(x)=(g^{j}_{1}(x), g^{j}_{2}(x),\ldots , g^{j}_{n}(x))^{T}, \end{aligned}$$

where \(g^{j}_{i}=\frac{\partial f_{j}}{\partial x_{i}}\) for \(i=1,\ldots ,n\) and \(j=1,\ldots ,m\). The q-Hessian of the function \(f_{j}\), for \(j=1,\ldots ,m\), is obtained by applying the q-partial derivatives to \(\nabla f_{j}(x)\) and is given as:

$$\begin{aligned} D_{q}\nabla f_{j}(x)=\begin{bmatrix} D_{q, x_{1}}g^{j}_{1}(x) & D_{q, x_{2}}g^{j}_{1}(x) & \cdots & D_{q, x_{n}}g^{j}_{1}(x) \\ D_{q, x_{1}}g^{j}_{2}(x) & D_{q, x_{2}}g^{j}_{2}(x) & \cdots & D_{q, x_{n}}g^{j}_{2}(x) \\ \vdots & \vdots & \ddots & \vdots \\ D_{q, x_{1}}g^{j}_{n}(x) & D_{q, x_{2}}g^{j}_{n}(x) & \cdots & D_{q, x_{n}}g^{j}_{n}(x) \end{bmatrix}_{n\times n}. \end{aligned}$$
(7)

In short, we write \(D_{q}\nabla f_{j}(x)=[D_{q,x_{k}}g^{j}_{i}(x)]_{n\times n}\) for \(i,k=1,\ldots ,n\) and \(j=1,\ldots ,m.\) The matrix \(D_{q}\nabla f_{j}(x)\) is not necessarily symmetric. For example, let \(f:\mathbb {R}^{2}\rightarrow \mathbb {R}\) be the function defined by

$$\begin{aligned} f(x)=3x_{1}^2-5x_{1}x_{2}^{3}. \end{aligned}$$
(8)

Then,

$$\begin{aligned} \nabla f(x)=[6x_{1}-5x_{2}^{3}, -15x_{1}x_{2}^{2} ]^{T}, \end{aligned}$$

and

$$\begin{aligned} D_{q}\nabla f(x)=\begin{bmatrix} 6 & -5(1+q+q^{2})x_{2}^{2} \\ -15x_{2}^{2} & -15(1+q)x_{1}x_{2} \end{bmatrix}, \end{aligned}$$

which is not symmetric. A point \(x^{*}\in X\) is a Pareto optimum if there is no \(y\in X\) for which \(f_{j}(y)\le f_{j}(x^{*})\), \(j=1,\ldots ,m\), and \(f(y)\ne f(x^{*}).\) The point \(x^{*}\in X\) is a weak Pareto optimum if there is no \(y\in X\) for which \(f_{j}(y)< f_{j}(x^{*})\) for all \(j=1,\ldots ,m.\) Let \(\mathbb {R}_{++}\) be the set of strictly positive real numbers. Assume that \(X\subseteq \mathbb {R}^{n}\) is an open set and that \(f_{j}:X\rightarrow \mathbb {R}\), \(j=1,\ldots ,m\), are given functions. The directional derivative of \(f_{j}\), \(j=1,\ldots ,m\), at \(x\in \mathbb {R}^{n}\) in the direction \(d\in \mathbb {R}^{n}\) is defined as

$$\begin{aligned} f'_{j}(x;d)=\lim \limits _{\alpha \rightarrow 0}\frac{f_{j}(x+\alpha d)-f_{j}(x)}{\alpha },\;\forall \;j=1,\ldots ,m. \end{aligned}$$
(9)

For \(x\in \mathbb {R}^{n}\), \(\Vert x \Vert \) denotes the Euclidean norm in \(\mathbb {R}^{n}\). The norm of a matrix \(A\in \mathbb {R}^{n\times n}\) is \(\Vert A\Vert =\max _{x\ne 0} \frac{\Vert Ax\Vert }{\Vert x\Vert }.\) We say that \(d\in \mathbb {R}^{n}\) is a descent direction for \(f\) at x if \(d^{T}\nabla f_{j}(x)<0\) for all \(j=1,\ldots ,m\). If d is a descent direction of f at x, then there exists \(\alpha _{0}>0\) such that \(f_{j}(x+\alpha d)<f_{j}(x)\) for all \(\alpha \in (0, \alpha _{0}]\) and all \(j=1,\ldots ,m\). A point \(x^{*}\in \mathbb {R}^{n}\) is said to be an efficient point of (MOUP) if there does not exist \(y\in \mathbb {R}^{n}\) such that \(f_{j}(y)\le f_{j}(x^{*})\) for \(j=1,\ldots ,m\) and \(f(y)\ne f(x^{*})\). The point \(x^{*}\) is called a critical point of \(f\) if there does not exist \(d\in \mathbb {R}^{n}\) such that \(g_{j}(x^{*})^{T}d<0\) for all \(j=1,\ldots ,m.\) The next proposition, due to [32], establishes the relationship between the properties of being a critical point and an optimal point.

Proposition 1

Let \(f_{j}\), \(j=1,\ldots ,m\), be continuously differentiable on \(X\subset \mathbb {R}^{n}\). Then,

  1.

    If \(x^{*}\) is locally weak Pareto optimal, then \(x^{*}\) is a critical point of \(f\).

  2.

    If \(f\) is convex and \(x^{*}\) is a critical point of \(f\), then \(x^{*}\) is weak Pareto optimal.

  3.

    If \(f\in C^{2} (\mathbb {R}^{n}, \mathbb {R}^{m})\), the Hessian matrices \(\nabla ^{2}f_{j}(x)\) are positive definite for all \(x\in \mathbb {R}^{n}\) and \(j=1,\ldots ,m\), and \(x^{*}\) is a critical point of \(f\), then \(x^{*}\) is Pareto optimal.

The idea of our proposed algorithm is straightforward: choose an initial guess \(x^{0}\) and check whether part 1 of Proposition 1 holds. If not, compute a Newton search direction and take a suitable step length from \(x^{0}\) along this direction, which results in a new point; the algorithm is then repeated in this way.

3 On q-Newton type descent direction

We now present the q-Newton descent direction for the multiobjective unconstrained optimization problem. For any point \(x \in \mathbb {R}^{n}\), we denote by \(d_{Nq}(x)\) the Newton direction, defined as the optimal solution of the following problem:

$$\begin{aligned} {\left\{ \begin{array}{ll} \min \;\; \max _{j=1,\ldots ,m} g_{j}(x)^{T}d_{Nq}(x)+\frac{1}{2}d_{Nq}(x)^{T}D_{q}\nabla f_{j}(x)d_{Nq}(x)\\ \text {subject to} \;\; d_{Nq}(x)\in \mathbb {R}^{n}. \end{array}\right. } \end{aligned}$$
(10)

We consider the symmetric counterpart \(\bar{D}_{q}\) of \(D_{q}\) as

$$\begin{aligned} \bar{D}_{q}=\frac{1}{2}(D_{q}+D_{q}^{T}). \end{aligned}$$
(11)
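Applied to the example (8), the symmetrization (11) yields (our computation)

$$\begin{aligned} \bar{D}_{q}\nabla f(x)=\begin{bmatrix} 6 & -\frac{5}{2}(q^{2}+q+4)x_{2}^{2} \\ -\frac{5}{2}(q^{2}+q+4)x_{2}^{2} & -15(1+q)x_{1}x_{2} \end{bmatrix}, \end{aligned}$$

which is symmetric by construction.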

In addition, \(D_{q}\nabla f_{j}\), \(j=1,\ldots ,m\), may not be positive definite for some q. Therefore, we work with the symmetric counterpart \(\bar{D}_{q}\nabla f_{j}\), \(j=1,\ldots ,m\), and assume that \(\bar{D}_{q}\nabla f_{j}\) is positive definite in the vicinity of \(x^{*}.\) The optimal value of problem (10) is given as:

$$\begin{aligned} \theta (x):=\inf _{d_{Nq}\in \mathbb {R}^{n}}\;\max _{j=1,\ldots ,m}\Big ( g_{j}(x)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}\Big ). \end{aligned}$$
(12)

For multiobjective optimization, the Newton search direction [32] is obtained by minimizing the maximum of these quadratic approximations, which is given as:

$$\begin{aligned} d_{Nq}(x):=\mathop {\text {arg min}}_{d_{Nq}\in \mathbb {R}^{n}}\;\max _{j=1,\ldots ,m}\Big (g_{j}(x)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}\Big ). \end{aligned}$$
(13)

Problem (10) is non-smooth, but it involves a quadratic approximation of each objective function.

The above problem becomes a convex quadratic programming problem if every objective function is strongly convex. Therefore, such a problem always has a unique minimizer, and it can be written as:

$$\begin{aligned} P(x):{\left\{ \begin{array}{ll} \min \qquad \quad \Gamma (x)\\ {\text {subject to}\,\; g_{j}(x)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}\le \Gamma (x),\; 1\le j\le m},\\ \qquad \quad \qquad (\Gamma (x),d_{Nq})\in \mathbb {R}\times \mathbb {R}^{n}. \end{array}\right. } \end{aligned}$$

Thus,

$$\begin{aligned} \Gamma (x):=\min _{d_{Nq}\in \mathbb {R}^{n}}\;\max _{j=1,\ldots ,m}\Big (g_{j}(x)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}\Big ). \end{aligned}$$
(14)

Also, note that for \(m=1\), the Newton direction \(d_{Nq}(x)\) becomes the classical Newton direction for scalar optimization problems.
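For instance, when \(m=1\) the maximum in (13) disappears and, assuming \(\bar{D}_{q}\nabla f_{1}(x)\) is positive definite, the unconstrained quadratic minimization gives

$$\begin{aligned} d_{Nq}(x)=\mathop {\text {arg min}}_{d\in \mathbb {R}^{n}}\Big (g_{1}(x)^{T}d+\frac{1}{2}d^{T}\bar{D}_{q}\nabla f_{1}(x)d\Big )=-\big [\bar{D}_{q}\nabla f_{1}(x)\big ]^{-1}g_{1}(x), \end{aligned}$$

which has the form of the classical Newton step with \(\bar{D}_{q}\nabla f_{1}(x)\) in place of the Hessian, and agrees with (21) when \(\lambda _{1}=1\).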

For \(x\in \mathbb {R}^{n}\), a necessary condition for Pareto optimality is given in [1] and is used for steepest descent-like methods in the multiobjective case in [33, 34]; it is modified here as:

$$\begin{aligned} \mathfrak {R}(\bar{D}_{q}\nabla f_{j}(x))\cap (-\mathbb {R}^{m}_{++})=\emptyset ,\;\forall \;j=1,\ldots ,m. \end{aligned}$$
(15)

Note that P(x) has a unique solution, which can be obtained using the Karush–Kuhn–Tucker (KKT) optimality conditions. The Lagrange function of problem P(x) is:

$$\begin{aligned} L(\Gamma ,d_{Nq}; \lambda )=\Gamma (x)+\sum _{j=1}^{m}\lambda _{j}\bigg (g_{j}(x)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}-\Gamma (x)\bigg ), \end{aligned}$$
(16)

where \(\lambda _{j}\ge 0\) are Lagrange multipliers. The corresponding KKT optimality conditions for P(x) are given as:

$$\begin{aligned} \sum _{j=1}^{m}\lambda _{j}\big (g_{j}(x)+\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x)\big )&=0,\end{aligned}$$
(17)
$$\begin{aligned} \sum _{j=1}^{m}\lambda _{j}&=1, \end{aligned}$$
(18)
$$\begin{aligned} \lambda _{j} \ge 0,\;\;g_{j}(x)^{T}d_{Nq}(x)+\frac{1}{2}d_{Nq}(x)^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x)&\le \Gamma (x),\;\;\forall \;j=1,\ldots ,m, \end{aligned}$$
(19)
$$\begin{aligned} \lambda _{j}\bigg (g_{j}(x)^{T}d_{Nq}(x)+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x)-\Gamma (x)\bigg )&=0,\;\forall \;j=1,\ldots ,m. \end{aligned}$$
(20)

Suppose \(d_{Nq}(x)\) satisfies (17)–(20) with Lagrange multipliers \(\lambda _{j}\), where \(j=1,\ldots ,m\). The optimal value of P(x) is \(\Gamma (x).\) In particular, from (17), we obtain the following:

$$\begin{aligned} d_{Nq}(x)=-\left[ \sum _{j=1}^{m}\lambda _{j}\bar{D}_{q}\nabla f_{j}(x)\right] ^{-1}\sum _{j=1}^{m}\lambda _{j}g_{j}(x). \end{aligned}$$
(21)

Theorem 1

For any noncritical point \(x\in \mathbb {R}^{n},\) the Newton direction \(d_{Nq}(x)\), as defined in (21) is a descent direction at x.

Proof

Note that, from (14), taking \(d_{Nq}=0\) shows that \(\Gamma (x)\le 0.\) Suppose x is not a critical point of f; then we must have

$$\begin{aligned} \mathfrak {R}(\bar{D}_{q}\nabla f_{j}(x))\cap (-\mathbb {R}^{m}_{++})\ne \emptyset ,\;\forall \;j=1,\ldots ,m. \end{aligned}$$

Thus, there exists \(d(x)\in \mathbb {R}^{n}\) such that \(g_{j}(x)^{T}d(x)<0\) for all \(j=1,\ldots ,m\). Taking the direction \(\gamma d(x)\) for \(\gamma \in (0,1)\) in (14), we get

$$\begin{aligned} \Gamma (x)&\le \max _{j=1,\ldots ,m}\Big (g_{j}(x)^{T}\gamma d(x)+\frac{1}{2}\gamma d(x)^{T}\bar{D}_{q}\nabla f_{j}(x)\,\gamma d(x)\Big ) \\&=\gamma \max _{j=1,\ldots ,m}\Big ( g_{j}(x)^{T} d(x)+\frac{\gamma }{2}\, d(x)^{T}\bar{D}_{q}\nabla f_{j}(x)\, d(x)\Big ). \end{aligned}$$

Therefore, for \(\gamma >0\) sufficiently small, the right-hand side is negative, that is, \(\Gamma (x)<0\). Since \(\bar{D}_{q}\nabla f_{j}(x)\) is a positive definite matrix and \(d_{Nq}(x)\ne 0,\) then

$$\begin{aligned} g_{j}(x)^{T}d_{Nq}(x)< g_{j}(x)^{T}d_{Nq}(x)+\frac{1}{2}d_{Nq}(x)^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x)\le \Gamma (x)<0. \end{aligned}$$
(22)

Thus, \(g_{j}(x)^{T}d_{Nq}(x)<0\) for all \(j=1,\ldots ,m\); hence, \(d_{Nq}(x)\) is a descent direction.

This completes the proof. \(\square \)

Remark 1

We say that \(x^{*}\) is a critical point of \(f_{j}\) where \(j=1,\ldots ,m\) if and only if \(\Gamma (x^{*})=0,\) and \(d_{Nq}(x^{*})=0.\)

Theorem 2

Let the function \(d_{Nq}:X\rightarrow \mathbb {R}^{n}\) given by (21) be bounded on compact sets, and let \(\Gamma :X\rightarrow \mathbb {R}\) be given by (14). Then \(\Gamma \) is continuous on X.

Proof

Let \(Y\subset X\) be a compact set. For any \(x\in X\), we have \(\Gamma (x)\le 0\) due to part 1 of Lemma 3.2 in [32]; then

$$\begin{aligned} g_{j}(x)^{T}d_{Nq}(x)+\frac{1}{2}d_{Nq}(x)^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x)\le 0. \end{aligned}$$

We obtain

$$\begin{aligned} -\frac{1}{2}d_{Nq}(x)^{T}\bar{D}_{q}\nabla f_{j}(x)d_{Nq}(x) \ge g_{j}(x)^{T}d_{Nq}(x). \end{aligned}$$
(23)

Note that \(f_{j}\) is continuously differentiable and the symmetrized q-Hessians \(\bar{D}_{q}\nabla f_{j}\) are assumed to be positive definite (see Sect. 3), so there exist \(\kappa \) and \(\lambda >0\) such that

$$\begin{aligned} \kappa =\max _{x\in Y,\, j=1,\ldots ,m}\Vert g_{j}(x) \Vert , \end{aligned}$$
(24)

and

$$\begin{aligned} \lambda =\min _{y\in Y,\Vert \theta \Vert =1}\theta ^{T}\bar{D}_{q}\nabla f_{j}(y)\theta , \end{aligned}$$
(25)

where \(j=1,\ldots ,m\). Combining (23)–(25) and using the Cauchy–Schwarz inequality, for \(x\in Y\) and \(j=1,\ldots ,m\), we get

$$\begin{aligned} \frac{\lambda }{2}\, d_{Nq}(x)^{T} d_{Nq}(x)\le \Vert g_{j}(x) \Vert \Vert d_{Nq}(x) \Vert \le \kappa \Vert d_{Nq}(x) \Vert . \end{aligned}$$
(26)

Therefore,

$$\begin{aligned} \Vert d_{Nq}(x)\Vert \le \frac{2\kappa }{\lambda }, \end{aligned}$$
(27)

for all \(x\in Y.\) Since any point of X lies in the interior of a compact subset of X, it suffices to show the continuity of \(\Gamma \) on an arbitrary compact set \(Y\subset X\). For \(y\in Y\) and \(j=1,\ldots ,m\), define \(\phi _{y,j}:Y\rightarrow \mathbb {R}\) by

$$\begin{aligned} \phi _{y,j}(\zeta )= g_{j}( \zeta )^{T}d_{Nq}(y)+\frac{1}{2}d_{Nq}(y)^{T}\bar{D}_{q}\nabla f_{j}( \zeta )d_{Nq}(y). \end{aligned}$$
(28)

Since \(d_{Nq}\) is bounded on Y and \(g_{j}\), \(\bar{D}_{q}\nabla f_{j}\) are continuous, the family \(\{\phi _{y,j}: y\in Y,\; j=1,\ldots ,m\}\) is equicontinuous. Hence, given \(\epsilon >0\), for \(\zeta \) sufficiently close to y we have \(|\phi _{y,j}(\zeta )-\phi _{y,j}(y)|<\epsilon \) for all j, and therefore

$$\begin{aligned} \Gamma ( \zeta )\le \max _{j=1,\ldots ,m}\phi _{y,j}( \zeta )\le \max _{j=1,\ldots ,m}\phi _{y,j}(y)+\epsilon =\Gamma (y)+\epsilon , \end{aligned}$$

that is,

$$\begin{aligned} \Gamma (\zeta )-\Gamma (y)\le \epsilon . \end{aligned}$$

Interchanging the roles of \(\zeta \) and y gives \(|\Gamma (\zeta )-\Gamma (y)|\le \epsilon \), and we conclude that \(\Gamma \) is continuous on X.

This completes the proof. \(\square \)

4 On q-Newton unconstrained multiobjective algorithm and convergence

On the basis of the theory described in the previous section, we present the q-Newton unconstrained multiobjective algorithm for solving (MOUP) using the q-derivative. We examine \(\Gamma (x)\) and obtain the Newton direction \(d_{Nq}(x)\) at each non-critical point. The step length is determined by means of an inexact Armijo condition with a backtracking line search. The algorithm for finding a critical point/Pareto front is given below.

Algorithm 1
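The following MATLAB sketch records one possible reading of Algorithm 1; it is an illustrative reconstruction, not the authors' listing. The helper qjac is sketched in Sect. 2, a possible fmincon-based qnewton_direction is sketched in Sect. 5, the interface of cell arrays of objective and gradient handles is our own, and the backtracking test uses \(\Gamma (x^{k})\) as the decrease measure, which simplifies (29).

```matlab
% Illustrative sketch of Algorithm 1 (not the authors' code).
function x = qnewton_moup(f, g, x0, q, lb, ub, c1, tol, maxit)
% f, g : cell arrays of objective handles f_j and gradient handles g_j
    x = x0(:);  m = numel(f);  n = numel(x);
    for k = 1:maxit
        grads = zeros(n, m);  Bq = zeros(n, n, m);
        for j = 1:m
            grads(:, j) = g{j}(x);
            H = qjac(g{j}, x, q);              % q-Hessian D_q grad f_j(x), cf. (7)
            Bq(:, :, j) = 0.5 * (H + H');      % symmetrization (11)
        end
        [d, Gamma] = qnewton_direction(x, grads, Bq, lb, ub);
        if abs(Gamma) < tol                    % stopping rule |Gamma(x^k)| < tol,
            break;                             % i.e. x is approximately critical
        end
        % Backtracking line search; Gamma(x^k) stands in for the weighted
        % sum in (29) (a simplification for this sketch).
        alpha = 1;
        while alpha > 1e-10 && ...
              any(cellfun(@(fj) fj(x + alpha * d) - fj(x), f) > c1 * alpha * Gamma)
            alpha = alpha / 2;
        end
        x = x + alpha * d;                     % next iterate x^{k+1}
    end
end
```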

We now present the convergence theorem for Algorithm 1. Observe that if Algorithm 1 terminates after a finite number of iterations, then it terminates at a Pareto critical point. The following theorem is a modification of [35].

Theorem 3

Let \(f_{j}\) be continuously differentiable on a compact set \(X\subset \mathbb {R}^{n}\) for \(j=1,\ldots ,m\), let \(\{x^{k}\}\) be the sequence generated by \(x^{k+1}=x^{k}+\alpha _{k}d_{Nq}(x^{k})\) in Algorithm 1, and let \(\alpha _{k}\) satisfy

$$\begin{aligned} f_{j}\left( x^{k+1}\right) - f_{j}\left( x^{k}\right) \le c\alpha _{k}\sum _{j=1}^{m}\lambda _{j}^{k}\left( g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right) , \end{aligned}$$
(29)

for all \(j=1,\ldots ,m\). Suppose that \(L_{0}=\{x\in X:f(x)<f(x^{0}) \}\) is bounded and convex, where \(x^{0}\in X\) is an initial guess point, and that \(f_{j}(x)\) is bounded below for at least one \(j\in \{1,\ldots ,m\}\). Then, every accumulation point \(x^{*}\) of \(\{x^{k}\}\) is a critical point of (MOUP).

Proof

We have

$$\begin{aligned} f_{j}\left( x^{k+1}\right) - f_{j}\left( x^{k}\right) \le c\alpha _{k}\sum _{j=1}^{m}\lambda _{j}^{k}\left( g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right) . \end{aligned}$$

Since \(\sum _{j=1}^{m}\lambda _{j}^{k}=1\) and \(\lambda _{j}^{k}\ge 0,\) then

$$\begin{aligned} f_{j}\left( x^{k+1}\right) - f_{j}\left( x^{k}\right) \le c\alpha _{k}\max _{j=1,\ldots m} \left( (g_{j}(x^{k}))^{T} d_{Nq}(x^{k})\right) . \end{aligned}$$

Since \(\bar{D}_{q}\nabla f_{j}(x^{k})\) is positive definite and \(d_{Nq}(x^{k})\ne 0\), then

$$\begin{aligned} f_{j}\left( x^{k+1}\right) -f_{j}\left( x^{k}\right) &< c\alpha _{k} \max _{j=1,\ldots ,m} \Big ( (g_{j}(x^{k}))^{T} d_{Nq}(x^{k}) +\frac{1}{2}d_{Nq}\left( x^{k}\right) ^{T}\bar{D}_{q}\nabla f_{j}(x^{k})d_{Nq}\left( x^{k}\right) \Big )\\&=c\alpha _{k} \Gamma \left( x^{k}\right) . \end{aligned}$$

We obtain

$$\begin{aligned} f_{j}\left( x^{k+1}\right) <f_{j}\left( x^{0}\right) +c\sum _{i=0}^{k}\alpha _{i}\Gamma \left( x^{i}\right) \;\;\text {for all}\;j=1,\ldots ,m. \end{aligned}$$

Fix one \(j_{1}\) from \(j=1,\ldots ,m\) for which \(f_{j}(x)\) is bounded below, so that \(f_{j1}(x)>-\infty \) for all \(x\in X\). Also, \(\{f_{j1}(x^{k})\}\) is a monotonically decreasing sequence which is bounded below, and hence \(f_{j1}(x^{*})>-\infty \). Thus,

$$\begin{aligned} f_{j1}\left( x^{0}\right) -f_{j1}\left( x^{k+1}\right) >-c\sum _{i=0}^{k}\alpha _{i}\Gamma \left( x^{i}\right) . \end{aligned}$$

Taking \(k\rightarrow \infty \) in the above inequality, we get the following:

$$\begin{aligned} c\sum _{i=0}^{\infty }\alpha _{i}\left( -\Gamma (x^{i})\right)<f_{j1}\left( x^{0}\right) -f_{j1}\left( x^{*}\right) <\infty \end{aligned}$$
(30)

We already know that \(\Gamma (x^{i})\le 0\) for all i, and \(c\sum _{i=0}^{\infty }\alpha _{i}(-\Gamma (x^{i}))\) is finite. Thus, we obtain \(c\alpha _{k}(-\Gamma (x^{k}))\rightarrow 0\) as \(k\rightarrow \infty \). Since the step length is bounded above, \(\alpha _{k}\rightarrow \infty \) for some k would imply that \(L_{0}\) is unbounded, which contradicts the assumption. If \(\alpha _{k}\ge \beta \) for all k and for some \(\beta >0\), then we get \(-\Gamma (x^{k})\rightarrow 0\) as \(k\rightarrow \infty \). Note that \(\{x^{k}\}\subset L_{0}\) is a bounded sequence and has at least one accumulation point. Let \(\{P_{1}^{*}, P_{2}^{*},\ldots , P_{r}^{*}\}\) be the set of accumulation points of \(\{x^{k}\}\). Since \(P_{s}^{*}\) is an accumulation point for every \(s\in \{1,2,\ldots , r\}\) and \(\Gamma \) is a continuous function, \(\Gamma (P_{s}^{*})=0\), so \(P_{s}^{*}\) is a critical point of f for every \(s\in \{1,2,\ldots ,r\}\).

This completes the proof. \(\square \)

Theorem 4

Let \(f_{j}\) be continuously differentiable on a compact set \(X\subset \mathbb {R}^{n}\) for every \(j=1,\ldots ,m\), let \(\{x^{k}\}\) be the sequence generated by \(x^{k+1}=x^{k}+\alpha _{k}d_{Nq}(x^{k})\), and suppose that

  1.
    $$\begin{aligned} c_{2}\sum _{j=1}^{m}\lambda _{j}^{k}\left( g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right) \le \sum _{j=1}^{m}\lambda _{j}^{k}g_{j}\left( x^{k+1}\right) d_{Nq}\left( x^{k}\right) , \end{aligned}$$
    (31)
  2.

    \(g_{j}\) are Lipschitz continuous for all \(j=1,\ldots ,m\), and

  3.

    \(\cos ^2\theta _{j}^{k}\ge \delta \) for some \(\delta >0\) and all \(j=1,\ldots ,m\), where \(\theta _j^k\) is the angle between \(d_{Nq}(x^{k})\) and \(g_{j}(x^k)\).

Then, every accumulation point of \(\{x^{k}\}{} \) generated by Algorithm 1 is a weak efficient solution of (MOUP).

Proof

From Theorem 3, we observe that every accumulation point of \(\{x^{k}\}\) is a critical point of \(f_{j}\), where \(j=1,\ldots , m\). Let \(x^{*}\) be an accumulation point of \(\{x^{k}\}\). If there exists \(j_{0}\in \{1,\ldots ,m\}\) with \(g_{j_{0}}(x^{*})=0\), then \(x^{*}\) is a weak efficient solution. From part 2 of Theorem 4, \(g_{j}\) are Lipschitz continuous for all \(j=1,\ldots ,m\). Thus, there exists \(L_{j}>0\) such that \(\Vert g_{j}(x)- g_{j}(y)\Vert \le L_{j}\Vert x-y \Vert \) for \(j=1,\ldots ,m\). From the Cauchy–Schwarz inequality, we have

$$\begin{aligned} \left( g_{j}(x^{k+1})-g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right)&\le \Vert g_{j}\left( x^{k+1}\right) -g_{j}\left( x^{k}\right) \Vert \Vert d_{Nq}\left( x^{k}\right) \Vert \\&\le L_{j}\Vert x^{k+1} -x^{k}\Vert \Vert d_{Nq}(x^{k}) \Vert \\&\le L_{j}\alpha _{k}\Vert d_{Nq}(x^{k}) \Vert ^{2}, \end{aligned}$$

Setting \(L=\max _{j=1,\ldots ,m}L_{j}\), we get

$$\begin{aligned} \left( g_{j}(x^{k+1})-g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right) \le L\alpha _{k}\left\Vert d_{Nq}\left( x^{k}\right) \right\Vert ^{2}. \end{aligned}$$

Thus,

$$\begin{aligned} L\alpha _{k}\left\Vert d_{Nq}\left( x^{k}\right) \right\Vert ^{2}&\ge \max _{j=1,\ldots m}\left( g_{j}(x^{k+1})- g_{j}(x^{k})\right) ^{T} d_{Nq}\left( x^{k}\right) \\&\ge \sum _{j=1}^{m}\lambda _{j}^{k}\left( g_{j}(x^{k+1})-g_{j}(x^{k})\right) ^{T}d_{Nq}\left( x^{k}\right) . \end{aligned}$$

From part 1 of Theorem 4, we get

$$\begin{aligned} L\alpha _{k}\left\Vert d_{Nq}(x^{k})\right\Vert ^{2}&\ge (c_{2}-1)\sum _{j=1}^{m}\lambda _{j}^{k} g_{j}\left( x^{k}\right) ^{T}d_{Nq}\left( x^{k}\right) \\&\ge (c_{2}-1)\max _{j=1,\ldots ,m} g_{j}\left( x^{k}\right) ^{T}d_{Nq}\left( x^{k}\right) . \end{aligned}$$

This implies

$$\begin{aligned} \alpha _{k}\ge \frac{c_{2}-1}{L\Vert d_{Nq}(x^{k}) \Vert ^{2}}\max _{j=1,\ldots ,m} g_{j}\left( x^{k}\right) ^{T}d_{Nq}\left( x^{k}\right) . \end{aligned}$$

Since \(\max _{j=1,\ldots ,m} g_{j}(x^{k})^{T}d_{Nq}(x^{k})<0,\) then

$$\begin{aligned} \alpha _{k}\bigg (\max _{j=1,\ldots ,m} g_{j}(x^{k})^{T}d_{Nq}(x^{k})\bigg )\le \frac{c_{2}-1}{L\Vert d_{Nq}(x^{k}) \Vert ^{2}}\bigg (\max _{j=1,\ldots ,m}g_{j}(x^{k})^{T} d_{Nq}(x^{k})\bigg )^2, \end{aligned}$$

that is,

$$\begin{aligned} -c_{1}\alpha _{k}\bigg (\max _{j=1,\ldots ,m}g_{j}(x^{k})^{T}d_{Nq}(x^{k})\bigg )\ge \frac{c_{1}(1-c_{2})}{L\Vert d_{Nq}(x^{k}) \Vert ^{2}}\min _{j=1,\ldots ,m}\bigg ( g_{j}(x^{k})^{T}d_{Nq}(x^{k})\bigg )^2. \end{aligned}$$

Since \((g_{j}(x^{k})^{T}d_{Nq}(x^{k}))^2=\Vert g_{j}(x^{k})\Vert ^{2}\,\Vert d_{Nq}(x^{k})\Vert ^{2} \cos ^2\theta _{j}^{k}\), then

$$\begin{aligned} -c_{1}\alpha _{k}\bigg (\max _{j=1,\ldots ,m}g_{j}(x^{k})^{T}d_{Nq}(x^{k})\bigg ) \ge \frac{c_{1}(1-c_{2})}{L}\min _{j=1,\ldots ,m}\big [\Vert g_{j}(x^{k})\Vert ^{2}\cos ^{2}\theta _{j}^{k}\big ], \end{aligned}$$

where \(\theta _{j}^{k}\) is the angle between \(g_{j}(x^{k})\) and \(d_{Nq}(x^{k})\). Following the same process as in Theorem 3, we obtain

$$\begin{aligned} \infty > f_{j1}\left( x^{0}\right) -f_{j1}\left( x^{k+1}\right) \ge -c_{1}\sum _{i=0}^{k}\alpha _{i}\max _{j=1,\ldots ,m}\left( g_{j}(x^{i})\right) ^{T}d_{Nq}\left( x^{i}\right) \\=\sum _{i=0}^{k}\alpha _{i}\left( -c_{1}\max _{j=1,\ldots ,m}(g_{j}(x^{i}))^{T}d_{Nq}(x^{i})\right) . \end{aligned}$$

Taking \(k\rightarrow \infty \), we get

$$\begin{aligned} \infty >f_{j1}\left( x^{0}\right) -f_{j1}\left( x^{*}\right) \ge \sum _{i=0}^{\infty }\alpha _{i}\left( -c_{1}\max _{j=1,\ldots ,m}(g_{j}(x^{i}))^{T}d_{Nq}(x^{i})\right) . \end{aligned}$$

Since \(-c_{1}\max _{j=1,\ldots ,m}g_{j}(x^{i})^{T}d_{Nq}(x^{i})>0,\) then

$$\begin{aligned} \alpha _{k}(-c_{1}\max _{j=1,\ldots ,m}(g_{j}(x^{k}))^{T}d_{Nq}(x^{k}))\rightarrow 0\; \text {as}\; k\rightarrow \infty . \end{aligned}$$

We also have

$$\begin{aligned} \frac{c_{1}(1-c_{2})}{L}\min _{j=1,\ldots ,m}[\Vert g_{j}(x^{k}) \Vert ^{2}\cos ^{2}\theta _{j}^{k}]\rightarrow 0\;\;\;\text {as} \;\; k\rightarrow \infty . \end{aligned}$$

Since \(\cos ^{2}\theta _{j}^{k}>\delta \) for \(j=1,\ldots ,m\), then \(\min _{j=1,\ldots ,m}\Vert g_{j}(x^{k})\Vert ^{2}\rightarrow 0\) as \(k\rightarrow \infty \). Fix any \(j_{0}\) from \(j=1,\ldots ,m\) such that \(\Vert g_{j0}(x^{k}) \Vert ^{2}\rightarrow 0\) as \(k\rightarrow \infty \). Since \(\Vert g_{j0}(x) \Vert \) is a continuous function and \(\Vert g_{j0}(x^{k})\Vert \rightarrow 0\) as \(k\rightarrow \infty \), then \(g_{j0}(x^{*})=0\) for every accumulation point \(x^{*}\) of \(\{x^{k}\}\). Thus, \(x^{*}\) is a local weak efficient solution.

This completes the proof. \(\square \)

Remark 2

Algorithm 1 is also applicable to non-convex functions. It is important to note that the weak efficient solution of a multiobjective unconstrained optimization problem is not unique. Thus, if Algorithm 1 is executed from an arbitrary initial point, the user may obtain any one of these weak efficient points, while the q-gradient allows the descent direction to shift rapidly between global and local behaviour. All three assumptions of Theorem 4 must be satisfied for every accumulation point of the sequence \(\{x^{k} \}\) to be a weak efficient point of (MOUP); otherwise, an accumulation point is only guaranteed to be a critical point when the assumptions of Theorem 3 are satisfied.

5 Numerical examples

In this section, Algorithm 1 is verified and compared with existing methods using some numerical problems from different sources. MATLAB (2019a) code is developed for Algorithm 1. To avoid unbounded solutions, the following subproblem is solved:

$$\begin{aligned} \bar{P}(x^k): {\left\{ \begin{array}{ll} \min \qquad \quad \Gamma (x^k)\\ \text {subject to}\;\;g_{j}(x^k)^{T}d_{Nq}+\frac{1}{2}d_{Nq}^{T}\bar{D}_{q}\nabla f_{j}(x^k)d_{Nq}\le \Gamma (x^k),\;1\le j\le m,\\ \qquad \quad \qquad lb\le x^k+d_{Nq}\le ub,\\ \qquad \quad \qquad (\Gamma (x^k),d_{Nq})\in \mathbb {R}\times \mathbb {R}^{n}, \end{array}\right. } \end{aligned}$$

where \(lb\) and \(ub\) are the lower and upper bounds of x. The solution of \(\bar{P}(x^k)\) is not a descent direction if \(\bar{D}_{q}\nabla f_{j}(x^k)\) is not positive definite for all j. In such cases, an approximation \(\tilde{D}_{q}\nabla f_{j}(x^k)=\bar{D}_{q}\nabla f_{j}(x^k)+E(x^k)\) is used, where \(E(x^k)\) is a diagonal matrix obtained using the modified Cholesky factorization algorithm developed in [36]. The subproblem \(\bar{P}(x^k)\) is solved using the MATLAB function ‘fmincon’ with the options ‘Algorithm’ set to ‘interior-point’, ‘SpecifyObjectiveGradient’, and ‘SpecifyConstraintGradient’. Also, \(|\Gamma (x^k)|<10^{-5}\) or a maximum of 200 iterations is used as the stopping criterion.
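A possible fmincon-based realization of this subproblem solver is sketched below; the epigraph variable \(z=(\Gamma ,d_{Nq})\), the helper name qnewton_direction, and the data layout (grads is \(n\times m\) with columns \(g_{j}(x^{k})\), Bq is \(n\times n\times m\) with the symmetrized q-Hessians, already corrected by \(E(x^{k})\) if needed) are our own illustrative choices, not the authors' code.

```matlab
% Illustrative sketch (not the authors' code): solve Pbar(x^k) with fmincon.
function [d, Gamma] = qnewton_direction(x, grads, Bq, lb, ub)
    [n, m] = size(grads);
    z0   = zeros(n + 1, 1);                        % z = [Gamma; d]
    zlb  = [-inf; lb(:) - x(:)];                   % enforces lb <= x + d <= ub
    zub  = [ inf; ub(:) - x(:)];
    opts = optimoptions('fmincon', 'Algorithm', 'interior-point', ...
        'SpecifyObjectiveGradient', true, 'SpecifyConstraintGradient', true, ...
        'Display', 'off');
    z = fmincon(@(z) objfun(z, n), z0, [], [], [], [], zlb, zub, ...
        @(z) confun(z, grads, Bq, n, m), opts);
    Gamma = z(1);  d = z(2:end);
end

function [f, gradf] = objfun(z, n)
    f = z(1);                                      % minimize Gamma
    gradf = [1; zeros(n, 1)];
end

function [c, ceq, gc, gceq] = confun(z, grads, Bq, n, m)
    d = z(2:end);
    c = zeros(m, 1);  gc = zeros(n + 1, m);        % gc: one column per constraint
    for j = 1:m
        Bd   = Bq(:, :, j) * d;
        c(j) = grads(:, j)' * d + 0.5 * (d' * Bd) - z(1);
        gc(:, j) = [-1; grads(:, j) + Bd];
    end
    ceq = [];  gceq = [];
end
```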

It is important to note that the weak efficient solution of a multiobjective optimization problem is not unique. Thus, if the user starts at any initial point and executes the algorithm, then the user may reach one of the weak efficient points. The weighting method is one of the most attractive procedures for solving multiobjective optimization problems, since it reduces the original problem to a family of scalar minimization problems. We first verify the steps of Algorithm 1 for obtaining a critical point with the following example:

Example 1

Consider the multiobjective optimization problem:

\(\underset{x\in \mathbb {R}^2}{\min } (f_1(x),f_2(x))\), where

\(f_{1}(x)={\left\{ \begin{array}{ll} (x_{1}-1)^{3}\sin \frac{1}{x_{1}-1}+(x_{1}-1)^{2}+x_{1}(x_{2}-1)^{4},\;\;\;\text {if}\; x_{1}\ne 1,\\ (x_{2}-1)^{4},\;\;\text {if}\;\;x_{1}=1. \end{array}\right. }{} \)

\(f_{2}(x)= x_1^2+x_2^2.\)

Note that \(\frac{\partial ^{2}f_{1}}{\partial x_{i}^{2}}\) (\(i=1,2\)) does not exist at the point \((1,1)^T\), which indicates that \(f_{1}\) is not twice differentiable. Thus, the second order sufficient condition cannot be applied to justify the existence of the minimizer, as in the case of higher order numerical optimization methods, and the classical Newton algorithm cannot be applied. However, the \(q\)-derivative can be applied as described below. For \(q\ne 1\),

$$\begin{aligned} \bar{D}_{q}\nabla f_{1}(1,1)=\begin{bmatrix} 3(q-1)\sin \frac{1}{q-1} & \frac{(q-1)^{3}}{2} \\ \frac{(q-1)^{3}}{2} & 4(q-1)^{2} \end{bmatrix}. \end{aligned}$$

Note that \(\bar{D}_{q}\nabla f_{1}(1,1)\) is positive definite when the principal minors are positive, that is,

$$\begin{aligned} 3(q-1)\sin \frac{1}{q-1}>0, \end{aligned}$$

and

$$\begin{aligned} det(\bar{D}_{q}\nabla f_{1}(1,1))=12\sin \frac{1}{q-1}-\frac{(q-1)^{3}}{4}>0. \end{aligned}$$

In particular one may observe that, for

$$\begin{aligned} q\in (0,1)\cap \bigg (1+\frac{1}{2k\pi }, 1+\frac{1}{(2k+1)\pi }\bigg ), \end{aligned}$$

where \(k\in \mathbb {Z}^{-}\), the above two inequalities hold. Therefore, for this selection of q, \(\bar{D}_{q}\nabla f_{1}(1,1)\) is positive definite. We have solved the problem using Algorithm 1 with the initial approximation \(x^0=(1.6,1.5)^T\), initial parameters \(q=0.93\), \(c_1=10^{-4}\), \(c_2=0.9\), \(\delta =10^{-3}\) and error of tolerance \(\epsilon =10^{-5}\). We obtain \(f(x^0)=(f_1(x^0),f_2(x^0))=(0.6750,4.81)^T\), \(g_1(x^0)=(2.1491,0.5814)^T\), \(g_2(x^0)=(3.0880,2.8950)^T\), \(\bar{D}_{q}\nabla f_{1}(x^0)= \begin{bmatrix} 4.4794 & 0.3634 \\ 0.3634 & 2.9585 \end{bmatrix}\) and \(\bar{D}_{q}\nabla f_{2}(x^0)=\begin{bmatrix} 1.93 & 0 \\ 0 & 1.93 \end{bmatrix}\). Both \(\bar{D}_{q}\nabla f_{1}(x^0)\) and \(\bar{D}_{q}\nabla f_{2}(x^0)\) are positive definite, and hence the solution of \(P(x^0)\) is a descent direction of f. The solution of \(P(x^0)\) is obtained as \(\Gamma (x^0)=-0.5438\) and \(d_{Nq}(x^0)=( -0.4685,-0.1390)^T\). Since \(\cos ^2(\theta _{1}^{0})=0.9994>\delta \) and \(\cos ^2(\theta _{2}^{0})=0.7991>\delta \) with \(\alpha _0=1\) satisfying (29) and (31), the next iterate is given as \(x^{1}=x^0+\alpha _0 d_{Nq}(x^0)=(1.1315,1.3610)^T\). Clearly, we have \(f(x^1)=(0.0387,3.1327)^T<f(x^0)\). The final solution is obtained as \(x^{*}=(1.0365,1.0412)^{T}\) after 5 iterations, using the stopping criterion \(|\Gamma (x^k)|<10^{-5}\). It can also be verified that \(x^{*}\) is an approximate weak efficient solution of f by the weighted sum method with weight \(w=(1,0)\).

Generate approximate Pareto front Multiobjective optimization problems have no single isolated minimum point but a set of efficient points. We consider a multi-start technique to generate an approximate Pareto front. A set of 100 uniformly distributed random points is generated between \(lb\) and \(ub\), and the proposed algorithm is executed from every initial point. The approximate Pareto front generated by Algorithm 1 is compared with that of the weighted sum method using the following two test problems [37]:

$$\begin{aligned} (BK1):\quad \min \quad \quad \quad&\big (x_1^2+x_2^2\;,\;(x_1-5)^2+(x_2-5)^2\big ),\\ \,\,\text {subject to }&-5\le x_1,x_2\le 10, \end{aligned}$$

and

$$\begin{aligned} (IM1):\quad&\min \,\,\,\quad \quad \big ( 2\sqrt{x_1}\;,\;x_1(1-x_2)+5\big ),\\&\text {subject to }\, 1\le x_1\le 4,~~1\le x_2\le 2. \end{aligned}$$

The single objective \(q\)-Newton method developed in [31] is used to solve the single objective problems arising in the weighted sum method. We have considered the weights \((1,0)\), \((0,1)\), and 98 different random positive weights. The approximate Pareto fronts of the test problems (BK1) and (IM1) are provided in Fig. 1. One can observe that Algorithm 1 provides approximate Pareto fronts for both (BK1) and (IM1), whereas the weighted sum method fails to generate the approximate Pareto front for (IM1).
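As a usage illustration only (reusing the hypothetical helpers qnewton_moup and qnewton_direction sketched above, with q and \(c_1\) taken from Example 1 and the stopping rule stated earlier), a multi-start run on the bi-objective problem (BK1) could be organized as follows:

```matlab
% Illustrative multi-start driver for (BK1); not the authors' experimental code.
rng(0);                                            % reproducible random starts
lb = [-5; -5];  ub = [10; 10];
f  = {@(x) x(1)^2 + x(2)^2, @(x) (x(1) - 5)^2 + (x(2) - 5)^2};
g  = {@(x) [2*x(1); 2*x(2)], @(x) [2*(x(1) - 5); 2*(x(2) - 5)]};
F  = zeros(100, 2);
for s = 1:100
    x0 = lb + rand(2, 1) .* (ub - lb);             % uniform start in [lb, ub]
    xs = qnewton_moup(f, g, x0, 0.93, lb, ub, 1e-4, 1e-5, 200);
    F(s, :) = [f{1}(xs), f{2}(xs)];
end
plot(F(:, 1), F(:, 2), 'o');                       % approximate Pareto front
xlabel('f_1');  ylabel('f_2');
```

The nondominated points among the rows of F give an approximate Pareto front comparable to the one reported in Fig. 1.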

Fig. 1 Approximate Pareto fronts of BK1 and IM1

Comparison with three existing schemes Algorithm 1 (q-QN) is compared with the quasi-Newton methods for multiobjective optimization developed in [35] (QN1), [38] (QN2), and [39] (QN3). A set of bound constrained test problems is collected from different sources and solved using these methods. All algorithms are executed, and computational details are provided in Table 1. In this table, ‘It’, ‘\(\# F\)’ and ‘\(\# G\)’ denote the total number of iterations, function evaluations, and gradient evaluations, respectively. The total Hessian count in Algorithm 1 equals ‘It’. In (QN2) and (QN3), the total number of gradient evaluations equals ‘It’. One can observe from Table 1 that Algorithm 1 takes fewer iterations than the other methods in most cases (the lowest numbers of iterations are indicated in bold). Table 1 also shows that (\(q\)-QN) has a significant improvement over (QN1), (QN2) and (QN3) with respect to the number of objective function evaluations and gradient evaluations in most cases. The methods (QN1), (QN2) and (QN3) update positive definite Hessian approximations for all \(f_{j}\), where \(j=1,\ldots ,m\), but in (\(q\)-QN) we solve the subproblem of (MOUP) by updating the q-Hessian generated by the q-derivative, which takes larger steps toward the weak efficient solutions/critical points. Hence, the q-Newton method uses better approximations of the objective functions than (QN1), (QN2) and (QN3) for solving the subproblem. Thus, from the numerical results, (\(q\)-QN) is superior to the other existing methods considered in this paper.

Table 1 Computation details

6 Conclusion

In this paper, the \(q\)-calculus is used in Newton’s method for solving multiobjective unconstrained optimization problems for which the existence of second order partial derivatives at every point is not required. We have given the algorithm and proved its convergence. The sequence generated by the method converges quadratically in the vicinity of the solution, where the Newton direction is used; moreover, quadratic convergence holds when the second derivatives are Lipschitz continuous. The \(q\)-gradient enables the search to be carried out over a more diverse set of directions. Numerical results show that the proposed method is more efficient than the other methods considered for solving multiobjective unconstrained optimization problems.