1 Introduction

Newton-like methods are widely used for approximating a locally unique solution \(x^*\) of the equation

$$\begin{aligned} F(x)=0, \end{aligned}$$
(1.1)

where \(F:D\subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a differentiable nonlinear function and D is a convex subset of \({\mathbb {R}}.\) These methods are studied from two viewpoints: semi-local and local convergence [2, 3, 15, 16, 20].

Methods such as Euler’s, Halley’s, super-Halley’s and Chebyshev’s [2–5, 8, 12, 17, 20] require the evaluation of the second derivative \(F''\) at each step. To avoid this computation, many authors have turned to higher order multi-point methods [1, 2, 6, 9–11, 14, 16, 19, 20].

Newton’s method is undoubtedly the most popular method for approximating a locally unique solution \(x^*,\) provided that the initial point is close enough to the solution. In order to obtain a higher order of convergence, Newton-like methods such as the Potra–Pták, Chebyshev, Cauchy, Halley and Ostrowski methods have been studied. The number of function evaluations per step increases with the order of convergence. In the scalar case the efficiency index [13, 16, 20] \(EI=p^{\frac{1}{m}}\) provides a measure of the balance between order and cost, where p is the order of the method and m is the number of function evaluations per step.

According to the Kung–Traub conjecture, the order of convergence of any multi-point method without memory that uses m function evaluations per step cannot exceed the upper bound \(2^{m-1}\) [16, 20] (called the optimal order). Hence the optimal order for a method with three function evaluations per step is 4. The corresponding efficiency index is \(EI= 4^{\frac{1}{3}}=1.58740\ldots ,\) which is better than the index \(EI= 2^{\frac{1}{2}}=1.414\ldots \) of Newton’s method. Therefore, the study of new optimal methods of order four is important.
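As a quick illustration, the two indices above can be reproduced with a few lines of Python (a minimal sketch; the orders and evaluation counts are the ones quoted in the text):

```python
# Efficiency index EI = p**(1/m) for a method of order p
# that uses m function evaluations per step.
for name, p, m in [("Newton", 2, 2), ("optimal fourth order", 4, 3)]:
    print(f"{name}: EI = {p ** (1 / m):.5f}")
# Newton: EI = 1.41421
# optimal fourth order: EI = 1.58740
```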

We study the local convergence analysis of the three-step King-like method with a parameter, defined for each \(n=0,1,2,\ldots \) by

$$\begin{aligned} y_n= & {} x_n-\frac{F(x_n)}{F'(x_n)},\nonumber \\ z_n= & {} y_n-\frac{F(y_n)F'(x_n)^{-1}F(x_n)}{F(x_n)-2F(y_n)},\nonumber \\ x_{n+1}= & {} z_n-\frac{(F(x_n)+\alpha F(y_n))F'(x_n)^{-1}F(z_n)}{F(x_n)+(\alpha -2)F(y_n)}, \end{aligned}$$
(1.2)

where \(x_0\in D \) is an initial point and \( \alpha \in {\mathbb {R}}\) is a parameter. Sharma et al. [19] showed the sixth order of convergence of method (1.2) using Taylor expansions and hypotheses reaching up to the fourth derivative of function F, although only the first derivative appears in method (1.2). These hypotheses limit the applicability of method (1.2). As a motivational example, let us define function F on \(D=[-\frac{1}{2},\frac{5}{2}]\) by

$$\begin{aligned} F(x)=\left\{ \begin{array}{ll} x^3\ln x^2+ x^5-x^4,\quad x\ne 0\\ 0 ,\,\,\, x=0. \end{array}\right. \end{aligned}$$

Choose \(x^*=1.\) We have that

$$\begin{aligned} F'(x)= & {} 3x^2\ln x^2+5x^4-4x^3+2x^2,\,\, F'(1)=3,\\ F''(x)= & {} 6x\ln x^2+20x^3-12x^2+10x,\\ F'''(x)= & {} 6\ln x^2+60x^2-24x+22. \end{aligned}$$

Then, obviously, function F does not have a bounded third derivative on D, since \(F'''(x)\) is unbounded at \(x=0\in D.\) The results in [19] require that all derivatives up to the fourth order be bounded. Therefore, the results in [19] cannot be used to show the convergence of method (1.2). However, our results apply (see Example 3.3). Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations defined on \({\mathbb {R}}\) [1, 2, 5–7, 9–11, 14, 16, 19, 20]. These results show that if the initial point \(x_0\) is sufficiently close to the solution \(x^*,\) then the sequence \(\{x_n\}\) converges to \(x^*.\) But how close to the solution \(x^*\) should the initial guess \(x_0\) be? Such local results give no information on the radius of the convergence ball of the corresponding method. We address this question for method (1.2) in Sect. 2; the same technique can be applied to other methods. In the present study we extend the applicability of these methods by using hypotheses involving only the first derivative of function F, together with contraction arguments. Moreover, we avoid Taylor expansions and use Lipschitz parameters instead. Indeed, Taylor expansions and higher order derivatives are needed to obtain the local error equation and the order of convergence of a method. Using our technique we compute instead the computational order of convergence (COC) or the approximate computational order of convergence, neither of which requires Taylor expansions or higher order derivatives (see Remark 2.2, part 4). Moreover, using the Lipschitz constants we determine the radius of convergence of method (1.2). Notice also that the local error estimates in [19] cannot be used to determine the radius of convergence of method (1.2).
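For illustration, here is a minimal Python sketch of iteration (1.2) applied to the motivational example; the function name `king_like`, the parameter value \(\alpha =0.5\) and the starting point \(x_0=1.0005\) are our own choices (the starting point lies inside \(U(x^*, r)\) for the radius \(r=0.0009\) computed in Example 3.3):

```python
import math

def F(x):
    # motivational example on D = [-1/2, 5/2]
    return x**3 * math.log(x**2) + x**5 - x**4 if x != 0 else 0.0

def Fprime(x):
    return 3 * x**2 * math.log(x**2) + 5 * x**4 - 4 * x**3 + 2 * x**2

def king_like(F, Fp, x, alpha, tol=1e-12, maxit=20):
    """One possible transcription of method (1.2)."""
    for n in range(maxit):
        fx, dfx = F(x), Fp(x)
        if abs(fx) < tol:
            return x, n
        y = x - fx / dfx                                   # first sub-step
        fy = F(y)
        z = y - fy * fx / (dfx * (fx - 2 * fy))            # second sub-step
        fz = F(z)
        x = z - (fx + alpha * fy) * fz / (dfx * (fx + (alpha - 2) * fy))
    return x, maxit

print(king_like(F, Fprime, x=1.0005, alpha=0.5))  # converges to x* = 1
```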

We do not address the global convergence of the three-step King-like method (1.2) in this study. Notice, however, that the global convergence of King’s method [drop the third step of method (1.2) to obtain King’s method] has not been studied either. This is mainly because these methods are considered special cases of Newton-like methods, for which there are many results (see e.g., [15]). Therefore, one can simply specialize global convergence results for Newton-like methods to obtain the corresponding results for method (1.2) or King’s method.

The paper is organized as follows. In Sect. 2 we present the local convergence analysis. We also provide a radius of convergence, computable error bounds and a uniqueness result, none of which were given in the earlier studies based on Taylor expansions. Special cases and numerical examples are presented in the concluding Sect. 3.

2 Local convergence analysis

We present the local convergence analysis of method (1.2) in this section. Let \(L_0 >0, L >0, M \ge 1\) be given parameters. It is convenient for the local convergence analysis of method (1.2) that follows to introduce some scalar functions and parameters. Define functions \(g_1, \, p,\, h_p,\, q,\, h_q\) on the interval \([0, \frac{1}{L_0})\) by

$$\begin{aligned} g_1(t)= & {} \frac{Lt}{2(1-L_0t)},\\ p(t)= & {} \frac{L_0t}{2}+2Mg_1(t),\\ h_p(t)= & {} p(t)-1,\\ q(t)= & {} \frac{L_0t}{2}+|\alpha -2|Mg_1(t),\\ h_q(t)= & {} q(t)-1 \end{aligned}$$

and parameter \(r_1\) by

$$\begin{aligned} r_1=\frac{2}{2L_0+L}. \end{aligned}$$

We have that \(h_p(0)=h_q(0)=-1 <0\) and \(h_q(t)\rightarrow +\infty , h_p(t)\rightarrow +\infty \) as \(t\rightarrow \frac{1}{L_0}^-.\) It follows from the intermediate value theorem that functions \(h_p, h_q\) have zeros in the interval \((0, \frac{1}{L_0}).\) Denote by \(r_p, r_q\) the smallest such zeros. Moreover, define functions \(g_2\) and \(h_2\) on the interval \([0, r_p)\) by

$$\begin{aligned} g_2(t)=\left[ 1+\frac{M^2}{(1-L_0t)(1-p(t))}\right] g_1(t) \end{aligned}$$

and

$$\begin{aligned} h_2(t)=g_2(t)-1. \end{aligned}$$

We have that \(h_2(0)=-1 < 0\) and \(h_2(t)\rightarrow +\infty \) as \(t\rightarrow r_p^-.\) Denote by \(r_2\) the smallest zero of function \(h_2\) in the interval \((0, r_p).\) Furthermore, define functions \(g_3\) and \(h_3\) on the interval \([0, \min \{r_p, r_q\})\) by

$$\begin{aligned} g_3(t)=\left( 1+\frac{M}{1-L_0t}+\frac{2M^2g_1(t)}{(1-L_0t)(1-q(t))}\right) g_2(t) \end{aligned}$$

and

$$\begin{aligned} h_3(t)=g_3(t)-1. \end{aligned}$$

We have that \(h_3(0)=-1 < 0,\) since \(g_1(0)=g_2(0)=0,\) and \(h_3(t)\rightarrow +\infty \) as \(t\rightarrow \min \{r_p, r_q\}^-.\) Denote by \(r_3\) the smallest zero of function \(h_3\) in the interval \((0, \min \{r_p, r_q\}).\) Set

$$\begin{aligned} r=\min \{r_1, r_2, r_3\}. \end{aligned}$$
(2.1)

Then, we have that

$$\begin{aligned} 0 < r \le r_1 < \frac{1}{L_0} \end{aligned}$$
(2.2)

and for each \( t\in [0, r)\)

$$\begin{aligned} 0\le & {} g_1(t) < 1, \end{aligned}$$
(2.3)
$$\begin{aligned} 0\le & {} p(t) < 1, \end{aligned}$$
(2.4)
$$\begin{aligned} 0\le & {} g_2(t) < 1, \end{aligned}$$
(2.5)
$$\begin{aligned} 0\le & {} q(t) < 1 \end{aligned}$$
(2.6)

and

$$\begin{aligned} 0\le g_3(t) < 1. \end{aligned}$$
(2.7)
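The radius \(r\) can be computed numerically from these definitions. The following Python sketch (our own construction; the helper name `convergence_radius` and the scan-plus-bisection root finder are not part of the original analysis) mirrors the construction of \(r_1,\, r_p,\, r_q,\, r_2,\, r_3\) for user-supplied \(L_0, L, M\) and \(\alpha \):

```python
def convergence_radius(L0, L, M, alpha):
    """Sketch: r = min(r_1, r_2, r_3) built from the g-, p- and q-functions."""
    g1 = lambda t: L * t / (2 * (1 - L0 * t))
    p = lambda t: L0 * t / 2 + 2 * M * g1(t)
    q = lambda t: L0 * t / 2 + abs(alpha - 2) * M * g1(t)

    def smallest_zero(h, upper, n=100_000):
        # h(0) < 0; scan towards `upper` for a sign change, then bisect.
        # Raises if h stays negative (e.g. q(t) < 1 everywhere when alpha = 2).
        a = 0.0
        for i in range(1, n):
            b = upper * i / n
            if h(b) >= 0:
                for _ in range(80):
                    m = 0.5 * (a + b)
                    a, b = (m, b) if h(m) < 0 else (a, m)
                return 0.5 * (a + b)
            a = b
        raise ValueError("no zero found below the upper bound")

    r1 = 2 / (2 * L0 + L)
    rp = smallest_zero(lambda t: p(t) - 1, 1 / L0)
    rq = smallest_zero(lambda t: q(t) - 1, 1 / L0)
    g2 = lambda t: (1 + M**2 / ((1 - L0 * t) * (1 - p(t)))) * g1(t)
    g3 = lambda t: (1 + M / (1 - L0 * t)
                    + 2 * M**2 * g1(t) / ((1 - L0 * t) * (1 - q(t)))) * g2(t)
    r2 = smallest_zero(lambda t: g2(t) - 1, rp)
    r3 = smallest_zero(lambda t: g3(t) - 1, min(rp, rq))
    return min(r1, r2, r3)
```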

Denote by \(U(v,\rho ), \bar{U}(v,\rho )\) the open and closed balls in \({\mathbb {R}}\) with center \(v\in {\mathbb {R}}\) and radius \(\rho >0.\) Next, we present the local convergence analysis of method (1.2) using the preceding notation.

Theorem 2.1

Let \(F: D\subset {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a differentiable function. Suppose there exist \(x^*\in D,\, L_0 > 0,\, L >0,\, M \ge 1\) such that \( F(x^*)=0,\,\, F'(x^*)\ne 0\),

$$\begin{aligned} |F'(x^*)^{-1}(F'(x)-F'(x^*))|\le & {} L_0|x-x^*|, \end{aligned}$$
(2.8)
$$\begin{aligned} |F'(x^*)^{-1}(F'(x)-F'(y))|\le & {} L|x-y|, \end{aligned}$$
(2.9)
$$\begin{aligned} |F'(x^*)^{-1}F'(x)|\le & {} M, \end{aligned}$$
(2.10)

and

$$\begin{aligned} \bar{U}(x^*, r)\subseteq D; \end{aligned}$$
(2.11)

where the radius of convergence r is defined by (2.1). Then, the sequence \(\{x_n\}\) generated by method (1.2) for \(x_0\in U(x^*, r)-\{x^*\}\) is well defined, remains in \(U(x^*, r)\) for each \(n=0,1,2,\ldots \) and converges to \(x^*.\) Moreover, the following estimates hold

$$\begin{aligned}&|y_n-x^*|\le g_1(|x_n-x^*|)|x_n-x^*| < |x_n-x^*| < r, \end{aligned}$$
(2.12)
$$\begin{aligned}&|z_n-x^*|\le g_2(|x_n-x^*|)|x_n-x^*| < |x_n-x^*|, \end{aligned}$$
(2.13)

and

$$\begin{aligned} |x_{n+1}-x^*|\le g_3(|x_n-x^*|)|x_n-x^*| < |x_n-x^*|, \end{aligned}$$
(2.14)

where the functions \(g_1, g_2, g_3\) are defined before Theorem 2.1. Furthermore, for \(T\in [r, \frac{2}{L_0})\) the limit point \(x^*\) is the only solution of equation \(F(x)=0\) in \(\bar{U}(x^*,T)\cap D\).

Proof

We shall show estimates (2.12)–(2.14) using mathematical induction. By hypothesis \(x_0\in {U}(x^*, r)-\{x^*\},\) (2.1) and (2.8), we have that

$$\begin{aligned} |F'(x^*)^{-1}(F'(x_0)-F'(x^*))|\le L_0|x_0-x^*| < L_0r < 1. \end{aligned}$$
(2.15)

It follows from (2.15) and Banach Lemma on invertible functions [2, 3, 16, 17, 20] that \(F'(x_0)\ne 0\) and

$$\begin{aligned} |F'(x^*)^{-1}F'(x_0)|\le \frac{1}{1-L_0|x_0-x^*|}. \end{aligned}$$
(2.16)

Hence, \(y_0\) is well defined by the first sub-step of method (1.2) for \(n=0.\) Using (2.1), (2.3), (2.9) and (2.16) we get that

$$\begin{aligned} \nonumber |y_0-x^*|= & {} |x_0-x^*-F'(x_0)^{-1}F(x_0)|\nonumber \\\le & {} |F'(x_0)^{-1}F'(x^*)||\int _0^1F'(x^*)^{-1}(F'(x^*+\theta (x_0-x^*))\nonumber \\&-F'(x_0))(x_0-x^*)d\theta |\nonumber \\\le & {} \frac{L|x_0-x^*|^2}{2(1-L_0|x_0-x^*|)}\nonumber \\= & {} g_1(|x_0-x^*|)|x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(2.17)

which shows (2.12) for \(n=0\) and \(y_0\in U(x^*, r).\) We can write that

$$\begin{aligned} F(x_0)=F(x_0)-F(x^*)=\int _0^1F'(x^*+\theta (x_0-x^*))(x_0-x^*)d\theta . \end{aligned}$$
(2.18)

Notice that \(|x^*+\theta (x_0-x^*)-x^*|=\theta |x_0-x^*| < r.\) Hence, we get that \(x^*+\theta (x_0-x^*)\in U(x^*, r).\) Then, by (2.10) and (2.18), we obtain that

$$\begin{aligned} |F'(x^*)^{-1}F(x_0)|\le & {} M|x_0-x^*|. \end{aligned}$$
(2.19)

We also have, by (2.17) and (2.19) (with \(x_0\) replaced by \(y_0\)), that

$$\begin{aligned} |F'(x^*)^{-1}F(y_0)|\le & {} M|y_0-x^*|\le Mg_1(|x_0-x^*|)|x_0-x^*|, \end{aligned}$$
(2.20)

since \(y_0\in U(x^*, r)\). Next, we shall show \( F(x_0)-2F(y_0)\ne 0.\) Using (2.1), (2.4), (2.8), (2.20) and the hypothesis \(x_0\ne x^*,\) we have in turn that

$$\begin{aligned}&|(F'(x^*)(x_0-x^*))^{-1}[F(x_0)-F(x^*)-F'(x^*)(x_0-x^*)-2F(y_0)]|\nonumber \\&\quad \le |x_0-x^*|^{-1}[|F'(x^*)^{-1}(F(x_0)-F(x^*)-F'(x^*)(x_0-x^*))|\nonumber \\&\qquad +\,2|F'(x^*)^{-1}F(y_0)|]\nonumber \\&\quad \le |x_0-x^*|^{-1}\left[ \frac{L_0}{2}|x_0-x^*|^2+2Mg_1(|x_0-x^*|)|x_0-x^*|\right] \nonumber \\&\quad =p(|x_0-x^*|) < p(r) < 1. \end{aligned}$$
(2.21)

It follows from (2.21) that

$$\begin{aligned} |(F(x_0)-2F(y_0))^{-1}F'(x^*)|\le \frac{1}{|x_0-x^*|(1-p(|x_0-x^*|))}. \end{aligned}$$
(2.22)

Hence, \(z_0\) is well defined by the second sub-step of method (1.2) for \(n=0.\) Then, using (2.1), (2.5), (2.16), (2.17), (2.20) and (2.22) we get in turn that

$$\begin{aligned} |z_0-x^*|\le & {} |y_0-x^*|+\frac{M^2|y_0-x^*||x_0-x^*|}{(1-L_0|x_0-x^*|)(1-p(|x_0-x^*|))|x_0-x^*|}\nonumber \\\le & {} \left[ 1+\frac{M^2}{(1-L_0|x_0-x^*|)(1-p(|x_0-x^*|))}\right] g_1(|x_0-x^*|)|x_0-x^*|\nonumber \\\le & {} g_2(|x_0-x^*|)|x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(2.23)

which shows (2.13) for \(n=0\) and \(z_0\in U(x^*, r).\) Next, we show that \( F(x_0)+(\alpha -2)F(y_0)\ne 0.\) Using (2.1), (2.6), (2.8), (2.17), (2.20) and \(x_0\ne x^*,\) we obtain in turn that

$$\begin{aligned}&|(F'(x^*)(x_0-x^*))^{-1}[F(x_0)-F(x^*)-F'(x^*)(x_0-x^*)+(\alpha -2)F(y_0)]|\nonumber \\&\quad \le |x_0-x^*|^{-1}[|F'(x^*)^{-1}(F(x_0)-F(x^*)-F'(x^*)(x_0-x^*))|\nonumber \\&\qquad +\,|\alpha -2||F'(x^*)^{-1}F(y_0)|]\nonumber \\&\quad \le |x_0-x^*|^{-1}\left[ \frac{L_0}{2}|x_0-x^*|^2+|\alpha -2|M|y_0-x^*|\right] \nonumber \\&\quad \le \frac{L_0}{2}|x_0-x^*|+ |\alpha -2|M g_1(|x_0-x^*|)\nonumber \\&\quad =q(|x_0-x^*|) < q(r) < 1. \end{aligned}$$
(2.24)

It follows from (2.24) that

$$\begin{aligned} |(F(x_0)+(\alpha -2)F(y_0))^{-1}F'(x^*)|\le \frac{1}{|x_0-x^*|(1-q(|x_0-x^*|))}. \end{aligned}$$
(2.25)

Hence, \(x_1\) is well defined by the third sub-step of method (1.2) for \(n=0.\) Then, using (2.1), (2.7), (2.16), (2.19) (with \(x_0\) replaced by \(z_0\)), (2.23), (2.24) and (2.25), we obtain in turn that

$$\begin{aligned} x_1-x^*= & {} z_0-x^*-F'(x_0)^{-1}F(z_0)\\&+\left[ 1-\frac{F(x_0)+\alpha F(y_0)}{F(x_0)+(\alpha -2)F(y_0)}\right] F'(x_0)^{-1}F(z_0) \end{aligned}$$

so,

$$\begin{aligned} |x_1-x^*|\le & {} |z_0-x^*|+|F'(x_0)^{-1}F'(x^*)||F'(x^*)^{-1}F(z_0)|\nonumber \\&+\,2|(F(x_0)+(\alpha -2)F(y_0))^{-1}F'(x^*)||F'(x^*)^{-1}F(y_0)|\nonumber \\&\times |F'(x_0)^{-1}F'(x^*)||F'(x^*)^{-1}F(z_0)|\nonumber \\\le & {} |z_0-x^*|+ \frac{M|z_0-x^*|}{1-L_0|x_0-x^*|}\nonumber \\&+\,\frac{2M^2|y_0-x^*||z_0-x^*|}{(1-L_0|x_0-x^*|)|x_0-x^*|(1-q(|x_0-x^*|))}\nonumber \\\le & {} \left[ 1+\frac{M}{1-L_0|x_0-x^*|}+\frac{2M^2g_1(|x_0-x^*|)}{(1-L_0|x_0-x^*|)(1-q(|x_0-x^*|))}\right] |z_0-x^*| \nonumber \\\le & {} g_3(|x_0-x^*|)|x_0-x^*| < |x_0-x^*| < r, \end{aligned}$$
(2.26)

which shows (2.14) for \(n=0\) and \(x_1\in U(x^*, r).\) By simply replacing \(x_0, y_0, z_0,\, x_1\) by \(x_k, y_k, z_k, x_{k+1}\) in the preceding estimates we arrive at estimates (2.12)–(2.14). Then, from the estimate \(|x_{k+1}-x^*| < |x_k-x^*| < r,\) we deduce that \(x_{k+1}\in U(x^*, r)\) and \(\lim _{k\rightarrow \infty }x_k=x^*.\) To show the uniqueness part, let \(Q=\int _0^1F'(y^*+\theta (x^*-y^*))d\theta \) for some \(y^*\in \bar{U}(x^*, T)\) with \(F(y^*)=0.\) Using (2.8), we get that

$$\begin{aligned} |F'(x^*)^{-1}(Q-F'(x^*))|\le & {} \int _0^1L_0|y^*+\theta (x^*-y^*)-x^*|d\theta \nonumber \\\le & {} \int _0^1L_0(1-\theta )|x^*-y^*|d\theta \le \frac{L_0}{2}T < 1. \end{aligned}$$
(2.27)

It follows from (2.27) and the Banach Lemma on invertible functions that Q is invertible. Finally, from the identity \(0=F(x^*)-F(y^*)=Q(x^*-y^*),\) we deduce that \(x^*=y^*\). \(\square \)

Remark 2.2

  1.

    In view of (2.8) and the estimate

    $$\begin{aligned} |F'(x^*)^{-1}F'(x)|= & {} |F'(x^*)^{-1}(F'(x)-F'(x^*))+I|\\\le & {} 1+|F'(x^*)^{-1}(F'(x)-F'(x^*))| \le 1+L_0|x-x^*| \end{aligned}$$

    condition (2.10) can be dropped and M can be replaced by

    $$\begin{aligned} M(t)=1+L_0 t \end{aligned}$$

    or

    $$\begin{aligned} M(t)=M=2, \end{aligned}$$

    since \(t\in [0, \frac{1}{L_0}).\)

  2.

    The results obtained here can be used for operators F satisfying autonomous differential equations [2] of the form

    $$\begin{aligned} F'(x)=P(F(x)) \end{aligned}$$

    where P is a continuous operator. Then, since \(F'(x^*)=P(F(x^*))=P(0),\) we can apply the results without actually knowing \(x^*.\) For example, let \(F(x)=e^x-1.\) Then, we can choose \(P(x)=x+1.\)

  3.

    In [2, 3] we showed that \(r_1=\frac{2}{2L_0+L}\) is the convergence radius of Newton’s method:

    $$\begin{aligned} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\,\,\,\text {for each}\,\,\,n=0,1,2,\cdots \end{aligned}$$
    (2.28)

    under the conditions (2.8) and (2.9). It follows from the definition of r that the convergence radius r of the method (1.2) cannot be larger than the convergence radius \(r_1\) of the second order Newton’s method (2.28). As already noted in [2, 3] \(r_1\) is at least as large as the convergence radius given by Rheinboldt [18]

    $$\begin{aligned} r_R=\frac{2}{3L}. \end{aligned}$$
    (2.29)

    The same value for \(r_R\) was given by Traub [20]. In particular, for \(L_0 < L\) we have that

    $$\begin{aligned} r_R < r_1 \end{aligned}$$

    and

    $$\begin{aligned} \frac{r_R}{r_1}\rightarrow \frac{1}{3}\,\,\, as\,\,\, \frac{L_0}{L}\rightarrow 0. \end{aligned}$$

    That is, the convergence radius \(r_1\) can be at most three times larger than Rheinboldt’s radius \(r_R.\)

  4.

    It is worth noticing that method (1.2) remains unchanged when we use the conditions of Theorem 2.1 instead of the stronger conditions used in [19]. Moreover, we can compute the computational order of convergence (COC) defined by

    $$\begin{aligned} \xi = \ln \left( \frac{|x_{n+1}-x^*|}{|x_n-x^*|}\right) /\ln \left( \frac{|x_{n}-x^*|}{|x_{n-1}-x^*|}\right) \end{aligned}$$

    or the approximate computational order of convergence

    $$\begin{aligned} \xi _1= \ln \left( \frac{|x_{n+1}-x_n|}{|x_n-x_{n-1}|}\right) /\ln \left( \frac{|x_{n}-x_{n-1}|}{|x_{n-1}-x_{n-2}|}\right) . \end{aligned}$$

    In this way we obtain in practice the order of convergence without resorting to bounds that involve higher order Fréchet derivatives of operator F.
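Both quantities are straightforward to evaluate from stored iterates; a minimal Python sketch (the function names `coc` and `acoc` are ours) is:

```python
import math

def coc(xs, x_star):
    """Computational order of convergence from iterates xs and the root x_star."""
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(xs):
    """Approximate COC; uses successive differences only, so x_star is not needed."""
    d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]
```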

3 Numerical examples

We present numerical examples in this section.

Example 3.1

Let \(D=(-\infty , +\infty ).\) Define function f on D by

$$\begin{aligned} f(x)=\sin (x). \end{aligned}$$
(3.1)

Then we have for \(x^*=0\) that \(L_0=L=M=1.\) The parameters are

$$\begin{aligned} r_1=0.6667,\, {r_p}=0.5858,\,r_q=0.7192,\,r_2=0.3776,\, r_3=0.2289=r. \end{aligned}$$

The comparison of radii is given in Table 1.

Table 1 Comparison table for radii

Example 3.2

Let \(D=[-1,1].\) Define function f on D by

$$\begin{aligned} f(x)=e^x-1. \end{aligned}$$
(3.2)

Using (3.2) and \(x^*=0,\) we get that \(L_0=e-1 < L=e, M=2.\) The parameters are

$$\begin{aligned} r_1=0.3249,\, {r_p}=0.2916,\,r_q=0.2843,\,r_2=0.09876,\,r_3=0.0360=r. \end{aligned}$$

The comparison of radii is given in Table 2.

Table 2 Comparison table for radii

Example 3.3

Returning to the motivational example from the introduction of this study, we have \(L_0=L=96.662907,\,M=2.\) The parameters are

$$\begin{aligned} r_1=0.0069,\, {r_p}=0.0101,\,r_q=0.0061,\,r_2=0.0025,\, r_3=0.0009=r. \end{aligned}$$