1 Introduction

Let \({(\mathbb {K}, |\cdot |)}\) be an arbitrary valued field with a non-trivial absolute value \(|\cdot |\) and let \({f :\mathbb {K}\rightarrow \mathbb {K}}\) be an arbitrary function. It is well known that iterative methods are among the most commonly used tools for finding the zeros of f, and undoubtedly the most famous iterative methods in the literature are Newton’s, Halley’s [1], and Chebyshev’s [2] methods.

Let \(\mathbb {K}[z]\) be a univariate polynomial ring over \(\mathbb {K}\) and let \(f \in \mathbb {K}[z]\) be a polynomial of degree \({n \ge 2}\). In 1891, Weierstrass [3] offered a different approach to approximating the zeros of f, namely to compute all of them simultaneously. To this end, he proposed his famous iterative method, which can be defined in the vector space \(\mathbb {K}^n\) by the following iteration:

$$\begin{aligned} x^{(k + 1)} = x^{(k)} - W_f(x^{(k)} ), \qquad k = 0,1,2,\ldots , \end{aligned}$$
(1)

where the Weierstrass iteration function \({W_f :\mathcal {D} \subset \mathbb {K}^n \rightarrow \mathbb {K}^n}\) is defined as follows

$$\begin{aligned} W_f(x) = (W_1(x),\ldots , W_n(x)) \quad \text {with}\quad W_i(x) = \frac{f(x_i)}{a_0 \prod _{j \, \ne \, i} \, (x_i - x_j)} \quad (i \in I_n), \end{aligned}$$
(2)

where \(a_0\) is the leading coefficient of f. Here and throughout the whole paper \(I_n\) denotes the set \(\{1, 2, \ldots , n\}\) and \(\mathcal {D}\) denotes the set of all vectors in \(\mathbb {K}^n\) with pairwise distinct components, i.e.,

$$ \mathcal {D} = \left\{ x \in \mathbb {K}^n \,:\, x_i \ne x_j \,\,\text { whenever } i \ne j \right\} . $$
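To make the iteration (1)-(2) concrete, the following minimal Python sketch performs one Weierstrass step for a polynomial with simple zeros; the NumPy-based implementation, the function names, and the test polynomial are our own illustrative choices and are not taken from the cited sources.

```python
import numpy as np

def weierstrass_step(x, coeffs):
    """One step of the Weierstrass iteration (1)-(2) for simple zeros."""
    a0 = coeffs[0]
    fx = np.polyval(coeffs, x)             # f(x_i) for every component
    diff = x[:, None] - x[None, :]         # matrix of the differences x_i - x_j
    np.fill_diagonal(diff, 1.0)            # neutral factor for j == i
    W = fx / (a0 * diff.prod(axis=1))      # Weierstrass corrections W_i(x) from (2)
    return x - W

# f(z) = z^3 - 1, whose zeros are the three cube roots of unity
coeffs = np.array([1.0, 0.0, 0.0, -1.0])
x = np.array([1.2 + 0.1j, -0.4 + 0.9j, -0.5 - 1.1j])
for _ in range(8):
    x = weierstrass_step(x, coeffs)
print(x)   # approximately 1, -0.5 + 0.866i, -0.5 - 0.866i
```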

Since 1960, Weierstrass’ method (1) has drawn great interest in the mathematical community (see, e.g., the monographs of Sendov, Andreev, and Kyurkchiev [4], Kyurkchiev [5], Petković [6], and the references therein); such methods are often called simultaneous methods. The next famous simultaneous methods in the literature are due to Dochev and Byrnev [7] (their method is also known as Tanabe’s method [8]), Ehrlich [9], and Börsch-Supan [10]. It is well known that Weierstrass’ method is quadratically convergent, while the other three methods are cubically convergent, but only when the zeros are simple.

Let \({\xi _1,\ldots ,\xi _s}\) be all distinct zeros of f with multiplicities \({m_1,\ldots ,m_s}\), respectively. To preserve the convergence order in the presence of multiple zeros, in 1972 Sekuloski [11] presented the first simultaneous method for multiple polynomial zeros. Thereafter, many papers have been devoted to this topic (see, e.g., Petković [6], Farmer and Loizou [12], Gargantini [13], Petković et al. [14], Kjurkchiev and Andreev [15], Iliev and Iliev [16], Proinov and Cholakov [17], Proinov and Ivanov [18], and the references therein). In particular, the Weierstrass iteration function for multiple zeros \(w_f\) can be defined in the vector space \({\mathbb {K}^s}\) (\(2 \le s \le n\)) as follows (see, e.g., Petković [6, Eq. (1.40)]):

$$\begin{aligned} w_f(x) = (w_1(x),\ldots , w_s(x)) \quad \text {with}\quad w_i(x) = \frac{f(x_i)}{a_0 \prod _{j \, \ne \, i} \, (x_i - x_j)^{m_j}} \quad (i \in I_s). \end{aligned}$$
(3)

In 1977, Nourein [19, 20] introduced the idea of combining different iteration functions to increase the convergence order of simultaneous methods; in this way, two fourth-order methods were obtained by combining Ehrlich’s method with Newton’s method, and Börsch-Supan’s method with Weierstrass’ method (1). Since then, many authors (see, e.g., [21,22,23] and the references therein) have applied Nourein’s approach to construct simultaneous methods with accelerated convergence by combining two particular iteration functions. Such methods are often called simultaneous methods with correction [6, 18]. In 1987, Wang and Wu [24] were the first to construct and study a simultaneous method involving an arbitrary correction, although this did not lead to an increase of the convergence order. In 2021, Proinov and Vasileva [25] constructed and studied a family of Ehrlich-type methods with arbitrary correction and arbitrary convergence order for simple polynomial zeros, while very recently Proinov and Ivanov [18] studied the local and semilocal convergence of a family of Sakurai-Torii-Sugiura type methods with arbitrary correction and arbitrary convergence order for simple and multiple polynomial zeros.

In this paper, developing the aforementioned ideas, we show in Sect. 2 how to construct new general families of high-order simultaneous methods involving more than one arbitrary correction. To demonstrate that each correction has a different influence on the methods, in Sect. 3 we prove a local convergence theorem (Theorem 3.6) for one of the proposed families, called Schröder-like methods with two corrections. Theorem 3.6 shows that our approach opens up an opportunity to manage the corrections so as to achieve a balance between the convergence order and the computational efficiency [26] when constructing particular methods. A numerical example (Example 3.8) is also provided to confirm the obtained theoretical results and to compare our methods with some classical ones.

2 Families of simultaneous methods with more corrections

In this section, using some of the most famous iteration functions in the literature, we construct three new families of simultaneous methods with more than one correction for the approximation of simple or multiple polynomial zeros.

To this end, we suppose that \({\Omega }\) and \({\Gamma }\) are two arbitrary iteration functions in the vector space \({\mathbb {K}^s}\). Using these functions and (3), we define two modified Weierstrass iteration functions in \({\mathbb {K}^s}\) by \({\mathcal {W}(x) = (\mathcal {W}_1(x), \ldots , \mathcal {W}_s(x))}\) and \({\mathscr {W}(x) = (\mathscr {W}_1(x), \ldots , \mathscr {W}_s(x))}\) with

$$\begin{aligned} \mathcal {W}_i(x) = \frac{f(x_i)}{a_0 \prod _{j \, \ne \, i} \, (x_i - \Omega _j)^{m_j}} \quad \text {and}\quad \mathscr {W}_i(x) = \frac{f(x_i)}{a_0 \prod _{j \, \ne \, i} \, (x_i - \Gamma _j)^{m_j}}. \end{aligned}$$
(4)

Obviously, \(\mathcal {W}_i\) and \(\mathscr {W}_i\) have the same zeros as the polynomial f. In what follows, using these iteration functions we shall construct three families with more than one correction.
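The corrections (4) differ from (3) only in that the points \(\Omega _j(x)\) and \(\Gamma _j(x)\) replace \(x_j\) in the denominators. The following Python sketch (our own illustrative code; `omega` stands for an arbitrary user-supplied correction) computes such corrections; with `omega = lambda x: x` it reduces to (3).

```python
import numpy as np

def modified_weierstrass(x, m, coeffs, omega):
    """Corrections (4): f(x_i) / (a_0 * prod_{j != i} (x_i - omega(x)_j)^{m_j})."""
    a0 = coeffs[0]
    fx = np.polyval(coeffs, x)                 # f(x_i)
    shifted = omega(x)                         # Omega_j(x) (or Gamma_j(x))
    diff = x[:, None] - shifted[None, :]       # x_i - Omega_j(x)
    np.fill_diagonal(diff, 1.0)                # neutral factor for j == i
    denom = a0 * (diff ** m[None, :]).prod(axis=1)
    return fx / denom                          # with omega = identity this is (3)
```

Here `x`, `m`, and `coeffs` are NumPy arrays holding the current approximations, the multiplicities \(m_1, \ldots , m_s\), and the coefficients of f (leading coefficient first), respectively.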

2.1 Schröder-like methods with two corrections

In 2017, Kyncheva et al. [27] studied the well-known Schröder’s method [28] as a method for the simultaneous computation of polynomial zeros of unknown multiplicity, i.e., as a method in \(\mathbb {K}^s\). Namely, they defined it by the following iteration:

$$\begin{aligned} x^{(k+1)} = S(x^{(k)}), \quad k=0,1,2,\ldots , \end{aligned}$$
(5)

where Schröder’s iteration function S is defined in \(\mathbb {K}^s\) by \(S(x) = (S_1(x), \ldots , S_s(x))\) with

$$\begin{aligned} S_i(x) = \left\{ \begin{array}{ll} x_i - \displaystyle \frac{f'(x_i)}{f(x_i)} \left( \left( \frac{f'(x_i)}{f(x_i)} \right) ^{2} - \frac{f''(x_i)}{f(x_i)} \right) ^{-1} &{} \text {if }\, f(x_i) \ne 0,\\ x_i &{} \text {if }\, f(x_i) = 0. \end{array} \right. \end{aligned}$$
(6)
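For illustration, one step of (5)-(6), applied coordinate-wise, might be sketched in Python as follows; the code and its naming are ours, and `coeffs` again lists the coefficients of f with the leading one first.

```python
import numpy as np

def schroeder_step(x, coeffs, tol=1e-14):
    """One step (5) of Schroeder's method with iteration function (6)."""
    d1, d2 = np.polyder(coeffs), np.polyder(coeffs, 2)
    f, fp, fpp = (np.polyval(c, x) for c in (coeffs, d1, d2))
    new = x.copy()
    active = np.abs(f) > tol                   # S_i(x) = x_i whenever f(x_i) = 0 (up to tol)
    u = fp[active] / f[active]                 # f'(x_i) / f(x_i)
    new[active] = x[active] - u / (u**2 - fpp[active] / f[active])
    return new
```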

Without much ado, we combine Schröder’s iteration function (6) with the above defined modified Weierstrass functions \(\mathcal {W}\) and \(\mathscr {W}\) to define an iteration function T in \(\mathbb {K}^s\) by \({T(x) = (T_1(x), \ldots , T_s(x))}\) with

$$\begin{aligned} T_i(x) = S_i(\mathcal {W}; \mathscr {W}; x) = x_i - \displaystyle \frac{\mathcal {W}'_i(x_i)}{\mathcal {W}_i(x_i)} \left( \left( \frac{\mathscr {W}'_i(x_i)}{\mathscr {W}_i(x_i)} \right) ^{2} - \frac{\mathscr {W}''_i(x_i)}{\mathscr {W}_i(x_i)} \right) ^{-1}. \end{aligned}$$
(7)

Now let \(f(x_i) \ne 0\), \({x \,\, \#\,\, \Omega (x)}\) and \({x \,\, \#\,\, \Gamma (x)}\), where \(\#\) denotes the binary relation on \(\mathbb {K}^s\) defined by \({x \,\, \#\,\, y}\) if and only if \({x_i \ne y_j}\) for all \({i, j \in I_s}\) with \({i \ne j}\). Then, using some well-known identities (see [6, Eq. (1.46)]), from (4) we get the following relations:

$$\begin{aligned} \frac{\mathcal {W}'_i(x_i)}{\mathcal {W}_i(x_i)}= & {} \frac{f'(x_i)}{f(x_i)} - \sum _{j \,\ne \, i} \frac{m_j}{x_i - \Omega _j(x)} \quad \text {and}\quad \nonumber \\ \left( \frac{\mathscr {W}'_i(x_i)}{\mathscr {W}_i(x_i)}\right) ^2 - \frac{\mathscr {W}''_i(x_i)}{\mathscr {W}_i(x_i)}= & {} \left( \frac{f'(x_i)}{f(x_i)}\right) ^2 - \frac{f''(x_i)}{f(x_i)} - \sum _{j \, \ne \, i} \frac{m_j}{(x_i - \Gamma _j(x))^2}. \end{aligned}$$
(8)

From these relations and (7), we get the following family of Schröder-like methods with two corrections (SLMC):

$$\begin{aligned} x^{(k+1)} = T(\Omega ; \Gamma ; x^{(k)}), \qquad k = 0, 1, 2, \ldots , \end{aligned}$$
(9)

where the iteration function \({T :D \subset \mathbb {K}^s \rightarrow \mathbb {K}^s}\) is defined by

$$\begin{aligned} T(\Omega ; \Gamma ; x)= & {} (T_1(\Omega ; \Gamma ; x), \ldots , T_s(\Omega ; \Gamma ; x)) \quad \text {with}\quad \nonumber \\ T_i(\Omega ; \Gamma ; x)= & {} \left\{ \begin{array}{ll} x_i - \displaystyle \frac{L_i(\Omega ; x)}{F_i(\Gamma ; x)} &{} \text {if }\, f(x_i) \ne 0,\\ x_i &{} \text {if }\, f(x_i) = 0, \end{array} \right. \end{aligned}$$
(10)

where \(L_{\,i}(\Omega ; x)\) and \(F_{\,i}(\Gamma ; x)\) are defined as follows:

$$\begin{aligned}{} & {} L_{\,i}(\Omega ; x) = \displaystyle \frac{f'(x_i)}{f(x_i)} - \sum _{j \, \ne \, i} \frac{m_j}{x_i - \Omega _j(x)}, \nonumber \\{} & {} F_i(\Gamma ; x) = \left( \frac{f'(x_i)}{f(x_i)}\right) ^2 - \frac{f''(x_i)}{f(x_i)} - \sum _{j \, \ne \, i} \frac{m_j}{(x_i - \Gamma _j(x))^2}. \end{aligned}$$
(11)

If \(\mathscr {D}\) denotes the intersection of the domains of \(\Omega \) and \(\Gamma \), then the domain of the iteration function (10) is the set

$$\begin{aligned} D = \left\{ x \in \mathscr {D} \, :x \,\, \#\,\, \Omega (x), \, x \,\, \#\,\, \Gamma (x) \, \text { and }\, F_i(\Gamma ; x) \ne 0 \, \, \text { whenever }\, f(x_i) \ne 0 \, \right\} . \end{aligned}$$
(12)

In Theorem 3.6, we prove that the iteration family (9) has Q-convergence order \(r \ge 3\).
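For concreteness, one step of the SLMC family (9)-(11) may be sketched in Python as follows; this is our own illustrative code, with `Omega` and `Gamma` standing for the arbitrary corrections \(\Omega \) and \(\Gamma \).

```python
import numpy as np

def slmc_step(x, m, coeffs, Omega, Gamma, tol=1e-14):
    """One step of the Schroeder-like method with two corrections (9)-(11)."""
    d1, d2 = np.polyder(coeffs), np.polyder(coeffs, 2)
    f, fp, fpp = (np.polyval(c, x) for c in (coeffs, d1, d2))

    def offdiag_sum(shifted, power):
        # sum_{j != i} m_j / (x_i - shifted_j)^power
        diff = x[:, None] - shifted[None, :]
        np.fill_diagonal(diff, 1.0)            # placeholder; the j == i term is zeroed below
        terms = m[None, :] / diff**power
        np.fill_diagonal(terms, 0.0)
        return terms.sum(axis=1)

    L = fp / f - offdiag_sum(Omega(x), 1)                  # L_i(Omega; x) from (11)
    F = (fp / f)**2 - fpp / f - offdiag_sum(Gamma(x), 2)   # F_i(Gamma; x) from (11)

    new = x.copy()
    active = np.abs(f) > tol                   # T_i(x) = x_i whenever f(x_i) = 0 (up to tol)
    new[active] = x[active] - L[active] / F[active]
    return new
```

For instance, `Omega = lambda x: x - np.polyval(coeffs, x) / np.polyval(np.polyder(coeffs), x)` together with `Gamma = lambda x: x` gives the method SLMN of Example 3.8.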

2.2 Chebyshev-Halley methods with two corrections

In 2022, Ivanov [29] studied the Chebyshev-Halley family for multiple zeros defined by the iteration

$$\begin{aligned} x_{k+1} = T_\alpha (x_k), \end{aligned}$$
(13)

where the Chebyshev-Halley iteration function \({T_\alpha :\mathbb {K}\rightarrow \mathbb {K}}\) is defined by

$$\begin{aligned} T_\alpha (x) = x - \displaystyle \frac{m N(x)}{2}\, \frac{3 - m - 2\alpha (1 - m) + m(1 - 2\alpha )M(x)}{1 - \alpha (1 - m) - m\alpha M(x)}\, \end{aligned}$$
(14)

with \(\alpha \in \mathbb {C}\), \({N(x) = f(x)/f'(x)}\) and \({M(x) = N(x)\, f''(x)/f'(x)}\). It is worth noting that the iteration (13) includes, as special cases, some of the most famous iteration methods in the literature, namely Halley’s method for \({\alpha = 1/2}\), Chebyshev’s method for \({\alpha = 0}\), the super-Halley method for \({\alpha = 1}\), and Osada’s method for \({\alpha = 1/(1 - m)}\) with \(m > 1\).
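A minimal Python sketch of one step of (13)-(14) is given below; the code is our own, with `f`, `df`, and `d2f` standing for user-supplied callables for f, f', and f''.

```python
def chebyshev_halley_step(x, f, df, d2f, m=1, alpha=0.5):
    """One step (13)-(14) for a zero of multiplicity m.

    alpha = 1/2 gives Halley's method, alpha = 0 Chebyshev's method,
    alpha = 1 the super-Halley method, alpha = 1/(1 - m) (m > 1) Osada's method.
    """
    N = f(x) / df(x)                            # N(x) = f(x)/f'(x)
    M = N * d2f(x) / df(x)                      # M(x) = N(x) f''(x)/f'(x)
    num = 3 - m - 2 * alpha * (1 - m) + m * (1 - 2 * alpha) * M
    den = 1 - alpha * (1 - m) - m * alpha * M
    return x - 0.5 * m * N * num / den

# Example: the double zero x = 1 of f(x) = (x - 1)^2 (x + 2), Halley's variant
f   = lambda x: (x - 1)**2 * (x + 2)
df  = lambda x: 2 * (x - 1) * (x + 2) + (x - 1)**2
d2f = lambda x: 2 * (x + 2) + 4 * (x - 1)
x = 1.4
for _ in range(5):
    x = chebyshev_halley_step(x, f, df, d2f, m=2, alpha=0.5)
print(x)   # close to 1
```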

In 2017, Kyncheva et al. [30] studied the convergence of Newton’s, Halley’s, and Chebyshev’s methods as methods in \(\mathbb {K}^s\). Here, in the same way as in [30], we consider the Chebyshev-Halley family (13) in \(\mathbb {K}^s\) and then, as in the previous subsection, we combine the Chebyshev-Halley iteration function with the modified Weierstrass functions \(\mathcal {W}\) and \(\mathscr {W}\) to construct the following new family of Chebyshev-Halley-like methods with two corrections:

$$\begin{aligned} x^{(k+1)} = \mathcal {T}(\Omega ; \Gamma ; x^{(k)}), \qquad k = 0, 1, 2, \ldots , \end{aligned}$$
(15)

where the iteration function \({\mathcal {T} :D \subset \mathbb {K}^s \rightarrow \mathbb {K}^s}\) is defined by

$$\begin{aligned} \mathcal {T}(\Omega ; \Gamma ; x)= & {} (\mathcal {T}_1(\Omega ; \Gamma ; x), \ldots , \mathcal {T}_s(\Omega ; \Gamma ; x)) \quad \text {with}\quad \nonumber \\ \mathcal {T}_i(\Omega ; \Gamma ; x)= & {} x_i - \displaystyle \frac{m_i}{2 L_i(\Omega ; x)}\\{} & {} \times \frac{3 - m_i - 2\alpha (1 - m_i) + m_i(1 - 2 \alpha )\left( 1 - \displaystyle \frac{F_i(\Gamma ; x)}{L_i(\Omega ; x)^2}\right) }{1 - \alpha (1 - m_i) - m_i\alpha \left( 1 - \displaystyle \frac{F_i(\Gamma ; x)}{L_i(\Omega ; x)^2}\right) }\,, \end{aligned}$$

where \(L_{\,i}(\Omega ; x)\) and \(F_{\,i}(\Gamma ; x)\) are defined by (11). We note that, in the case \({\alpha = 1/2}\) and \(\Omega = \Gamma \), the iteration (15) reduces to the Sakurai-Torii-Sugiura method with correction, which was recently studied in [18].

2.3 Weierstrass-like methods with three corrections

Recently, Marcheva and Ivanov [31] provided a detailed convergence analysis of a modification of Weierstrass’ method (1) that was derived by Nedzhibov [32] in 2016.

Let \(\Delta \) be another arbitrary iteration function in \(\mathbb {K}^s\). In this subsection, we apply our new idea to the aforementioned method and thus construct the following family of Weierstrass-like methods with three corrections:

$$\begin{aligned} x^{(k+1)} = \Phi (\Omega ; \Gamma ; \Delta ; x^{(k)}), \qquad k=0, 1, 2, \ldots , \end{aligned}$$
(16)

where the iteration function \(\Phi \) is defined in \(\mathbb {K}^s\) as follows:

$$\begin{aligned} \Phi (\Omega ; \Gamma ; \Delta ; x)= & {} (\Phi _1(\Omega ; \Gamma ; \Delta ; x), \ldots , \Phi _s(\Omega ; \Gamma ; \Delta ; x)) \quad \text {with}\quad \nonumber \\ \Phi _i(\Omega ; \Gamma ; \Delta ; x)= & {} x_i - \displaystyle \frac{x_i\, \mathcal {W}_{\,i}(x)}{\Delta _i(x) + \mathscr {W}_{\,i}(x)}, \end{aligned}$$
(17)

where \(\mathcal {W}_{\,i}(x)\) and \(\mathscr {W}_{\,i}(x)\) are defined by (4). The iteration family (16) has Q-convergence order \(r \ge 2\).
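Under the same conventions as in our earlier sketches (and reusing `modified_weierstrass` from the sketch after (4)), one step of (16)-(17) could be written as follows; this is again our own illustrative code.

```python
def weierstrass_like_step(x, m, coeffs, Omega, Gamma, Delta):
    """One step of the Weierstrass-like family (16)-(17)."""
    Wc = modified_weierstrass(x, m, coeffs, Omega)   # corrections built with Omega
    Ws = modified_weierstrass(x, m, coeffs, Gamma)   # corrections built with Gamma
    return x - x * Wc / (Delta(x) + Ws)              # Phi_i(Omega; Gamma; Delta; x) from (17)
```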

Remark 2.1

Our new approach can be further applied to obtain high-order and highly efficient analogs of many other iteration methods for the approximation of simple or multiple polynomial zeros.

3 Local convergence analysis of Schröder-like methods with two corrections

In this section, using the concepts of Proinov [33], we obtain a general local convergence result (Theorem 3.6) about the iterative methods of the family (9) for computing multiple polynomial zeros with known multiplicity.

Henceforward, we shall use the following notations and conventions: the vector space \(\mathbb {R}^s\) is endowed with the standard coordinate-wise ordering \(\,\preceq \,\) defined by \(x \preceq y\) if and only if \(x_i \le y_i\) for each \({i \in I_s}\), and the vector space \(\mathbb {K}^s\) is equipped with the max-norm \(\Vert \cdot \Vert _\infty \) and with a vector norm \({\Vert \cdot \Vert }\) with values in \(\mathbb {R}^s\) defined by

$$ \Vert x\Vert _\infty = \max \{|x_1|, \ldots , |x_s|\} \quad \text {and}\quad \Vert x\Vert = (|x_1|,\ldots ,|x_s|). $$

We define \({d :\mathbb {K}^s \rightarrow \mathbb {R}^s}\) by \(d(x) = (d_1(x), \ldots , d_s(x))\) with \(d_i(x) = \min _{j \,\ne \, i}{|x_i - x_j\,|}\) and, regarding \({\mathbb {K}^s}\) as an algebra over \(\mathbb {K}\) with coordinate-wise multiplication, we define the coordinate-wise division of two vectors \({x, y \in \mathbb {K}^s}\) by \( {x}/{y} = ({x_1}/{y_1},\ldots , {x_s}/{y_s}) \), provided that y has only nonzero components. For a non-negative integer k and \(r \ge 1\), \({S_k(r)}\) stands for the sum

$$ S_k (r) = \sum _{0 \,\le \, j \,<\, k} r^j. $$

From now on, we set \(0^0 = 1\) by definition, denote by \(\mathbb {R}_+\) the set of non-negative real numbers, and denote by J some interval in \(\mathbb {R}_+\) containing 0.

Let \({f \in \mathbb {K}[z]}\) be a polynomial of degree \({n \ge 2}\) and let \({m_1,\ldots ,m_s}\) be natural numbers such that \({m_1 + \ldots + m_s = n}\). Then a vector \({\xi \in \mathbb {K}^s}\) is said to be a root-vector of f with multiplicity \({\textbf {m}} = (m_1, \ldots , m_s)\) if ([18, Definition 2.3])

$$ f(z) = a_0 \prod _{i = 1}^s (z - \xi _i)^{m_i} \quad \text {for all }\, z \in \mathbb {K}, $$

where \(a_0\) is the leading coefficient of f. If f has only simple zeros, we simply say a root-vector of f.

To keep the paper self-contained, before proceeding further we recall some important definitions and results from [33] and [18].

Definition 3.1

([33, Definitions 7–8]) A function \({\varphi :J \rightarrow \mathbb {R}_+}\) is said to be quasi-homogeneous of (exact) degree \({p \ge 0}\) if it satisfies the following two conditions:

  1. \( \displaystyle \varphi (\lambda t) \le \lambda ^p \varphi (t) \quad \text {for all }\, \lambda \in [0,1] \,\text { and }\, t \in J; \)

  2. \( \displaystyle \lim _{t \rightarrow 0^+} \frac{\varphi (t)}{t^p} \ne 0. \)

Definition 3.2

([33, Definition 9]) A function \({F :D \subset \mathbb {K}^s \rightarrow \mathbb {K}^s}\) is said to be an iteration function of first kind at a point \({\xi \in \mathcal {D}}\) if there exists a quasi-homogeneous function \({\phi :J \rightarrow \mathbb {R}_+}\) of degree \({p \ge 0}\) such that for each vector \({x \in \mathbb {K}^s}\) with \({E(x) \in J}\) the following conditions are satisfied:

$$\begin{aligned} x \in D \quad \text {and}\quad \Vert \, F(x) - \xi \, \Vert \preceq \phi (E(x)) \, \Vert x - \xi \, \Vert , \end{aligned}$$
(18)

where the function \(E :\mathbb {K}^s \rightarrow \mathbb {R}_+\) is defined by

$$\begin{aligned} E(x) = \left\| \frac{x - \xi }{d(\xi )} \right\| _\infty \, . \end{aligned}$$
(19)

The function \(\phi \) is said to be a control function of F.

Theorem 3.3

([33, Theorem 3]) Let \({T :D \subset \mathbb {K}^s \rightarrow \mathbb {K}^s}\) be an iteration function and let \({\xi \in \mathbb {K}^s}\) be a fixed point of T with pairwise distinct components. Suppose T is an iteration function of first kind at \(\xi \) with control function \({\phi :J \rightarrow \mathbb {R}_+}\) of degree \(p \ge 0\), and let \(x^{(0)} \in \mathbb {K}^s\) be an initial guess of \(\xi \) such that

$$ E(x^{(0)}) \in J \quad \text {and}\quad \phi (E(x^{(0)})) < 1. $$

Then, the Picard iteration \({x^{(k+1)} = T(x^{(k)})}\), \({k = 0, 1, 2, \ldots }\), is well defined and converges to \(\xi \) with Q-order \({r = p + 1}\) and with the following error estimates for all \({k \ge 0}\):

$$ \Vert x^{(k)} - \xi \,\Vert \preceq \lambda ^{S_k (r)} \, \Vert x^{(0)} - \xi \,\Vert \quad \text {and}\quad \Vert x^{(k+1)} - \xi \,\Vert \preceq \lambda ^{r^k} \, \Vert x^{(k)} - \xi \,\Vert , $$

where \({\lambda = \phi (E(x^{(0)}))}\). Besides, the following estimate for the asymptotic error constant holds:

$$ \limsup _{k \rightarrow \infty } \frac{\Vert x^{(k+1)} - \xi \, \Vert _p}{\Vert x^{(k)} - \xi \,\Vert _p^r} \le \frac{1}{\delta (\xi )^{p}} \, \lim _{t \rightarrow 0^+} \frac{\phi (t)}{t^{p}} \,, $$

where \(\delta (\xi ) = \min _{i \ne j}|\xi _i - \xi _j|\).

Lemma 3.4

([18, Lemma 2.7]) Let \({x, y, \xi }\) be three vectors in \(\mathbb {K}^s\), and let \(\xi \) have pairwise distinct components. If \({\, \Vert y - \xi \, \Vert \preceq \alpha \, \Vert \, x - \xi \, \Vert \,}\) for some \({\alpha \ge 0}\), then for all \({i, j \in I_s}\), we have

$$ |x_i - y_j| \ge (1 - (1 + \alpha )\, E(x) ) \, |\xi _i - \xi _j|, $$

where the function E is defined by (19).

Before stating and proving the main result of this section, for two quasi-homogeneous functions \({\omega :J_{\omega } \rightarrow \mathbb {R}_+}\) of degree \({p \ge 0}\) and \({\gamma :J_{\gamma } \rightarrow \mathbb {R}_+}\) of degree \({q \ge 0}\), we define the functions g and h as follows:

$$\begin{aligned} g(t) = t \, (1 + \omega (t)) \quad \text {and}\quad h(t) = t \, (1 + \gamma (t)). \end{aligned}$$
(20)

Using the functions g and h defined by (20), we define the functions A, B and \(\phi \) by

$$\begin{aligned} A(t) = \frac{a \, t^2 \omega (t)}{(1 - t) (1 - g(t))}, \quad B(t)= & {} \frac{a \, t^3 \gamma (t)}{(1 - t) (1 - h(t))}\left( \frac{1}{1 - t} + \frac{1}{1 - h(t)}\right) \, \quad \!\!\!\!\text {and}\quad \nonumber \\ \phi (t)= & {} \frac{A(t) + B(t)}{1 - B(t)}\,, \end{aligned}$$
(21)

where the number a is defined by

$$\begin{aligned} a = \max _{1 \,\le \,i \,\le \, s}\, \frac{1}{m_i} \sum _{j \,\ne \, i} m_j\, . \end{aligned}$$
(22)

According to the properties of quasi-homogeneous functions of exact degree (see [33, Proposition 1]), A is a quasi-homogeneous function of degree \(p + 2\), B is a quasi-homogeneous function of degree \(q + 3\), and therefore \(\phi \) is a quasi-homogeneous function of degree \(m = \min \{p + 2, q + 3\}\).
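For instance, the degree of A can be checked directly (a short verification in our own notation, not reproduced from [33]): for every \(\lambda \in [0, 1]\) and \(t \in J\) we have \(\omega (\lambda t) \le \lambda ^p \omega (t)\), \(1 - \lambda t \ge 1 - t\) and \(g(\lambda t) \le \lambda \, g(t) \le g(t)\), and hence

$$\begin{aligned} A(\lambda t) = \frac{a \, \lambda ^2 t^2 \, \omega (\lambda t)}{(1 - \lambda t)(1 - g(\lambda t))} \le \lambda ^{p + 2} \, \frac{a \, t^2 \omega (t)}{(1 - t)(1 - g(t))} = \lambda ^{p + 2} A(t) \quad \text {and}\quad \lim _{t \rightarrow 0^+} \frac{A(t)}{t^{p + 2}} = a \lim _{t \rightarrow 0^+} \frac{\omega (t)}{t^{p}} \ne 0. \end{aligned}$$

The degree of B is verified analogously.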

The following is our main lemma; the interested reader can find a similar proof in [18, Lemma 2.8].

Lemma 3.5

Let \({f \in \mathbb {K}[z]}\) be a polynomial of degree \({n \ge 2}\), \({\xi \in \mathbb {K}^s}\) be a root-vector of f with multiplicity \({{\textbf {m }}}\), and let \({\Omega }\) and \(\Gamma \) be two iteration functions of first kind at \({\xi }\) with control functions \({\omega }\) and \(\gamma \) of respective degrees \(p \ge 0\) and \(q \ge 0\). Then \({T :D \subset \mathbb {K}^s \rightarrow \mathbb {K}^s}\) defined by (10) is an iteration function of first kind at \(\xi \) with control function \({\phi :J \rightarrow \mathbb {R}_+}\) of degree \(m = \min \{p + 2, q + 3\}\) defined by (21).

Proof

Let x be a vector in \({\mathbb {K}^s}\) such that \({E(x) \in J}\), where E(x) is defined by (19) and

$$ J = \{t \in J_{\omega } \cap J_{\gamma } :g(t)< 1, \, h(t)< 1\, \text { and }\, B(t) < 1 \}. $$

According to Definition 3.2, we have to prove that

$$\begin{aligned} x \in D \quad \text {and}\quad | T_i(\Omega ; \Gamma ; x) - \xi _i \,| \le \phi (E(x)) \, | x_i - \xi _i \, | \quad \text {for each}\quad i \in I_s, \end{aligned}$$
(23)

where D is defined by (12).

Since \(\Omega \) and \(\Gamma \) are iteration functions of first kind at \(\xi \) with control functions \(\omega \) and \(\gamma \), respectively, it follows from Definition 3.2 that

$$\begin{aligned} | \Omega _i(x) - \xi _i \,| \le \omega (E(x)) \, | x_i - \xi _i \, | \quad \text {and}\quad | \Gamma _i(x) - \xi _i \,| \le \gamma (E(x)) \, | x_i - \xi _i \, | \quad \text {for each }\, i \in I_s. \end{aligned}$$
(24)

So, applying Lemma 3.4 and taking into account that \({E(x) \in J}\), we get for all \({i \ne j}\),

$$\begin{aligned} |x_i - \Omega _j(x)| \ge (1 - g(E(x)))\, d_j(\xi )> 0 \,\text { and }\, |x_i - \Gamma _j(x)| \ge (1 - h(E(x)))\, d_j(\xi ) > 0. \end{aligned}$$
(25)

These inequalities imply that \({x \,\, \#\,\, \Omega (x)}\) and \({x \,\, \#\,\, \Gamma (x)}\).

Now, let \({f(x_i) \ne 0}\) for some i. In accordance with (12), it remains to prove that \(F_i(\Gamma ; x) \ne 0\). Using some known identities (see, e.g., [18, Lemma 2.6]), we get

$$\begin{aligned} F_i(\Gamma ; x) = \frac{m_i \, (1 - \beta _i)}{(x_i - \xi _i)^2}\, , \,\text { where }\, \beta _i = \frac{(x_i - \xi _i)^2}{m_i} \sum _{j \, \ne \, i} m_j \left( \frac{1}{(x_i - \Gamma _j(x))^2} - \frac{1}{(x_i - \xi _j)^2} \right) . \end{aligned}$$
(26)

From this, \({E(x) \in J}\) and the second inequalities in (24) and (25), we get

$$\begin{aligned} |\beta _i|\le & {} \frac{|x_i - \xi _i|^2}{m_i} \sum _{j \, \ne \, i} m_j \left| \frac{1}{x_i - \Gamma _j(x)} - \frac{1}{x_i - \xi _j} \right| \, \left( \frac{1}{|x_i - \Gamma _j(x)|} + \frac{1}{|x_i - \xi _j|} \right) \nonumber \\\le & {} B(E(x)) < 1. \end{aligned}$$
(27)

Thus, we have \(|1 - \beta _i| \ge 1 - |\beta _i| \ge 1 - B(E(x)) > 0\), which means that \(F_i(\Gamma ; x) \ne 0\) and so \(x \in D\).

To complete the proof of the lemma, it remains to prove the inequality of (23). Note that if \(x_i = \xi _i\) for some i, then the inequality in (23) becomes an equality. Suppose \(x_i \ne \xi _i\). In this case, from (10) and (11), we obtain

$$\begin{aligned} T_i(\Omega ; \Gamma ; x) - \xi _i = x_i - \xi _i - \displaystyle \frac{L_{\,i}(\Omega ; x)}{F_i(\Gamma ; x)} = \frac{\beta _i + \alpha _i}{\beta _i - 1}\, (x_i - \xi _i), \end{aligned}$$
(28)

where \(\beta _i\) is defined by (26) and \(\alpha _i\) is defined by

$$\begin{aligned} \alpha _i = \frac{x_i - \xi _i}{m_i} \sum _{j \, \ne \, i} m_j \left( \frac{1}{x_i - \xi _j} - \frac{1}{x_i - \Omega _j(x)}\right) \, . \end{aligned}$$
(29)

From this, the first inequalities in (24) and (25), and Hölder’s inequality, we get the estimate (see [18, Eq. (2.18)])

$$\begin{aligned} |\alpha _i| \le \frac{1}{m_i} \sum _{j \, \ne \, i} m_j \frac{|x_i - \xi _i|\, |\xi _j - \Omega _j(x)|}{|x_i - \xi _j| \, |x_i - \Omega _j(x)|} \le \frac{a\, \omega (E(x)) E(x)^2}{(1 - E(x))(1 - g(E(x)))} = A(E(x)). \end{aligned}$$
(30)

Finally, from the triangle inequality and the estimates (30) and (27), we obtain

$$\begin{aligned} \frac{|\beta _i + \alpha _i|}{|1 - \beta _i|} \le \frac{|\beta _i| + |\alpha _i|}{1 - |\beta _i|} \le \frac{B(E(x)) + A(E(x))}{1 - B(E(x))} = \phi (E(x)) \end{aligned}$$
(31)

which, together with (28), proves the desired inequality of (23) and completes the proof of the lemma.

The following theorem provides sufficient conditions and error estimates to guarantee the Q-convergence of the SLMC (9) with order \(r \ge 3\).

Theorem 3.6

(Local convergence of SLMC) Let \(f \in \mathbb {K}[z]\) be a polynomial of degree \({n \ge 2}\), \({\xi \in \mathbb {K}^s}\) be a root-vector of f with multiplicity \({{\textbf {m }}}\) and let \({\Omega }\) and \(\Gamma \) be two iteration functions of first kind at \({\xi }\) with control functions \({\omega :J_\omega \rightarrow \mathbb {R}_+}\) and \(\gamma :J_\gamma \rightarrow \mathbb {R}_+\) of respective degrees \(p \ge 0\) and \(q \ge 0\). Suppose \(x^{(0)} \in \mathbb {K}^s\) is an initial guess satisfying

$$\begin{aligned} E(x^{(0)}) = \left\| \frac{x^{(0)} - \xi }{d(\xi )} \right\| _\infty \in J_\omega \cap J_\gamma , \,\, g(E(x^{(0)}))< 1, \,\, h(E(x^{(0)}))< 1, \,\, \Lambda (E(x^{(0)})) < 1, \end{aligned}$$
(32)

where \(\Lambda = A + 2B\) with functions A and B defined by (21). Then the iteration (9) is well defined and converges to \(\xi \) with Q-order \(r = \min \{p + 3, q + 4\}\) and with the following error estimates for all \(k \ge 0\):

$$\begin{aligned} \Vert x^{(k+1)} - \xi \, \Vert \preceq \lambda ^{r^k}\, \Vert x^{(k)} - \xi \, \Vert \quad \text {and}\quad \Vert x^{(k)} - \xi \, \Vert \preceq \lambda ^{\frac{r^k - 1}{r - 1}}\, \Vert x^{(0)} - \xi \, \Vert , \end{aligned}$$
(33)

where \({\lambda = \phi (E(x^{(0)}))}\) with \(\phi \) defined by (21). Besides, we have the following estimate of the asymptotic error constant:

$$\begin{aligned} \limsup _{k \rightarrow \infty } \frac{\Vert x^{(k+1)} - \xi \Vert _\infty }{\Vert x^{(k)} - \xi \Vert _\infty ^r} \le \frac{a}{\delta (\xi )^{\, r - 1}} \lim _{t \rightarrow 0^+} \frac{\omega (t)}{t^m}\,, \end{aligned}$$
(34)

where \(m = \min \{p + 2, q + 3\}\), a is defined by (22) and \(\delta (\xi ) = \min _{i \ne j}|\xi _i - \xi _j|\).

Proof

The proof follows immediately from Lemma 3.5 and Theorem 3.3. \(\square \)

Remark 3.7

Theorem 3.6 shows that the optimal choice of the corrections \({\Omega }\) and \(\Gamma \) is when \(p = q + 1\). For example, if one puts Newton’s or Weierstrass’ iteration function (IF) (\({p = 1}\)) in the place of \({\Omega }\), one obtains a fourth-order method regardless of whether \(\Gamma \) is the identity function (\({q = 0}\)) or the IF of some higher-order method such as Newton’s, Weierstrass’, or Halley’s (\({q \ge 1}\)) (see Example 3.8). So, when constructing a particular method, it is reasonable to make the optimal choice of \({\Omega }\) and \(\Gamma \) in order to preserve the computational efficiency.

In the following example, we implement three particular members of the family (9) to illustrate the results of Theorem 3.6. In addition, to assess the performance of our methods, we compare them with two classical ones.

Example 3.8

Let \(\mathbb {K}= \mathbb {C}\). In this example, we apply three particular methods of the family (9), namely the case

$$ \Omega _i(x) = x_i - \frac{f(x_i)}{f'(x_i)} \quad \text {and}\quad \Gamma _i(x) = x_i $$

that we shall call Schröder-like method with one Newton’s correction (SLMN), the case

$$ \Omega _i(x) = \Gamma _i(x) = x_i - \frac{f(x_i)}{f'(x_i)} $$

called Schröder-like method with two Newton’s corrections (SLMNN) and the case

$$ \Omega _i(x) = x_i - \frac{f(x_i)}{f'(x_i)} \quad \text {and}\quad \Gamma _i(x) = x_i - \left( \frac{f'(x_i)}{f(x_i)} - \frac{1}{2} \frac{f''(x_i)}{f'(x_i)}\right) ^{-1} $$

called Schröder-like method with Newton and Halley’s corrections (SLMNH) to the polynomial

$$ f(z) = z^6 - 1. $$

In order to compare our methods with existing ones, we also apply the classical Sakurai-Torii-Sugiura method (STS) [34, 35] and Nourein’s method (EN) \(x^{(k + 1)} = \Phi (x^{(k)})\), where the iteration function is defined by \(\Phi (x) = (\Phi _1(x),\ldots , \Phi _n(x))\) with

$$ \Phi _i(x) = x_i - \frac{f(x_i)}{f'(x_i) - f(x_i) \displaystyle \sum _{j \ne i}{\frac{1}{x_i - x_j + f(x_j)/f'(x_j)}}}. $$

Here, we use Aberth’s initial approximations [36]:

$$ x^{(0)}_\nu = - \frac{a_1}{n} + R \exp (i\, \theta _\nu ), \qquad \theta _\nu = \frac{\pi }{n} \left( 2 \nu - \frac{3}{2} \right) , \qquad \nu = 1, \ldots , n, $$

with \(a_1 = 0\), \(R = 2\) and \(n = 6\). Note that the value of R is chosen in accordance with Cauchy’s bound for the zeros of f.

In Table 1, we give the values of the error \(\varepsilon _k\) at the k-th iteration and of the computational order of convergence (COC) \(\rho _k\), defined by [37]

$$ \varepsilon _k = \Vert x^{(k)} - x^{(k-1)}\Vert _\infty \quad \text {and}\quad \rho _k = \frac{\ln (\varepsilon _{k+1}/\varepsilon _k)}{\ln (\varepsilon _k/\varepsilon _{k-1})}. $$

It is worth noting that the concept of COC was introduced by Weerakoon and Fernando [38] and further developed by Cordero and Torregrosa [37], Grau-Sánchez et al. [39], and Proinov and Ivanov [18].

One can see from the table that our three methods have similar convergence behavior and that the COC \(\rho _k\) confirms the theoretical order r obtained in Theorem 3.6. It is also seen that each of our methods behaves slightly better than the considered classical ones.
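For reproducibility, the following Python sketch (our own code, reusing `slmc_step` from the sketch in Sect. 2.1; the number of iterations is illustrative) runs SLMN on \(f(z) = z^6 - 1\) from Aberth’s initial approximations and prints the values of \(\rho _k\).

```python
import numpy as np

# f(z) = z^6 - 1: six simple zeros, so s = n = 6 and m_i = 1
n = 6
coeffs = np.zeros(n + 1); coeffs[0], coeffs[-1] = 1.0, -1.0
m = np.ones(n, dtype=int)

# Aberth's initial approximations with a_1 = 0 and R = 2
nu = np.arange(1, n + 1)
x = 2 * np.exp(1j * np.pi / n * (2 * nu - 1.5))

Newton   = lambda y: y - np.polyval(coeffs, y) / np.polyval(np.polyder(coeffs), y)
Identity = lambda y: y

errs = []
for k in range(6):                             # SLMN: Omega = Newton, Gamma = identity
    x_new = slmc_step(x, m, coeffs, Newton, Identity)
    errs.append(np.max(np.abs(x_new - x)))     # eps_k = ||x^(k) - x^(k-1)||_inf
    x = x_new

for k in range(1, len(errs) - 1):              # COC rho_k; the last values may be
    print(np.log(errs[k + 1] / errs[k]) / np.log(errs[k] / errs[k - 1]))   # distorted by roundoff
```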

Table 1 Numerical results for Example 3.8

4 Conclusions

In the present paper, we have proposed the idea of composing several iteration functions in order to construct new families of iterative methods with both high order of convergence and high computational efficiency. To illustrate this idea, we have combined some of the most famous iteration functions with two or three arbitrary iteration functions (corrections) to obtain three new families of high-order iterative methods for the simultaneous computation of simple or multiple polynomial zeros. To show that each correction has a different influence, which can be managed in order to achieve a balance between the convergence order and the computational efficiency when constructing particular methods, we have proved a local convergence theorem about one of the proposed general families, called Schröder-like methods with two corrections. A numerical example has been provided to confirm the theoretical results and to compare some of our methods with classical ones.

A comprehensive analysis of the convergence, dynamics, and computational efficiency of the presented new families, as well as of other newly constructed ones, will be performed in our future work.