1 Introduction

Multivariate polynomial interpolation problems are difficult to solve, and we are interested in studying two of them. The first problem is to choose a set of interpolation conditions such that the corresponding interpolation problem has a unique solution in a given space of multivariate polynomials. The interpolation points are usually required to lie in a restricted configuration; for example, they can be collected from hypersurfaces, where each hypersurface contains a prescribed number of points (see [2, 3, 8]). The second problem is to find an effective formula for the interpolation polynomial; in particular, a Newton type formula is not available for many multivariate interpolation schemes.

Let \(\mathscr {P}(\mathbb {R}^2)\) be the vector space of all polynomials (with real coefficients) in \(\mathbb {R}^2\) and \(\mathscr {P}_d(\mathbb {R}^2)\) the subspace consisting of all polynomials of degree at most d. Let \(\mathscr {S}_d\) be the vector space of bivariate polynomials of degree at most d that are even with respect to the second variable y,

$$\begin{aligned} \mathscr {S}_d=\{p\in \mathscr {P}_d(\mathbb {R}^2): p(x,y)=p(x,-y)\}.\end{aligned}$$

In the paper of Carnicer and Godés [6], the authors proved that

$$\begin{aligned} \dim \mathscr {S}_d=\big [\dfrac{d+2}{2}\big ]\cdot \big [\dfrac{d+3}{2}\big ].\end{aligned}$$
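This dimension formula can be checked by counting the monomial basis \(x^iy^j\) of \(\mathscr {S}_d\), with j even and \(i+j\le d\); a minimal sketch in Python (the function names are ours):

```python
def dim_S(d):
    """Count the monomials x^i y^j with j even and i + j <= d,
    which form a basis of S_d."""
    return sum(1 for i in range(d + 1) for j in range(0, d - i + 1, 2))

def dim_S_formula(d):
    """[(d+2)/2] * [(d+3)/2], with [.] the integer part."""
    return ((d + 2) // 2) * ((d + 3) // 2)

# the two counts agree for all small degrees
for d in range(12):
    assert dim_S(d) == dim_S_formula(d)
```

For instance, \(\dim \mathscr {S}_2=4\), with basis \(\{1,x,x^2,y^2\}\).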

We consider a problem of Hermite interpolation by a polynomial in \(\mathscr {S}_d\). More precisely, the problem is to find a polynomial in \(\mathscr {S}_d\) which matches, on a set of distinct points in \(\mathbb {R}^2\), the values of a function and of some of its partial derivatives. It is required that the number of interpolation conditions equal the dimension of \(\mathscr {S}_d\). If the interpolation problem has a unique solution, then we say that the problem is regular.

Problem 1

Find a set \(A=\{{\textbf {a}}_1,\ldots ,{\textbf {a}}_m\}\) of m distinct points in \(\mathbb {R}^2\) and differential operators \(\mu _1,\ldots ,\mu _{N_d}\) based at points of A, with \(N_d=\dim \mathscr {S}_d\), for which the interpolation problem

$$\begin{aligned} \mu _i(f)=f_{\mu _i}, \quad 1\le i\le N_d, \end{aligned}$$

has a unique solution in \(\mathscr {S}_d\) for any preassigned data \(\{f_{\mu _i}\}\). Find an explicit formula for the interpolation polynomial.

In the case where \(m=N_d\) and \(\mu _i\) is the evaluation functional defined by \(\mu _i(f)=f(\textbf{a}_i)\), \(i=1,\ldots , N_d\), Problem 1 becomes the Lagrange interpolation problem for \(\mathscr {S}_d\), which was first studied by Carnicer and Godés. In [6], the authors constructed a regular set X in the upper half plane called an \(\mathscr {S}_d\)-Berzolari–Radon set. More precisely, X consists of \(N_d\) distinct points distributed on lines, where each line contains a certain number of points. They also established a Newton type formula for the symmetric interpolation polynomial. To prove the regularity of X, Carnicer and Godés showed that the interpolation operator corresponding to the Berzolari–Radon set is a bijection onto the space \(\mathscr {S}_d\), and they constructed a symmetric polynomial in Newton form that matches the values of the interpolated function on X.

In [12], we introduced the multipoint Berzolari–Radon sets (MBR sets for short), whose points are not necessarily distinct and lie in the upper half plane. When a point \(\textbf{a}\) is repeated \(\nu \) times in an MBR set, we use the directional derivatives up to order \(\nu -1\) with respect to a vector parallel to the line containing \(\textbf{a}\). We showed that the MBR set solves the Hermite interpolation problem. To prove the regularity of the MBR set, we used the factorization method to verify that the interpolation operator corresponding to the MBR set is an injective map from \(\mathscr {S}_d\) to \(\mathbb {R}^{N_d}\), and consequently a bijective one. To establish a Newton type formula for the interpolation polynomial, we first constructed a bivariate polynomial that interpolates (in the Hermite sense) a function at the points of the MBR set lying on one line. We then combined the polynomials corresponding to the interpolation conditions on the lines to obtain the formula. Moreover, the formula enabled us to prove a continuity property of Hermite interpolation at MBR sets with respect to the interpolation points.

In this paper, we investigate the Hermite interpolation problem for \(\mathscr {S}_d\), where the interpolation points are distributed on quadratic curves of the following three forms:

  (i)

    parabola: \(x-\alpha -\beta y^2=0\), \(\alpha \in \mathbb {R}\) and \(\beta \ne 0\);

  (ii)

    circle: \((x-\alpha )^2+y^2-R^2=0\), \(\alpha \in \mathbb {R}\) and \(R>0\);

  (iii)

    hyperbola: \((x-\alpha )^2-\beta y^2-1=0\), \(\alpha \in \mathbb {R}\) and \(\beta >0\).

Let \(\mathscr {Q}\) be the class of all quadratic curves mentioned above. Note that every curve in \(\mathscr {Q}\) is symmetric with respect to the horizontal axis. Moreover, each curve intersects the X-axis at one point (the parabola) or at two points (the circle and the hyperbola). We treat the three types of curves separately in Sects. 2 and 3.

Let \(\mathscr {C}\) be a curve in \(\mathscr {Q}\). The first type of interpolation condition is the evaluation functional corresponding to an intersection point of \(\mathscr {C}\) and the X-axis. We use the natural parameterization \(t\mapsto \rho (t)\) of \(\mathscr {C}\) to define the second type of interpolation conditions; more precisely, they are differential operators of the form \(F\mapsto (F\circ \rho )^{(k)} (t_0)\), where the point \(\rho (t_0)\) lies on the upper half of \(\mathscr {C}\). In Sect. 2, we construct an interpolation polynomial of Hermite type associated with the above-mentioned interpolation conditions on \(\mathscr {C}\). It is a bivariate polynomial; however, it depends only on the first variable x. In Sect. 3, we use the differential operators to obtain a divisibility criterion for the quadratic polynomial q that defines \(\mathscr {C}\): we show that a polynomial \(P\in \mathscr {S}_d\) is divisible by q if all differential operators corresponding to \(\mathscr {C}\) vanish at P. The criterion enables us to use the factorization method to prove the regularity of the Hermite scheme. In Sect. 4, we establish a Newton type formula for the Hermite interpolation polynomial, in which every term contains an interpolation polynomial constructed in Sect. 2. We also give some examples in which we compute the Hermite interpolation polynomials explicitly and study their continuity property.

We note that the factorization method has been used by many authors to prove the regularity of interpolation schemes (see [2,3,4,5, 11, 14]) in which the underlying space is the set of all polynomials of degree at most d; this differs from the space of symmetric polynomials \(\mathscr {S}_d\) considered in this paper. In a recent work [7], the authors studied the stability of the Lagrange interpolation operator in \(\mathbb {R}^n\) and of its even and odd parts with respect to the last variable. They showed that the Lebesgue constants of the even and odd operators are less than the Lebesgue constant of the full Lagrange operator. For details, we refer the reader to [7].

Finally, it is worth noting that Hermite, or the more general Hermite–Birkhoff, interpolation has potential applications. For example, in [9, 10, 16, 17], the authors solved interpolation problems in the framework of scattered data with different kinds of basis functions and gave numerical results.

2 Bivariate Hermite interpolation associated with curves in \(\mathscr {Q}\)

The construction of Hermite interpolation on curves relies heavily on univariate Hermite interpolation, which is defined as follows.

Let \(t_1,\ldots ,t_n\) be n distinct real numbers. Let \(\nu _1,\ldots ,\nu _n\) be n positive integers and \(l=\nu _1+\cdots +\nu _n\). The following classical result can be found in [1, Theorem 1.1].

Theorem 1

Let f be a function for which \(f^{(\nu _i-1)}(t_i)\) exists for \(i=1,\ldots ,n\). Then there exists a unique polynomial p of degree at most \(l-1\) such that

$$\begin{aligned} p^{(j)}(t_i)=f^{(j)} (t_i),\quad 1\le i\le n,\,\, 0\le j\le \nu _i-1. \end{aligned}$$

The polynomial p in Theorem 1 is denoted by \({\textbf {H}}[\{(t_1;\nu _1),\ldots ,(t_n;\nu _n)\};f]\) and called the Hermite interpolation polynomial. In the special case where \(l=n\) and \(\nu _1=\cdots =\nu _n=1\), the interpolation polynomial becomes the ordinary Lagrange interpolation polynomial. If \(n=1\), then \({\textbf {H}}[\{(t_1;l)\};f]\) is the Taylor polynomial of f at \(t_1\) of degree \(l-1\). We can regard the set of pairs of nodes and multiplicities \(\{(t_1;\nu _1),\ldots ,(t_n;\nu _n)\}\) as a multipoint set \(\{u_1,\ldots ,u_l\}\subset \mathbb {R}\). Here, \((t_i;\nu _i)\) means that \(t_i\) is repeated \(\nu _i\) times. For example, \(A=\{(0;2),(-2;3), (3;1)\}\) can be identified with \(\{0,0,-2,-2,-2,3\}\). Hence, we can write \({\textbf {H}}[\{u_1,\ldots ,u_l\};f]\) instead of \({\textbf {H}}[\{(t_1;\nu _1),\ldots ,(t_n;\nu _n)\};f]\). The most useful formula for the Hermite interpolation polynomial is the Newton representation

$$\begin{aligned} {\textbf {H}}[\{u_1,\ldots ,u_l\};f](t)=f[u_1]+f[u_1,u_2](t-u_1)+\cdots +f[u_1,u_2,\ldots ,u_l](t-u_1)\cdots (t-u_{l-1}), \end{aligned}$$
(1)

where the coefficients \(f[u_1,u_2,\ldots ,u_i]\) are the divided differences.
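The divided differences for repeated nodes are computed by the standard confluent table, in which an entry whose nodes all coincide is replaced by \(f^{(k)}(t)/k!\). The following Python sketch (all names are ours) assumes the required derivative values are supplied; for the data \(f(t)=t^3\) on the multipoint set \(\{(0;2),(1;2)\}\), the Newton form (1) reproduces \(t^3\) exactly.

```python
from math import factorial

def hermite_newton_coeffs(nodes, derivs):
    """Generalized divided differences for Hermite interpolation.
    nodes:  list of l nodes with repetitions grouped together,
            e.g. [0, 0, 1, 1] for the multipoint set {(0;2), (1;2)}.
    derivs: derivs[t][j] = f^(j)(t).
    Returns the Newton coefficients f[u_1], f[u_1,u_2], ..., f[u_1,...,u_l]."""
    l = len(nodes)
    table = [derivs[t][0] for t in nodes]     # column 0 of the table
    coeffs = [table[0]]
    for k in range(1, l):
        new = []
        for i in range(l - k):
            if nodes[i] == nodes[i + k]:
                # confluent entry: f[t,...,t] = f^(k)(t)/k!
                new.append(derivs[nodes[i]][k] / factorial(k))
            else:
                new.append((table[i + 1] - table[i]) / (nodes[i + k] - nodes[i]))
        table = new
        coeffs.append(table[0])
    return coeffs

def newton_eval(nodes, coeffs, t):
    """Evaluate the Newton form (1) at t."""
    p, w = 0.0, 1.0
    for c, u in zip(coeffs, nodes):
        p += c * w
        w *= (t - u)
    return p

# f(t) = t^3 with values of f and f' at the multipoint set {(0;2), (1;2)}
derivs = {0: [0.0, 0.0], 1: [1.0, 3.0]}   # derivs[t] = [f(t), f'(t)]
nodes = [0, 0, 1, 1]
coeffs = hermite_newton_coeffs(nodes, derivs)
# four conditions determine the cubic t^3 exactly
assert abs(newton_eval(nodes, coeffs, 2.0) - 8.0) < 1e-12
```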

Let \(\mathscr {C}\) be a curve in the class \(\mathscr {Q}\). We set

$$\begin{aligned} \mathscr {C}^{+}=\mathscr {C}\cap \{(x,y)\in \mathbb {R}^2: y>0\}.\end{aligned}$$

Let \(\textbf{c}\) be a point on \(\mathscr {C}^{+}\) and \(\ell \) a positive integer. We denote by \(Diff (\mathscr {C},\textbf{c},\ell )\) the following set of differential operators:

  (i)

    If \(\mathscr {C}\) is the parabola \(x-\alpha =\beta y^2\) and \(\textbf{c}=(\alpha +\beta y_0^2,y_0)\) with \(y_0>0\), then

    $$\begin{aligned} Diff (\mathscr {C},\textbf{c},\ell )=\Big \{\mu : \mu (F)=\dfrac{d^k}{d y^k} F(\alpha +\beta y^2,y)\Big |_{y=y_0},\quad k=0,1,\ldots ,\ell -1\Big \};\end{aligned}$$
  (ii)

    If \(\mathscr {C}\) is the circle \((x-\alpha )^2+y^2-R^2=0\) and \(\textbf{c}= (\alpha +R\cos t_0, R\sin t_0)\) with \(0<t_0<\pi \), then

    $$\begin{aligned} Diff (\mathscr {C},\textbf{c},\ell )=\Big \{\mu : \mu (F)=\dfrac{d^k}{d t^k} F(\alpha +R\cos t, R\sin t)\Big |_{t=t_0},\quad k=0,1,\ldots ,\ell -1\Big \};\end{aligned}$$
  (iii)

    If \(\mathscr {C}\) is the hyperbola \((x-\alpha )^2-\beta y^2=1\) and \(\textbf{c}= \Big (\alpha +\dfrac{e^{t_0}+e^{-t_0}}{2},\dfrac{e^{t_0}-e^{-t_0}}{2\sqrt{\beta }}\Big )\) with \(t_0>0\), then

    $$\begin{aligned} Diff (\mathscr {C},\textbf{c},\ell )=\Big \{\mu : \mu (F)=\dfrac{d^k}{d t^k} F\Big (\alpha +\dfrac{e^{t}+e^{-t}}{2},\dfrac{e^{t}-e^{-t}}{2\sqrt{\beta }}\Big )\Big |_{t=t_0},\quad k=0,1,\ldots ,\ell -1\Big \}.\end{aligned}$$

To study bivariate Hermite interpolation, we must compute derivatives of composite functions. For the reader's convenience, we recall the Faà di Bruno formula, which can be found in [15].

Theorem 2

If f(t) and g(t) are functions for which the necessary derivatives are defined, then

$$\begin{aligned} \big (f(g(t))\big )^{(n)}=\sum \dfrac{n!}{k_1!\cdots k_n!} f^{(k)}(g(t))\Big (\dfrac{g^{(1)}(t)}{1!}\Big )^{k_1}\Big (\dfrac{g^{(2)}(t)}{2!}\Big )^{k_2}\cdots \Big (\dfrac{g^{(n)}(t)}{n!}\Big )^{k_n}, \end{aligned}$$

where \(k=k_1+\cdots +k_n\) and the sum is over all \(k_1,\ldots ,k_n\) for which \(k_1+2k_2+\cdots +nk_n=n\).
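As a sanity check, the sum in Theorem 2 can be enumerated directly for small n. The sketch below (our own test functions) verifies the formula for \(f(u)=u^2\) and \(g(t)=t^3+t\), for which \((f\circ g)(t)=t^6+2t^4+t^2\) can be differentiated by hand.

```python
from math import factorial
from itertools import product

def faa_di_bruno(n, f_derivs, g_derivs, t):
    """n-th derivative of f(g(t)) via the Faà di Bruno formula.
    f_derivs(k, u): k-th derivative of f at u; g_derivs(i, t) likewise."""
    total = 0.0
    # enumerate all (k_1, ..., k_n) with k_1 + 2 k_2 + ... + n k_n = n
    for ks in product(*(range(n // i + 1) for i in range(1, n + 1))):
        if sum(i * k for i, k in enumerate(ks, start=1)) != n:
            continue
        k = sum(ks)
        term = factorial(n) * f_derivs(k, g_derivs(0, t))
        for i, ki in enumerate(ks, start=1):
            term *= (g_derivs(i, t) / factorial(i)) ** ki / factorial(ki)
        total += term
    return total

# f(u) = u^2 and g(t) = t^3 + t, so (f∘g)(t) = t^6 + 2 t^4 + t^2
def f_derivs(k, u):
    return [u * u, 2 * u, 2.0][k] if k <= 2 else 0.0

def g_derivs(i, t):
    return [t**3 + t, 3 * t**2 + 1, 6 * t, 6.0][i] if i <= 3 else 0.0

# direct computation: (f∘g)''(t) = 30 t^4 + 24 t^2 + 2, which is 56 at t = 1
assert abs(faa_di_bruno(2, f_derivs, g_derivs, 1.0) - 56.0) < 1e-9
```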

Our first result concerns the parabola.

Lemma 1

Let \(\ell , \nu _1, \ldots ,\nu _n\) be positive integers such that \(\nu _1+\cdots +\nu _n=\ell \). Let \(\textbf{b}\) be the intersection point of the parabola \(\mathscr {C}\) with equation \(x-\alpha -\beta y^2=0\) and the horizontal axis, i.e., \(\textbf{b}=(\alpha ,0)\). Let \(\textbf{c}_i= (\alpha +\beta y_i^2,y_i)\), \(y_i>0\), \(i=1,\ldots ,n\), be n distinct points on \(\mathscr {C}^{+}\). We set

$$\begin{aligned} \textbf{A}=\{(\textbf{b};1), (\textbf{c}_1;\nu _1),\ldots ,(\textbf{c}_n;\nu _n)\}\end{aligned}$$

and

$$\begin{aligned} A=\{(\alpha ;1), (\alpha +\beta y_1^2;\nu _1),\ldots ,(\alpha +\beta y_n^2;\nu _n)\}.\end{aligned}$$

For a function F defined on \(\mathscr {C}\) for which the necessary derivatives are defined, we set \(f(x)=F\big (x,\sqrt{\frac{x-\alpha }{\beta }}\big )\). Then the polynomial \(\mathfrak H_{\mathscr {C}}[\textbf{A};F]\) defined by

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](x,y):=\textbf{H}[A;f](x)\end{aligned}$$

belongs to \(\mathscr {S}_\ell \) and satisfies the following relations

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\textbf{b})=F(\textbf{b}),\quad \mu (\mathfrak H_{\mathscr {C}}[\textbf{A};F])=\mu (F),\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i).\end{aligned}$$

Proof

Since \(\nu _1+\cdots +\nu _n=\ell \), \(\textbf{H}[A;f](x)\) is a polynomial of degree at most \(\ell \). Therefore, \(\mathfrak H_{\mathscr {C}}[\textbf{A};F]\in \mathscr {S}_\ell \). By the definition, we have

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\textbf{b})=\textbf{H}[A;f](\alpha )=f(\alpha )=F(\textbf{b}).\end{aligned}$$

From the relation

$$\begin{aligned} \textbf{H}[A;f]^{(j)}(\alpha +\beta y^2_i)=f^{(j)} (\alpha +\beta y^2_i),\quad 1\le i\le n,\quad 0\le j\le \nu _i-1, \end{aligned}$$

we can use the Faà di Bruno formula to obtain the following relations

$$\begin{aligned} \dfrac{d^j}{d y^j}\textbf{H}[A;f](\alpha +\beta y^2)\Big |_{y=y_i}=\dfrac{d^j}{d y^j}f(\alpha +\beta y^2)\Big |_{y=y_i}. \end{aligned}$$
(2)

Indeed, let us set \(g(y)=\alpha +\beta y^2\). Using Theorem 2, we can write

$$\begin{aligned}&\dfrac{d^j}{d y^j}\textbf{H}[A;f](\alpha +\beta y^2)\Big |_{y=y_i} =\dfrac{d^j}{d y^j}\Big (\textbf{H}[A;f]\circ g\Big )(y)\Big |_{y=y_i}\\&= \sum \dfrac{j!}{k_1!\cdots k_j!} (\textbf{H}[A;f])^{(k)}(g(y_i))\Big (\dfrac{g^{(1)}(y_i)}{1!}\Big )^{k_1}\Big (\dfrac{g^{(2)}(y_i)}{2!}\Big )^{k_2}\cdots \Big (\dfrac{g^{(j)}(y_i)}{j!}\Big )^{k_j}\\&= \sum \dfrac{j!}{k_1!\cdots k_j!} f^{(k)}(g(y_i))\Big (\dfrac{g^{(1)}(y_i)}{1!}\Big )^{k_1}\Big (\dfrac{g^{(2)}(y_i)}{2!}\Big )^{k_2}\cdots \Big (\dfrac{g^{(j)}(y_i)}{j!}\Big )^{k_j}=\dfrac{d^j}{d y^j}(f\circ g)(y)\Big |_{y=y_i}, \end{aligned}$$

where \(k=k_1+\cdots +k_j\) and the sum is over all \(k_1,\ldots ,k_j\) for which \(k_1+2k_2+\cdots +jk_j=j\).

Note that \(F(\alpha +\beta y^2,y)=f(\alpha +\beta y^2)\) for \(y>0\). For \(1\le i\le n\) and \(0\le j\le \nu _i-1\), we have

$$\begin{aligned} \dfrac{d^j}{d y^j} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\alpha +\beta y^2,y)\Big |_{y=y_i}= & {} \dfrac{d^j}{d y^j}\textbf{H}[A;f](\alpha +\beta y^2)\Big |_{y=y_i} \\= & {} \dfrac{d^j}{d y^j}f(\alpha +\beta y^2)\Big |_{y=y_i} \\= & {} \dfrac{d^j}{d y^j} F(\alpha +\beta y^2,y)\Big |_{y=y_i}, \end{aligned}$$

where we use (2) in the second equality. The last relation can be rewritten as

$$\begin{aligned} \mu (\mathfrak H_{\mathscr {C}}[\textbf{A};F])=\mu (F),\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i).\end{aligned}$$

\(\square \)
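In the Lagrange special case \(\nu _1=\cdots =\nu _n=1\), the construction of Lemma 1 amounts to restricting F to the parabola and interpolating in the variable x alone. A numerical sketch in Python, with our own choice of parabola and test function F:

```python
def lagrange(nodes, values, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, xi in enumerate(nodes):
        w = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                w *= (x - xj) / (xi - xj)
        total += values[i] * w
    return total

# parabola x = alpha + beta * y^2 with alpha = 0, beta = 1 (our choice)
alpha, beta = 0.0, 1.0
ys = [1.0, 2.0]                          # heights of the points c_i on C+
F = lambda x, y: x * y * y               # test function, even in y

# restriction of F to the curve: f(x) = F(x, sqrt((x - alpha)/beta))
nodes = [alpha] + [alpha + beta * y * y for y in ys]
values = [F(alpha, 0.0)] + [F(alpha + beta * y * y, y) for y in ys]

# the bivariate interpolant P(x, y) := H[A; f](x) depends only on x
P = lambda x, y: lagrange(nodes, values, x)

assert abs(P(0.0, 0.0) - F(0.0, 0.0)) < 1e-12    # matches at b = (0, 0)
assert abs(P(1.0, 1.0) - F(1.0, 1.0)) < 1e-12    # matches at c_1 = (1, 1)
assert abs(P(4.0, -2.0) - F(4.0, -2.0)) < 1e-12  # symmetric point (4, -2)
```

Since the interpolant depends only on x, it automatically matches F at the mirror points \((x_i,-y_i)\) as well.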

Lemma 2

Let \(\ell , \nu _1, \ldots ,\nu _n\) be positive integers such that \(\nu _1+\cdots +\nu _n=\ell \). Let \(\textbf{b}\) be an intersection point of the circle \(\mathscr {C}\) with equation \((x-\alpha )^2+y^2-R^2=0\) and the horizontal axis, i.e., \(\textbf{b}\in \{(\pm R+\alpha ,0)\}\). Let \(\textbf{c}_i= (\alpha +R\cos t_i, R\sin t_i)\), \(0<t_i<\pi \), \(i=1,\ldots ,n\), be n distinct points on \(\mathscr {C}^{+}\). We set

$$\begin{aligned} \textbf{A}=\{(\textbf{b};1), (\textbf{c}_1;\nu _1),\ldots ,(\textbf{c}_n;\nu _n)\}\end{aligned}$$

and

$$\begin{aligned} A=\{(\pm R+\alpha ;1), (\alpha +R\cos t_1;\nu _1),\ldots ,(\alpha +R\cos t_n;\nu _n)\}.\end{aligned}$$

For a function F defined on \(\mathscr {C}\) for which the necessary derivatives are defined, we set \(f(x)=F\big (x,\sqrt{R^2-(x-\alpha )^2}\big )\). Then the polynomial \(\mathfrak H_{\mathscr {C}}[\textbf{A};F]\) defined by

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](x,y):=\textbf{H}[A;f](x)\end{aligned}$$

belongs to \(\mathscr {S}_\ell \) and satisfies the following relations

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\textbf{b})=F(\textbf{b}),\quad \mu (\mathfrak H_{\mathscr {C}}[\textbf{A};F])=\mu (F),\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i).\end{aligned}$$

Proof

The proof is analogous to the proof of Lemma 1. Evidently, \(\mathfrak H_{\mathscr {C}}[\textbf{A};F]\in \mathscr {S}_\ell \). We first see that

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\textbf{b})=\textbf{H}[A;f](\pm R+\alpha )=f(\pm R+\alpha )=F(\textbf{b}).\end{aligned}$$

The Faà di Bruno formula, along with the interpolation conditions for \(\textbf{H}[A;f]\),

$$\begin{aligned} \textbf{H}[A;f]^{(k)}(\alpha +R\cos t_i)=f^{(k)} (\alpha +R\cos t_i),\quad 1\le i\le n,\quad 0\le k\le \nu _i-1, \end{aligned}$$

leads to

$$\begin{aligned} \dfrac{d^k}{d t^k}\textbf{H}[A;f](\alpha +R\cos t)\Big |_{t=t_i}=\dfrac{d^k}{d t^k}f(\alpha +R\cos t)\Big |_{t=t_i}. \end{aligned}$$
(3)

By the definition, we have \(F(\alpha +R\cos t, R\sin t)=f(\alpha +R\cos t)\) for \(0<t<\pi \). For \(1\le i\le n\) and \(0\le k\le \nu _i-1\), we have

$$\begin{aligned} \dfrac{d^k}{d t^k} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\alpha +R\cos t, R\sin t)\Big |_{t=t_i}= & {} \dfrac{d^k}{d t^k}\textbf{H}[A;f](\alpha +R\cos t)\Big |_{t=t_i} \\= & {} \dfrac{d^k}{d t^k}f(\alpha +R\cos t)\Big |_{t=t_i} \\= & {} \dfrac{d^k}{d t^k} F(\alpha +R\cos t, R\sin t)\Big |_{t=t_i}, \end{aligned}$$

where we use (3) in the second equality. In other words,

$$\begin{aligned} \mu (\mathfrak H_{\mathscr {C}}[\textbf{A};F])=\mu (F),\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i).\end{aligned}$$

\(\square \)

Next, we state the result corresponding to the hyperbola without proof.

Lemma 3

Let \(\ell , \nu _1, \ldots ,\nu _n\) be positive integers such that \(\nu _1+\cdots +\nu _n=\ell \). Let \(\textbf{b}\) be the intersection point of the right branch of the hyperbola \(\mathscr {C}\) with equation \((x-\alpha )^2-\beta y^2-1=0\) and the horizontal axis, i.e., \(\textbf{b}=(\alpha +1,0)\). Let \(\textbf{c}_i= \big (\alpha +\frac{e^{t_i}+e^{-t_i}}{2},\frac{e^{t_i}-e^{-t_i}}{2\sqrt{\beta }}\big )\), \(t_i>0\), \(i=1,\ldots ,n\), be n distinct points on \(\mathscr {C}^{+}\). We set

$$\begin{aligned} \textbf{A}=\{(\textbf{b};1), (\textbf{c}_1;\nu _1),\ldots ,(\textbf{c}_n;\nu _n)\}\end{aligned}$$

and

$$\begin{aligned} A=\big \{(\alpha +1;1), \big (\alpha +\frac{e^{t_1}+e^{-t_1}}{2};\nu _1\big ),\ldots ,\big (\alpha +\frac{e^{t_n}+e^{-t_n}}{2};\nu _n\big )\big \}.\end{aligned}$$

For a function F defined on \(\mathscr {C}\) for which the necessary derivatives are defined, we set \(f(x)=F\Big (x,\sqrt{\frac{(x-\alpha )^2-1}{\beta }}\Big )\). Then the polynomial \(\mathfrak H_{\mathscr {C}}[\textbf{A};F]\) defined by

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](x,y):=\textbf{H}[A;f](x)\end{aligned}$$

belongs to \(\mathscr {S}_\ell \) and satisfies the following relations

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](\textbf{b})=F(\textbf{b}),\quad \mu (\mathfrak H_{\mathscr {C}}[\textbf{A};F])=\mu (F),\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i).\end{aligned}$$

Remark 1

We consider the case where the interpolation set contains only one point, that is, \(\textbf{A}=\{(\textbf{b};1)\}\) with \(\textbf{b}\in \mathscr {C}\cap Ox\). The Hermite interpolation polynomial of a function F at \(\textbf{A}\) is the constant polynomial defined by

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](x,y):=F(\textbf{b}).\end{aligned}$$

It is an element of \(\mathscr {S}_0\).

3 A divisibility criterion

In this section, we give a divisibility criterion for polynomials in \(\mathscr {S}_d\).

Lemma 4

Under the assumption of Lemma 1, if \(P\in \mathscr {S}_\ell \) satisfies the relations

$$\begin{aligned} P(\textbf{b})=0,\quad \mu (P)=0,\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i),\end{aligned}$$

then P is a multiple of \(q(x,y)=x-\alpha -\beta y^2\).

Proof

Let us set \(Q(y)=P(\alpha +\beta y^2,y)\). We see that Q is an even polynomial of degree at most \(2\ell \), because

$$\begin{aligned} Q(-y)=P(\alpha +\beta (-y)^2,-y)=P(\alpha +\beta y^2,y)=Q(y).\end{aligned}$$

The relation \(P(\textbf{b})=0\) gives \(Q(0)=0\). The equations

$$\begin{aligned} \mu (P)=0,\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i) \end{aligned}$$

imply \(Q^{(j)}(y_i)=0\) for \(i=1,\ldots ,n\) and \(j=0,\ldots ,\nu _i-1\). Moreover, since Q is even, we also have \(Q^{(j)}(-y_i)=0\) for \(j=0,\ldots ,\nu _i-1\). It follows that

$$\begin{aligned} Q(y)=y\prod _{i=1}^n (y-y_i)^{\nu _i}(y+y_i)^{\nu _i} Q^{*}(y) \end{aligned}$$
(4)

for some polynomial \(Q^{*}\). The last relation forces \(Q^{*}=0\): if \(Q^{*}\ne 0\), then the degree of the polynomial on the right-hand side of (4) is at least \(1+2\sum _{i=1}^n \nu _i=2\ell +1\), which is strictly greater than \(\deg Q\). Consequently, \(Q=0\), which is equivalent to \(P(\alpha +\beta y^2,y)=0\) for all \(y\in \mathbb {R}\). This means that P vanishes on \(\mathscr {C}\), and hence P is a multiple of \(x-\alpha -\beta y^2\).\(\square \)

Lemma 5

Under the assumption of Lemma 2, if \(P\in \mathscr {S}_\ell \) satisfies the relations

$$\begin{aligned} P(\textbf{b})=0,\quad \mu (P)=0,\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i),\end{aligned}$$

then P is a multiple of \(q(x,y)=(x-\alpha )^2+y^2-R^2\).

Proof

Observe that \(Q(t):=P(\alpha +R\cos {t},R\sin {t})\) is a trigonometric polynomial of degree at most \(\ell \). The hypothesis \(P(\textbf{b})=0\) becomes \(Q(0)=0\) or \(Q(\pi )=0\). Let us fix \(i\in \{1,\ldots ,n\}\). By assumption, we have

$$\begin{aligned} \dfrac{d^j }{d t^j}Q (t_i)=0,\quad j=0,\ldots ,\nu _i-1. \end{aligned}$$

In other words, \(t_i\) is a root of multiplicity \(\nu _i\) of Q. In addition, since \(P(x,y)=P(x,-y)\), we conclude that \(Q(-t)=Q(t)\), and hence \(Q^{(j)}(-t_i)=0\) for \(j=0,\ldots , \nu _i-1\). This says that \(-t_i\) is also a root of multiplicity \(\nu _i\) of Q. Since \(\nu _1+\cdots +\nu _n=\ell \), the trigonometric polynomial Q(t) has, counted with multiplicity, \(2\ell +1\) roots. Now, we can use Theorem 1.7 in [18, Chapter X] to obtain \(Q=0\). The relation \(P(\alpha +R\cos t, R\sin t)=0\) for all \(t\in \mathbb {R}\) says that P vanishes on \(\mathscr {C}\). It follows that P is a multiple of \((x-\alpha )^2+y^2-R^2\). \(\square \)

Lemma 6

Under the assumption of Lemma 3, if \(P\in \mathscr {S}_\ell \) satisfies the relations

$$\begin{aligned} P(\textbf{b})=0,\quad \mu (P)=0,\quad \mu \in \bigcup _{i=1}^nDiff (\mathscr {C},\textbf{c}_i,\nu _i),\end{aligned}$$

then P is a multiple of \(q(x,y)=(x-\alpha )^2-\beta y^2-1\).

To prove the lemma, we need an elementary result concerning the vanishing of derivatives of functions. It was proved in [13, Lemma 2.6].

Lemma 7

Let k be a natural number. Let g and h be k-times differentiable functions at \(t_0\in \mathbb {R}\). If \(g(t_0)\ne 0\) and \((gh)^{(i)}(t_0)=0\) for \(i=0,\ldots ,k\), then \(h^{(i)}(t_0)=0\) for \(i=0,\ldots ,k\).

Proof of Lemma 6

Since P is a polynomial of degree at most \(\ell \), there exists a univariate polynomial Q of degree at most \(2\ell \) such that

$$\begin{aligned} Q(e^t)=e^{\ell t} P\left( \alpha +\dfrac{e^t+e^{-t}}{2},\dfrac{e^t-e^{-t}}{2\sqrt{\beta }}\right) .\end{aligned}$$

For \(1\le i\le n\) and \(0\le j\le \nu _i-1\), we have

$$\begin{aligned} \left( e^{-\ell t}Q(e^{t})\right) ^{(j)}\Big |_{t=t_i}=\dfrac{d^j}{d t^j}P\left( \alpha +\dfrac{e^{t}+e^{-t}}{2},\dfrac{e^{t}-e^{-t}}{2\sqrt{\beta }}\right) \Big |_{t=t_i}=0.\end{aligned}$$

Applying Lemma 7, we obtain

$$\begin{aligned} \left( Q(e^{t})\right) ^{(j)}\Big |_{t=t_i}=0,\quad 1\le i\le n,\quad 0\le j\le \nu _i-1 \end{aligned}$$
(5)

We will show that

$$\begin{aligned} Q^{(j)}(e^{t_i})=0,\quad 1\le i\le n,\quad 0\le j\le \nu _i-1. \end{aligned}$$
(6)

To prove the relation, let us fix \(i\in \{1,\ldots ,n\}\). The assertion is obviously true when \(j=0\). Assume that the assertion holds for \(j=0,\ldots ,s-1\) with \(s\le \nu _i-1\); we will prove that it holds for \(j=s\). From (5), we can use the Faà di Bruno formula (see Theorem 2) to get

$$\begin{aligned} 0=\left( Q(e^{t})\right) ^{(s)}\Big |_{t=t_i}= \sum \dfrac{s!}{k_1!\ldots k_s!}Q^{(k)}(e^{t_i})\left( \dfrac{e^{t_i}}{1!}\right) ^{k_1}\cdots \left( \dfrac{e^{t_i}}{s!}\right) ^{k_s}, \end{aligned}$$
(7)

where \(k=k_1+\cdots +k_s\) and the sum is over all \(k_1,\ldots , k_s\) for which \(k_1+2k_2+\cdots +sk_s=s\). Note that \(k\le s\), and \(k=s\) only if \(k_1=s\), \(k_2=k_3=\cdots =k_s=0\). Hence, by the induction hypothesis, all terms in the sum in (7) vanish except the one corresponding to \(k_1=s\), \(k_2=\cdots =k_s=0\). It follows that \(Q^{(s)}(e^{t_i}) e^{ st_i}=0\), and so \(Q^{(s)}(e^{t_i})=0\). This proves the assertion.

Next, by the symmetry of P, we can write

$$\begin{aligned} Q(e^{-t})=e^{-\ell t} P\left( \alpha +\dfrac{e^t+e^{-t}}{2},-\dfrac{e^t-e^{-t}}{2\sqrt{\beta }}\right) = e^{-\ell t}\cdot P\left( \alpha +\dfrac{e^t+e^{-t}}{2},\dfrac{e^t-e^{-t}}{2\sqrt{\beta }}\right) .\end{aligned}$$

Relations (5) and Lemma 7 give

$$\begin{aligned} \left( Q(e^{-t})\right) ^{(j)}\Big |_{t=t_i} =0,\quad 1\le i\le n,\quad 0\le j\le \nu _i-1 \end{aligned}$$
(8)

We now apply the above argument again, with \(Q(e^{t})\) replaced by \(Q(e^{-t})\), to obtain

$$\begin{aligned} Q^{(j)}(e^{-t_i})=0,\quad 1\le i\le n,\quad 0\le j\le \nu _i-1. \end{aligned}$$
(9)

Moreover, since \(t=0\) corresponds to the point \(\textbf{b}=(\alpha +1,0)\), the hypothesis \(P(\textbf{b})=0\) is equivalent to \(Q(1)=0\). Combining this fact with (6) and (9), we get the factorization

$$\begin{aligned} Q(x)=(x-1)\prod _{i=1}^{n}(x-e^{t_i})^{\nu _i}(x-e^{-t_i})^{\nu _i}Q_1(x) \end{aligned}$$
(10)

for some polynomial \(Q_1\); note that the roots \(e^{\pm t_i}\) are pairwise distinct and different from 1, since the \(t_i\) are distinct and positive. Since \(\nu _1+\cdots +\nu _n=\ell \), the degree of the factor \((x-1)\prod _{i=1}^{n}(x-e^{t_i})^{\nu _i}(x-e^{-t_i})^{\nu _i}\) equals \(2\ell +1\), which is strictly greater than \(\deg Q\). This forces \(Q_1=0\), and consequently \(Q=0\). In view of the definition of Q, we can say that P vanishes on the right branch of the hyperbola \(\mathscr {C}\). In particular, the two curves \(\{(x,y)\in \mathbb {R}^2: P(x,y)=0\}\) and \(\mathscr {C}\) have more than \(2\ell \) common points. Bézout's theorem then shows that P is a multiple of \((x-\alpha )^2-\beta y^2-1\). \(\square \)

Remark 2

Lemma 6 also holds when \(\textbf{b}=(\alpha -1,0)\) and the points \(\textbf{c}_i\) lie on the left branch of the hyperbola.

4 Regular Hermite interpolation schemes

In this section, we always assume that d is a positive integer. We also set \(m=[\frac{d}{2}]+1\) and \(s_k=d-2k+2\) for \(k=1,2,\ldots ,m\). Remark that \(s_1,\ldots ,s_{m-1}\) are positive, whereas \(s_m\) is non-negative. The following simple result is used to prove the regularity of Hermite schemes.

Lemma 8

We have

$$\begin{aligned} \sum _{k=1}^m (s_k+1)=\dim \mathscr {S}_d.\end{aligned}$$

Proof

Since \(d=\big [\frac{d}{2}\big ]+\big [\frac{d+1}{2}\big ]\) and \(m=\big [\frac{d}{2}\big ]+1\), we have

$$\begin{aligned} \sum _{k=1}^m (s_k+1)= & {} \sum _{k=1}^m(d-2k+3)=m(d-m+2)\\= & {} \left( \Big [\dfrac{d}{2}\Big ]+1\right) \cdot \left( \Big [\dfrac{d+1}{2}\Big ]+1\right) =\Big [\dfrac{d+2}{2}\Big ]\cdot \Big [\dfrac{d+3}{2}\Big ]\\= & {} \dim \mathscr {S}_d. \end{aligned}$$

\(\square \)
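The identity of Lemma 8 is easy to verify numerically for small d (a quick sketch in Python; the names are ours):

```python
def dim_S(d):
    """dim S_d = [(d+2)/2] * [(d+3)/2]."""
    return ((d + 2) // 2) * ((d + 3) // 2)

for d in range(1, 30):
    m = d // 2 + 1
    s = [d - 2 * k + 2 for k in range(1, m + 1)]
    assert all(sk > 0 for sk in s[:-1]) and s[-1] >= 0
    assert sum(sk + 1 for sk in s) == dim_S(d)   # Lemma 8
```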

Theorem 3

Let \(\mathscr {C}_1, \mathscr {C}_2,\ldots ,\mathscr {C}_m\) be m distinct curves in \(\mathscr {Q}\). For \(1\le k\le m\), let \(\textbf{b}_k\) be an intersection point of \(\mathscr {C}_k\) with the horizontal axis. For \(s_k>0\), let \(\textbf{c}_{k1},\ldots ,\textbf{c}_{kn_k}\) be \(n_k\) distinct points on \(\mathscr {C}^{+}_k\) with \(n_k\le s_k\) (if \(\mathscr {C}_k\) is a hyperbola, then the points are chosen on one of its branches). Let \(\nu _{k1},\ldots ,\nu _{kn_k}\) be positive integers such that \(\nu _{k1}+\cdots +\nu _{kn_k}=s_k\). Assume that \(\{\textbf{b}_k, \textbf{c}_{ki}: i=1,\ldots ,n_k\}\cap \mathscr {C}_j=\emptyset \) for \(k>j\).

  1.

    In the case \(s_m>0\), for any suitably defined function F, there exists a unique \(P\in \mathscr {S}_d\) such that

    $$\begin{aligned} P(\textbf{b}_{k})=F(\textbf{b}_{k}),\quad k=1,\ldots ,m, \end{aligned}$$
    (11)

    and

    $$\begin{aligned} \mu (P)=\mu (F),\quad \mu \in \bigcup _{k=1}^m\bigcup _{i=1}^{n_k}Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki}). \end{aligned}$$
    (12)
  2.

    In the case \(s_m=0\), for any suitably defined function F, there exists a unique \(P\in \mathscr {S}_d\) such that

    $$\begin{aligned} P(\textbf{b}_{k})=F(\textbf{b}_{k}),\quad k=1,\ldots ,m, \end{aligned}$$
    (13)

    and

    $$\begin{aligned} \mu (P)=\mu (F),\quad \mu \in \bigcup _{k=1}^{m-1}\bigcup _{i=1}^{n_k}Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki}). \end{aligned}$$
    (14)

Here, "suitable defined function F" means that F is defined at \(\textbf{b}_k\) and is \((\nu _{ki}-1)\)-times differentiable at \(\textbf{c}_{ki}\).

Proof

We first prove the statement corresponding to \(s_m>0\). Since \(Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki})\) contains \(\nu _{ki}\) differential operators, the number of interpolation conditions in (11) and (12) is equal to

$$\begin{aligned} \sum _{k=1}^m\big (1+\sum _{i=1}^{n_k}\nu _{ki}\big )= \sum ^m_{k=1}(1+s_k)=\dim \mathscr {S}_d \end{aligned}$$
(15)

where we use Lemma 8 in the second equality. Hence, it suffices to show that if \(P\in \mathscr {S}_d\) satisfies the following interpolation conditions

$$\begin{aligned} P(\textbf{b}_{k})=0,\quad \mu (P)=0,\quad k=1,\ldots ,m,\quad \mu \in \bigcup _{k=1}^m\bigcup _{i=1}^{n_k}Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki}), \end{aligned}$$
(16)

then \(P=0\).

We take the quadratic polynomial \(q_k\) such that \(\mathscr {C}_k=\{(x,y)\in \mathbb {R}^2: q_k(x,y)=0\}\). For \(k=1\), relation (16) gives

$$\begin{aligned} P(\textbf{b}_{1})=0,\quad \mu (P)=0,\quad \mu \in \bigcup _{i=1}^{n_1}Diff (\mathscr {C}_1,\textbf{c}_{1i},\nu _{1i}). \end{aligned}$$

Since \(\nu _{11}+\cdots +\nu _{1n_1}=s_1=d\), we can use Lemmas 4–6 to get

$$\begin{aligned} P=q_1 P_1,\quad P_1\in \mathscr {S}_{d-2}. \end{aligned}$$

For \(2\le k\le m\), the remaining conditions in (16) become

$$\begin{aligned} (q_1P_1)(\textbf{b}_{k})=0,\quad \mu (q_1P_1)=0,\quad \mu \in \bigcup _{k=2}^m\bigcup _{i=1}^{n_k}Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki}). \end{aligned}$$

Since \(\{\textbf{b}_k, \textbf{c}_{ki}: i=1,\ldots ,n_k\}\cap \mathscr {C}_1=\emptyset \), we have \(q_1(\textbf{b}_k)\ne 0\) and \(q_1(\textbf{c}_{ki})\ne 0\) for \(i=1,\ldots ,n_k\). Hence, Lemma 7 implies

$$\begin{aligned} P_1(\textbf{b}_{k})=0,\quad \mu (P_1)=0,\quad 2\le k\le m,\quad \mu \in \bigcup _{k=2}^m\bigcup _{i=1}^{n_k}Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki}). \end{aligned}$$

In the same manner, we see that \(P_1\) is divisible by \(q_2\); hence we can find \(P_2\in \mathscr {S}_{d-4}\) such that \(P_1=q_2P_2\).

We continue in this fashion to obtain

$$\begin{aligned} P=\big (\prod _{k=1}^m q_k\big )\cdot P_m \end{aligned}$$
(17)

We claim that relation (17) forces \(P=0\). Indeed, if \(P\ne 0\), then \(P_m\ne 0\), and the degree of the polynomial on the right-hand side of (17) is at least \(2m>d\), contradicting \(\deg P\le d\). Hence \(P=0\), and the proof of the first statement is complete.

In the case \(s_m=0\), there is only one interpolation condition on \(\mathscr {C}_m\), namely \(P(\textbf{b}_m)=0\). Since \(s_m+1=1\), relation (15) also holds in this case. A factorization argument similar to the one above implies that

$$\begin{aligned} P=\big (\prod _{k=1}^{m-1} q_k\big )\cdot Q. \end{aligned}$$
(18)

We have \(d=2(m-1)\) since \(s_m=0\). Hence, from relation (18) we deduce that Q is a constant polynomial. Moreover, \(P(\textbf{b}_m)=0\) and \(q_k(\textbf{b}_m)\ne 0\) for \(1\le k\le m-1\), because \(\textbf{b}_m\) does not lie on \(\mathscr {C}_k\). It follows that \(Q(\textbf{b}_m)=0\), and hence \(Q=0\). This forces \(P=0\). \(\square \)

We now state two extremal cases of Theorem 3. In the first, each curve carries the maximum number of interpolation points, which gives a unisolvent set for \(\mathscr {S}_d\). In the second, each curve carries two points (the last curve may carry only one).

Corollary 1

Let \(\mathscr {C}_1, \mathscr {C}_2,\ldots ,\mathscr {C}_m\) be m distinct curves in \(\mathscr {Q}\). For \(1\le k\le m\), we take a set \(\textbf{A}_k\) of \(s_k+1\) distinct points on \(\mathscr {C}_k\), where one point is on the horizontal axis and the remaining points lie on \(\mathscr {C}^{+}_k\) (if \(\mathscr {C}_k\) is a hyperbola then the points are chosen on a single branch). We assume that \(\textbf{A}_k\cap \mathscr {C}_j=\emptyset \) for \(k>j\). Then the set

$$\begin{aligned} \textbf{A}=\bigcup _{k=1}^m \textbf{A}_k\end{aligned}$$

is unisolvent for \(\mathscr {S}_d\). In other words, for any function \(F: \textbf{A}\rightarrow \mathbb {R}\), there exists a unique \(P\in \mathscr {S}_d\) such that

$$\begin{aligned} P(\textbf{a})=F(\textbf{a}),\quad \textbf{a}\in \textbf{A}. \end{aligned}$$
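As a concrete sanity check (our addition, not part of the original text), the following Python sketch instantiates Corollary 1 with \(d=2\): the unit circle as \(\mathscr {C}_1\) carries \(s_1+1=3\) points (one on the horizontal axis, two on the upper half), and the parabola \(x=y^2\) as \(\mathscr {C}_2\) carries one point on the axis. Unisolvence amounts to nonsingularity of the collocation matrix in the monomial basis \(\{1,x,x^2,y^2\}\) of \(\mathscr {S}_2\).

```python
import math

# Monomial basis of S_2: even powers of y, total degree <= 2 (dim S_2 = 4)
def basis(x, y):
    return [1.0, x, x * x, y * y]

# C_1: unit circle (s_1 = 2, so three points, one on Ox, two on the upper half);
# C_2: parabola x = y^2 (s_2 = 0, so one point, on Ox and off C_1)
points = [(1.0, 0.0),
          (math.cos(1.0), math.sin(1.0)),
          (math.cos(2.0), math.sin(2.0)),
          (0.0, 0.0)]

M = [basis(x, y) for (x, y) in points]

def det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-14:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

print(abs(det(M)) > 1e-8)  # True: the collocation matrix is nonsingular
```

The nonzero determinant confirms that any data on these four points determine a unique interpolant in \(\mathscr {S}_2\).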

Corollary 2

Let \(\mathscr {C}_1, \mathscr {C}_2,\ldots ,\mathscr {C}_m\) be m distinct curves in \(\mathscr {Q}\). For \(1\le k\le m\), we take two points \(\textbf{b}_k\in \mathscr {C}_k\cap Ox\) and \(\textbf{c}_k\in \mathscr {C}^{+}_k\) (if \(\mathscr {C}_k\) is a hyperbola then the points are chosen on a single branch). We assume that \(\{\textbf{b}_k,\textbf{c}_k\}\cap \mathscr {C}_j=\emptyset \) for \(k>j\).

  1.

    In the case \(s_m>0\), for any suitably defined function F, there exists a unique \(P\in \mathscr {S}_d\) such that

    $$\begin{aligned} P(\textbf{b}_{k})=F(\textbf{b}_{k}),\quad k=1,\ldots ,m, \end{aligned}$$

    and

    $$\begin{aligned} \mu (P)=\mu (F),\quad \mu \in \bigcup _{k=1}^mDiff (\mathscr {C}_k,\textbf{c}_{k},s_{k}). \end{aligned}$$
  2.

    In the case \(s_m=0\), for any suitably defined function F, there exists a unique \(P\in \mathscr {S}_d\) such that

    $$\begin{aligned} P(\textbf{b}_{k})=F(\textbf{b}_{k}),\quad k=1,\ldots ,m, \end{aligned}$$

    and

    $$\begin{aligned} \mu (P)=\mu (F),\quad \mu \in \bigcup _{k=1}^{m-1}Diff (\mathscr {C}_k,\textbf{c}_{k},s_{k}). \end{aligned}$$

Note that the set of interpolation points contains exactly m points on the horizontal axis. The interpolation conditions at these points are the evaluation functionals, i.e., \(\textbf{b}_k\mapsto f(\textbf{b}_k)\). The following examples exhibit singular Hermite schemes in which derivatives at the points \(\textbf{b}_k\) appear.

Example 1

Consider the case \(d=2\) and \(m=\left[ \frac{d}{2}\right] +1=2\). We have \(s_1=2\) and \(s_2=0\). We choose two quadratic curves \(\mathscr {C}_1=\{(x,y): x^2+y^2-1=0\}\) and \(\mathscr {C}_2=\{(x,y): x-y^2=0\}\). We take \(\textbf{b}_1=(1,0)\) on \(\mathscr {C}_1\) and \(\textbf{b}_2=(0,0)\) on \(\mathscr {C}_2\). Consider the following Hermite interpolation scheme

$$\begin{aligned} F\longmapsto \dfrac{d^k}{d t^k}F(\cos t,\sin t)\Big |_{t=0},\quad k=0,1,2,\quad F\longmapsto F(\textbf{b}_2). \end{aligned}$$

Remark that the above Hermite scheme consists of 4 interpolation conditions, which matches the dimension of \(\mathscr {S}_2\). Let us choose

$$\begin{aligned} Q(x,y)=2x^2-2x+y^2.\end{aligned}$$

Direct computations show that \(Q(\textbf{b}_1)=Q(\textbf{b}_2)=0\) and

$$\begin{aligned} \dfrac{d}{d t}Q(\cos t,\sin t)\Big |_{t=0}=\dfrac{d^2}{d t^2}Q(\cos t,\sin t)\Big |_{t=0}=0.\end{aligned}$$

It follows that the above Hermite scheme is not regular for \(\mathscr {S}_2\).
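This computation is easy to confirm numerically. The sketch below (our addition; the finite-difference step size is an arbitrary choice) checks that all four functionals of the scheme annihilate Q, exploiting the closed form \(Q(\cos t,\sin t)=(1-\cos t)^2\).

```python
import math

def Q(x, y):
    # Q(cos t, sin t) = 2cos^2 t - 2cos t + sin^2 t = (1 - cos t)^2
    return 2 * x * x - 2 * x + y * y

q = lambda t: Q(math.cos(t), math.sin(t))       # restriction to the circle C_1

h = 1e-3
d0 = q(0.0)                                     # value at t = 0
d1 = (q(h) - q(-h)) / (2 * h)                   # central 1st derivative at t = 0
d2 = (q(h) - 2 * q(0.0) + q(-h)) / (h * h)      # central 2nd derivative at t = 0

assert abs(d0) < 1e-12 and abs(d1) < 1e-9 and abs(d2) < 1e-5
assert Q(0.0, 0.0) == 0.0                       # evaluation condition at b_2
```

Since the nonzero polynomial Q is annihilated by all four functionals, the scheme cannot be regular.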

Example 2

As in Example 1, we take \(d=2\), \(s_1=2\) and \(s_2=0\). Let us choose \(\mathscr {C}_1=\{(x,y): x-y^2=0\}\) and \(\mathscr {C}_2=\{(x,y): x^2+y^2-1=0\}\). Take two points \(\textbf{b}_1=(0,0)\) and \(\textbf{c}_1=(1,1)\) on \(\mathscr {C}_1\). Let \(\textbf{b}_2=(1,0)\) be a point on \(\mathscr {C}_2\). Evidently, the polynomial \(Q(x,y)=x^2-x\) belongs to \(\mathscr {S}_2\) and satisfies the following relations

$$\begin{aligned} Q(\textbf{b}_1)=Q(\textbf{c}_1)=Q(\textbf{b}_2)=0,\quad \dfrac{d}{dy} Q(y^2,y)\Big |_{y=0}=0.\end{aligned}$$

Hence, the Hermite interpolation scheme

$$\begin{aligned} F\longmapsto F(\textbf{b}_1),\quad F\longmapsto F(\textbf{c}_1),\quad F\longmapsto \dfrac{d}{d y}F(y^2,y)\Big |_{y=0},\quad F\longmapsto F(\textbf{b}_2) \end{aligned}$$

is not regular.

5 A formula for the Hermite interpolation polynomial

The aim of this section is to establish a Newton type formula for the bivariate Hermite interpolation polynomial.

Theorem 4

Under the assumptions of Theorem 3 where \(\mathscr {C}_i=\{(x,y): q_i(x,y)=0\}\), the interpolation polynomial P can be written as

$$\begin{aligned} P=P_1+\cdots +P_m,\end{aligned}$$

where

$$\begin{aligned} P_1=\mathfrak H_{\mathscr {C}_1}[\textbf{A}_1;F],\quad P_k=q_1\cdots q_{k-1}\mathfrak {H}_{\mathscr {C}_k}\left[ \textbf{A}_k; \dfrac{F-P_1-\cdots - P_{k-1}}{q_1\cdots q_{k-1}}\right] , \quad k=2,\ldots ,m,\end{aligned}$$

with

$$\begin{aligned} \textbf{A}_k=\{(\textbf{b}_k;1), (\textbf{c}_{k1};\nu _{k1}),\ldots ,(\textbf{c}_{kn_k};\nu _{kn_k})\}.\end{aligned}$$

Proof

We prove the theorem only in the case \(s_m>0\). The proof for the case \(s_m=0\) is the same, where we replace m by \(m-1\) in the third step.

Step 1. We will show that the polynomial P belongs to the space \(\mathscr {S}_d\). Since the polynomial \(\mathfrak H_{\mathscr {C}_k}[\textbf{A}_k;G]\in \mathscr {S}_{s_k}\) and \(q_i\in \mathscr {S}_2\) for \(i=1,\ldots ,m\), we have \(\deg P_k\le s_k+2(k-1)=d\). Moreover, each \(P_k\) is even with respect to the variable y. Hence, \(P_k\in \mathscr {S}_d\), and consequently \(P\in \mathscr {S}_d\). It is therefore sufficient to show that P satisfies the interpolation conditions,

$$\begin{aligned} P(\textbf{b}_{k})=F(\textbf{b}_{k}),\quad \mu (P)=\mu (F),\quad k=1,\ldots ,m,\quad \mu \in \bigcup _{j=1}^m\bigcup _{i=1}^{n_j}Diff (\mathscr {C}_j,\textbf{c}_{ji},\nu _{ji}). \end{aligned}$$
(19)

We will prove (19) in the next two steps.

Step 2. We shall check that \(P(\textbf{b}_k)=F(\textbf{b}_k)\) for \(k=1,\ldots ,m\). Since \(P_j\) contains the factor \(q_k\) for \(j>k\), \(P_j\) vanishes at \(\textbf{b}_k\) for \(j>k\). It follows that

$$\begin{aligned} P(\textbf{b}_k)=\sum _{i=1}^k P_i(\textbf{b}_k). \end{aligned}$$
(20)

On the other hand, since \(\textbf{b}_k\in \textbf{A}_k\), the interpolation property of Hermite interpolation in Lemmas 1–3 gives

$$\begin{aligned} P_k(\textbf{b}_k)= & {} q_1(\textbf{b}_k)\cdots q_{k-1}(\textbf{b}_k) \dfrac{F(\textbf{b}_k)-P_1(\textbf{b}_k)-\cdots - P_{k-1}(\textbf{b}_k)}{q_1(\textbf{b}_k)\cdots q_{k-1}(\textbf{b}_k) }\nonumber \\= & {} F(\textbf{b}_k)-P_1(\textbf{b}_k)-\cdots - P_{k-1}(\textbf{b}_k). \end{aligned}$$
(21)

Combining (20) and (21), we obtain \(P(\textbf{b}_k)=F(\textbf{b}_k).\)

Step 3. It remains to show that

$$\begin{aligned} \mu (P)=\mu (F),\quad \mu \in \bigcup _{j=1}^m\bigcup _{i=1}^{n_j}Diff (\mathscr {C}_j,\textbf{c}_{ji},\nu _{ji}). \end{aligned}$$
(22)

Let us fix \(k\in \{1,\ldots ,m\}\), \(i\in \{1,\ldots ,n_k\}\) and \(\mu ^{*}\in Diff (\mathscr {C}_k,\textbf{c}_{ki},\nu _{ki})\). It suffices to check that

$$\begin{aligned} \mu ^{*}(P)=\mu ^{*}(F). \end{aligned}$$
(23)

Since \(P_j\) contains the factor \(q_k\) for \(j>k\), we have \(P_j(x,y)=0\) for every \((x,y)\in \mathscr {C}_k\) and \(j>k\). It follows from the definition of \(\mu ^{*}\) in Sect. 2 that

$$\begin{aligned} \mu ^{*}(P_j)=0,\quad j>k. \end{aligned}$$
(24)

Next, we will prove that

$$\begin{aligned} \mu ^{*}(P_k)=\mu ^{*}(F)-\mu ^{*}(P_1)-\cdots -\mu ^{*}(P_{k-1}). \end{aligned}$$
(25)

We only prove (25) in the case where \(\mathscr {C}_k\) is a circle with \(q_k(x,y)=(x-\alpha _k)^2+y^2-R^2_k\). The proof for the parabola and the hyperbola is the same. For simplicity, we set \(\Pi _k=q_1\cdots q_{k-1}\) and \(\Sigma _k = P_1+\cdots + P_{k-1}\). Without loss of generality, we assume that

$$\begin{aligned} \mu ^{*}: G\mapsto \dfrac{d^\ell }{d t^\ell } G(\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}},\end{aligned}$$

where \(\textbf{c}_{k1}=(\alpha _k+R_k\cos t_{k1}, R_k\sin t_{k1})\) and \(0\le \ell \le \nu _{k1}-1\). Using the Leibniz rule, we can write

$$\begin{aligned}&\mu ^{*}(P_k) =\dfrac{d^\ell }{d t^\ell } \Big (\Pi _k\mathfrak {H}_{\mathscr {C}_k}\left[ \textbf{A}_k; \dfrac{F-\Sigma _k}{\Pi _k}\right] (\alpha _k+R_k\cos t, R_k\sin t)\Big )\Big |_{t=t_{k1}}\\&=\sum _{i=0}^\ell \left( {\begin{array}{c}\ell \\ i\end{array}}\right) \dfrac{d^{\ell -i}}{d t^{\ell -i}} \Pi _k (\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}} \dfrac{d^i}{d t^i}\\&\quad \Big (\mathfrak {H}_{\mathscr {C}_k}\left[ \textbf{A}_k; \dfrac{F-\Sigma _k}{\Pi _k}\right] (\alpha _k+R_k\cos t, R_k\sin t)\Big )\Big |_{t=t_{k1}}\\&= \sum _{i=0}^\ell \left( {\begin{array}{c}\ell \\ i\end{array}}\right) \dfrac{d^{\ell -i}}{d t^{\ell -i}} \Pi _k (\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}} \dfrac{d^i}{d t^i} \dfrac{F-\Sigma _k}{\Pi _k}(\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}}\\&=\dfrac{d^\ell }{d t^\ell } \Big (\Pi _k \dfrac{F-\Sigma _k}{\Pi _k}(\alpha _k+R_k\cos t, R_k\sin t)\Big )\Big |_{t=t_{k1}}\\&=\dfrac{d^\ell }{d t^\ell } \Big ((F-\Sigma _k)(\alpha _k+R_k\cos t, R_k\sin t)\Big )\Big |_{t=t_{k1}}\\&=\dfrac{d^\ell }{d t^\ell } F(\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}}-\dfrac{d^\ell }{d t^\ell } \Sigma _k(\alpha _k+R_k\cos t, R_k\sin t)\Big |_{t=t_{k1}}\\&=\mu ^{*}(F)-\mu ^{*}(\Sigma _k), \end{aligned}$$

where we use Lemma 2 in the third equality. This proves (25). Finally, combining (24) and (25), we obtain

$$\begin{aligned} \mu ^{*}(P)=\mu ^{*}(\Sigma _k)+\mu ^{*}(P_k)+\sum _{j=k+1}^m \mu ^{*}(P_j)=\mu ^{*}(F).\end{aligned}$$

\(\square \)

In Sect. 2, we proved that Hermite interpolation on the curve \(\mathscr {C}\) can be identified with a univariate Hermite interpolation,

$$\begin{aligned} \mathfrak H_{\mathscr {C}}[\textbf{A};F](x,y):=\textbf{H}[A;f](x).\end{aligned}$$

Since the univariate Hermite interpolation polynomial is continuous with respect to the interpolation points and the interpolated function (see [12, Theorem 2]), so is the Hermite interpolation polynomial on \(\mathscr {C}\). This result is an analogue of [12, Lemma 5]. Hence, we can use Theorem 4 and the method of [12, Theorem 4] to obtain a continuity property of the bivariate Hermite interpolation polynomial. Here, the interpolation points on the horizontal axis are kept fixed. The details are left to the reader.
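To make the recursion of Theorem 4 concrete, the following Python sketch (our addition) treats the Lagrange case, in which all multiplicities \(\nu _{ki}\) equal one, so only divided differences of function values are needed. The curves, the points, and the test function \(F(x,y)=e^x\cos y\) (even in y) are our own choices, matching the configuration of Corollary 1 for \(d=2\).

```python
import math

# Curves written as q_k(x, y) = y^2 - w_k(x), so that y^2 = w_k(x) on C_k
w = [lambda x: 1.0 - x * x,   # C_1: unit circle x^2 + y^2 = 1
     lambda x: x]             # C_2: parabola    x = y^2
q = [lambda x, y, k=k: y * y - w[k](x) for k in range(2)]

F = lambda x, y: math.exp(x) * math.cos(y)   # even-in-y test function

# Corollary 1 layout for d = 2: three points on C_1 (one on Ox), one on C_2
A = [[(1.0, 0.0), (math.cos(1.0), math.sin(1.0)), (math.cos(2.0), math.sin(2.0))],
     [(0.0, 0.0)]]

def newton(xs, vals):
    """Univariate Newton interpolant through (xs[i], vals[i]), distinct nodes."""
    n, coef = len(xs), list(vals)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def h(x):
        s = coef[-1]
        for i in range(n - 2, -1, -1):
            s = s * (x - xs[i]) + coef[i]
        return s
    return h

parts = []                                   # parts[k-1] plays the role of P_k
for k, Ak in enumerate(A):
    def Pi(x, y, k=k):                       # Pi = q_1 ... q_{k-1}
        p = 1.0
        for j in range(k):
            p *= q[j](x, y)
        return p
    xs = [px for (px, _) in Ak]
    vals = [(F(px, py) - sum(p(px, py) for p in parts)) / Pi(px, py)
            for (px, py) in Ak]              # divided data (F - Sigma_k)/Pi_k
    h = newton(xs, vals)
    parts.append(lambda x, y, Pi=Pi, h=h: Pi(x, y) * h(x))

P = lambda x, y: sum(p(x, y) for p in parts)  # P = P_1 + P_2

for Ak in A:
    for (px, py) in Ak:
        assert abs(P(px, py) - F(px, py)) < 1e-10
```

The assertions confirm that the recursion reproduces F at every interpolation point, exactly as Steps 2 and 3 of the proof predict.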

Example 3

Let us consider the case \(d=1\) and \(\mathscr {C}_1=\{(x,y)\in \mathbb {R}^2: x-1-\beta y^2=0\}\). We have \(m=1\) and \(s_1=1\). Let us choose \(\textbf{A}=\textbf{A}_1=\{((1,0);1),((\beta +1,1);1)\}\) and \(F(x,y)=xy^2\). From Theorem 4, we get

$$\begin{aligned} P(x,y)=P_1(x,y)=\mathfrak H_{\mathscr {C}_1}[\textbf{A}_1;F](x,y).\end{aligned}$$

On the other hand, Lemma 1 enables us to write

$$\begin{aligned} \mathfrak H_{\mathscr {C}_1}[\textbf{A}_1;F](x,y)= & {} \textbf{H}[\{(1;1), (\beta +1;1)\};g](x)\\= & {} \dfrac{g(\beta +1)-g(1)}{\beta }(x-1)+g(1)\\= & {} (\frac{1}{\beta }+1)(x-1), \end{aligned}$$

where \(g(x)=F\big (x,\sqrt{\frac{x-1}{\beta }}\big )=\frac{x(x-1)}{\beta }\). It follows that

$$\begin{aligned} P(x,y)=(\frac{1}{\beta }+1)(x-1) \end{aligned}$$
(26)

Next, we consider a family of parabolas \(\mathscr {C}_1^{(n)}=\{(x,y)\in \mathbb {R}^2: x-1-\frac{1}{n} y^2=0\}\) and unisolvent sets \(\textbf{A}^{(n)}=\textbf{A}_1^{(n)}=\{((1,0);1),((\frac{1}{n}+1,1);1)\}\). Note that \(\textbf{A}^{(n)}\) converges to \(\textbf{A}^{(*)}=\{((1,0);1),((1,1);1)\}\) as \(n\rightarrow \infty \), and \(\textbf{A}^{(*)}\) is not unisolvent for \(\mathscr {S}_1\), because the nonzero polynomial \(Q(x,y)=x-1\) vanishes at (1, 0) and (1, 1). From relation (26), the bivariate Hermite interpolation polynomial equals \((n+1)(x-1)\), which does not converge as \(n\rightarrow \infty \).
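The snippet below (our addition; the sample value \(\beta =0.5\) is an arbitrary choice) verifies relation (26) at both interpolation points and illustrates the divergence of the coefficient \(\frac{1}{\beta }+1\) along \(\beta =\frac{1}{n}\).

```python
def F(x, y):
    return x * y * y                         # the interpolated function

def P(x, y, beta):
    return (1.0 / beta + 1.0) * (x - 1.0)    # relation (26)

beta = 0.5                                   # sample parabola parameter
# P matches F at both points of A = {((1,0);1), ((beta+1,1);1)}
assert P(1.0, 0.0, beta) == F(1.0, 0.0) == 0.0
assert abs(P(beta + 1.0, 1.0, beta) - F(beta + 1.0, 1.0)) < 1e-12

# with beta = 1/n the leading coefficient equals n + 1 and blows up
for n in (10, 100, 1000):
    assert abs((1.0 / (1.0 / n) + 1.0) - (n + 1.0)) < 1e-9
```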

Example 4

Consider the case \(d=2\) and \(m=\left[ \frac{d}{2}\right] +1=2\). We have \(s_1=2\) and \(s_2=0\). We choose two quadratic curves \(\mathscr {C}_1=\{(x,y): x-y^2=0\}\) and \(\mathscr {C}_2=\{(x,y): x^2+y^2-1=0\}\). We take \(\textbf{b}_1=(0,0)\) on \(\mathscr {C}_1\) and \(\textbf{b}_2=(1,0)\) on \(\mathscr {C}_2\). We choose a point \(\textbf{c}_{1}=(y_1^2, y_1)\) on \(\mathscr {C}_1\) with \(y_{1}>0\). Let \(\textbf{A}_1=\{(\textbf{b}_1;1), (\textbf{c}_1;2)\}\) and \(\textbf{A}_2=\{(\textbf{b}_2;1)\}\). We will compute the bivariate Hermite interpolation polynomial of the function \(F(x,y)=x^2y^4+y^2\). From Theorem 4 we have

$$\begin{aligned} P=P_1+P_2\end{aligned}$$

where

$$\begin{aligned} P_1=\mathfrak H_{\mathscr {C}_1}[\textbf{A}_1;F](x,y)=\textbf{H}[\{(0;1),(y_1^2;2)\};f](x) \end{aligned}$$

with \(f(x)=F(x,\sqrt{x})=x^4+x\), and

$$\begin{aligned} P_2(x,y)=q_1(x,y)\dfrac{F(\textbf{b}_2)-P_1(\textbf{b}_2)}{q_1(\textbf{b}_2)},\quad q_1(x,y)=x-y^2.\end{aligned}$$

Using the Newton representation in (1) we have

$$\begin{aligned} P_1(x,y)=f(0)+f[0,y^2_1]x+f[0,y_1^2,y_1^2] x(x-y_1^2)=3y_1^4 x^2+(1-2y_1^6)x. \end{aligned}$$

Hence

$$\begin{aligned} P_2(x,y)=(2y_1^6-3y_1^4-1)(x-y^2). \end{aligned}$$

Combining the above computation we get

$$\begin{aligned} P(x,y)=3y_1^4 x^2-3y_1^4x-(2y_1^6-3y_1^4-1)y^2.\end{aligned}$$
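One can verify directly that this polynomial satisfies all four interpolation conditions. The sketch below (our addition; the sample node \(y_1=0.7\) and the finite-difference step are arbitrary choices) checks the two point evaluations and the two Hermite conditions along \(\mathscr {C}_1\).

```python
y1 = 0.7                      # sample node on C_1^+, giving c_1 = (y1^2, y1)
a = y1 ** 2

def F(x, y):
    return x * x * y ** 4 + y * y

def P(x, y):
    # the interpolation polynomial computed above (a = y1^2)
    return 3 * a**2 * x * x - 3 * a**2 * x - (2 * a**3 - 3 * a**2 - 1) * y * y

# evaluation conditions at b_1 = (0,0) and b_2 = (1,0)
assert abs(P(0.0, 0.0) - F(0.0, 0.0)) < 1e-12
assert abs(P(1.0, 0.0) - F(1.0, 0.0)) < 1e-12

# Hermite conditions at c_1 along the parabola x = y^2
g = lambda y: P(y * y, y) - F(y * y, y)      # interpolation error on C_1
h = 1e-6
assert abs(g(y1)) < 1e-9                     # value condition at c_1
assert abs((g(y1 + h) - g(y1 - h)) / (2 * h)) < 1e-6  # derivative condition
```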

Letting \(y_1\rightarrow 0\), we see that \(\textbf{A}_1\) converges to \(\textbf{A}_1^{*}=\{(\textbf{b}_1;3)\}\) and \(P(x,y)\) tends to \(P^{*}(x,y)=y^2\). Observe that \(P^{*}\) interpolates F at \(\textbf{A}_1^{*}\) and \(\textbf{A}_2\), i.e.,

$$\begin{aligned} \dfrac{d^k}{dy^k}P^{*}(y^2,y)\Big |_{y=0}=\dfrac{d^k}{dy^k}F(y^2,y)\Big |_{y=0},\quad k=0, 1, 2,\quad P^{*}(\textbf{b}_2)=F(\textbf{b}_2).\end{aligned}$$

However, the Hermite interpolation scheme

$$\begin{aligned} F\longmapsto \dfrac{d^k }{dy^k}F(y^2,y)\Big |_{y=0},\quad k=0,1,2,\quad F\longmapsto F(\textbf{b}_2)\end{aligned}$$

is singular for \(\mathscr {S}_2\), because the polynomial \(Q(x,y)=x^2-x+y^2\) in \(\mathscr {S}_2\) satisfies the following relations:

$$\begin{aligned} \dfrac{d^k }{dy^k}Q(y^2,y)\Big |_{y=0}=0,\quad k=0,1,2,\quad Q(\textbf{b}_2)=0.\end{aligned}$$
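These relations are immediate to check numerically, since the restriction of Q to \(\mathscr {C}_1\) is \(Q(y^2,y)=y^4\). A short sketch (our addition; the step size is an arbitrary choice):

```python
def Q(x, y):
    return x * x - x + y * y

r = lambda y: Q(y * y, y)     # restriction to C_1; r(y) = y**4 identically

h = 1e-3
assert r(0.0) == 0.0                                       # k = 0
assert abs((r(h) - r(-h)) / (2 * h)) < 1e-9                # k = 1
assert abs((r(h) - 2 * r(0.0) + r(-h)) / (h * h)) < 1e-5   # k = 2
assert Q(1.0, 0.0) == 0.0                                  # condition at b_2
```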