1 Introduction

Surface modeling, an important issue in computer aided geometric design (CAGD), has been widely applied in many fields such as industrial design and manufacture, atmospheric analysis, geology and medical imaging. Generally speaking, \(C^1\) smoothness is sufficient for most applications, and there are many ways to tackle this problem [2, 8, 15, 19, 20, 22, 24]. However, curvature continuity is sometimes required, which calls for \(C^2\) smoothness. Generating a \(C^2\) bivariate interpolation is a more difficult task. In [10], a bicubic spline interpolation scheme was proposed as an extension of the theory of cubic splines to two dimensions; this type of scheme has become the standard one for rectangular regions and has been studied in many works [3, 6, 7, 16, 17]. In recent years, several works have also contributed to \(C^2\) bivariate interpolation. For example, in [4], Brou and Méhauté proposed a construction of \(C^r\) bivariate rational splines over a triangulation via a finite element approach. In [5], a novel surface modeling scheme was presented based on an envelope template, and \(G^2\) or \(C^2\) composite surfaces can be obtained by sweeping the envelope template over the data points. In [9], a refinable function vector of \(C^2\)-quartic splines was introduced for generating approximating quadrilateral subdivisions, and that of \(C^2\)-quintic splines was constructed for generating a second-order Hermite interpolatory quadrilateral subdivision. In [11], two families of solutions provided by the Hermite subdivision schemes \(HD^2\) and \(HR^2\) were investigated, and a \(C^2\) interpolant on any semiregular rectangular mesh was generated from Hermite data of degree 2. In [18], two \(C^2\) shape-preserving bivariate interpolants on rectangular grids were developed by using polynomial splines. In [21], the authors presented a method based on \(C^2\) polynomial bivariate splines of degree 7 which can be used to interpolate function values at a set of arbitrarily scattered points in a planar domain. In [25], \(C^1\)- and \(C^2\)-continuous spline-interpolation surfaces were constructed on a regular triangular net with the help of polynomial basis functions. In [26], the author proved that there exists a \(C^3\) piecewise polynomial of degree 7 on the twice CT-type split of a triangle which interpolates arbitrarily given values and derivatives of orders up to three at the vertices and on the edges of the triangle.

In this paper, we are concerned with \(C^2\) bivariate rational spline interpolation with a simple and explicit mathematical representation. This kind of interpolation can be conveniently used for both practical application and theoretical analysis. In fact, in recent years, motivated by univariate rational spline interpolation, the \(C^1\) bivariate rational spline, which has a simple and explicit mathematical representation with parameters, has been studied [1, 12–14]. Since the parameters in the interpolation function can be selected according to the control constraints, constrained control of the shape becomes possible.

This paper aims to provide a \(C^2\) piecewise rational surface modeling scheme over a rectangular mesh. To solve the problem, a new approach is proposed by constructing an interpolation function with a simple and explicit mathematical representation involving the parameters \(\alpha _{i,j}\) and \(\beta _{i,j}\). The shape of the interpolating surfaces can be modified by adjusting these parameters while the interpolating data remain unchanged, and a local shape control method for the interpolating surface is developed.

This paper is arranged as follows. In Section 2, a piecewise bivariate rational spline interpolation with parameters is constructed over a rectangular mesh. In Section 3, the \(C^2\) continuity of the interpolant is proved. In Section 4, the basis of the interpolator is derived, and a boundedness property of the interpolant is obtained. Section 5 deals with the error estimates of the interpolator. Some examples are given in Section 6, which show that the interpolator gives a good approximation to the interpolated function and that the shape of the interpolating surfaces can be modified by selecting suitable parameters.

2 Interpolation

Let \(\varOmega :[a,b;c,d]\) be the plane region, and let \(\{(x_i,y_j,f_{i,j}),\,i=1,2,\cdots ,n;\,j=1,2,\cdots ,m\}\) be a given set of data points, where \(a=x_1<x_2<\cdots <x_n=b\) and \(c=y_1<y_2<\cdots <y_m=d\) are the knots and \(f_{i,j}=f(x_i,y_j)\). Let \(d_{i,j}\) and \(e_{i,j}\) denote values of the partial derivatives \(\frac{\partial f(x,y)}{\partial x}\) and \(\frac{\partial f(x,y)}{\partial y}\) at the knots \((x_i,y_j)\), respectively. Let \(h_i=x_{i+1}-x_i\) and \(l_j=y_{j+1}-y_j\), and for any point \((x,y)\in [x_i,x_{i+1};y_j,y_{j+1}]\) in the \(xy\)-plane, let \(\theta =(x-x_i)/h_i\) and \(\eta =(y-y_j)/l_j\). Denote

$$\begin{aligned} \varDelta ^{(x)}_{i,j}=\frac{f_{i+1,j}-f_{i,j}}{h_i},\,\varDelta ^{(y)}_{i,j}=\frac{f_{i,j+1}-f_{i,j}}{l_j}. \end{aligned}$$

First, for each \(y=y_j,\ j=1,2,\cdots ,m\), construct the \(x\)-direction interpolating curve, which is given by

$$\begin{aligned} P_{i,j}^{*}(x)=\frac{p_{i,j}^{*}(x)}{q_{i,j}^{*}(x)}, \,\,i=1,2,\cdots ,n-1, \end{aligned}$$
(1)

where

$$\begin{aligned} p_{i,j}^{*}(x)&= (1-\theta )^3f_{i,j}+\theta (1-\theta )^2V^*_{i,j}(x)\\&+\,\theta ^2(1-\theta )W^*_{i,j}(x)+\theta ^3f_{i+1,j},\\ q_{i,j}^{*}(x)&= (1-\theta )^3+\theta (1-\theta )\alpha _{i,j}+\theta ^3, \end{aligned}$$

with

$$\begin{aligned} V^*_{i,j}(x)&= \alpha _{i,j}f_{i,j}+h_id_{i,j}-\theta (f_{i+1,j}-f_{i,j})\\&+\,\theta (2-\theta -(1\!-\!\theta )\alpha _{i,j})(f_{i+1,j}\!-\!f_{i,j}-h_id_{i,j}),\\ W^*_{i,j}(x)&= \alpha _{i,j}f_{i+1,j}\!-\!h_id_{i+1,j}\!+\!(1-\theta )(f_{i+1,j}-f_{i,j})\\&\!+\,(1\!-\!\theta )(1+\theta \!-\!\theta \alpha _{i,j}) (f_{i,j}\!-\!f_{i+1,j}\!+\!h_id_{i+1,j}), \end{aligned}$$

and \(\alpha _{i,j}>0\). The interpolation \(P_{i,j}^{*}(x)\) defined by (1) is called the rational quintic interpolator; it satisfies

$$\begin{aligned} P^*_{i,j}(x_i)&= f_{i,j},\, P^*_{i,j}(x_{i+1})=f_{i+1,j},\, {P^*_{i,j}}^{\prime }(x_i)\\&= d_{i,j},\, {P^*_{i,j}}^{\prime }(x_{i+1})=d_{i+1,j}. \end{aligned}$$
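To make the construction concrete, the following minimal sketch (in Python; the helper name p_star and the sample data are illustrative assumptions, not part of the scheme itself) evaluates the interpolant (1) on one subinterval and checks the four interpolation conditions above numerically.

```python
# A sketch of P*_{i,j}(x) from (1); f0, f1, d0, d1 stand for f_{i,j}, f_{i+1,j},
# d_{i,j}, d_{i+1,j}, h for h_i, and theta = (x - x_i)/h_i.
def p_star(theta, f0, f1, d0, d1, h, alpha):
    V = (alpha*f0 + h*d0 - theta*(f1 - f0)
         + theta*(2 - theta - (1 - theta)*alpha)*(f1 - f0 - h*d0))
    W = (alpha*f1 - h*d1 + (1 - theta)*(f1 - f0)
         + (1 - theta)*(1 + theta - theta*alpha)*(f0 - f1 + h*d1))
    p = ((1 - theta)**3*f0 + theta*(1 - theta)**2*V
         + theta**2*(1 - theta)*W + theta**3*f1)
    q = (1 - theta)**3 + theta*(1 - theta)*alpha + theta**3
    return p / q

# Check the four interpolation conditions with arbitrary sample data.
f0, f1, d0, d1, h, alpha = 1.0, 2.5, -0.3, 0.7, 0.2, 1.4
eps = 1e-6
assert abs(p_star(0.0, f0, f1, d0, d1, h, alpha) - f0) < 1e-12
assert abs(p_star(1.0, f0, f1, d0, d1, h, alpha) - f1) < 1e-12
# dP*/dx = (1/h) dP*/dtheta, so one-sided difference quotients in theta,
# divided by h, approximate the endpoint derivatives d0 and d1.
print((p_star(eps, f0, f1, d0, d1, h, alpha) - f0) / (eps*h))        # ~ d0
print((f1 - p_star(1 - eps, f0, f1, d0, d1, h, alpha)) / (eps*h))    # ~ d1
```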

If we define

$$\begin{aligned} d_{i,j}=\displaystyle \frac{h_{i-1}\varDelta ^{(x)}_{i,j}+h_i\varDelta ^{(x)}_{i-1,j}}{h_{i-1}+h_i},i=2,3,\cdots ,n-1, \end{aligned}$$
(2)

then the interpolation function \(P^*_{i,j}(x)\) defined by (1) is \(C^2\) continuous in \([a,b]\) and satisfies

$$\begin{aligned} P^{\prime \prime }(x_i)=\frac{2}{h_{i-1}+h_i}(\varDelta ^{(x)}_{i,j}-\varDelta ^{(x)}_{i-1,j}),\,i=2,3,\cdots ,n-1. \end{aligned}$$

Remark

At the end knots \(x_1,x_n\), the derivative values are given as

$$\begin{aligned} \begin{array}{l} d_{1,j}=\displaystyle \varDelta ^{(x)}_{1,j}-\frac{h_1}{h_1+h_2}(\varDelta ^{(x)}_{2,j}-\varDelta ^{(x)}_{1,j}),\\ d_{n,j}=\displaystyle \varDelta ^{(x)}_{n-1,j}+\frac{h_{n-1}}{h_{n-1}+h_{n-2}}(\varDelta ^{(x)}_{n-1,j}-\varDelta ^{(x)}_{n-2,j}). \end{array} \end{aligned}$$
(3)
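The derivative values (2) and (3) can be computed directly from the data along one grid line \(y=y_j\); the following short sketch (Python; the helper name and sample values are illustrative assumptions) does so. As a quick check, for \(f(x)=x^2\) the averaged and extrapolated slopes reproduce \(f'(x_i)=2x_i\) exactly at every knot.

```python
# A sketch of the derivative estimates (2) and (3) along one grid line y = y_j.
def x_derivative_estimates(x, fj):
    n = len(x)
    h = [x[i + 1] - x[i] for i in range(n - 1)]
    delta = [(fj[i + 1] - fj[i]) / h[i] for i in range(n - 1)]  # Delta^{(x)}_{i,j}
    d = [0.0] * n
    for i in range(1, n - 1):                                   # interior knots, (2)
        d[i] = (h[i - 1]*delta[i] + h[i]*delta[i - 1]) / (h[i - 1] + h[i])
    d[0] = delta[0] - h[0] / (h[0] + h[1]) * (delta[1] - delta[0])               # (3)
    d[n - 1] = delta[n - 2] + h[n - 2] / (h[n - 2] + h[n - 3]) * (delta[n - 2] - delta[n - 3])
    return d

# For f(x) = x**2 the estimates equal 2*x_i exactly: [0.0, 0.4, 0.8, 1.2, 1.6].
x = [0.0, 0.2, 0.4, 0.6, 0.8]
print(x_derivative_estimates(x, [t * t for t in x]))
```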

For each pair of \((i,j),i=1,2,\cdots ,n-1\) and \(j=1,2,\cdots ,m-1\), using the \(x\)-direction interpolation \(P^*_{i,j}(x)\), define the interpolation function \(P_{i,j}(x,y)\) on \([x_i,x_{i+1};y_j,y_{j+1}]\) as follows:

$$\begin{aligned} P_{i,j}(x,y)=\frac{p_{i,j}(x,y)}{q_{i,j}(y)},\quad i=1,2,\cdots ,n-1;\ j=1,2,\cdots ,m-1, \end{aligned}$$
(4)

where

$$\begin{aligned} p_{i,j}(x,y)&= (1-\eta )^3P^*_{i,j}(x)+\eta (1-\eta )^2V_{i,j}\\&+\,\eta ^2(1-\eta )W_{i,j}+\eta ^3P^*_{i,j+1}(x),\\ q_{i,j}(y)&= (1-\eta )^3+\eta (1-\eta )\beta _{i,j}+\eta ^3, \end{aligned}$$

with

$$\begin{aligned} V_{i,j}&= \beta _{i,j}P^*_{i,j}(x)+l_j\phi _{i,j}(x)+\varphi _{i,j}(x,y),\\ W_{i,j}&= \beta _{i,j}P^*_{i,j+1}(x)-l_j\phi _{i,j+1}(x)+\psi _{i,j}(x,y),\\ \end{aligned}$$

and

$$\begin{aligned} \phi _{i,s}(x)&= (1-\theta )^3(1+4\theta +9\theta ^2)e_{i,s}\\&+\,\theta ^3(6-8\theta +3\theta ^2)e_{i+1,s},s=j,j+1,\\ \varphi _{i,j}(x,y)&= \eta (2-\eta -(1-\eta )(\beta _{i,j}+1))(P^*_{i,j+1}(x)\\&-\,P^*_{i,j}(x)-l_j\phi _{i,j}(x))\\&-\,\eta (P^*_{i,j+1}(x)-P^*_{i,j}(x)),\\ \psi _{i,j}(x,y)&= (1-\eta )(1+\eta -\eta (\beta _{i,j}+1))(P^*_{i,j}(x)\\&-\,P^*_{i,j+1}(x)+l_j\phi _{i,j+1}(x))\\&+\,(1-\eta )(P^*_{i,j+1}(x)-P^*_{i,j}(x)), \end{aligned}$$

and \(\beta _{i,j}>0\). The interpolation function \(P_{i,j}(x,y)\) defined by (4) is called a bivariate piecewise rational interpolator, which satisfies

$$\begin{aligned} P_{i,j}(x_r,y_s)&= f(x_r,y_s),\,\displaystyle \frac{\partial P_{i,j}(x_r,y_s)}{\partial x}=d_{r,s},\\ \displaystyle \frac{\partial P_{i,j}(x_r,y_s)}{\partial y}&= e_{r,s}, r=i,i+1,\ s=j,j+1. \end{aligned}$$
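For reference, the following compact sketch (Python) evaluates the patch (4). The helper names p_star, phi and patch, as well as the 2×2 layout of the corner data, are illustrative assumptions; p_star repeats the sketch given after (1) so that the block is self-contained. The assertions check the corner interpolation conditions listed above.

```python
# A sketch of the bivariate patch (4) on [x_i, x_{i+1}] x [y_j, y_{j+1}].
def p_star(t, f0, f1, d0, d1, h, a):
    # x-direction interpolant (1) in the local variable t = theta.
    V = a*f0 + h*d0 - t*(f1 - f0) + t*(2 - t - (1 - t)*a)*(f1 - f0 - h*d0)
    W = a*f1 - h*d1 + (1 - t)*(f1 - f0) + (1 - t)*(1 + t - t*a)*(f0 - f1 + h*d1)
    p = (1 - t)**3*f0 + t*(1 - t)**2*V + t**2*(1 - t)*W + t**3*f1
    return p / ((1 - t)**3 + t*(1 - t)*a + t**3)

def phi(t, e0, e1):
    # phi_{i,s}(x) from Section 2 in the local variable t = theta.
    return (1 - t)**3*(1 + 4*t + 9*t**2)*e0 + t**3*(6 - 8*t + 3*t**2)*e1

def patch(theta, eta, f, d, e, h, l, alpha_j, alpha_j1, beta):
    # f, d, e are 2x2 nested lists of corner data, indexed [r][s] with
    # r = 0 (knot i), 1 (knot i+1) and s = 0 (knot j), 1 (knot j+1).
    A = p_star(theta, f[0][0], f[1][0], d[0][0], d[1][0], h, alpha_j)   # P*_{i,j}(x)
    B = p_star(theta, f[0][1], f[1][1], d[0][1], d[1][1], h, alpha_j1)  # P*_{i,j+1}(x)
    ph_j, ph_j1 = phi(theta, e[0][0], e[1][0]), phi(theta, e[0][1], e[1][1])
    varphi = eta*(2 - eta - (1 - eta)*(beta + 1))*(B - A - l*ph_j) - eta*(B - A)
    psi = (1 - eta)*(1 + eta - eta*(beta + 1))*(A - B + l*ph_j1) + (1 - eta)*(B - A)
    V = beta*A + l*ph_j + varphi
    W = beta*B - l*ph_j1 + psi
    p = (1 - eta)**3*A + eta*(1 - eta)**2*V + eta**2*(1 - eta)*W + eta**3*B
    return p / ((1 - eta)**3 + eta*(1 - eta)*beta + eta**3)

# Quick check of the corner interpolation conditions with arbitrary sample data.
f = [[1.0, 1.5], [2.0, 0.5]]
d = [[0.1, -0.2], [0.3, 0.4]]
e = [[0.2, 0.1], [-0.1, 0.3]]
h, l, a0, a1, b = 0.2, 0.2, 0.8, 1.2, 0.6
for t, u in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    assert abs(patch(t, u, f, d, e, h, l, a0, a1, b) - f[t][u]) < 1e-12
print("corner conditions hold")
```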

3 The \(C^2\) Continuity of the Interpolant

This section deals with the \(C^2\) continuity conditions of the interpolating function \(P_{i,j}(x,y)\) defined by (4). Let the knots be equally spaced in the variable \(x\), namely, \(h_i=(b-a)/n\). It is easy to derive that the interpolation function \(P_{i,j}(x,y)\) is \(C^1\) continuous in the whole interpolating region \([x_1,x_n;y_1,y_m]\) when the parameters \(\beta _{i,j}=\)constant for each \(j\in \{1,2,\cdots ,m-1\}\) and all \(i=1,2,\cdots ,n-1\), no matter what the parameters \(\alpha _{i,j}\) might be (see [12]). Since the rational interpolation function \(P^*_{i,j}(x)\) defined by (1) is \(C^2\) continuous in \([x_1,x_n]\), it is easy to show that the bivariate interpolation function \(P_{i,j}(x,y)\) has continuous second-order partial derivatives \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x^2}\) and \(\frac{\partial ^2P_{i,j}(x,y)}{\partial y^2}\) in the interpolating region \([x_1,x_n;y_1,y_m]\), except possibly \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x^2}\) at the points \((x_i,y)\), \(i=2,3,\cdots ,n-1\), for every \(y\in [y_j,y_{j+1}]\), \(j=1,2,\cdots ,m-1\), and \(\frac{\partial ^2P_{i,j}(x,y)}{\partial y^2}\) at the points \((x,y_j)\), \(j=2,3,\cdots ,m-1\), for every \(x\in [x_i,x_{i+1}]\), \(i=1,2,\cdots ,n-1\). Thus \(P_{i,j}(x,y)\in C^2\) in the whole interpolating region \([x_1,x_n;y_1,y_m]\) provided that \(\frac{\partial ^2P_{i,j}(x_i^+,y)}{\partial x^2}=\frac{\partial ^2P_{i,j}(x_i^-,y)}{\partial x^2}\), \(\frac{\partial ^2P_{i,j}(x,y_j^+)}{\partial y^2}=\frac{\partial ^2P_{i,j}(x,y_j^-)}{\partial y^2}\), \(\frac{\partial ^2P_{i,j}(x_i^+,y)}{\partial x\partial y}=\frac{\partial ^2P_{i,j}(x_i^-,y)}{\partial x\partial y}\) and \(\frac{\partial ^2P_{i,j}(x,y_j^+)}{\partial x\partial y}=\frac{\partial ^2P_{i,j}(x,y_j^-)}{\partial x\partial y}\) hold. This leads to the following theorem.

Theorem 1

If the knots are equally spaced in the variable \(x\), namely, \(h_i=(b-a)/n\), then a sufficient condition for the interpolation function \(P_{i,j}(x,y),i=1,2,\cdots ,n-1; j=1,2,\cdots ,m-1\), to be \(C^2\) in the whole interpolating region \([x_1,x_n;y_1,y_m]\) is that the parameters \(\beta _{i,j}=\)constant for each \(j\in \{1,2,\cdots ,m-1\}\) and all \(i=1,2,\cdots ,n-1\), no matter what the parameters \(\alpha _{i,j}\) might be.

Proof

Based on the analysis above, for any pair \((i,j)\), \(1\le i\le n-1, 1\le j \le m-1\), in order to ensure the \(C^2\) continuity of the interpolation function \(P_{i,j}(x,y)\) defined by (4), we only need to prove that

$$\begin{aligned} \frac{\partial ^2P_{i,j}(x_i^+,y)}{\partial x^2}&=\frac{\partial ^2P_{i,j}(x_i^-,y)}{\partial x^2},\quad \frac{\partial ^2P_{i,j}(x,y_j^+)}{\partial y^2}=\frac{\partial ^2P_{i,j}(x,y_j^-)}{\partial y^2},\\ \frac{\partial ^2P_{i,j}(x_i^+,y)}{\partial x\partial y}&=\frac{\partial ^2P_{i,j}(x_i^-,y)}{\partial x\partial y},\quad \frac{\partial ^2P_{i,j}(x,y_j^+)}{\partial x\partial y}=\frac{\partial ^2P_{i,j}(x,y_j^-)}{\partial x\partial y}. \end{aligned}$$

From (4), it can be derived that

$$\begin{aligned} \frac{\partial ^2P_{i,j}(x,y)}{\partial x\partial y}&=\frac{1}{l_jq^2_{i,j}(y)}\Bigg [3\eta ^2(1-\eta )^2\big (1+(3-6\eta +6\eta ^2)\beta _{i,j}+2\eta (1-\eta )\beta _{i,j}^2\big )\left( \frac{d P^*_{i,j+1}(x)}{dx}-\frac{d P^*_{i,j}(x)}{dx}\right) \\&\quad +\,l_j(1-\eta )^2\big (1-4\eta +6\eta ^2-6\eta ^3+\eta (2-10\eta +14\eta ^2-9\eta ^3)\beta _{i,j}+\eta ^2(1-4\eta +3\eta ^2)\beta ^2_{i,j}\big )\frac{d \phi _{i,j}(x)}{dx}\\&\quad -\,l_j\eta ^2\big (3-10\eta +12\eta ^2-6\eta ^3+(3-12\eta +22\eta ^2-22\eta ^3+9\eta ^4)\beta _{i,j}+\eta (1-\eta )^2(2-3\eta )\beta ^2_{i,j}\big )\frac{d \phi _{i,j+1}(x)}{dx}\Bigg ]. \end{aligned}$$
(5)

Thus, we can obtain from (5) that

$$\begin{aligned} \frac{\partial ^2P_{i,j}(x,y_j^+)}{\partial x\partial y}=\frac{d \phi _{i,j}(x)}{dx},\quad \frac{\partial ^2P_{i,j}(x,y_{j+1}^-)}{\partial x\partial y}=\frac{d \phi _{i,j+1}(x)}{dx}. \end{aligned}$$

This implies that \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x\partial y}\) is continuous at the points \((x,y_j)\) \((j=2,3,\cdots ,m-1)\). Furthermore, since \(P^*_{i,j}(x)\) is \(C^1\) continuous, and \(\frac{d \phi _{i,j}(x_i^+)}{dx}=\frac{e_{i,j}}{h_i}\), \(\frac{d \phi _{i-1,j}(x_i^-)}{dx}=\frac{e_{i,j}}{h_{i-1}}\), \(\frac{d \phi _{i,j+1}(x_i^+)}{dx}=\frac{e_{i,j+1}}{h_i}\), \(\frac{d \phi _{i-1,j+1}(x_i^-)}{dx}=\frac{e_{i,j+1}}{h_{i-1}}\), it follows that \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x\partial y}\) is continuous at the points \((x_i,y)\) \((i=2,3,\cdots ,n-1)\) when \(\beta _{i-1,j}=\beta _{i,j}\) and \(h_{i-1}=h_i\). The analysis above implies that \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x\partial y}\) is continuous in the whole interpolating region \([x_1,x_n;y_1,y_m]\).

Also, we use (4) to arrive at

$$\begin{aligned} \frac{\partial ^2P_{i,j}(x,y)}{\partial x^2}&=\frac{1}{q_{i,j}(y)}\Big [(1-\eta )^3\big (1+\eta (1+2\eta )\beta _{i,j}\big )\frac{d^2P^*_{i,j}(x)}{dx^2}+\eta ^3\big (1+(3-5\eta +2\eta ^2)\beta _{i,j}\big )\frac{d^2P^*_{i,j+1}(x)}{dx^2}\\&\quad +\,l_j\eta (1-\eta )^3(1+\eta \beta _{i,j})\frac{d^2\phi _{i,j}(x)}{dx^2}-l_j\eta ^3(1-\eta )\big (1+(1-\eta )\beta _{i,j}\big )\frac{d^2\phi _{i,j+1}(x)}{dx^2}\Big ]. \end{aligned}$$
(6)

Since

$$\begin{aligned} \frac{d^2\phi _{i,s}(x_i^{+})}{dx^2}=0,\, \frac{d^2\phi _{i,s}(x_{i+1}^{-})}{dx^2}=0,\, s=j,j+1, \end{aligned}$$

and the interpolation function \(P^*_{i,j}(x)\) is \(C^2\) continuous, it is easy to see from (6) that \(\frac{\partial ^2P_{i,j}(x,y)}{\partial x^2}\) is continuous at the points \((x_i,y)\) \((i=2,3,\cdots ,n-1)\) when \(\beta _{i-1,j}=\beta _{i,j}\) and \(h_{i-1}=h_i\). The proof that \(\frac{\partial ^2P_{i,j}(x,y)}{\partial y^2}\) is continuous at the points \((x,y_j)\) is similar. This completes the proof. \(\square \)
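The two facts used in the proof, namely the endpoint values of \(\frac{d\phi _{i,s}(x)}{dx}\) and the vanishing of \(\frac{d^2\phi _{i,s}(x)}{dx^2}\) at the knots, can also be verified numerically; the following short sketch (Python; names and sample values are illustrative assumptions) checks them by finite differences.

```python
# phi_{i,s}(x) written in the local variable t = (x - x_i)/h_i.
def phi(t, e0, e1):
    return (1 - t)**3*(1 + 4*t + 9*t**2)*e0 + t**3*(6 - 8*t + 3*t**2)*e1

e0, e1, h, eps = 0.7, -1.3, 0.25, 1e-4

def dphi_dx(t):
    # chain rule: d/dx = (1/h) d/dt; central difference in t
    return (phi(t + eps, e0, e1) - phi(t - eps, e0, e1)) / (2*eps*h)

def d2phi_dx2(t):
    return (phi(t + eps, e0, e1) - 2*phi(t, e0, e1) + phi(t - eps, e0, e1)) / (eps*eps*h*h)

print(dphi_dx(0.0), e0 / h)   # both ~ e_{i,s}/h_i at the left knot
print(dphi_dx(1.0), e1 / h)   # ~ e_{i+1,s}/h_i at the right knot
print(d2phi_dx2(0.0), d2phi_dx2(1.0))   # both ~ 0, up to finite-difference error
```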

4 Basis of the Interpolant

For the interpolation defined by (1), it is easy to see that \(P^*_{i,j}(x)\) can be rewritten as

$$\begin{aligned} P^*_{i,j}(x)&= \omega _{0,0}(\theta )f_{i,j}+\omega _{1,0}(\theta )f_{i+1,j}\\&+\,\omega _{0,1}(\theta )h_id_{i,j}+\omega _{1,1}(\theta )h_id_{i+1,j}, \end{aligned}$$

where

$$\begin{aligned}&\omega _{0,0}(\theta )=\displaystyle \frac{(1-\theta )^2[1+\theta (1+\theta -2\theta ^2)(\alpha _{i,j}-1)]}{(1-\theta )^3+\theta ^3+\theta (1-\theta )\alpha _{i,j}},\\&\omega _{1,0}(\theta )=\displaystyle \frac{\theta ^2[1+\theta (3-5\theta +2\theta ^2)(\alpha _{i,j}-1)]}{(1-\theta )^3+\theta ^3+\theta (1-\theta )\alpha _{i,j}},\\&\omega _{0,1}(\theta )=\displaystyle \frac{\theta (1-\theta )^3(1-\theta +\theta \alpha _{i,j})}{(1-\theta )^3+\theta ^3+\theta (1-\theta )\alpha _{i,j}},\\&\omega _{1,1}(\theta )=\displaystyle -\,\frac{\theta ^3(1-\theta )[\theta +(1-\theta )\alpha _{i,j}]}{(1-\theta )^3+\theta ^3+\theta (1-\theta )\alpha _{i,j}}. \end{aligned}$$

The set \(\{\omega _{i,j}(\theta ),\, i,j=0,1\}\) is called the basis of the interpolation (1). It is obvious that when \(d_{i,j}=\frac{\partial f (x_i,y_j)}{\partial x}\) and \(\alpha _{i,j}\rightarrow +\infty \), the functions \(\omega _{i,j}(\theta )\) tend to the well-known basis of standard cubic Hermite interpolation. That is to say, in this limiting case, the interpolant \(P^*_{i,j}(x)\) defined by (1) approaches the standard cubic Hermite interpolant.
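Indeed, letting \(\alpha _{i,j}\rightarrow +\infty \) in the expressions above and keeping only the dominant terms in the numerators and the denominator (using the factorizations \(1+\theta -2\theta ^2=(1-\theta )(1+2\theta )\) and \(3-5\theta +2\theta ^2=(1-\theta )(3-2\theta )\)) gives

$$\begin{aligned} \omega _{0,0}(\theta )\rightarrow (1-\theta )^2(1+2\theta ),\quad&\omega _{1,0}(\theta )\rightarrow \theta ^2(3-2\theta ),\\ \omega _{0,1}(\theta )\rightarrow \theta (1-\theta )^2,\quad&\omega _{1,1}(\theta )\rightarrow -\theta ^2(1-\theta ), \end{aligned}$$

which are precisely the cubic Hermite basis functions, the last two being the coefficients of \(h_id_{i,j}\) and \(h_id_{i+1,j}\), respectively.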

Similarly, from (1) and (4), the interpolation function \(P_{i,j}(x,y)\) can be written as follows:

$$\begin{aligned} P_{i,j}(x,y)&= \sum _{r=i}^{i+1}\sum _{s=j}^{j+1}[a_{r,s}(\theta ,\eta )f_{r,s}\nonumber \\&+\,b_{r,s}(\theta ,\eta )h_id_{r,s}+c_{r,s}(\theta ,\eta )l_je_{r,s}], \end{aligned}$$
(7)

where

$$\begin{aligned}\begin{array}{l} a_{i,j}(\theta ,\eta )\\ \quad \!=\!\displaystyle \frac{(1\!-\!\theta )^2(1\!-\!\eta )^3(1\!-\!\theta \!-\!\theta ^2\!+\!2\theta ^3 \!+\!\theta (1\!+\!\theta -2\theta ^2)\alpha _{i,j}) (1\!+\!\eta (1\!+\!2\eta )\beta _{i,j})}{((1-\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j}\!+\!\theta ^3) ((1-\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\ a_{i,j+1}(\theta ,\eta )\\ \quad \!=\!\displaystyle \frac{(1\!-\!\theta )^2\eta ^3(1\!-\!\theta \!-\!\theta ^2\!+\!2\theta ^3 \!+\!\theta (1\!+\!\theta \!-\!2\theta ^2)\alpha _{i,j+1}) (1\!+\!(3-5\eta \!+\!2\eta ^2)\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1-\theta )\alpha _{i,j+1}\!+\!\theta ^3) ((1-\eta )^3\!+\!\eta (1-\eta )\beta _{i,j}\!+\!\eta ^3)},\\ a_{i+1,j}(\theta ,\eta )\\ \quad \!=\! \displaystyle \frac{\theta ^2(1\!-\!\eta )^3(1-3\theta \!+\!5\theta ^2\!-\!2\theta ^3 \!+\!\theta (3-5\theta \!+\!2\theta ^2)\alpha _{i,j}) (1\!+\!\eta (1\!+\!2\eta )\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j}\!+\!\theta ^3) ((1-\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\ a_{i+1,j+1}(\theta ,\eta )\\ \quad \!=\! \displaystyle \frac{\theta ^2\eta ^3(1\!-\!3\theta \!+\!5\theta ^2\!-\!2\theta ^3 \!+\!\theta (3\!-\!5\theta \!+\!2\theta ^2)\alpha _{i,j+1}) (1\!+\!(3-5\eta \!+\!2\eta ^2)\beta _{i,j})}{((1-\theta )^3\!+\!\theta (1-\theta )\alpha _{i,j+1}\!+\!\theta ^3) ((1-\eta )^3\!+\!\eta (1-\eta )\beta _{i,j}\!+\!\eta ^3)}, \end{array} \end{aligned}$$
$$\begin{aligned}&b_{i,j}(\theta ,\eta )\\&\quad =\displaystyle \frac{\theta (1\!-\!\theta )^3(1\!-\!\eta )^3(1\!-\!\theta \!+\!\theta \alpha _{i,j}) (1\!+\!\eta (1\!+\!2\eta )\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j}\!+\!\theta ^3) ((1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\&b_{i,j+1}(\theta ,\eta )\\&\quad \!=\!\displaystyle \frac{\theta (1\!-\!\theta )^3\eta ^3(1\!-\!\theta \!+\!\theta \alpha _{i,j\!+\!1}) (1\!+\!(3\!-\!5\eta \!+\!2\eta ^2)\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j+1}\!+\!\theta ^3) ((1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\&b_{i+1,j}(\theta ,\eta )\\&\quad =-\,\displaystyle \frac{\theta ^3(1\!-\!\theta )(1\!-\!\eta )^3(\theta \!+\!(1\!-\!\theta )\alpha _{i,j}) (1\!+\!\eta (1\!+\!2\eta )\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j}\!+\!\theta ^3) ((1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\&b_{i+1,j+1}(\theta ,\eta )\\&\quad =-\displaystyle \frac{\theta ^3(1\!-\!\theta )\eta ^3(\theta \!+\!(1\!-\!\theta )\alpha _{i,j\!+\!1}) (1\!+\!(3\!-\!5\eta \!+\!2\eta ^2)\beta _{i,j})}{((1\!-\!\theta )^3\!+\!\theta (1\!-\!\theta )\alpha _{i,j+1}\!+\!\theta ^3) ((1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3)},\\&c_{i,j}(\theta ,\eta ) =\displaystyle \frac{(1\!-\!\theta )^3\eta (1\!-\!\eta )^3(1\!+\!4\theta \!+\!9\theta ^2) (1\!+\!\eta \beta _{i,j})}{ (1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3},\\&c_{i,j+1}(\theta ,\eta ) =-\,\displaystyle \frac{(1\!-\!\theta )^3\eta ^3(1\!-\!\eta )(1\!+\!4\theta \!+\!9\theta ^2) (1\!+\!(1\!-\!\eta )\beta _{i,j})}{ (1-\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3},\\&c_{i+1,j}(\theta ,\eta ) =\displaystyle \frac{\theta ^3\eta (1\!-\!\eta )^3(6\!-\!8\theta \!+\!3\theta ^2) (1\!+\!\eta \beta _{i,j})}{ (1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3},\\&c_{i+1,j+1}(\theta ,\eta )=-\,\displaystyle \frac{\theta ^3\eta ^3(1\!-\!\eta )(6\!-\!8\theta \!+\!3\theta ^2) (1\!+\!(1\!-\!\eta )\beta _{i,j})}{ (1\!-\!\eta )^3\!+\!\eta (1\!-\!\eta )\beta _{i,j}\!+\!\eta ^3}. \end{aligned}$$

The terms \(a_{r,s}(\theta ,\eta ), b_{r,s}(\theta ,\eta ), c_{r,s}(\theta ,\eta ), r= i,i+1,\ s=j,j+1\), are called the basis of the interpolant defined by (4); they satisfy

$$\begin{aligned}&a_{i,j}(\theta ,\eta )+a_{i,j+1}(\theta ,\eta )+a_{i+1,j}(\theta ,\eta )+a_{i+1,j+1}(\theta ,\eta )=1,\\&b_{i,j}(\theta ,\eta )+b_{i,j+1}(\theta ,\eta )-b_{i+1,j}(\theta ,\eta )-b_{i+1,j+1}(\theta ,\eta )=\theta (1-\theta ),\\&c_{i,j}(\theta ,\eta )-c_{i,j+1}(\theta ,\eta )+c_{i+1,j}(\theta ,\eta )-c_{i+1,j+1}(\theta ,\eta )\\&\quad =\frac{\eta (1-\eta )\big (1-2\eta +2\eta ^2+\eta (1-\eta )\beta _{i,j}\big )\big (1+\theta -10\theta ^3+15\theta ^4-6\theta ^5\big )}{(1-\eta )^3+\eta (1-\eta )\beta _{i,j}+\eta ^3}. \end{aligned}$$
(8)

Denote

$$\begin{aligned}\begin{array}{l} M=\max \{|f_{r,s}|,r=i,i+1;s=j,j+1\},\\ Q_1=\max \{h_i|d_{r,s}|,r=i,i+1;s=j,j+1\},\\ Q_2=\max \{l_j|e_{r,s}|,r=i,i+1;s=j,j+1\}. \end{array} \end{aligned}$$

For the given data, the piecewise bivariate interpolation function \(P_{i,j}(x,y)\) defined by (4) satisfies the following boundedness theorem.

Theorem 2

Let \(P_{i,j}(x, y)\) be the interpolation function over \([x_i,x_{i+1};y_j,y_{j+1}]\) defined by (4). No matter what positive values the parameters \(\alpha _{i,s}\) and \(\beta _{r,j}\) take, the values of \(P_{i,j}(x, y)\) in \([x_i,x_{i+1};y_j,y_{j+1}]\) satisfy

$$\begin{aligned} |P_{i,j}(x, y)|\le M+\frac{1}{4}Q_1+0.573375Q_2. \end{aligned}$$

Proof

From (7) and (8), noting that \(a_{r,s}(\theta ,\eta )\ge 0\), that \(b_{i,j}(\theta ,\eta ),b_{i,j+1}(\theta ,\eta )\ge 0\) while \(b_{i+1,j}(\theta ,\eta ),b_{i+1,j+1}(\theta ,\eta )\le 0\), and that \(c_{i,j}(\theta ,\eta ),c_{i+1,j}(\theta ,\eta )\ge 0\) while \(c_{i,j+1}(\theta ,\eta ),c_{i+1,j+1}(\theta ,\eta )\le 0\), it is easy to derive that

$$\begin{aligned} |P_{i,j}(x, y)|\!&\le \! \displaystyle M\sum _{r=i}^{i+1}\sum _{s=j}^{j+1}|a_{r,s}(\theta ,\eta )|\!+\!Q_1\sum _{r=i}^{i+1}\sum _{s=j}^{j+1} |b_{r,s}(\theta ,\eta )|\\&+\,Q_2\sum _{r=i}^{i+1}\sum _{s=j}^{j+1}|c_{r,s}(\theta ,\eta )|\\&\le \displaystyle M+\theta (1-\theta )Q_1+Q_2\sum _{r=i}^{i+1}\sum _{s=j}^{j+1}|c_{r,s}(\theta ,\eta )|\\&\le \displaystyle M+\frac{1}{4}Q_1+Q_2\sum _{r=i}^{i+1}\sum _{s=j}^{j+1}|c_{r,s}(\theta ,\eta )|. \end{aligned}$$

Since

$$\begin{aligned}\begin{array}{l} \displaystyle \sum \limits _{r=i}^{i+1}\sum \limits _{s=j}^{j+1}|c_{r,s}(\theta ,\eta )|\\ \le \displaystyle \frac{\eta (1-\eta )(1-2\eta +2\eta ^2+\eta (1-\eta )\beta _{i,j}) (1+\theta -10\theta ^3+15\theta ^4-6\theta ^5)}{ (1-\eta )^3+\eta (1-\eta )\beta _{i,j}+\eta ^3}\\ \le \displaystyle (1+\theta -10\theta ^3+15\theta ^4-6\theta ^5)\frac{\eta (1-3\eta +4\eta ^2-2\eta ^3)}{1-3\eta +3\eta ^2}, \end{array} \end{aligned}$$

and

$$\begin{aligned}\begin{array}{l} \displaystyle \max \limits _{\theta \in [0,1]}(1+\theta -10\theta ^3+15\theta ^4-6\theta ^5)=1.14675,\\ \displaystyle \max \limits _{\eta \in [0,1]}\frac{\eta (1-3\eta +4\eta ^2-2\eta ^3)}{1-3\eta +3\eta ^2}=\frac{1}{2}, \end{array} \end{aligned}$$

thus, the proof is completed. \(\square \)
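The two maxima used in the last step can be confirmed numerically; the following short sketch (Python; purely illustrative) evaluates both expressions on a fine grid.

```python
# Numerical check of the two maxima used in the proof of Theorem 2.
import numpy as np

theta = np.linspace(0.0, 1.0, 200001)
g = 1 + theta - 10*theta**3 + 15*theta**4 - 6*theta**5
print(g.max())   # ~ 1.14675, attained near theta ~ 0.24

eta = np.linspace(0.0, 1.0, 200001)
# the denominator 1 - 3*eta + 3*eta**2 is bounded below by 1/4 on [0, 1]
r = eta*(1 - 3*eta + 4*eta**2 - 2*eta**3) / (1 - 3*eta + 3*eta**2)
print(r.max())   # ~ 0.5, attained at eta = 0.5
```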

5 Error Estimates of the Interpolation

Note that the interpolator defined by (4) is local; without loss of generality, it suffices to consider the interpolating region \([x_i,x_{i+1};y_j,y_{j+1}]\) in order to derive its error estimates. Let \(f(x,y)\in C^2\) be the interpolated function, and \(P_{i,j}(x,y)\) be the interpolation function defined by (4) over \([x_i,x_{i+1};y_j,y_{j+1}]\).

Denoting

$$\begin{aligned} \Bigg \Vert \frac{\partial f}{\partial y}\Bigg \Vert =\max _{(x,y)\in D}\Bigg | \frac{\partial f(x,y)}{\partial y}\Bigg |,\,\, \Bigg \Vert \frac{\partial P}{\partial y}\Bigg \Vert =\max _{(x,y)\in D}\Bigg | \frac{\partial P_{i,j}(x,y)}{\partial y}\Bigg |, \end{aligned}$$

where \(D=[x_i,x_{i+1};y_j,y_{j+1}]\), the Taylor expansion and the Peano kernel theorem [23] give the following:

$$\begin{aligned} |f(x,y)-P_{i,j}(x,y)|&\le |f(x,y)-f(x,y_j)|+|P_{i,j}(x,y_j)-P_{i,j}(x,y)|+|f(x,y_j)-P_{i,j}(x,y_j)|\\&\le l_j\Big (\Big \Vert \frac{\partial f}{\partial y}\Big \Vert +\Big \Vert \frac{\partial P}{\partial y}\Big \Vert \Big )+\Big |\int _{x_i}^{x_{i+1}}\frac{\partial ^2 f(\tau ,y_j)}{\partial x^2}R_{x}[(x-\tau )_{+}]d\tau \Big |\\&\le l_j\Big (\Big \Vert \frac{\partial f}{\partial y}\Big \Vert +\Big \Vert \frac{\partial P}{\partial y}\Big \Vert \Big )+\Big \Vert \frac{\partial ^2 f(x,y_j)}{\partial x^2}\Big \Vert \int _{x_i}^{x_{i+1}}\big | R_{x}[(x-\tau )_{+}]\big | d \tau , \end{aligned}$$
(9)

where \(\Vert \frac{\partial ^2 f(x,y_j)}{\partial x^2}\Vert =\max _{x\in [x_i,x_{i+1}]}|\frac{\partial ^2 f(x,y_j)}{\partial x^2}|\), and

$$\begin{aligned}&R_x[(x-\tau )_{+}]\\&=\left\{ \begin{array}{l} (x\!-\!\tau )\!-\!a_{i+1,j}(\theta ,0)(x_{i+1}\!-\!\tau )\!-\!b_{i+1,j}(\theta ,0)h_i,\, x_i\!<\!\tau \!<\!x;\\ \!-\,\!a_{i+1,j}(\theta ,0)(x_{i+1}\!-\!\tau )\!-\!b_{i+1,j}(\theta ,0)h_i,\, x<\tau <x_{i+1}, \end{array} \right. \\&=\left\{ \begin{array}{l} r(\tau ),\ \ \ \ \ \ \ \ x_i<\tau <x; \\ s(\tau ),\ \ \ \ \ \ \ \ x<\tau <x_{i+1}. \end{array} \right. \end{aligned}$$

Thus, by simple integral calculation, it can be derived that

$$\begin{aligned} \int _{x_i}^{x_{i+1}}| R_{x}[(x-\tau )_{+}]| d \tau =h_i^2B(\theta ,\alpha _{i,j}), \end{aligned}$$
(10)

where

$$\begin{aligned} B(\theta ,\alpha _{i,j})=\frac{\theta ^2(1-\theta )^2\big (1+2\theta (1-\theta )(\alpha _{i,j}-1)\big )^2}{\big (1+\theta (3-5\theta +2\theta ^2)(\alpha _{i,j}-1)\big )\big (1+\theta (1+\theta -2\theta ^2)(\alpha _{i,j}-1)\big )}. \end{aligned}$$
(11)

For the fixed \(\alpha _{i,j}\), let

$$\begin{aligned} B_{i,j}^{(x)}=\max _{\theta \in [0,1]}B(\theta ,\alpha _{i,j}). \end{aligned}$$
(12)

This leads to the following theorem.

Theorem 3

Let \(f(x, y)\in C^2\) be the interpolated function, and \(P_{i,j}(x,y)\) be its interpolator defined by (4) in \([x_i,x_{i+1};y_j,y_{j+1}]\). Whatever the positive values of the parameters \(\alpha _{i,s},\beta _{r,j}\) might be, the error of the interpolation satisfies

$$\begin{aligned} |f(x,y)-P_{i,j}(x,y)|\le l_j\Big (\Big \Vert \frac{\partial f}{\partial y}\Big \Vert +\Big \Vert \frac{\partial P}{\partial y}\Big \Vert \Big )+h_i^2 \Big \Vert \frac{\partial ^2 f(x,y_j)}{\partial x^2}\Big \Vert B_{i,j}^{(x)}, \end{aligned}$$

where \(B_{i,j}^{(x)}\) is defined by (12).

Similarly, denoting \(\Vert \frac{\partial ^2 f(x,y_{j+1})}{\partial x^2}\Vert =\max _{x\in [x_i,x_{i+1}]}\) \(|\frac{\partial ^2 f(x,y_{j+1})}{\partial x^2}|\), then the following theorem holds.

Theorem 4

Let \(f(x, y)\in C^2\) be the interpolated function, and \(P_{i,j}(x,y)\) be its interpolation function defined by (4) in \([x_i,x_{i+1};y_j,y_{j+1}]\). Whatever the positive values of the parameters \(\alpha _{i,s},\beta _{r,j}\) might be, the error of the interpolation satisfies

$$\begin{aligned} |f(x,y)-P_{i,j}(x,y)|&\le l_j\Big (\Big \Vert \frac{\partial f}{\partial y}\Big \Vert +\Big \Vert \frac{\partial P}{\partial y}\Big \Vert \Big )\\&+\,h_i^2 \Vert \frac{\partial ^2 f(x,y_{j+1})}{\partial x^2}\Vert B_{i,j+1}^{(x)}, \end{aligned}$$

where \(B_{i,j+1}^{(x)}=\max _{\theta \in [0,1]}B(\theta ,\alpha _{i,j+1})\), and \(B(\theta ,\alpha _{i,j})\) is defined by (11).

Furthermore, for \(B_{i,s}^{(x)}\), we can conclude the following theorem.

Theorem 5

For any positive parameters \(\alpha _{i,s},s=j,j+1\), \(B_{i,s}^{(x)}\) are bounded, and

$$\begin{aligned} \frac{1}{16}\le B_{i,s}^{(x)}\le \frac{3}{16}. \end{aligned}$$
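The bounds in Theorem 5 can be checked numerically for sampled parameter values; the following sketch (Python; illustrative only) maximizes \(B(\theta ,\alpha _{i,s})\) from (11) over a fine \(\theta \)-grid for several positive values of the parameter and tests the inequality above.

```python
# Illustrative check of Theorem 5: for several positive alpha, the maximum of
# B(theta, alpha) from (11) over theta stays within [1/16, 3/16].
import numpy as np

def B(theta, alpha):
    a = alpha - 1.0
    num = theta**2 * (1 - theta)**2 * (1 + 2*theta*(1 - theta)*a)**2
    den = ((1 + theta*(3 - 5*theta + 2*theta**2)*a)
           * (1 + theta*(1 + theta - 2*theta**2)*a))
    return num / den

theta = np.linspace(0.0, 1.0, 20001)
for alpha in [0.05, 0.4, 1.0, 2.0, 10.0, 1000.0]:
    Bmax = B(theta, alpha).max()
    print(alpha, Bmax, 1/16 - 1e-12 <= Bmax <= 3/16 + 1e-12)
```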

6 Numerical Examples

Since there are three shape parameters in the bivariate rational spline interpolant defined by (4), the interpolation function changes as the parameters vary while the interpolating data remain unchanged. Thus, the shape of the interpolating surface can be modified by selecting suitable shape parameters according to the control requirements. Moreover, the interpolator gives a good approximation to the interpolated function. In this section, some examples are given to show how well the interpolator defined by (4) approximates a function and to illustrate that the shape of the interpolating surface can be modified through the free shape parameters.

Example 1

Let the interpolated function be \(f(x,y)=\cos (x^2+y)\), \((x,y)\in [0,0.8;0,0.8]\), and let \(h_i=l_j=0.2\), so that \(x_i=0.2(i-1)\), \(y_j=0.2(j-1)\), \(i,j=1,2,3,4,5\). Also let \(\alpha _{i,j}=0.3+0.2i+0.1j\) and \(\beta _{i,j}=0.6+0.1j\). The partial derivative values \(d_{i,j}\) at the knots \((x_i,y_j)\) \((i,j=1,2,3,4,5)\) are computed using (2) and (3). The partial derivative values \(e_{i,j}\) at the knots \((x_i,y_j)\) \((i,j=1,2,3,4,5)\) are given by:

$$\begin{aligned} e_{i,j}&= \displaystyle \frac{l_{j-1}\varDelta ^{(y)}_{i,j}+l_j\varDelta ^{(y)}_{i,j-1}}{l_{j-1}+l_j},j=2,3,\cdots ,m-1,\nonumber \\ e_{i,1}&= \displaystyle \varDelta ^{(y)}_{i,1}-\frac{l_1}{l_1+l_2}(\varDelta ^{(y)}_{i,2}-\varDelta ^{(y)}_{i,1}),\nonumber \\ e_{i,m}&= \displaystyle \varDelta ^{(y)}_{i,m-1}+\frac{l_{m-1}}{l_{m-1}+l_{m-2}}(\varDelta ^{(y)}_{i,m-1}-\varDelta ^{(y)}_{i,m-2}). \end{aligned}$$
(13)

Figure 1 shows the graph of the interpolated function \(f(x,y)\). Figure 2 shows the graph of the interpolation function \(P(x,y)\) defined by (4). Figure 3 shows the surface of the error \(f(x,y)-P(x,y)\). From Fig. 3, it is easy to see that the interpolator defined by (4) gives a good approximation to the interpolated function.

Fig. 1

Graph of surface \(f(x,y)\)

Fig. 2

Graph of surface \(P(x,y)\)

Fig. 3

Graph of surface \(f(x,y)-P(x,y)\)

Example 2

Let \(\varOmega :[0,1.5;0,1.5]\) be the plane region, and let the interpolation data be given in Table 1. The interpolation function \(P_{i,j}(x,y)\) defined by (4) can be constructed in \([0,1.5;0,1.5]\) for given positive parameters \(\alpha _{i,j},\alpha _{i,j+1}\) and \(\beta _{i,j}\). In order to show that the shape of the interpolating surface can be modified by selecting suitable parameters according to the control requirements, we consider value control of the interpolating surface. Assume \(\alpha _{i,j}=\alpha _{i,j+1}\) and \(\beta _{i,j}=\)constant for each \(j\in \{1,2,3\}\) and all \(i=1,2,3\); then the interpolant \(P_{i,j}(x,y)\) defined by (4) is \(C^2\) in \([0,1.5;0,1.5]\). Since the interpolant is local, without loss of generality, we only consider the subinterval \([0.5,1;0.5,1]\).

Table 1 Set of the interpolating data

Let \(\alpha _{i,j}=\frac{2}{5},\beta _{i,j}=\frac{3}{5}\). The partial derivative values \(d_{i,j}\) and \(e_{i,j}\) at the knots \((x_i,y_j)\) are computed using (2) and (13), respectively. For the given interpolation data, denote by \(P_1(x,y)\) the interpolation function defined over \([0.5,1;0.5,1]\). Figure 4 shows the graph of the bivariate rational interpolating surface \(P_1(x,y)\). It is easy to compute that \(P_1(0.75,0.75)=\frac{111}{64}=1.734375\). If the practical design requires \(P(0.75,0.75)=1.7\), then \(\beta _{i,j}=\frac{1}{9}\) and \(\alpha _{i,j}=1\) can be obtained. Denote this interpolant by \(P_2(x,y)\). Figure 5 shows the graph of the surface \(P_2(x,y)\).

Fig. 4

Graph of the interpolating surface \(P_1(x,y)\)

Fig. 5

Graph of the interpolating surface \(P_2(x,y)\)

Furthermore, if the practical design requires \(P(0.75,0.75)=1.75\), then \(\alpha _{i,j}=1\) and \(\beta _{i,j}=1\) can be derived. Denote this interpolant by \(P_3(x,y)\). Figure 6 shows the graph of the surface \(P_3(x,y)\).

Fig. 6

Graph of the interpolating surface \(P_3(x,y)\)

Each interpolant of the family of \(C^2\) bivariate rational spline interpolants defined by (4) is identified uniquely by the values of the shape parameters \(\alpha _{i,j}\) and \(\beta _{i,j}\). For different shape parameters, Figs. 4, 5 and 6 show some minor changes in the shape of the surfaces. This means that shape modification of the interpolating surface can be achieved by selecting suitable shape parameters according to the needs of the practical design.

7 Concluding Remarks

In many practical situations, an interpolation surface of class \(C^1\) or \(C^2\) is required. However, generating a \(C^2\) smooth surface is a difficult task, since it usually requires up to second-order partial derivative values of the interpolated function. Some methods for constructing \(C^2\) smooth surfaces have been given as mentioned above; most of them are polynomial methods. The NURBS method is the most popular technology in modern surface modeling; however, preset weights are needed to generate a \(C^2\) rational surface. Also, the NURBS approach can be used to modify the local shape of the surfaces by adjusting control points or the corresponding weights, where the given points play the role of the control points.

In this paper, a new approach is proposed to construct a \(C^2\) piecewise bivariate rational spline interpolation over a rectangular mesh based only on the values of the interpolated function. The bivariate interpolant has an explicit mathematical representation. Generally, when the interpolating data are given, the shape of the interpolating surface is fixed because of the uniqueness of the interpolation function. More importantly, since this interpolant contains three positive parameters \(\alpha _{i,j},\alpha _{i,j+1},\beta _{i,j}\), the shape of the interpolating surface can be modified by selecting suitable parameters for the unchanged interpolating data according to the control requirements, and the numerical examples illustrate this.

For each patch of the interpolating surface, the value of the interpolation function depends on the local interpolating data. Since the interpolation function has convenient basis functions, error estimate formulas for the interpolator are worked out in Theorems 3 and 4. Theorem 5 shows that the interpolation is stable with respect to the parameters. Also, the numerical examples show that the interpolator gives a good approximation to the interpolated function.