1 Introduction

Recent advances in radial basis function (RBF) theory have demonstrated the usefulness of RBFs for scattered data approximation in high dimensional spaces. RBFs are useful in the design of mesh-free methods [8, 18, 19], for global optimisation [11] and for computer aided design [7, 12, 14, 20, 23, 27, 28]. It has long been established that certain types of RBFs guarantee non-singular interpolation matrices provided the interpolating points are distinct [21]. There is no doubt that such guaranteed solvability is one of the convincing reasons why methods based on RBFs continue to attract much attention. As is well known, and as we shall describe, the existing solvability theory for the RBF interpolation problem depends on the symmetry of the interpolation matrix; it requires that the centres of the basis functions are the interpolating points and that the basis functions are of the same scale. In practice it is often desirable to define RBF approximation or interpolation with different shapes: different scales or even different bases. However, for RBF interpolation with different shape parameters, previous results say little on the issue of solvability. To make this clearer, the reader is invited to see how the classical proof works in the case of radial basis functions of the same scale.

Consider the radial basis function (RBF) interpolation problem in high dimensional spaces: given observations \(f_1,f_2,\ldots ,f_n\) on a set of points \(\fancyscript{X}=\{\mathbf {x}_i \in \mathbb {R}^d: i=1,\ldots ,n\}\), find an approximation of the form \(s(\mathbf {x})=\sum _{j=1}^{n} \alpha _j \phi _j(\mathbf {x})\) to an unknown function \(f(\mathbf {x})\) such that

$$\begin{aligned} s(\mathbf {x}_k)=f_k, \text { for } 1 \le k \le n, \end{aligned}$$
(1)

where \(\phi _j(\mathbf {x})=\varPhi (\mathbf {x}-\mathbf {x}_j)=\varphi (\Vert \mathbf {x}-\mathbf {x}_j \Vert )\). Equation (1) results in a linear system \(A \underline{\alpha } = \underline{f}\), where \(\underline{f}=(f_1, f_2 , \ldots , f_n)^T\) and

$$\begin{aligned} A= \left( \begin{array}{llll} \varPhi (\mathbf {x}_1 -\mathbf {x}_1) &{} \varPhi (\mathbf {x}_1-\mathbf {x}_2) &{} \cdots &{}\varPhi (\mathbf {x}_1-\mathbf {x}_n) \\ \varPhi (\mathbf {x}_2 -\mathbf {x}_1) &{} \varPhi (\mathbf {x}_2-\mathbf {x}_2) &{} \cdots &{}\varPhi (\mathbf {x}_2-\mathbf {x}_n) \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \varPhi (\mathbf {x}_n -\mathbf {x}_1) &{} \varPhi (\mathbf {x}_n-\mathbf {x}_2) &{} \cdots &{} \varPhi (\mathbf {x}_n-\mathbf {x}_n) \\ \end{array}\right) . \end{aligned}$$

If \(\varPhi \) has an integrable Fourier transform \(\hat{\varPhi }\), then \(\varPhi \) can be recovered from \(\hat{\varPhi }\) by the Fourier inversion formula [28, p. 67]

$$\begin{aligned} \varPhi (\mathbf {x})= \frac{1}{\sqrt{(2 \pi )^d}} \int _{\mathbb {R}^d} \hat{\varPhi }(\omega ) e^{i \mathbf {x}^T \omega } d\omega . \end{aligned}$$
(2)

For any \(\underline{\beta }\in \mathbb {C}^n\), consider the following quadratic form

$$\begin{aligned} \underline{\beta }^H A \underline{\beta }= \sum _{k=1}^{n}\sum _{j=1}^n \beta _j \bar{\beta }_k \varPhi (\mathbf {x}_k-\mathbf {x}_j)= \frac{1}{\sqrt{(2\pi )^d}} \sum _{k=1}^n\sum _{j=1}^n \beta _j \bar{\beta }_k \int _{\mathbb {R}^d} \hat{\varPhi }(\omega )e^{i (\mathbf {x}_k-\mathbf {x}_j)^T \omega } d\omega . \end{aligned}$$
(3)

Separating \(e^{i (\mathbf {x}_k-\mathbf {x}_j)^T \omega }\) as \( e^{i \mathbf {x}_k^T \omega } e^{-i \mathbf {x}_j ^T \omega }\), we obtain

$$\begin{aligned} \sum _{k=1}^n\sum _{j=1}^n\beta _j \bar{\beta }_k e^{i(\mathbf {x}_k-\mathbf {x}_j)^T\omega }=\sum _{j=1}^n\beta _j e^{-i\mathbf {x}_j^T \omega } \sum _{k=1}^n \bar{\beta }_k {e^{i \mathbf {x}_k^T\omega }}= \left| \sum _{j=1}^n \beta _j e^{-i \mathbf {x}_j^T \omega }\right| ^2. \end{aligned}$$
(4)

Therefore it follows that

$$\begin{aligned} \underline{\beta }^H A \underline{\beta }=\frac{1}{\sqrt{(2 \pi )^d}} \int _{\mathbb {R}^d} \hat{\varPhi }(\omega ) \left| \sum _{j=1}^n \beta _j e^{- i \mathbf {x}_j^T \omega } \right| ^2 d \omega . \end{aligned}$$
(5)

If \(\hat{\varPhi }>0\), then the interpolation matrix \(A\) is symmetric and positive definite. Functions with positive Fourier transform are called positive definite functions. It is obvious that positive definite radial basis functions \(\varphi (\Vert \cdot \Vert )\) guarantee invertible interpolation matrices. One of the most famous positive definite radial basis functions is the Gaussian \(e^{-\epsilon ^2 \Vert \mathbf {x}\Vert _2^2}\), with Fourier transform \(\frac{1}{(\sqrt{2}\epsilon )^d}e^{-\frac{\Vert \omega \Vert ^2}{4 \epsilon ^2}}\). Also included in this class are the inverse multiquadrics, the truncated power functions, and the Wu and Wendland compactly supported radial basis functions. See [30] and [28, pp. 76, 80, 128] for details.
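As a quick numerical illustration of this guaranteed solvability, the following sketch (ours, not from the references; it assumes pdist and squareform from the Matlab Statistics Toolbox) builds the Gaussian interpolation matrix on random distinct centres and confirms positive definiteness:

% A minimal sketch: the Gaussian kernel matrix on distinct centres is SPD.
d = 2; n = 50; ep = 3;             % dimension, number of points, shape parameter
X = rand(n,d);                     % distinct centres x_1,...,x_n (with probability 1)
D = squareform(pdist(X));          % pairwise distances ||x_i - x_j||
A = exp(-(ep*D).^2);               % Gaussian interpolation matrix
[~,flag] = chol(A);                % flag == 0 iff A is numerically SPD
fprintf('min eig = %.3e, chol flag = %d\n', min(eig(A)), flag);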

A natural generalization of a positive definite function is a conditionally positive definite function: a continuous function for which the quadratic form \(\underline{\beta }^H A\underline{\beta }\) is positive for every distinct point set \(\fancyscript{X}\) and every \(\underline{\beta }\in \mathbb {C}^{n}\setminus \{0\}\) satisfying \(\sum _{j=1}^n \beta _j p(\mathbf {x}_j)=0\) for all complex-valued polynomials \(p(\mathbf {x})\) of degree less than \(m\). For such conditionally positive definite functions, one can prove that a modified approximation

$$\begin{aligned} s(\mathbf {x})= \sum _{j=1}^n\alpha _j \phi _j(\mathbf {x})+\sum _{k=1}^{Q} \gamma _{k}p_k(\mathbf {x}) \end{aligned}$$
(6)

with the following lower order polynomial constraints is solvable under certain conditions:

$$\begin{aligned} \sum _{j=1}^n\alpha _jp_{k}(\mathbf {x}_j)=0, \text { for } 1 \le k \le Q, \end{aligned}$$
(7)

where \(p_{k}\), \(1\le k\le Q\), is a linearly independent basis of the space \(\pi _{m-1}(\mathbb {R}^d)\) of polynomials of degree less than \(m\). The approximation (6) together with the polynomial constraints results in a saddle point system

$$\begin{aligned} \left( \begin{array}{ll} A &{} P \\ P^T &{} 0 \end{array}\right) \left( \begin{array}{l} \underline{\alpha } \\ \underline{\gamma }\end{array}\right) = \left( \begin{array}{ll} \underline{f}\\ 0 \end{array}\right) \end{aligned}$$
(8)

If \(P\) has full rank and the quadratic form \(\underline{\alpha } ^H A \underline{\alpha } \) is positive on the null space of \(P^T\), i.e. \(\{ \underline{\alpha } : P^T\underline{\alpha } =0\}\), then the saddle point system (8) has a unique solution [28, p. 117]. Whether \(P\) has full rank still depends on the point set \(\fancyscript{X}\) for \(\pi _{m-1}(\mathbb {R}^d)\) with \(m>1\), whereas Micchelli proved that \(\underline{\alpha } ^H A \underline{\alpha } \) is always positive on the null space of \(P^T\) for all conditionally positive definite functions of order \(m\) on any data set \(\fancyscript{X}\) [21]. His proof in fact also depends on the condition that the quadratic form (5) is positive on the null space of \(P^T\), which as a consequence requires that the basis functions are of the same scale, according to (4). See [21, p. 17] for details.

If the interpolation points and the basis function centres are different, or the basis functions in (1) have different scales or are different kinds of functions, the formula (4) does not hold. An example of the latter case, using different kinds of radial basis functions, is the unsymmetric collocation method for solving partial differential equations (PDEs) [16], where the author uses basis functions of different scales, or different combinations, at each centre. Such an unsymmetric collocation method works most of the time, but singularity can still occur in some cases, because there is no theory guaranteeing that the unsymmetric collocation matrix is always invertible. For complicated real applications, an adaptive refinement scheme, placing more data where they are needed, is highly desirable to reduce the total computational cost. In this case, one also wants to use radial basis functions with different scales for the locally clustered points: if two basis function centres are too close to each other, the corresponding two columns of the system matrix \(A\) are nearly linearly dependent, and thus the matrix can be ill-conditioned. If instead the two basis functions employ different scales, or are different functions, then the conditioning may be improved [10]. The desire to use radial basis functions with different scales dates back to the 1970s [13], but few practical results for large data sets have been obtained. There are papers which discuss how to choose various shape parameters both numerically [3, 6, 9, 15, 17, 22, 32] and theoretically [1, 3]. In particular, in [1] Bozzini et al. use matrix perturbation arguments, concluding that if the scales do not vary too much across the domain, then interpolation with various scales can result in a non-singular interpolation matrix. The threshold of variation of the scales depends on an estimate of the smallest eigenvalue of an interpolation matrix with a single scale [1, Theorem 2]; such a dependence limits the utility of the method, but see the recent paper [2]. Solvability in more general cases remains to be established for general radial basis functions.

We focus on compactly supported radial basis functions [28, 30]. Sufficient conditions that guarantee a unique solution to the RBF interpolation problem with different shapes are supplied. The conditions only depend on a local convexity property of the underlying basis functions and a local geometric property of a neighbourhood of each centre; information on both is easy to obtain. Algorithms can be implemented efficiently. Although the error estimate for such a scheme is unclear at the moment, and some numerical examples show that the scheme may not guarantee good approximation quality, there is still some possibility of constructing a realistic local refinement scheme from this approach. At the moment, the scheme can be used in inaccurate models, such as difficult 3D surface reconstruction problems, where efficiency is more important and high resolution is not crucial. When it is applied to the Stanford scanning repository data sets, reasonable results are obtained.

Here we share our results so that others may find better solutions and improvements. We first introduce notation and preliminaries in Sect. 2 and then present the main results in Sect. 3. We further discuss several issues related to the main results in Sect. 4 and put algorithms and numerical results in Sects. 5 and 6 respectively. Finally we give some conclusions in Sect. 7.

2 Convexity of Compactly Supported Radial Basis Functions

Let \(f(r)\in C^2[a,b]\); if \(f^{\prime }(r) <0\) and \(f^{\prime \prime }(r)\ge 0\) on \((a,b)\), then \(f(r)\) is convex on \((a,b)\). (In this paper we regard a linear function with negative slope as convex.) This section focuses on one popular class of compactly supported radial basis functions: the Wendland functions.

A Wendland function \(\phi _{d,k}\) is obtained through the following integral operator

$$\begin{aligned} \fancyscript{I} (\phi ) (r): =\int _{r}^\infty t \phi (t)d t. \end{aligned}$$
(9)

The Wendland function \(\phi _{d,k}\) is defined as

$$\begin{aligned} \phi _{d,k}=\fancyscript{I}^{k} \phi _{\lfloor d/2\rfloor +k+1}, \end{aligned}$$
(10)

where \(\phi _{\ell }(r)=\max \{(1-r)^\ell ,0\}\) and \(\lfloor \cdot \rfloor \) is the floor operator. Here \(\phi _{d,k}(\Vert \mathbf {x}\Vert )\) is a compactly supported radial basis function and \(\phi _{d,k} \in C^{2k}(\mathbb {R}^d)\). The operator \(\fancyscript{I}\) was introduced to construct compactly supported radial basis functions [30]. An inverse-like operator \(\fancyscript{D}\) was also introduced to simplify the computation of high dimensional Fourier transforms of radial functions [30], [28, p. 121]:

$$\begin{aligned} {\fancyscript{D}}\phi (r)= -\frac{1}{r}\phi '(r) \text { for } \phi \in C^2(\mathbb {R}) \text { and } r >0. \end{aligned}$$
(11)

It is known that \(\fancyscript{D}\fancyscript{I}\phi =\phi \) and \(\fancyscript{I}\fancyscript{D}\phi =\phi \) [28, p. 121]. Applying the formula (11), we can get the following properties of functions defined in (9).

Lemma 1

If \(\phi \in L_{1}[0,\infty )\) is a non-negative function, then,

1. \(\frac{d}{dr}\fancyscript{I}^{k}\phi (r)=-r \fancyscript{I}^{k-1}\phi (r) \le 0\), for \(k\ge 1\);

2. \(\frac{d^2}{dr^2}\fancyscript{I}^{k}\phi (r)=r^2 \fancyscript{I}^{k-2}\phi (r)-\fancyscript{I}^{k-1}\phi (r)\), for \(k\ge 2\).

Proof

Since \(\phi \) is non-negative, by the definition of the operator \(\fancyscript{I}\) in (9), \(\fancyscript{I}^k\phi (r)\) is non-negative. Employing the formula (11), we get

$$\begin{aligned} -\frac{1}{r}\frac{d}{dr}\fancyscript{I}^{k}\phi (r)= \fancyscript{D}\fancyscript{I}^{k}\phi (r)=\fancyscript{I}^{k-1}\phi (r) \ge 0. \end{aligned}$$

Reformulating the equation we can obtain

$$\begin{aligned} \frac{d}{dr}\fancyscript{I}^{k}\phi (r)=-r \fancyscript{I}^{k-1}\phi (r) \le 0, \end{aligned}$$

for \(k\ge 1\). The second part holds because

$$\begin{aligned} \frac{d^2}{dr^2} \fancyscript{I}^{k}\phi (r)= \frac{d}{dr}\left( -r \fancyscript{I}^{k-1}\phi (r)\right) =r^2 \fancyscript{I}^{k-2}\phi (r)-\fancyscript{I}^{k-1}\phi (r). \end{aligned}$$

\(\square \)

The first part of Lemma 1 demonstrates that Wendland functions are non-increasing on \([0,1]\), and the second part gives a recurrence formula to compute the second derivative of a Wendland function.
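The construction (10) is easy to reproduce symbolically. The following sketch (ours, assuming the Symbolic Math Toolbox) applies the operator \(\fancyscript{I}\) of (9) to \(\phi _{\ell }\); for \(r\in [0,1]\) the integral to infinity reduces to an integral up to 1:

% A sketch of (10) for d = 3, k = 1; expected output: (1-r)^4*(4*r+1)/20,
% i.e. the Wendland function phi_{3,1} up to a constant factor.
syms r t
d = 3; k = 1;
l = floor(d/2) + k + 1;            % l = 3 here
phi = (1 - t)^l;                   % phi_l(t) on [0,1]
for j = 1:k                        % apply the operator I, k times
    phi = int(t*phi, t, r, 1);     % (I phi)(r) = int_r^1 t*phi(t) dt
    phi = subs(phi, r, t);         % rename the variable for the next pass
end
disp(simplify(subs(phi, t, r)))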

Lemma 2

If \(\phi (r)\) is a continuous and monotonically decreasing function on \([0,1]\), which satisfies \(\phi (0)>0\) and \(\phi (1)=0\), then for the function

$$\begin{aligned} f(r)=r^2\phi (r)-\int _{r}^1t \phi (t)dt, \end{aligned}$$

there exists a positive number \(\gamma \in (0,1)\) such that \(f(r)\ge 0\) on \([\gamma ,1]\).

Proof

Since \(\phi \) is a monotonically decreasing function on \([0,1]\), for any \(t\in (r,1)\) we have \(0<\phi (t)<\phi (r)\). By the Cauchy–Schwarz inequality,

$$\begin{aligned} \int _{r}^1 t \phi (t)dt&\le \left( \int _{r}^{1}t^2dt\right) ^{1/2}\left( \int _{r}^1 \phi ^2(t)dt\right) ^{1/2} \\&\le \sqrt{\frac{1-r^3}{3}}\left( \phi (r) \sqrt{1-r}\right) \le \phi (r) \sqrt{\frac{1-r^3}{3}}. \end{aligned}$$

Therefore

$$\begin{aligned} f(r)\ge \left( r^2-\sqrt{\frac{1-r^3}{3}}\right) \phi (r)=h(r) \phi (r). \end{aligned}$$
(12)

Compute \(h^{\prime }(r)=2r+\frac{\sqrt{3}r^2}{2\sqrt{1-r^3}}>0\) on \((0,1)\). On the other hand, \(h(0)=-\sqrt{1/3}<0\) and \(h(1)=1>0\). Therefore, by the intermediate value theorem for continuous functions, there must exist a number \(\gamma \in (0,1)\) such that \(h(\gamma )=0\) and \(h(r)>0\) on \((\gamma ,1)\). Thus \(f(r)\ge 0\) on \([\gamma ,1]\). \(\square \)

Theorem 1

For any Wendland function \(\phi _{d,k}\) defined above, there exists a real number \(q\in [0,1)\) such that \(\phi _{d,k}\) is convex on \([q,1]\).

Proof

If \(k=0\), then \(\phi _{d,0}(r)=(1-r)^{\lfloor d/2 \rfloor +1}\) on \([0,1]\) and

$$\begin{aligned} \phi _{d,0}^{\prime }(r)=-(\lfloor d/2 \rfloor +1)(1-r)^{\lfloor d/2 \rfloor }<0 \end{aligned}$$

on \((0,1)\). If \(d=1\), \(\phi _{1,0}^{\prime \prime }(r)=0\); otherwise

$$\begin{aligned} \phi _{d,0}^{\prime \prime }(r)=(\lfloor d/2 \rfloor +1)\lfloor d/2 \rfloor (1-r)^{\lfloor d/2 \rfloor -1}\ge 0. \end{aligned}$$

Thus \(\phi _{d,0}(r)\) is convex on \([0,1]\).

For \(k=1\), \(\phi _{d,1}(r)=\int _{r}^1t (1-t)^{\lfloor d/2 \rfloor +2}dt\),

$$\begin{aligned} \phi _{d,1}^{\prime }(r)&=-r(1-r)^{\lfloor d/2 \rfloor +2}\le 0 \text { on } (0,1), \\ \phi _{d,1}^{\prime \prime }(r)&=(1-r)^{\lfloor d/2 \rfloor +1}\left( (\lfloor d/2 \rfloor +3) r -1\right) . \end{aligned}$$

Let \(q=\frac{1}{\lfloor d/2 \rfloor +3}\); then \(\phi ^{\prime \prime }_{d,1}(r)\ge 0\) on \([q,1]\).

For \(k\ge 2\), according to Lemma 1, \(\phi _{d,k}^{\prime }(r) \le 0\) and

$$\begin{aligned} \phi _{d,k}^{\prime \prime }(r)=r^2 \fancyscript{I}^{k-2}\phi _{\ell }(r)-\fancyscript{I}^{k-1}\phi _{\ell }(r), \end{aligned}$$

where \(\ell =\lfloor d/2 \rfloor +k+1\) and \(\phi _{\ell }(r)=(1-r)^{\ell }_{+}\). Define \(\varphi (r)=\fancyscript{I}^{k-2}\phi _{\ell }(r)\); then \(\varphi (r)\) is a decreasing and non-negative function on \([0,1]\). Note that

$$\begin{aligned} \fancyscript{I}^{k-1}\phi _{\ell }(r)=\int _{r}^1t \varphi (t)dt. \end{aligned}$$

Employing Lemma 2, there exists \(\gamma \in (0,1)\) such that \(\phi _{d,k}^{\prime \prime }(r)\) is non-negative on \([\gamma ,1]\). Taking \(q=\gamma \) finishes the proof. \(\square \)

Lemma 3

For any Wendland function \(\phi _{d,k}(r)\), there exist a positive integer \(m\) and a positive constant \(c\) such that \(\phi _{d,k}(r)\le c(1-r)^m\) for \(r\in (0,1)\).

Proof

Since \(\phi _{d,k}\) vanishes at \(1\), we can write \(\phi _{d,k}(r)=(1-r)^mp(r)\) on \([0,1]\), where \(p(r)\) is a polynomial with no zeros on \([0,1)\). Let \(c=\max _{r \in [0,1]} p(r)\); since \(\phi _{d,k}\) is positive on \([0,1)\), we have \(c>0\). \(\square \)

3 Main Results

Let \( \fancyscript{N}(\mathbf {x}_j,n_j) \subset \fancyscript{X}=\{\mathbf {x}_1, \mathbf {x}_2, \ldots , \mathbf {x}_N \} \subset \mathbb {R}^d\) be the set of the \(n_j\) nearest neighbouring points to \(\mathbf {x}_j\), where \(\mathbf {x}_j \notin \fancyscript{N}(\mathbf {x}_j,n_j)\). For each \(\fancyscript{N}(\mathbf {x}_j,n_j)\), define

$$\begin{aligned} r_j=\min _{\mathbf {x}_k\in \fancyscript{N}(\mathbf {x}_j,n_j)}\Vert \mathbf {x}_k-\mathbf {x}_j \Vert \quad \text { and }\quad R_j=\max _{\mathbf {x}_k \in \fancyscript{N}(\mathbf {x}_j,n_j)}\Vert \mathbf {x}_k-\mathbf {x}_j\Vert . \end{aligned}$$
(13)

It is obvious that \(\fancyscript{N}(\mathbf {x}_j,n_j)\subset \fancyscript{B}(\mathbf {x}_j, R_j)\backslash \fancyscript{B}(\mathbf {x}_j,r_j)\), where \(\fancyscript{B}(\mathbf {x},R)\) is a Euclidean ball with center \(\mathbf {x}\) and radius \(R\). Further, define

$$\begin{aligned} q_j = \frac{r_j}{R_j}, \, \text { and } m_j=\frac{1}{n_j R_j}\sum _{\mathbf {x}_k\in \fancyscript{N}(\mathbf {x}_j,n_j)}\Vert \mathbf {x}_k-\mathbf {x}_j\Vert . \end{aligned}$$
(14)

It is easy to see that \(0<q_j\le m_j \le 1\), with equality only when all of the points in \(\fancyscript{N}(\mathbf {x}_j,n_j)\) lie on the same sphere centred at \(\mathbf {x}_j\) (including the special case \(n_j=1\)). The quantities \(q_j\) and \(m_j\) are associated with a particular geometric property of \(\fancyscript{N}(\mathbf {x}_j,n_j)\): the points in \(\fancyscript{N}(\mathbf {x}_j,n_j)\) are located in \(\fancyscript{B}(\mathbf {x}_j, R_j)\backslash \fancyscript{B}(\mathbf {x}_j,r_j)\), and can be viewed as distributed on the sphere \(\Vert \mathbf {x}_j-\mathbf {x}\Vert _2=m_jR_j\) on average. Such a local geometric property only depends on the relative distances between \(\mathbf {x}_j\) and the points in \(\fancyscript{N}(\mathbf {x}_j,n_j)\) (see Fig. 1a for an illustration).
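These local quantities are cheap to compute from a nearest neighbour search. A possible sketch (ours; knnsearch is from the Matlab Statistics Toolbox and returns distances sorted in increasing order):

% Compute r_j, R_j, q_j and m_j of (13)-(14) for all centres at once.
X = rand(200,2); nj = 8;                  % hypothetical scattered data and n_j
[~, dist] = knnsearch(X, X, 'K', nj+1);   % first column is x_j itself
dist = dist(:, 2:end);                    % distances to the n_j nearest neighbours
rj = dist(:,1);                           % r_j
Rj = dist(:,end);                         % R_j
qj = rj./Rj;                              % q_j = r_j/R_j
mj = mean(dist,2)./Rj;                    % m_j = average relative distance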

Fig. 1 a Illustrates a ‘scaled’ neighbourhood of \(\mathbf {x}_j\) with 10 neighbouring points. b Illustrates a Wendland function which is convex on \((q,1)\). One can see that \(\frac{y_m}{\phi (q)}=\frac{1-m}{1-q}\) holds, which is used in the proof of Theorem 2

Denote the radial basis functions \(\varphi (\Vert \mathbf {x}\Vert )\) with different scales \(\epsilon _j\) by \(\phi _{\epsilon _j}(\mathbf {x})=\varphi (\epsilon _j\Vert \mathbf {x}\Vert )\), for \(1\le j \le N\). The corresponding approximation to (1) is written as

$$\begin{aligned} s(\mathbf {x})=\sum _{j=1}^N \alpha _j \phi _{\epsilon _j}(\mathbf {x}) \end{aligned}$$
(15)

and the corresponding interpolation matrix on the data set \(\fancyscript{X}\) with \(\{\phi _{\epsilon _j}\}_{j=1}^N\) is denoted by \(A_{\phi _{\epsilon _j},\fancyscript{X}}\).

Theorem 2

Let \(\phi (r)\) be a univariate compactly supported radial basis function on \([0,1]\) which is convex on \([q,1]\) and satisfies \(\phi (0)=1\). Let \(\fancyscript{X}=\{\mathbf {x}_1,\mathbf {x}_2, \ldots , \mathbf {x}_N\} \) be a scattered data set in \(\mathbb {R}^d\). For a set \(\fancyscript{N}(\mathbf {x}_j,n_j)\) of \(n_j\) nearest neighbouring points, \(R_j\) is defined by (13) and \(q_j\) and \(m_j\) are defined by (14). If for every \(\mathbf {x}_j\in \fancyscript{X}\) there is a set \(\fancyscript{N}(\mathbf {x}_j,n_j)\) of \(n_j\) nearest neighbouring points which satisfies \(q_j \ge q\) and \(m_j > 1-\frac{1-q}{\phi (q)n_j}\), then the interpolation matrix \(A_{\phi _{\epsilon _j},\fancyscript{X}}=\phi ( \epsilon _j \Vert \mathbf {x}_i-\mathbf {x}_j\Vert )_{1\le i,j\le N}\) is non-singular when \(\epsilon _j \ge 1/R_j\).

Proof

For \(\mathbf {x}_k \in \fancyscript{N}(\mathbf {x}_j,n_j)\), define \(d_{kj}=\frac{\Vert \mathbf {x}_k-\mathbf {x}_j \Vert }{R_j}\). Since \(q_j=\frac{r_j}{R_j}\ge q\), for each \(k\) and \(j\) we have \(d_{kj}\ge q_j \ge q\). Suppose now \(y_{kj}\) is defined so that the points \((d_{kj},y_{kj})\) lie on the straight line which passes through \((q, \phi (q))\) and \((1, 0)\); then \(y_{kj}= \frac{1-d_{kj}}{1-q}\phi (q)\) (see Fig. 1 for an illustration). Since \(d_{kj}\in [q_j,1]\subset [q,1]\) and \(\phi \) is convex on \([q,1]\), we have \(y_{kj}\ge \phi (d_{kj})\). It follows that

$$\begin{aligned} \sum _{\mathbf {x}_k \in \fancyscript{N}(\mathbf {x}_j,n_j)} y_{kj}=\frac{\phi (q)}{1-q}\sum _{\mathbf {x}_k\in \fancyscript{N}(\mathbf {x}_j,n_j)}(1-d_{kj}) =\frac{n_j\phi (q)}{1-q}(1-m_j) < 1, \end{aligned}$$
(16)

under the stated assumption that \(m_j > 1-\frac{1-q}{\phi (q)n_j}\). This proves that when the scale parameter is taken as \(\epsilon _j=1/R_j\), then for each \(j\) we must have

$$\begin{aligned} 1-\sum _{\mathbf {x}_{k}\in \fancyscript{N}(\mathbf {x}_j,n_j)} \phi (d_{kj})=\phi (0)-\sum _{\mathbf {x}_{k}\in \fancyscript{N}(\mathbf {x}_j,n_j)}{\phi (\epsilon _j \Vert \mathbf {x}_k -\mathbf {x}_j\Vert _2)}> 0. \end{aligned}$$

That is, the interpolation matrix \(A_{\phi _{\epsilon _j},\fancyscript{X}}\) is strictly column diagonally dominant and thus non-singular. The case \(\epsilon _j \ge 1/R_j\) follows because

$$\begin{aligned} g(\epsilon ):= \phi (0)-\sum _{\mathbf {x}_{k}\in \fancyscript{N}(\mathbf {x}_j,n_j)}{\phi (\epsilon \Vert \mathbf {x}_k -\mathbf {x}_j\Vert _2)} \end{aligned}$$
(17)

is a non-decreasing function of \(\epsilon \), since \(\phi \) is non-increasing. \(\square \)

Remark 1

If \(n_j=1\), then \(q_j=m_j=1 > 1-\frac{1-q}{\phi (q)n_j}\), so Theorem 2 always holds. In this case the interpolation matrix is diagonal, and of course non-singular; however, this case is not of interest. For \(n_j=2\), the non-zero off-diagonal elements are always smaller than 1 if \(\epsilon _j =1/R_j\), even if \(\fancyscript{N}(\mathbf {x}_j,2)\) does not satisfy the condition. When \(q_j<q\), the nearest point is too close to \(\mathbf {x}_j\) and the two basis functions are more likely to be correlated with each other.

Example 1

Consider the Wendland function \(\phi _{3,1}(r)=(1-r)^4_{+}(4r+1)\); its convex interval is \([0.25,1]\). Consider a point \(\mathbf {x}_j\) on an equally spaced mesh in \(\mathbb {R}^{2}\); see Fig. 2a for an illustration. Let \(R_j=1\) for \(\fancyscript{N}(\mathbf {x}_j,n_j)\), \(n_j=5,6,7,8\). Then \(q_j=\frac{\sqrt{2}}{2}>0.25\) and \(m_j=\frac{\sqrt{2}}{4}+\frac{1}{2} \approx 0.8536\) for \(n_j=8\), and

$$\begin{aligned} 1-\frac{1-0.25}{8 \phi _{3,1}(0.25)} \approx 0.8519 <m_j. \end{aligned}$$

Therefore, on an equally spaced mesh in \(\mathbb {R}^2\), the sets \(\fancyscript{N}(\mathbf {x}_j,n_j)\) for \(n_j \le 8\) satisfy Theorem 2. This also illustrates that on an equally spaced mesh, the radius of a compactly supported basis function which satisfies Theorem 2 is proportional to the underlying grid size.
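The numbers in Example 1 are straightforward to check numerically; a small sketch (ours):

% Verify Example 1 for phi_{3,1} with q = 0.25 and n_j = 8.
phi = @(r) max(1-r,0).^4.*(4*r+1);        % Wendland phi_{3,1}
q = 0.25; nj = 8;
qj = sqrt(2)/2; mj = sqrt(2)/4 + 1/2;     % from the scaled 3-by-3 stencil, R_j = 1
bound = 1 - (1-q)/(phi(q)*nj);            % approx 0.8519
fprintf('m_j = %.4f > bound = %.4f\n', mj, bound);
% direct check of the off-diagonal column sum: 4 points at sqrt(2)/2, 4 at 1
fprintf('off-diagonal sum = %.4f < 1\n', 4*phi(sqrt(2)/2) + 4*phi(1));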

Theorem 3

Let \(\phi \) be a compactly supported Wendland radial basis function. For any set of distinct points \(\fancyscript{X}=\{ \mathbf {x}_k \in \mathbb {R}^d: k=1,2,\ldots ,n \}\), there exist scale parameters \(\epsilon _j\) such that the interpolation matrix \(A_{\phi _{\epsilon _j}, \fancyscript{X}}=\phi (\epsilon _j\Vert \mathbf {x}_i-\mathbf {x}_j \Vert _2)_{1\le i,j\le n}\) is strictly column diagonally dominant and each column has at least 3 non-zero elements.

Fig. 2 Two examples of supports of basis functions which satisfy a diagonal dominance condition. a On an equally spaced regular mesh. b On a mesh filled by isosceles triangles

Proof

For given \(\mathbf {x}_j\), let

$$\begin{aligned} \rho _1 = \min _{i\ne j} \Vert \mathbf {x}_i-\mathbf {x}_j \Vert , \quad \rho _{m+1}= \min _{\Vert \mathbf {x}_k-\mathbf {x}_j\Vert >\rho _m} \Vert \mathbf {x}_k-\mathbf {x}_j \Vert , \text { for } m\ge 1, \end{aligned}$$

and

$$\begin{aligned} N(\rho ):= \#\{\mathbf {x}_k: 0<\Vert \mathbf {x}_k -\mathbf {x}_j\Vert \le \rho \} \end{aligned}$$

be the number of points whose distance to \(\mathbf {x}_j\) is at most \(\rho \). Then for any \( \delta >0\) such that \(\rho _1+\delta <\rho _2\), we have \(N(\rho _1) =N(\rho _1+\delta )\). According to Lemma 3, we have

$$\begin{aligned} \sum _{\mathbf {x}_k \in \fancyscript{N}(\mathbf {x}_j, N(\rho ))} \phi \left( \frac{\Vert \mathbf {x}_k-\mathbf {x}_j\Vert }{\rho }\right) \le c N(\rho ) \left( 1-\frac{\rho _1}{\rho }\right) ^m . \end{aligned}$$
(18)

If \(N(\rho _1)\ge 2\), let \(\rho =\rho _1+\delta \le \rho _2\); forcing the right hand side of (18) to be less than one, we get

$$\begin{aligned} \delta \le \frac{\rho _1T}{1-T}=\delta ^{*}, \text { where } T=\left( \frac{1}{cN(\rho _1)}\right) ^{1/m}. \end{aligned}$$

Then for any \(0< \delta < \min \{\delta ^{*}, \rho _2 -\rho _1\}\), the shape parameter \(\epsilon _j =\frac{1}{\rho _1+\delta }\) makes the \(j\)th column have \( N(\rho _1)+1\) non-zero elements while remaining strictly diagonally dominant.

For the case \(N(\rho _1)=1\), let \(\rho := \rho _2+\delta <\rho _3\); then

$$\begin{aligned} \sum _{\mathbf {x}_k \in \fancyscript{N}(\mathbf {x}_j, N(\rho ))} \phi \left( \frac{\Vert \mathbf {x}_k-\mathbf {x}_j \Vert }{\rho }\right)&\le \phi \left( \frac{\rho _1}{\rho _2+\delta }\right) +c(N(\rho _2)-1) \phi \left( \frac{\rho _2}{\rho _2+\delta }\right) \\&\le \phi \left( \frac{\rho _1}{\rho _2+\delta }\right) +c (N(\rho _2)-1) \left( \frac{\delta }{\rho _2+\delta }\right) ^m \\&\le \phi \left( \frac{\rho _1}{\rho _2+\delta }\right) +c (N(\rho _2)-1) \left( \frac{\delta }{\rho _2+\delta }\right) =:f(\delta ). \end{aligned}$$

Thus \(f(0)=\phi (\frac{\rho _1}{\rho _2})<1\), and \(f(\delta )\) is a continuous, monotonically increasing function; hence \(f(\delta )< 1\) holds on \((0, \delta ^{*})\) for some \(\delta ^{*}>0\). Then for any \(0<\delta <\min \{\delta ^{*}, \rho _3-\rho _2\}\), the choice \(\epsilon _j=\frac{1}{\rho _2+\delta }\) makes the column have \(N(\rho _2)+1\) non-zero elements whilst keeping the diagonal dominance condition. \(\square \)

Remark 2

Theorem 3 indicates that the conditions of Theorem 2 on the point set \(\fancyscript{X}\) are sufficient but not necessary to guarantee a non-singular interpolation matrix with RBFs of different shapes.

Example 2

Consider a mesh which consists of isosceles triangles, as illustrated in Fig. 2b. Since \(N(\rho _1)=6\), the sets \(\fancyscript{N}(\mathbf {x}_j,n_j)\) for \(n_j\le 6\) satisfy a diagonal dominance condition, but they result in interpolation matrices with only one non-zero element in column \(j\). For \(N(\rho _2)=12\), let \(\rho _2=R_j=1\), where \(R_j\) corresponds to \(\fancyscript{N}(\mathbf {x}_j,n_j)\) for \(n_j=7\) to \(12\); then \(q_j=\frac{\sqrt{3}}{3}>0.25\) and \(1-6\,\phi _{3,1}(q_j) \approx 0.3664 >0\). Therefore, when using \(\phi _{3,1}\) with radius \(\rho _2\), a diagonal dominance condition can also be guaranteed. The largest \(n_j\) such that \(\fancyscript{N}(\mathbf {x}_j,n_j)\) satisfies Theorem 2 is 8.

Remark 3

Both Example 1 and Example 2 illustrate that the radii of basis functions which guarantee a diagonally dominant interpolation matrix are relative to the mesh size.

Finally, we note that interpolation matrices constructed via Theorems 2 and 3 have a nice property which guarantees (incomplete) LU factorizations. Matrices with this property are called H-matrices. An H-matrix is a generalization of an M-matrix. A real matrix \(A=(a_{i,j})_{1\le i,j \le n}\) with \(a_{i,j}\le 0\) for all \(i\ne j\) is an M-matrix if \(A\) is non-singular and \(A^{-1}\) is a non-negative matrix. A matrix \(A\) is an H-matrix if its comparison matrix \(\fancyscript{M}(A)\) is an M-matrix, where the comparison matrix \(\fancyscript{M}(A)=(\alpha _{i,j})_{1\le i,j\le n} \) is defined by

$$\begin{aligned} \alpha _{i,i}:=|a_{i,i}|, \text { and } \alpha _{i,j}:=-|a_{i,j}| \text { for } i\ne j , (1\le i,j \le n). \end{aligned}$$

H-matrices guarantee stable incomplete factorization preconditioners [26].
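For small dense matrices the definition can be tested directly; a sketch (ours, for illustration only, since forming an explicit inverse is not advisable for large matrices; saved as is_h_matrix.m):

function tf = is_h_matrix(A)
% IS_H_MATRIX  Test the H-matrix property via the comparison matrix M(A):
% alpha_ii = |a_ii|, alpha_ij = -|a_ij| for i ~= j; A is an H-matrix iff
% M(A) is non-singular with an entrywise non-negative inverse (an M-matrix).
    M = -abs(A);
    M(1:size(A,1)+1:end) = abs(diag(A));
    Minv = inv(M);
    tf = all(isfinite(Minv(:))) && all(Minv(:) >= -1e-12);
end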

Theorem 4

The interpolation matrices in Theorems 2 and 3 are H-matrices.

Proof

Theorems 2 and 3 guarantee that the underlying interpolation matrix is strictly diagonally dominant. The result then follows from an established result: if a matrix \(A\) is either strictly diagonally dominant or irreducibly diagonally dominant, then \(A\) is an H-matrix [25, p. 92, Theorem 3.27]. \(\square \)

4 Further Discussion

4.1 Estimation of the Lower Bound of the Convex Interval

A more accurate estimate of the threshold \(q\) for the convex interval \([q,1]\) supplies a tighter lower bound on \(m_j\). For the Wendland function \(\phi _{d,0}\) we can take \(q=0\); for \(\phi _{d,1}\), \(q= \frac{1}{\lfloor d/2 \rfloor +3}\). In both cases \(q\) is minimal. It is easy to see that the lower bound of the convex interval of \(\phi _{d,1}\) becomes smaller as \(d\) increases. This is also true for \(\phi _{d,2}\) when \(d=1,3,5\); see Fig. 3. Though neither Theorem 1 nor Lemma 2 indicates that \(\phi _{d,k}^{\prime \prime }(r)=0\) has only one root for \(k\ge 2\), this is true for the case \(k=2\) and \(d=1,3,5\). It is possible that \(\phi _{d,k}^{\prime \prime }\) has at most one zero in \((0,1)\). The estimates of the threshold \(q\) for several Wendland functions in Table 1 were computed with Mathematica.
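For a particular Wendland function the threshold can also be found with a few lines of Matlab instead of Mathematica. A sketch (ours, assuming the Symbolic Math Toolbox), for \(\phi _{3,2}\):

% Locate the zero of the second derivative of phi_{3,2} in (0,1).
syms r
phi32 = (1-r)^6*(35*r^2 + 18*r + 3);   % phi_{3,2} up to a constant factor
d2 = matlabFunction(diff(phi32, 2));   % second derivative as a function handle
q = fzero(d2, [0.01, 0.99]);           % the sign change brackets the root
fprintf('phi_{3,2} is convex on [%.4f, 1]\n', q)   % approx 0.2356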

Fig. 3 The second derivatives of the Wendland functions \(\phi _{d,k}(r)\) for \(d=1,3,5\) and \(k=1,2\). The root of \(\phi ^{\prime \prime }_{d,k}(r)\) in \((0,1)\) is exactly the tight bound for \(q\)

Table 1 Estimations of the convex interval for Wendland functions

4.2 Sharper Bound for \(m_j\) and Eigenvalue Distribution

From Fig. 1, as one might predict, the lower bound on \(m_j\) in Theorem 2, \(1-\frac{1-q}{\phi (q)n_j}\), can be far from tight in most cases. For a tighter bound one can replace \(q\) by \(q_j\) in (16), giving \(1-\frac{1-q_j}{\phi (q_j)n_j}\). One can even verify the diagonal dominance by “brute force”, evaluating \(\phi (\epsilon _j \Vert \mathbf {x}_k-\mathbf {x}_j \Vert )\) directly, but this involves more computation: every change of the support or of the size of the neighbourhood means that the function must be re-evaluated. However, even if only a part of the centres are checked according to Theorem 2, this can save a lot of computation when adaptively choosing the support.

In either case, one can guarantee that the smallest eigenvalue of the interpolation matrix is bounded away from the origin. The diagonal dominance implies that \( \mu =\Vert A_{\phi _{\epsilon _j}, \fancyscript{X}} \Vert _1 <2\). According to the classical Gerschgorin disc theorem [25, p. 16], all the eigenvalues of the interpolation matrix are located in a disc with centre \((1,0)\) and radius \(\mu -1\). The resulting upper bound on the ratio of the largest eigenvalue to the smallest is \(\frac{\mu }{2-\mu }\).
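Given an assembled matrix, these bounds take one line to evaluate; a sketch (ours), where A is assumed to be an interpolation matrix produced under the diagonal dominance condition:

mu = norm(A, 1);                       % mu < 2 under the diagonal dominance
fprintf('Re(lambda) in [%.4f, %.4f], eigenvalue ratio <= %.4f\n', ...
        2-mu, mu, mu/(2-mu));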

4.3 The Range of \(R_j\) and \(n_j\)

For local refinements, the value \(R_j\) for \(\fancyscript{N}(\mathbf {x}_j,n_j)\) can vary a lot (see Fig. 4b for an illustration). The ratio of the largest support to the smallest can be of the order of hundreds, for example for the mesh in Fig. 5b. Table 2 shows that the value of \(R_j\) can vary by up to a factor of hundreds, while the mean of \(n_j\) appears to follow a pattern.

Fig. 4 A demonstration of radial basis functions with different supports on non-uniform grids. a \(\fancyscript{N}(\mathbf {x}_j,6)\) for a distorted grid. b \(\fancyscript{N}(\mathbf {x}_j,7)\) for a FEM mesh

Fig. 5 Data sets, interpolation matrices, and eigenvalue distributions for the TriDisc problem. The blue circles in (e) and (f) are eigenvalues, and the blue dashed lines are the boundaries of the convex hull of all the eigenvalues. a TriDisc1 data set. b TriDisc2 data set. c TriDisc1 interpolation matrix. d TriDisc2 interpolation matrix. e TriDisc1 eigenvalues. f TriDisc2 eigenvalues

Table 2 Information of several test problems by Algorithm 2

4.4 Accuracy

It is known that stationary interpolation with compactly supported positive definite kernels does not provide uniform convergence, because it violates the Strang-Fix condition [29, 31]. The Strang-Fix condition is a necessary and sufficient condition for stationary interpolation with a compactly supported function to achieve uniform local \(L_p\) convergence [24, Thm 1]. (Stationary interpolation keeps the support size proportional to the mesh size; on a regular mesh this corresponds to each basis function covering a fixed stencil.) Making each basis function cover as many neighbouring points as possible does better than using a smaller support. Keeping the diagonal dominance leads to efficiency, but also to a smaller support than in the usual case, where each basis function covers 25 to 50 nearest neighbouring points. The approximation quality is therefore not high. See the example in Fig. 6.

Fig. 6 Error functions on an \(85 \times 85\) mesh in polar coordinates for the test function (f). b Uses the diagonal dominance condition for each basis centre; the other cases use \(k\) nearest neighbouring points for each centre. Here maxerr and rmserr are the infinity norm and the root-mean-square error. a 5 nearest points, b 8.5 nearest points on average, c 10 nearest points, d 16 nearest points, e 40 nearest points, f test function

5 Algorithm and Implementation

One might think that it is cumbersome to find a set of nearest neighbouring points satisfying the local geometric property described above. In fact a very brief code achieves the aim: one possible Matlab function to find the largest \(n_j\) satisfying Theorem 2 within a set of \(k\) (\(k\ge n_j\)) nearest points is given in Algorithm 1. Together with a \(k\) nearest neighbour search algorithm (e.g. knnsearch in the Matlab Statistics Toolbox), a program to generate the interpolation matrix with differently scaled RBFs can be written as in Algorithm 2.

Algorithm 1 (Matlab function that finds the largest \(n_j\) satisfying Theorem 2)

Algorithm 2 (Matlab program that assembles the interpolation matrix with differently scaled RBFs)
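One possible realisation of these two listings is sketched below (the function names alg1_largest_nj and alg2_make_matrix are ours, knnsearch is from the Matlab Statistics Toolbox, and both functions would live in one file, say mkA.m):

function nj = alg1_largest_nj(dist, q, phiq)
% Algorithm 1 (sketch): dist(1:k) are the sorted distances from x_j to its
% k nearest neighbours; return the largest n_j satisfying Theorem 2.
    k   = numel(dist);
    Rj  = dist(:);                        % candidate R_j for n_j = 1..k
    mj  = cumsum(Rj)./((1:k)'.*Rj);       % m_j for each candidate n_j
    qj  = dist(1)./Rj;                    % q_j = r_j/R_j
    RHS = 1 - (1-q)./(phiq*(1:k)');       % lower bound of Theorem 2 on m_j
    ok  = find(qj >= q & mj > RHS);       % admissible n_j
    nj  = max([ok; 1]);                   % n_j = 1 always works (Remark 1)
end

function A = alg2_make_matrix(X, k, q, phi)
% Algorithm 2 (sketch): assemble the sparse interpolation matrix with a
% per-centre scale eps_j = 1/R_j; phi is the basis, e.g. Wendland phi_{3,1}.
    N = size(X,1);
    [idx, dist] = knnsearch(X, X, 'K', k+1);   % column 1 is the point itself
    row = []; col = []; val = [];
    for j = 1:N                                % serial loop; parallelisable
        nj = alg1_largest_nj(dist(j,2:end), q, phi(q));
        Rj = dist(j, nj+1);                    % support radius for centre x_j
        r  = dist(j, 1:nj+1)/Rj;               % scaled distances, including 0
        row = [row, idx(j,1:nj+1)];
        col = [col, j*ones(1,nj+1)];           % column j uses the scale eps_j
        val = [val, phi(r)];
    end
    A = sparse(row, col, val, N, N);
end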

Note that the cumsum takes \(k\) operations, calculating mj and qj takes \(k\) and \(3k\) operations, calculating RHS takes \(4k\) operations plus \(k\) evaluations, and the find step only needs \(2k\) comparisons and \(k\) logical operations. Assuming \(k\) is less than 100, the total operation count is at most \(\fancyscript{O}(k^2)\). If an initial set of \(k\) neighbouring points is supplied for each point, then the total operation count to calculate the entries of the interpolation matrix is at most \(\fancyscript{O}(N k^2)\). Furthermore, this procedure can run independently for each point, which makes it suitable for parallel computing (by replacing the serial for loop in Algorithm 2). The setup of the kd-tree structure takes \(\fancyscript{O}(dN\log (N))\) time [28, p. 240], and finding the \(k\) nearest neighbouring points can be done in \(\fancyscript{O}((\log N+k)N) \) time [28, p. 248] by a generalized kd-tree or bd-tree search. Such a rough analysis shows that when \(N\) is large, the time for generating the interpolation matrix is shorter than that for the nearest neighbour search. A comparison of the time for knnsearch (knn) and that for checking the diagonal dominance condition is listed in Table 3.

Table 3 Timing results (in seconds) for C/C++ implementation

6 Numerical Verification

Two data sets from adaptive finite element refinement on three quarters of a disc are used as the first examples to verify the theory; the data sets are referred to as TriDisc1 (T1) and TriDisc2 (T2) respectively. We then apply the procedures to implicit curve/surface construction problems, demonstrating that the presented algorithms can solve very large scale real applications. For the 3D surface construction problem, we consider the Stanford bunny model and the Stanford dragon model from the Stanford 3D Scanning Repository. The Stanford bunny (bunny.tar.gz) comes in resolutions of 35947, 8171, 1889 and 453 points. The Stanford dragon (dragon_recon.tar.gz) is a larger model; only the three low resolution data sets, with 100250, 22998 and 5205 vertices, are used. The test problems are named Bunny453 (B4), Bunny1889 (B3), Bunny8171 (B2), Bunny35947 (B1), Dragon5205 (D3), Dragon22998 (D2) and Dragon100250 (D1) respectively. To illustrate the method for 3D surface construction with radial basis functions, a 2D parametric heart curve (HC) is also considered. In all of the problems, the Wendland function \(\phi _{3,1}=(1-r)^4_{+}(4r+1)\) is employed. A collection of the results related to our theorems is given in Table 2; more information is detailed below.

6.1 TriDisc Problem

The data set TriDisc1 is generated by the following Matlab code

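A plausible form of this code (an assumption on our part: the MATLAB PDE Toolbox adaptive mesh example on the circle sector geometry 'cirsg', i.e. three quarters of the unit disc, with the parameter 100 capping the number of triangles) is:

% Hypothetical reconstruction: adaptive FEM mesh on 3/4 of the unit disc.
[u,p,e,t] = adaptmesh('cirsg','cirsb',1,0,0,'maxt',100);
X = p';                      % mesh vertices form the data set TriDisc1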

The data set TriDisc2 is generated similarly by replacing the parameter 100 by 3,000. Figure 5 illustrates the data sets, the structure of the interpolation matrices, and the eigenvalue distribution of the two interpolation matrices.

We interpolate the following test function

$$\begin{aligned} \left( x^2+y^2\right) ^{1/3} \cos \left( \frac{2 \mathrm{atan2 } (y,x) }{3}\right) \end{aligned}$$
(19)

on the data set TriDisc1 with two approaches and investigate the error behaviour. One approach uses the diagonal dominance condition described here, which results in an interpolation matrix with 8.5 non-zero elements per column on average. In the other approach, we simply make each basis function cover its \(k\) nearest points, where \(k=5,10,16,32,40\). The error function is evaluated on \(N=85\times 85\) points in the domain. Both the infinity norm and the root-mean-square error (RMS-error) are listed in Table 4. The root-mean-square error is defined as

$$\begin{aligned} \text {RMS-error}=\sqrt{\frac{1}{N} \sum _{i=1}^N | s(\mathbf {x}_i) -f(\mathbf {x}_i) |^2}. \end{aligned}$$

For the cases in Table 4 where more than 10 nearest points are used for each basis function, the underlying interpolation matrix is not diagonally dominant and could be singular. The method which keeps the diagonal dominance performs slightly better than the case using 10 nearest points, but its approximation quality is low.

Table 4 Error for the test function with increasing support

6.2 Implicit Curve/Surface Reconstruction

Here we consider a realistic problem which allows low resolution schemes: constructing an implicit surface. An implicit surface/curve can be viewed as a zero level set of a function \(f(\mathbf {x})\), where \(f(\mathbf {x})\) satisfies

$$\begin{aligned} {\left\{ \begin{array}{ll} f(\mathbf {x})= 0 &{} \text { if } \mathbf {x}\text { on the curve/surface} ;\\ f(\mathbf {x}^{+}) >0 &{} \text{ if } \mathbf {x}^{+} \text { off the curve/surface (outside);}\\ f(\mathbf {x}^{-})<0 &{} \text { if } \mathbf {x}^{-} \text { off the curve/surface (inside).} \\ \end{array}\right. } \end{aligned}$$

Given a point set on the curve/surface and associated normal information, the artificial off-surface points \(\mathbf {x}^{+}\) and \(\mathbf {x}^{-}\) are easily obtained. Such a scheme is popular in reconstructing complicated 3D objects; see [4]. Because \(\mathbf {x}^{+}\) and \(\mathbf {x}^{-}\) are artificial, \(f(\mathbf {x}^{+})\) and \(f(\mathbf {x}^{-})\) can be any positive and negative numbers, and the supplied normal directions are usually approximate, inaccurate, or measured with some noise, a low resolution interpolant is enough and efficiency is of more concern in practice.

We first illustrate the procedure on an implicit curve construction problem and then show that the method which guarantees the diagonal dominance condition can solve large scale 3D surface reconstruction problems very efficiently with satisfactory results.

6.2.1 Illustration in 2D

Consider the following heart curve

$$\begin{aligned} {\left\{ \begin{array}{ll} x(t)=\sqrt{2}(\sin (t) -\cos (t));\\ y(t)=-\sqrt{2}(\sin (t)+\cos (t)(1+\sin (t))), \\ \end{array}\right. } \end{aligned}$$
(20)

which is modified from an example in [8, p. 257]. 84 points on the curve are sampled by setting \(\{t_i\}_{i=1}^{84}\) equally spaced in \([0,2\pi ]\). These points themselves are not equally spaced; see the small blue circles in Fig. 7a, c. The off-curve artificial points are constructed as \(\mathbf {x}^{\pm }=\mathbf {x}\pm 0.75 \delta _{\mathbf {x}} \varvec{\mathbf {n}}_{\mathbf {x}}\), where \(\varvec{\mathbf {n}}_{\mathbf {x}}\) is the normalised normal direction at \(\mathbf {x}\) and \(\delta _{\mathbf {x}}\) is the distance from \(\mathbf {x}\) to its nearest neighbour on the curve; the ratio \(0.75\) can be any scalar not far from \(1\). Figure 7a illustrates the support of each basis function, found by Algorithm 1. Figure 7b demonstrates that the eigenvalues of the interpolation matrix are located in the black circle implied by the diagonal dominance condition. The red curve in Fig. 7c and the zero level set in Fig. 7d show the reconstructed result.
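A sketch of this 2D set-up (ours; the finite difference normals are approximate, and their inside/outside orientation is not checked here):

% Sample the heart curve (20) and build off-curve points x +/- 0.75*delta*n.
t  = linspace(0, 2*pi, 85); t(end) = [];            % 84 parameter values
x  = sqrt(2)*(sin(t) - cos(t));
y  = -sqrt(2)*(sin(t) + cos(t).*(1 + sin(t)));
X  = [x', y'];
dx = gradient(x); dy = gradient(y);                 % tangents by differences
n  = [dy', -dx']./sqrt(dx'.^2 + dy'.^2);            % unit normals
[~, dst] = knnsearch(X, X, 'K', 2);                 % nearest on-curve neighbour
delta = 0.75*dst(:,2);
Xp = X + delta.*n;  Xm = X - delta.*n;              % off-curve points
f  = [zeros(84,1); ones(84,1); -ones(84,1)];        % data for [X; Xp; Xm]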

Fig. 7 Reconstruction of an implicit curve. The blue \(+\) points are artificial off-curve points constructed from the normal information. The black circles in (a) are the supports of the basis functions corresponding to the red \(+\) centres. b Demonstrates that the eigenvalues of the interpolation matrix are located in the black circle. The red curve in (c) and the zero level set in (d) show the reconstructed curve

6.2.2 3D Surface Reconstruction

The normal information is computed by the normalsply program from the package ply.tar.gz provided by Greg Turk. The artificial off-surface points are constructed as \(\mathbf {x}^{\pm }=\mathbf {x}\pm 0.5\delta _{\mathbf {x}} \varvec{\mathbf {n}}_{\mathbf {x}}\). Suppose dsites holds the points in the cloud and normals the corresponding calculated normal information. The 3D surface construction problem can then be solved by the framework in Algorithm 3, in which the function names and the linear solver are merely one possible choice; basically, it consists of four steps: preparing the data, generating the linear system, solving the linear system, and post-processing. The details of the post-processing step can be found in [33].

Algorithm 3 (Matlab framework for 3D surface reconstruction)
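One possible sketch of Algorithm 3 (ours; alg2_make_matrix refers to the Algorithm 2 sketch in Sect. 5, and the solver settings are illustrative):

% 1. prepare data: off-surface points from the cloud dsites and its normals
n = size(dsites,1);
[~, d2] = knnsearch(dsites, dsites, 'K', 2);
delta = .5*d2(:,2);                                  % as in the text
ctrs  = [dsites; dsites + delta.*normals; dsites - delta.*normals];
rhs   = [zeros(n,1); ones(n,1); -ones(n,1)];
% 2. generate the sparse interpolation matrix with per-centre scales
A = alg2_make_matrix(ctrs, 20, 0.25, @(r) max(1-r,0).^4.*(4*r+1));
% 3. solve; the system is well conditioned, so plain GMRES suffices
coef = gmres(A, rhs, 50, 1e-8, 100);
% 4. post-process: evaluate the interpolant on a grid and extract the zero
%    level set (e.g. with isosurface); details as in [33]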

6.3 Conditioning

Three ways are used to verify that the interpolation matrix is very well conditioned. First, a lower bound on the real part of the eigenvalues is calculated when checking the diagonal dominance: \(\mathrm{Re} (\lambda ) \ge \min _{j} \{1-\sum _{i\ne j}{a_{ij}}\}\). Second, we observe the convergence history of the GMRES method without preconditioning. Third, Matlab condest is used for the largest sparse matrices in this paper; it returns numbers between \(5\) and \(13\). It should be pointed out that condest(A,2) estimates the condition number to within a factor of 2, whilst condest(A,4), which gives a more accurate estimate, returns a number around \(12\).

ILU(0) preconditioning for GMRES is also considered. The performance for the Stanford bunny and Stanford dragon problems is listed in Table 5. Restarted GMRES(50) is used; all the problems terminate within 32 iterations even without any preconditioning.

Table 5 Information of several test problems

The eigenvalues for the two relatively small models are calculated. It turns out that the eigenvalue distributions of the two problems share similar features: all the eigenvalues are clustered in a very small disc with centre \((1,0)\) (see Fig. 8), and the convergence histories for GMRES without preconditioning are similar. For large problems it is not easy to calculate all of the eigenvalues of the interpolation matrix, but the convergence histories for GMRES are similar; see Fig. 8e for the largest problem in this paper. A GMRES solution approach for globally supported radial basis functions is discussed in [5].

Fig. 8 a, b The distribution of eigenvalues; the blue circles are eigenvalues and the blue polygon is the convex hull of the eigenvalues. c–e The convergence histories of the GMRES and preconditioned GMRES methods. f The time for solving the interpolation linear systems produced by Algorithm 2 (mkA) with the GMRES solver without preconditioning depends linearly on the dimension of the linear system, where \(c = 1.2168\mathrm {e}{-}05\)

6.4 Efficiency

The Matlab implementation in Algorithms 1 and 2 is inefficient for large scale problems due to the long loop in Algorithm 2. Indeed, the algorithms here do not guarantee at least 3 elements per column as described in Theorem 3. Procedures to guarantee Theorem 3 would require several if or while branches, which are inefficient in Matlab. If C/C++ is used, an algorithm based on Theorem 3 works efficiently. To illustrate the efficiency, timing results are presented in Table 3, where we report the time for knnsearch in Matlab (based on a kd-tree) to search for the 20 nearest neighbours of each point, the time for the check procedure, the time for the command sparse(row,col,val) in Matlab to generate a sparse matrix, and the time for ilu and gmres. The check procedure is a Matlab wrapper function with a C/C++ kernel which checks the diagonal dominance condition in Theorem 3 and prepares row, col and val for the sparse function. If experiments are carried out outside of the Matlab environment, the sparse function is not necessary. The results demonstrate that checking the theorem and generating the interpolation matrix take a relatively small fraction of the overall time. Details of the C/C++ implementation will be published in an ongoing software package.

Figure 9 shows the reconstruction results for the Stanford bunny model and the Stanford dragon model.

Fig. 9 Surface reconstruction from the Stanford 3D Scanning Repository

7 Conclusion

The main results establishing solvability of the equations for interpolation with radial basis functions generally employ Fourier transforms and Bochner's theorem. As such, they only apply to RBFs with identical shape parameter. In this manuscript, we provide some sufficient conditions for invertibility of the interpolation equations for Wendland functions with differing shapes. The same idea can also be applied to other compactly supported basis functions which have a fast decay property, for instance the Wu functions. The idea might be useful for the Gaussian with some modification. For the use of globally supported radial basis functions with different scales, the reader is referred to [2].

Numerical examples show that the scheme is very efficient for large scale problems, but with lower accuracy. It can be used when efficiency is of great concern and a high resolution scheme is not necessary. Examples are given which demonstrate the utility. Our motivation and further goal is to combine such a scheme with a proper local refinement scheme and multi-step strategies; thus an error estimate for such a scheme is crucial for wider applications and needs further study.