1 Introduction

Particle methods, such as the smoothed particle hydrodynamics (SPH) [10, 18, 19] and moving particle semi-implicit (MPS) methods [15, 16, 29], are numerical methods for solving partial differential equations that are based on points called particles distributed in a domain. In such methods, an interpolant and several approximate differential operators are defined in terms of linear combinations of weighted interactions between neighboring particles. When such methods are applied to partial differential equations, the equations are effectively discretized in space. As the discretization procedure does not require mesh generation in the domain, particle methods can be applied to moving boundary problems, such as the deformation and destruction of structures [5, 22] and flow problems associated with free surfaces [21, 23].

The accuracy of particle methods has been widely researched. From an engineering perspective, many studies have been conducted on the convergence of such methods in practical applications, such as Amicarelli [1, 2], Fulk [9], and Quinlan et al. [25]. On the other hand, few studies in the literature have presented numerical analyses of these methods from a mathematical perspective. In the 1980s, Mas-Gallic and Raviart [20] and Raviart [26] provided error estimates for particle methods applied to parabolic and hyperbolic partial differential equations on unbounded domains. In the 2000s, Ben Moussa and Vila [4] and Ben Moussa [3] provided error estimates for nonlinear conservation laws on bounded domains. In their work, the time integration of the particle positions and volumes was performed by solving the differential equations associated with the advection fields. However, as their method is only applicable to problems described by solvable differential equations, it cannot be used with other problems, such as those involving the Navier–Stokes equations.

Sometime later, Ishijima and Kimura [13] developed a truncation error estimate for an approximate gradient operator in the MPS method. By introducing a regularity for particle distributions based on an indicator called the equivolume partition radius, they derived conditions that depend solely on the spatial distribution of the particles. However, a practical limitation is that this indicator is not computable.

In previous works, we established truncation error estimates for an interpolant, an approximate gradient operator, and an approximate Laplace operator of a generalized particle method in which the particle volumes were given as Voronoi volumes [11, 12]. A generalized particle method is a numerical framework that includes conventional particle methods, such as the SPH and MPS methods, as special cases. In those studies, we derived truncation error estimates by introducing a regularity based on an indicator known as the covering radius, which is used in the numerical analysis of meshfree methods based on moving least-squares methods and radial basis functions [17, 27, 30]. Although the formulations and conditions in those works are computable, they are difficult to deploy in practical computations because the computational cost of obtaining particle volumes via Voronoi decomposition is high.

The focus of the current work is to analyze particle methods under more practical conditions by extending our results to cases with commonly used particle volumes. We also introduce another indicator of particle volumes, which we refer to as the Voronoi deviation, representing the deviation between the particle volumes and the Voronoi volumes. Then, utilizing the Voronoi deviation, we extend the regularity and introduce two hypotheses on reference weight functions. Using the regularity and hypotheses, we derive truncation error estimates of the interpolant, approximate gradient operator, and approximate Laplace operator of the generalized particle method. Finally, we numerically analyze our estimates and compare the results with the theory.

The remainder of this paper is organized as follows. The interpolant and approximate operators of the generalized particle method are introduced in Sect. 2. A regularity describing the family of discrete parameters is discussed in Sect. 3, after which we propose our primary theorem with respect to the truncation error estimates and provide some corollaries. Then, the primary theorem is proven in Sect. 4, numerical results are detailed in Sect. 5, and some concluding remarks are outlined in Sect. 6.

In the remainder of this section, we describe some notation and define some relevant function spaces. Let \({\mathbb {R}}^{+}\), \({\mathbb {R}}^{+}_0\), and \({\mathbb {N}}_0\) be the set of positive real numbers, the set of nonnegative real numbers, and the set of nonnegative integers, respectively. Let \(d\) be the dimension of a space. Let \({\mathbb {A}}^{d}\) be the set of all \(d\)-dimensional multi-indices. For \(x =(x_1,x_2,\ldots ,x_d)^{\mathrm{T}}\in {\mathbb {R}}^{d}\) and \(\alpha = (\alpha _1,\alpha _2,\ldots ,\alpha _d)^{\mathrm{T}}\in {\mathbb {A}}^{d}\), \(x^\alpha \) is defined as \(x^\alpha =x_1^{\alpha _1}x_2^{\alpha _2}\cdots x_d^{\alpha _d}\). If there is no ambiguity, the symbol \(|\cdot |\) is used to denote the following: |x| denotes the Euclidean norm for \(x\in {\mathbb {R}}^{d}\); |S| denotes the volume of S for \(S\subset {\mathbb {R}}^{d}\); \(|\alpha |\) denotes \(|\alpha |:=\alpha _1+\alpha _2+\cdots +\alpha _d\) for \(\alpha \in {\mathbb {A}}^{d}\). For \(S\subset {\mathbb {R}}^{d}\), let \(\text {diam}(S)\) be \(\text {diam}(S):=\sup \left\{ |x-y|;\,x,y\in S\right\} \). For \(S\subset {\mathbb {R}}^{d}\), let \(C(\overline{S})\) be the space of real continuous functions defined in \(\overline{S}\) with the norm \(\left\| \,\cdot \,\right\| _{C(\overline{S})}\) defined as

$$\begin{aligned} \left\| v\right\| _{C(\overline{S})}&:=\max _{x\in \overline{S}}\left| v(x)\right| . \end{aligned}$$

For \(S\subset {\mathbb {R}}^{d}\) and \(\ell \in {\mathbb {N}}\), let \(C^{\ell }(\overline{S})\) be the space of functions in \(C(\overline{S})\) with continuous derivatives up to the \(\ell \)th order with its seminorm \(\left| \,\cdot \,\right| _{C^{\ell }(\overline{S})}\) and norm \(\left\| \,\cdot \,\right\| _{C^{\ell }(\overline{S})}\) defined as

$$\begin{aligned} \left| v\right| _{C^{\ell }(\overline{S})}&:=\max _{\alpha \in {\mathbb {A}}^{d}, |\alpha |=\ell }\left\| D^\alpha v\right\| _{C(\overline{S})}, \\ \left\| v\right\| _{C^{\ell }(\overline{S})}&:=\max _{j=0,1,\ldots ,\ell } \left| v\right| _{C^{j}(\overline{S})}, \end{aligned}$$

respectively. Here \(D^\alpha v:=\partial _1^{\alpha _1} \partial _2^{\alpha _2} \dots \partial _d^{\alpha _d}v\) with multi-index \(\alpha =(\alpha _1,\alpha _2,\ldots ,\alpha _d)\).

2 Approximate operators in a generalized particle method

Let \({\varOmega }\) be a bounded domain in \({\mathbb {R}}^{d}\). Let \(H\) be a fixed positive number. For \({\varOmega }\) and \(H\), we define the extended domain \({\varOmega }_H\) as

$$\begin{aligned} {\varOmega }_H:=\left\{ x\in {\mathbb {R}}^{d}\Big |\exists y\in {\varOmega }~\text{ s.t. }~|x-y|<H\right\} . \end{aligned}$$

For \(N\in {\mathbb {N}}\), we define a particle distribution \({\mathcal {X}}_{N}\) and particle volume set \({{\mathcal {V}}}_{N}\) as

$$\begin{aligned}&{\mathcal {X}}_{N}:=\left\{ x_{i}\in {\varOmega }_H;\,i=1,2,\ldots ,N,\quad x_{i}\ne x_{j}\,(i\ne j)\right\} , \\&{{\mathcal {V}}}_{N}:=\left\{ V_{i}\in {\mathbb {R}}^{+};\,i=1,2,\ldots ,N,\quad \sum _{i=1}^NV_{i}=\left| {\varOmega }_H\right| \right\} , \end{aligned}$$

respectively. We refer to \(x_{i}\in {\mathcal {X}}_{N}\) and \(V_{i}\in {{\mathcal {V}}}_{N}\) as a particle and particle volume, respectively. An example of the particle distribution \({\mathcal {X}}_{N}\) in \({\varOmega }_H\,(\subset {\mathbb {R}}^2)\) is shown in Fig. 1.

Fig. 1

Particle distribution \({\mathcal {X}}_{N}\) in \({\varOmega }_H\,(\subset {\mathbb {R}}^2)\)

We define an admissible reference weight function set \({\mathcal {W}}\) as

$$\begin{aligned} {\mathcal {W}}:=\left\{ w\in C({\mathbb {R}}^{+}_0);\,\text{ supp }(w)=[0,1],\,\int _{{\mathbb {R}}^{d}}w(|x|)\mathrm {d}x=1, \text{ absolutely } \text{ continuous }\right\} , \end{aligned}$$

we refer to \(w\in {\mathcal {W}}\) as a reference weight function, and we define the influence radius \(h_{N}\in {\mathbb {R}}\) as satisfying \(0<h_{N}<H\) and \(h_{N}\rightarrow 0\,(N\rightarrow \infty )\). If there is no ambiguity, we denote \(h_{N}\) as \(h\). For reference weight function \(w\) and influence radius \(h\), we define the weight function \(w_{h}\in C({\mathbb {R}}^{+}_0)\) as

$$\begin{aligned} w_{h}(r) :=\frac{1}{h^{d}}w\left( \frac{r}{h}\right) . \end{aligned}$$
(1)

Note that the weight function \(w_{h}\) satisfies

$$\begin{aligned} \text{ supp }(w_{h}) = [0,h], \quad \int _{{\mathbb {R}}^{d}}w_{h}(|x|)\mathrm {d}x=1, \end{aligned}$$

and is absolutely continuous.
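The scaling (1) can be checked numerically. The following sketch is an illustration only: the particular reference weight function \(w(r)=(30/\pi)r^2(1-r)^2\) and the midpoint-rule quadrature are our own choices, not prescribed by the text. It verifies that the scaled weight \(w_{h}\) inherits the unit-integral property in \(d=2\).

```python
import numpy as np

d = 2  # space dimension

def w(r):
    """Example reference weight function (our choice, not from the text):
    w(r) = (30/pi) r^2 (1 - r)^2 on [0, 1], zero outside, normalized so
    that the integral of w(|x|) over R^2 equals 1."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (30.0 / np.pi) * r ** 2 * (1.0 - r) ** 2, 0.0)

def w_h(r, h):
    """Scaled weight function, Eq. (1): w_h(r) = h^{-d} w(r / h)."""
    return w(np.asarray(r) / h) / h ** d

# Midpoint-rule check of the unit-integral property of w_h on [-h, h]^2.
h = 0.3
m = 800
g = -h + (2 * np.arange(m) + 1) * h / m       # midpoints of an m x m grid
X, Y = np.meshgrid(g, g)
integral = np.sum(w_h(np.hypot(X, Y), h)) * (2 * h / m) ** 2
assert abs(integral - 1.0) < 1e-3
```

The same check applies to any \(w\in {\mathcal {W}}\), since the unit integral of \(w_{h}\) follows from that of \(w\) by the change of variables \(x\mapsto x/h\).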

For \(v\in C(\overline{{\varOmega }}_{H})\), we define interpolant \({\varPi }_h{}\), approximate gradient operator \(\nabla _h{}\), and approximate Laplace operator \({\varDelta }_h{}\) as

$$\begin{aligned} {\varPi }_h{} v(x)&:=\sum _{i\in {\varLambda }_0(x,h)}V_{i} v(x_{i}) w_{h}(|x_{i}-x|), \end{aligned}$$
(2)
$$\begin{aligned} \nabla _h{} v(x)&:=d\sum _{i\in {\varLambda }(x,h)}V_{i} \frac{v(x_{i})-v(x)}{|x_{i}-x|}\frac{x_{i}-x}{|x_{i}-x|} w_{h}(|x_{i}-x|), \end{aligned}$$
(3)
$$\begin{aligned} {\varDelta }_h{} v(x)&:=2d\sum _{i\in {\varLambda }(x,h)}V_{i} \frac{v(x_{i})-v(x)}{|x_{i}-x|^2} w_{h}(|x_{i}-x|), \end{aligned}$$
(4)

respectively. Here, for \(x\in {\mathbb {R}}^{d}\) and \(r\in {\mathbb {R}}^{+}\cup \{\infty \}\), \({\varLambda }_0(x,r)\) and \({\varLambda }(x,r)\) are index sets of particles defined as

$$\begin{aligned} {\varLambda }_0(x,r)&:=\left\{ i=1,2,\ldots ,N;\,0\le |x-x_{i}|<r\right\} , \\ {\varLambda }(x,r)&:=\left\{ i=1,2,\ldots ,N;\,0<|x-x_{i}|<r\right\} , \end{aligned}$$

respectively.

As discussed later in Appendix 1, the approximate operators (2), (3), and (4) constitute a wider class of approximate operators than those used in the SPH and MPS methods. Therefore, we refer to the approximate operators (2), (3), and (4) as generalized approximate operators and to a particle method that uses them as a generalized particle method.
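To illustrate how (2), (3), and (4) are evaluated in practice, the following sketch implements the three operators for \(d=2\) and tests them on a quadratic function over a uniform lattice. The weight function, the lattice, and the test function are our own illustrative choices, not taken from the text.

```python
import numpy as np

d = 2

def w_h(r, h):
    """Scaled example weight (our choice): w(r) = (30/pi) r^2 (1-r)^2 on [0,1]."""
    s = np.asarray(r) / h
    return np.where(s < 1.0, (30.0 / np.pi) * s ** 2 * (1.0 - s) ** 2, 0.0) / h ** d

def operators(x, particles, volumes, v, h):
    """Interpolant (2), approximate gradient (3), and approximate Laplacian (4)
    evaluated at a point x; `particles` is (N, d), `volumes` is (N,)."""
    diff = particles - x                       # x_i - x
    r = np.linalg.norm(diff, axis=1)
    wt = w_h(r, h)
    in0 = r < h                                # index set Lambda_0(x, h)
    inl = in0 & (r > 0)                        # index set Lambda(x, h)
    vi = np.apply_along_axis(v, 1, particles)
    interp = np.sum(volumes[in0] * vi[in0] * wt[in0])
    coef = volumes[inl] * (vi[inl] - v(x)) / r[inl] ** 2 * wt[inl]
    grad = d * np.sum(coef[:, None] * diff[inl], axis=0)
    lap = 2 * d * np.sum(coef)
    return interp, grad, lap

# Uniform lattice on [0, 1]^2 with Voronoi volumes V_i = (1/n)^2.
n = 50
g = (np.arange(n) + 0.5) / n
particles = np.array([[a, b] for a in g for b in g])
volumes = np.full(n * n, 1.0 / n ** 2)

v = lambda p: p[0] ** 2 + p[1] ** 2            # grad v = 2p, Laplacian v = 4
interp, grad, lap = operators(np.array([0.5, 0.5]), particles, volumes, v, h=0.15)
```

For this symmetric configuration one observes `interp` close to \(v(x)=0.5\), `grad` close to \(\nabla v(x)=(1,1)\), and `lap` close to \({\varDelta }v=4\), consistent with (2)–(4) being consistent approximations on well-resolved distributions.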

3 Truncation error estimates of approximate operators

We first introduce a regularity of discrete parameters. Let \(\{\sigma _{i}\}\) be the Voronoi decomposition of \({\varOmega }_H\) associated with the particle distribution \({\mathcal {X}}_{N}\), where \(\sigma _{i}\) is the Voronoi region defined as

$$\begin{aligned} \sigma _{i} :=\left\{ x\in {\varOmega }_H;\,|x_{i}-x|<|x_{j}-x|,~\forall x_{j}\in {\mathcal {X}}_{N}\,(j\ne i) \right\} ,\quad i=1,2,\ldots ,N. \end{aligned}$$

We define a particle volume decomposition \({\varXi }=\{\xi _{i}\}\) as a decomposition of \({\varOmega }_H\) satisfying

$$\begin{aligned} \left| \xi _{i}\right| = V_{i}~(i=1,2,\ldots ,N),\quad \bigcup _{i=1}^N\overline{\xi }_{i}= \overline{{\varOmega }}_{H},\quad \xi _{i}\cap \xi _{j}=\emptyset ~(i\ne j). \end{aligned}$$

An example of the Voronoi decomposition of \({\varOmega }_H\) associated with the particle distribution \({\mathcal {X}}_{N}\) is shown in Fig. 2. We define a covering radius \(r_{N}\) for particle distribution \({\mathcal {X}}_{N}\) as

$$\begin{aligned} r_{N}:=\max _{i=1,2,\ldots ,N} \sup _{x\in \sigma _{i}}|x_{i}-x|. \end{aligned}$$
(5)

Moreover, we define a Voronoi deviation \(d_{N}\) for the particle distribution \({\mathcal {X}}_{N}\) and the particle volume set \({{\mathcal {V}}}_{N}\) as

$$\begin{aligned} d_{N}:=\inf _{{\varXi }} d_{{\varXi }} \end{aligned}$$
(6)

with

$$\begin{aligned} d_{{\varXi }} :=\max _{i=1,2,\ldots ,N}\left\{ \sum _{j=1}^N\dfrac{\left| \sigma _{i}\cap \xi _{j}\right| +\left| \xi _{i} \cap \sigma _{j}\right| }{\left| \sigma _{i}\right| }|x_{i}-x_{j}|\right\} . \end{aligned}$$

Then, we define a regularity for a family consisting of a particle distribution \({\mathcal {X}}_{N}\), particle volume set \({{\mathcal {V}}}_{N}\), and influence radius \(h\) as follows:

Fig. 2

Example of the Voronoi decomposition of \({\varOmega }_H\) associated with the particle distribution \({\mathcal {X}}_{N}\)

Definition 1

A family \(\{({\mathcal {X}}_{N}, {{\mathcal {V}}}_{N}, h_{N})\}_{N\rightarrow \infty }\) is said to be regular with order \(m\,(m\ge 1)\) if there exists a positive constant \(c_0\) such that

$$\begin{aligned} h_{N}^{m} \ge c_0(r_{N}+d_{N}),\quad \forall N\in {\mathbb {N}}. \end{aligned}$$
(7)

Remark 1

As shown in Fig. 3, the covering radius \(r_{N}\) becomes large in the case of a particle distribution with both dense and sparse regions. Therefore, the covering radius \(r_{N}\) can be considered an indicator of the uniformity of the particle distribution \({\mathcal {X}}_{N}\).

Fig. 3

Two examples of covering radii \(r_{N}\) for particle distributions with the same number of particles. The covering radius \(r_{N}\) for the uniform particle distribution (left) is smaller than that for the non-uniform particle distribution (right)

Remark 2

The Voronoi deviation \(d_{N}\) equals zero if and only if the particle volumes are given as the Voronoi volumes (\(V_{i}=\left| \sigma _{i}\right| \)). Moreover, the Voronoi deviation \(d_{N}\) becomes large if the particle volumes deviate greatly from the Voronoi volumes. Therefore, the Voronoi deviation \(d_{N}\) can be regarded as an indicator of the deviation between the particle volume set and the Voronoi volume set.

Remark 3

For a given family \(\{({\mathcal {X}}_{N}, {{\mathcal {V}}}_{N}, h_{N})\}_{N\rightarrow \infty }\) and given constant \(m\, (m\ge 1)\), it is possible to determine whether or not the family is regular with order \(m\), as the covering radius \(r_{N}\) and Voronoi deviation \(d_{N}\) are explicitly computable, as shown in Appendix 2.
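To illustrate Remark 3, the covering radius (5) can be approximated directly from a particle distribution without constructing the Voronoi decomposition explicitly: it equals the largest distance from any point of the domain to its nearest particle. The sketch below is our own illustration; the random particle cloud and the sampling resolution are arbitrary choices, and the supremum in (5) is approximated on a dense sample grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example: N random particles in Omega_H = [0, 1]^2 (an illustrative choice).
N = 200
particles = rng.random((N, 2))

# Definition (5) gives r_N = max_i sup_{x in sigma_i} |x_i - x|, i.e. the
# largest distance from any point of Omega_H to its nearest particle.
# Approximate the supremum on a dense midpoint grid.
m = 100
g = (np.arange(m) + 0.5) / m
X, Y = np.meshgrid(g, g)
samples = np.column_stack([X.ravel(), Y.ravel()])

# Nearest-particle distance for every sample point (brute force).
d2 = ((samples[:, None, :] - particles[None, :, :]) ** 2).sum(axis=2)
r_N = np.sqrt(d2.min(axis=1)).max()
```

For a uniform lattice with spacing \({\varDelta }x\), the Voronoi cells are cubes and \(r_{N}=\sqrt{d}\,{\varDelta }x/2\); for the random cloud above, \(r_{N}\) measures the largest hole in the distribution, in line with Remark 1.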

Next, we introduce two hypotheses of reference weight function \(w\):

Hypothesis 1

For \(n\in {\mathbb {N}}\), the reference weight function \(w\) satisfies for all \(\alpha \in {\mathbb {A}}^{d}\) with \(1\le |\alpha |\le n\),

$$\begin{aligned} \int _{{\mathbb {R}}^{d}} x^\alpha w(|x|)\mathrm {d}x= 0. \end{aligned}$$

Hypothesis 2

For \(k\in {\mathbb {N}}_0\), the reference weight function \(w\) satisfies

$$\begin{aligned} \max \left\{ \sup _{r\in (0,1)}|w^{(k+1)}(r)|, \sup _{r\in (0,1)}\left| (w^{(k)})^\prime (r)\right| \right\} < \infty , \end{aligned}$$

where for \(j\in {\mathbb {N}}_0\), \(w^{(j)}(r): (0,\infty )\rightarrow {\mathbb {R}}\) is defined as

$$\begin{aligned} w^{(j)}(r) :={\left\{ \begin{array}{ll} \displaystyle \lim _{s\downarrow 0}\dfrac{w(s)}{s^{j}},\quad &{} r=0, \\ \dfrac{w(r)}{r^{j}},\quad &{} r>0 \end{array}\right. } \end{aligned}$$
(8)

and \((w^{(k)})^\prime \) is \(\mathrm{d}w^{(k)}/\mathrm{d}r\).

Remark 4

All reference weight functions \(w\in {\mathcal {W}}\) satisfy Hypothesis 1 with \(n=1\). Moreover, for all \(n\in {\mathbb {N}}\) and \(k\in {\mathbb {N}}\), reference weight functions satisfying Hypothesis 1 with \(n\) and Hypothesis 2 with \(k\) can be constructed as shown in Appendix 3.
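The moment conditions of Hypothesis 1 can be checked numerically for a concrete kernel. The sketch below uses our own example kernel \(w(r)=(30/\pi)r^2(1-r)^2\) (an assumption, not from the text): its first-order moments vanish by radial symmetry, as Remark 4 states for every \(w\in {\mathcal {W}}\), but the moment for \(\alpha =(2,0)\) does not vanish, so this particular \(w\) does not satisfy Hypothesis 1 with \(n=2\).

```python
import numpy as np

def w(r):
    """Example reference weight (our choice): (30/pi) r^2 (1 - r)^2 on [0, 1]."""
    r = np.asarray(r, dtype=float)
    return np.where(r < 1.0, (30.0 / np.pi) * r ** 2 * (1.0 - r) ** 2, 0.0)

# Midpoint grid on the support [-1, 1]^2 (d = 2).
m = 400
g = -1.0 + (2.0 * np.arange(m) + 1.0) / m
X, Y = np.meshgrid(g, g)
W = w(np.hypot(X, Y))
dA = (2.0 / m) ** 2

def moment(a1, a2):
    """Approximates int x^alpha w(|x|) dx for alpha = (a1, a2)."""
    return float(np.sum(X ** a1 * Y ** a2 * W) * dA)

# Hypothesis 1 with n = 1: first-order moments vanish (radial symmetry).
assert abs(moment(1, 0)) < 1e-9 and abs(moment(0, 1)) < 1e-9
# The moment for alpha = (2, 0) is positive, so Hypothesis 1 fails for n = 2.
assert moment(2, 0) > 0.05
```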

We now state a theorem that defines truncation error estimates of approximate operators in the generalized particle method with a continuous norm:

Theorem 3

Suppose that a family \(\{({\mathcal {X}}_{N},{{\mathcal {V}}}_{N},h_{N})\}_{N\rightarrow \infty }\) is regular with order \(m\,(m\ge 1)\) and that the reference weight function \(w\) satisfies Hypothesis 1 with \(n\). Then, there exists a positive constant \(c\) independent of \(N\) such that

$$\begin{aligned} \left\| v-{\varPi }_h{} v\right\| _{C(\overline{{\varOmega }})}&\le c\, h^{\min \{m-1,n+1\}} \left\| v\right\| _{C^{n+1}(\overline{{\varOmega }}_{H})},\quad v\in C^{n+1}(\overline{{\varOmega }}_{H}). \end{aligned}$$
(9)

In addition, if \(w\in {\mathcal {W}}\) satisfies Hypothesis 2 with \(k=0\), then we have

$$\begin{aligned} \left\| \nabla v-\nabla _h{}v\right\| _{C(\overline{{\varOmega }})} \le c\, h^{\min \{m-1,n+1\}} \left\| v\right\| _{C^{n+2}(\overline{{\varOmega }}_{H})},\quad v\in C^{n+2}(\overline{{\varOmega }}_{H}), \end{aligned}$$
(10)

and if \(w\in {\mathcal {W}}\) satisfies Hypothesis 2 with \(k=1\), then we have

$$\begin{aligned} \left\| {\varDelta }v-{\varDelta }_h{}v\right\| _{C(\overline{{\varOmega }})} \le c\, h^{\min \{m-2,n+1\}} \left\| v\right\| _{C^{n+3}(\overline{{\varOmega }}_{H})},\quad v\in C^{n+3}(\overline{{\varOmega }}_{H}). \end{aligned}$$
(11)

The proof of Theorem 3 is presented in the next section. As shown in the corollaries in Appendix 1, the approximate operators commonly used in the SPH and MPS methods satisfy the assumptions of Theorem 3 under appropriate settings.
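The estimates above can be probed numerically. The following one-dimensional sketch is our own construction, not from the text: a uniform lattice with Voronoi volumes \(V_{i}=1/n\) (so \(d_{N}=0\)), the choice \(h=n^{-1/2}\) (so \(h^{2}\sim r_{N}\) and the family is regular with order \(m=2\)), and an example kernel \(w(r)=15r^2(1-r)^2\). The observed truncation error of the interpolant decreases as \(N\) grows, consistent with the upper bound (9).

```python
import numpy as np

def w_h(r, h):
    """Scaled 1-D example weight (our choice): w(r) = 15 r^2 (1 - r)^2 on [0, 1]."""
    s = np.asarray(r) / h
    return np.where(s < 1.0, 15.0 * s ** 2 * (1.0 - s) ** 2, 0.0) / h

def interp_error(n):
    """Max |v - Pi_h v| on [0.25, 0.75] for v(x) = sin(2 pi x), with particles
    on a uniform lattice in [0, 1], Voronoi volumes V_i = 1/n, and h = n^{-1/2}
    (so the family is regular with order m = 2, with d_N = 0)."""
    x = (np.arange(n) + 0.5) / n
    V = 1.0 / n
    h = n ** -0.5
    vx = np.sin(2 * np.pi * x)
    errs = []
    for p in np.linspace(0.25, 0.75, 101):
        approx = np.sum(V * vx * w_h(np.abs(x - p), h))
        errs.append(abs(approx - np.sin(2 * np.pi * p)))
    return max(errs)

errors = [interp_error(n) for n in (100, 400, 1600)]
assert errors[0] > errors[1] > errors[2]       # error decreases with N
```

Since (9) is only an upper bound, the observed rate may exceed the guaranteed \(h^{\min \{m-1,n+1\}}\); here the symmetric lattice yields faster decay than the bound requires.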

4 Proof of truncation error estimates

The following notation will be used in the subsequent proof of Theorem 3. Hereafter, let \(c\) be a generic positive constant independent of \(N\) (it may depend on the fixed positive parameter \(H\)). For \(\alpha \in {\mathbb {A}}^{d}\), set \(I_{\alpha }\) as

$$\begin{aligned} I_{\alpha }(x) :=\sum _{i\in {\varLambda }_0(x,h)}V_{i} (x_{i}-x)^{\alpha } w_{h}(|x_{i}-x|) - \int _{{\mathbb {R}}^{d}}y^\alpha w_{h}(|y|)\mathrm {d}y,\quad x\in \overline{{\varOmega }}. \end{aligned}$$

For \(\alpha \in {\mathbb {A}}^{d}\) and \(\ell \in {\mathbb {N}}\), set \(I_{\alpha ,\ell }\) as

$$\begin{aligned} I_{\alpha ,\ell }(x) :=\sum _{i\in {\varLambda }(x,h)} V_{i} \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell }} w_{h}(|x_{i}-x|) - \int _{{\mathbb {R}}^{d}} \frac{y^\alpha }{|y|^\ell } w_{h}(|y|)\mathrm {d}y,\quad x\in \overline{{\varOmega }}. \end{aligned}$$

For \(\ell \in {\mathbb {N}}\), set \(J_{\ell }\) as

$$\begin{aligned} J_{\ell }(x) :=\sum _{i\in {\varLambda }_0(x,h)} V_{i} |x_{i}-x|^{\ell } |w_{h}(|x_{i}-x|)|,\quad x\in \overline{{\varOmega }}. \end{aligned}$$

We now present the following lemma.

Lemma 1

Suppose that \(w\in {\mathcal {W}}\) satisfies Hypothesis 1 with \(n\). Then, there exists a positive constant \(c\) independent of \(N\) such that

$$\begin{aligned}&\left\| v-{\varPi }_h{} v\right\| _{C(\overline{{\varOmega }})} \nonumber \\&\quad \le c\left(\sum _{0\le |\alpha |\le n} \left\| I_{\alpha }\right\| _{C(\overline{{\varOmega }})}+ \left\| J_{n+1}\right\| _{C(\overline{{\varOmega }})}\right) \left\| v\right\| _{C^{n+1}(\overline{{\varOmega }}_{H})}, \nonumber \\&\qquad v\in C^{n+1}(\overline{{\varOmega }}_{H}), \end{aligned}$$
(12)
$$\begin{aligned}&\left\| \nabla v-\nabla _h{}v\right\| _{C(\overline{{\varOmega }})} \nonumber \\&\quad \le c\left(\sum _{2\le |\alpha |\le n+2} \left\| I_{\alpha ,2}\right\| _{C(\overline{{\varOmega }})}+ \left\| J_{n+1}\right\| _{C(\overline{{\varOmega }})}\right)\left\| v\right\| _{C^{n+2}(\overline{{\varOmega }}_{H})}, \nonumber \\&\qquad v\in C^{n+2}(\overline{{\varOmega }}_{H}), \end{aligned}$$
(13)
$$\begin{aligned}&\left\| {\varDelta }v-{\varDelta }_h{}v\right\| _{C(\overline{{\varOmega }})} \nonumber \\&\quad \le c\left(\sum _{1\le |\alpha |\le n+3} \left\| I_{\alpha ,2}\right\| _{C(\overline{{\varOmega }})}+ \left\| J_{n+1}\right\| _{C(\overline{{\varOmega }})}\right)\left\| v\right\| _{C^{n+3}(\overline{{\varOmega }}_{H})}, \nonumber \\&\qquad v\in C^{n+3}(\overline{{\varOmega }}_{H}). \end{aligned}$$
(14)

Proof

First, we prove (12). We fix \(x\in \overline{{\varOmega }}\). Then, let \(B(x,r)\) be the open ball in \({\mathbb {R}}^{d}\) with center \(x\) and radius \(r\), i.e.,

$$\begin{aligned} B(x,r):=\left\{ y\in {\mathbb {R}}^{d};\,|y-x|<r\right\} . \end{aligned}$$

From \(h<H\), we have \(B(x,h)\subset {\varOmega }_H\). Then, for all \(v\in C^{\ell +1}(\overline{{\varOmega }}_{H}) \, (\ell \in {\mathbb {N}})\) and \(x_{i}\in B(x,h)\), we obtain the Taylor expansion of \(v\) as

$$\begin{aligned}&v(x_{i}) = \sum _{0\le |\alpha |\le \ell } \frac{D^{\alpha }v(x)}{\alpha !}(x_{i}-x)^{\alpha } + \sum _{|\alpha |=\ell +1}(x_{i}-x)^{\alpha } R_{\alpha }(x_{i},x), \nonumber \\&R_{\alpha }(x_{i},x) :=\frac{|\alpha |}{\alpha !} \int _0^1 (1-t)^{|\alpha |-1}D^{\alpha }v(tx_{i}+(1-t)x)\mathrm{d}t. \end{aligned}$$
(15)
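As a numerical sanity check of the expansion (15) with the integral remainder \(R_{\alpha }(x_{i},x)=\frac{|\alpha |}{\alpha !}\int _0^1(1-t)^{|\alpha |-1}D^{\alpha }v(tx_{i}+(1-t)x)\,\mathrm{d}t\), the following standalone sketch (the polynomial \(v\), the points, and the quadrature resolution are arbitrary choices of ours) verifies in \(d=2\), \(\ell =1\) that the Taylor polynomial plus the remainder terms reproduces \(v(x_{i})\) up to quadrature error:

```python
import numpy as np
from math import factorial

def v(x, y):
    """Example polynomial (our choice): v = x^3 + x y^2."""
    return x ** 3 + x * y ** 2

derivs = {  # D^alpha v for |alpha| <= 1 and |alpha| = 2
    (0, 0): v,
    (1, 0): lambda x, y: 3 * x ** 2 + y ** 2,
    (0, 1): lambda x, y: 2 * x * y,
    (2, 0): lambda x, y: 6 * x + 0 * y,
    (1, 1): lambda x, y: 2 * y + 0 * x,
    (0, 2): lambda x, y: 2 * x + 0 * y,
}

x, xi = np.array([0.2, 0.3]), np.array([0.7, -0.1])
ell = 1

# Taylor polynomial of order ell about x, evaluated at x_i.
taylor = sum(derivs[a](*x) / (factorial(a[0]) * factorial(a[1]))
             * np.prod((xi - x) ** np.array(a))
             for a in derivs if sum(a) <= ell)

# Remainder terms R_alpha via midpoint quadrature in t over [0, 1].
t = (np.arange(4000) + 0.5) / 4000
pts = np.outer(t, xi) + np.outer(1 - t, x)     # points t x_i + (1 - t) x
rem = 0.0
for a in [(2, 0), (1, 1), (0, 2)]:
    integ = np.mean((1 - t) ** (sum(a) - 1) * derivs[a](pts[:, 0], pts[:, 1]))
    R = sum(a) / (factorial(a[0]) * factorial(a[1])) * integ
    rem += np.prod((xi - x) ** np.array(a)) * R

assert abs(taylor + rem - v(*xi)) < 1e-6
```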

From (2) and (15) with \(\ell =n\), we have

$$\begin{aligned} {\varPi }_h{} v(x)&= \sum _{0\le |\alpha |\le n} \frac{D^{\alpha }v(x)}{\alpha !} \sum _{i\in {\varLambda }_0(x,h)} V_{i} (x_{i}-x)^{\alpha } w_{h}(|x_{i}-x|) \\&\quad +\sum _{|\alpha |=n+1} \sum _{i\in {\varLambda }_0(x,h)}R_{\alpha }(x_{i},x)V_{i} (x_{i}-x)^{\alpha }w_{h}(|x_{i}-x|). \end{aligned}$$

Moreover, by Hypothesis 1, we have

$$\begin{aligned} {\varPi }_h{} v(x) - v(x)&= \sum _{0\le |\alpha |\le n} \frac{D^{\alpha }v(x)}{\alpha !} I_{\alpha }(x) \nonumber \\&\quad +\sum _{|\alpha |=n+1}\sum _{i\in {\varLambda }_0(x,h)} R_{\alpha }(x_{i},x)V_{i} (x_{i}-x)^{\alpha }w_{h}(|x_{i}-x|). \end{aligned}$$
(16)

Because

$$\begin{aligned} |R_{\alpha }(y,z)| \le \frac{1}{\alpha !}\left| v\right| _{C^{|\alpha |}(\overline{{\varOmega }}_{H})}, \quad y\in \overline{{\varOmega }},~z\in B(y,h),~\alpha \in {\mathbb {A}}^{d}, \end{aligned}$$
(17)

we have

$$\begin{aligned}&\left| \sum _{|\alpha |=n+1}\sum _{i\in {\varLambda }_0(x,h)} R_{\alpha }(x_{i},x)V_{i} (x_{i}-x)^{\alpha }w_{h}(|x_{i}-x|)\right| \nonumber \\&\quad \le c|J_{n+1}(x)| \left| v\right| _{C^{n+1}(\overline{{\varOmega }}_{H})}. \end{aligned}$$
(18)

Moreover, we have

$$\begin{aligned} \left| \sum _{0\le |\alpha |\le n} \frac{D^{\alpha }v(x)}{\alpha !} I_{\alpha }(x)\right| \le c\left\| v\right\| _{C^{n}(\overline{{\varOmega }})} \sum _{0\le |\alpha | \le n}\left| I_{\alpha }(x)\right| . \end{aligned}$$
(19)

Therefore, from (16), (18), and (19), we obtain (12).

Next, we prove (13). From (3) and (15) with \(\ell =n+1\), we have

$$\begin{aligned} \nabla _h{} v(x)&=d\sum _{1\le |\alpha |\le n+1} \frac{D^{\alpha }v(x)}{\alpha !} \sum _{i\in {\varLambda }(x,h)}V_{i}\frac{(x_{i}-x) (x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|) \\&\quad +d\sum _{|\alpha |=n+2} \sum _{i\in {\varLambda }(x,h)}R_{\alpha }(x_{i},x) V_{i}\frac{(x_{i}-x)(x_{i}-x)^{\alpha }}{|x_{i} -x|^2}w_{h}(|x_{i}-x|). \end{aligned}$$

Because for \(\beta \in {\mathbb {A}}^{d}\) with \(|\beta |=2\),

$$\begin{aligned} d\int _{{\mathbb {R}}^{d}} \frac{y^{\beta }}{|y|^2}w_{h}(|y|)\mathrm {d}y= {\left\{ \begin{array}{ll} 1,\quad &{} \text{ all } \text{ elements } \text{ of } \, \beta \, \text{ are } \text{ even }, \\ 0,\quad &{} \text{ otherwise }, \end{array}\right. } \end{aligned}$$
(20)

we have

$$\begin{aligned} d\sum _{|\alpha |=1}\frac{D^{\alpha }v(x)}{\alpha !}\int _{{\mathbb {R}}^{d}} \frac{yy^\alpha }{|y|^2} w_{h}(|y|)\mathrm {d}y= \nabla v(x). \end{aligned}$$
(21)

Hypothesis 1 with \(n\) yields

$$\begin{aligned} \int _{{\mathbb {R}}^{d}} \frac{yy^\alpha }{|y|^2} w_{h}(|y|)\mathrm {d}y= 0\qquad \alpha \in {\mathbb {A}}^{d} \text{ with } 2\le |\alpha |\le n+1. \end{aligned}$$
(22)

From (21) and (22), we have

$$\begin{aligned} \nabla _h{}v(x)-\nabla v(x)&=-d\sum _{1\le |\alpha |\le n+1} \frac{D^{\alpha }v(x)}{\alpha !}\int _{{\mathbb {R}}^{d}} \frac{yy^\alpha }{|y|^2} w_{h}(|y|)\mathrm {d}y\nonumber \\&\quad +d\sum _{1\le |\alpha |\le n+1} \frac{D^{\alpha }v(x)}{\alpha !}\sum _{i\in {\varLambda }(x,h)} V_{i}\frac{(x_{i}-x)(x_{i}-x)^{\alpha }}{|x_{i}-x|^2} w_{h}(|x_{i}-x|) \nonumber \\&\quad +d\sum _{|\alpha |=n+2}\sum _{i\in {\varLambda }(x,h)} R_{\alpha }(x_{i},x) V_{i}\frac{(x_{i}-x) (x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|). \end{aligned}$$
(23)

From (17), we have

$$\begin{aligned}&\left| \sum _{|\alpha |=n+2}\sum _{i\in {\varLambda }(x,h)} R_{\alpha }(x_{i},x)V_{i}\frac{(x_{i}-x) (x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|)\right| \nonumber \\&\quad \le c|J_{n+1}(x)|\left| v\right| _{C^{n+2}(\overline{{\varOmega }}_{H})}. \end{aligned}$$
(24)

Moreover, we have

$$\begin{aligned}&\sum _{1\le |\alpha |\le n+1}\left| \sum _{i\in {\varLambda }(x,h)} V_{i}\frac{(x_{i}-x)(x_{i}-x)^{\alpha }}{|x_{i}-x|^2} w_{h}(|x_{i}-x|)-\int _{{\mathbb {R}}^{d}}\frac{yy^\alpha }{|y|^2}w_{h}(|y|)\mathrm {d}y\right| \nonumber \\&\quad \le c\sum _{2\le |\alpha |\le n+2} |I_{\alpha ,2}(x)|. \end{aligned}$$
(25)

Therefore, from (23), (24), and (25), we obtain (13).

Finally, we prove (14). From (4) and (15) with \(\ell =n+2\), we have

$$\begin{aligned} {\varDelta }_h{} v(x)&= 2d\sum _{1\le |\alpha |\le n+2} \frac{D^{\alpha }v(x)}{\alpha !} \sum _{i\in {\varLambda }(x,h)} V_{i} \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^2} w_{h}(|x_{i}-x|) \\&\quad + 2d\sum _{|\alpha |=n+3} \sum _{i\in {\varLambda }(x,h)} R_{\alpha }(x_{i},x) V_{i}\frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|). \end{aligned}$$

From (20), we have

$$\begin{aligned} 2d\sum _{|\alpha |=2}\frac{D^{\alpha }v(x)}{\alpha !} \int _{{\mathbb {R}}^{d}}\frac{y^\alpha }{|y|^2} w_{h}(|y|)\mathrm {d}y= {\varDelta }v(x). \end{aligned}$$

Hypothesis 1 with \(n\) yields

$$\begin{aligned} \int _{{\mathbb {R}}^{d}} \frac{y^\alpha }{|y|^2} w_{h}(|y|)\mathrm {d}y= 0,\quad \alpha \in {\mathbb {A}}^{d} \, \text{ with } \, |\alpha |=1 \, \text{ or } \, 3\le |\alpha |\le n+2. \end{aligned}$$

Therefore, we have

$$\begin{aligned} {\varDelta }_h{}v(x)-{\varDelta }v(x)&=2d\sum _{1\le |\alpha |\le n+2} \frac{D^{\alpha }v(x)}{\alpha !} I_{\alpha ,2}(x) \nonumber \\&\quad + 2d\sum _{|\alpha |=n+3} \sum _{i\in {\varLambda }(x,h)}R_{\alpha }(x_{i},x)V_{i} \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|). \end{aligned}$$
(26)

From (17), we have

$$\begin{aligned}&\left| \sum _{|\alpha |=n+3}\sum _{i\in {\varLambda }(x,h)} R_{\alpha }(x_{i},x)V_{i} \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^2}w_{h}(|x_{i}-x|)\right| \nonumber \\&\quad \le c|J_{n+1}(x)|\left| v\right| _{C^{n+3}(\overline{{\varOmega }}_{H})}. \end{aligned}$$
(27)

Moreover, we have

$$\begin{aligned} \left| \sum _{1\le |\alpha |\le n+2} \frac{D^{\alpha }v(x)}{\alpha !} I_{\alpha ,2}(x)\right| \le c\left\| v\right\| _{C^{n+2}(\overline{{\varOmega }})} \sum _{1\le |\alpha | \le n+2}\left| I_{\alpha ,2}(x)\right| . \end{aligned}$$
(28)

Therefore, from (26), (27), and (28), we obtain (14).

Next, we show estimates of \(I_{\alpha }\), \(I_{\alpha ,\ell }\), and \(J_{\ell }\).

Lemma 2

There exists a positive constant \(c\) independent of \(N\) such that

$$\begin{aligned} \Vert I_{\alpha }\Vert _{C(\overline{{\varOmega }})} \le c\left(1+2\frac{r_{N}}{h}\right)^{d}\left(\frac{r_{N}+d_{N}}{h}\right),\quad \alpha \in {\mathbb {A}}^{d}. \end{aligned}$$
(29)

Proof

We arbitrarily fix \(x\in \overline{{\varOmega }}\), \(\alpha \in {\mathbb {A}}^{d}\), and particle volume decomposition \({\varXi }=\{\xi _{i}\mid i=1,2,\ldots ,N\}\) and split \(I_{\alpha }\) into

$$\begin{aligned} I_{\alpha }(x) = E_{1}(x) + E_{2}(x) + E_{3}(x) \end{aligned}$$

with

$$\begin{aligned} E_{1}(x)&:=\sum _{i\in {\varLambda }_0(x,h)}V_{i} (x_{i}-x)^{\alpha }w_{h}(|x_{i}-x|) \\&\quad -\sum _{i=1}^{N}\sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| (x_{i}-x)^{\alpha }w_{h}(|x_{j}-x|), \\ E_{2}(x)&:=\sum _{i=1}^{N}\sum _{j=1}^{N}(x_{i}-x)^{\alpha } \int _{\sigma _{j}\cap \xi _{i}}\{w_{h}(|x_{j}-x|)-w_{h}(|y-x|)\}\mathrm {d}y, \\ E_{3}(x)&:=\sum _{i=1}^{N}\sum _{j=1}^{N}(x_{i}-x)^{\alpha } \int _{\sigma _{j}\cap \xi _{i}}w_{h}(|y-x|)\mathrm {d}y-\int _{{\mathbb {R}}^{d}}y^\alpha w_{h}(|y|)\mathrm {d}y. \end{aligned}$$

Then, we estimate \(E_{1}\), \(E_{2}\), and \(E_{3}\).

First, we estimate \(E_{1}\). Because

$$\begin{aligned} \sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| =V_{i},\quad i=1,2,\ldots ,N, \end{aligned}$$
(30)

we can rewrite \(E_{1}\) as

$$\begin{aligned} E_{1}=\sum _{i=1}^{N}\sum _{j=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| (x_{i}-x)^{\alpha }\{w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)\}. \end{aligned}$$

From

$$\begin{aligned} |(y-x)^{\alpha }| \le \text {diam}({\varOmega }_H)^{|\alpha |}, \quad y\in {\varOmega }_H, \end{aligned}$$
(31)

we obtain

$$\begin{aligned} \left| E_{1}(x)\right|&\le c\sum _{i=1}^{N} \sum _{j=1}^{N}\left| \sigma _{j} \cap \xi _{i}\right| |w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|. \end{aligned}$$
(32)

From

$$\begin{aligned} |w_{h}(|y-x|)-w_{h}(|z-x|)| = 0, \quad \forall y, z\in {\mathbb {R}}^{d}{\setminus }B(x,h), \end{aligned}$$

we have

$$\begin{aligned}&\sum _{i=1}^{N}\sum _{j=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| | w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|\nonumber \\&\quad \le \sum _{i\in {\varLambda }_0(x,h)}\sum _{j=1}^{N}\left| \sigma _{j} \cap \xi _{i}\right| |w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|\nonumber \\&\qquad +\sum _{i=1}^{N}\sum _{j\in {\varLambda }_0(x,h)}\left| \sigma _{j} \cap \xi _{i}\right| |w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|\nonumber \\&\quad =\sum _{i\in {\varLambda }_0(x,h)}\sum _{j=1}^{N}(\left| \sigma _{i} \cap \xi _{j}\right| +\left| \sigma _{j}\cap \xi _{i}\right| )|w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|. \end{aligned}$$
(33)

Because \(w_{h}\) is absolutely continuous, we have

$$\begin{aligned}&\left| w_{h}(|y-x|)-w_{h}(|z-x|)\right| \nonumber \\&\quad = \left| (|y-x|-|z-x|)\int _0^1w_{h}^\prime (t |y-x|+(1-t)|z-x|)\mathrm{d}t\right| \nonumber \\&\quad \le |y-z| \int _0^1\left| w_{h}^\prime (t |y-x|+(1-t)|z-x|)\right| \mathrm{d}t \nonumber \\&\quad \le |y-z| \sup _{r\in (0,h)}\left| w_{h}^\prime (r)\right| \nonumber \\&\quad \le \dfrac{|y-z|}{h^{d+1}}\sup _{r\in (0,1)}\left| w^\prime (r)\right| , \end{aligned}$$
(34)

for all \(y, z\in {\mathbb {R}}^{d}\). Here, \(w^\prime \) and \(w_{h}^\prime \) are \(\mathrm{d}w/\mathrm{d}r\) and \(\mathrm{d}w_{h}/\mathrm{d}r\), respectively. Moreover, we have

$$\begin{aligned} \sum _{i\in {\varLambda }_0(x,r)}|\sigma _{i}|\le \left| B(x,1)\right| \left(r+r_{N}\right)^{d},\quad \forall r\in {\mathbb {R}}^{+}_0. \end{aligned}$$
(35)

From (33), (34), and (35), we have

$$\begin{aligned}&\sum _{i=1}^{N}\sum _{j=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| |w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)|\nonumber \\&\quad \le \dfrac{c}{h^{d+1}}\sum _{i\in {\varLambda }_0(x,h)} \sum _{j=1}^{N}\left( \left| \sigma _{i}\cap \xi _{j}\right| +\left| \sigma _{j} \cap \xi _{i}\right| \right) |x_{i}-x_{j}| \nonumber \\&\quad \le \dfrac{c}{h^{d+1}}\sum _{i\in {\varLambda }_0(x,h)} |\sigma _{i}|\sum _{j=1}^{N} \frac{\left| \sigma _{i} \cap \xi _{j}\right| +\left| \sigma _{j}\cap \xi _{i}\right| }{|\sigma _{i}|}|x_{i}-x_{j}| \nonumber \\&\quad \le c\dfrac{d_{{\varXi }}}{h^{d+1}} \sum _{i\in {\varLambda }_0(x,h)}|\sigma _{i}| \nonumber \\&\quad \le c\left(1+\frac{r_{N}}{h}\right)^{d} \frac{d_{{\varXi }}}{h}. \end{aligned}$$
(36)

Therefore, from (32) and (36), we obtain

$$\begin{aligned} \left| E_{1}(x)\right| \le c\left(1+\frac{r_{N}}{h}\right)^{d} \frac{d_{{\varXi }}}{h}. \end{aligned}$$

Next, we estimate \(E_{2}\). Because \(\text{ supp }(w_{h}) = [0,h]\) and \(\sigma _{j}\subset B(x_{j},r_{N})\), we have

$$\begin{aligned}&\int _{\sigma _{j}\cap \xi _{i}}|w_{h}(|x_{j}-x|)-w_{h}(|y-x|)|\mathrm {d}y=0, \nonumber \\&\quad i=1,2,\ldots ,N,~j\not \in {\varLambda }_0(x,h+r_{N}). \end{aligned}$$
(37)

From (37), we have

$$\begin{aligned}&\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi _{i}}| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)|\mathrm {d}y\\&\quad =\sum _{i=1}^{N}\sum _{j\in {\varLambda }_0(x,h+r_{N})} \int _{\sigma _{j}\cap \xi _{i}}|w_{h}(|x_{j}-x|)-w_{h}(|y-x|)|\mathrm {d}y\\&\quad =\sum _{j\in {\varLambda }_0(x,h+r_{N})}\int _{\sigma _{j}}| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)|\mathrm {d}y. \end{aligned}$$

Moreover, from (34) and (35), we have

$$\begin{aligned} \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi _{i}}| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)|\mathrm {d}y&\le \dfrac{c}{h^{d+1}}\sum _{j\in {\varLambda }_0(x,h+r_{N})} \int _{\sigma _{j}}|x_{j}-y|\mathrm {d}y\nonumber \\&\le c\dfrac{r_{N}}{h^{d+1}}\sum _{j\in {\varLambda }_0(x,h+r_{N})}|\sigma _{j}| \nonumber \\&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d}\dfrac{r_{N}}{h}. \end{aligned}$$
(38)

Therefore, from (31) and (38), we obtain

$$\begin{aligned} |E_{2}(x)|&\le \sum _{i=1}^{N}\sum _{j=1}^{N}\left| (x_{i}-x)^{\alpha }\right| \int _{\sigma _{j}\cap \xi _{i}}\left| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)\right| \mathrm {d}y\\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi _{i}} \left| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)\right| \mathrm {d}y\\&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d}\dfrac{r_{N}}{h}. \end{aligned}$$

Finally, we estimate \(E_{3}\). Because

$$\begin{aligned} \int _{{\mathbb {R}}^{d}}y^\alpha w_{h}(|y|)\mathrm {d}y=\int _{{\varOmega }_H}(y-x)^\alpha w_{h}(|y-x|)\mathrm {d}y, \end{aligned}$$

we can rewrite \(E_{3}\) as

$$\begin{aligned} E_{3}(x) =\sum _{i=1}^{N} \sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi _{i}}\{(x_{i}-x)^{\alpha }- (y-x)^\alpha \}w_{h}(|y-x|)\mathrm {d}y. \end{aligned}$$

Because \(E_{3}=0\) when \(|\alpha |=0\), we only need to estimate the case \(|\alpha |\ge 1\). Let \(\beta _k\,(k=1,2,\ldots ,|\alpha |)\) be \(d\)-dimensional multi-indices satisfying

$$\begin{aligned} \sum _{k=1}^{|\alpha |}\beta _k=\alpha ,\quad |\beta _k|=1~(k=1,2,\ldots ,|\alpha |). \end{aligned}$$

Then, we have, for all \(y,z\in {\mathbb {R}}^{d}\),

$$\begin{aligned} \left| y^{\alpha }-z^{\alpha }\right|&\le \left| y^{\alpha }-y^{\alpha -\beta _1}z^{\beta _1}\right| + \left| y^{\alpha -\beta _1}z^{\beta _1}-z^{\alpha }\right| \nonumber \\&\le \left| y-z\right| |y|^{|\alpha |-1}+ \left| y^{\alpha -\beta _1}-z^{\alpha -\beta _1}\right| |z| \nonumber \\&\le \left| y-z\right| |y|^{|\alpha |-1}+\left| y-z\right| |y|^{|\alpha |-2}|z|+ \left| y^{\alpha -\beta _1-\beta _2}-z^{\alpha -\beta _1-\beta _2}\right| |z|^2 \nonumber \\&~\vdots \nonumber \\&\le \left| y-z\right| \sum _{k=1}^{|\alpha |}|y|^{|\alpha |-k}|z|^{k-1}. \end{aligned}$$
(39)
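For instance, in the case \(d=2\) and \(\alpha =(1,1)\), the telescoping bound (39) reads

$$\begin{aligned} \left| y_{1}y_{2}-z_{1}z_{2}\right| \le |y_{1}||y_{2}-z_{2}|+|z_{2}||y_{1}-z_{1}| \le \left| y-z\right| \left( |y|+|z|\right) , \end{aligned}$$

in agreement with \(\sum _{k=1}^{2}|y|^{2-k}|z|^{k-1}=|y|+|z|\).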

From (31) and (39), we obtain

$$\begin{aligned} |E_{3}(x)|&\le \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi _{i}}|(x_{i}-x)^{\alpha }-(y-x)^\alpha || w_{h}(|y-x|)|\mathrm {d}y\nonumber \\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi _{i}}|y-x_{i}||w_{h}(|y-x|)|\mathrm {d}y. \end{aligned}$$
(40)

By \(\text{ supp }(w_{h}) = [0,h]\) and \(\sigma _{j}\subset B(x_{j},r_{N})\), if \(j\not \in {\varLambda }_0(x,h+r_{N})\), then

$$\begin{aligned} \int _{\sigma _{j}\cap \xi _{i}}|y-x_{i}||w_{h}(|y-x|)|\mathrm {d}y=0, \quad i=1,2,\ldots ,N. \end{aligned}$$
(41)

Moreover, from \(w\in {\mathcal {W}}\subset C({\mathbb {R}}^{+}_0)\), we have

$$\begin{aligned} |w_{h}(|y-x|)|=\dfrac{1}{h^d}\left| w\left(\dfrac{|y-x|}{h}\right)\right| \le \dfrac{1}{h^d}\left\| w\right\| _{C({\mathbb {R}}^{+}_0)},\quad \forall y\in {\varOmega }_H. \end{aligned}$$
(42)

From (35), (41), and (42), we have

$$\begin{aligned}&\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi _{i}}| y-x_{i}||w_{h}(|y-x|)|\mathrm {d}y\nonumber \\&\quad =\sum _{i=1}^{N}\sum _{j\in {\varLambda }_0(x,h+r_{N})} \int _{\sigma _{j}\cap \xi _{i}}|y-x_{i}||w_{h}(|y-x|)|\mathrm {d}y\nonumber \\&\quad \le \dfrac{c}{h^d}\sum _{i=1}^{N} \sum _{j\in {\varLambda }_0(x,h+r_{N})}\int _{\sigma _{j} \cap \xi _{i}}|y-x_{i}|\mathrm {d}y\nonumber \\&\quad \le \dfrac{c}{h^d}\sum _{i=1}^{N}\sum _{j\in {\varLambda }_0(x,h+r_{N})}\int _{\sigma _{j}\cap \xi _{i}}(|y-x_{j}|+|x_{j}- x_{i}|)\mathrm {d}y\nonumber \\&\quad \le \dfrac{c}{h^d}\left(r_{N}\sum _{j\in {\varLambda }_0(x,h+r_{N})}\left| \sigma _{j}\right| +\sum _{j\in {\varLambda }_0(x,h+r_{N})} \sum _{i=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| |x_{j}-x_{i}|\right)\nonumber \\&\quad \le \dfrac{c}{h^d}\left(\sum _{j\in {\varLambda }_0(x,h+r_{N})} \left| \sigma _{j}\right| \right)\left\{ r_{N}+\max _{j=1,2,\ldots ,N}\left(\sum _{i=1}^{N} \dfrac{\left| \sigma _{i}\cap \xi _{j}\right| +\left| \sigma _{j}\cap \xi _{i}\right| }{\left| \sigma _{j}\right| }|x_{j}-x_{i}|\right)\right\} \nonumber \\&\quad \le c\left(1+2\dfrac{r_{N}}{h}\right)^{d} \left(r_{N}+d_{{\varXi }}\right). \end{aligned}$$
(43)

Therefore, from (40), (43), and \(h\le H\), we obtain

$$\begin{aligned} |E_{3}(x)|&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d} (r_{N}+d_{{\varXi }}) \\&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d} \frac{r_{N}+d_{{\varXi }}}{h}. \end{aligned}$$

From the estimates of \(E_{1}\), \(E_{2}\), and \(E_{3}\), we obtain

$$\begin{aligned} \Vert I_{\alpha }\Vert _{C(\overline{{\varOmega }})} \le c\left(1+2\frac{r_{N}}{h}\right)^{d}\frac{r_{N}+d_{{\varXi }}}{h}. \end{aligned}$$

Because \({\varXi }\) is arbitrary, we establish (29).

Lemma 3

Suppose that a reference weight function \(w\) satisfies Hypothesis 2 with \(k\). Then, there exists a positive constant \(c\) independent of \(N\) such that for all \(\alpha \in {\mathbb {A}}^{d}\) and \(\ell \in {\mathbb {N}}\) with \(1\le \ell -k\le |\alpha |\),

$$\begin{aligned} \Vert I_{\alpha ,\ell }\Vert _{C(\overline{{\varOmega }})} \le c\left(1+2\frac{r_{N}}{h}\right)^{d} \frac{r_{N}+d_{N}}{h^{k+1}}. \end{aligned}$$
(44)

Proof

We arbitrarily fix \(x\in \overline{{\varOmega }}\), \(\alpha \in {\mathbb {A}}^{d}\), particle volume decomposition \({\varXi }=\{\xi _{i}\mid i=1,2,\ldots ,N\}\), and \(\ell \in {\mathbb {N}}\) with \(1\le \ell -k\le |\alpha |\) and split \(I_{\alpha ,\ell }\) into

$$\begin{aligned} I_{\alpha ,\ell }(x) = E_{4}(x) + E_{5}(x) + E_{6}(x) \end{aligned}$$

with

$$\begin{aligned} E_{4}(x)&:=\sum _{i\in {\varLambda }(x,h)} V_{i}\frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell }} w_{h}(|x_{i}-x|) \\&\qquad -\sum _{i\in {\varLambda }(x,\infty )} \sum _{j\in {\varLambda }(x,\infty )}\left| \sigma _{j} \cap \xi _{i}\right| \frac{(x_{i}-x)^{\alpha }}{|x_{i}- x|^{\ell -k}}\dfrac{w_{h}(|x_{j}-x|)}{|x_{j}- x|^{k}}, \\ E_{5}(x)&:=\sum _{i\in {\varLambda }(x,\infty )} \sum _{j\in {\varLambda }(x,\infty )}\left| \sigma _{j} \cap \xi _{i}\right| \frac{(x_{i}-x)^{\alpha }}{|x_{i}- x|^{\ell -k}}\dfrac{w_{h}(|x_{j}-x|)}{|x_{j}-x|^{k}} \\&\qquad -\sum _{i\in {\varLambda }(x,\infty )}\sum _{j=1}^{N} \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell -k}} \int _{\sigma _{j}\cap \xi _{i}}\dfrac{w_{h}(|y-x|)}{|y- x|^{k}}\mathrm {d}y, \\ E_{6}(x)&:=\sum _{i\in {\varLambda }(x,\infty )} \sum _{j=1}^{N}\frac{(x_{i}-x)^{\alpha }}{|x_{i}- x|^{\ell -k}}\int _{\sigma _{j}\cap \xi _{i}} \dfrac{w_{h}(|y-x|)}{|y-x|^{k}}\mathrm {d}y\\&\qquad -\int _{{\mathbb {R}}^{d}} \frac{y^\alpha }{|y|^\ell } w_{h}(|y|)\mathrm {d}y. \end{aligned}$$

Then, we estimate \(E_{4}\), \(E_{5}\), and \(E_{6}\).

First, we estimate \(E_{4}\). Let \(w^{(k)}\) be as in (8) and define \(w^{(k)}_h\) as

$$\begin{aligned} w^{(k)}_h(r) :=\frac{1}{h^{d+k}}w^{(k)}\left( \frac{r}{h}\right) , \quad r\in {\mathbb {R}}^{+}_0. \end{aligned}$$

Then, from (30), we can rewrite \(E_{4}\) as

$$\begin{aligned} E_{4}(x) = \sum _{i\in {\varLambda }(x,\infty )} \sum _{j=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| \frac{(x_{i}- x)^{\alpha }}{|x_{i}-x|^{\ell -k}} \{w^{(k)}_h(|x_{i}-x|)-w^{(k)}_h(|x_{j}-x|)\}. \end{aligned}$$

Because

$$\begin{aligned} \left| \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell -k}}\right| \le |x_{i}-x|^{|\alpha |-\ell +k}\le \text {diam}({\varOmega }_H)^{|\alpha |-\ell +k}, \quad i\in {\varLambda }(x,\infty ), \end{aligned}$$
(45)

we obtain

$$\begin{aligned} |E_{4}(x)|\le c\sum _{i=1}^{N}\sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| \left| w^{(k)}_h (|x_{i}-x|)-w^{(k)}_h(|x_{j}-x|)\right| . \end{aligned}$$

From \(\text{ supp }(w^{(k)}_h) = [0,h]\), we have

$$\begin{aligned} w^{(k)}_h(|x_{i}-x|)-w^{(k)}_h(|x_{j}-x|)= 0,\quad i, j\not \in {\varLambda }(x,h). \end{aligned}$$

Thus, we obtain

$$\begin{aligned} |E_{4}(x)|&\le c\Bigg (\sum _{i\in {\varLambda }(x,h)} \sum _{j=1}^{N}\left| \sigma _{j}\cap \xi _{i}\right| \left| w^{(k)}_h(|x_{i}-x|)- w^{(k)}_h(|x_{j}-x|)\right| \nonumber \\&\quad +\sum _{i=1}^{N}\sum _{j\in {\varLambda }(x,h)} \left| \sigma _{j}\cap \xi _{i}\right| \left| w^{(k)}_h (|x_{i}-x|)-w^{(k)}_h(|x_{j}-x|)\right| \Bigg ). \end{aligned}$$
(46)

Using an argument similar to (34), if \(w\) satisfies Hypothesis 2 with \(k\), then for all \(y, z\in {\mathbb {R}}^{d}\),

$$\begin{aligned} |w^{(k)}_h(|y-x|)-w^{(k)}_h(|z-x|)| \le \dfrac{|y-z|}{h^{d+k+1}}\int _0^1\left| (w^{(k)})^\prime (r)\right| \mathrm{d}r. \end{aligned}$$
(47)

From (46) and (47), we obtain

$$\begin{aligned} |E_{4}(x)|&\le \dfrac{c}{h^{d+k+1}} \sum _{i\in {\varLambda }(x,h)}\sum _{j=1}^{N} \left(\left| \sigma _{i}\cap \xi _{j}\right| +\left| \sigma _{j} \cap \xi _{i}\right| \right)\left| x_{i}-x_{j}\right| \\&\le \dfrac{c}{h^{d+k+1}} \sum _{i\in {\varLambda }(x,h)}\left| \sigma _{i}\right| \sum _{j=1}^{N}\dfrac{\left| \sigma _{i}\cap \xi _{j}\right| + \left| \sigma _{j}\cap \xi _{i}\right| }{\left| \sigma _{i}\right| }\left| x_{i}-x_{j}\right| \\&\le c\left(1+\frac{r_{N}}{h}\right)^{d} \frac{d_{{\varXi }}}{h^{k+1}}. \end{aligned}$$

Next, we estimate \(E_{5}\). By using \(w^{(k)}_h\), we can rewrite \(E_{5}\) as

$$\begin{aligned} E_{5}(x)=\sum _{i\in {\varLambda }(x,\infty )}\sum _{j=1}^N\frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell -k}} \int _{\sigma _{j}\cap \xi _{i}}\left\{ w^{(k)}_h(|x_{j}-x|) -w^{(k)}_h(|y-x|)\right\} \mathrm {d}y. \end{aligned}$$

From (45), we obtain

$$\begin{aligned} |E_{5}(x)|&\le c\sum _{i=1}^N\sum _{j=1}^N\int _{\sigma _{j}\cap \xi _{i}}\left| w^{(k)}_h (|x_{j}-x|)-w^{(k)}_h(|y-x|)\right| \mathrm {d}y\\&\le c\sum _{j=1}^N\int _{\sigma _{j}} \left| w^{(k)}_h(|x_{j}-x|)-w^{(k)}_h(|y-x|)\right| \mathrm {d}y. \end{aligned}$$

By \(\text{ supp }(w^{(k)}_h) = [0,h]\) and \(\sigma _{j}\subset B(x_{j},r_{N})\), we have

$$\begin{aligned} \int _{\sigma _{j}}\left| w^{(k)}_h(|x_{j}-x|)- w^{(k)}_h(|y-x|)\right| \mathrm {d}y= 0,\quad j\not \in {\varLambda }(x,h+r_{N}). \end{aligned}$$
(48)

From (47) and (48), we obtain

$$\begin{aligned} |E_{5}(x)|&\le c\sum _{j\in {\varLambda }_0(x,h+r_{N})} \int _{\sigma _{j}}\left| w^{(k)}_h(|x_{j}-x|)- w^{(k)}_h(|y-x|)\right| \mathrm {d}y\\&\le \dfrac{c}{h^{d+k+1}} \sum _{j\in {\varLambda }_0(x,h+r_{N})}\int _{\sigma _{j}}\left| x_{j}-y\right| \mathrm {d}y\\&\le c\dfrac{r_{N}}{h^{d+k+1}} \sum _{j\in {\varLambda }_0(x,h+r_{N})}\left| \sigma _{j}\right| \\&\le c\left(1+2\frac{r_{N}}{h}\right)^{d} \dfrac{r_{N}}{h^{k+1}}. \end{aligned}$$

Finally, we estimate \(E_{6}\). Using \(w^{(k)}_h\), we can rewrite \(E_{6}\) as

$$\begin{aligned} E_{6}(x)&=\sum _{i\in {\varLambda }(x,\infty )}\sum _{j=1}^{N} \int _{\sigma _{j}\cap \xi _{i}}\left\{ \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell -k}}-\frac{(y-x)^{\alpha }}{|y-x|^{\ell -k}}\right\} w^{(k)}_h(|y-x|)\mathrm {d}y\\&\quad -\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi ^{*}_{i}(x)} \frac{(y-x)^{\alpha }}{|y-x|^{\ell -k}} w^{(k)}_h(|y-x|)\mathrm {d}y, \end{aligned}$$

where \(\xi ^{*}_{i}(x)\) is

$$\begin{aligned} \xi ^{*}_{i}(x) = {\left\{ \begin{array}{ll} \xi _{i},\quad &{}x=x_{i}, \\ \emptyset ,\quad &{}\text{ otherwise }. \end{array}\right. } \end{aligned}$$

For \(\alpha \in {\mathbb {A}}^{d}\), let \(\beta _j\,(j=1,2,\ldots ,|\alpha |)\) be \(d\)-dimensional multi-indices satisfying

$$\begin{aligned} |\beta _j| = 1 \quad \text{ and } \quad \sum _{j=1}^{|\alpha |}\beta _j = \alpha . \end{aligned}$$

Let \(\beta _j^*\,(j=0,1,\ldots ,|\alpha |)\) be \(d\)-dimensional multi-indices defined as

$$\begin{aligned} \beta _j^*:={\left\{ \begin{array}{ll} 0,\quad &{} j=0, \\ \displaystyle \sum _{\ell =1}^{j}\beta _\ell ,\quad &{} j=1,2,\ldots ,|\alpha |. \end{array}\right. } \end{aligned}$$

For all \(y, z\in {\mathbb {R}}^{d}{\setminus }\{0\}\), when \(|\alpha |=\ell -k\), we have

$$\begin{aligned} \left| \dfrac{y^{\alpha }}{|y|^{\ell -k}}- \dfrac{z^{\alpha }}{|z|^{\ell -k}}\right|&\le \sum _{j=0}^{\ell -k-1} \left| \dfrac{y^{\beta _{|\alpha |-j}^*} z^{\beta _{j}^*}}{|y|^{\ell -k-j} |z|^{j}}-\dfrac{y^{\beta _{|\alpha |-j-1}^*} z^{\beta _{j+1}^*}}{|y|^{\ell -k-j-1}|z|^{j+1}}\right| \nonumber \\&\le \sum _{j=0}^{\ell -k-1} \left| \dfrac{y^{\beta _{|\alpha |-j}^*} z^{\beta _{j}^*}-y^{\beta _{|\alpha |-j-1}^*} z^{\beta _{j+1}^*}}{|y|^{\ell -k-j}|z|^{j}}\right| \nonumber \\&\quad +\sum _{j=0}^{\ell -k-1} \left| \dfrac{y^{\beta _{|\alpha |-j-1}^*} z^{\beta _{j+1}^*}}{|y|^{\ell -k-j}| z|^{j}}-\dfrac{y^{\beta _{|\alpha |-j-1}^*} z^{\beta _{j+1}^*}}{|y|^{\ell -k-j-1}|z|^{j+1}}\right| \nonumber \\&\le 2(\ell -k)\dfrac{|y-z|}{|y|}. \end{aligned}$$
(49)

Moreover, from (39) and (49), when \(|\alpha |>\ell -k\), we have

$$\begin{aligned} \left| \dfrac{y^{\alpha }}{|y|^{\ell -k}}- \dfrac{z^{\alpha }}{|z|^{\ell -k}}\right|&\le \left| \dfrac{y^{\alpha }}{|y|^{\ell -k}}- \dfrac{y^{\beta _{|\alpha |-\ell +k}^*} z^{\beta _{\ell -k}^*}}{|z|^{\ell -k}}\right| \\&\quad +\left| \dfrac{y^{\beta _{|\alpha |-\ell +k}^*} z^{\beta _{\ell -k}^*}}{|z|^{\ell -k}} -\dfrac{z^{\alpha }}{|z|^{\ell -k}}\right| \\&\le |y|^{|\alpha |-\ell +k} \left| \dfrac{y^{\beta _{\ell -k}^*}}{|y|^{\ell - k}}-\dfrac{z^{\beta _{\ell -k}^*}}{|z|^{\ell -k}}\right| +\left| y^{\beta _{|\alpha |-\ell +k}^*}- z^{\beta _{|\alpha |-\ell +k}^*}\right| \\&\le 2(\ell -k)|y-z||y|^{|\alpha |-\ell + k-1} \\&\quad +|y-z|\sum _{j=0}^{|\alpha |-\ell +k-1}| y|^{j}|z|^{|\alpha |-\ell +k-1-j}. \end{aligned}$$

Therefore, when \(|\alpha |\ge \ell -k\), we have for all \(y\in {\varOmega }_H{\setminus }\{x\}\) and \(i\in {\varLambda }(x,\infty )\),

$$\begin{aligned} \left| \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell - k}}-\frac{(y-x)^{\alpha }}{| y-x|^{\ell -k}}\right|&\le c\dfrac{|y-x_{i}|}{|y-x|}. \end{aligned}$$
(50)

From (50), we obtain

$$\begin{aligned} |E_{6}(x)|&\le \sum _{i=1}^{N}\sum _{j=1}^{N} \int _{\sigma _{j}\cap \xi _{i}}\left| \frac{(x_{i}-x)^{\alpha }}{|x_{i}-x|^{\ell -k}}-\frac{(y-x)^{\alpha }}{|y-x|^{\ell -k}}\right| \left| w^{(k)}_h (|y-x|)\right| \mathrm {d}y\\&\quad +\left| \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi ^{*}_{i}(x)}\frac{(y-x)^{\alpha }}{|y-x|^{\ell - k}}w^{(k)}_h(|y-x|)\mathrm {d}y\right| \\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N} \int _{\sigma _{j} \cap \xi _{i}}|y-x_{i}|\left| w^{(k+1)}_h(|y-x|)\right| \mathrm {d}y\\&\quad +\left| \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi ^{*}_{i}(x)}\frac{(y-x)^{\alpha }}{|y- x|^{\ell -k}}w^{(k)}_h(|y-x|)\mathrm {d}y\right| . \end{aligned}$$

Because \(|\alpha |\ge \ell -k\), we have

$$\begin{aligned}&\left| \sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j}\cap \xi ^{*}_{i}(x)} \frac{(y-x)^{\alpha }}{|y-x|^{\ell -k}} w^{(k)}_h(|y-x|)\mathrm {d}y\right| \\&\quad \le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi ^{*}_{i}(x)}\left| \frac{(y-x)^{\alpha }}{|y- x|^{\ell -k-1}}\right| \left| w^{(k+1)}_h (|y-x|)\right| \mathrm {d}y\\&\quad \le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi ^{*}_{i}(x)}\left| y-x\right| \left| w^{(k+1)}_h(|y-x|)\right| \mathrm {d}y. \end{aligned}$$

Therefore, we have

$$\begin{aligned} |E_{6}(x)| \le c\sum _{i=1}^{N}\sum _{j=1}^{N} \int _{\sigma _{j}\cap \xi _{i}}|y-x_{i}|\left| w^{(k+1)}_h (|y-x|)\right| \mathrm {d}y. \end{aligned}$$

Because for all \(y\in {\varOmega }_H\),

$$\begin{aligned} \left| w^{(k+1)}_h(|y-x|)\right| =\dfrac{1}{h^{d+k+1}} \left| w^{(k+1)}\left(\dfrac{|y-x|}{h}\right)\right| \le \dfrac{1}{h^{d+k+1}}\Vert w^{(k+1)}\Vert _{C({\mathbb {R}}^{+}_0)}, \end{aligned}$$

by the same procedure as (43), we have

$$\begin{aligned} \sum _{i=1}^{N}\sum _{j=1}^{N} \int _{\sigma _{j}\cap \xi _{i}}|y-x_{i}| \left| w^{(k+1)}_h(|y-x|)\right| \mathrm {d}y\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d} \dfrac{r_{N}+d_{{\varXi }}}{h^{k+1}}. \end{aligned}$$

Therefore, we obtain

$$\begin{aligned} |E_{6}(x)|\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d} \dfrac{r_{N}+d_{{\varXi }}}{h^{k+1}}. \end{aligned}$$

From the estimates of \(E_{4}\), \(E_{5}\), and \(E_{6}\), we obtain

$$\begin{aligned} \Vert I_{\alpha ,\ell }\Vert _{C(\overline{{\varOmega }})} \le c\left(1+2\frac{r_{N}}{h}\right)^{d}\frac{r_{N}+ d_{{\varXi }}}{h^{k+1}}. \end{aligned}$$

Because \({\varXi }\) is arbitrary, we establish (44).

Lemma 4

There exists a positive constant \(c\) independent of \(N\) such that

$$\begin{aligned} \left\| J_{\ell }\right\| _{C(\overline{{\varOmega }})} \le c\left\{ \left(1+2 \frac{r_{N}}{h}\right)^d\dfrac{r_{N}+d_{N}}{h}+h^{\ell }\right\} ,\quad \ell \in {\mathbb {N}}. \end{aligned}$$
(51)

Proof

We arbitrarily fix \(x\in \overline{{\varOmega }}\) and particle volume decomposition \({\varXi }=\{\xi _{i}\mid i=1,2,\ldots ,N\}\), and split \(J_{\ell }\) into

$$\begin{aligned} J_{\ell }(x) = E_{7}(x) + E_{8}(x) + E_{9}(x) + E_{10}(x) \end{aligned}$$

with

$$\begin{aligned} E_{7}(x)&:=J_{\ell }(x)- \sum _{i=1}^{N}\sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| |x_{i}-x|^{\ell }|w_{h}(|x_{j}-x|)|, \\ E_{8}(x)&:=\sum _{i=1}^{N}\sum _{j=1}^{N} |x_{i}-x|^{\ell } \int _{\sigma _{j}\cap \xi _{i}}\left\{ |w_{h}(|x_{j}-x|)|-|w_{h}(|y-x|)|\right\} \mathrm {d}y, \\ E_{9}(x)&:=\sum _{i=1}^{N}\sum _{j=1}^N\int _{\sigma _{j}\cap \xi _{i}}\{|x_{i}-x|^{\ell } - |y-x|^{\ell }\} |w_{h}(|y-x|)| \mathrm {d}y, \\ E_{10}(x)&:=\int _{{\mathbb {R}}^{d}}|y-x|^{\ell }|w_{h}(|y-x|)|\mathrm {d}y. \end{aligned}$$

Then, we estimate \(E_{7}\), \(E_{8}\), \(E_{9}\), and \(E_{10}\).

From (30), we can rewrite \(E_{7}\) as

$$\begin{aligned} E_{7}(x)=\sum _{i=1}^{N}\sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| |x_{i}-x|^{\ell } \{|w_{h}(|x_{i}-x|)|-|w_{h}(|x_{j}-x|)|\}. \end{aligned}$$

For all \(y\in {\varOmega }_H\), we have

$$\begin{aligned} |y-x|^{\ell }\le \text {diam}({\varOmega }_H)^{\ell }. \end{aligned}$$
(52)

From (36) and (52), we obtain

$$\begin{aligned} \left| E_{7}(x)\right|&\le \sum _{i=1}^{N}\sum _{j=1}^{N} \left| \sigma _{j}\cap \xi _{i}\right| |x_{i}-x|^{\ell }\big || w_{h}(|x_{i}-x|)|-|w_{h}(|x_{j}-x|)|\big | \\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\left| \sigma _{j} \cap \xi _{i}\right| \big ||w_{h}(|x_{i}-x|)|-|w_{h}(|x_{j}-x|)|\big | \\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\left| \sigma _{j} \cap \xi _{i}\right| \left| w_{h}(|x_{i}-x|)-w_{h}(|x_{j}-x|)\right| \\&\le c\left(1+\frac{r_{N}}{h}\right)^{d} \frac{d_{{\varXi }}}{h}. \end{aligned}$$

From (38) and (52), we obtain

$$\begin{aligned} |E_{8}(x)|&\le \sum _{i=1}^{N} \sum _{j=1}^{N}|x_{i}-x|^{\ell } \int _{\sigma _{j}\cap \xi _{i}}\big ||w_{h}(|x_{j}-x|)|-|w_{h}(|y-x|)|\big |\mathrm {d}y\\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi _{i}}\big ||w_{h}(|x_{j}-x|)|-|w_{h}(|y-x|)|\big |\mathrm {d}y\\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N}\int _{\sigma _{j} \cap \xi _{i}}\left| w_{h}(|x_{j}-x|)-w_{h}(|y-x|)\right| \mathrm {d}y\\&\le c\left(1+2\frac{r_{N}}{h}\right)^{d}\frac{r_{N}}{h}. \end{aligned}$$

For all \(x_{i}\in {\mathcal {X}}_{N}\) and \(y\in {\varOmega }_H\), we have

$$\begin{aligned} \left| |x_{i}-x|^{\ell } - |y-x|^{\ell }\right|&=\left| |x_{i}-x|-|y-x|\right| \sum _{k=1}^{\ell } |x_{i}-x|^{k-1}|y-x|^{\ell -k}\nonumber \\&\le \ell \,\text {diam}({\varOmega }_H)^{\ell -1}\left| y-x_{i}\right| . \end{aligned}$$
(53)

From (43), (53), and \(h\le H\), we obtain

$$\begin{aligned} |E_{9}(x)|&\le \sum _{i=1}^{N}\sum _{j=1}^N\int _{\sigma _{j} \cap \xi _{i}}\left| |x_{i}-x|^\ell -|y-x|^\ell \right| |w_{h}(|y-x|)| \mathrm {d}y\\&\le c\sum _{i=1}^{N}\sum _{j=1}^{N} \int _{\sigma _{j}\cap \xi _{i}} |y-x_{i}| |w_{h}(|y-x|)|\mathrm {d}y\\&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d}\left(r_{N}+ d_{{\varXi }}\right) \\&\le c\left(1+2\dfrac{r_{N}}{h}\right)^{d}\dfrac{r_{N}+ d_{{\varXi }}}{h}. \end{aligned}$$

From (1), we obtain

$$\begin{aligned} |E_{10}(x)| \le \int _{{\mathbb {R}}^{d}}|y|^{\ell }|w_{h}(|y|)| \mathrm {d}y=h^{\ell }\int _{{\mathbb {R}}^{d}}|y|^{\ell }|w(|y|)|\mathrm {d}y. \end{aligned}$$

From the estimates of \(E_{7}\), \(E_{8}\), \(E_{9}\), and \(E_{10}\), we obtain

$$\begin{aligned} \left\| J_{\ell }\right\| _{C(\overline{{\varOmega }})} \le c\left\{ \left(1+2\frac{r_{N}}{h}\right)^d\dfrac{r_{N}+d_{{\varXi }}}{h}+h^{\ell }\right\} . \end{aligned}$$

Because \({\varXi }\) is arbitrary, we establish (51).

Using the lemmas established above, we now prove Theorem 3.

Proof of Theorem 3

By Lemmas 1, 2, and 4, we have for all \(v\in C^{n+1}(\overline{{\varOmega }}_{H})\)

$$\begin{aligned} \left\| v-{\varPi }_h{} v\right\| _{C(\overline{{\varOmega }})} \le c\left\{ \left(1+2\frac{r_{N}}{h_{N}}\right)^{d}\frac{r_{N}+ d_{N}}{h_{N}}+h_{N}^{n+1}\right\} \left\| v\right\| _{C^{n+1}(\overline{{\varOmega }}_{H})}. \end{aligned}$$
(54)

Moreover, by Lemmas 1, 3, and 4, when \(w\) satisfies Hypothesis 2 with \(k=0\), we have for all \(v\in C^{n+2}(\overline{{\varOmega }}_{H})\)

$$\begin{aligned} \left\| \nabla v-\nabla _h{}v\right\| _{C(\overline{{\varOmega }})} \le c\left\{ \left(1+2\frac{r_{N}}{h_{N}}\right)^{d}\frac{r_{N}+ d_{N}}{h_{N}}+h_{N}^{n+1}\right\} \left\| v\right\| _{C^{n+2}(\overline{{\varOmega }}_{H})}, \end{aligned}$$
(55)

and when \(w\) satisfies Hypothesis 2 with \(k=1\), for all \(v\in C^{n+3}(\overline{{\varOmega }}_{H})\),

$$\begin{aligned} \left\| {\varDelta }v-{\varDelta }_h{}v\right\| _{C(\overline{{\varOmega }})} \le c\left\{ \left(1+2\frac{r_{N}}{h_{N}}\right)^{d}\frac{r_{N}+ d_{N}}{h_{N}^2}+h_{N}^{n+1}\right\} \left\| v\right\| _{C^{n+3}(\overline{{\varOmega }}_{H})}. \end{aligned}$$
(56)

Because the family \(\{({\mathcal {X}}_{N}, {{\mathcal {V}}}_{N}, h_{N})\}_{N\rightarrow \infty }\) is regular, applying (7) to (54), (55), and (56) yields (9), (10), and (11), respectively. This concludes the proof of Theorem 3.

5 Numerical results

Set \({\varOmega }=(0,1)^2\) and \(H=0.1\); then \({\varOmega }_H=(-0.1,1.1)^2\). We compute the truncation errors for the function \(v:{\varOmega }_H\rightarrow {\mathbb {R}}\) defined by \(v(x,y)=\sin (2\pi (x+y))\). The particle distribution \({\mathcal {X}}_{N}\) is set as

$$\begin{aligned} {\mathcal {X}}_{N}= \left\{ \left((i+\eta _{ij}^{(1)})\varDelta x, (j+\eta _{ij}^{(2)}) \varDelta x\right)\in {\varOmega }_H;~i,j\in {\mathbb {Z}}\right\} , \end{aligned}$$

where \(\Delta x\) takes the values \(2^{-5}, 2^{-6}, \ldots , 2^{-12}\) and the \(\eta _{i j}^{(k)}\,(i,j\in {\mathbb {Z}},~k=1,2)\) are random numbers satisfying \(|\eta _{i j}^{(k)}|< 1/4\). The particle distribution \({\mathcal {X}}_{N}\) with \(\Delta x=2^{-5}\) is shown in Fig. 4. The particle volume set \({{\mathcal {V}}}_{N}\) is defined as

$$\begin{aligned} {{\mathcal {V}}}_{N}=\left\{ V_{i} = \dfrac{\left| {\varOmega }_H\right| }{N}\Big |i=1,2,\ldots ,N\right\} . \end{aligned}$$

For \(m=1,3,5\), the influence radius \(h_{N}\) is set as

$$\begin{aligned} h_{N}= 2.6\times 2^{5/m-5}\Delta x^{1/m}. \end{aligned}$$

Note that if \(\Delta x=2^{-5}\), then \(h_{N}=2.6\times 2^{-5}\) for all \(m\). With the discrete parameters above, the covering radius \(r_{N}\) satisfies \(r_{N}\le \sqrt{2}(1+1/4)\Delta x/2\), and the Voronoi deviation \(d_{N}\) satisfies \(d_{N}\le 64(1+\sqrt{2})\Delta x/\pi \). Therefore, the family \(\{({\mathcal {X}}_{N},{{\mathcal {V}}}_{N},h_{N})\}\) is regular with order \(m\).
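For concreteness, the discrete setting above can be reproduced with the short Python sketch below (our own code; the function name and the random seed are ours). It generates the jittered particle distribution \({\mathcal {X}}_{N}\), assigns the uniform particle volumes \(V_{i}=|{\varOmega }_H|/N\), and computes the influence radius \(h_{N}\).

```python
import numpy as np

def particle_setup(dx, m, H=0.1, seed=0):
    """Jittered-grid particles in Omega_H = (-H, 1+H)^2 with uniform volumes
    V_i = |Omega_H|/N and influence radius h_N = 2.6 * 2^(5/m-5) * dx^(1/m)."""
    rng = np.random.default_rng(seed)
    # Index range wide enough to cover Omega_H even after jittering.
    idx = np.arange(int(np.floor(-H / dx)) - 1, int(np.ceil((1 + H) / dx)) + 2)
    I, J = np.meshgrid(idx, idx, indexing="ij")
    eta1, eta2 = rng.uniform(-0.25, 0.25, size=(2,) + I.shape)  # |eta| < 1/4
    X = np.stack([(I + eta1) * dx, (J + eta2) * dx], axis=-1).reshape(-1, 2)
    X = X[np.all((X > -H) & (X < 1 + H), axis=1)]  # keep particles inside Omega_H
    N = len(X)
    V = np.full(N, (1 + 2 * H) ** 2 / N)           # V_i = |Omega_H| / N
    h = 2.6 * 2 ** (5 / m - 5) * dx ** (1 / m)     # influence radius h_N
    return X, V, h
```

Note that \(2^{5/m-5}\Delta x^{1/m}\) reduces to \(2^{-5}\) whenever \(\Delta x=2^{-5}\), so all three choices of \(m\) start from the same influence radius.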

Fig. 4
figure 4

Particle distribution \({\mathcal {X}}_{N}\) with \(\Delta x=2^{-5} \, (N=1{,}521)\). The gray area represents \({\varOmega }\)

For the interpolant, we consider the following three cases of reference weight functions:

$$\begin{aligned} ({\Pi 1})\quad w(r)&:=\dfrac{3}{\pi } {\left\{ \begin{array}{ll} 1-r,&{} 0 \le r< 1, \\ 0,&{} 1 \le r, \end{array}\right. } \\ ({\Pi 2})\quad w(r)&:=\dfrac{40}{7\pi } {\left\{ \begin{array}{ll} 1-6r^2+6r^3, \quad &{} \displaystyle 0 \le r< \frac{1}{2}, \\ 2(1-r)^3, \quad &{} \displaystyle \frac{1}{2} \le r< 1, \\ 0, &{} 1 \le r, \end{array}\right. } \\ ({\Pi 3})\quad w(r)&:=\dfrac{5}{\pi } {\left\{ \begin{array}{ll} (1-r)(2-3r), &{} 0 \le r < 1, \\ 0,&{} 1 \le r. \end{array}\right. } \end{aligned}$$

\(({\Pi 1})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\). \(({\Pi 2})\) is the cubic B-spline commonly used in the SPH method, which belongs to \({\mathcal {W}}\). \(({\Pi 3})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\) that satisfies Hypothesis 1 with \(n=3\).
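These reference weight functions can be cross-checked numerically. The sketch below (our own code) implements \(({\Pi 1})\)–\(({\Pi 3})\) and evaluates the radial moments \(\int _{{\mathbb {R}}^{2}}|y|^{\ell }w(|y|)\,\mathrm {d}y=2\pi \int _{0}^{1}w(r)\,r^{1+\ell }\,\mathrm {d}r\) with the trapezoid rule. Direct integration of the formulas above gives a unit zeroth moment for \(({\Pi 1})\) and \(({\Pi 2})\) and a vanishing second moment for \(({\Pi 3})\), the latter being the kind of moment cancellation required by Hypothesis 1 with \(n=3\) (the precise conditions are stated earlier in the paper).

```python
import numpy as np

def w_pi1(r):  # (Pi1): lowest-order polynomial in W
    r = np.asarray(r, dtype=float)
    return np.where(r < 1, 3 / np.pi * (1 - r), 0.0)

def w_pi2(r):  # (Pi2): cubic B-spline used in the SPH method
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    a, b = r < 0.5, (r >= 0.5) & (r < 1)
    out[a] = 1 - 6 * r[a] ** 2 + 6 * r[a] ** 3
    out[b] = 2 * (1 - r[b]) ** 3
    return 40 / (7 * np.pi) * out

def w_pi3(r):  # (Pi3): sign-changing weight for Hypothesis 1 with n = 3
    r = np.asarray(r, dtype=float)
    return np.where(r < 1, 5 / np.pi * (1 - r) * (2 - 3 * r), 0.0)

def radial_moment(w, ell, n=100001):
    """2*pi * int_0^1 w(r) r^(1+ell) dr = int_{R^2} |y|^ell w(|y|) dy (trapezoid)."""
    r = np.linspace(0.0, 1.0, n)
    f = w(r) * r ** (1 + ell)
    return 2 * np.pi * (r[1] - r[0]) * (f.sum() - 0.5 * (f[0] + f[-1]))
```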

For the approximate gradient operator, we consider the following three cases of reference weight functions:

$$\begin{aligned} ({\nabla 1})\quad w(r)&:=\dfrac{6}{\pi } {\left\{ \begin{array}{ll} r(1-r), \quad &{} 0 \le r< 1, \\ 0,&{} 1 \le r, \end{array}\right. } \\ ({\nabla 2})\quad w(r)&:=\dfrac{40}{7\pi } {\left\{ \begin{array}{ll} 6r^2-9r^3, \quad &{} \displaystyle 0 \le r< \frac{1}{2}, \\ 3r(1-r)^2, \quad &{} \displaystyle \frac{1}{2} \le r< 1, \\ 0, &{} 1 \le r, \end{array}\right. } \\ ({\nabla 3})\quad w(r)&:=\dfrac{15}{2\pi } {\left\{ \begin{array}{ll} r(1-r)(5-7r), &{} 0 \le r < 1, \\ 0,&{} 1 \le r. \end{array}\right. } \end{aligned}$$

\(({\nabla 1})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\) that satisfies Hypothesis 2 with \(k=0\). \(({\nabla 2})\) is chosen so that the approximate gradient operator (3) with \(({\nabla 2})\) coincides with that in the SPH method with the cubic B-spline (see Appendix 1). \(({\nabla 3})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\) that satisfies Hypothesis 1 with \(n=3\) and Hypothesis 2 with \(k=0\).

For the approximate Laplace operator, we consider the following three cases of reference weight functions:

$$\begin{aligned} ({\Delta 1})\quad w(r)&:=\dfrac{10}{\pi } {\left\{ \begin{array}{ll} r^2(1-r), \quad &{} 0 \le r< 1, \\ 0,&{} 1 \le r, \end{array}\right. } \\ ({\Delta 2})\quad w(r)&:=\dfrac{40}{7\pi } {\left\{ \begin{array}{ll} 6r^2-9r^3, \quad &{} \displaystyle 0 \le r< \frac{1}{2}, \\ 3r(1-r)^2, \quad &{} \displaystyle \frac{1}{2} \le r< 1, \\ 0, &{} 1 \le r, \end{array}\right. } \\ ({\Delta 3})\quad w(r)&:=\dfrac{30}{\pi } {\left\{ \begin{array}{ll} r^2(1-r)(3-4r), &{} 0 \le r < 1, \\ 0,&{} 1 \le r. \end{array}\right. } \end{aligned}$$

\(({\Delta 1})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\) that satisfies Hypothesis 2 with \(k=1\). \(({\Delta 2})\) is chosen so that the approximate Laplace operator (4) with \(({\Delta 2})\) coincides with that in the SPH method with the cubic B-spline (see Appendix 1). \(({\Delta 3})\) is the lowest-order polynomial function belonging to \({\mathcal {W}}\) that satisfies Hypothesis 1 with \(n=3\) and Hypothesis 2 with \(k=1\).

The above settings were used to compute the following relative errors:

$$\begin{aligned} \dfrac{\left\| v-{\varPi }_h{} v\right\| _{\ell ^{\infty }({\varOmega })}}{\Vert v\Vert _{C(\overline{{\varOmega }})}},\quad \dfrac{\left\| \nabla v-\nabla _h{} v\right\| _{\ell ^{\infty }({\varOmega })}}{\Vert \nabla v\Vert _{C(\overline{{\varOmega }})}},\quad \dfrac{\left\| {\varDelta }v-{\varDelta }_h{} v\right\| _{\ell ^{\infty }({\varOmega })}}{\Vert {\varDelta }v\Vert _{C(\overline{{\varOmega }})}}. \end{aligned}$$

Here, the discrete norm \(\left\| \cdot \right\| _{\ell ^{\infty }({\varOmega })}\) is defined as

$$\begin{aligned} \left\| v\right\| _{\ell ^{\infty }({\varOmega })} :=\max _{i\in {\varLambda }({\varOmega })} |v(x_{i})|. \end{aligned}$$
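As an illustration, the relative error of the interpolant can be computed as in the following sketch (our own code; it assumes the interpolant has the standard generalized-particle form \({\varPi }_h v(x)=\sum _{i=1}^{N}V_{i}\,v(x_{i})\,w_{h}(|x_{i}-x|)\) with \(w_{h}(r)=h^{-2}w(r/h)\), uses the \(({\Pi 2})\) reference weight, and approximates the supremum norms by maxima over the particles).

```python
import numpy as np

def w_cubic(r):  # (Pi2) reference weight: cubic B-spline in 2-D
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    a, b = r < 0.5, (r >= 0.5) & (r < 1)
    out[a] = 1 - 6 * r[a] ** 2 + 6 * r[a] ** 3
    out[b] = 2 * (1 - r[b]) ** 3
    return 40 / (7 * np.pi) * out

def interpolant(X, V, h, v, points):
    """Pi_h v(x) = sum_i V_i v(x_i) w_h(|x_i - x|), with w_h(r) = h^-2 w(r/h).
    (Assumed standard form of the generalized particle interpolant.)"""
    dist = np.linalg.norm(points[:, None, :] - X[None, :, :], axis=2)
    return (V * v(X) * w_cubic(dist / h) / h ** 2).sum(axis=1)

def relative_error(X, V, h, v):
    """||v - Pi_h v||_linf(Omega) / ||v||_C, with norms taken over particles."""
    P = X[np.all((X > 0) & (X < 1), axis=1)]      # particles inside Omega
    err = np.abs(v(P) - interpolant(X, V, h, v, P))
    return err.max() / np.abs(v(X)).max()
```

The relative errors of the approximate gradient and Laplace operators can be computed analogously from (3) and (4).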

Figure 5 shows the relative errors of (a) the interpolant \({\varPi }_h{}\), (b) the approximate gradient operator \(\nabla _h{}\), and (c) the approximate Laplace operator \({\varDelta }_h{}\) versus the influence radius \(h_{N}\) for regular orders \(m=1, 3, 5\). In Fig. 5, the slopes of the triangles indicate the theoretical convergence rates given by Theorem 3. Table 1 lists the numerical convergence rates obtained from the cases \(\varDelta x=2^{-11}\) and \(2^{-12}\) together with the theoretical rates from Theorem 3. In the case \(m=1\), the settings do not satisfy the assumptions of Theorem 3, and indeed the numerical results show no convergence. In contrast, the settings for \(m=3\) and \(5\) do satisfy the assumptions of Theorem 3, and the corresponding numerical results converge. Moreover, as predicted by Theorem 3, the approximate operators whose reference weight functions satisfy Hypothesis 1 with \(n=3\) exhibit higher convergence orders in the case \(m=5\).

Fig. 5
figure 5

Graphs of the relative errors of a the interpolant, b approximate gradient operator, and c approximate Laplace operator versus the influence radius with regular orders \(m=1, 3, 5\)

Table 1 Numerical and theoretical convergence rates of (a) the interpolant, (b) approximate gradient operator, and (c) approximate Laplace operator with regular orders \(m=1, 3, 5\). The numerical convergence rates were obtained for the cases of \(\varDelta x=2^{-11}\) and \(2^{-12}\)

6 Conclusions

We analyzed truncation errors in a generalized particle method, a wide class of particle methods that includes commonly used schemes such as the SPH and MPS methods. In our analysis, we introduced two indicators: the covering radius, which represents the maximum radius of the Voronoi regions associated with the particle distribution, and the Voronoi deviation, which measures the deviation of the particle volumes from the Voronoi volumes. Using these indicators, we introduced a regularity condition on a family of discrete parameters, consisting of the particle distribution, particle volume set, and influence radius associated with the number of particles. Moreover, we introduced two hypotheses on the reference weight functions. Under the regularity condition and these hypotheses, we established truncation error estimates in the continuous norm. The convergence rates depend on the regular order and on the order of the reference weight functions appearing in the hypotheses. Moreover, because these conditions can be verified by direct computation, we were able to confirm that the numerical convergence orders agree well with the theoretical ones.

In a forthcoming paper, we plan to establish error estimates of the generalized particle method for the Poisson and heat equations.