1 Introduction

Jackson’s famous estimate of the error of the best polynomial approximation for a fixed function is one of the main theorems in constructive function theory. According to a multivariate version of the classical Jackson theorem (see, e.g., [10]), if I is a compact cube in \(\mathbb {R}^N\) and \(f : I \rightarrow \mathbb {R}\) is a \({\mathcal {C}}^{k+1}\) function on I, then

$$\begin{aligned} n^k\mathrm{dist}_{I}(f,{\mathcal {P}}_n) \le C_k \sum _{j=1}^{N} \sup _{x \in I} \left| \frac{\partial ^{k+1} f }{\partial x_j^{k+1}} (x) \right| , \end{aligned}$$

where the constant \(C_k\) depends only on N, I and k. As usual, \( \mathrm{dist}_{I}(f,{\mathcal {P}}_n) =\inf \{\Vert f-p\Vert _{I} : p\in \mathcal {P}_n\}\), \({\mathcal {P}}_n\) is the space of all algebraic polynomials of degree at most n and \(\Vert \cdot \Vert _{I}\) is the sup norm on I.
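The rate in Jackson’s estimate is easy to observe numerically. The following minimal sketch (assuming \(N=1\), \(I=[-1,1]\) and a smooth test function chosen only for illustration) uses the Chebyshev interpolant of degree n, which is a particular element of \(\mathcal {P}_n\); its sup-norm error is therefore an upper bound for \(\mathrm{dist}_{I}(f,\mathcal {P}_n)\), and already this upper bound decays faster than any power of 1/n.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Minimal sketch: N = 1, I = [-1, 1], f a smooth (hypothetical) test function.
f = lambda x: np.exp(np.sin(3 * x))
xs = np.linspace(-1.0, 1.0, 20001)        # fine grid approximating the sup norm on I

for n in (4, 8, 16, 32, 64):
    p = C.Chebyshev.interpolate(f, n)     # some element of P_n (not the best one)
    err = np.max(np.abs(f(xs) - p(xs)))   # upper bound for dist_I(f, P_n)
    print(n, err, n**4 * err)             # n^k * dist_I(f, P_n) -> 0 (here k = 4)
```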

As an application of Jackson’s theorem, one can prove classical results such as the well-known Bernstein theorem (see, e.g., [5, 6]), which yields a characterization of \(C^\infty \) functions:

A function f defined on I can be extended to a \(C^\infty \) function on \(\mathbb {R}^N\) if and only if

$$\begin{aligned} \lim _{n \rightarrow \infty } n^k \mathrm{dist}_{I} (f,\mathcal {P}_n) \ = \ 0 \ \ \ \ \ for\,all\,positive\,integers~k. \end{aligned}$$

A natural question arises: for which compact subsets E of \(\mathbb {R}^N\) does the following Bernstein property hold?

For every function \(f:E \rightarrow \mathbb {R}\), if the sequence \(\{\mathrm{dist}_E(f, \mathcal {P}_n)\}_n\) is rapidly decreasing (i.e., \(\lim \limits _{n\rightarrow \infty } n^k \mathrm{dist}_E(f, \mathcal {P}_n) =0\) for all \(k>0\)), then there exists a \(C^\infty \) function \(F : \mathbb {R}^N \rightarrow \mathbb {R}\) such that \(F=f\) on E.

These matters were considered by Pleśniak in 1990 (see [8, 9] for earlier results). He proved that the Markov inequality

$$\begin{aligned} \Vert D^{\alpha }P\Vert _E \le M (\deg P)^{r|\alpha |} \Vert P \Vert _E, \quad \alpha \in \mathbb {Z}^N_{+}, \end{aligned}$$

and the Bernstein property are equivalent for \(C^{\infty }\) determining sets. Our goal is to find a generalization of this fact for sets that are not \(C^{\infty }\) determining.
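In the model case \(E=[-1,1]\) the Markov inequality holds with exponent \(r=2\) and is attained by the Chebyshev polynomials. The following small sketch (a numerical illustration only, not part of the argument) checks this extremal behaviour.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

xs = np.linspace(-1.0, 1.0, 20001)
for n in (3, 6, 12, 24):
    Tn = C.Chebyshev.basis(n)                                    # Chebyshev polynomial T_n
    ratio = np.max(np.abs(Tn.deriv()(xs))) / np.max(np.abs(Tn(xs)))
    print(n, ratio, n**2)                                        # ratio equals n^2, attained at x = +/-1
```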

2 Markov Inequality

Our intention in this section is to study an extension of the Markov inequality to compact subsets of an algebraic set. We will consider nonempty sets of the form

$$\begin{aligned} V=\left\{ (x_1,\ldots ,x_N) \in \mathbb {R}^N : x_k^d=Q_0(y)+Q_1(y)x_k + \cdots + Q_{d-1}(y)x_k^{d-1}\right\} , \end{aligned}$$
(1)

where \(Q_i\) are polynomials for every \(0\le i \le d-1\) and the variable \(y=(x_1,\ldots ,x_{k-1},x_{k+1},\ldots ,x_N) \in \mathbb {R}^{N-1}\). One can verify that every polynomial P from the space \(\mathcal {P}(x_1,\ldots ,x_N)\) coincides on V with some polynomial from \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\) (see [3]). Here \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\) denotes the subspace of \(\mathcal {P}(x_1,\ldots ,x_N)\) consisting of all polynomials of the form \(\sum _{i=0}^{d-1} G_i(y)x_k^i\) with \(G_i \in \mathcal {P}(y)\). Hence

$$\begin{aligned} \mathcal {P}(V):=\left\{ P_{|V}, P \in \mathcal {P}(x_1,\ldots ,x_N) \right\} =\left\{ P_{|V}, P \in \mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k) \right\} . \end{aligned}$$
(2)
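The reduction behind (2) is division with remainder with respect to \(x_k\), using that the defining relation of V is monic of degree d in \(x_k\). Below is a small symbolic sketch for the surface \(V=\{y^3=(1-x^2)y\}\) of Example 1 (so \(d=3\) and \(x_k=y\)); the polynomial P is an arbitrary choice made only for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y')
relation = y**3 - (1 - x**2) * y         # vanishes identically on V

P = x**2 * y**5 + 3 * y**4 - x * y + 7   # an arbitrary polynomial in P(x, y)
# Divide P by the relation, viewed as a monic polynomial in y; the remainder
# has degree <= 2 in y and coincides with P on V.
_, R = sp.div(P, relation, y)
print(sp.expand(R))                      # an element of P(x) ⊗ P_2(y)
```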

Considerations in [2, 3] suggest the following definition:

(Markov set and Markov inequality on F) Let \(\mathbf{F} \) be an infinite-dimensional subspace of \(\mathcal {P}(x_1,\ldots ,x_N)\) such that \(P \in \mathbf{F} \) implies \(D^{\alpha }P \in \mathbf{F} \) for all \(\alpha \in \mathbb {Z}^N_{+}\). A compact set \(\emptyset \ne E \subset \mathbb {R}^N\) is said to be an \(\mathbf{F} \)-Markov set if there exist \(M, m > 0\) such that

$$\begin{aligned} \Vert D^{\alpha } P\Vert _E \le M^{|\alpha |} (\deg P )^{m|\alpha |} \Vert P\Vert _E, \quad P \in \mathbf{F} , \quad \alpha \in \mathbb {Z}^N_{+}. \end{aligned}$$
(3)

This inequality is called an \(\mathbf{F} \)-Markov inequality for E.

Note that, as in the classical case, it is enough to check (3) for \(|\alpha | = 1\): the general case follows by iterating the first-order inequality, since \(\deg D^{\beta }P \le \deg P\) for every \(\beta \in \mathbb {Z}^N_{+}\).

It is clear that if \(P \in \mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\), then \(D^{\alpha }P \in \mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\) for all \(\alpha \in \mathbb {Z}^N_{+}\). Now we give an example to demonstrate that the above definition makes sense.

Example 1

Let \(V=\{y^3=(1-x^2)y\} \subset \mathbb {R}^2\). The compact set \(E =\{(x,y) \in V : x \in [-1,1]\} \) is a \(\mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\)-Markov set.

Proof

We first recall three classical inequalities. Markov’s inequality: for any polynomial P,

$$\begin{aligned} \Vert P'\Vert _{[-1,1]} \le (\deg P)^2 \Vert P\Vert _{[-1,1]}. \end{aligned}$$
(4)

Bernstein’s inequality: If \(T_n\) is a trigonometric polynomial of degree at most n, then

$$\begin{aligned} \Vert T_n'\Vert \le n\Vert T_n\Vert , \end{aligned}$$
(5)

where \(\Vert \cdot \Vert \) denotes the supremum norm. If \(P_n\) is an algebraic polynomial of degree at most n, then \(T_n(t) =P_n(\cos t)\) is a trigonometric polynomial of degree at most n, and (5) yields

$$\begin{aligned} |P_n'(x)| \le \frac{n}{\sqrt{1-x^2}} \Vert P_n\Vert _{[-1,1]}, \quad x \in (-1,1), \end{aligned}$$
(6)

which is also known as the Bernstein inequality. The classical inequality of Schur states that

$$\begin{aligned} \Vert P\Vert _{[-1,1]} \le (\deg P+1) \left\| P(x)\sqrt{1-x^2}\right\| _{[-1,1]} \end{aligned}$$
(7)

holds for every polynomial P. This can be generalized to weights \((1-x^2)^{\alpha }\) with \(\alpha \ge 1/2\) (see [1], Lemma 2.4, p. 73):

$$\begin{aligned} \Vert P\Vert _{[-1,1]} \le n^{2\alpha } \left\| P(x)(1-x^2)^{\alpha }\right\| _{[-1,1]}, \quad P \in \mathcal {P}_{n-1}. \end{aligned}$$
(8)

Combining the above inequality and Markov’s inequality (4), we obtain the following estimate: writing \(Q(x)=P(x)(1-x^2)\), we have \(P'(x)(1-x^2)=Q'(x)+2xP(x)\), so (4) applied to \(Q \in \mathcal {P}_{n+2}\) together with (8) for \(\alpha =1\) gives

$$\begin{aligned} \left\| P'(x)(1-x^2)\right\| _{[-1,1]} \le 3 (n+2)^{2} \left\| P(x)(1-x^2)\right\| _{[-1,1]}, \quad P \in \mathcal {P}_{n}. \end{aligned}$$
(9)

Let \(P \in \mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\). Then \(P(x,y)=G_0(x) + G_1(x)y + G_2(x)y^2\) for some \(G_i \in \mathcal {P}(x)\) (\(i=0,1,2\)). Now

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E \le&\left\| G'_0(x)\right\| _E + \left\| G'_1(x)y + G'_2(x)y^2\right\| _E \\=&\left\| G'_0(x)\right\| _{[-1,1]} + \left\| G'_1(x)y + G'_2(x)y^2\right\| _{E'}, \end{aligned}$$

where \(E'=\{(x,y) \in \mathbb {R}^2 : y^2=1-x^2\}\). Since \((x,y) \in E' \Longleftrightarrow (x,-y) \in E'\), we have

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E \le&\left\| G'_0(x)\right\| _{[-1,1]} + \left\| G'_1(x)\sqrt{1-x^2}\right\| _{[-1,1]} \\&+\left\| G'_2(x)(1-x^2)\right\| _{[-1,1]}. \end{aligned}$$

By (4), (6), and (9), respectively, we get

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E\le & {} (\deg G_0)^2\left\| G_0(x)\right\| _{[-1,1]} + \deg G_1 \left\| G_1(x)\right\| _{[-1,1]} \\&+\, 3(2 + \deg G_2)^2 \left\| G_2(x)(1-x^2)\right\| _{[-1,1]}. \end{aligned}$$

The inequality (7) yields the following

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E \le&(\deg G_0)^2\left\| G_0(x)\right\| _{[-1,1]} \\&+\, (\deg G_1+1)^2\left\| G_1(x)\sqrt{1-x^2}\right\| _{[-1,1]} \\&+\, 3(2+\deg G_2)^2\left\| G_2(x)(1-x^2)\right\| _{[-1,1]}. \end{aligned}$$

Using again the fact that \((x,y) \in E' \Longleftrightarrow (x,-y) \in E'\), we obtain

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E \le 5(\deg P)^2 \left( \left\| G_0(x)\right\| _{[-1,1]} + \left\| G_1(x)y+ G_2(x)y^2\right\| _{E'}\right) . \end{aligned}$$

Now if \(-1 \le \xi \le 1\), then \((\xi ,0) \in E\) and \(G_0(\xi )=P(\xi ,0)\). Hence

$$\begin{aligned} \Vert G_0(x)\Vert _{[-1,1]} \le \Vert P\Vert _E. \end{aligned}$$

This, together with the triangle inequality, implies

$$\begin{aligned} \left\| D^{(1,0)} P(x,y) \right\| _E \le 15(\deg P)^2\left\| P\right\| _E. \end{aligned}$$

Next, we consider the case of \(D^{(0,1)}\). It is clear that

$$\begin{aligned} \left\| D^{(0,1)} P(x,y) \right\| _E \le \left\| G_1(x)\right\| _E + 2\left\| G_2(x)y\right\| _E \le \left\| G_1(x)\right\| _E + 2\left\| G_2(x)\right\| _E. \end{aligned}$$

Then, using (7) and (8), we have

$$\begin{aligned} \left\| D^{(0,1)} P(x,y) \right\| _E \le&(\deg G_1+1)\left\| G_1(x)\sqrt{1-x^2}\right\| _{[-1,1]} \\&+\, 2(1+\deg G_2)^2\left\| G_2(x)(1-x^2)\right\| _{[-1,1]}. \end{aligned}$$

An argument similar to that in the previous case now gives

$$\begin{aligned} \left\| D^{(0,1)} P(x,y) \right\| _E \le 6(\deg P)^2\left\| P\right\| _E. \end{aligned}$$

That is what we wished to prove. \(\square \)
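As a plausibility check (not part of the proof), one can sample the set E of Example 1 and compare \(\Vert D^{(1,0)} P\Vert _E\) with the bound \(15(\deg P)^2\Vert P\Vert _E\) for randomly generated \(P \in \mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\). The grid supremum below only approximates the true sup norm, and the degree and coefficients are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# E = {y = 0, x in [-1, 1]} ∪ {x^2 + y^2 = 1}, sampled on a grid
xs = np.linspace(-1.0, 1.0, 4001)
ts = np.linspace(0.0, 2 * np.pi, 4001)
Ex = np.concatenate([xs, np.cos(ts)])
Ey = np.concatenate([np.zeros_like(xs), np.sin(ts)])

def P(G, x, y):   # P(x, y) = G0(x) + G1(x) y + G2(x) y^2
    return sum(np.polyval(G[i], x) * y**i for i in range(3))

def Px(G, x, y):  # D^{(1,0)} P
    return sum(np.polyval(np.polyder(G[i]), x) * y**i for i in range(3))

for trial in range(5):
    G = [rng.standard_normal(9) for _ in range(3)]      # deg G_i <= 8, so deg P <= 10
    ratio = np.max(np.abs(Px(G, Ex, Ey))) / np.max(np.abs(P(G, Ex, Ey)))
    print(trial, round(ratio, 2), 15 * 10**2)           # observed ratio stays well below 1500
```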

The next example shows that the \(\mathbf{F} \)-Markov inequality depends not only on the set but also on the family \(\mathbf{F} \).

Example 2

Consider the set \(V=\{y^3=1-x^2\} \subset \mathbb {R}^2\). The compact set \(E=\{(x,y) \in V : x \in [-\frac{1}{2},-\frac{1}{4}] \cup [\frac{1}{4},\frac{1}{2}]\}\) is a \(\mathcal {P}(y) \otimes \mathcal {P}_{1}(x)\)-Markov set, but it is not a \(\mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\)-Markov set.

Proof

The fact that E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{1}(x)\)-Markov set follows from [2, 4], so we only need to show that E is not a \(\mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\)-Markov set. Suppose, seeking a contradiction, that it is, and consider the sequence of polynomials

$$\begin{aligned} P_n(x,y)=y-\sum _{k=0}^{n} \frac{\Gamma (k-1/3)}{\Gamma (-1/3) k! } x^{2k}. \end{aligned}$$

It is well known that

$$\begin{aligned} \root 3 \of {1-x^2}=\sum _{k=0}^{\infty } \frac{\Gamma (k-1/3)}{\Gamma (-1/3) k! } x^{2k} \quad \mathrm{for} \quad |x| < 1. \end{aligned}$$

Hence

$$\begin{aligned} \Vert P_n(x,y)\Vert _E= & {} \left\| \sum _{k=n+1}^{\infty } \frac{\Gamma (k-1/3)}{\Gamma (-1/3) k! } x^{2k}\right\| _{[-\frac{1}{2},-\frac{1}{4}] \cup [\frac{1}{4},\frac{1}{2}]}\\ {}= & {} \left\| \frac{x^{2+2 n} \Gamma \left( \frac{1}{3} (2+3 n)\right) F\left( 1,\frac{2}{3}+n,2+n,x^2\right) }{\Gamma \left( -\frac{1}{3}\right) \Gamma (2+n)} \right\| _{[-\frac{1}{2},-\frac{1}{4}] \cup [\frac{1}{4},\frac{1}{2}]}, \end{aligned}$$

where F is the hypergeometric function defined for \(|z| < 1\) by the power series

$$\begin{aligned} F(a,b;c;z)=\sum _{\iota =0}^{\infty } \frac{(a)_\iota (b)_\iota }{(c)_\iota } \frac{z^\iota }{\iota !}. \end{aligned}$$

Here \((q)_\iota \) is the (rising) Pochhammer symbol. If \(x \in [0,1]\), then \(F\left( 1,\frac{2}{3}+n,2+n,x^2\right) \) is an increasing function of x, since its Taylor coefficients are all positive. Therefore, by \(F(a,b;c;1)=\frac{\Gamma (c) \Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}\) and \(z\Gamma (z)=\Gamma (z+1)\), we have

$$\begin{aligned} F\left( 1,\frac{2}{3}+n,2+n,x^2\right)\le & {} F\left( 1,\frac{2}{3}+n,2+n,1\right) =\frac{ \Gamma (2+n) \Gamma \left( \frac{1}{3}\right) }{\Gamma \left( \frac{4}{3}\right) \Gamma (n+1)}\\= & {} 3(1+n). \end{aligned}$$

If we recall that \(\lim _{n\rightarrow \infty } \frac{\Gamma (n+\alpha )}{\Gamma (n)n^{\alpha }}=1\), then

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{ 3\Gamma \left( \frac{2}{3} + n\right) (1+n)}{4 \Gamma \left( -\frac{1}{3}\right) \Gamma (2+n)}=0. \end{aligned}$$

We may thus conclude that there exists a constant \(C>0\) (independent of n) for which

$$\begin{aligned} \Vert P_n(x,y)\Vert _E \le C 4^{-n}. \end{aligned}$$

Consequently, for every \(r>0\),

$$\begin{aligned} \lim _{n\rightarrow \infty } n^r \Vert P_n(x,y)\Vert _E=0. \end{aligned}$$

On the other hand, \(D^{(0,1)}P_n \equiv 1\), so the assumed \(\mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\)-Markov inequality would give \(1=\Vert D^{(0,1)}P_n\Vert _E \le M (\deg P_n)^{m}\Vert P_n\Vert _E \rightarrow 0\). This gives a contradiction, and the result is established. \(\square \)
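The decay used above is easy to reproduce numerically. The sketch below (an illustration only) evaluates \(\Vert P_n\Vert _E\) on a grid; by the symmetry \((x,y) \in E \Leftrightarrow (-x,y) \in E\) noted in Remark 1, it suffices to sample \(x \in [\frac{1}{4},\frac{1}{2}]\), where \(y=(1-x^2)^{1/3}\).

```python
import numpy as np

xs = np.linspace(0.25, 0.5, 2001)
y = (1.0 - xs**2) ** (1.0 / 3.0)

# Partial sums of the binomial series of (1 - x^2)^{1/3}; the coefficients
# c_k = Gamma(k - 1/3) / (Gamma(-1/3) k!) satisfy c_k = c_{k-1} (k - 4/3) / k.
c, S = 1.0, np.ones_like(xs)
for n in range(1, 21):
    c *= (n - 4.0 / 3.0) / n
    S = S + c * xs ** (2 * n)
    sup_Pn = np.max(np.abs(y - S))        # approximates ||P_n||_E
    # D^{(0,1)} P_n = 1 everywhere, so a P(x) ⊗ P_2(y)-Markov inequality
    # would force 1 <= M (2n)^m ||P_n||_E, which fails since ||P_n||_E ~ 4^{-n}:
    print(n, sup_Pn, 4.0**n * sup_Pn)     # 4^n ||P_n||_E stays bounded
```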

Remark 1

Note that \((x,y) \in E \Longleftrightarrow (-x,y) \in E\). On the other hand, if \((x,y) \in E\), then \((x,-y) \notin E\). This is one of the reasons why the set E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{1}(x)\)-Markov set but not a \(\mathcal {P}(x) \otimes \mathcal {P}_{2}(y)\)-Markov set.

Example 1 illustrates a more general idea.

Example 3

Combining the methods used in [2] with the method of Example 1, one can provide other examples of \(\mathcal {P}(y) \otimes \mathcal {P}_{2}(x_k)\)-Markov sets by considering algebraic sets of the form

$$\begin{aligned} V=\{ (x_1,\ldots ,x_N) \in \mathbb {R}^N : x_k^3=Q(y)x_k\}, \end{aligned}$$

where \(Q \in \mathcal {P}(y)\) and \(y=(x_1,\ldots ,x_{k-1},x_{k+1},\ldots ,x_N) \in \mathbb {R}^{N-1}\).

3 \(C^{\infty }\) Functions

First we introduce a subspace of \(C^{\infty }(\mathbb {R}^N)\) related to the algebraic set V defined by (1). We define

$$\begin{aligned} C^{\infty }_V(\mathbb {R}^N):= & {} \left\{ f \in C(\mathbb {R}^N) : \forall _{r>0} \lim _{n\rightarrow \infty } n^r \mathrm{dist}_I\left( f,\mathcal {P}_n(y) \otimes \mathcal {P}_{d-1}(x_k)\right) =0 \right. \nonumber \\&\qquad \left. \mathrm{for \ every \ compact \ cube } \ I \ \mathrm{in } \ \mathbb {R}^N \right\} . \end{aligned}$$
(10)

Since every cube I is a Markov set, Pleśniak’s theorem (see [9]) gives \(C^{\infty }_V(\mathbb {R}^N) \subset C^{\infty }(\mathbb {R}^N)\). It should be noted that Pleśniak’s result, together with the Jackson theorem, implies

$$\begin{aligned} C^{\infty }(\mathbb {R}^N)= & {} \left\{ f \in C(\mathbb {R}^N) : \forall _{r>0} \lim _{n\rightarrow \infty } n^r \mathrm{dist}_I\left( f,\mathcal {P}_n(x_1,\ldots ,x_N)\right) =0 \right. \\&\left. \mathrm{for\ every \ compact\ cube } \ I \ \mathrm{in } \ \mathbb {R}^N\right\} . \end{aligned}$$

We say that f is a \(C^{\infty }_V\) function on a compact subset E of V if there exists a function \(\tilde{f} \in C^{\infty }_V(\mathbb {R}^N)\) with \(\tilde{f}_{|E}=f\). We denote by \(C^{\infty }_V(E)\) the space of such functions. Let \(\tau _J\) be the topology on \(C^{\infty }_V(E)\) determined by the seminorms \(\delta _{-1}(f):=\Vert f\Vert _E\), \(\delta _0(f):=\mathrm{dist}_E(f,\mathcal {P}_0(y) \otimes \mathcal {P}_{d-1}(x_k))\) and

$$\begin{aligned} \delta _\nu (f):=\sup _{l \ge 1} l^\nu \mathrm{dist}_E(f,\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k)) \end{aligned}$$

for \(\nu =1,2,\ldots \) (this idea comes from Zerener’s work [11]). The fact that the \(\delta _\nu \) are seminorms on \(C^{\infty }_V(E)\) follows from the definition of the space \(C^{\infty }_V(\mathbb {R}^N)\). It should be noted that this topology need not be complete.
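To make the seminorms concrete, the following rough numerical sketch estimates them for the set E of Example 1 (so \(d=3\) and \(x_k=y\)) and a hypothetical test function \(f(x,y)=e^x y\). A discrete least-squares fit is merely some element of \(\mathcal {P}_l(x) \otimes \mathcal {P}_{2}(y)\), so its grid error only gives an upper bound for \(\mathrm{dist}_E(f,\mathcal {P}_l(x) \otimes \mathcal {P}_{2}(y))\), and the supremum over l is truncated.

```python
import numpy as np

# Sample E = {y = 0, x in [-1, 1]} ∪ {x^2 + y^2 = 1}
ts = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
xs = np.concatenate([np.cos(ts), np.linspace(-1.0, 1.0, 1000)])
ys = np.concatenate([np.sin(ts), np.zeros(1000)])
f = np.exp(xs) * ys

def dist_upper(l):
    # least-squares fit in the basis x^j y^i (j <= l, i <= 2); its sup error on
    # the sample is an upper-bound proxy for dist_E(f, P_l(x) ⊗ P_2(y))
    A = np.stack([xs**j * ys**i for i in range(3) for j in range(l + 1)], axis=1)
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.max(np.abs(A @ coef - f))

d = [dist_upper(l) for l in range(1, 13)]
for nu in (1, 2, 4):
    print(nu, max(l**nu * e for l, e in enumerate(d, start=1)))   # rough estimate of delta_nu(f)
```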

The natural topology \(\tau _0\) on the set \(C^{\infty }(\mathbb {R}^N)\) is determined by the seminorms \(|\cdot |^\nu _K\), where for each compact set K in \(\mathbb {R}^N\) and each \(\nu =0,1,\ldots \),

$$\begin{aligned} |f|^\nu _K:=\max _{|\alpha | \le \nu } \Vert D^{\alpha } f\Vert _K. \end{aligned}$$

Accordingly, we also consider the topology \(\tau _Q\) on \(C^{\infty }_V(E)\) determined by the seminorms

$$\begin{aligned} q_{K,\nu }(f):=\inf \left\{ |\tilde{f}|^\nu _K : \tilde{f} \in C^{\infty }_V(\mathbb {R}^N), \,\tilde{f}_{|E}=f \right\} . \end{aligned}$$

Then \(\tau _Q\) coincides with the quotient topology of the space \(C^{\infty }_V(\mathbb {R}^N)/I(E)\), where \(C^{\infty }_V(\mathbb {R}^N)\) is considered with the natural topology \(\tau _0\) and \(I(E):=\{f \in C^{\infty }_V(\mathbb {R}^N) : f_{|E}=0\}\). Notice that the space \(( C^{\infty }_V(\mathbb {R}^N), \tau _0)\) is a closed subspace of the complete space \(( C^{\infty }(\mathbb {R}^N), \tau _0)\). Therefore, the space \(( C^{\infty }_V(\mathbb {R}^N), \tau _0)\) is also complete. In view of the fact that I(E) is a closed subspace of \(( C^{\infty }_V(\mathbb {R}^N), \tau _0)\), the quotient space \(C^{\infty }_V(\mathbb {R}^N)/I(E)\) is complete. Hence \((C^{\infty }_V(E), \tau _Q)\) is a Fréchet space. To prove the main result, we will need the following lemma (see, e.g., [7], 1.4.2).

Lemma 1

There are positive constants \(C_{\alpha }\) depending only on \(\alpha \in \mathbb {Z}^N_{+}\) such that for each compact set K in \(\mathbb {R}^N\) and each \(\epsilon > 0\), one can find a \(C^{\infty }\) function h on \(\mathbb {R}^N\) satisfying \(0\le h \le 1\) on \(\mathbb {R}^N\), \(h=1\) in a neighborhood of K, \(h(x)=0\) if \(\mathrm{dist}(x,K) > \epsilon \), and for all \(x \in \mathbb {R}^N\) and \(\alpha \in \mathbb {Z}^N_{+}\), \(|D^{\alpha } h(x)| \le C_{\alpha } \epsilon ^{-|\alpha |}\).
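For orientation, here is a one-dimensional sketch of such a cutoff under the simplifying assumption that \(K=[a,b]\) is an interval: h is obtained by composing a smooth step with \(\mathrm{dist}(\cdot ,K)/\epsilon \), so each differentiation of h produces a factor \(\epsilon ^{-1}\), in agreement with the bound \(|D^{\alpha }h| \le C_{\alpha }\epsilon ^{-|\alpha |}\) (the general construction in [7] works for arbitrary compact \(K \subset \mathbb {R}^N\)).

```python
import numpy as np

def psi(t):
    # smooth step: psi = 1 for t <= 1/4 and psi = 0 for t >= 3/4
    t = np.clip((t - 0.25) / 0.5, 0.0, 1.0)
    g = lambda u: np.where(u > 0, np.exp(-1.0 / np.maximum(u, 1e-12)), 0.0)
    return g(1.0 - t) / (g(1.0 - t) + g(t))

def h(x, a=-1.0, b=1.0, eps=0.1):
    # psi is constant near 0, so the kink of dist(., K) at a and b is harmless
    dist = np.maximum.reduce([a - x, x - b, np.zeros_like(x)])   # dist(x, [a, b])
    return psi(dist / eps)

x = np.linspace(-1.2, 1.2, 13)
print(np.round(h(x), 3))   # equals 1 on K and vanishes at distance > eps from K
```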

4 Main Result

Before stating the main result, we prove the following lemma.

Lemma 2

Let E be a nonempty compact subset of \(\mathbb {R}^N\) and define

$$\begin{aligned} \pi (E)=\left\{ (x_1,\ldots ,x_{k-1},x_{k+1},\ldots ,x_N) \in \mathbb {R}^{N-1} : (x_1,\ldots ,x_N) \in E \ \mathrm{for \ some} \ x_k \in \mathbb {R}\right\} . \end{aligned}$$

If E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov set (with constants M and m), then \(\pi (E)\) is a Markov set (as a subset of \(\mathbb {R}^{N-1}\)), and there exists a constant \(C>0\) (depending only on E and d) such that for every polynomial \(P=\sum _{i=0}^{d-1} G_i(y)x_k^i\),

$$\begin{aligned} \Vert G_i\Vert _{\pi (E)} \le \frac{C}{i!} (\deg P)^{m(d-1)} \Vert P\Vert _E, \end{aligned}$$

for every \(i=0,1,\ldots ,d-1\). Conversely, if \(\pi (E)\) is a Markov set (with constants A and \(\eta \)) and there exist \(B, \lambda >0\) (depending only on E and d) such that for every polynomial \(P=\sum _{i=0}^{d-1} G_i(y)x_k^i\),

$$\begin{aligned} \Vert G_i\Vert _{\pi (E)} \le B (\deg P)^{\lambda } \Vert P\Vert _E, \quad i=0,1,\ldots ,d-1, \end{aligned}$$
(11)

then E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov set.

Proof

Assume first that E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov set. The proof starts from the observation that

$$\begin{aligned} \frac{\partial ^{d-1} P}{\partial x_k^{d-1}} = (d-1)!G_{d-1}. \end{aligned}$$

Therefore the \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov property of the set E gives

$$\begin{aligned} \Vert G_{d-1}\Vert _{\pi (E)} \le \frac{M^{d-1}}{(d-1)!} (\deg P)^{m(d-1)} \Vert P\Vert _E. \end{aligned}$$

If \(i=d-2\), then

$$\begin{aligned} (d-2)! G_{d-2} =\frac{\partial ^{d-2} P}{\partial x_k^{d-2}} - (d-1)G_{d-1}x_k. \end{aligned}$$

Hence, there exists a constant \(C>0\) (depending only on the set E) such that

$$\begin{aligned} \Vert G_{d-2}\Vert _{\pi (E)} \le \frac{(C+1)M^{d-1}}{(d-2)!} (\deg P)^{m(d-1)} \Vert P\Vert _E. \end{aligned}$$

Continuing this process, one can show that there exists a constant \(C_1>0\) (depending only on the set E and d) such that

$$\begin{aligned} \Vert G_{i}\Vert _{\pi (E)} \le \frac{C_1}{i!} (\deg P)^{m(d-1)} \Vert P\Vert _E. \end{aligned}$$

To prove the converse direction, assume that \( \pi (E)\) is a Markov set and (11) holds. Then, for every polynomial \(P=\sum _{i=0}^{d-1} G_i(y)x_k^i\), we have

$$\begin{aligned} \left\| \frac{\partial P}{\partial x_j} \right\| _E \le \sum _{i=0}^{d-1} \left\| \frac{\partial G_i }{\partial x_j} x_k^i + G_i \frac{\partial x_k^i}{\partial x_j} \right\| _E. \end{aligned}$$

Since E is compact, there exists \(K>0\), depending only on the set E, such that

$$\begin{aligned} \left\| \frac{\partial G_i }{\partial x_j} x_k^i + G_i \frac{\partial x_k^i}{\partial x_j} \right\| _E \le K \left( \left\| \frac{\partial G_i }{\partial x_j}\right\| _{\pi (E)} + \Vert G_i\Vert _{\pi (E)} \right) , \end{aligned}$$

for every \( j=1,2,\ldots ,N\) and \(i=0,1,\ldots ,d-1\). Therefore,

$$\begin{aligned} \left\| \frac{\partial P}{\partial x_j} \right\| _E \le K \sum _{i=0}^{d-1} \left( \left\| \frac{\partial G_i }{\partial x_j}\right\| _{\pi (E)} + \Vert G_i\Vert _{\pi (E)} \right) . \end{aligned}$$

Then, since \(\pi (E)\) is a Markov set with constants \(A>0\) and \(\eta >0\), we have

$$\begin{aligned} \left\| \frac{\partial P}{\partial x_j} \right\| _E \le K \sum _{i=0}^{d-1} \left( A (\deg G_i)^{\eta } \Vert G_i\Vert _{\pi (E)} + \Vert G_i\Vert _{\pi (E)} \right) . \end{aligned}$$

Finally, we use (11) to see that

$$\begin{aligned} \left\| \frac{\partial P}{\partial x_j} \right\| _E \le Kd \left( AB (\deg P)^{\eta + \lambda } + B (\deg P)^{\lambda } \right) \Vert P\Vert _{E}. \end{aligned}$$

That concludes the proof. \(\square \)
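The first half of the proof is in effect an elimination procedure: starting from the highest power of \(x_k\), each coefficient \(G_i\) is expressed through \(x_k\)-derivatives of P and the coefficients already recovered, and this is what converts the \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov inequality for E into sup-norm bounds for the \(G_i\) on \(\pi (E)\). A small symbolic sketch of the recursion for \(d=3\) and \(x_k=y\), with hypothetical coefficients \(G_i\):

```python
import sympy as sp

x, y = sp.symbols('x y')
G = [sp.Rational(1, 2) * x**3 - x, x**2 + 1, 2 * x]   # hypothetical G_0, G_1, G_2
P = G[0] + G[1] * y + G[2] * y**2

G2 = sp.diff(P, y, 2) / sp.factorial(2)               # (1/2!) d^2P/dy^2
G1 = sp.expand(sp.diff(P, y) - 2 * G2 * y)            # remove the recovered G_2 term
G0 = sp.expand(P - G1 * y - G2 * y**2)
print([sp.simplify(G0 - G[0]),
       sp.simplify(G1 - G[1]),
       sp.simplify(G2 - G[2])])                       # [0, 0, 0]
```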

We say that the set \(E \subset V\) is \(C^{\infty }_V\) determining if for each \(f \in C^{\infty }_V(\mathbb {R}^N)\), \(f_{|E}=0\) implies \(D^{\alpha }f_{|E}=0\), for all \(\alpha \in \mathbb {Z}^N_{+}\). Now we are ready to state our main result.

Theorem 1

If E is a \(C^{\infty }_V\) determining compact subset of V, then the following statements are equivalent:

  1. (i)

    (\(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov Inequality) There exist positive constants M and r such that for each polynomial \(P \in \mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\) and each \(\alpha \in \mathbb {Z}^N_{+}\),

    $$\begin{aligned} \Vert D^{\alpha } P\Vert _E \le M (\deg P)^{r|\alpha |} \Vert P\Vert _E. \end{aligned}$$
  2. (ii)

    There exist positive constants M and r such that for every \(P \in \mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\) of degree at most n, \(n = 1,2, \ldots \),

    $$\begin{aligned} |P(x)| \le M \Vert P\Vert _E \quad \mathrm{if} \quad x \in E_n:=\{x \in \mathbb {R}^N : \mathrm{dist}(x,E) \le 1/n^r\}. \end{aligned}$$
  3. (iii)

    (Bernstein’s Theorem) For every function \(f: E \rightarrow \mathbb {R}\), if the sequence \(\{\mathrm{dist}_E(f,\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k))\}\) is rapidly decreasing, then there is a \(C^{\infty }_V(\mathbb {R}^N)\) function \(\tilde{f}\) on \(\mathbb {R}^N\) such that \(\tilde{f}_{|E}=f\).

  4. (iv)

    The space \((C^{\infty }_V(E), \tau _J)\) is complete and \(C^{\infty }_V(E)=C^{\infty }(E)\).

  5. (v)

    The topologies \(\tau _J\) and \(\tau _Q\) for \(C^{\infty }_V(E)\) coincide.

Proof

The proof of equivalence of (i) and (ii) is almost the same as in [9], and we omit the details.
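For orientation, the classical interval case already illustrates how (i) and (ii) fit together: on \(E=[-1,1]\) the Markov exponent is \(r=2\), and the extremal Chebyshev polynomials, although they grow rapidly outside E, remain uniformly bounded at distance \(1/n^2\) from E. The sketch below is a numerical illustration of this model case only.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# E = [-1, 1], r = 2: T_n stays bounded on the 1/n^2-neighbourhood of E,
# even though ||T_n||_E = 1 and T_n grows fast farther away.
for n in (4, 8, 16, 32, 64):
    Tn = C.Chebyshev.basis(n)
    print(n, float(Tn(1.0 + 1.0 / n**2)))   # roughly cosh(sqrt(2)) ≈ 2.18
```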

Next we show that \({( (i) \,and\, (ii))} \Rightarrow { (iii)}\). Suppose that we have a function \(f: E \rightarrow \mathbb {R}\) such that for each \(s > 0\),

$$\begin{aligned} \lim _{l \rightarrow \infty } l^s \Vert f - P_l\Vert _E =0. \end{aligned}$$

Here \(P_l=\sum _{i=0}^{d-1} G_{l,i}(y)x_k^i\) is a metric projection of f onto \(\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k)\) (\(l=0,1,\ldots \)). Set, as in Lemma 2,

$$\begin{aligned} \pi (E)=\{y=(x_1,\ldots ,x_{k-1},x_{k+1},\ldots ,x_N) \in \mathbb {R}^{N-1} : (x_1,\ldots ,x_N) \in E \\ \mathrm{for \ some} \ x_k \in \mathbb {R}\}. \end{aligned}$$

We assume that r is an integer so large that both (i) and (ii) are valid for E. Let \(\epsilon _l=1/l^r\) and for \(l= 1, 2,\ldots \) take a function \(h_l \in C^{\infty }(\mathbb {R}^{N-1})\) from Lemma 1 corresponding to \(\epsilon _l\) and \(\pi (E)\). We will show that

$$\begin{aligned} \tilde{f}(x_1,\ldots ,x_N):=\sum _{i=0}^{d-1} G_{0,i}(y)x_k^i + \sum _{l=1}^{\infty } \sum _{i=0}^{d-1} h_l(y) (G_{l,i}(y)-G_{l-1,i}(y))x_k^i \end{aligned}$$

determines a function from \(C^{\infty }_V(\mathbb {R}^N)\) such that \(\tilde{f}_{|E}=f\). In order to prove that \(\tilde{f} \in C^{\infty }_V(\mathbb {R}^N)\), it suffices to check that

$$\begin{aligned} G_{0,i}(y) + \sum _{l=1}^{\infty } h_l(y) (G_{l,i}(y)-G_{l-1,i}(y)) \in C^{\infty }(\mathbb {R}^{N-1}), \end{aligned}$$

for every \(i=0,1,\ldots ,d-1\). To this end, fix \(\gamma \in \mathbb {Z}^{N-1}_{+}\); then, by the Leibniz rule, Lemma 1, and (i) and (ii),

$$\begin{aligned} \sup _{\mathbb {R}^{N-1}} |D^{\gamma }(h_l(G_{l,i}-G_{l-1,i}))|\le & {} \sum _{\beta \le \gamma } {\gamma \atopwithdelims ()\beta } \sup _{\pi (E)_l} |D^{\beta }h_l D^{\gamma - \beta }(G_{l,i}-G_{l-1,i})| \\\le & {} M \sum _{\beta \le \gamma } {\gamma \atopwithdelims ()\beta } C_{\beta } l^{r|\beta |} \Vert D^{\gamma - \beta }(G_{l,i}-G_{l-1,i})\Vert _{\pi (E)} \\\le & {} M_1 l^{r|\gamma |} \Vert G_{l,i}-G_{l-1,i}\Vert _{\pi (E)} \end{aligned}$$

where \(\pi (E)_l:=\{y \in \mathbb {R}^{N-1} : \mathrm{dist}(y,\pi (E)) \le \epsilon _l \}\). From Lemma 2, there is a constant \(C>0\) so that

$$\begin{aligned} \sup _{\mathbb {R}^{N-1}} |D^{\gamma }(h_l(G_{l,i}-G_{l-1,i}))| \le C (\deg P_l)^{r(|\gamma |+d-1)} \Vert P_l - P_{l-1}\Vert _E. \end{aligned}$$

Now if \(l \ge \max \{2,d\}\), then

$$\begin{aligned} \sup _{\mathbb {R}^{N-1}} |D^{\gamma }(h_l(G_{l,i}-G_{l-1,i}))| \le 2C l^{-2} \delta _{2r(|\gamma |+d-1)+2}(f), \end{aligned}$$

with a constant C independent of l. Taking into account that \(\delta _{2r(|\gamma |+d-1)+2}(f)\) is independent of l and the series \(\sum _{l=1}^{\infty } l^{-2}\) is convergent, one sees that

$$\begin{aligned} \sum _{l=1}^{\infty } D^{\gamma } \left( h_l (G_{l,i}-G_{l-1,i}) \right) \end{aligned}$$

converges uniformly on \(\mathbb {R}^{N-1}\) for every \(i=0,1,\ldots ,d-1\); consequently each of these coefficients belongs to \(C^{\infty }(\mathbb {R}^{N-1})\) and \(\tilde{f} \in C^{\infty }_V(\mathbb {R}^N)\). Moreover, if \(x=(x_1,\ldots ,x_N) \in E\), then \(y \in \pi (E)\), so \(h_l(y)=1\) for every l; the series defining \(\tilde{f}(x)\) then telescopes, and \(\tilde{f}(x)=\lim _{l\rightarrow \infty } P_l(x)=f(x)\), that is, \(\tilde{f}_{|E}=f\).

Next we shall show that \({ (iii)} \Rightarrow { (iv)} \Rightarrow { (v)}\). If Bernstein’s theorem holds, then it is clear that \(C^{\infty }_V(E)=C^{\infty }(E)\). From this and the fact that the map \(C(E) \ni f \rightarrow \mathrm{dist}_E(f,\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k)) \in \mathbb {R}\) is continuous, we have (iv). Now suppose that \((C^{\infty }_V(E), \tau _J)\) is complete. Let I be a cube which contains E in its interior. Applying the Jackson theorem on the cube I, we see that for every \(\nu \) there exists a constant \(C_\nu >0\) such that

$$\begin{aligned} \delta _\nu (f) \le C_\nu q_{I,\nu +1} (f) \end{aligned}$$

for all f in \(C^{\infty }_V(E)\). Hence, by Banach’s isomorphism theorem (for Fréchet spaces), we have (v).

The final step of the proof is to show that (v) implies (i). If the topologies \(\tau _J\) and \(\tau _Q\) coincide, there are a positive constant M and an integer \(\mu \ge - 1\) such that \(q_{E,1}(f) \le M \delta _{\mu }(f)\) for every \(f \in C^{\infty }_V(E)\). Since \(\pi (E)\) is \(C^{\infty }\) determining and \(\delta _{0}(f) \le \Vert f\Vert _E\), we conclude that \(\mu \ge 1\). Hence if \(f \in \mathcal {P}_\lambda (y) \otimes \mathcal {P}_{d-1}(x_k)\), then

$$\begin{aligned} \left\| \frac{\partial f}{\partial x_j} \right\| _E \le M \sup _{1 \le l \le \lambda } l^{\mu } \mathrm{dist}_E(f,\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k)) \le M \lambda ^{\mu } \Vert f\Vert _E \end{aligned}$$

for \(j= 1, 2,\ldots , N\). This implies that E is a \(\mathcal {P}(y) \otimes \mathcal {P}_{d-1}(x_k)\)-Markov set. (It is essential here that E is \(C^{\infty }_V\) determining.) \(\square \)

Remark 2

Let E be a compact subset of V. If E satisfies (i) above, then E is a \(C^{\infty }_V\) determining set.

To see this, take a compact cube I in \(\mathbb {R}^N\) containing E in its interior and let \(f \in C^{\infty }_V(\mathbb {R}^N)\) be such that \(f=0\) on E. It follows from the definition of \(C^{\infty }_V(\mathbb {R}^N)\) that

$$\begin{aligned} \epsilon _l:=\mathrm{dist}_I (f,\mathcal {P}_l(y) \otimes \mathcal {P}_{d-1}(x_k))=\Vert f-P_l\Vert _I=\Vert f-\sum _{i=0}^{d-1} G_{l,i}(y)x_k^i\Vert _I \end{aligned}$$

is rapidly decreasing. Hence, by the Markov inequality for the cube I, we have

$$\begin{aligned} D^{\alpha }f= \sum _{i=0}^{d-1} D^{\alpha }\{ G_{0,i}(y)x_k^i\} + \sum _{l=1}^{\infty } \sum _{i=0}^{d-1} D^{\alpha } \{(G_{l,i}(y)-G_{l-1,i}(y))x_k^i\} \quad \mathrm{on }\ I, \end{aligned}$$

for all \(\alpha \in \mathbb {Z}^N_{+}\). Finally, by (i), we obtain that \(D^{\alpha }f(x)=\lim _{l\rightarrow \infty } D^{\alpha }P_l(x)=0\) for every \(x \in E\).