1 Introduction

Recently there has been renewed interest in using splines of maximal smoothness, i.e., of smoothness \(C^{d-1}\) for splines of degree d, as finite elements for solving PDEs. This is one of the main ideas behind isogeometric analysis [1, 2, 4, 10]. It raises the question of how well such splines approximate functions of a given smoothness class, especially in the \(L^2\) norm. This question was answered to some extent by Melkman and Micchelli [7], who studied the \(L^2\) approximation of functions u in the Sobolev space

$$\begin{aligned} H^r= H^r(0,1)=\{u\in L^2(0,1) : u^{(\alpha )}\in L^2(0,1), \quad \alpha =1,2,\ldots ,r \} \end{aligned}$$

and measured the error relative to the \(L^2\) norm of \(u^{(r)}\). They showed that from this point of view, there are two spaces of splines that are optimal, one of degree \(r-1\), the other of degree \(2r-1\). Later it was shown in [3] that these two spaces are just the first two of a whole sequence of optimal spline spaces of degrees \(lr-1\), \(l=1,2,3,\ldots \). In the case \(r=1\), there is therefore an optimal spline space of every degree, but whether this is true for \(r\ge 2\) is an open question.

In this paper, we study the related problem of approximating functions in \(H^r\) subject to certain boundary conditions. Specifically, we look at

$$\begin{aligned} \begin{aligned} H^r_0&=\{u\in H^r:\quad u^{(k)}(0)=u^{(k)}(1)=0,\quad 0\le k<r,\quad k\text { even}\},\\ H^r_1&=\{u\in H^r:\quad u^{(k)}(0)=u^{(k)}(1)=0,\quad 0\le k<r,\quad k\text { odd} \},\\ H^r_2&=\{u\in H^r:\quad u^{(k)}(0)=u^{(l)}(1)=0,\quad 0\le k, l<r,\quad k\text { even},\quad l\text { odd} \}. \end{aligned} \end{aligned}$$

Our main result is that for all \(r\ge 1\), the spaces \(H^r_i\), \(i=0,1,2\), admit optimal spline spaces of all degrees \(\ge r-1\). This is very similar to the numerical results reported by Evans et al. [2] regarding the degrees of the spline spaces; however, their paper considered other boundary conditions (periodic conditions or no conditions).

The derivations in [7] and [3] were based on the use of an integral operator K that represents integration r times. Roughly speaking, and ignoring what happens at the boundary of the interval, if \(X_n\) is an optimal space of splines of some degree d, then the space \(K(X_n)\), i.e., the space generated by integrating the splines in \(X_n\), r times, is also an optimal space, consisting of splines of degree \(d+r\).

In contrast, in this paper, we work only with an integral operator K that represents a single integration. We generate optimal spline spaces for \(H_i^r\), \(i=0,1,2\), by applying K, i.e., one integration, both to the initial Sobolev space \(H_i^1\) and its optimal spline space, \(X_n\), of degree 0. This approach works for \(H_i^r\), \(i=0,1,2\), because, unlike \(H^r\) itself, when we apply (the right) K to the functions in \(H_i^r\), we get back a similar space, with r increased by one.

The optimal spline spaces we obtain have the same type of boundary conditions (odd or even derivatives are zero at the ends of the interval) as the spaces \(H^r_i\) themselves. The splines also have uniform knots, thus making them convenient to use in practice. In particular, some of the spline spaces corresponding to \(H^r_1\) are precisely the ‘reduced spline spaces’ studied recently by Takacs and Takacs [10, Section 5] (see also the end of Sect. 3 in this paper). They proved approximation estimates and inverse inequalities for these spaces, with a view to constructing fast iterative methods for solving PDEs in the framework of isogeometric analysis.

2 Kolmogorov n-Widths

We start by formulating the concept of optimality in terms of Kolmogorov n-widths [9]. Denote the norm and inner product on \(L^2=L^2(0,1)\) by

$$\begin{aligned} \Vert f\Vert ^2 = (f,f), \qquad (f,g) = \int _0^1 f(t) g(t) \, dt, \end{aligned}$$

for real-valued functions f and g. For a subset A of \(L^2\), and an n-dimensional subspace \(X_n\) of \(L^2\), let

$$\begin{aligned} E(A, X_n) = \sup _{u \in A} \inf _{v \in X_n} \Vert u-v\Vert \end{aligned}$$

be the distance of A from \(X_n\), measured in the \(L^2\) norm. Then the Kolmogorov \(L^2\) n-width of A is defined by

$$\begin{aligned} d_n(A) = \inf _{X_n} E(A, X_n). \end{aligned}$$

A subspace \(X_n\) is called an optimal space for A provided that

$$\begin{aligned} d_n(A) = E(A, X_n). \end{aligned}$$

Now, consider the function classes

$$\begin{aligned} A^r_i=\{u\in H^r_i : \Vert u^{(r)}\Vert \le 1\}, \quad i=0,1,2. \end{aligned}$$
(1)

By looking at \(u/\Vert u^{(r)}\Vert \) for functions \(u\in H^r_i\), we have for any n-dimensional subspace \(X_n\) of \(L^2\),

$$\begin{aligned} \Vert u-P_nu\Vert \le E(A^r_i, X_n)\Vert u^{(r)}\Vert , \end{aligned}$$

where \(P_n\) denotes the \(L^2\) projection onto \(X_n\). Moreover, if \(X_n\) is an optimal subspace for \(A^r_i\), then

$$\begin{aligned} \Vert u-P_nu\Vert \le d_n(A^r_i)\Vert u^{(r)}\Vert , \end{aligned}$$

and \(d_n(A^r_i)\) is the least possible constant over all n-dimensional subspaces \(X_n\).
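To make these bounds concrete, the following numerical sketch (our own illustration, assuming NumPy; not part of the analysis) projects a sample function \(u\in H^1_0\) onto the optimal space (3) of Theorem 1 below and compares the error with the bound \(d_n(A^1_0)\Vert u'\Vert =\Vert u'\Vert /((n+1)\pi )\):

```python
import numpy as np

# Illustration only: L2 projection onto [sin(pi x), ..., sin(n pi x)], cf. (3),
# versus the bound d_n(A^1_0) ||u'|| = ||u'|| / ((n+1) pi).
x = np.linspace(0.0, 1.0, 20001)
u = x * (1.0 - x)        # u(0) = u(1) = 0, so u lies in H^1_0
du = 1.0 - 2.0 * x       # u'

def norm(f):             # L2(0,1) norm via the trapezoidal rule
    return np.sqrt(np.trapz(f * f, x))

n = 5
Pu = np.zeros_like(x)
for k in range(1, n + 1):   # the sines are orthogonal with ||sin(k pi x)||^2 = 1/2
    s = np.sin(k * np.pi * x)
    Pu += 2.0 * np.trapz(u * s, x) * s

print(norm(u - Pu))                   # ~ 6e-4
print(norm(du) / ((n + 1) * np.pi))   # bound ~ 3.1e-2
```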

3 Main Results

We first describe the n-widths for \(A^r_i\) in (1) and the optimal subspaces based on eigenfunctions. We will show:

Theorem 1

For any integer \(r\ge 1\), the n-widths of \(A^r_i\), \(i=0,1,2,\) are

$$\begin{aligned} d_n(A^r_0) = \frac{1}{(n+1)^r\pi ^r}, \qquad d_n(A^r_1) = \frac{1}{(n\pi )^r}, \qquad d_n(A^r_{2}) = \frac{1}{(n+\frac{1}{2})^r\pi ^r}. \end{aligned}$$
(2)

Furthermore, the spaces

$$\begin{aligned}&[\sin \pi x, \sin 2\pi x, \ldots , \sin n \pi x], \end{aligned}$$
(3)
$$\begin{aligned}&[1,\cos \pi x, \cos 2\pi x, \ldots , \cos (n-1) \pi x], \end{aligned}$$
(4)
$$\begin{aligned}&[\sin (1/2) \pi x, \sin (3/2) \pi x, \ldots , \sin (n-1/2) \pi x] \end{aligned}$$
(5)

are optimal n-dimensional spaces for, respectively, \(A^r_0\), \(A^r_1\), and \(A^r_2\).

Here, \([\cdots ]\) denotes the span of a set of functions. The result for \(A^1_1\) was shown by Kolmogorov [6], and the result for \(A^r_0\) with r even was shown in [3]. The remaining cases will be shown in Sects. 7 and 8.

Now, let us describe the optimal spline spaces for these sets. Suppose \({\varvec{\tau }}= (\tau _1,\ldots ,\tau _m)\) is a knot vector such that

$$\begin{aligned} 0< \tau _1< \cdots< \tau _m < 1, \end{aligned}$$

and let \(I_0 = [0,\tau _1)\), \(I_j = [\tau _j,\tau _{j+1})\), \(j=1,\ldots ,m-1\), and \(I_m = [\tau _m,1]\). For any \(d \ge 0\), let \(\Pi _d\) be the space of polynomials of degree at most d. Then we define the spline space \(S_{d,{\varvec{\tau }}}\) by

$$\begin{aligned} S_{d,{\varvec{\tau }}} = \{s \in C^{d-1}[0,1] : s|_{I_j} \in \Pi _d,\, j=0,1,\ldots ,m \}, \end{aligned}$$

which has dimension \(m+d+1\). We now define the three n-dimensional spline spaces \(S_{d,i}\) for \(i=0,1,2\), by

$$\begin{aligned} \begin{aligned} S_{d,0}&= \{s\in S_{d,{\varvec{\tau }}_0} : s^{(k)}(0)=s^{(k)}(1)=0,\quad 0\le k\le d, \quad k \text { even}\},\\ S_{d,1}&= \{s\in S_{d,{\varvec{\tau }}_1} : s^{(k)}(0)=s^{(k)}(1)=0,\quad 0\le k\le d, \quad k \text { odd}\},\\ S_{d,2}&= \{s\in S_{d,{\varvec{\tau }}_2} : s^{(k)}(0)=s^{(l)}(1)=0,\quad 0\le k,l\le d, \quad k \text { even}, \quad l \text { odd}\}, \end{aligned} \end{aligned}$$
(6)

where the knot vectors \({\varvec{\tau }}_i\) for \(i=0,1,2\), are given as

$$\begin{aligned} \begin{aligned} {\varvec{\tau }}_0&= {\left\{ \begin{array}{ll} \left( \frac{1}{n+1},\frac{2}{n+1},\ldots ,\frac{n}{n+1}\right) ,\qquad &{}d \text { odd},\\ \left( \frac{1/2}{n+1},\frac{3/2}{n+1},\ldots ,\frac{n+1/2}{n+1}\right) ,\quad &{}d \text { even}, \end{array}\right. }\\ {\varvec{\tau }}_1&= {\left\{ \begin{array}{ll} \left( \frac{1/2}{n},\frac{3/2}{n},\ldots ,\frac{n-1/2}{n}\right) ,\qquad &{}d \text { odd},\\ \left( \frac{1}{n},\frac{2}{n},\ldots ,\frac{n-1}{n}\right) ,\qquad &{}d \text { even}, \end{array}\right. }\\ {\varvec{\tau }}_2&= {\left\{ \begin{array}{ll} \left( \frac{2}{2n+1},\frac{4}{2n+1},\ldots ,\frac{2n}{2n+1}\right) ,\quad &{}d \text { odd},\\ \left( \frac{1}{2n+1},\frac{3}{2n+1},\ldots ,\frac{2n-1}{2n+1}\right) ,\quad &{}d \text { even}. \end{array}\right. } \end{aligned} \end{aligned}$$
(7)
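For reference, the interior knots in (7) are straightforward to generate; the following helper is our own sketch (assuming NumPy), with d entering only through its parity:

```python
import numpy as np

def tau(i, d, n):
    """Interior knots of the knot vector tau_i in (7) (our own helper)."""
    odd = d % 2 == 1
    if i == 0:
        return np.arange(1, n + 1) / (n + 1) if odd \
            else (np.arange(n + 1) + 0.5) / (n + 1)
    if i == 1:
        return (np.arange(n) + 0.5) / n if odd \
            else np.arange(1, n) / n
    return np.arange(2, 2 * n + 1, 2) / (2 * n + 1) if odd \
        else np.arange(1, 2 * n, 2) / (2 * n + 1)

print(tau(2, 1, 4))   # [2/9, 4/9, 6/9, 8/9]: knot-distance 2/9, cf. Fig. 4
```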

All these knot vectors have equidistant knots, but if we extend them to include the endpoints of [0, 1], the first and last knot intervals of these extended knot vectors sometimes have half the length of the interior ones. Examples of these knot vectors are shown in Figs. 2, 3, and 4. Our main result is then the following.

Theorem 2

Suppose \(r\ge 1\). Then for any \(i=0,1,2\), the spline spaces \(S_{d,i}\) are optimal n-dimensional spaces for the set \(A^r_i\) for any \(d\ge r-1\).

The case \(A^1_1\) was shown in [3, Theorem 2]. On the other hand, the case \(A^r_0\) is a generalization of [3, Theorem 1] since that theorem only treated even r and spline spaces of degrees \(lr-1\) for \(l=1,2,\ldots \), thus leaving gaps between the degrees. When the degree d is even, the spaces \(S_{d,1}\), whose common extended knot vector is equidistant, are the ‘reduced spline spaces’ of Takacs and Takacs [10, Section 5]. They have also derived approximation results regarding these spaces, using Fourier analysis. We can see from Theorem 2 and (2) that, for even d, the constant \(\sqrt{2}\) in [10, Corollary 5.1] can be replaced by the optimal constant \(1/\pi \).

4 Sets Defined by Kernels

We need some properties of kernels, and so this section is similar to [3, Section 3]. The starting point of the analysis is to represent the lowest order function classes \(A^1_i\), \(i=0,1,2,\) in the form

$$\begin{aligned} A = K(B) = \{ K f : \Vert f\Vert \le 1 \}, \end{aligned}$$
(8)

where B is the unit ball in \(L^2\) and K is the integral operator

$$\begin{aligned} K f(x) = \int _0^1 K(x,y) f(y) \, dy. \end{aligned}$$

As in [7], we use the notation K(x,y) for the kernel of K. We only consider kernels K(x,y) that are continuous or piecewise continuous for \(x, y \in [0,1]\). Observe that for A in (8) and any n-dimensional subspace \(X_n\) of \(L^2\),

$$\begin{aligned} E(A,X_n) = \sup _{\Vert f\Vert \le 1} \Vert (I-P_n) K f \Vert = \Vert (I-P_n)K\Vert _2, \end{aligned}$$
(9)

where \(P_n\) is the orthogonal projection onto \(X_n\) and \(\Vert \cdot \Vert _2\) denotes the operator norm induced by the \(L^2\) norm for functions.

We will denote by \(K^*\) the adjoint, or dual, of the operator K, defined by

$$\begin{aligned} (f,K^*g) = (Kf, g). \end{aligned}$$

The kernel of \(K^*\) is \(K^*(x,y) = K(y,x)\). Similar to matrix multiplication, the kernel of the composition of two integral operators K and L is

$$\begin{aligned} (KL)(x,y) = (K(x,\cdot ),L(\cdot ,y)). \end{aligned}$$

The operator \(K^*K\), being self-adjoint and positive semi-definite, has eigenvalues

$$\begin{aligned} \lambda _1 \ge \lambda _2 \ge \cdots \ge \lambda _n \ge \cdots \ge 0, \end{aligned}$$
(10)

and corresponding orthogonal eigenfunctions

$$\begin{aligned} K^*K \phi _n = \lambda _n \phi _n, \qquad n=1,2,\ldots . \end{aligned}$$
(11)

If we further define \(\psi _n = K \phi _n\), then

$$\begin{aligned} K K^*\psi _n = \lambda _n \psi _n, \qquad n=1,2,\ldots , \end{aligned}$$
(12)

and the \(\psi _n\) are also orthogonal. The square roots of the \(\lambda _n\) are known as the s-numbers of K (or \(K^*\)). With these definitions, we obtain [9, p. 6 or p. 65]:

Theorem 3

\(d_n(A) =\lambda _{n+1}^{1/2}\), and the space \([\psi _1,\ldots ,\psi _n]\) is optimal for A.

5 Totally Positive Kernels

Melkman and Micchelli [7] proved that if K is nondegenerate totally positive (NTP) [9, p. 108], then there are in fact two other optimal subspaces for A. Specifically, if K is NTP, it follows from a theorem of Kellogg [9, p. 109] that the eigenvalues of \(K^*K\) and \(KK^*\) in (11) and (12) are positive and simple, \(\lambda _1> \lambda _2> \cdots> \lambda _n> \cdots > 0\), and the eigenfunctions \(\phi _{n+1}\) and \(\psi _{n+1}\) have exactly n simple zeros in (0, 1),

$$\begin{aligned}&\phi _{n+1}(\xi _j) = \psi _{n+1}(\eta _j) = 0, \quad j=1,2,\ldots ,n, \\&0< \xi _1< \xi _2< \cdots< \xi _n< 1, \qquad 0< \eta _1< \eta _2< \cdots< \eta _n < 1. \end{aligned}$$

Melkman and Micchelli [7, Theorem 2.3] then proved that the spaces

$$\begin{aligned} \begin{aligned} X_n^0&=[K(\cdot ,\xi _1),\ldots ,K(\cdot ,\xi _n)],\\ X_n^1&= [(KK^*)(\cdot ,\eta _1),\ldots , (KK^*)(\cdot ,\eta _n)] \end{aligned} \end{aligned}$$
(13)

are optimal for A. Using a duality technique that we will discuss in the next section, it was later shown in [3, Theorem 5] that, given these two optimal spaces, there is an optimal space \(X_n^d\) for all \(d=0,1,2,\ldots \), where

$$\begin{aligned} X_n^{2i} = (KK^*)^i\left( X_n^0\right) , \qquad X_n^{2i+1} = (KK^*)^i\left( X_n^1\right) , \qquad i=0,1,2,\ldots . \end{aligned}$$
(14)

Melkman and Micchelli also constructed two optimal subspaces for the set A even when K is not NTP, but for K satisfying some related properties. We will deal with such a situation in Sect. 8.

6 Further Optimality Results

In this section, we describe how optimal subspaces for the set A in (8) can be used to find optimal subspaces for sets of the form \(K^*(A)\), \(KK^*(A)\), and so on. The results here will hold for any integral operator K.

To ease notation, we define two function classes \(A^r\) and \(A^r_*\) for \(r\ge 1\), by

$$\begin{aligned} A^r={\left\{ \begin{array}{ll} (KK^*)^iK(B), &{} r=2i+1,\\ (KK^*)^{i}(B), &{} r = 2i, \end{array}\right. }\qquad A^r_*={\left\{ \begin{array}{ll} (K^*K)^iK^*(B), &{} r=2i+1,\\ (K^*K)^{i}(B), &{} r = 2i. \end{array}\right. } \end{aligned}$$
(15)

Observe that both \(A^r\) and \(A^r_*\) are defined by alternately applying the operators K and \(K^*\), r times, to the unit ball B, with K always being the left-most operator for \(A^r\), and \(K^*\) always being the left-most operator for \(A^r_*\). Since \(A^1=A\), we will write \(A_*\) when referring to \(A_*^1\). As we shall see momentarily, the duality between the operators K and \(K^*\) will play an important role for the sets \(A^r\) and \(A^r_*\) and especially their respective optimal subspaces. In some sense, their optimal subspaces could be considered ‘dual’ to each other.

Since eigenvalues of powers of \(KK^*\) (and \(K^*K\)) are just powers of the \(\lambda _n\) in (10), with the same corresponding eigenfunction, it follows that the n-widths of the sets \(A^r\) and \(A^r_*\) are given by

$$\begin{aligned} d_n(A^r_*)=d_n(A^r) = d_n(A)^r, \end{aligned}$$
(16)

and the space \([\psi _1,\ldots ,\psi _n]\) in Theorem 3 is optimal for \(A^r\), and the space \([\phi _1,\ldots ,\phi _n]\) is optimal for \(A^r_*\). As a tool for finding further optimal subspaces for \(A^r\) and \(A^r_*\), with \(r\ge 2\), we start with the following lemma.

Lemma 1

For any integral operator K, let \(L=KK^*\). If \(X_n\) and \(Y_n\) are any subspaces of \(L^2\), then

$$\begin{aligned} \begin{aligned} E(A^r, L^i(X_n))&\le E(A,X_n)E(A^{r-1},L^i(X_n)), \qquad&r=2i+1,\\ E(A^r, L^{i-1}K(Y_n))&\le E(A_*,Y_n)E(A^{r-1},L^{i-1}K(Y_n)), \qquad&r=2i, \end{aligned} \end{aligned}$$

for \(r\ge 2\).

Proof

First assume \(r=2i+1\) for \(i\ge 1\). From the definition of \(A^r\), we have \(A^r=L^iK(B)\) and \(A^{r-1}=L^i(B)\).

Let \(P_n\) be the \(L^2\) projection onto \(X_n\), and let \(Q_n\) be the \(L^2\) projection onto \(L^i(X_n)\). Then

$$\begin{aligned} L^iP_nKf\in L^i(X_n) \end{aligned}$$

for all \(f\in L^2\), and so

$$\begin{aligned} (I-Q_n)L^iP_nK=0. \end{aligned}$$

Thus, by using Eq. (9), we find that

$$\begin{aligned} E(A^r, L^i(X_n))&=\Vert (I-Q_n)L^iK\Vert _2 = \Vert (I-Q_n)L^iK - (I-Q_n)L^iP_nK\Vert _2,\\&=\Vert (I-Q_n)L^i(I-P_n)K\Vert _2 \le \Vert (I-Q_n)L^i\Vert _2\Vert (I-P_n)K \Vert _2,\\&=E(A^{r-1},L^i(X_n))E(A, X_n). \end{aligned}$$

Next, assume \(r=2i\) for \(i\ge 1\). Then \(A^r=L^{i}(B)\) and \(A^{r-1}=L^{i-1}K(B)\). In this case, let \(P_n\) be the \(L^2\) projection onto \(Y_n\), and let \(Q_n\) be the \(L^2\) projection onto \(L^{i-1}K(Y_n)\). Then, as before,

$$\begin{aligned} (I-Q_n)L^{i-1}KP_nK^*=0, \end{aligned}$$

and the result follows by an argument almost identical to that of the previous case. \(\square \)
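Although not needed for the proofs, Eq. (16) is easy to illustrate numerically. In the following sketch (our own, assuming NumPy), K is a midpoint-rule discretization of the concrete integration operator used later in (21); if \(K=U\Sigma V^T\) is an SVD, then \(KK^TK=U\Sigma ^3V^T\), so the s-numbers of \(LK=(KK^*)K\) are the cubes of those of K, matching \(d_n(A^3)=d_n(A)^3\):

```python
import numpy as np

m = 1000
h = 1.0 / m
K = h * (np.tril(np.ones((m, m))) - 0.5 * np.eye(m))  # midpoint rule for (21)
s1 = np.linalg.svd(K, compute_uv=False)               # s-numbers of K
s3 = np.linalg.svd(K @ K.T @ K, compute_uv=False)     # s-numbers of (K K^*) K
print(np.max(np.abs(s3 - s1 ** 3)))                   # ~ 1e-16: s3 = s1^3
```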

Now suppose that \(X_n^0\) is an optimal n-dimensional subspace for A and \(Y_n^0\) is an optimal n-dimensional subspace for \(A_*\). With these two subspaces, one can generate a whole sequence of subspaces \(X_n^d\) and \(Y_n^d\) by

$$\begin{aligned} \begin{aligned} X_n^{d}&=K\left( Y_n^{d-1}\right) ,\qquad Y_n^{d}&=K^*\left( X_n^{d-1}\right) \end{aligned} \end{aligned}$$
(17)

for all \(d=1,2,3,\ldots \), and it follows from [3, Lemma 1] that all the \(X_n^d\) are optimal for the n-width of \(A^1=A\) and all the \(Y_n^d\) are optimal for the n-width of \(A_*^1=A_*\). Note that for \(d>0\), the spaces \(X_n^d\) and \(Y_n^d\) could in general have dimension less than n, but they are still optimal for the n-width problem. In fact, if \(X_n^d\) or \(Y_n^d\) has dimension m, \(0\le m<n\), then \(d_m(A)\) must equal \(d_n(A)\) by definition of the n-width.

Next, we consider \(A^r\) and \(A^r_*\) for \(r\ge 2\).

Lemma 2

Suppose the subspace \(X_n^0\) is optimal for A and \(Y_n^0\) is optimal for \(A_*\). Then for \(r\ge 2\),

$$\begin{aligned}&E\left( A^r, X_n^d\right) \le d_n(A)E\left( A^{r-1},X_n^d\right) , \end{aligned}$$
(18)
$$\begin{aligned}&E\left( A^r_*, Y_n^d\right) \le d_n(A)E\left( A_*^{r-1},Y_n^d\right) \end{aligned}$$
(19)

for all \(d\ge r-1\).

Proof

We start by proving inequality (18). Let \(L=KK^*\). First, assume \(r=2i+1\) for \(i\ge 1\). It then follows from (17) that \(X_n^d=L^i(X_n^{d-r+1})\) for \(d\ge r-1\), and so the result follows from Lemma 1, with \(X_n=X_n^{d-r+1}\), since \(X_n^{d-r+1}\) is optimal for A.

Next, assume \(r=2i\) for \(i\ge 1\). It then follows from (17) that \(X_n^d=L^iK(Y_n^{d-r+1})\) for \(d\ge r-1\), and so the result follows from Lemma 1, with \(Y_n=Y_n^{d-r+1}\), since \(Y_n^{d-r+1}\) is optimal for \(A_*\) and \(d_n(A_*)=d_n(A)\).

Inequality (19) then follows from the same argument if we interchange the roles of K and \(K^*\). \(\square \)

Using Lemma 2, we now obtain optimality results for \(A^r\) and \(A^r_*\) for all \(r\ge 1\).

Theorem 4

Suppose the subspace \(X_n^0\) is optimal for A and \(Y_n^0\) is optimal for \(A_*\). Then for \(r\ge 1\),

  • the subspaces \(X_n^d\) in (17) are optimal for the n-width of \(A^{r}\), and

  • the subspaces \(Y_n^d\) in (17) are optimal for the n-width of \(A^{r}_*\)

for all \(d\ge r-1\).

Proof

The case \(r=1\) follows from [3, Lemma 1]. For \(r\ge 2\) the result for the \(X_n^d\) follows from inequality (18) in Lemma 2, Eq. (16), and induction on r, since \(d_n(A)^r=d_n(A)d_n(A)^{r-1}\). Similarly, now using inequality (19) in Lemma 2, we get the result for the \(Y_n^d\) as well. \(\square \)

Fig. 1 Optimality results

We have summarized the statement of Theorem 4 in Figure 1. Under the assumption of Theorem 4 on \(X_n^0\) and \(Y_n^0\), all the spaces (above the line) in the two tables are optimal for all the function classes below them. Optimality of \(X_n^0\) for \(A^1\) implies optimality of \(Y_n^1\) for \(A^1_*\) by [3, Lemma 1], and so on along the first row (below the line) in the left table. Then, by Lemma 2, optimality of \(X_n^0\) for \(A^1\), and \(Y_n^1\) for \(A_*^1\), imply optimality of \(Y_n^1\) for \(A^2_*\), and so on along the second row. Optimality of \(X_n^0\) for \(A^1\), and \(X_n^2\) for \(A^2\), imply optimality of \(X_n^2\) for \(A^3\), and so on along the third row. The right table works similarly.

Let us now turn back to the case where K is NTP. The subspace \(X_n^0\) in (13) is optimal for A, and since K being NTP is equivalent to \(K^*\) being NTP, we also have that the subspace

$$\begin{aligned} Y_n^0=[K^*(\cdot ,\eta _1),\ldots , K^*(\cdot ,\eta _n)] \end{aligned}$$
(20)

is optimal for \(A_*\), and so we can apply Theorem 4. The subspaces \(X_n^d\) in (17) are in this case the same as those in Eq. (14). Since the eigenvalues (10) (and thus also the n-widths) are strictly decreasing whenever K is NTP, the subspaces \(X_n^d\) and \(Y_n^d\) are in this case also n-dimensional for all \(d\ge 0\).

7 Mixed Boundary Conditions

In this section, we study the n-width problem for the function class \(A^r_2\) in (1). Consider the operator K given by

$$\begin{aligned} Kf(x)=\int _0^xf(y)d y = \int _0^1K(x,y)f(y)dy, \end{aligned}$$
(21)

whose kernel is

$$\begin{aligned} K(x,y)= {\left\{ \begin{array}{ll} 0 &{} x < y, \\ 1 &{} x \ge y. \end{array}\right. } \end{aligned}$$
(22)

Using the equality \(K^*(x,y)=K(y,x)\), we find that

$$\begin{aligned} K^*f(x)=\int _x^1f(y)d y. \end{aligned}$$
(23)

Thus K represents integration from the left, while \(K^*\) represents integration from the right.

From (21), we see that the set \(A^1_{2}\) in equation (1) can be expressed as

$$\begin{aligned} A^1_{2} =\left\{ u\in H^1 : \Vert u'\Vert \le 1, \,\, u(0)=0\right\} = \left\{ \int _0^x f(y) d y : \Vert f\Vert \le 1\right\} = K(B). \end{aligned}$$

To see that the remaining \(A^r_2\) can be expressed in terms of K and \(K^*\), it is convenient to recognize the kernel of the composition \(KK^*\) as the Green’s function for a boundary value problem, whose eigenfunctions we will need later anyway [in Eq. (27)].

Lemma 3

If \(u(x) = KK^* f(x)\), then u is the unique solution to the boundary value problem

$$\begin{aligned} - u''(x) = f(x), \quad u(0) = u'(1) = 0. \end{aligned}$$
(24)

Proof

We see from (21) and (23) that for any h,

$$\begin{aligned} (Kh)'(x)&= h(x), \nonumber \\ (K^* h)'(x)&= - h(x). \end{aligned}$$
(25)

So, if \(u(x) = KK^* f(x)\), then \(- u''(x) = f(x)\). For the left boundary condition, from (21), we find that

$$\begin{aligned} u(0)=(KK^*f)(0)=0. \end{aligned}$$

For the right boundary condition, from (25) and (23),

$$\begin{aligned} u'(1)=(KK^*f)'(1)=(K^*f)(1)=0. \end{aligned}$$

To see that u is unique, suppose \(f=0\) in (24). Then u must be a linear function, but to satisfy the boundary conditions we must have \(u=0\). \(\square \)

By applying the above lemma to functions f in B and K(B), respectively, and repeating the procedure i times, we find that

$$\begin{aligned} A^{2i}_{2} = (KK^*)^{i}(B),\qquad A^{2i+1}_{2} = (KK^*)^{i}K(B), \end{aligned}$$

where \(A^{2i}_{2}\) and \(A^{2i+1}_{2}\) are as in equation (1). Observe that the left-most operator for the function class \(A^r_2\) is always K, and so \(A^r_2\) is an instance of \(A^r\) in (15).

7.1 Proof of Theorem 1 for \(A^r_2\)

Analogously to Lemma 3, we have, for the other composition \(K^*K\):

Lemma 4

If \(u(x) = K^*K f(x)\), then u is the unique solution to the boundary value problem

$$\begin{aligned} - u''(x) = f(x), \quad u'(0) = u(1) = 0. \end{aligned}$$

From Lemma 4, we see that the eigenvalues and eigenfunctions of \(K^*K\) are

$$\begin{aligned} \begin{aligned} \lambda _n = \frac{1}{(n-1/2)^2 \pi ^2}, \qquad \phi _n(x) = \cos (n-1/2) \pi x, \qquad n=1,2,\ldots . \end{aligned} \end{aligned}$$
(26)

From Lemma 3, the operator \(KK^*\) has the same eigenvalues, but the eigenfunctions are

$$\begin{aligned} \psi _n(x) = \sin (n-1/2) \pi x, \qquad n=1,2,\ldots . \end{aligned}$$
(27)

So, by Theorem 3, the n-width of \(A^1_2\) is as given in Eq. (2), and an optimal subspace is as given in (5). The analogous results for \(A^r_2\), \(r>1\), follow from Eq. (16).
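These values are easy to confirm numerically. The following sketch (our own illustration, assuming NumPy) discretizes K in (21) by the midpoint rule; the singular values of the resulting matrix approximate the s-numbers \(\lambda _n^{1/2}=1/((n-1/2)\pi )\) of (26):

```python
import numpy as np

m = 2000
h = 1.0 / m
K = h * (np.tril(np.ones((m, m))) - 0.5 * np.eye(m))  # midpoint rule for (21)
s = np.linalg.svd(K, compute_uv=False)                # approximate s-numbers
for n in range(1, 5):
    print(s[n - 1], 1.0 / ((n - 0.5) * np.pi))        # n = 1: 0.63662 vs 2/pi
```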

7.2 Proof of Theorem 2 for \(A^r_2\)

We have already seen that the function class \(A^r_2\) is the function class \(A^r\) in (15) when K has a kernel as given in (22). Since it is well known that this choice of K is NTP [5, p. 16], we can apply Theorem 4 to the spaces \(X_n^0\) in (13) and \(Y_n^0\) in (20). All that remains to show is that the optimal subspaces \(X_n^d\) generated as in Eq. (17) are the spline spaces we claim.

The zeros \(\xi _j\) of \(\phi _{n+1}(x)\) in (26) are the knots in the even degree case for the knot vector \({\varvec{\tau }}_2\) in Eq. (7), and the zeros \(\eta _j\) of \(\psi _{n+1}(x)\) in (27) are the knots in the odd degree case. Thus, \(X_n^0\) in (13), with the kernel of K as in Eq. (22), is equal to

$$\begin{aligned} X_n^0 = [K(\cdot ,\xi _1),\ldots , K(\cdot ,\xi _n)] = S_{0,2}, \end{aligned}$$

where \(S_{0,2}\) is the piecewise constant spline space given in Eq. (6). To find \(X_n^1\), we perform a simple calculation to see that

$$\begin{aligned} KK^*(x,y) = (K(x,\cdot ), K(y,\cdot )) = {\left\{ \begin{array}{ll} x, &{} x<y,\\ y, &{} x>y, \end{array}\right. } \end{aligned}$$

and so, \(X_n^1 =K(Y_n^0)=[(KK^*)(\cdot ,\eta _1),\ldots , (KK^*)(\cdot ,\eta _n)] = S_{1,2},\) the piecewise linear spline space given in Eq. (6). The remaining \(X_n^d\), for \(d\ge 2\), can be found by using the fact that \(X_n^{d+2} = KK^*(X_n^d)\) and applying Lemma 3, since the derivative of a spline is a spline on the same knot vector of one degree lower.
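As a concrete sanity check of the case \(r=1\), \(d=0\) (a numerical sketch of our own, not part of the proof), one can verify via (9) that \(S_{0,2}\) attains the n-width \(d_n(A^1_2)=1/((n+1/2)\pi )\):

```python
import numpy as np

m, n = 4000, 4
h = 1.0 / m
xm = (np.arange(m) + 0.5) * h                         # quadrature midpoints
K = h * (np.tril(np.ones((m, m))) - 0.5 * np.eye(m))  # midpoint rule for (21)

# S_{0,2}: piecewise constants on the knots (2j-1)/(2n+1), vanishing on [0, xi_1)
edges = np.append(np.arange(1, 2 * n, 2) / (2 * n + 1), 1.0)
B = np.array([(edges[j] <= xm) & (xm < edges[j + 1]) for j in range(n)], float).T
Q, _ = np.linalg.qr(np.sqrt(h) * B)            # L2-orthonormal basis of S_{0,2}
R = K - Q @ (Q.T @ K)                          # (I - P_n) K, cf. (9)
print(np.linalg.svd(R, compute_uv=False)[0])   # ~ 0.07074
print(1.0 / ((n + 0.5) * np.pi))               # d_n(A^1_2) = 0.07074
```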

Remark

We note that interchanging the roles of K and \(K^*\) shows that the subspaces \(Y_n^d\) are optimal for the sets defined by interchanging the boundary conditions in \(A^r_{2}\), i.e., odd derivatives set to zero at the left-hand side, and even derivatives set to zero at the right-hand side. One finds that the subspaces \(Y_n^d\) are equal to their corresponding ‘dual’ subspaces \(X_n^d\), just with interchanged boundary conditions and interchanged knots (i.e., replacing the even degree case for \({\varvec{\tau }}_2\) in (7) with the odd degree case, and vice versa).

8 Symmetric Boundary Conditions

In this section, we study the n-width problems for the remaining function classes \(A^r_0\) and \(A^r_1\) in (1). Let \(K_1\) be the operator given by

$$\begin{aligned} K_1 = (I-Q)K, \end{aligned}$$
(28)

where Q is the orthogonal projection onto the constant functions, \(\Pi _{0}\), and K is again the operator (21). From [7], we know that the set \(A^1_1\) given in equation (1) can be written as the orthogonal sum

$$\begin{aligned} A^1_1= \Pi _0 \oplus K_1(B). \end{aligned}$$

It follows from [9, Chap. IV, Sec. 3.2] that the set \(A^1_0\) in Eq. (1) can be written as

$$\begin{aligned} A^1_0&=\{ u\in H^1 : \Vert u'\Vert \le 1, \quad u(0)=u(1)=0 \},\\&= \{K^*f : \Vert f\Vert \le 1,\quad f\perp 1\} =K^*(I-Q)(B)= K_1^*(B). \end{aligned}$$

The kernel of \(K_1K_1^*\) is the Green’s function to the boundary value problem

$$\begin{aligned} -u''(x)=f(x),\quad u'(0)=u'(1)=0,\quad u,f\perp 1 \end{aligned}$$
(29)

(see, e.g., [3, Lemma 4]). Using Eq. (29) i times and then adding back the constants, we find that

$$\begin{aligned} A^{2i}_1 = \Pi _0\oplus (K_1K_1^*)^i(B),\qquad A^{2i+1}_1 = \Pi _0\oplus (K_1K_1^*)^iK_1(B), \end{aligned}$$
(30)

where \(A^{2i}_1\) and \(A^{2i+1}_1\) are as in equation (1). The kernel of \(K_1^*K_1\) is the Green’s function to the boundary value problem

$$\begin{aligned} -u''(x)=f(x),\quad u(0)=u(1)=0 \end{aligned}$$
(31)

(see, e.g., [3, Lemma 3]). Then, using Eq. (31) i times, we find that

$$\begin{aligned} A^{2i}_0=(K_1^*K_1)^i(B), \qquad A^{2i+1}_0=(K_1^*K_1)^iK_1^*(B), \end{aligned}$$

where \(A^{2i}_0\) and \(A^{2i+1}_0\) are as in Eq. (1). Observe that the left-most operator for the function class \(A^r_0\) is always \(K_1^*\), and so \(A^r_0\) is an instance of \(A^r_*\) in (15). The function class \(A^r_1\), on the other hand, is not quite an instance of \(A^r\) in (15), but it is of the form \(\Pi _0\oplus A^r\).

8.1 Proof of Theorem 1 for \(A^r_0\) and \(A^r_1\)

From Eq. (31), we see that the eigenvalues and eigenfunctions of \(K_1^*K_1\) are

$$\begin{aligned} \begin{aligned} \lambda _n = \frac{1}{(n\pi )^2}, \qquad \phi _n(x) = \sin (n \pi x), \qquad n=1,2,\ldots . \end{aligned} \end{aligned}$$
(32)

The operator \(K_1K_1^*\) has the same eigenvalues, but the eigenfunctions are

$$\begin{aligned} \psi _n(x) = \cos (n \pi x), \qquad n=1,2,\ldots . \end{aligned}$$
(33)

So, by Theorem 3, the n-widths of both \(A^1_0=K_1^*(B)\) and \(K_1(B)\) are equal to \(d_n(A^1_0)\) in Eq. (2). An optimal n-dimensional subspace for \(A^1_0\) is as given in (3), and an optimal n-dimensional subspace for \(K_1(B)\) is \( [\cos (\pi x), \cos (2 \pi x), \ldots , \cos (n \pi x)].\) Since this subspace is orthogonal to \(\Pi _0\), it follows that an optimal \((n+1)\)-dimensional subspace for \(A^1_1=\Pi _0\oplus K_1(B)\) is

$$\begin{aligned}{}[1, \cos (\pi x), \cos (2 \pi x), \ldots , \cos (n \pi x)], \end{aligned}$$

thus showing that (4) is an optimal n-dimensional space and that the n-width of \(A^1_1\) is as given in (2). Note the index-shift caused by \(\Pi _0\): the n-width of \(K_1(B)\) is equal to \(\lambda _{n+1}^{1/2}\), but the n-width of \(A^1_1\) is equal to \(\lambda _{n}^{1/2}\) in (32). As before, the analogous results for \(A^r_0\) and \(A^r_1\), \(r>1\), follow from equation (16).
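The eigenvalues (32) can be confirmed in the same way (again our own sketch, assuming NumPy): discretizing \(K_1=(I-Q)K\) from (28), the leading singular values approach \(\lambda _n^{1/2}=1/(n\pi )\):

```python
import numpy as np

m = 2000
h = 1.0 / m
K = h * (np.tril(np.ones((m, m))) - 0.5 * np.eye(m))  # midpoint rule for (21)
K1 = K - np.full((m, m), 1.0 / m) @ K                 # (I - Q) K, Q = mean value
s = np.linalg.svd(K1, compute_uv=False)
for n in range(1, 5):
    print(s[n - 1], 1.0 / (n * np.pi))                # n = 1: 0.31831 vs 1/pi
```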

8.2 Proof of Theorem 2 for \(A^r_0\) and \(A^r_1\)

To prove Theorem 2 for \(A^r_0\) and \(A^r_1\), we will use Theorem 4 with \(K_1\) playing the role of the generic operator K. We must therefore identify the first optimal space \(X_n^0\) for \(K_1(B)\) and the first optimal space \(Y_n^0\) for \(A^1_0=K_1^*(B)\). Unlike K in Eq. (21), \(K_1\) is not NTP (specifically, it is not totally positive), and this creates an extra challenge compared with Sect. 7.2. Fortunately, as shown in [7, 8], the operator \(K_1^*K_1\) is in fact NTP, and we can make use of this and other results in [7, Section 5]. Specifically, we have from [7, Theorem 5.1] that

$$\begin{aligned} X_{n}^{0} =[K_1(\cdot ,\xi _1),\ldots ,K_1(\cdot ,\xi _{n})] \end{aligned}$$

is an optimal subspace for the n-width of \(K_1(B)\), where the \(\xi _j\), for \(j=1,2,\ldots ,n\), are the n zeros of \(\phi _{n+1}(x)\) in (32). Observe that these \(\xi _j\)’s are the knots in the odd degree case for the knot vector \({\varvec{\tau }}_0\) in (7).

Now, we consider \(Y_n^0\). First, let \(\eta _j\), for \(j=1,2,\ldots ,n+1\), be the \(n+1\) zeros of \(\psi _{n+1}(x)\) in (33), which are the knots in the even degree case of \({\varvec{\tau }}_0\) in (7). Additionally, let J be the interpolation operator from C[0, 1] to \(\Pi _{0}\) determined by interpolating at \(\eta _1\), and define the operator

$$\begin{aligned} \overline{K}_1 = (I-J)K, \end{aligned}$$

where K still is the operator (21). If we let \(Y_n^0\) be the n-dimensional space

$$\begin{aligned} Y_{n}^0 = [\overline{K}_1^*(\cdot ,\eta _{2}),\ldots ,\overline{K}_1^*(\cdot ,\eta _{n+1}) ], \end{aligned}$$
(34)

then the proof of [7, Theorem 5.1] (or [9, Theorem 5.11 p. 121]) contains the following important result.

Lemma 5

If \(P_n\) is the orthogonal projection onto the space \(Y_n^0\) and \(\lambda _{n+1}\) is as in (32), then

$$\begin{aligned} \sup _{\Vert f\Vert \le 1}\Vert \overline{K}_1(I-P_n)f\Vert \le \lambda _{n+1}^{1/2}. \end{aligned}$$
(35)

Proof

The operator \(\overline{K}_1\) is a special case of the operator \(K_1\) in [9] (as explained on page 124). It therefore satisfies the assumptions of [9, Theorem 5.11 p. 121], and inequality (35) is then proved on page 122. Note the shift in index; the above \(n+1\) corresponds to n in [9, Theorem 5.11]. \(\square \)

Using this inequality, we can show the following.

Theorem 5

The space \(Y_n^0\) is optimal for \(A_0^1=K_1^*(B)\).

Proof

Let \(P_n\) be the orthogonal projection onto \(Y_n^{0}\) in (34). To prove that \(Y_n^{0}\) is an optimal subspace for \(A_0^1\), we need to show that

$$\begin{aligned} E(A_0^1, Y_n^{0})\le d_n(A_0^1), \end{aligned}$$

or equivalently,

$$\begin{aligned} \Vert (I-P_n)K_1^*\Vert _2\le \lambda _{n+1}^{1/2}, \end{aligned}$$

with \(\lambda _{n+1}\) as given in (32). First observe that

$$\begin{aligned} \Vert (I-P_n)K_1^*\Vert _2= \Vert K_1(I-P_n)\Vert _2= \Vert (I-Q)K(I-P_n)\Vert _2. \end{aligned}$$

Next, since both J and Q are projections onto the constants, \(\Pi _0\), we have \((I-Q)(I-J)=I-Q\). Moreover, Q is the orthogonal projection, so \(\Vert I-Q\Vert _2\le 1\), and we must have

$$\begin{aligned} \Vert (I-Q)K(I-P_n)\Vert _2\le \Vert (I-J)K(I-P_n)\Vert _2=\Vert \overline{K}_1(I-P_n)\Vert _2. \end{aligned}$$

Hence, the result follows from Lemma 5. \(\square \)

Remark

Melkman and Micchelli [7, Theorem 5.1] used the inequality in Lemma 5 to directly conclude that the \((n+1)\)-dimensional space

$$\begin{aligned} \Pi _{0} + [(\overline{K}_1\, \overline{K}_1^*)(\cdot ,\eta _{2}),\ldots ,(\overline{K}_1\, \overline{K}_1^*)(\cdot ,\eta _{n+1}) ] \end{aligned}$$

is optimal for the set \(A^1_1=\Pi _{0}\oplus K_1(B)\). On the other hand, from [3, Lemma 1] and the above Theorem 5, it follows that \(K_1(Y_{n}^{0})\) is an optimal space for \(K_1(B)\), and so

$$\begin{aligned} \Pi _{0}\oplus K_1(Y_{n}^{0})=\Pi _{0}\oplus [(K_1\overline{K}_1^*)(\cdot ,\eta _{2}),\ldots ,(K_1\overline{K}_1^*)(\cdot ,\eta _{n+1})] \end{aligned}$$

is optimal for \(A^1_1\). This is consistent with their result, since the difference

$$\begin{aligned} (K_1\overline{K}_1^*)(\cdot ,\eta _{j})-(\overline{K}_1\,\overline{K}_1^*)(\cdot ,\eta _{j}) \end{aligned}$$

is a constant for any \(j=2,\ldots ,n+1\).

We now have the first optimal space \(X_n^0\) for \(K_1(B)\) and the first optimal space \(Y_n^0\) for \(A^1_0=K_1^*(B)\), and so we can apply Theorem 4. To do this, let us express \(A^r_1\) in Eq. (30) as

$$\begin{aligned} A^r_1=\Pi _0\oplus \tilde{A^r_1}. \end{aligned}$$

Now, if \(X_n^d\) and \(Y_n^d\) are generated as in (17) with \(K_1\) playing the role of the generic K, then it follows from Theorem 4 that, for all \(r\ge 1\),

  • the n-dimensional spaces \(X_n^d\) are optimal for the n-width of \(\tilde{A^r_1}\), and

  • the n-dimensional spaces \(Y_n^d\) are optimal for the n-width of \(A^r_0\)

for all \(d\ge r-1\). Note further that these spaces are all n-dimensional since the n-widths in (2) are strictly decreasing. Moreover, since both \(X_n^d\perp \Pi _{0}\) and \(\tilde{A^r_1}\perp \Pi _{0}\), we find that the \((n+1)\)-dimensional spaces \(\Pi _{0}\oplus X_n^d\) are optimal for \(A^r_1=\Pi _{0}\oplus \tilde{A^r_1}\) for \(d\ge r-1\).

The remaining task is to recognize the spaces \(\Pi _0\oplus X_n^d\) and \(Y_n^d\) as spline spaces. As already stated, the optimal spaces \(\Pi _0\oplus X_n^d\) were identified in [3], and we have the equality

$$\begin{aligned} S_{d,1} = \Pi _0\oplus X_{n-1}^{d}, \end{aligned}$$

where \(S_{d,1}\) is the n-dimensional space defined in (6). However, only the spline spaces \(Y_n^d\) when d is odd were found in [3]. In that case, we have

$$\begin{aligned} S_{d,0}=Y_n^{d}, \end{aligned}$$
(36)

with \(S_{d,0}\) also as in (6). Now, using the definition of \(\overline{K}_1\), we find that the kernel \(\overline{K}_1^*(x,y) = \overline{K}_1(y,x)\) is equal to

$$\begin{aligned} \overline{K}_1^*(x,y) = {\left\{ \begin{array}{ll} 0, \quad x<\eta _1, \\ 1, \quad \eta _1<x<y, \\ 0, \quad x>y, \end{array}\right. } \end{aligned}$$

for \(y>\eta _1\). The space \(Y_n^0\) in Eq. (34) is then the space of piecewise constant splines with knots \(\eta _j\), \(j=1,\ldots ,n+1\), that vanish on the intervals \([0,\eta _1)\) and \((\eta _{n+1},1]\). Since \(Y_n^2=K_1^*K_1(Y_n^0)\), and so on, we know from (31) that Eq. (36) also holds in the case of d even. This proves Theorem 2 for \(A^r_0\) and \(A^r_1\).
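As a final numerical sanity check (again our own sketch, not part of the proof), one can confirm that \(Y_n^0=S_{0,0}\), with the knots \(\eta _j=(j-1/2)/(n+1)\), attains the n-width \(d_n(A^1_0)=1/((n+1)\pi )\) for \(A^1_0=K_1^*(B)\), mirroring the check in Sect. 7.2:

```python
import numpy as np

m, n = 4000, 4
h = 1.0 / m
xm = (np.arange(m) + 0.5) * h
K = h * (np.tril(np.ones((m, m))) - 0.5 * np.eye(m))  # midpoint rule for (21)
K1s = (K - np.full((m, m), 1.0 / m) @ K).T            # K_1^* = K^*(I - Q)

eta = (np.arange(1, n + 2) - 0.5) / (n + 1)           # even degree case of tau_0
B = np.array([(eta[j] <= xm) & (xm < eta[j + 1]) for j in range(n)], float).T
Q, _ = np.linalg.qr(np.sqrt(h) * B)      # S_{0,0}: constants on [eta_j, eta_j+1)
R = K1s - Q @ (Q.T @ K1s)                # (I - P_n) K_1^*
print(np.linalg.svd(R, compute_uv=False)[0])   # ~ 0.06366
print(1.0 / ((n + 1) * np.pi))                 # d_n(A^1_0) = 0.06366
```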

9 Basis Functions

In this section we describe how to create a local basis for the spline spaces \(S_{d,i}\), \(i=0,1,2\). First consider \(i=1\). An explanation of how to construct a local basis for \(S_{d,1}\) (with d even) is presented in [10]. The basic idea consists of three parts. First, start with our uniform knot vector \({\varvec{\tau }}_1\) in (7), and extend it to a uniform knot vector on the whole real line. Second, construct all the B-splines on this infinite knot vector that have nonzero support on (0, 1). Third, identify the B-splines that cross the boundary and add them together in pairs, chosen in such a manner that the symmetry of uniform B-splines ensures the boundary conditions (all odd derivatives set to zero) are satisfied. Figure 2 shows the basis functions for \(S_{d,1}\) of degree 0 to 3 with knot-distance 0.2 (\(n=5\)); a code sketch of this construction follows the figure.

Fig. 2 Basis functions for \(S_{d,1}\)
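The following sketch (our own, assuming SciPy; the B-spline indexing convention is ours) carries out this construction for \(d=2\) and \(n=5\). Each interior B-spline already satisfies the boundary conditions, and one folded pair at each end completes a basis of dimension n:

```python
import numpy as np
from scipy.interpolate import BSpline

d, n = 2, 5
x = np.linspace(0.0, 1.0, 501)

def B(j):
    # Uniform B-spline of degree d with knots j/n, ..., (j+d+1)/n,
    # evaluated on [0,1]; points outside its support are set to zero.
    f = BSpline.basis_element(np.arange(j, j + d + 2) / n, extrapolate=False)
    return np.nan_to_num(f(x))

basis = [B(-2) + B(-1)]                    # left boundary pair: even about x = 0
basis += [B(j) for j in range(0, n - 2)]   # interior B-splines
basis += [B(n - 2) + B(n - 1)]             # right boundary pair: even about x = 1
assert len(basis) == n                     # dim S_{2,1} = n
```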

Next, we consider \(i=0\). Constructing a basis for \(S_{d,0}\) can be done by essentially the same procedure as for \(S_{d,1}\). Instead of adding pairs of B-splines together, we take differences. The symmetry of uniform B-splines will again ensure that the boundary conditions (all even derivatives set to zero) are satisfied. Figure 3 shows the basis functions for \(S_{d,0}\) of degree 0 to 3 with knot-distance 0.2 (\(n=4\)).

Fig. 3 Basis functions for \(S_{d,0}\)

Regarding \(i=2\), adding pairs of B-splines together on the right-hand side and subtracting on the left-hand side will give a basis for \(S_{d,2}\). Figure 4 shows the basis functions for \(S_{d,2}\) of degree 0 to 3 with knot-distance 2/9 (\(n=4\)).

Fig. 4 Basis functions for \(S_{d,2}\)