
1 Problem Formulation

A sequence of interpolation points \(\mathscr {M}=\{x_0,x_1,x_2,\dots ,x_n\}\) (here \(n\ge 2\)) in Euclidean space \(E^m\) is called reduced data if the corresponding interpolation knots \(\{t_i\}_{i=0}^n\) are not given (see e.g. [6, 10, 12, 13, 20, 23, 25, 29, 30]). Let the class of admissible curves \(\gamma \) (denoted by \({\mathscr {I}}_T\)) be the set of piecewise \(C^{2}\) curves \(\gamma :[0,T]\rightarrow E^m\) interpolating \(\mathscr {M}\) at ordered, free and unknown admissible knots \(\{t_i\}_{i=0}^{n}\), i.e. satisfying \(\gamma (t_i)=x_i\). Here the knots \(t_i<t_{i+1}\) are free, except that upon re-scaling we set \(t_0=0\) and \(t_n=T\) for an arbitrary constant \(T>0\). More precisely, for each choice of ordered knots, the curve \(\gamma \) is assumed to be \(C^2\) everywhere except at the knots \(\{t_i\}_{i=0}^n\), where it is required to be merely \(C^1\). The analysis to follow is not restricted to the narrower class of \(\gamma \in C^2([t_0,t_n])\) because of the ultimate choice of computational scheme (called herein Leap-Frog - see [18, 24, 27, 28]) which effectively handles the optimization problem (1). However, the optimum computed by Leap-Frog belongs to the tighter class of functions coinciding with \(C^2([t_0,t_n])\), as addressed in [17, 18].

Assume now that we search for an optimal \(\gamma _{opt}\in {\mathscr {I}}_T\) minimizing:

$$\begin{aligned} \mathscr {J}_T(\gamma )=\int _{t_0}^T \Vert {\ddot{\gamma }}(t)\Vert ^2dt=\sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \Vert {\ddot{\gamma }}(t)\Vert ^2dt. \end{aligned}$$
(1)

The latter defines an infinite dimensional optimization task over \({\mathscr {I}}_T\). The unknown interpolation knots \(\{t_i\}_{i=0}^n\) (with \(t_0=0\) and \(t_n=T>0\) fixed) belong to:

$$\begin{aligned} \varOmega _{t_0}^{T}= \{(t_1,t_2,\dots ,t_{n-1})\in \mathbb {R}^{n-1}: t_0=0<t_1<t_2<\ldots<t_{n-1}<t_n=T<\infty \}. \end{aligned}$$
(2)

For any affine reparameterization \(\phi :[0,T]\rightarrow [0,\tilde{T}]\) defined by \(\phi (t)=t{\tilde{T}}/T\) (with \(t=\phi ^{-1}(s)=sT/{\tilde{T}}\), so that \((\phi ^{-1})'\equiv T/{\tilde{T}}\) and \((\phi ^{-1})''\equiv 0\)), formula (1) applied to \(\tilde{\gamma }(s)=(\gamma \circ \phi ^{-1})(s)\) reads:

$$\begin{aligned} \mathscr {J}_{\tilde{T}}(\tilde{\gamma })= & {} \sum _{i=0}^{n-1}\int _{s_i}^{s_{i+1}} \Vert {\ddot{\tilde{\gamma }}}(s)\Vert ^2ds={T^3\over \tilde{T}^3} \sum _{i=0}^{n-1}\int _{s_i}^{s_{i+1}} (\phi ^{-1})'(s)\Vert (\ddot{\gamma }\circ \phi ^{-1})(s)\Vert ^2ds\\= & {} {T^3\over \tilde{T}^3}\mathscr {J}_{T}(\gamma ). \end{aligned}$$
(3)

Thus, a curve \(\gamma _{opt}\in {\mathscr {I}}_T\) is optimal for \(\mathscr {J}_{T}\) if and only if the corresponding \({\tilde{\gamma }}_{opt}\in {\mathscr {I}}_{\tilde{T}}\) is optimal for \(\mathscr {J}_{\tilde{T}}\). Hence \(t_n=T\) can be chosen arbitrarily, and with the additional affine shift \(\phi (t)=t-t_0\), one can also set \(t_0=0\).
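Identity (3) is also easy to verify numerically. The following minimal Python sketch (the toy curve \(\gamma (t)=(t^2,t^3)\) and the values \(T=2\), \(\tilde{T}=5\) are our own illustrative assumptions, not data from this paper) integrates \(\Vert \ddot{\gamma }\Vert ^2\) before and after the affine reparameterization and recovers the factor \(T^3/\tilde{T}^3\):

```python
import numpy as np
from scipy.integrate import quad

# Toy verification of (3): gamma(t) = (t^2, t^3) on [0, T], rescaled to [0, T_tilde].
T, T_tilde = 2.0, 5.0
c = T / T_tilde                                    # (phi^{-1})'(s) = T / T_tilde

J_T = quad(lambda t: 2.0**2 + (6.0 * t)**2, 0.0, T)[0]   # ||gamma''(t)||^2 integrated
# tilde gamma(s) = gamma(c s) has second derivative (2 c^2, 6 c^3 s)
J_Tt = quad(lambda s: (2.0 * c**2)**2 + (6.0 * c**3 * s)**2, 0.0, T_tilde)[0]

print(J_Tt, (T**3 / T_tilde**3) * J_T)             # both print 6.656
```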

Recall now the cubic spline interpolant \(\gamma ^{C_i}_\mathscr {T}=\gamma ^{C}_\mathscr {T}|_{[t_i,t_{i+1}]}\) (see e.g. [3]), which for given, temporarily fixed admissible interpolation knots \(\mathscr {T}=(t_0,t_1,\dots ,t_{n-1},t_n)\) reads:

$$\begin{aligned} \gamma _\mathscr {T}^{C_i}(t)=c_{1,i}+c_{2,i}(t-t_i)+c_{3,i}(t-t_i)^2 +c_{4,i}(t-t_i)^3, \end{aligned}$$
(4)

and fulfills (for \(i=0,1,2,\dots ,n-1\); \(c_{j,i}\in \mathbb {R}^m\), where \(j=1,2,3,4\))

$$\begin{aligned} \gamma _\mathscr {T}^{C_i}(t_{i+k})=x_{i+k}, \quad \dot{\gamma }_\mathscr {T}^{C_i}(t_{i+k})=v_{i+k},\quad k=0,1 \end{aligned}$$

with the assumed unknown velocities \(v_0,v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\). The coefficients \(c_{j,i}\) (with \(\varDelta t_i=t_{i+1}-t_i\)) are defined as follows:

$$\begin{aligned} c_{1,i}= & {} x_i,\quad \quad c_{2,i}=v_i,\nonumber \\ c_{4,i}= & {} {v_i+v_{i+1}-2{x_{i+1}-x_i\over \varDelta t_i}\over (\varDelta t_i)^2}, \quad \quad c_{3,i}={{(x_{i+1}-x_i)\over \varDelta t_i}-v_i\over \varDelta t_i}-c_{4,i}\varDelta t_i. \end{aligned}$$
(5)
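For completeness, (4)-(5) translate directly into code. A minimal Python sketch follows (the helper name hermite_coeffs and the test data are our own illustrative choices, not taken from this paper); the assertions confirm the interpolation conditions at \(t_{i+1}\):

```python
import numpy as np

def hermite_coeffs(ti, ti1, xi, xi1, vi, vi1):
    # Coefficients c1..c4 of (4)-(5) on [t_i, t_{i+1}] for points/velocities in R^m.
    dt = ti1 - ti
    c1, c2 = xi, vi
    c4 = (vi + vi1 - 2.0 * (xi1 - xi) / dt) / dt**2
    c3 = ((xi1 - xi) / dt - vi) / dt - c4 * dt
    return c1, c2, c3, c4

# Quick check: the cubic must hit x_{i+1} with velocity v_{i+1} at t_{i+1}.
c1, c2, c3, c4 = hermite_coeffs(0.0, 2.0, np.array([0., 1.]), np.array([3., -1.]),
                                np.array([1., 0.]), np.array([0., 2.]))
dt = 2.0
assert np.allclose(c1 + c2 * dt + c3 * dt**2 + c4 * dt**3, [3., -1.])
assert np.allclose(c2 + 2 * c3 * dt + 3 * c4 * dt**2, [0., 2.])
```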

Adding the \(n-1\) conditions \(\ddot{\gamma }_\mathscr {T}^{C_{i-1}}(t_{i})=\ddot{\gamma }_\mathscr {T}^{C_i}(t_{i})\) at the internal points \(x_1,x_2,\dots ,x_{n-1}\) yields m tridiagonal linear systems (see [3]) of \(n-1\) equations in the \(n+1\) vector unknowns \(v_0, v_1,\dots ,v_n\in \mathbb {R}^m\):

$$\begin{aligned} v_{i-1}\varDelta t_i+ & {} 2v_i(\varDelta t_{i-1}+\varDelta t_i)+v_{i+1}\varDelta t_{i-1}=b_i,\nonumber \\ b_i= & {} 3\left( \varDelta t_i{x_i-x_{i-1}\over \varDelta t_{i-1}} +\varDelta t_{i-1}{x_{i+1}-x_i\over \varDelta t_i}\right) . \end{aligned}$$
(6)

In the case of the so-called natural cubic spline interpolant (denoted by \(\gamma _\mathscr {T}^{C}=\gamma _\mathscr {T}^{NS}\)), two extra constraints involving \(v_0\) and \(v_{n}\) stipulate that \(\ddot{\gamma }_\mathscr {T}^{C}(0)=\ddot{\gamma }_\mathscr {T}^{C}(T)=0\), which leads to:

$$\begin{aligned} 2v_0+v_1=3{x_1-x_0\over \varDelta t_0},\quad v_{n-1}+2v_n=3{x_n-x_{n-1}\over \varDelta t_{n-1}}. \end{aligned}$$
(7)

The resulting m linear systems (i.e. (6) combined with (7)), each of size \((n+1)\times (n+1)\), determine unique vectors \(v_0,v_1,v_2,\dots ,v_n\) (see [3, Chap. 4]), which when fed into (5) and then passed to (4) determine explicitly a natural cubic spline \(\gamma _\mathscr {T}^{NS}\) (with fixed \(\mathscr {T}\)). Visibly, all velocities \(\{v_i\}_{i=0}^n\) computed by the above procedure (and, thus, \(\gamma _\mathscr {T}^{NS}\)) depend in fact on the interpolation knots \(\{t_i\}_{i=0}^n\) and the fixed data \(\mathscr {M}\). It is well known (see e.g. [3]) that if the respective knots \(\{t_i\}_{i=0}^n\) are frozen, the optimization task (1) is minimized by the unique natural spline \(\gamma _\mathscr {T}^{NS}\) determined by \(\{t_i\}_{i=0}^n\) and \(\mathscr {M}\). Therefore, upon relaxing all internal knots \(\{t_i\}_{i=1}^{n-1}\) in (1) (with the terminal knots fixed arbitrarily, e.g. to \(t_0=0\) and \(t_n=T\)) one arrives at the following (see [3, 17,18,19]):
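The systems (6)-(7) are equally direct to implement. A minimal Python sketch follows (the helper natural_velocities is our own name; for brevity a dense solver replaces the dedicated tridiagonal routine that a production code would prefer):

```python
import numpy as np

def natural_velocities(t, x):
    # Solve (6)-(7) for the unknown velocities v_0,...,v_n of the natural spline.
    # t: knots of shape (n+1,);  x: interpolation points of shape (n+1, m).
    n = len(t) - 1
    dt = np.diff(t)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros_like(x, dtype=float)
    A[0, 0], A[0, 1] = 2.0, 1.0                      # first equation of (7)
    b[0] = 3.0 * (x[1] - x[0]) / dt[0]
    for i in range(1, n):                            # interior equations (6)
        A[i, i - 1 : i + 2] = dt[i], 2.0 * (dt[i - 1] + dt[i]), dt[i - 1]
        b[i] = 3.0 * (dt[i] * (x[i] - x[i - 1]) / dt[i - 1]
                      + dt[i - 1] * (x[i + 1] - x[i]) / dt[i])
    A[n, n - 1], A[n, n] = 1.0, 2.0                  # second equation of (7)
    b[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    return np.linalg.solve(A, b)                     # one solve covers all m columns
```

Since all m component systems share the same matrix, a single solve with an \((n+1)\times m\) right-hand side recovers all velocities at once.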

Theorem 1

For a given \(\mathscr {M}\) with points in Euclidean space \(E^m\), the subclass of natural splines \({\mathscr {I}}^{NS}\subset {\mathscr {I}}_{T}\) satisfies

$$\begin{aligned} \min _{\gamma \in {\mathscr {I}}_T}\mathscr {J}_T(\gamma )= \min _{\gamma ^{NS}\in {\mathscr {I}}^{NS}} \mathscr {J}_T(\gamma ^{NS}), \end{aligned}$$
(8)

which reduces to the finite dimensional optimization in \(\hat{\mathscr {T}}=(t_1,t_2,\dots ,t_{n-1})\) over the non-compact set \(\varOmega _{t_0}^{T}\) introduced in (2):

$$\begin{aligned} \mathscr {J}_{T}(\gamma ^{NS}_{opt})= & {} \min _{{\hat{\mathscr {T}}}\in \varOmega _{t_0}^{T}} \mathscr {J}_{T}^F(t_1,t_2,\dots ,t_{n-1})\nonumber \\= & {} \min _{{\hat{\mathscr {T}}} \in \varOmega _{t_0}^{T}} 4\sum _{i=0}^{n-1}{-1\over (\varDelta t_i)^3}\Big (-3\Vert x_{i+1}-x_i\Vert ^2 +3\langle v_i+v_{i+1}|x_{i+1}-x_i\rangle \varDelta t_i\nonumber \\&\quad \quad \quad \quad \quad \quad -(\Vert v_i\Vert ^2+\Vert v_{i+1}\Vert ^2 +\langle v_i|v_{i+1}\rangle )(\varDelta t_i)^2\Big ), \end{aligned}$$
(9)

for which at least one global minimum \(\hat{\mathscr {T}}_{opt}=(t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\in \varOmega _{t_0}^{T}\) exists.
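Given the velocities, the summand of (9) evaluates segment-wise in closed form. A minimal Python sketch follows (energy_F is our own helper name, reusing natural_velocities from the previous sketch):

```python
import numpy as np

def energy_F(t, x):
    # Closed-form value of (9); t = (t_0,...,t_n), x = (n+1, m) interpolation points.
    # Assumes natural_velocities from the sketch after (7) is in scope.
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    v = natural_velocities(t, x)
    dt = np.diff(t)
    dx = np.diff(x, axis=0)
    dot = lambda a, b: np.sum(a * b, axis=1)
    seg = (3.0 * dot(dx, dx)
           - 3.0 * dot(v[:-1] + v[1:], dx) * dt
           + (dot(v[:-1], v[:-1]) + dot(v[1:], v[1:]) + dot(v[:-1], v[1:])) * dt**2)
    return float(np.sum(4.0 * seg / dt**3))
```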

We take here the computed optimal values \(\hat{\mathscr {T}}_{opt}\) as estimates \(\{\hat{t}_i\}_{i=0}^n\approx \{t_i\}_{i=0}^n\). In this paper, we demonstrate strong non-linearity and non-convexity effects built into the optimization scheme (9). Relevant examples and theoretical insight are supplied to justify the latter. Sufficient conditions for convexity (or unimodality) of (9) are proved at least for \(n=2\). The complexity of the optimization scheme (9) not only impedes its theoretical analysis but also impacts the choice of a feasible numerical scheme for handling (9) computationally. Finally, this work is supplemented with illustrative examples and numerical tests used to fit input sparse reduced data \(\mathscr {M}\) for various n and \(m=2,3\).

Related work on fitting reduced data \(\mathscr {M}\) (sparse or dense) can also be found in [8, 9, 15, 16, 21, 22, 26, 33, 34]. Some applications in computer vision and graphics, image processing, engineering, physics, and astronomy are discussed e.g. in [1, 2, 5, 7, 11, 21, 31, 32].

2 Non-Linearity of \(\mathscr {J}_{T}^F\) and Numerical Difficulties

First we demonstrate the high non-linearity inherent in the optimization task (9). This is accomplished by generating an explicit formula for (9), whose complexity is substantial even for small n and grows quickly as n increases. The latter is illustrated by the next two examples followed by pertinent computational tests.

Example 1

Consider four data points (i.e. here \(n=3\)) \(\mathscr {M}=\{x_0,x_1,x_2,x_3\}\) in \(E^m\). The formula for \(\mathscr {J}_T^F\) (see (9)) reads here as \(\mathscr {J}_{T_c}^{F,3}(\hat{\mathscr {T}})=\mathscr {J}_0^3+\mathscr {J}_1^3+\mathscr {J}_2^3\) (with \(\mathscr {T}=(t_0,t_1,t_2,t_3)\), \(t_0=0\) and e.g. \(t_3=T=T_c\) - see (12)), where

$$\begin{aligned} \mathscr {J}_0^3= & {} {1\over (t_0 - t_1)^3}(-3\Vert x_0\Vert ^2 - 3\Vert x_1\Vert ^2 + (t_0 - t_1)(3\langle v_0|x_0\rangle - 3\langle v_0|x_1\rangle + 3\langle v_1|x_0\rangle \nonumber \\&- 3\langle v_1|x_1\rangle +(\Vert v_0\Vert ^2 + \Vert v_1\Vert ^2+ \langle v_0|v_1\rangle ) (t_1-t_0))+ 6\langle x_0|x_1\rangle ),\nonumber \\ \mathscr {J}_1^3= & {} {1\over (t_1 - t_2)^3}(-3\Vert x_1\Vert ^2 - 3\Vert x_2\Vert ^2 + (t_1 - t_2)(3\langle v_1|x_1\rangle - 3\langle v_1|x_2\rangle + 3\langle v_2|x_1\rangle \nonumber \\&- 3\langle v_2|x_2\rangle +(\Vert v_1\Vert ^2 + \Vert v_2\Vert ^2+ \langle v_1|v_2\rangle )(t_2-t_1))+ 6\langle x_1|x_2\rangle ),\nonumber \\ \mathscr {J}_2^3= & {} {1\over (t_2 - t_3)^3}(-3\Vert x_2\Vert ^2 - 3\Vert x_3\Vert ^2 + (t_2 - t_3)(3\langle v_2|x_2\rangle - 3\langle v_2|x_3\rangle + 3\langle v_3|x_2\rangle \nonumber \\&- 3\langle v_3|x_3\rangle +(\Vert v_2\Vert ^2 + \Vert v_3\Vert ^2+ \langle v_2|v_3\rangle )(t_3-t_2))+ 6\langle x_2|x_3\rangle ). \end{aligned}$$
(10)

The missing velocities \(\{v_0,v_1,v_2,v_3\}\) for the natural spline \(\gamma _{\mathscr {T}}^{NS}\) (see (4)) are determined here by m linear systems of four equations each, indexed by \(i=1,\dots ,m\) (see (6) and (7))

$$\begin{aligned} \left( \begin{array}{cccc} 2&{}1&{}0&{}0\\ t_2-t_1&{}2(t_2-t_0)&{}t_1-t_0&{}0\\ 0&{}t_3-t_2&{}2(t_3-t_1)&{}t_2-t_1\\ 0&{}0&{}1&{}2\\ \end{array}\right) \left( \begin{array}{c}v_0^i\\ v_1^i\\ v_2^i\\ v_3^i\\ \end{array}\right) =\left( \begin{array}{c}3{x_1^i - x_0^i\over t_1 - t_0}\\ 3({(t_2 - t_1)(x_1^i - x_0^i)\over t_1 - t_0} + {(t_1 - t_0)(x_2^i - x_1^i)\over t_2 - t_1})\\ 3({(t_3 - t_2)(x_2^i - x_1^i)\over t_2 - t_1} + {(t_2 - t_1)(x_3^i - x_2^i)\over t_3 - t_2})\\ 3{x_3^i - x_2^i\over t_3 - t_2}\\ \end{array}\right) \end{aligned}$$

yielding (with the aid of symbolic computation in Mathematica - see [35]) a unique solution. For the sake of this example, we consider exclusively the case \(m=1\). This extends easily to \(m>1\), since both squared norms and dot products (appearing in the non-reduced form of \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}})\)) are additive with respect to vector components. Upon substituting the computed velocities from the above matrix equation into \(\mathscr {J}_{T_c}^{F,3}\) (as previously, we set \(t_0=0\) and \(t_3=T_c\) - see (12)) and taking into account that \(m=1\), the Mathematica function FullSimplify (see [35]) yields an explicit formula for

$$\begin{aligned} \mathscr {J}_{T_c}^{F,3}(t_1,t_2)=N_3(t_1,t_2)/(t_1^2(t_1 - t_2)^2 (t_2-T_c)^2((t_1 + t_2)^2-4T_ct_2)), \end{aligned}$$

where

$$\begin{aligned} N_3(t_1,t_2)= & {} (3(-T_c^3t_2^2(x_0 - x_1)^2 + 2T_c^2t_2^3(x_0 - x_1)^2 + T_ct_2^4(x_0 - x_1)(x_1-x_0)\nonumber \\&+t_1^3(-T_c(x_0 + x_1 - 2x_2) + t_2(x_0 + x_1 - 2x_3))(T_c(x_2-x_0)+ t_2(x_0 - x_3))\nonumber \\&-t_1^2(T_c^3(x_0 - x_2)^2 - 3T_ct_2^2(x_0 - x_2)(x_0 - x_3)\nonumber \\&+t_2^3(x_0 - x_3)(2x_0 - x_2 - x_3))+t_1(2T_c^3t_2(x_0 - x_1)(x_0 - x_2) \nonumber \\&-3T_c^2t_2^2(x_0 - x_1)(x_0 - x_2)+t_2^4(x_0 - x_1)(x_0 - x_3)\nonumber \\&- T_ct_2^3(x_0 - x_1)(x_2 - x_3))-t_1^4(T_c(x_2-x_0)+ t_2(x_0 - x_3))(x_2 - x_3))). \end{aligned}$$

Note that \(N_3(t_1,t_2)\) is a 5th order polynomial in \(t_1\) and \(t_2\).    \(\square \)

Example 2

Let five data points (i.e. here \(n=4\)) \(\mathscr {M}=\{x_0,x_1,x_2,x_3,x_4\}\) be given in \(E^m\). Formula (9) reads here \(\mathscr {J}_{T_c}^{F,4}(\hat{\mathscr {T}}) =\mathscr {J}^4_0+\mathscr {J}^4_1+\mathscr {J}_2^4+\mathscr {J}_3^4\) (for \(\mathscr {T}=(t_0,t_1,t_2,t_3,t_4)\) with \(t_0=0\) and \(t_4=T_c\) - see (12)), where \(\mathscr {J}_k^4=\mathscr {J}_k^3\), for \(k=0,1,2\) (see (10)) and

$$\begin{aligned} \mathscr {J}_3^4= & {} {1\over (t_3 - t_4)^3}(-3\Vert x_3\Vert ^2 - 3\Vert x_4\Vert ^2 + (t_3 - t_4)(3\langle v_3|x_3\rangle - 3\langle v_3|x_4\rangle + 3\langle v_4|x_3\rangle \\&- 3\langle v_4|x_4\rangle +(\Vert v_3\Vert ^2 + \Vert v_4\Vert ^2+ \langle v_3|v_4\rangle )(t_4-t_3))+ 6\langle x_3|x_4\rangle ). \end{aligned}$$

Again, the missing velocities \(\{v_0,v_1,v_2,v_3,v_4\}\) for the natural spline \(\gamma _{\mathscr {T}}^{NS}\) defined by (4) are determined here by m linear systems of five equations each, indexed by \(i=1,\dots ,m\) (see (6) and (7)):

$$\begin{aligned} \left( \begin{array}{ccccc} 2&{}1&{}0&{}0&{}0\\ t_2-t_1&{}2(t_2-t_0)&{}t_1-t_0&{}0&{}0\\ 0&{}t_3-t_2&{}2(t_3-t_1)&{}t_2-t_1&{}0\\ 0&{}0&{}t_4-t_3&{}2(t_4-t_2)&{}t_3-t_2\\ 0&{}0&{}0&{}1&{}2\\ \end{array}\right) \left( \begin{array}{c}v_0^i\\ v_1^i\\ v_2^i\\ v_3^i\\ v_4^i\\ \end{array}\right) =B_i, \end{aligned}$$
(11)

where

$$\begin{aligned} B_i=\left( \begin{array}{c}3{x_1^i - x_0^i\over t_1 - t_0}\\ 3({(t_2 - t_1)(x_1^i - x_0^i)\over t_1 - t_0} + {(t_1 - t_0)(x_2^i - x_1^i)\over t_2 - t_1})\\ 3({(t_3 - t_2)(x_2^i - x_1^i)\over t_2 - t_1} + {(t_2 - t_1)(x_3^i - x_2^i)\over t_3 - t_2})\\ 3({(t_4 - t_3)(x_3^i - x_2^i)\over t_3 - t_2} + {(t_3 - t_2)(x_4^i - x_3^i)\over t_4 - t_3})\\ 3{x_4^i - x_3^i\over t_4 - t_3}\\ \end{array}\right) . \end{aligned}$$

Again, the system (11) renders a unique solution \(\{v_0,v_1,v_2,v_3,v_4\}\) (found e.g. with Mathematica - see [35]). As previously, only the case \(m=1\) is considered here. Upon substituting the computed velocities from (11) into \(\mathscr {J}_{T_c}^{F,4}\) and setting \(t_0=0\) and \(t_4=T_c\) (see (12)), the Mathematica function FullSimplify yields an explicit formula for \(\mathscr {J}_{T_c}^{F,4}(t_1,t_2,t_3)=\)

$$\begin{aligned} {N_4(t_1,t_2,t_3)\over 4t_1^2(t_1 - t_2)^2(t_2 - t_3)^2(t_3-T_c)^2 (t_2(t_3 - t_1)(t_1 + 2t_2 + t_3) + T_c((t_1 + t_2)^2 - 4t_2t_3))}. \end{aligned}$$

It can be checked that \(N_4(t_1,t_2,t_3)\) is an 8th order polynomial in \(t_1\), \(t_2\) and \(t_3\). We omit the full explicit formula for \(N_4(t_1,t_2,t_3)\) since it exceeds one A4 page.    \(\square \)

Examples 1 and 2 indicate the growing complexity of the non-linearity in (9) as n increases. Thus, in a search for the global minimum of (9), any numerical optimization scheme relying on derivative computation (irrespective of the initial guess) faces computational difficulties as n gets bigger. The latter is demonstrated in Example 3 for \(n=7\), where Mathematica FindMinimum applied with the Newton Method fails (see [35]). Similar effects appear when Mathematica Minimize[f,constraints,variables] is invoked (see [35]); the latter works efficiently only when both the minimized function and the imposed constraints (such as inequalities or equations) are expressed as polynomials, which here holds merely for the numerator of the derivative of (9). To alleviate this problem and to efficiently optimize (9), we invoke first a multidimensional version of the Secant Method (not relying on derivative computation) available in Mathematica, e.g. for two free variables as \(FindMinimum[f,\{\{var_1,num_{11},num_{12}\},\{var_2,num_{21},num_{22}\}\}]\) - see [35]. Its super-linear convergence order (e.g. for \(m=1\) equal to \((1+\sqrt{5})/2\approx 1.618\)), though slower than Newton's quadratic rate, still makes it both faster and computationally feasible as opposed to most standard optimization techniques based on derivative calculation. In the last section of this paper, we compare the Secant Method with the Leap-Frog Algorithm (see [18, 27, 28]). One of the advantages of Leap-Frog over the Secant Method is a faster execution time (see also [17, 18]).
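For orientation, the classical one-dimensional secant iteration for root finding is the prototype behind the quoted convergence order \((1+\sqrt{5})/2\) and explains why two starting numbers per unknown are required; a minimal sketch follows (Mathematica's multidimensional derivative-free variant is more elaborate than this):

```python
def secant_root(g, x0, x1, tol=1e-12, max_iter=100):
    # Classical secant iteration for g(x) = 0: the 1D prototype whose
    # convergence order is (1 + sqrt(5))/2 ~ 1.618, hence the need for
    # two starting numbers x0, x1 per unknown.
    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:                      # flat secant -> give up
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < tol:
            break
    return x1

print(secant_root(lambda x: x**2 - 2.0, 1.0, 2.0))   # ~ 1.4142135623730951
```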

In order to set up a computationally feasible numerical optimization scheme, a good initial guess is needed. In particular, the Secant Method requires two starting numbers to be selected for each free variable. A possible choice is the so-called cumulative chord parameterization \(\mathscr {T}_c=\{t_i^c\}_{i=0}^n\) (see e.g. [13, 14, 20, 25]):

$$\begin{aligned} t_0^{c}=0, \quad \quad t_{i+1}^{c}=t_i^{c}+\Vert x_{i+1}-x_i\Vert , \quad {i=0,\dots ,n-1}, \end{aligned}$$
(12)

with \(T^c=\sum _{i=0}^{n-1}\Vert x_{i+1}-x_i\Vert \). The cumulative chord parameterization in normalized form \(\mathscr {T}_{cc}\) reads \(t_i^{cc}=t_i^{c}/T^{c}\) (for \(i=0,1,\dots ,n\)). Here the additional assumption \(x_i\ne x_{i+1}\) on the reduced data \(\mathscr {M}\) is also made. For the Secant Method and each free knot \(t_i\) appearing in (9) (here \(i=1,2,\dots ,n-1\)) we choose the two starting numbers as \(t_i^{c}-\varepsilon \) and \(t_i^{c}+\varepsilon \), with some prescribed small value of \(\varepsilon \).
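A possible Python rendering of this initialization and of a derivative-free search is sketched below; note the assumptions: Powell's method merely stands in for the secant-based FindMinimum of [35] (it is not the method used there), energy_F is the helper sketched after Theorem 1, and the penalty constant is an arbitrary guard keeping the iterates inside \(\varOmega _{t_0}^{T}\).

```python
import numpy as np
from scipy.optimize import minimize

def cumulative_chord(x):
    # Knots (12): t_0 = 0, t_{i+1} = t_i + ||x_{i+1} - x_i||  (requires x_i != x_{i+1}).
    return np.concatenate(([0.0], np.cumsum(np.linalg.norm(np.diff(x, axis=0), axis=1))))

def optimize_knots(x):
    t_c = cumulative_chord(x)
    T_c = t_c[-1]
    def obj(interior):
        t = np.concatenate(([0.0], interior, [T_c]))
        if np.any(np.diff(t) <= 0.0):     # outside Omega -> reject with a large penalty
            return 1e12
        return energy_F(t, x)             # closed form of (9), sketch after Theorem 1
    # Powell's derivative-free method substitutes for the secant-based
    # FindMinimum[f, {{t_i, t_i^c - eps, t_i^c + eps}, ...}] call of [35].
    res = minimize(obj, t_c[1:-1], method="Powell")
    return np.concatenate(([0.0], res.x, [T_c])), res.fun
```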

The next example illustrates expected computational difficulties in optimizing (9).

Example 3

(a) Consider four 2D reduced data points (i.e. here \(n=3\)):

$$\begin{aligned} \mathscr {M}_3=\{(-4, 0), (-0.5, -4), (0.5, -3), (-0.5, 4)\}. \end{aligned}$$

The cumulative chord knots (see (12)) based on \(\mathscr {M}_3\) coincide with \(\mathscr {T}_c=\{0, 5.31507, 6.72929, 13.8004\}\). In fact, here we have only two free variables \(\{t_1,t_2\}\) corresponding to the unknown knots at the points \(x_i\) (\(i=1,2\)). FindMinimum (Newton Method) applied to (9) with the initial guess taken as the cumulative chords \(\mathscr {T}_c\) yields the following optimal knots (with optimal energy value \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=0.741614\)):

$$\begin{aligned} \mathscr {T}_{opt}=\{0,5.3834, 8.2118,13.8004\}, \end{aligned}$$
(13)

where \(\hat{\mathscr {T}}_{opt}=\{5.3834, 8.2118\}\). The execution time is \(T_{\mathscr {M}_3}^N=3.012858\) s. FindMinimum for the Secant Method (with the two numbers associated to each free variable taken as \(\varepsilon =\pm 0.5\) variations of (12)) gives exactly the same optimal knots (13) (with the same \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=0.741614\)) but a shorter execution time \(T_{\mathscr {M}_3}^S=0.647756\) s. Finally, Minimize with constraints \(0<t_1<t_2<13.8004\) gives the optimal knots (13) with \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=0.741614\). The execution time amounts here to \(T_{\mathscr {M}_3}^M=9.229526\,\mathrm{s}>T_{\mathscr {M}_3}^N>T_{\mathscr {M}_3}^S\).

(b) Consider now six 2D reduced data points (i.e. here \(n=5\)):

$$\begin{aligned} \mathscr {M}_5=\{(0, 0), (-0.5, -4), (0.5, -4), (-0.5, 4), (0.5, 4), (-1, 3.8)\}. \end{aligned}$$

The resulting cumulative chord knots (based on (12) and \(\mathscr {M}_5\)) are equal here to \(\mathscr {T}_c=\{0, 4.03113, 5.03113, 13.0934, 14.0934, 15.6067\}\) with internal knots \(\hat{\mathscr {T}}_c=\{4.03113, 5.03113, 13.0934, 14.0934\}\). In this case, there are four free variables \(\{t_1,t_2,t_3,t_4\}\) corresponding to the unknown knots at the points \(x_i\) (\(i=1,\dots ,4\)). FindMinimum (Newton Method) applied to (9) with the initial guess \(\mathscr {T}_c\) yields the following optimal knots (with optimal energy value \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=4.65476\)):

$$\begin{aligned} \mathscr {T}_{opt}=\{0, 2.9185, 5.12397, 11.1964, 13.507, 15.6067\}, \end{aligned}$$
(14)

where \(\hat{\mathscr {T}}_{opt}=\{2.9185, 5.12397, 11.1964, 13.507\}\). The execution time is \(T_{\mathscr {M}_5}^N=29.946006\) s. FindMinimum (Secant Method) (here again \(\varepsilon =\pm 0.5\) is added to the cumulative chord initial guess along each free knot \(t_i\)) yields exactly the same optimal knots (14) (with \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=4.65476\)) and again a shorter execution time \(T_{\mathscr {M}_5}^S=6.385922\) s. As previously, Minimize with constraints \(0<t_1<t_2<t_3<t_4<15.6067\) yields the optimal knots (14) and \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=4.65476\). The execution time here reads \(T_{\mathscr {M}_5}^M=358.390915\,\mathrm{s}\gg T_{\mathscr {M}_5}^N>T_{\mathscr {M}_5}^S\).

(c) Finally, consider now eight 2D reduced data points (i.e. here \(n=7\)):

$$\begin{aligned} \mathscr {M}_7=\{&(0, 0), (-0.5, -4), (0.5, -4), (-0.5, 4),\\ &(0.5, 4), (-1, 3.8), (0.3, 0.3), (0.5, 0.5)\}. \end{aligned}$$

By (12), \(\mathscr {T}_c=\{0, 4.03113, 5.03113, 13.0934, 14.0934, 15.6067, 19.3403, 19.6231\}\). As previously, \(\hat{\mathscr {T}}_c=\{4.03113, 5.03113, 13.0934, 14.0934, 15.6067, 19.3403\}\). For both optimization schemes FindMinimum (Newton Method) and Minimize, no result was reported within 60 min. FindMinimum (Secant Method) (with, as previously, \(\varepsilon =\pm 0.5\) variations of the cumulative chords for each free variable) yields the optimal knots (with energy \(\mathscr {J}_{T_c}^F(\hat{\mathscr {T}}_{opt})=8.27118\)):

$$\begin{aligned} \mathscr {T}_{opt}=\{0,2.67713,4.69731, 10.3221,12.3943,14.8132, 19.0316,19.6231\}, \end{aligned}$$

where \(\hat{\mathscr {T}}_{opt}=\{2.67713, 4.69731, 10.3221, 12.3943, 14.8132, 19.0316\}\). The execution time is \(T_{\mathscr {M}_7}^S=35.708519\) s.    \(\square \)

The above experiments illustrate that for \(n\ge 7\), FindMinimum (Secant Method) offers a feasible computational scheme to optimize (9). In Sect. 4 of this paper, we compare the performance of the already discussed Secant Method with the Leap-Frog Algorithm (see [17, 18, 27, 28]).

3 Non-Convexity of \(\mathscr {J}_{T}^F\)

We demonstrate in this section that \(\mathscr {J}_{T_c}^F\) introduced in (9) may not be convex. In doing so, a simple degenerate case of (9) with \(n=2\) is examined. Three points \(\mathscr {M}=\{x_0,x_1,x_2\}\) are admitted with one relaxed internal knot \(t_1\in (t_0=0, t_2=T_c)\) (see (12)).

Fig. 1.

The graph of the non-convex energy \(\tilde{\mathscr {E}}_{deg}\) for \(x_0=-1\), \(x_1=0\), and \(x_2=20\): (a) over the interval \((0, 1)\); (b) over the sub-interval \((0.05, 0.35)\), where the non-convexity is better visible; (c) the graph of the corresponding second derivative \(\tilde{\mathscr {E}}_{deg}''\), which changes sign over \((0.05, 0.35)\)

Example 4

For \(n=2\) and arbitrary natural \(m\ge 1\) (using the Mathematica package), by (9) the energy \(\mathscr {J}_{T_c}^F(t_1)=(T_c-t_0)^{-3} (\tilde{\mathscr {E}}_{deg}\circ \phi )(t_1)\), where \(\tilde{\mathscr {E}}_{deg}(s_1)=3\Vert {x_0-x_1\over s_1} +{x_2-x_1\over 1-s_1}\Vert ^2\) - here \(t_1\in (t_0,t_2=T_c)\) and \(s_1=\phi (t_1)=(t_1-t_0)(T_c-t_0)^{-1}\in (0,1)\). Obviously, since \((\phi ^{-1})'\equiv T_c-t_0>0\) and \((\phi ^{-1})''\equiv 0\), the convexity (non-convexity) of \(\tilde{\mathscr {E}}_{deg}\) is inherited by \(\mathscr {J}_{T_c}^F\). Take now for \(m=1\) the points \(x_0=-1\), \(x_1=0\) and \(x_2=20\) (here \((x_0-x_1)(x_2-x_1)=-20<0\)). The graph of the energy \(\tilde{\mathscr {E}}_{deg}(s_1) =3({-1\over s_1}+{20\over 1-s_1})^2\) over the interval \((0, 1)\) is plotted in Fig. 1(a). The non-convexity is better visible in Fig. 1(b), where the graph of \(\tilde{\mathscr {E}}_{deg}\) is localized over the sub-interval \((0.05, 0.35)\). Finally, the change of sign in the second derivative \(\tilde{\mathscr {E}}_{deg}''\) is illustrated in Fig. 1(c). In fact, \(\tilde{\mathscr {E}}_{deg}''(0.21)=1338.7\) and \(\tilde{\mathscr {E}}_{deg}''(0.20)=-1640.63\). Thus, \(\tilde{\mathscr {E}}_{deg}\) is not convex, which also implies non-convexity of \(\mathscr {J}_{T_c}^F\) in the general case.    \(\square \)
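The quoted values of \(\tilde{\mathscr {E}}_{deg}''\) are easily reproduced symbolically; a minimal sketch (sympy is our own tool choice here, not the software of [35]):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
E = 3 * (-1 / s + 20 / (1 - s))**2              # E_deg for x0 = -1, x1 = 0, x2 = 20
E2 = sp.diff(E, s, 2)                           # second derivative
print(float(E2.subs(s, sp.Rational(1, 5))))     # -1640.625 : non-convex near s = 0.20
print(float(E2.subs(s, sp.Rational(21, 100))))  # ~ 1338.7  : convex near s = 0.21
```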

The next example formulates sufficient conditions enforcing convexity of \(\mathscr {J}_{T_c}^F\), but only for \(m=1\) and \(n=2\). These can be extended to the general case of \(m\ge 1\) and \(n=2\); such a generalization is omitted here due to paper length limitations.

Fig. 2.

The graph of the convex energy \(\tilde{\mathscr {E}}_{deg}\) for \(x_0=2\), \(x_1=0\), and \(x_2=5\): (a) over the interval \((0, 1)\); (b) the graph of the corresponding second derivative \(\tilde{\mathscr {E}}_{deg}''\ge 0\) over \((0, 1)\)

Example 5

(i) For \(m=1\) it is easy to show that \(\tilde{\mathscr {E}}_{deg}\) is convex (and thus so is \(\mathscr {J}_{T_c}^F\)) if \((x_0-x_1)(x_2-x_1)\ge 0\) - under this constraint, \(\tilde{\mathscr {E}}_{deg}\) has exactly one critical point. Indeed, recalling that \(\tilde{\mathscr {E}}_{deg}(s_1)=3f^2(s_1)\) with \(f(s_1)={x_0-x_1\over s_1}+{x_2-x_1\over 1-s_1}\), it suffices to show that f is either convex with \(f\ge 0\) or concave with \(f\le 0\), for \(s_1\in (0,1)\). Indeed, the latter combined with \(\tilde{\mathscr {E}}_{deg}''=3(f^2)''=6(f')^2+6ff''\) yields the convexity of \(\mathscr {J}_{T_c}^F\) given non-negativity of \(ff''\) over \(s_1\in (0,1)\), which follows from \((x_0-x_1)(x_2-x_1)\ge 0\) applied both to \(f''(s_1)={2(x_0-x_1)\over s_1^3}+{2(x_2-x_1)\over (1-s_1)^3}\) and to f. Figure 2(a) shows the convexity of \(\tilde{\mathscr {E}}_{deg}(s_1) =3({2\over s_1}+{5\over 1-s_1})^2\) for \(x_0=2\), \(x_1=0\) and \(x_2=5\), with \((x_0-x_1)(x_2-x_1)\ge 0\) clearly fulfilled. As expected, \(\tilde{\mathscr {E}}_{deg}''\ge 0\) over \((0, 1)\) - see Fig. 2(b). The corresponding sufficient conditions guaranteeing the convexity of \(\tilde{\mathscr {E}}_{deg}\) for \(m\ge 1\) and \(n=2\) can also be formulated (though omitted here) - see [19]. As it turns out, the vector generalization \(\langle x_0-x_1|x_2-x_1\rangle \ge 0\) of the scalar inequality \((x_0-x_1)(x_2-x_1)\ge 0\) assures the convexity of \(\tilde{\mathscr {E}}_{deg}\) and thus of \(\mathscr {J}_{T_c}^F\).

(ii) In the case of scalar data (i.e. when \(m=1\)), if \((x_0-x_1)(x_2-x_1)<0\) holds then the existence of exactly one critical point (and thus one global minimum - see Theorem 1) of \(\tilde{\mathscr {E}}_{deg}\) (and so of \(\mathscr {J}_{T_c}^F\)) follows, which yields the unimodality of \(\tilde{\mathscr {E}}_{deg}=3f^2\) (see [4]). Indeed, assume that \((x_0-x_1)(x_2-x_1)<0\). Since now \(x_0\ne x_1\) and \(x_1\ne x_2\), we have \(x_0-x_1\ne 0\) and \(x_2-x_1\ne 0\). To show unimodality of \(f^2\) we need to prove the existence of exactly one critical point \(s_1\in (0,1)\) satisfying \((f^2)'(s_1)=2f(s_1)f'(s_1)=0\). Taking into account \((x_0-x_1)(x_2-x_1)<0\), the derivative \(f'(s_1)=-{x_0-x_1\over s_1^2}+{x_2-x_1\over (1-s_1)^2}\) is either always positive or always negative over \((0, 1)\). Hence, for unimodality of \(f^2\), it suffices to show that \(f(s_1)=0\) has one root \(s_1^0\in (0,1)\) defining a unique global minimum of \(f^2\) (and of \(\tilde{\mathscr {E}}_{deg}\)). In this case, as \(\tilde{\mathscr {E}}_{deg}(s_1)=3f^2(s_1)\), the energy \(\tilde{\mathscr {E}}_{deg}\) also vanishes at \(s_1^0\). The latter need not hold in the convexity case, since the other factor \(f'(s_1)=0\) may contribute to \((f^2)'(s_1)=0\). A simple inspection shows that

$$\begin{aligned} s_1^0={(x_0-x_1)\over (x_0-x_1)-(x_2-x_1)}. \end{aligned}$$
(15)

Note that the denominator \((x_0-x_1)-(x_2-x_1)=x_0-x_2\) in (15) does not vanish due to \((x_0-x_1)(x_2-x_1)<0\). Of course, \(s_1^0>0\) since \((x_0-x_1)(x_2-x_1)<0\). To justify \(s_1^0<1\), two cases are considered: either \(x_0-x_1>0\) and \(x_2-x_1<0\), or \(x_0-x_1<0\) and \(x_2-x_1>0\). In the first (second) case, \(s_1^0<1\) in (15) reduces to the true inequality \(x_2-x_1<0\) (\(x_2-x_1>0\)). Figure 1 confirms the unimodality of \(\tilde{\mathscr {E}}_{deg}\) for \((x_0-x_1)(x_2-x_1)=-20<0\). As proved, the global minimum of (9) is attained at \(s_1^0=1/21\approx 0.047619\), nullifying \(\mathscr {J}_{T_c}^F\).

Note that in the case of convexity (enforced by \((x_0-x_1)(x_2-x_1)\ge 0\)), the unique global minimum \(s_1^0\) can also be found in analytic form. Indeed, as \(x_0\ne x_1\) and \(x_1\ne x_2\), it suffices to assume the stronger inequality \((x_0-x_1)(x_2-x_1)> 0\). The latter results in either \(f>0\) or \(f<0\). Consequently, for \(6f'f\) to vanish we need to solve \(f'(s_1)=0\) over \((0,1)\), which leads to \(s_1^0=\frac{\sqrt{|x_0-x_1|}}{\sqrt{|x_2-x_1|}+\sqrt{|x_0-x_1|}}\in (0,1)\). Thus, \(f^2\) is unimodal (and so is \(\tilde{\mathscr {E}}_{deg}\)) since \((f^2)'=2ff'\) vanishes at exactly one point \(s_1^0\in (0,1)\). The unimodality of \(\tilde{\mathscr {E}}_{deg}\) can also be proved in the case of \(\langle x_0-x_1|x_2-x_1\rangle <0\) for arbitrary data with \(m\ge 1\) and \(n=2\).    \(\square \)
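Both analytic critical points condense into a few lines of code; the helper s1_opt below is our own illustrative name, covering the unimodal case (15) and the convex case:

```python
import numpy as np

def s1_opt(x0, x1, x2):
    # Unique critical point of E_deg = 3 f^2 for scalar data (m = 1, n = 2).
    a, b = x0 - x1, x2 - x1               # both nonzero by assumption
    if a * b < 0:                         # unimodal case: f has the root (15)
        return a / (a - b)
    return np.sqrt(abs(a)) / (np.sqrt(abs(a)) + np.sqrt(abs(b)))  # convex case: f' = 0

print(s1_opt(-1, 0, 20))                  # 1/21 ~ 0.047619 (Example 4)
print(s1_opt(2, 0, 5))                    # sqrt(2)/(sqrt(2)+sqrt(5)) ~ 0.3874 (Fig. 2)
```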

4 Numerical Experiments for Fitting Sparse Reduced Data

All experiments are conducted in Mathematica - see [35]. The numerical tests compare the Leap-Frog algorithm (see [17, 18]) with the Secant Method, both used to optimize (9). Only sparse reduced data points \(\mathscr {M}\) in \(E^{2}\) and \(E^{3}\) are admitted here, though the entire setting is applicable for arbitrary m, i.e. for reduced data \(\mathscr {M}\) in an arbitrary Euclidean space.
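The precise formulation of Leap-Frog is given in [17, 18] and is not restated here; purely for intuition, the sketch below offers a coordinate-descent caricature of its one-interior-knot-at-a-time flavour (the sweep structure, the tolerance and the reuse of energy_F are our own assumptions, not the algorithm of [17, 18]):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def coordinate_sweeps(t0, x, sweeps=1000, tol=1e-6):
    # One-interior-knot-at-a-time sweeps: each step minimizes (9) over a single
    # t_i restricted to (t_{i-1}, t_{i+1}) with all other knots frozen.
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(sweeps):
        shift = 0.0
        for i in range(1, len(t) - 1):
            def obj(s, i=i):
                u = t.copy(); u[i] = s
                return energy_F(u, x)                 # closed form of (9)
            res = minimize_scalar(obj, bounds=(t[i - 1] + 1e-9, t[i + 1] - 1e-9),
                                  method="bounded")
            shift = max(shift, abs(res.x - t[i]))
            t[i] = res.x
        if shift < tol:                               # a full sweep changed nothing
            break
    return t
```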

The first example admits reduced data \(\mathscr {M}\) in \(E^2\) (i.e. for \(m=2\)).

Fig. 3.

Natural splines interpolating data points \(\mathscr {M}_{2D3}\): (a) \(\gamma _{\mathscr {T}_{uni}}^{NS}\) with uniform knots \(\mathscr {T}_{uni}\); (b) \(\gamma _{\mathscr {T}_{c}}^{NS}\) with cumulative chords \(\mathscr {T}_{c}\); (c) \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) with optimal knots \(\mathscr {T}_{opt}^{LF}=\mathscr {T}_{opt}^{SM}\) (thus \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}=\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\)); (d) \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) plotted together

Example 6

Assume for \(n=5\) the following 2D points (see dotted points in Fig. 3):

$$\begin{aligned} \mathscr {M}_{2D3}=\{(-3, -3), (-3.1, -2.6), (2.5, -2.6), (2.4, -2.8), (-3, 2.8), (-3, 2.6)\}. \end{aligned}$$

The uniform interpolation knots \(\{\hat{t}_i={i\over 5}T_c\}_{i=0}^5\) (rescaled to \(T_{c}\) - see (12)), taken as a blind guess of \(\{t_i\}_{i=0}^5\), read as:

$$\begin{aligned} \mathscr {T}_{uni}=\{0, 2.84308, 5.68615, 8.52923, 11.3723, 14.2154\} \end{aligned}$$

and the initial guess based on cumulative chord \(\mathscr {T}_c\) (see (12)) coincides with:

$$\begin{aligned} \mathscr {T}_{c}=\{0, 0.412311, 6.01231, 6.23592, 14.0154, 14.2154 \}. \end{aligned}$$

Here \(\hat{\mathscr {T}}_{uni}\) (and \(\hat{\mathscr {T}}_{c}\)) is defined as \(\mathscr {T}_{uni}\) (and \(\mathscr {T}_{c}\)) stripped of its terminal values. The natural splines \(\gamma _{\mathscr {T}_{uni}}^{NS}\) (based on \(\mathscr {T}_{uni}\)) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) (based on \(\mathscr {T}_{c}\)) yield the energies \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{uni})=15.4253>\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_c)=8.51108\). Both interpolants \( \gamma _{\mathscr {T}_{uni}}^{NS}\) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) are shown in Fig. 3(a) and (b), respectively.

One expects that the Secant Method with the two initial numbers \(t_i^c\pm 0.5\) may produce a bad solution, as \(|t_0^c-t_1^c|<0.5\), \(|t_2^c-t_3^c|<0.5\) and \(|t_4^c-t_5^c|<0.5\). Indeed, the Secant Method returns \(t_1^{opt}=-8.2211<t_0=0\), which is disallowed. Upon adjusting to \(t_i^c\pm 0.05\), the Secant Method yields (for (9)) the optimal knots \(\hat{\mathscr {T}}_{opt}^{SM}\) augmented by the terminal knots \(t_0=0\) and \(t_5=T_c\) as:

$$\begin{aligned} \mathscr {T}_{opt}^{SM}= \{0, 0.737027,6.07314,7.14642,13.5208, 14.2154\} \end{aligned}$$

with the optimal energy \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})=5.04331\). The execution time amounts to \(T^{SM}=9.204045\) s. The resulting curve \(\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\) is plotted in Fig. 3(c). In fact, for general data it is safer for each free variable optimized by the Secant Method to choose the pair of numbers \(t_i^c\pm 0.5\min _{0\le i\le n-1}\{|t_{i+1}^c-t_i^c|\}\).

Leap-Frog decreases the initial energy to \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{LF})=\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (as for the Secant Method), with the iteration stopping condition \(\hat{\mathscr {T}}_{opt}^{LF}=\hat{\mathscr {T}}_{opt}^{SM}\) (up to the 6th decimal place) met after 38 iterations. The respective execution time amounts to \(T^{LF}=3.247230\,\mathrm{s}<T^{SM}\). The 0th (i.e. \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_c)\)), 1st, 2nd, 10th, 18th, and 38th iterations of Leap-Frog decrease the energy to:

$$\begin{aligned} \{8.51108,5.91959,5.23031, 5.04455, 5.04331,5.04331\} \end{aligned}$$

with again only the first three iterations contributing to the major correction of the initial guess knots \(\mathscr {T}_{c}\). The resulting natural spline \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\) yielded by the Secant Method) based on \(\mathscr {T}_{opt}^{LF}\) is shown in Fig. 3(c) and also visually compared with \(\gamma _{\mathscr {T}_c}^{NS}\) in Fig. 3(d).

If the Leap-Frog iteration stopping condition is changed, e.g. to require the current Leap-Frog energy to equal \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say, up to the 5th decimal place), then only 18 iterations are needed here, with a shorter execution time \(T^{LF}_{E}=1.785042\,\mathrm{s}<T^{SM}\) and with optimal knots

$$\begin{aligned} \mathscr {T}_{opt}^{LF_E}=\{0,0.736394,6.0697,7.14349,13.5205,14.2154\}. \end{aligned}$$

We lose here a bit of precision in estimating the optimal knots, but we accelerate the Leap-Frog execution time while obtaining almost the same interpolating curve as the optimal one (as \(\hat{\mathscr {T}}_{opt}^{LF_E}\approx \hat{\mathscr {T}}_{opt}^{SM}\)). For other iteration stopping criteria accelerating the execution of Leap-Frog at almost no cost in the difference between the computed and the optimal curve see [19].    \(\square \)

We pass now to an example of reduced data in \(E^3\) (i.e. with \(m=3\)).

Fig. 4.

Natural splines interpolating data points \(\mathscr {M}_{3D3}\): (a) \(\gamma _{\mathscr {T}_{uni}}^{NS}\) with uniform knots \(\mathscr {T}_{uni}\); (b) \(\gamma _{\mathscr {T}_{c}}^{NS}\) with cumulative chords \(\mathscr {T}_{c}\); (c) \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) with optimal knots \(\mathscr {T}_{opt}^{LF}=\mathscr {T}_{opt}^{SM}\) (thus \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}=\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\)); (d) \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) plotted together

Example 7

Consider for \(n=7\) the following 3D points (see dotted points in Fig. 4):

$$\begin{aligned} \mathscr {M}_{3D3}=\{&(0, 0, 1), (0, 0, -1), (0, 0, -0.8), (1, 0, 0), (1, 0.2, 0), (1, 0.4, 0),\\ &(1, 0.8, 0.2), (1, 1, 0)\}. \end{aligned}$$

The uniform interpolation knots \(\{\hat{t}_i={i\over 7}T_c\}_{i=0}^7\approx \{t_i\}_{i=0}^7\) (rescaled to \(t_0=0\) and to \(T_{c}\) – see (12)) read as:

$$\begin{aligned} \mathscr {T}_{uni}= \{0, 0.658669, 1.31734, 1.97601, 2.63467, 3.29334, 3.95201, 4.61068\} \end{aligned}$$

and the initial guess based on cumulative chords \(\mathscr {T}_c\) is equal to:

$$\begin{aligned} \mathscr {T}_{c}=\{0, 2, 2.2, 3.48062, 3.68062, 3.88062, 4.32784, 4.61068\}. \end{aligned}$$

Here \(\hat{\mathscr {T}}_{uni}=\{0.658669, 1.31734, 1.97601, 2.63467, 3.29334, 3.95201\}\), while \(\hat{\mathscr {T}}_{c}=\{2, 2.2, 3.48062, 3.68062, 3.88062, 4.32784\}\). The natural splines \(\gamma _{\mathscr {T}_{uni}}^{NS}\) (based on \(\mathscr {T}_{uni}\)) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) (based on \(\mathscr {T}_{c}\)) yield the energies \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{uni})=46.7919> \mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_c)=22.3564\). Both interpolants \( \gamma _{\mathscr {T}_{uni}}^{NS}\) and \(\gamma _{\mathscr {T}_{c}}^{NS}\) are shown in Fig. 4(a) and (b), respectively. Noticeably, the energy based on the blind guess of knots (i.e. on uniform knots) is far from the optimal one.

The Secant Method yields for (9) the optimal knots \(\hat{\mathscr {T}}_{opt}^{SM}\) (augmented by terminal knots \(t_0=0\) and \(t_7=T_c\) - see (12))

$$\begin{aligned} \mathscr {T}_{opt}^{SM}=\{0,1.34728,1.82093, 3.12718,3.39487,3.62307, 4.19613, 4.61068\} \end{aligned}$$

with the optimal energy \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})=15.407\). The execution time amounts to \(T^{SM}=128.804084\) s. The resulting curve \(\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\) is plotted in Fig. 4(c). Note that for each free variable, the Secant Method uses here two initial numbers \(t_i^c\pm 0.1\).

Leap-Frog decreases the initial energy to \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{LF})=\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (as for the Secant Method), with the iteration stopping condition \(\hat{\mathscr {T}}_{opt}^{LF}=\hat{\mathscr {T}}_{opt}^{SM}\) (up to the 5th decimal place) met after 620 iterations. The execution time amounts to \(T^{LF}=73.749111\,\mathrm{s}<T^{SM}\). The 0th (i.e. \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_c)\)), 1st, 2nd, 10th, 50th, 100th, 200th, 281st and 620th iterations of Leap-Frog decrease the energy to:

$$\begin{aligned} \{ 22.3564, 18.5598, 18.274, 17.4628, 15.8596, 15.5049, 15.4091, 15.407, 15.407\} \end{aligned}$$

with again only the first three iterations contributing to the major correction of the initial guess knots \(\mathscr {T}_{c}\). The resulting natural spline \(\gamma _{\mathscr {T}_{opt}^{LF}}^{NS}\) (clearly the same as \(\gamma _{\mathscr {T}_{opt}^{SM}}^{NS}\) yielded by the Secant Method), based on \(\mathscr {T}_{opt}^{LF}\), is shown in Fig. 4(c) and also visually compared with \(\gamma _{\mathscr {T}_c}^{NS}\) in Fig. 4(d). The optimal curve does not deviate much from the initial guess curve based on cumulative chord knots.

Again, if the Leap-Frog iteration stopping condition is changed, e.g. to require the current Leap-Frog energy to equal \(\mathscr {J}^F_{\mathscr {T}_c}(\hat{\mathscr {T}}_{opt}^{SM})\) (say, up to the 4th decimal place), then only 281 iterations are needed here, with a shorter execution time \(T^{LF}_{E}= 33.931990\,\mathrm{s}<T^{SM}\) and with optimal knots:

$$\begin{aligned} \mathscr {T}_{opt}^{LF_E}=\{0, 1.348043, 1.82195, 3.12892, 3.39651, 3.62453, 4.19679, 4.61068\}. \end{aligned}$$

As previously, we lose slightly on the precise estimation of the optimal knots but accelerate the Leap-Frog execution time, obtaining almost the same interpolating curve as the optimal one (as \(\mathscr {T}_{opt}^{LF_E}\approx \mathscr {T}_{opt}^{SM}\)). For other iteration stopping criteria accelerating the execution of Leap-Frog at almost no cost in the difference between the computed and the optimal curve see [19].    \(\square \)

5 Conclusions

In this paper, we discuss a method of estimating the unknown interpolation knots \(\{t_i\}_{i=0}^n\) by \(\{\hat{t}_i\}_{i=0}^n\) to fit reduced sparse data \(\mathscr {M}=\{x_i\}_{i=0}^n\) with a natural cubic spline in an arbitrary Euclidean space \(E^m\). As indicated here, the above task is transformed into the corresponding finite-dimensional constrained optimization task (9) in the variables \((t_1,t_2,\dots ,t_{n-1})\), subject to the inequalities \(t_0<t_1<t_2<\dots<t_{n-1}<t_n\). We first demonstrate the high non-linearity and possible non-convexity of (9) - see Sects. 1, 2, and 3. Consequently, the latter hinders the use of standard optimization techniques like the Newton Method for such an optimization task. Finally, two computationally feasible schemes are implemented, i.e. Leap-Frog and the Secant Method, to examine the quality of the reconstructed interpolants. The derivation of the explicit formula in (9), including its particular forms examined in Examples 1 and 2, relies on Mathematica symbolic computation - see [35]. All numerical computations performed in Examples 3, 6 and 7 resort to the numerical functions supplied by the Mathematica software (see [35]). In addition, sufficient conditions enforcing the convexity (or unimodality) of (9) are derived for the special case of \(n=2\) - see Example 5. Future work involves the analysis of the more general case, i.e. of arbitrary n. Alternatively, one may also consider deriving an optimization task similar to (9) for a complete spline interpolant (see [3]), with the initial velocities \(v_0\) and \(v_n\) either a priori given or approximated according to [15].