1 Introduction

In this work the problem of interpolating \(n+1\) points \(\mathcal{M}_n=\{x_i\}_{i=0}^n\) in arbitrary Euclidean space \(\mathbb {E}^m\) is addressed. The corresponding knots \(\mathcal{T}=\{t_i\}_{i=0}^{n}\) are assumed to be unknown. The class of fitting functions (curves) \(\mathcal{I}\) considered in this paper consists of piecewise \(C^{2}\) curves \(\gamma :[0,T]\rightarrow \mathbb {E}^m\) satisfying \(\gamma (t_i)=x_i\) and \(\ddot{\gamma }(t_0)=\ddot{\gamma }(T)={\mathbf{0}}\). It is also assumed that \(\gamma \in \mathcal{I}\) is at least of class \(C^1\) over \(\mathcal{T}_{int}=\{t_i\}_{i=1}^{n-1}\) and extends to \(C^2([t_i,t_{i+1}])\). Additionally, the unknown internal knots \(\mathcal{T}_{int}\) are allowed to vary subject to \(t_i<t_{i+1}\), for \(i=0,1,\dots ,n-1\) (here \(t_0=0\) and \(t_n=T\)). Such knots are called admissible, and choosing them according to some adopted criterion permits one to control and model the trajectory of \(\gamma \). One such criterion is to minimize the “average squared norm of acceleration” of \(\gamma \). In fact, for a given choice of fixed knots \(\mathcal{T}\), the task of minimizing

$$\begin{aligned} \mathcal{J}_T(\gamma )=\sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \Vert {\ddot{\gamma }}(t)\Vert ^2dt\;, \end{aligned}$$
(1)

(over \(\mathcal{I}\)) yields a unique optimal curve \(\gamma _{opt}\in \mathcal{I}\) forming a natural cubic spline \(\gamma _{NS}\) - see [1] or [8]. Consequently, upon letting the internal knots \(\mathcal{T}_{int}\) vary, minimizing \(\mathcal{J}_T\) over \(\mathcal{I}\) reduces to searching for an optimal natural spline \(\gamma _{NS}\) with \(\mathcal{T}_{int}\) treated as free variables. Thus by [1], since \(\gamma _{NS}\) is uniquely determined by \(\mathcal{T}\), minimizing \(\mathcal{J}_T\) amounts to optimizing a highly non-linear function \(J_0\) in the \(n-1\) variables \(\mathcal{T}_{int}\) satisfying \(t_i<t_{i+1}\) (see [3]). Due to the high non-linearity of \(J_0\), the majority of numerical schemes applied to optimize \(J_0\) run into numerical difficulties (see e.g. [3]). Similarly, the analysis of critical points of \(J_0\) forms a complicated task. To alleviate the latter, the Leap-Frog algorithm can be applied to \(J_0\) - see [2] or [3]. This scheme minimizes \(J_0\) with an iterative sequence of overlapping single-variable optimizations of \(J_0^{(k,i)}\) subject to \(t_i<t_{i+1}\).

The novelty of this work concerns the generic case of Leap-Frog (applied recursively over each internal snapshot). The analysis establishing sufficient conditions for unimodality of \(J_0^{(k,i)}\) is conducted here. Numerical tests and illustrative examples supplement the latter. The discussion covers first a special case of data (see Sect. 4), extended next to its perturbation (see Sect. 5 and Theorem 1). More information on the numerical performance of Leap-Frog and comparison tests with two standard numerical optimization schemes can be found in [2, 3] or in the recently published [6]. Some applications of the Leap-Frog optimization scheme, used also as a modelling and simulation tool, are discussed in [9, 10] or [11].

2 Preliminaries

Recall (see [1]) that a cubic spline interpolant \(\gamma ^{C_i}_\mathcal{T}=\gamma ^{C}_\mathcal{T}|_{[t_i,t_{i+1}]}\), for given admissible knots \(\mathcal{T}=(t_0,t_1,\dots ,t_{n-1},t_n)\), is defined as \(\gamma _\mathcal{T}^{C_i}(t)=c_{1,i}+c_{2,i}(t-t_i)+c_{3,i}(t-t_i)^2+c_{4,i}(t-t_i)^3\) (for \(t\in [t_i,t_{i+1}]\)) and satisfies (for \(i=0,1,2,\dots ,n-1\); \(c_{j,i}\in \mathbb {R}^m\), where \(j=1,2,3,4\)) \(\gamma _\mathcal{T}^{C_i}(t_{i+k})=x_{i+k}\) and \(\dot{\gamma }_\mathcal{T}^{C_i}(t_{i+k})=v_{i+k}\), for \(k=0,1\), with the velocities \(v_0,v_1,\dots ,v_{n-1},v_n\in \mathbb {R}^m\) assumed to be temporarily free parameters (if unknown). The coefficients \(c_{j,i}\) read (with \(\varDelta t_i=t_{i+1}-t_i\)):

$$\begin{aligned} c_{1,i}= & {} x_i, \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \ c_{2,i}\ \ =\ \ v_i \;,\nonumber \\ c_{4,i}= & {} {v_i+v_{i+1}-2{x_{i+1}-x_i\over \varDelta t_i}\over (\varDelta t_i)^2}\;,\quad \quad c_{3,i}\ \ =\ \ {{(x_{i+1}-x_i)\over \varDelta t_i}-v_i\over \varDelta t_i}-c_{4,i}\varDelta t_i\;. \end{aligned}$$
(2)
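For illustration only, the coefficients (2) can be assembled numerically as in the following minimal NumPy sketch (the function name and the array conventions are assumptions made here, not part of the original formulation):

```python
import numpy as np

def hermite_cubic_coeffs(x_i, x_ip1, v_i, v_ip1, dt_i):
    """Coefficients c_{1,i}, ..., c_{4,i} of the cubic (2) on [t_i, t_{i+1}];
    x_i, x_ip1, v_i, v_ip1 are m-dimensional vectors and dt_i = t_{i+1} - t_i > 0.

    The cubic piece is gamma(t) = c1 + c2*(t - t_i) + c3*(t - t_i)**2 + c4*(t - t_i)**3.
    """
    x_i, x_ip1, v_i, v_ip1 = map(np.asarray, (x_i, x_ip1, v_i, v_ip1))
    c1, c2 = x_i, v_i
    c4 = (v_i + v_ip1 - 2.0 * (x_ip1 - x_i) / dt_i) / dt_i**2
    c3 = ((x_ip1 - x_i) / dt_i - v_i) / dt_i - c4 * dt_i
    return c1, c2, c3, c4
```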

The coefficients (2) follow from Newton’s divided differences formula (see e.g. [1, Chap. 1]). Adding the \(n-1\) constraints \(\ddot{\gamma }_\mathcal{T}^{C_{i-1}}(t_{i})=\ddot{\gamma }_\mathcal{T}^{C_i}(t_{i})\) for continuity of \(\ddot{\gamma }_\mathcal{T}^C\) at \(x_1,\dots ,x_{n-1}\) (with \(i=1,2,\dots ,n-1\)) leads by (2) (for \(\gamma _\mathcal{T}^{C_i}\)) to m tridiagonal linear systems (strictly diagonally dominant) of \(n-1\) equations in the \(n+1\) vector unknowns representing the velocities at \(\mathcal{M}_n\), i.e. \(v_0, v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\):

$$\begin{aligned} v_{i-1}\varDelta t_i+ & {} 2v_i(\varDelta t_{i-1}+\varDelta t_i)+v_{i+1}\varDelta t_{i-1}=b_i\;,\nonumber \\ b_i= & {} 3(\varDelta t_i{x_i-x_{i-1}\over \varDelta t_{i-1}} +\varDelta t_{i-1}{x_{i+1}-x_i\over \varDelta t_i})\;. \end{aligned}$$
(3)

(i) Both \(v_0\) and \(v_{n}\) (if unknown) can e.g. be calculated from \(a_0=\ddot{\gamma }_\mathcal{T}^{C}(0)={\mathbf{0}}\) and \(a_n=\ddot{\gamma }_\mathcal{T}^{C}(T)={\mathbf{0}}\) combined with (2) (this yields a natural cubic spline interpolant \(\gamma _\mathcal{T}^{NS}\) - a special \(\gamma _\mathcal{T}^{C}\)), which supplements (3) with the two missing vector linear equations:

$$\begin{aligned} 2v_0+v_1=3{x_1-x_0\over \varDelta t_0}\;,\quad v_{n-1}+2v_n=3{x_n-x_{n-1}\over \varDelta t_{n-1}}\;. \end{aligned}$$
(4)

The resulting m linear systems, each of size \((n+1)\times (n+1)\) (based on (3) and (4)), are strictly row diagonally dominant and thus yield a unique vector solution \(v_0,v_1,\dots ,v_{n-1},v_n\) (computed e.g. by Gaussian elimination without pivoting - see [1, Chap. 4]), which when fed into (2) determines explicitly a natural cubic spline \(\gamma _\mathcal{T}^{NS}\) (with fixed \(\mathcal{T}\)). A similar approach applies for arbitrary \(a_0\) and \(a_n\).

(ii) If both \(v_0\) and \(v_{n}\) are given then the so-called complete spline \(\gamma ^{CS}_\mathcal{T}\) can be found with \(v_1,\dots ,v_{n-1}\) determined solely by (3).

(iii) If one of \(v_{0}\) or \(v_{n}\) is unknown, it can be compensated for by setting the respective terminal acceleration e.g. to \({\mathbf{0}}\). The resulting scheme relies on solving (3) together with one equation from (4). Such splines are denoted here by \(\gamma _\mathcal{T}^{v_n}\) or \(\gamma _\mathcal{T}^{v_0}\). The two non-generic cases of Leap-Frog optimization deal with the latter and are omitted in this paper.
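For case (i), a minimal numerical sketch of assembling and solving (3) together with (4) is given below (Python/NumPy; the function name and array layout are assumptions made here for illustration, and a dedicated tridiagonal solver such as the Thomas algorithm could replace the dense solve):

```python
import numpy as np

def natural_spline_velocities(x, t):
    """Solve (3)-(4) for the velocities v_0, ..., v_n of the natural spline gamma_T^{NS}.

    x : (n+1, m) array of interpolation points;  t : (n+1,) array of admissible knots.
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    n = len(t) - 1
    dt = np.diff(t)                        # dt[i] = t_{i+1} - t_i > 0
    A = np.zeros((n + 1, n + 1))
    B = np.zeros_like(x)
    A[0, 0], A[0, 1] = 2.0, 1.0            # (4): 2 v_0 + v_1 = 3 (x_1 - x_0)/dt_0
    B[0] = 3.0 * (x[1] - x[0]) / dt[0]
    for i in range(1, n):                  # rows of (3)
        A[i, i - 1] = dt[i]
        A[i, i] = 2.0 * (dt[i - 1] + dt[i])
        A[i, i + 1] = dt[i - 1]
        B[i] = 3.0 * (dt[i] * (x[i] - x[i - 1]) / dt[i - 1]
                      + dt[i - 1] * (x[i + 1] - x[i]) / dt[i])
    A[n, n - 1], A[n, n] = 1.0, 2.0        # (4): v_{n-1} + 2 v_n = 3 (x_n - x_{n-1})/dt_{n-1}
    B[n] = 3.0 * (x[n] - x[n - 1]) / dt[n - 1]
    return np.linalg.solve(A, B)           # one dense solve handles all m coordinates
```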

By (1) \(\mathcal{J}_{T}(\gamma _\mathcal{T}^{NS})= 4\sum _{i=0}^{n-1}(\Vert c_{3,i}\Vert ^2\varDelta t_i+3\Vert c_{4,i}\Vert ^2(\varDelta t_i)^3 +3\langle c_{3,i}|c_{4,i}\rangle (\varDelta t_i)^2)\;,\) which ultimately reformulates into (see [2]):

$$\begin{aligned} \mathcal{J}_{T}(\gamma _\mathcal{T}^{NS})= & {} 4\sum _{i=0}^{n-1}\Big ({-1\over (\varDelta t_i)^3}\big (-3\Vert x_{i+1}-x_i\Vert ^2 +3\langle v_i+v_{i+1}|x_{i+1}-x_i\rangle \varDelta t_i\nonumber \\&\quad \quad \quad \quad \quad \quad -(\Vert v_i\Vert ^2+\Vert v_{i+1}\Vert ^2 +\langle v_i|v_{i+1}\rangle )(\varDelta t_i)^2\big )\Big )\;. \end{aligned}$$
(5)
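As a sketch with hypothetical names, (5) can also be evaluated numerically from the knots, points and velocities; combined with the previous sketch this gives \(J_0(\mathcal{T}_{int})\) for any admissible choice of internal knots:

```python
import numpy as np

def spline_energy(x, v, t):
    """Evaluate J_T(gamma_T^{NS}) from (5); x, v are (n+1, m) arrays, t is (n+1,)."""
    x, v, t = np.asarray(x, float), np.asarray(v, float), np.asarray(t, float)
    total = 0.0
    for i, h in enumerate(np.diff(t)):
        dx = x[i + 1] - x[i]
        total += (-4.0 / h**3) * (-3.0 * dx @ dx
                                  + 3.0 * ((v[i] + v[i + 1]) @ dx) * h
                                  - (v[i] @ v[i] + v[i + 1] @ v[i + 1]
                                     + v[i] @ v[i + 1]) * h**2)
    return total

# J_0(T_int) is then spline_energy(x, natural_spline_velocities(x, t), t)
# with t assembled from t_0 = 0, the internal knots T_int and t_n = T.
```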

As mentioned before, for fixed knots \(\mathcal{T}\) the natural spline \(\gamma _\mathcal{T}^{NS}\) minimizes (1) (see [1]). Thus, upon relaxing the internal knots \(\mathcal{T}_{int}\), the original infinite dimensional optimization (1) reduces to finding the corresponding optimal knots \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) for (5) (viewed from now on as a multivariable function \(J_0(t_1,t_2,\dots ,t_{n-1})\)) subject to \(t_0=0<t_1^{opt}<t_2^{opt}<\dots<t_{n-1}^{opt}<t_n=T\). The reformulated non-linear optimization task (5), transformed into minimizing \(J_0(\mathcal{T}_{int})\) (here \(t_0=0\) and \(t_n=T\)), is difficult both for the examination of critical points and for numerical computations. The analysis addressing the non-linearity of \(J_0\) and comparisons between different numerical methods used to optimize \(J_0\) are discussed in [2, 3] or [6]. One of the computationally feasible schemes handling (5) turns out to be Leap-Frog (for its 2D analogue applied to image noise removal see [11]; for other contexts see e.g. [9] or [10]). For optimizing \(J_0\) this scheme relies on a sequence of single-variable iterative optimizations which in the k-th iteration minimize:

$$\begin{aligned} J_0^{(k,i)}(s_i)=\int _{t_{i-1}^{k}}^{t_{i+1}^{k-1}}\Vert \ddot{\gamma }^{CS}_{k,i}(t)\Vert ^2dt \end{aligned}$$
(6)

over \(I_i^{k-1}=[t_{i-1}^{k},t_{i+1}^{k-1}]\). Here \(t_i\) is set to be a free variable \(s_i\). The complete spline \(\gamma ^{CS}_{k,i}:I_i^{k-1}\rightarrow \mathbb {E}^m\) is determined by \(\{t_{i-1}^{k},s_i,t_{i+1}^{k-1}\}\), both velocities \(\{v_{i-1}^{k}, v_{i+1}^{k-1}\}\) and the interpolation points \(\{x_{i-1},x_i,x_{i+1}\}\). Once \(s_i^{opt}\) is found one updates \(t_i^{k-1}\) with \(t_i^k=s_i^{opt}\) and \(v_i^{k-1}\) with \(v_i^k=\dot{\gamma }^{CS}_{k,i}(s_i^{opt})\). Next we pass to the shifted overlapping sub-interval \(I_{i+1}^k=[t_{i}^{k},t_{i+2}^{k-1}]\) and repeat the previous step to update \(t_{i+1}^{k-1}\). Note that both cases \([0,t_2^{k-1}]\) and \([t_{n-2}^{k-1},T]\) rely on the splines discussed in (iii), where the vanishing acceleration replaces one of the velocities \(v_0^{k-1}\) or \(v_n^{k-1}\). Once \(t_{n-1}^{k-1}\) is changed over the last sub-interval \(I_{n-1}^{k-1}=[t_{n-2}^{k},T]\) the k-th iteration is terminated, and the next local optimization over \(I_1^k=[0,t_2^{k}]\) represents the beginning of the \((k+1)\)-st iteration of the Leap-Frog Algorithm. The initialization of \({\mathcal T}_{int}\) for Leap-Frog can follow the normalized cumulative chord parameterization (see e.g. [8]) which sets \(t_0^0=0,t_1^0,\dots ,t_{n-1}^0,t_n^0=T\) according to \(t_0^0=0\) and \(t_{i+1}^0=\Vert x_{i+1}-x_i\Vert \frac{T}{\hat{T}}+t_i^0\), for \(i=0,1,\dots ,n-1\) and \(\hat{T}=\sum _{i=0}^{n-1}\Vert x_{i+1}-x_i\Vert \).
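A compact sketch of this initialization and of the outer Leap-Frog loop is given below (Python; all names are illustrative assumptions, the local minimization of (6) over \(I_i^{k-1}\) is abstracted into a user-supplied callback, and the two non-generic endpoint cases from (iii) are folded into that callback for brevity):

```python
import numpy as np

def cumulative_chord_knots(x, T):
    """Normalized cumulative chord initialization t_0^0 = 0, ..., t_n^0 = T (Sect. 2)."""
    x = np.asarray(x, float)
    chords = np.linalg.norm(np.diff(x, axis=0), axis=1)
    t = np.concatenate(([0.0], np.cumsum(chords)))
    return t * (T / t[-1])

def leap_frog(x, T, local_update, n_iter=20):
    """Outer Leap-Frog loop sweeping the internal knots t_1, ..., t_{n-1}.

    local_update(x, t, v, i) is assumed to return the optimal knot s_i^{opt} in
    (t_{i-1}, t_{i+1}) together with the updated velocity at that knot, cf. (6).
    """
    t = cumulative_chord_knots(x, T)
    v = natural_spline_velocities(x, t)    # from the earlier sketch
    for _ in range(n_iter):                # k-th iteration
        for i in range(1, len(t) - 1):     # overlapping single-variable steps
            t[i], v[i] = local_update(x, t, v, i)
    return t, v
```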

3 Generic Middle Case: Initial and Last Velocities Given

Assume that for internal points \(x_i,x_{i+1},x_{i+2}\in \mathbb {E}^m\) (for \(i=1,2,\dots ,n-3\) and \(n>3\)) the interpolation knots \(t_i\) and \(t_{i+2}\) with the velocities \(v_{i},v_{i+2}\in \mathbb {R}^m\) are somehow given (e.g. by previous Leap-Frog iteration outlined in Sect. 2). We construct now a \(C^2\) piecewise cubic (a complete spline - see Sect. 2), depending on varying \(t_{i+1}\in (t_i,t_{i+2})\) (temporarily free variable). The curve \(\gamma ^c_i:[t_i,t_{i+2}]\rightarrow \mathbb {E}^m\) (i.e. a cubic on each \([t_i,t_{i+1}]\) and \([t_{i+1},t_{i+2}]\)) satisfies:

$$\begin{aligned} \gamma ^c_i(t_{i+j})=x_{i+j}\;, \quad j=0,1,2\;;\quad \dot{\gamma }^c_i(t_{i+j})=v_{i+j}\;,\quad j=0,2\;. \end{aligned}$$
(7)

Letting \(\phi _i:[t_i,t_{i+2}]\rightarrow [0,1]\), \(\phi _i(t)=(t-t_i)(t_{i+2}-t_i)^{-1}=s\) the curve \(\tilde{\gamma }^c_i:[0,1]\rightarrow \mathbb {E}^m\) (with \(\tilde{\gamma }^c_i=\gamma ^c_i\circ \phi _i^{-1}\)) by (7) satisfies, for \(0<s_{i+1}=\phi _i(t_{i+1})<1\):

$$\begin{aligned} \tilde{\gamma }^c_i(0)=x_{i}\;,\quad \tilde{\gamma }^c_i(s_{i+1})=x_{i+1}\;, \quad \tilde{\gamma }^c_i(1)=x_{i+2}\;, \end{aligned}$$
(8)

with the adjusted initial and the last velocities \({\tilde{v}}_i,{\tilde{v}}_{i+2}\in \mathbb {R}^m\) fulfilling:

$$\begin{aligned} {\tilde{v}}_i=\tilde{\gamma }^{c'}_i(0)=(t_{i+2}-t_i)v_{i}\;,\quad {\tilde{v}}_{i+2}=\tilde{\gamma }^{c'}_i(1)=(t_{i+2}-t_i)v_{i+2}\;. \end{aligned}$$
(9)

To reformulate the energy \(\tilde{\mathcal{E}}_i(s_{i+1})=\int _0^1\Vert \tilde{\gamma }^{c''}_i(s)\Vert ^2ds\) define two cubics \(\tilde{\gamma }^{lc}_i,\tilde{\gamma }^{rc}_i\) satisfying (with \(s_{i+1}\in (0,1)\)) \(\tilde{\gamma }^c_i=\tilde{\gamma }^{lc}_i\) (over \([0,s_{i+1}]\)) and \(\tilde{\gamma }^c_i=\tilde{\gamma }^{rc}_i\) (over \([s_{i+1},1]\)) with \(c_{ij},d_{ij}\in \mathbb {E}^m\):

$$\begin{aligned} \tilde{\gamma }^{lc}_i(s)= & {} c_{i0}+c_{i1}(s-s_{i+1})+c_{i2}(s-s_{i+1})^2 +c_{i3}(s-s_{i+1})^3 \;,\nonumber \\ \tilde{\gamma }^{rc}_i(s)= & {} d_{i0}+d_{i1}(s-s_{i+1})+d_{i2}(s-s_{i+1})^2 +d_{i3}(s-s_{i+1})^3\;. \end{aligned}$$
(10)

Since \(\tilde{\gamma }^c_i\) is a complete spline the following constraints hold:

$$\begin{aligned} \tilde{\gamma }^{lc}_i(0)=x_i\;, \quad \tilde{\gamma }^{lc}_i(s_{i+1})=\tilde{\gamma }^{rc}_i(s_{i+1})=x_{i+1}\;, \quad \tilde{\gamma }^{rc}_i(1)=x_{i+2}\;, \end{aligned}$$
(11)
$$\begin{aligned} \tilde{\gamma }^{lc'}_i(0)={\tilde{v}}_i\;, \quad \quad \tilde{\gamma }^{rc'}_i(1)={\tilde{v}}_{i+2}\;, \end{aligned}$$
(12)

together with two \(C^1\) and \(C^2\) smoothness constraints at \(s=s_{i+1}\):

$$\begin{aligned} \tilde{\gamma }^{lc'}_i(s_{i+1})=\tilde{\gamma }^{rc'}_i(s_{i+1})\;, \quad \quad \tilde{\gamma }^{lc''}_i(s_{i+1})=\tilde{\gamma }^{rc''}_i(s_{i+1})\;. \end{aligned}$$
(13)

Upon shifting the coordinates origin in \(\mathbb {E}^m\) to \(x_{i+1}\) we have for \(\tilde{x}_{i+1}={\mathbf{0}}\), \(\tilde{x}_{i}=x_i-x_{i+1}\) and \(\tilde{x}_{i+2}=x_{i+2}-x_{i+1}\) (by (11)):

$$\begin{aligned} \tilde{\gamma }^{lc}_i(0)=\tilde{x}_i\;, \quad \tilde{\gamma }^{lc}_i(s_{i+1})=\tilde{\gamma }^{rc}_i(s_{i+1})={\mathbf{0}}\;, \quad \tilde{\gamma }^{rc}_i(1)=\tilde{x}_{i+2}\;. \end{aligned}$$
(14)

Both (10) and \(\tilde{x}_{i+1}={\mathbf{0}}\) yield \(c_{i0}=d_{i0}={\mathbf{0}}\). Next (13) with \(\tilde{\gamma }^{lc'}_i(s)=c_{i1}+2c_{i2}(s-s_{i+1})+3c_{i3}(s-s_{i+1})^2\), \(\tilde{\gamma }^{rc'}_i(s)=d_{i1}+2d_{i2}(s-s_{i+1})+3d_{i3}(s-s_{i+1})^2\), \(\tilde{\gamma }^{lc''}_i(s)=2c_{i2}+6c_{i3}(s-s_{i+1})\) and \(\tilde{\gamma }^{rc''}_i(s)=2d_{i2}+6d_{i3}(s-s_{i+1})\), leads to \(c_{i1}=d_{i1}\) and \(c_{i2}=d_{i2}\). Hence one obtains:

$$\begin{aligned} \tilde{\gamma }^{lc}_i(s)= & {} c_{i1}(s-s_{i+1})+c_{i2}(s-s_{i+1})^2+c_{i3}(s-s_{i+1})^3\;, \nonumber \\ \tilde{\gamma }^{rc}_i(s)= & {} c_{i1}(s-s_{i+1})+c_{i2}(s-s_{i+1})^2+d_{i3}(s-s_{i+1})^3\;. \end{aligned}$$
(15)

The unknown vectors \(c_{i1},c_{i2},c_{i3},d_{i3}\) in (15) follow from four linear vector equations obtained from (12) and (14) (i.e. with data \(\tilde{\mathcal{M}}_i=\{\tilde{x}_i,\tilde{x}_{i+2}, \tilde{v}_i,\tilde{v}_{i+2}\}\)):

$$\begin{aligned} \tilde{x}_i= & {} -c_{i1}s_{i+1}+c_{i2}s_{i+1}^2-c_{i3}s_{i+1}^3\;,\nonumber \\ \tilde{x}_{i+2}= & {} c_{i1}(1-s_{i+1})+c_{i2}(1-s_{i+1})^2+d_{i3}(1-s_{i+1})^3\;,\nonumber \\ \tilde{v}_i= & {} c_{i1}-2c_{i2}s_{i+1}+3c_{i3}s_{i+1}^2\;,\nonumber \\ \tilde{v}_{i+2}= & {} c_{i1}+2c_{i2}(1-s_{i+1})+3d_{i3}(1-s_{i+1})^2\;. \end{aligned}$$
(16)
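Before turning to the closed-form solution below, note that for any fixed \(s_{i+1}\in (0,1)\) the system (16) is linear in \(c_{i1},c_{i2},c_{i3},d_{i3}\) and can also be solved numerically per coordinate, e.g. as in the following sketch (Python/NumPy; names are illustrative only):

```python
import numpy as np

def complete_spline_coeffs(s, x_i, x_ip2, v_i, v_ip2):
    """Solve the four vector equations (16) for c_{i1}, c_{i2}, c_{i3}, d_{i3}
    at a fixed s = s_{i+1} in (0,1); the data vectors are m-dimensional."""
    M = np.array([[-s,     s**2,          -s**3,       0.0],              # x_i row
                  [1 - s,  (1 - s)**2,     0.0,        (1 - s)**3],       # x_{i+2} row
                  [1.0,   -2.0 * s,        3.0 * s**2,  0.0],             # v_i row
                  [1.0,    2.0 * (1 - s),  0.0,         3.0 * (1 - s)**2]])  # v_{i+2} row
    rhs = np.vstack([x_i, x_ip2, v_i, v_ip2]).astype(float)               # shape (4, m)
    c1, c2, c3, d3 = np.linalg.solve(M, rhs)
    return c1, c2, c3, d3
```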

Applying Mathematica Solve to (16) yields:

$$\begin{aligned} c_{i1}= & {} -{-s_{i+1}\tilde{v}_i+2s_{i+1}^2\tilde{v}_i-s_{i+1}^3\tilde{v}_i-s_{i+1}^2\tilde{v}_{i+2}+s_{i+1}^3\tilde{v}_{i+2}-3\tilde{x}_i +6s_{i+1}\tilde{x}_i\over 2(s_{i+1}-1)s_{i+1}}\nonumber \\\nonumber \\&-{-3s_{i+1}^2\tilde{x}_i+3s_{i+1}^2\tilde{x}_{i+2}\over 2(s_{i+1}-1)s_{i+1}}\;, \nonumber \\ \nonumber \\ c_{i2}= & {} -{s_{i+1}\tilde{v}_i-s_{i+1}^2\tilde{v}_i-s_{i+1}\tilde{v}_{i+2}+s_{i+1}^2\tilde{v}_{i+2}+3\tilde{x}_i-3s_{i+1}\tilde{x}_i+3s_{i+1}\tilde{x}_{i+2}\over (s_{i+1}-1)s_{i+1}}\;, \nonumber \\ \nonumber \\ c_{i3}= & {} -{s_{i+1}(\tilde{v}_i+2\tilde{x}_i)-s_{i+1}^3(\tilde{v}_i-\tilde{v}_{i+2}) -s_{i+1}^2(\tilde{v}_{i+2}+3\tilde{x}_i-3\tilde{x}_{i+2})+\tilde{x}_i\over 2(s_{i+1}-1)s_{i+1}^3}\;,\nonumber \\ \nonumber \\ d_{i3}= & {} -{-s_{i+1}\tilde{v}_i+2s_{i+1}^2\tilde{v}_i-s_{i+1}^3\tilde{v}_i+2s_{i+1}\tilde{v}_{i+2}-3s_{i+1}^2\tilde{v}_{i+2}+s_{i+1}^3\tilde{v}_{i+2}-3\tilde{x}_i \over 2(s_{i+1}-1)^3s_{i+1}}\nonumber \\\nonumber \\&-{6s_{i+1}\tilde{x}_i-3s_{i+1}^2\tilde{x}_i-4s_{i+1}\tilde{x}_{i+2}+3s_{i+1}^2\tilde{x}_{i+2}\over 2(s_{i+1}-1)^3s_{i+1}}\;, \end{aligned}$$
(17)

which satisfy (as functions in \(s_{i+1}\)) the system (16). Next, since \(\Vert \tilde{\gamma }^{lc''}_i(s)\Vert ^2=4\Vert c_{i2}\Vert ^2 +24\langle c_{i2}| c_{i3}\rangle (s-s_{i+1})+36\Vert c_{i3}\Vert ^2(s-s_{i+1})^2\) and \(\Vert \tilde{\gamma }^{rc''}_i(s)\Vert ^2=4\Vert c_{i2}\Vert ^2 +24\langle c_{i2}| d_{i3}\rangle (s-s_{i+1})+36\Vert d_{i3}\Vert ^2(s-s_{i+1})^2\) the formula for \(\tilde{\mathcal{E}}_i\) reads as

$$\begin{aligned} \tilde{\mathcal{E}}_i(s_{i+1})=\int _{0}^{s_{i+1}}\Vert \tilde{\gamma }^{lc''}_i(s)\Vert ^2ds +\int _{s_{i+1}}^{1}\Vert \tilde{\gamma }^{rc''}_i(s)\Vert ^2ds=I_1+I_2, \end{aligned}$$

where \(I_1=4(\Vert c_{i2}\Vert ^2s_{i+1}-3\langle c_{i2}|c_{i3}\rangle s_{i+1}^2+3\Vert c_{i3}\Vert ^2s_{i+1}^3)\) and \(I_2=4(\Vert c_{i2}\Vert ^2(1-s_{i+1})+3\langle c_{i2}|d_{i3}\rangle (1-s_{i+1})^2+3\Vert d_{i3}\Vert ^2(1-s_{i+1})^3)\). Combining the latter with (17) (upon applying NIntegrate and FullSimplify from Mathematica) yields:

$$\begin{aligned}&\tilde{\mathcal{E}}_i(s_{i+1})=\nonumber \\&{1\over s_{i+1}^3(s_{i+1}-1)^3} (3\Vert \tilde{x}_i\Vert ^2(s_{i+1}-1)^3(1+3s_{i+1}) +s_{i+1}(-6\langle \tilde{v}_i|\tilde{x}_i\rangle \nonumber \\&+s_{i+1}(\Vert \tilde{v}_{i+2}\Vert ^2(s_{i+1}-4)(s_{i+1}-1)^2s_{i+1} +3\Vert \tilde{x}_{i+2}\Vert ^2s_{i+1}(3s_{i+1}-4)\nonumber \\&+\Vert \tilde{v}_i\Vert ^2(s_{i+1}-1)^3(s_{i+1}+3) -2(s_{i+1}-1)^3s_{i+1}\langle \tilde{v}_i|\tilde{v}_{i+2}\rangle \nonumber \\&+6(2+(s_{i+1}-2)s_{i+1}^2)\langle \tilde{v}_i|\tilde{x}_i\rangle -6(s_{i+1}-1)^2s_{i+1}\langle \tilde{v}_i|\tilde{x}_{i+2}\rangle -6(s_{i+1}-1)^3\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle \nonumber \\&+6(s_{i+1}-2)(s_{i+1}-1)s_{i+1}\langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle -18(s_{i+1}-1)^2\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle )))\;. \end{aligned}$$
(18)

Upon substituting \(\tilde{x}_{i+2}=x_{i+2}-x_{i+1}\) and \(\tilde{x}_{i}=x_{i}-x_{i+1}\) one can reformulate (18) (and thus (19)) in terms of the original data \(x_i,x_{i+1},x_{i+2}\in \mathbb {E}^m\). Mathematica symbolic differentiation and FullSimplify applied to \(\tilde{\mathcal{E}}_i\) yield:

$$\begin{aligned}&\tilde{\mathcal{E}}'_i(s_{i+1})=\nonumber \\ &{-3\over (s_{i+1}-1)^4s_{i+1}^4} (3\Vert \tilde{x}_i\Vert ^2(s_{i+1}-1)^4(1 + 2s_{i+1})+ s_{i+1}(\Vert \tilde{v}_i\Vert ^2(s_{i+1}-1)^4s_{i+1}\nonumber \\ &-\Vert \tilde{v}_{i+2}\Vert ^2(s_{i+1}-1)^2s_{i+1}^3 + 3\Vert \tilde{x}_{i+2}\Vert ^2s_{i+1}^3(2 s_{i+1}-3)\nonumber \\ &+2(s_{i+1}-1)^4(2 + s_{i+1})\langle \tilde{v}_i| \tilde{x}_i\rangle -2(s_{i+1}-1)^2s_{i+1}^3\langle \tilde{v}_i| \tilde{x}_{i+2}\rangle \nonumber \\ &-2(s_{i+1}-1)^4s_{i+1}\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle +2(s_{i+1}-3)(s_{i+1}-1)s_{i+1}^3\langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle \nonumber \\ &-6(s_{i+1}-1)^2s_{i+1}(2s_{i+1}-1) \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle ))\;. \end{aligned}$$
(19)

By (19) \(\tilde{\mathcal{E}}'_i(s_{i+1})=(-1/((s_{i+1}-1)^4s_{i+1}^4))N_i(s_{i+1})\), where \(N_i(s_{i+1})\) is a polynomial of degree 6 (use here e.g. Mathematica functions Factor and CoefficientList) \(N_i(s_{i+1})=b_0^i+b_1^is_{i+1}+b_2^is_{i+1}^2+b_3^is_{i+1}^3 +b_4^is_{i+1}^4+b_5^is_{i+1}^5+b_6^is_{i+1}^6\), where

$$\begin{aligned} {b_0^i\over 3}= & {} 3\Vert \tilde{x}_i\Vert ^2\;,\quad \quad {b_1^i\over 3}\ =\ -6\Vert \tilde{x}_i\Vert ^2 + 4 \langle \tilde{v}_i|\tilde{x}_i\rangle \;,\nonumber \\ {b_2^i\over 3}= & {} \Vert \tilde{v}_i\Vert ^2 - 6\Vert \tilde{x}_i\Vert ^2 - 14\langle \tilde{v}_i|\tilde{x}_i\rangle - 2\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle + 6\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle \;, \nonumber \\ {b_3^i\over 3}= & {} -4\Vert \tilde{v}_i\Vert ^2 + 24\Vert \tilde{x}_i\Vert ^2 + 16\langle \tilde{v}_i|\tilde{x}_i\rangle + 8\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle - 24 \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle \;, \nonumber \\ {b_4^i\over 3}= & {} 6\Vert \tilde{v}_i\Vert ^2 - \Vert \tilde{v}_{i+2}\Vert ^2 - 21\Vert \tilde{x}_i\Vert ^2 - 9\Vert \tilde{x}_{i+2}\Vert ^2 - 4\langle \tilde{v}_i|\tilde{x}_i\rangle - 2\langle \tilde{v}_{i}|\tilde{x}_{i+2}\rangle \nonumber \\&- 12 \langle \tilde{v}_{i+2}|\tilde{x}_i\rangle + 6 \langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle + 30\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle \;, \nonumber \\ {b_5^i\over 3}= & {} -4\Vert \tilde{v}_{i}\Vert ^2 + 2\Vert \tilde{v}_{i+2}\Vert ^2 + 6\Vert \tilde{x}_i\Vert ^2 + 6\Vert \tilde{x}_{i+2}\Vert ^2 - 4 \langle \tilde{v}_i|\tilde{x}_i \rangle + 4\langle \tilde{v}_i|\tilde{x}_{i+2}\rangle \nonumber \\&+8\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle - 8\langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle - 12 \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle \;, \nonumber \\ {b_6^i\over 3}= & {} \Vert \tilde{v}_i\Vert ^2 - \Vert \tilde{v}_{i+2}\Vert ^2 + 2\langle \tilde{v}_i|\tilde{x}_i\rangle - 2\langle \tilde{v}_i|\tilde{x}_{i+2}\rangle - 2\langle \tilde{v}_{i+2}|\tilde{x}_i\rangle + 2 \langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle \;. \end{aligned}$$

In a search for a global optimum of \(\tilde{\mathcal{E}}_i\), instead of using an optimization scheme relying on an initial guess, one can apply Mathematica Solve which finds all roots (real and complex). Indeed, upon computing the roots of \(N_i(s_{i+1})\) one selects only those from (0, 1). Next we evaluate \(\tilde{\mathcal{E}}_i\) at each critical point \(s_{i+1}^{crit}\in (0,1)\) and choose the \(s_{i+1}^{crit}\) with minimal energy as optimal. This feature is particularly useful in the implementation of Leap-Frog as opposed to the optimization of the initial energy (5) depending on \(n-1\) unknown knots.
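A numerical analogue of this root-based search (mirroring the use of Mathematica Solve; the names below are hypothetical and the energy callable can be assembled e.g. from (18)) might read:

```python
import numpy as np

def best_interior_knot(b, energy, eps=1e-9):
    """Return the critical point of tilde{E}_i in (0,1) with minimal energy.

    b = (b_0^i, ..., b_6^i) are the coefficients of N_i in increasing powers;
    energy(s) evaluates tilde{E}_i(s), e.g. via (18).
    """
    roots = np.polynomial.polynomial.polyroots(np.asarray(b, float))
    real = roots[np.abs(roots.imag) < eps].real      # keep (numerically) real roots
    interior = [s for s in real if eps < s < 1.0 - eps]
    return min(interior, key=energy)                 # critical point of minimal energy
```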

4 Special Conditions for Leap-Frog Generic Case

Assume \(\tilde{x}_{i}, \tilde{x}_{i+1}, \tilde{x}_{i+2}\in \mathbb {E}^m\) with \(\tilde{v}_i,\tilde{v}_{i+2}\in \mathbb {R}^m\) satisfy now the extra constraints:

$$\begin{aligned} \tilde{v}_i=\tilde{v}_{i+2}\;,\quad \quad \tilde{x}_{i+2}-\tilde{x}_i=\tilde{v}_i=\tilde{v}_{i+2}\;. \end{aligned}$$
(20)

By (20) we get \(\Vert \tilde{v}_{i+2}\Vert ^2=\Vert \tilde{v}_{i}\Vert ^2=\langle \tilde{v}_{i}|\tilde{v}_{i+2}\rangle = \Vert \tilde{x}_{i+2}\Vert ^2+\Vert \tilde{x}_i\Vert ^2 -2\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle \), \(\langle \tilde{x}_i|\tilde{v}_i\rangle =\langle \tilde{x}_i|\tilde{v}_{i+2}\rangle =\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle -\Vert \tilde{x}_i\Vert ^2\) and \(\langle \tilde{x}_{i+2}|\tilde{v}_i\rangle =\langle \tilde{x}_{i+2}|\tilde{v}_{i+2}\rangle =\Vert \tilde{x}_{i+2}\Vert ^2-\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle \). Substituting the above into (18) yields \(\tilde{\mathcal{E}}_i^{c}(s_{i+1})=\)

$$\begin{aligned}&-{3(\Vert \tilde{x}_i\Vert ^2(s_{i+1}-1)^2 + s_{i+1}(\Vert \tilde{x}_{i+2}\Vert ^2 s_{i+1} - 2(s_{i+1}-1)\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle ))\over (s_{i+1}-1)^3s_{i+1}^3} \end{aligned}$$
(21)

and hence \(\tilde{\mathcal{E}}_i^{c'}(s_{i+1})=\)

$$\begin{aligned}&{3\over (s_{i+1}-1)^4s_{i+1}^4} (\Vert \tilde{x}_{i}\Vert ^2(s_{i+1}-1)^2(4s_{i+1}-3)\nonumber \\&+s_{i+1}(\Vert \tilde{x}_{i+2}\Vert ^2s_{i+1}(4s_{i+1}-1) -4(1 - 3s_{i+1} + 2s_{i+1}^2) \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle ))\;. \end{aligned}$$
(22)

The numerator of (22) forms now a polynomial of degree 3 (instead of degree 6 as in (19)) \(N_i^c(s_{i+1}) =b_0^{i_c}+b_1^{i_c}s_{i+1}+b_2^{i_c}s_{i+1}^2+b_3^{i_c}s_{i+1}^3\), where:

$$\begin{aligned} {b_0^{i_c}\over 3}= & {} -3\Vert \tilde{x}_i\Vert ^2<0\;,\quad \quad {b_1^{i_c}\over 3}\ =\ 2(5\Vert \tilde{x}_i\Vert ^2 - 2 \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle )\;,\nonumber \\ {b_2^{i_c}\over 3}= & {} -11\Vert \tilde{x}_i\Vert ^2 -\Vert \tilde{x}_{i+2}\Vert ^2 + 12\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle =5(\Vert \tilde{x}_{i+2}\Vert ^2-\Vert \tilde{x}_i\Vert ^2)-6\Vert \tilde{x}_{i+2}-\tilde{x}_i\Vert ^2\;, \nonumber \\ {b_3^{i_c}\over 3}= & {} 4\Vert \tilde{x}_i\Vert ^2+4\Vert \tilde{x}_{i+2}\Vert ^2- 8\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle =4\Vert \tilde{x}_{i+2}-\tilde{x}_i\Vert ^2\ge 0\;. \end{aligned}$$

For \(\tilde{\mathcal{E}}_i^c\) to be unimodal over (0, 1) one needs \(N_i^c(s_{i+1})\) to have a single root in (0, 1).

(i) Note that if \(\tilde{x}_{i+2}=\tilde{x}_i\) then \(N_i^c(s_{i+1})=-9\Vert \tilde{x}_i\Vert ^2+18s_{i+1}\Vert \tilde{x}_{i}\Vert ^2\) has exactly one root \(\hat{s}_{i+1}=1/2\in (0,1)\). By (20) we have \(\tilde{v}_{i+2}=\tilde{v}_i=\mathbf{0}\).

(ii) Assume now that \(\tilde{x}_{i+2}\ne \tilde{x}_i\); then \(N_i^c(s_{i+1})\) becomes a cubic. We now find the conditions under which \(N_i^c\) has exactly one root over (0, 1). For the latter, as \(N_i^c(0)=-9\Vert \tilde{x}_i\Vert ^2<0\) and \(N_i^c(1)=9\Vert \tilde{x}_{i+2}\Vert ^2>0\), by the Intermediate Value Theorem it suffices to show that either \(N_i^{c'}(s_{i+1})=c_0^{i_c}+c_1^{i_c}s_{i+1}+c_2^{i_c}s_{i+1}^2>0\) (over (0, 1)) or that the derivative \(N_i^{c'}\) has exactly one root \(\hat{u}_{i+1}\in (0,1)\) (i.e. \(N_i^{c}\) has exactly one max/min/saddle at \(\hat{u}_{i+1}\)), and thus \(N_i^{c}(s_{i+1})=0\) yields exactly one root \(\hat{s}_{i+1}\in (0,1)\) - note that if \(\hat{s}_{i+1}=\hat{u}_{i+1}\) then \(\hat{u}_{i+1}\) is a saddle point of \(N_i^{c}\). Here the quadratic \(N_i^{c'}(s_{i+1})\) (as \(\tilde{x}_{i+2}\ne \tilde{x}_i\)) has coefficients \((c_0^{i_c}/6)=5\Vert \tilde{x}_i\Vert ^2 - 2 \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle \), \((c_1^{i_c}/6)= 5(\Vert \tilde{x}_{i+2}\Vert ^2-\Vert \tilde{x}_{i}\Vert ^2) -6\Vert \tilde{x}_{i+2}-\tilde{x}_i\Vert ^2\), and \((c_2^{i_c}/6)=6\Vert \tilde{x}_{i+2}-\tilde{x}_i\Vert ^2>0\).

The discriminant \(\tilde{\varDelta }\) of the quadratic \(N_i^{c'}(s_{i+1})/6\) reads as:

$$\begin{aligned} \tilde{\varDelta }=\Vert \tilde{x}_{i+2}\Vert ^4+\Vert \tilde{x}_i\Vert ^4 -98\Vert \tilde{x}_{i+2}\Vert ^2\Vert \tilde{x}_{i}\Vert ^2 +24\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle \Vert \tilde{x}_{i+2}+\tilde{x}_{i}\Vert ^2 \,. \end{aligned}$$
(23)

Define now two auxiliary parameters \((\lambda ,\mu )\in \varOmega =(\mathbb {R}_+\times [-1,1])\setminus \{(1,1)\}\):

$$\begin{aligned} \Vert \tilde{x}_i\Vert =\lambda \Vert \tilde{x}_{i+2}\Vert \;, \quad \langle \tilde{x}_i|\tilde{x}_{i+2}\rangle =\mu \Vert \tilde{x}_i\Vert \Vert \tilde{x}_{i+2}\Vert \;. \end{aligned}$$
(24)

Here \(\mu \) stands for \(\cos (\alpha )\), where \(\alpha \) is the angle between vectors \(\tilde{x}_i\) and \(\tilde{x}_{i+2}\) - hence \(\mu =\lambda =1\) is excluded as then \(\tilde{x}_{i+2}=\tilde{x}_i\). Note, however that as analyzed in case (i) when \(\tilde{x}_{i+2}=\tilde{x}_i\) there is only one optimal parameter \(\hat{s}_{i+1}=1/2\) - thus \((\mu ,\lambda )=(1,1)\) is also admissible. We examine various constraints on \((\mu ,\lambda )\ne (1,1)\) (with \(\lambda >0\) and \(-1 \le \mu \le 1\)) for the existence of either no roots or one root of \(N_i^{c'}=0\) over [0, 1] (yielding single critical point of \(\tilde{\mathcal{E}}_i^c\) over (0, 1)).

1. \(\tilde{\varDelta }<0\). Since \(c_2^{i_c}>0\), clearly \(N_i^{c'}>0\) holds over (0, 1). Substituting (24) into (23) yields (for \(\varDelta =(\tilde{\varDelta }/\Vert \tilde{x}_{i+2}\Vert ^4)\)) \(\varDelta (\lambda ,\mu )=\lambda ^4+24\mu \lambda ^3+(48\mu ^2-98)\lambda ^2+24\mu \lambda +1\). In order to decompose \(\varOmega \) into sub-regions \(\varOmega _-\) (with \(\varDelta <0\)), \(\varOmega _{+}\) (with \(\varDelta >0\)) and \(\varGamma _0\) (with \(\varDelta \equiv 0\)) we resort to the Mathematica functions InequalityPlot, ImplicitPlot and Solve. Figure 1(a) shows the resulting decomposition and Fig. 1(b) shows its magnification for \(\lambda \) small. The intersection points of \(\varGamma _0\) and the boundary \(\partial \varOmega \) (found by Solve) read: for \(\mu =1\) it is the point (1, 1) (already excluded - see the dotted point in Fig. 1) and for \(\mu =-1\) we have the two points \((-1,(1/(13+2\sqrt{42})))\approx (-1,0.0385186)\) and \((-1,13+2\sqrt{42})\approx (-1,25.9615)\).
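For a quick membership test one may evaluate the sign of \(\varDelta (\lambda ,\mu )\) directly, as in the following sketch (the function name is an illustrative assumption; the region labels follow Fig. 1):

```python
import numpy as np

def delta_sign(lam, mu):
    """Sign of Delta(lambda, mu) = lam^4 + 24 mu lam^3 + (48 mu^2 - 98) lam^2 + 24 mu lam + 1:
    negative on Omega_-, zero on Gamma_0 and positive on Omega_+."""
    d = lam**4 + 24.0 * mu * lam**3 + (48.0 * mu**2 - 98.0) * lam**2 + 24.0 * mu * lam + 1.0
    return np.sign(d)

# e.g. delta_sign(1/np.sqrt(2), -1/np.sqrt(2)) < 0, so the data of Example 1
# (with lambda = 1/sqrt(2), mu = -1/sqrt(2)) fall into Omega_- and hence Omega_ok.
```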

Fig. 1. Decomposition of \(\varOmega \) into sub-regions: (a) over which \(\varDelta >0\) (i.e. \(\varOmega _+\)), \(\varDelta =0\) (i.e. \(\varGamma _0\)) or \(\varDelta <0\) (i.e. \(\varOmega _{-}\)), (b) only for \(\lambda \) small.

The admissible subset \(\varOmega _{ok}\subset \varOmega \) of parameters \((\mu ,\lambda )\) (for which there is one local minimum of \(\tilde{\mathcal{E}}_i^c\)) satisfies \(\varOmega _-\subset \varOmega _{ok}\). The complement \(\varOmega \setminus \varOmega _-\) contains the potential exclusion zone \(\varOmega _{ex}\subset \varOmega \setminus \varOmega _-\). Next we shrink the exclusion zone \(\varOmega _{ex}\subset \varOmega \) (a subset of the shaded region in Fig. 1).

2. \(\tilde{\varDelta }=0\). There is only one root \(\hat{u}_{i+1}^0\in \mathbb {R}\) of \(N_i^{c'}(s_{i+1})=0\). As explained, irrespective of whether \(\hat{u}_{i+1}^0\in (0,1)\) or \(\hat{u}_{i+1}^0\notin (0,1)\), this results in exactly one root \(\hat{s}_{i+1}\in (0,1)\) of \(N_i^{c}(s_{i+1})=0\), which in turn yields exactly one local (thus one global) minimum for \(\tilde{\mathcal{E}}_i^c\). Hence \(\varOmega _{-}\cup \varGamma _0\subset \varOmega _{ok}\).

3. \(\tilde{\varDelta }>0\). There are two different roots \(\hat{u}_{i+1}^{\pm }\in \mathbb {R}\) of \(N_i^{c'}(s_{i+1})=0\). Note that since \(c_2^{i_c}>0\) we have \(\hat{u}_{i+1}^-<\hat{u}_{i+1}^+\). They are either (in all cases we use Vieta’s formulas):

(a) of opposite signs: i.e. \((c_0^{i_c}/c_2^{i_c})<0\) or

(b) non-positive: i.e. \((c_0^{i_c}/c_2^{i_c})\ge 0\) and \((-c_1^{i_c}/c_2^{i_c})<0\) (as \(\hat{u}_{i+1}^-<\hat{u}_{i+1}^+\)) or

(c) non-negative: i.e. \((c_0^{i_c}/c_2^{i_c})\ge 0\) and \((-c_1^{i_c}/c_2^{i_c})>0\) - split into:

(c1) \(\hat{u}_{i+1}^{+}\ge 1\) (made explicit in (27) below), or

(c2) \(0<\hat{u}_{i+1}^{+}<1\) (as here \(\hat{u}_{i+1}^{-}<\hat{u}_{i+1}^{+}\)).

Evidently for a), b) and c1) there is at most one root \(\hat{u}_{i+1}\in (0,1)\) of \(N_i^{c'}(s_{i+1})=0\). Therefore, as already explained, there is only one root \(\hat{s}_{i+1}\in (0,1)\) of \(N_i^{c}(s_{i+1})=0\), which is the unique critical point of \(\tilde{\mathcal{E}}_i^c\) over (0, 1). We show now that the inequalities from a), b) or c) extend (contract) the admissible (exclusion) zone \(\varOmega _{ok}\) (\(\varOmega _{ex}\)) of parameters \((\mu ,\lambda )\in \varOmega \). Indeed:

a) the constraint \((c_0^{i_c}/c_2^{i_c})<0\) upon using (24) reads (as \(\lambda >0\)):

$$\begin{aligned} 5\lambda ^2-2\mu \lambda<0\quad \equiv \lambda <{2\mu \over 5}\;. \end{aligned}$$
(25)

Figure 2 a) shows \(\varOmega _1\) (over which (25) holds) cut out from the exclusion zone \(\varOmega _{ex}\) of parameters \((\mu ,\lambda )\in \varOmega \) (again InequalityPlot is used here).

Fig. 2. Extension of admissible zone \(\varOmega _{ok}\) by cutting out from \(\varOmega _{ex}\): (a) \(\varOmega _1\), (b) \(\varOmega _2\).

Thus \(\varOmega _{-}\cup \varGamma _0\cup \varOmega _1\subset \varOmega _{ok}.\) The intersection \(\varGamma _1\cap \partial \varOmega =\{(0,0),(1,0.4)\}\) (here \(\varGamma _1=\{(\mu ,\lambda )\in \varOmega : 5\lambda -2\mu =0\}\)). Similarly the intersection \(\varGamma _0\cap \varGamma _1 =\{(5/(2\sqrt{19}),1/\sqrt{19})\}\approx (0.573539,0.229416)=p_1\).

b) the constraints \((c_0^{i_c}/c_2^{i_c})\ge 0\) and \((-c_1^{i_c}/c_2^{i_c})<0\) combined with (24) yield:

$$\begin{aligned} \lambda \ge {2\mu \over 5}\quad \mathrm{and}\quad 11\lambda ^2-12\mu \lambda +1<0\;. \end{aligned}$$
(26)

Using ImplicitPlot and InequalityPlot we find \(\varOmega _2\) (cut out from \(\varOmega _{ex}\)) as the intersection of the three sets defined by (26) and \(\varDelta >0\) (for \(\varOmega _2\) see Fig. 2 a–b)). Thus \(\varOmega _{-}\cup \varGamma _0\cup \varOmega _1\cup \varOmega _2\subset \varOmega _{ok}\) (see Fig. 2 b)). Note that for \(\varGamma _2=\{(\mu ,\lambda )\in \varOmega : 11\lambda ^2-12\mu \lambda +1=0\}\) the sets \(\varGamma _0\cap \varGamma _2=\{(5/(2\sqrt{19}),1/\sqrt{19}),(1,1)\}\) and \(\varGamma _1\cap \varGamma _2=\{(5/(2\sqrt{19}),1/\sqrt{19})\}\), and the intersection of \(\varGamma _2\) with the boundary \(\mu =1\) yields \(\{(1,1),(1,1/11)\}\) (use e.g. Solve in Mathematica).

Fig. 3. Extension of admissible zone \(\varOmega _{ok}\) by cutting out from \(\varOmega _{ex}\): (a) \(\varOmega _3\), (b) \(\varOmega _4\).

c1) \((c_0^{i_c}/c_2^{i_c})\ge 0\), \((-c_1^{i_c}/c_2^{i_c})>0\) and \(\hat{u}_{i+1}^+\ge 1\) with (24) yield

$$\begin{aligned} \lambda \ge {2\mu \over 5}\;,\quad 11\lambda ^2-12\mu \lambda +1>0\;, \quad \sqrt{\varDelta }\ge \lambda ^2-12\mu \lambda +11\;. \end{aligned}$$
(27)

The last inequality in (27) is clearly satisfied for \(\lambda ^2-12\mu \lambda +11<0\). This holds over \(\varOmega _5=\varOmega _3\cup \varOmega _4\cup \varGamma _0^3\), which is the domain bounded by \(\varGamma _3=\{(\mu ,\lambda )\in \varOmega :\lambda ^2-12\mu \lambda +11=0\}\) and the boundary \(\mu =1\) (see Fig. 3 a)). Here \(\varGamma _3\cap \partial \varOmega =\{(1,1),(1,11)\}\) and \(\varGamma _3\cap \varGamma _0=\{(1,1),(5/(2\sqrt{19}),\sqrt{19})\} \approx \{(1,1),(0.573539,4.3589)\}\) - again we resort here to the InequalityPlot, ImplicitPlot and Solve functions in Mathematica. Intersecting \(\varOmega _5\) with the three subsets defined by the first two inequalities from (27) and \(\varDelta >0\) cuts \(\varOmega _3\) out of the exclusion zone \(\varOmega _{ex}\) (see Fig. 3 a)), where \(\varOmega _3\) is bounded by \(\varGamma _0^3\), the undashed part of \(\varGamma _3\) and the boundary \(\mu =1\). Thus \(\varOmega _{-}\cup \varGamma _0\cup \varOmega _1\cup \varOmega _2\cup \varOmega _3\subset \varOmega _{ok}\). For the opposite case \(\lambda ^2-12\mu \lambda +11\ge 0\) (satisfied over \(\varOmega \setminus \varOmega _5\)) the last inequality from (27) yields \(\varOmega _8=\varOmega _6\cup \varOmega _7\cup \varGamma _3^1\) with the bounding curve \(\varGamma _4=\{(\mu ,\lambda )\in \varOmega : \varDelta -(\lambda ^2-12\mu \lambda +11)^2=0\}\) (see Fig. 3 b)) - here \(\varOmega _6\) is bounded by \(\varGamma _4^1\), \(\varGamma _3^1\) and \(\partial \varOmega \), and \(\varOmega _7\) is bounded by \(\varGamma _4^2\), \(\varGamma _3^1\) and \(\partial \varOmega \). The intersection of \(\varGamma _4\) with the boundary \(\mu =1\) yields the single point \(\{(1,5/2)\}\). Since \(\varGamma _0\cap \varGamma _3\cap \varGamma _4 =\{(5/(2\sqrt{19}),\sqrt{19})\}\approx \{(0.573539,4.3589)\}=p_2\), the intersection of \(\varOmega _8\) with the regions defined by the first two inequalities in (27) (and by \(\varDelta >0\) and \(\lambda ^2-12\mu \lambda +11\ge 0\)) leads to the further cut-out of \(\varOmega _6\cup \varGamma _4^1\cup \varGamma _3^1\) from the zone \(\varOmega _+^3\subset \varOmega _{ex}\). The inclusion \(\varOmega _{-}\cup \varGamma _0\cup \varOmega _1\cup \varOmega _2\cup \varOmega _3\cup \varOmega _6\cup \varGamma _4^1\cup \varGamma _3^1 \subset \varOmega _{ok}\) follows.
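The above case analysis can also be cross-checked numerically for any concrete \(\tilde{x}_i,\tilde{x}_{i+2}\) satisfying (20) by counting the roots of \(N_i^c\) in (0, 1) directly; a minimal sketch (with hypothetical names; exactly one root confirms unimodality of \(\tilde{\mathcal{E}}_i^c\)) reads:

```python
import numpy as np

def special_case_root_count(x_i, x_ip2, eps=1e-9):
    """Number of roots in (0,1) of N_i^c from Sect. 4 for data obeying (20)."""
    x_i, x_ip2 = np.asarray(x_i, float), np.asarray(x_ip2, float)
    a, b, c = x_i @ x_i, x_ip2 @ x_ip2, x_i @ x_ip2
    coeffs = 3.0 * np.array([-3.0 * a,                      # b_0^{i_c}
                             2.0 * (5.0 * a - 2.0 * c),     # b_1^{i_c}
                             -11.0 * a - b + 12.0 * c,      # b_2^{i_c}
                             4.0 * a + 4.0 * b - 8.0 * c])  # b_3^{i_c}
    roots = np.polynomial.polynomial.polyroots(coeffs)
    real = roots[np.abs(roots.imag) < eps].real
    return int(np.sum((real > eps) & (real < 1.0 - eps)))
```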

5 Perturbed Special Case

Assume now that for the data points \(\{\tilde{x}_i,\tilde{x}_{i+1},\tilde{x}_{i+2}\}\) and velocities \(\{\tilde{v}_i,\tilde{v}_{i+2}\}\) condition (20) is not met. For the perturbation vector \(\delta =(\delta _1,\delta _2)\in \mathbb {R}^{2m}\) we attempt to extend the results for (20) to its perturbed form (28). Indeed, let \((\delta _1,\delta _2)\) satisfy:

$$\begin{aligned} \tilde{x}_{i+2}-\tilde{x}_i-\tilde{v}_{i+2}=\delta _1\;,\quad \quad \tilde{v}_{i+2}-\tilde{v}_{i}=\delta _2\;, \end{aligned}$$
(28)

with \(\tilde{\mathcal{E}}_i^{\delta }\) derived as in (18). Of course, for \(\delta _1=\delta _2=\mathbf{0}\in \mathbb {R}^m\) (28) collapses to (20) (i.e. with the notation \(\tilde{\mathcal{E}}_i^0=\tilde{\mathcal{E}}_i^c\) derived for (20)). To obtain formulas for \(\tilde{\mathcal{E}}_i^{\delta }\) and \(\tilde{\mathcal{E}}_i^{\delta '}\) we resort to (by (28)):

$$\begin{aligned} \Vert \tilde{v}_{i+2}\Vert ^2= & {} \Vert \tilde{x}_{i+2}\Vert ^2+\Vert \tilde{x}_i\Vert ^2 -2\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle -2\langle \tilde{x}_{i+2}|\delta _1\rangle +2\langle \tilde{x}_{i}|\delta _1\rangle \nonumber +\Vert \delta _1\Vert ^2\;, \nonumber \\ \langle \tilde{v}_{i+2}|\tilde{x}_{i}\rangle= & {} \langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle -\Vert \tilde{x}_{i}\Vert ^2-\langle \tilde{x}_{i}|\delta _1\rangle \;, \nonumber \\ \langle \tilde{v}_{i+2}|\tilde{x}_{i+2}\rangle= & {} \Vert \tilde{x}_{i+2}\Vert ^2-\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle - \langle \tilde{x}_{i+2}|\delta _1\rangle \;,\nonumber \\ \Vert \tilde{v}_i\Vert ^2= & {} \Vert \tilde{x}_{i+2}\Vert ^2+\Vert \tilde{x}_i\Vert ^2 -2\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle +\Vert \delta _1\Vert ^2+\Vert \delta _2\Vert ^2 +2\langle \delta _1|\delta _2\rangle \nonumber \\&-2\langle \tilde{x}_{i+2}|\delta _1\rangle -2\langle \tilde{x}_{i+2}|\delta _2\rangle +2\langle \tilde{x}_{i}|\delta _1\rangle +2\langle \tilde{x}_{i}|\delta _2\rangle \;,\nonumber \\ \langle \tilde{v}_{i}|\tilde{v}_{i+2}\rangle= & {} \Vert \tilde{x}_{i+2}\Vert ^2+\Vert \tilde{x}_i\Vert ^2 -2\langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle -2\langle \tilde{x}_{i+2}|\delta _1\rangle +2\langle \tilde{x}_{i}|\delta _1\rangle -\langle \tilde{x}_{i+2}|\delta _2\rangle \nonumber \\&+\langle \tilde{x}_{i}|\delta _2\rangle +\Vert \delta _1\Vert ^2 +\langle \delta _1|\delta _2\rangle \;,\nonumber \\ \langle \tilde{x}_i|\tilde{v}_i\rangle= & {} \langle \tilde{x}_{i}|\tilde{x}_{i+2}\rangle -\Vert \tilde{x}_i\Vert ^2 -\langle \tilde{x}_i|\delta _1\rangle -\langle \tilde{x}_i|\delta _2\rangle \;, \nonumber \\ \langle \tilde{x}_{i+2}|\tilde{v}_i\rangle= & {} \Vert \tilde{x}_{i+2}\Vert ^2-\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle -\langle \tilde{x}_{i+2}|\delta _1\rangle -\langle \tilde{x}_{i+2}|\delta _2\rangle \;, \end{aligned}$$

leading by (18) to (with FullSimplify, Factor and CoefficientList): \(\tilde{\mathcal{E}}_i^{\delta }(s_{i+1})=\)

$$\begin{aligned}&{1\over s_{i+1}^3(s_{i+1}-1)^3}(3\Vert \tilde{x}_i\Vert ^2(s_{i+1}-1)^3(1 + 3s_{i+1})+ s_{i+1}(-6 (\langle \tilde{x}_i|\tilde{x}_{i+2} \rangle \nonumber \\ &- \langle \tilde{x}_i|\delta _1\rangle - \langle \tilde{x}_i|\delta _2\rangle - \Vert \tilde{x}_i\Vert ^2)+ s_{i+1}(-18\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle (s_{i+1}-1)^2\nonumber \\ &-6(\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle - \langle \tilde{x}_i|\delta _1\rangle - \Vert \tilde{x}_i\Vert ^2)(s_{i+1}-1)^3 +6(-\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle -\langle \tilde{x}_{i+2}|\delta _1\rangle \nonumber \\ &+ \Vert \tilde{x}_{i+2}\Vert ^2) (s_{i+1}-2) (s_{i+1}-1) s_{i+1} - 6 (-\langle \tilde{x}_i| \tilde{x}_{i+2}\rangle -\langle \tilde{x}_{i+2}|\delta _1\rangle \nonumber \\ &-\langle \tilde{x}_{i+2}|\delta _2\rangle + \Vert \tilde{x}_{i+2}\Vert ^2) (s_{i+1}-1)^2s_{i+1} +(-2\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle + 2\langle \tilde{x}_i|\delta _1\rangle \nonumber \\ &- 2\langle \tilde{x}_{i+2}|\delta _1\rangle + \Vert \tilde{x}_i\Vert ^2 + \Vert \tilde{x}_{i+2}\Vert ^2 +\Vert \delta _1\Vert ^2) (s_{i+1}-4)(s_{i+1}-1)^2s_{i+1}\nonumber \\ &-2(-2\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle + 2\langle \tilde{x}_i |\delta _1\rangle + \langle \tilde{x}_i|\delta _2\rangle - 2\langle \tilde{x}_{i+2}|\delta _1\rangle - \langle \tilde{x}_{i+2}|\delta _2 \rangle + \langle \delta _1 |\delta _2\rangle \nonumber \\ &+\Vert \tilde{x}_i\Vert ^2 + \Vert \tilde{x}_{i+2}\Vert ^2 + \Vert \delta _1\Vert ^2) (s_{i+1}-1)^3s_{i+1}+(-2 \langle \tilde{x}_i | \tilde{x}_{i+2}\rangle + 2 \langle \tilde{x}_i|\delta _1\rangle \nonumber \\ &+ 2 \langle \tilde{x}_i|\delta _2\rangle - 2 \langle \tilde{x}_{i+2}|\delta _1\rangle - 2 \langle \tilde{x}_{i+2}|\delta _2\rangle + 2 \langle \delta _1|\delta _2\rangle + \Vert \tilde{x}_i\Vert ^2 + \Vert \tilde{x}_{i+2}\Vert ^2+ \Vert \delta _1\Vert ^2 \nonumber \\ &+ \Vert \delta _2\Vert ^2) (s_{i+1}-1)^3(3 + s_{i+1})+ 3\Vert \tilde{x}_{i+2}\Vert ^2s_{i+1}(3s_{i+1}-4)\nonumber \\ &+6(\langle \tilde{x}_i| \tilde{x}_{i+2}\rangle - \langle \tilde{x}_i|\delta _1\rangle - \langle \tilde{x}_i |\delta _2\rangle - \Vert \tilde{x}_i\Vert ^2)(2 + (s_{i+1}-2)s_{i+1}^2))) \end{aligned}$$
(29)

yielding \(\tilde{\mathcal{E}}_i^{\delta }(s_{i+1}) =M_i^\delta (s_{i+1})/(s_{i+1}^3(s_{i+1}-1)^3)\). Here \(deg(M_i^\delta )=6\) with the coefficients (using Mathematica functions Factor and CoefficientList): \(a_0^{i,\delta }=-3\Vert \tilde{x}_i\Vert ^2\), \(a_1^{i,\delta }=-6\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle +6\langle \tilde{x}_i|\delta _1\rangle +6\langle \tilde{x}_i|\delta _2\rangle + 6\Vert \tilde{x}_{i}\Vert ^2\), \(a_2^{i,\delta }=6\langle \tilde{x}_i|\tilde{x}_{i+2}\rangle -24\langle \tilde{x}_i|\delta _1\rangle -18\langle \tilde{x}_i|\delta _2\rangle +6\langle \tilde{x}_{i+2}|\delta _1\rangle +6\langle \tilde{x}_{i+2}|\delta _2\rangle -6\langle \delta _1|\delta _2\rangle -3\Vert \tilde{x}_i\Vert ^2-3\Vert \tilde{x}_{i+2}\Vert ^2-3\Vert \delta _1\Vert ^2 -3\Vert \delta _2\Vert ^2\), \(a_3^{i,\delta }=2(15 \langle \tilde{x}_{i}|\delta _1\rangle +9\langle \tilde{x}_{i}|\delta _2\rangle -9 \langle \tilde{x}_{i+2}|\delta _1\rangle -6 \langle \tilde{x}_{i+2}|\delta _2\rangle + 9 \langle \delta _1|\delta _2\rangle + 3 \Vert \delta _1\Vert ^2 + 4 \Vert \delta _2\Vert ^2)\), \(a_4^{i,\delta }=-12\langle \tilde{x}_{i}|\delta _1\rangle -6\langle \tilde{x}_{i}|\delta _2\rangle + 12\langle \tilde{x}_{i+2}|\delta _1\rangle + 6\langle \tilde{x}_{i+2}|\delta _2\rangle -18\langle \delta _1|\delta _2\rangle -3\Vert \delta _1\Vert ^2 - 6\Vert \delta _2\Vert ^2\), \(a_5^{i,\delta }= 6\langle \delta _1|\delta _2\rangle \) and \(a_6^{i,\delta }=\Vert \delta _2\Vert ^2\). The derivative of \(\tilde{\mathcal{E}}_i^{\delta }(s_{i+1})\) reads as \(\tilde{\mathcal{E}}_i^{\delta '}(s_{i+1}) =-N_i^\delta (s_{i+1})/(s_{i+1}^4(s_{i+1}-1)^4)\), where \(N_i^\delta \) is the 6-th order polynomial in \(s_{i+1}\) with the coefficients (e.g. again upon using symbolic differentiation in Mathematica and functions Factor and CoefficientList): \(b_0^{i,\delta }=-9\Vert \tilde{x}_i\Vert ^2\), \(b_1^{i,\delta }=-12\langle \tilde{x}_i |\tilde{x}_{i+2}\rangle + 12\langle \tilde{x}_i |\delta _1\rangle + 12 \langle \tilde{x}_i |\delta _2\rangle + 30\Vert \tilde{x}_i\Vert ^2\), \(b_2^{i,\delta }=3(12 \langle \tilde{x}_i |\tilde{x}_{i+2}\rangle -18 \langle \tilde{x}_i |\delta _1 \rangle -16 \langle \tilde{x}_i |\delta _2 \rangle +2 \langle \tilde{x}_{i+2} |\delta _1 \rangle +2 \langle \tilde{x}_{i+2} |\delta _2 \rangle -2 \langle \delta _1 |\delta _2 \rangle - 11 \Vert \tilde{x}_{i}\Vert ^2-\Vert \tilde{x}_{i+2}\Vert ^2 -\Vert \delta _1\Vert ^2-\Vert \delta _2\Vert ^2)\), \(b_3^{i,\delta }=3(-8 \langle \tilde{x}_i |\tilde{x}_{i+2}\rangle +32 \langle \tilde{x}_i |\delta _1\rangle +24 \langle \tilde{x}_i |\delta _2\rangle -8 \langle \tilde{x}_{i+2} |\delta _1\rangle -8 \langle \tilde{x}_{i+2} |\delta _2\rangle +8 \langle \delta _1 |\delta _2\rangle +4 \Vert \tilde{x}_i\Vert ^2+ 4 \Vert \tilde{x}_{i+2}\Vert ^2 + 4\Vert \delta _1\Vert ^2 + 4 \Vert \delta _2\Vert ^2)\), \(b_4^{i,\delta }=3(-26 \langle \tilde{x}_i|\delta _1\rangle -16 \langle \tilde{x}_i|\delta _2\rangle +14 \langle \tilde{x}_{i+2}|\delta _1\rangle + 10 \langle \tilde{x}_{i+2}|\delta _2\rangle - 12 \langle \delta _1|\delta _2\rangle -5\Vert \delta _1\Vert ^2- 6\Vert \delta _2\Vert ^2)\), \(b_5^{i,\delta }=3(8 \langle \tilde{x}_{i}|\delta _1\rangle +4 \langle \tilde{x}_{i}|\delta _2\rangle -8 \langle \tilde{x}_{i+2}|\delta _1\rangle -4 \langle \tilde{x}_{i+2}|\delta _2\rangle +8 \langle \delta _1|\delta _2\rangle +2 \Vert \delta _1\Vert ^2+ 4 \Vert \delta _2\Vert ^2)\) and \(b_6^{i,\delta }=-6 \langle \delta _1|\delta _2\rangle - 3\Vert \delta _2\Vert ^2\;.\)

The following result merging (20) with (28) holds (proof is omitted):

Theorem 1

Assume that for unperturbed data (20) the corresponding energy \(\tilde{\mathcal{E}}_i^0\) has exactly one critical point \(\hat{s}_0\in (0,1)\) with \(\tilde{\mathcal{E}}_i^{0''}(\hat{s}_0)\ne 0\). Then there exists sufficiently small \(\varepsilon _0>0\) such that for all \(\Vert \delta \Vert <\varepsilon _0\) (where \(\delta =(\delta _1,\delta _2)\in \mathbb {R}^{2m}\)) the perturbed data (28) yield the energy \(\tilde{\mathcal{E}}_i^{\delta }\) with exactly one critical point \(\hat{s}_0^{\delta }\in (0,1)\) (a global minimum \(\hat{s}_0^{\delta }\) of \(\tilde{\mathcal{E}}_i^{\delta }\) is sufficiently close to \(\hat{s}_0\)).

Example 1

Consider the planar points \(\tilde{x}_i=(0,-1)\), \(\tilde{x}_{i+1}=(0,0)\) and \(\tilde{x}_{i+2}=(1,1)\) - we set here \(i=0\). Here cumulative chord parameterization yields \(\hat{s}_1^{cc}=1/(\sqrt{2}+1)\approx 0.414214\). Assume that the given velocities \(\tilde{v}_0, \tilde{v}_2\) (upon adjustment by some perturbation \(\delta =(\bar{\delta },\hat{\delta })\in \mathbb {R}^4\)) satisfy both constraints \(\tilde{x}_2-\tilde{x}_0=\tilde{v}_2+\bar{\delta }\) and \(\tilde{v}_2=\tilde{v}_0+\hat{\delta }\). The above interpolation points \(\{\tilde{x}_i,\tilde{x}_{i+1},\tilde{x}_{i+2}\}\) are assumed to be fixed for further testing in this example. Here \(\Vert \tilde{x}_0\Vert ^2=1\), \(\Vert \tilde{x}_2\Vert ^2=2\), \(\langle \tilde{x}_0|\tilde{x}_2\rangle =-1\) and \((\mu ,\lambda )=(-1/\sqrt{2},1/\sqrt{2}) \approx (-0.707107,0.707107)\in \varOmega _{ok}\) (with \(\delta =\mathbf{0}\)). The unperturbed energy with \(\tilde{v}_2=\tilde{v}_0=(1,2)\) (see also (21) or (29) with \(\delta =\mathbf{0}\) and non-perturbed data satisfying (20)) amounts to \(\tilde{\mathcal{E}}_0^{c}(s)=-3(1 + s(5s-4))((s-1)^3s^3)^{-1}\), which yields a global minimum \(\tilde{\mathcal{E}}_0^c(0.433436)=41.6487\) (see Fig. 4). As here \((\mu ,\lambda )=(-1/\sqrt{2},1/\sqrt{2})\in \varOmega _{ok}\), the energy \(\tilde{\mathcal{E}}_0^0\) has exactly one critical point \(\hat{s}_0\in (0,1)\). One can show that \(\tilde{\mathcal{E}}_0^{0''}\ne 0\) at any critical point \(\hat{s}_0\) of \(\tilde{\mathcal{E}}_0^{0}\). Hence the assumptions of Theorem 1 are clearly satisfied.
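The unperturbed minimum reported above can be reproduced in a few lines, e.g. as in the following sketch (assuming SciPy is available; the values 0.433436 and 41.6487 are those quoted in this example):

```python
from scipy.optimize import minimize_scalar

def E0(s):
    # Unperturbed energy of Example 1: tilde{E}_0^c(s) = -3(1 + s(5s - 4)) / ((s - 1)^3 s^3).
    return -3.0 * (1.0 + s * (5.0 * s - 4.0)) / ((s - 1.0) ** 3 * s ** 3)

res = minimize_scalar(E0, bounds=(1e-6, 1.0 - 1e-6), method="bounded")
print(res.x, res.fun)   # expected to land near (0.433436, 41.6487)
```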

We add now the perturbation \(\bar{\delta }=(2,-3)\) and \(\hat{\delta }=(-1,2)\) (for \(\tilde{v}_0=(0,3)\) and \(\tilde{v}_2=(-1,5)\)). The corresponding perturbed energy (see (29)) \( \tilde{\mathcal{E}}^{\delta }_0(s) =(-3 + s (18 + s (-57 + s (34 + s (45 + s (5s-48)))))) ((s-1)^3s^3)^{-1}\) is plotted in Fig. 5 with the optimal value \(\hat{s}_0^{\delta }\approx 0.390407\) (close to \(\hat{s}_1^{cc}\) as perturbation \(\delta \) is sufficiently small - here \((\Vert \bar{\delta }\Vert ,\Vert \hat{\delta }\Vert )=(\sqrt{13},\sqrt{5})\)) and \(\tilde{\mathcal{E}}_\delta ^c(\hat{s}_0^{\delta })=149.082< \tilde{\mathcal{E}}_\delta ^c(\hat{s}_1^{cc})=150.004\) - the convexity of \(\tilde{\mathcal{E}}_0^c\) is visibly preserved by \(\tilde{\mathcal{E}}_{\delta }^c\) (see Fig. 4 and Fig. 5).

Fig. 4. The graph of \(\tilde{\mathcal{E}}_0^{c'}\) for \(\tilde{x}_0=(0,-1)\), \(\tilde{x}_2=(1,1)\), \(\tilde{v}_0=\tilde{v}_2=(1,2)\) (a) over (0, 1), (b) close to unique root \(\hat{s}_0\approx 0.433\ne \hat{s}_1^{cc}=1/(\sqrt{2}+1)\approx 0.414\), (c) the graph of \(\tilde{\mathcal{E}}_0^{c}\).

Fig. 5. The graph of \(\tilde{\mathcal{E}}_{\delta }^{c}\) for \(\tilde{x}_0=(0,-1)\), \(\tilde{x}_2=(1,1)\), \(\tilde{v}_0=(0,3)\), \(\tilde{v}_2=(-1,5)\), \(\bar{\delta }=(2,-3)\) and \(\hat{\delta }=(-1,2)\) (a) over (0, 1), (b) close to its unique min. \(\hat{s}_0^{\delta }\approx 0.390\ne \hat{s}_1^{cc}=1/(\sqrt{2}+1)\approx 0.414\).

For a large perturbation \(\bar{\delta }=(16,7)\) and \(\hat{\delta }=(-10,5)\) (for \(\tilde{v}_0=(-5,-10)\) and \(\tilde{v}_2=(-15,-5)\)) the corresponding perturbed energy (see (29) and use Simplify in Mathematica) \(\tilde{\mathcal{E}}^{\delta }_0(s)=(-3+ s(-60+ s (-189+ s(-74 + 5s (5s-21)(5s-9)))))((s-1)^3s^3)^{-1}\) is plotted in Fig. 6 a) with the unique optimal value \(\hat{s}_0^{\delta }\approx 0.432069\) for which \(\tilde{\mathcal{E}}_\delta ^c(\hat{s}_0^{\delta })=3229.81 <\tilde{\mathcal{E}}_\delta ^c(\hat{s}_1^{cc})=3236.5\) - the convexity of \(\tilde{\mathcal{E}}_0^c\) is here visibly also preserved by \(\tilde{\mathcal{E}}_{\delta }^c\) (even for such a quite large perturbation \(\delta \) - here \((\Vert \bar{\delta }\Vert ,\Vert \hat{\delta }\Vert )=(\sqrt{305},\sqrt{125})\)). Note also that though the cumulative chord \(\hat{s}_1^{cc}\) is now farther away from the global minimum \(\hat{s}_0^{\delta }\), it is still in its potential basin.

We add now very large \(\bar{\delta }=(-25,-17)\) and \(\hat{\delta }=(-6,20)\) (for \(\tilde{v}_0=(32,-1)\) and \(\tilde{v}_2=(26,19)\)). The perturbed energy (see (29)) \(\tilde{\mathcal{E}}^{\delta }_0(s)= (-3+ s(-6+ s(-3141+ 2s(3145 + s(-1221- 570s + 218s^2))))) ((s-1)^3s^3)^{-1}\) is plotted in Fig. 6 b) with the optimal value \(\hat{s}_0^{\delta }\approx 0.948503\) for which \(\tilde{\mathcal{E}}_\delta ^c(\hat{s}_0^{\delta })=11146< \tilde{\mathcal{E}}_\delta ^c(\hat{s}_1^{cc})=12667.7\) and another local minimum at \(\hat{s}_0^1\approx 0.563968\) for which \(\tilde{\mathcal{E}}_\delta ^c(\hat{s}_0^1)=11781\). There is also a local maximum at \(\hat{s}_{max}\approx 0.879929>\hat{s}_1^{cc}\approx 0.414214\) - the convexity of \(\tilde{\mathcal{E}}_0^c\) is here clearly not preserved by \(\tilde{\mathcal{E}}_{\delta }^c\) (\(\delta \) is here too large for Theorem 1 to hold - here \((\Vert \bar{\delta }\Vert ,\Vert \hat{\delta }\Vert )=(\sqrt{914},\sqrt{436})\)) - see also Fig. 4 and Fig. 6. Again the cumulative chord \(\hat{s}_1^{cc}\approx 0.414214\) is here in the basin of \(\hat{s}_0^1\) (not of \(\hat{s}_0^{\delta }\)) - see Fig. 6 b).    \(\square \)

Fig. 6. The graph of \(\tilde{\mathcal{E}}_{\delta }^{c}\) for \(\tilde{x}_0=(0,-1)\) and \(\tilde{x}_2=(1,1)\) for (a) \(\tilde{v}_0=(-5,-10)\), \(\tilde{v}_2=(-15,-5)\) and a big \(\bar{\delta }=(16,7)\) and \(\hat{\delta }=(-10,5)\) yielding global min. at \(\hat{s}_0^{\delta }\approx 0.432\ne \hat{s}_1^{cc}\approx 0.414\), (b) \(\tilde{v}_0=(32,-1)\), \(\tilde{v}_2=(26,19)\) and a very big \(\bar{\delta }=(-25,-17)\) and \(\hat{\delta }=(-6,20)\) with global min. at \(\hat{s}_0^{\delta }\approx 0.949\) and a local min. at \(\hat{s}_0^1\approx 0.564\ne \hat{s}_1^{cc}\approx 0.414\).

Example 1 suggests that \(\delta \) in Theorem 1 can in fact be quite substantial. Thus the local character of Theorem 1 appears in practice to be closer to a semi-global one.

6 Conclusions

We study the problem of finding optimal knots to fit reduced data. The optimization task (1) is reformulated into (5) (and (18)) so as to minimize a highly non-linear multivariable function \(J_0\) depending on the knots \({\mathcal T}_{int}\). Leap-Frog is a feasible numerical scheme to handle (5). It iteratively minimizes the single-variable functions from (6). The generic case of Leap-Frog is addressed to establish sufficient conditions for unimodality of (18). First, its special case (20) is studied. Next a perturbed analogue (28) of the latter is addressed. The unimodality of (21) is shown to be preserved by sufficiently small perturbations (28) (and, as illustrated in Example 1, often also by quite large ones). The performance of Leap-Frog in minimizing (5) against Newton’s and Secant Methods is discussed in [2, 3] and [6]. Other contexts and applications of Leap-Frog can be found in [9, 10] or [11]. For more work on fitting \(\mathcal{M}_n\) (sparse or dense) see [4, 5] or [7].