
1 Introduction

Assume that \(n+1\) interpolation points \(\mathcal{M}_n=\{x_i\}_{i=0}^n\) in arbitrary Euclidean space \(\mathbb {E}^m\) are given with the associated knots \(\mathcal{T}=\{t_i\}_{i=0}^n\) unavailable. The class of fitting curves \(\mathcal{I}\) analyzed here consists of piecewise \(C^{2}\) functions \(\gamma :[0,T]\rightarrow \mathbb {E}^m\) satisfying \(\gamma (t_i)=x_i\) and \(\ddot{\gamma }(t_0)=\ddot{\gamma }(T)={\mathbf {0}}\). It is also assumed that \(\gamma \in \mathcal{I}\) is at least of class \(C^1\) over \(\mathcal{T}_{int}=\{t_i\}_{i=1}^{n-1}\) and extends to \(C^2([t_i,t_{i+1}])\). The unknown internal knots \(\mathcal{T}_{int}\) are called admissible if \(t_i<t_{i+1}\), for \(i=0,1,\dots ,n-1\) (here \(t_0=0\) and \(t_n=T\)). Different choices of \(\mathcal{T}_{int}\) permit one to control and model the trajectory of \(\gamma \). A possible criterion (measuring the “average acceleration” of \(\gamma \)) for a given choice of fixed knots \(\mathcal{T}\) is to minimize

$$\begin{aligned} \mathcal{J}_T(\gamma )=\sum _{i=0}^{n-1}\int _{t_i}^{t_{i+1}} \Vert {\ddot{\gamma }}(t)\Vert ^2dt\;, \end{aligned}$$
(1)

(over \(\mathcal{I}\)) which yields a unique optimal curve \(\gamma _{opt}\in \mathcal{I}\), a natural cubic spline \(\gamma _{NS}\) - see [1, 9]. Thus, varying \(\mathcal{T}_{int}\) reformulates (1) into searching for an optimal natural spline \(\gamma _{NS}\) with \(\mathcal{T}_{int}\) treated as free variables. As shown in [3] the latter reduces to optimizing a highly non-linear function \(J_0\) in the \(n-1\) variables \(\mathcal{T}_{int}\) subject to \(t_i<t_{i+1}\). Due to the complicated nature of \(J_0\) most numerical schemes used to optimize \(J_0\) face numerical difficulties (see e.g. [3]). Equally, studying the character of the critical points of \(J_0\) forms a non-trivial task. A possible remedy is to apply the Leap-Frog Algorithm (see [2, 3]), designed to optimize \(J_0\) by merging an iterative sequence of overlapping univariate optimizations of \(J_0^{(k,i)}\) preserving \(t_i<t_{i+1}\). Recent work [5] deals with the generic case of Leap-Frog, which iteratively recomputes the knots \(\{t_2,t_3,\dots ,t_{n-2}\}\).

This paper discusses a non-generic case of Leap-Frog covering the recursive readjustment of the knots \(t_1\) and \(t_{n-1}\). The latter establishes sufficient conditions for the unimodality of \(J_0^{(k,1)}\) and \(J_0^{(k,n-1)}\). First a special data case (18) is considered (see Sect. 4), which is subsequently extended to its perturbed version (22), preserving in practice the unimodality (once it occurs for (18)) for substantially large perturbations (see Proposition 1 and Example 1 in Sect. 5).

The numerical performance of Leap-Frog and comparison tests with other optimization schemes are presented in [2, 3, 7]. Some applications of the Leap-Frog optimization scheme in modelling and simulation are covered in [10,11,12].

2 Preliminaries

A cubic spline interpolant (see [1]) \(\gamma ^{C_i}_\mathcal{T}=\gamma ^{C}_\mathcal{T}|_{[t_i,t_{i+1}]}\), for given admissible knots \(\mathcal{T}=(t_0,t_1,\dots ,t_{n-1},t_n)\), defined as \(\gamma _\mathcal{T}^{C_i}(t)=c_{1,i}+c_{2,i}(t-t_i)+c_{3,i}(t-t_i)^2+c_{4,i}(t-t_i)^3\) (for \(t\in [t_i,t_{i+1}]\)) satisfies (for \(i=0,1,2,\dots ,n-1\); \(c_{j,i}\in \mathbb {R}^m\), where \(j=1,2,3,4\)) \(\gamma _\mathcal{T}^{C_i}(t_{i+k})=x_{i+k}\) and \(\dot{\gamma }_\mathcal{T}^{C_i}(t_{i+k})=v_{i+k}\), for \(k=0,1\), with the velocities \(v_0,v_1,\dots ,v_{n-1},v_n\in \mathbb {R}^m\) treated as temporarily free parameters (if unknown). The coefficients \(c_{j,i}\) read (with \(\varDelta t_i=t_{i+1}-t_i\)):

$$\begin{aligned} c_{1,i}= & {} x_i, \qquad \qquad \qquad \qquad \qquad \,\, c_{2,i}\ \ =\ \ v_i \;,\nonumber \\ c_{4,i}= & {} {v_i+v_{i+1}-2{x_{i+1}-x_i\over \varDelta t_i}\over (\varDelta t_i)^2}\;,\quad \quad c_{3,i}\ \ =\ \ {{(x_{i+1}-x_i)\over \varDelta t_i}-v_i\over \varDelta t_i}-c_{4,i}\varDelta t_i\;. \end{aligned}$$
(2)

The latter follows from Newton’s divided differences formula (see e.g. [1, Chap. 1]). Adding the \(n-1\) constraints \(\ddot{\gamma }_\mathcal{T}^{C_{i-1}}(t_{i})=\ddot{\gamma }_\mathcal{T}^{C_i}(t_{i})\) enforcing continuity of \(\ddot{\gamma }_\mathcal{T}^C\) at the internal knots \(t_1,\dots ,t_{n-1}\) (with \(i=1,2,\dots ,n-1\)) leads by (2) (for \(\gamma _\mathcal{T}^{C_i}\)) to the m tridiagonal (strictly diagonally dominant) linear systems of \(n-1\) equations in \(n+1\) vector unknowns representing the velocities at \(\mathcal{M}_n\), i.e. \(v_0, v_1,v_2,\dots ,v_{n-1},v_n\in \mathbb {R}^m\):

$$\begin{aligned} v_{i-1}\varDelta t_i+ & {} 2v_i(\varDelta t_{i-1}+\varDelta t_i)+v_{i+1}\varDelta t_{i-1}=b_i\;,\nonumber \\ b_i= & {} 3(\varDelta t_i{x_i-x_{i-1}\over \varDelta t_{i-1}} +\varDelta t_{i-1}{x_{i+1}-x_i\over \varDelta t_i})\;. \end{aligned}$$
(3)

(i) Both \(v_0\) and \(v_{n}\) (if unknown) can be e.g. calculated from \(a_0=\ddot{\gamma }_\mathcal{T}^{C}(0)={\mathbf {0}}\) and \(a_n=\ddot{\gamma }_\mathcal{T}^{C}(T)={\mathbf {0}}\) combined with (2) (this yields a natural cubic spline interpolant \(\gamma _\mathcal{T}^{NS}\) - a special \(\gamma _\mathcal{T}^{C}\)), which supplements (3) with the two missing vector linear equations:

$$\begin{aligned} 2v_0+v_1=3{x_1-x_0\over \varDelta t_0}\;,\quad v_{n-1}+2v_n=3{x_n-x_{n-1}\over \varDelta t_{n-1}}\;. \end{aligned}$$
(4)

The resulting m linear systems, each of size \((n+1)\times (n+1)\) (based on (3) and (4)), being strictly row diagonally dominant, yield a unique solution \(v_0,v_1,\dots ,v_{n-1},v_n\) (computed e.g. by Gaussian elimination without pivoting - see [1, Chap. 4]), which when fed into (2) determines explicitly a natural cubic spline \(\gamma _\mathcal{T}^{NS}\) (with fixed \(\mathcal{T}\)). A similar approach works for arbitrary \(a_0\) and \(a_n\).
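The construction in (i) can be sketched numerically. The Python code below is an illustration (not from the paper; scalar data, i.e. m = 1): it solves (3) with the natural end conditions (4) by the Thomas algorithm (legitimate here, since the system is strictly diagonally dominant) and then recovers the piecewise coefficients (2).

```python
# Illustration only: natural-spline velocities from (3)-(4), coefficients from (2).
# Scalar data (m = 1); for points in E^m apply the same solve componentwise.

def natural_spline_velocities(x, t):
    """Solve the tridiagonal system (3) with the natural end conditions (4)."""
    n = len(x) - 1
    dt = [t[i + 1] - t[i] for i in range(n)]
    # Row i: lower[i]*v[i-1] + diag[i]*v[i] + upper[i]*v[i+1] = rhs[i]
    diag = [2.0] + [2.0 * (dt[i - 1] + dt[i]) for i in range(1, n)] + [2.0]
    lower = [0.0] + [dt[i] for i in range(1, n)] + [1.0]
    upper = [1.0] + [dt[i - 1] for i in range(1, n)] + [0.0]
    rhs = ([3.0 * (x[1] - x[0]) / dt[0]]
           + [3.0 * (dt[i] * (x[i] - x[i - 1]) / dt[i - 1]
                     + dt[i - 1] * (x[i + 1] - x[i]) / dt[i])
              for i in range(1, n)]
           + [3.0 * (x[n] - x[n - 1]) / dt[n - 1]])
    # Thomas algorithm (no pivoting needed: strict diagonal dominance).
    for i in range(1, n + 1):
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    v = [0.0] * (n + 1)
    v[n] = rhs[n] / diag[n]
    for i in range(n - 1, -1, -1):
        v[i] = (rhs[i] - upper[i] * v[i + 1]) / diag[i]
    return v

def spline_coefficients(x, t, v):
    """Coefficients (c_{1,i}, c_{2,i}, c_{3,i}, c_{4,i}) of (2) on [t_i, t_{i+1}]."""
    coeffs = []
    for i in range(len(x) - 1):
        dti = t[i + 1] - t[i]
        c1, c2 = x[i], v[i]
        c4 = (v[i] + v[i + 1] - 2.0 * (x[i + 1] - x[i]) / dti) / dti ** 2
        c3 = ((x[i + 1] - x[i]) / dti - v[i]) / dti - c4 * dti
        coeffs.append((c1, c2, c3, c4))
    return coeffs
```

For collinear data the spline degenerates to a straight line, so all \(c_{3,i}\) and \(c_{4,i}\) vanish and the natural end conditions hold trivially, which gives a convenient sanity check.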

(ii) If both \(v_0\) and \(v_{n}\) are given then the so-called complete spline \(\gamma ^{CS}_\mathcal{T}\) can be found with \(v_1,\dots ,v_{n-1}\) determined solely by (3).

(iii) If one of \(v_{0}\) or \(v_{n}\) is unknown, this can be compensated by setting the respective terminal acceleration e.g. to \({\mathbf {0}}\). The above scheme relies on solving (3) with one equation from (4). Such splines are denoted here by \(\gamma _\mathcal{T}^{v_n}\) or \(\gamma _\mathcal{T}^{v_0}\). Two non-generic cases of Leap-Frog optimizations deal with the latter.

By (1) \(\mathcal{J}_{T}(\gamma _\mathcal{T}^{NS})= 4\sum _{i=0}^{n-1}(\Vert c_{3,i}\Vert ^2\varDelta t_i+3\Vert c_{4,i}\Vert ^2(\varDelta t_i)^3 +3\langle c_{3,i}|c_{4,i}\rangle (\varDelta t_i)^2)\;,\) which ultimately reformulates into (see [2]):

$$\begin{aligned} \mathcal{J}_{T}(\gamma _\mathcal{T}^{NS})= & {} 4\sum _{i=0}^{n-1}\Big ({-1\over (\varDelta t_i)^3}\big (-3\Vert x_{i+1}-x_i\Vert ^2 +3\langle v_i+v_{i+1}|x_{i+1}-x_i\rangle \varDelta t_i\nonumber \\&\quad \quad \quad \quad \quad \quad -(\Vert v_i\Vert ^2+\Vert v_{i+1}\Vert ^2 +\langle v_i|v_{i+1}\rangle )(\varDelta t_i)^2\big )\Big )\;. \end{aligned}$$
(5)

As mentioned before, for fixed knots \(\mathcal{T}\) the natural spline \(\gamma _\mathcal{T}^{NS}\) minimizes (1) (see [1]). Thus upon relaxing the internal knots \(\mathcal{T}_{int}\) the original infinite-dimensional optimization (1) reduces to finding the corresponding optimal knots \((t_1^{opt},t_2^{opt},\dots ,t_{n-1}^{opt})\) for (5) (viewed from now on as a multivariate function \(J_0(t_1,t_2,\dots ,t_{n-1})\)) subject to \(t_0=0<t_1^{opt}<t_2^{opt}<\dots<t_{n-1}^{opt}<t_n=T\). The resulting non-linear task of minimizing \(J_0(\mathcal{T}_{int})\) (here \(t_0=0\) and \(t_n=T\)) is difficult both for the examination of critical points and for numerical computations (see e.g. [2, 3, 7]). One of the computationally feasible schemes handling (5) is the Leap-Frog Algorithm. To optimize \(J_0\) this scheme relies on a sequence of single-variable iterative optimizations: in the k-th iteration, with \(t_i^k\) treated as a free variable \(s_i\), one minimizes \(J_0^{(k,i)}(s_i)=\int _{t_{i-1}^{k}}^{t_{i+1}^{k-1}}\Vert \ddot{\gamma }^{CS}_{k,i}(t)\Vert ^2dt\), over \(I_i^{k-1}=[t_{i-1}^{k},t_{i+1}^{k-1}]\). The complete spline \(\gamma ^{CS}_{k,i}:I_i^{k-1}\rightarrow \mathbb {E}^m\) is determined by \(\{t_{i-1}^{k},s_i,t_{i+1}^{k-1}\}\), the velocities \(\{v_{i-1}^{k}, v_{i+1}^{k-1}\}\) and the interpolation points \(\{x_{i-1},x_i,x_{i+1}\}\). Once \(s_i^{opt}\) is found one updates \(t_i^{k-1}\) with \(t_i^k=s_i^{opt}\) and \(v_i^{k-1}\) with \(v_i^k=\dot{\gamma }^{CS}_{k,i}(s_i^{opt})\). Next we pass to the shifted overlapping sub-interval \(I_{i+1}^k=[t_{i}^{k},t_{i+2}^{k-1}]\) and repeat the previous step to update \(t_{i+1}^{k-1}\). Note that both cases \([0,t_2^{k-1}]\) and \([t_{n-2}^{k-1},T]\) rely on the splines discussed in (iii), where the vanishing acceleration replaces one of the velocities \(v_0^{k-1}\) or \(v_n^{k-1}\).
Once \(t_{n-1}^{k-1}\) is updated over the last sub-interval \(I_{n-1}^{k-1}=[t_{n-2}^{k},T]\) the k-th iteration terminates and the next local optimization over \(I_1^k=[0,t_2^{k}]\) begins the \((k+1)\)-st iteration of Leap-Frog. The initialization of \({\mathcal T}_{int}\) for Leap-Frog can follow the normalized cumulative chord parameterization (see e.g. [9]) which sets \(t_0^0=0,t_1^0,\dots ,t_{n-1}^0,t_n^0=T\) according to \(t_0^0=0\) and \(t_{i+1}^0=\Vert x_{i+1}-x_i\Vert \frac{T}{\hat{T}}+t_i^0\), for \(i=0,1,\dots ,n-1\), where \(\hat{T}=\sum _{i=0}^{n-1}\Vert x_{i+1}-x_i\Vert \).
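The cumulative chord initialization above can be coded directly. The Python helper below is a hypothetical illustration (not from the paper) for scalar data; for points in \(\mathbb {E}^m\) the absolute value is replaced by the Euclidean norm.

```python
# Illustration: normalized cumulative chord knots for Leap-Frog initialization.

def cumulative_chord_knots(points, T=1.0):
    """Return knots t_0 = 0 < t_1 < ... < t_n = T proportional to chord lengths."""
    chords = [abs(points[i + 1] - points[i]) for i in range(len(points) - 1)]
    T_hat = sum(chords)                       # total chord length \hat{T}
    t, knots = 0.0, [0.0]
    for c in chords:
        t += c * T / T_hat                    # t_{i+1} = ||x_{i+1}-x_i|| T/\hat{T} + t_i
        knots.append(t)
    knots[-1] = T                             # guard against round-off in the last knot
    return knots
```

The resulting knots are admissible as long as consecutive interpolation points are distinct.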

3 Non-generic Case: First Acceleration and Last Velocity

Let, for \(x_0,x_1,x_2\in \mathcal{M}_n\) (with \(n\ge 3\)), the corresponding knots (see [1]) \(t_0\) and \(t_{2}\) be somehow given together with the respective first acceleration and last velocity \(a_{0},v_{2}\in \mathbb {R}^m\) (without loss of generality \(t_0=0\)). We construct now a \(C^2\) piecewise cubic \(\gamma ^c_0:[t_0,t_{2}]\rightarrow \mathbb {E}^m\) depending on varying \(t_{1}\in (t_0,t_2)\) (i.e. a cubic on each of \([t_0,t_{1}]\) and \([t_{1},t_{2}]\)) satisfying \(\gamma ^c_0(t_{j})=x_{j}\) (for \(j=0,1,2\)), \(\ddot{\gamma }^c_0(t_{0})=a_{0}\) and \(\dot{\gamma }^c_0(t_{2})=v_{2}\). With \(\phi _0:[t_0,t_{2}]\rightarrow [0,1]\) (where \(\phi _0(t)=(t-t_0)(t_{2}-t_0)^{-1}\)) the re-parameterized curve \(\tilde{\gamma }^c_0=\gamma ^c_0\circ \phi _0^{-1}:[0,1]\rightarrow \mathbb {E}^m\) satisfies, for \(0<s_{1}<1\) (where \(s_{1}=\phi _0(t_{1})\)): \(\tilde{\gamma }^c_0(0)=x_{0}\), \(\tilde{\gamma }^c_0(s_{1})=x_{1}\) and \(\tilde{\gamma }^c_0(1)=x_{2}\), with the adjusted \({\tilde{a}}_0,{\tilde{v}}_{2}\in \mathbb {R}^m\) equal to:

$$\begin{aligned} {\tilde{a}}_0=\tilde{\gamma }^{c''}_0(0)=(t_{2}-t_0)^2a_{0}\;,\quad {\tilde{v}}_{2}=\tilde{\gamma }^{c'}_0(1)=(t_{2}-t_0)v_{2}\;. \end{aligned}$$
(6)

An easy inspection shows (for each \(s_{1}=\phi _0(t_{1})\)):

$$\begin{aligned} \tilde{\mathcal{E}}_0(s_{1})=\int _0^1\Vert {\tilde{\gamma }}_0^{c''}(s)\Vert ^2ds= & {} (t_{2}-t_0)^3 \int _{t_0}^{t_{2}}\Vert \ddot{\gamma }^c_0(t)\Vert ^2dt=(t_{2}-t_0)^3\mathcal{E}_0(t_{1})\;. \end{aligned}$$
(7)

Thus the critical points \(s_{1}^{crit}\) of \(\tilde{\mathcal{E}}_0\) are mapped onto the corresponding critical points \(t_{1}^{crit}=\phi _0^{-1}(s_{1}^{crit})=s_{1}^{crit}(t_{2}-t_0)+t_0\) of \(\mathcal{E}_0\) (and vice versa). Hence the optimal points of \(\tilde{\mathcal{E}}_0\) and \(\mathcal{E}_0\) satisfy \(t_{1}^{opt}=\phi _0^{-1}(s_{1}^{opt})\). Thus by (7), to decrease \(\mathcal{E}_0\) it suffices to decrease \(\tilde{\mathcal{E}}_0\). To find the expression for \(\tilde{\mathcal{E}}_0\) we determine \(\tilde{\gamma }^c_0\) (depending on \(s_1\))

$$\begin{aligned} \tilde{\gamma }^c_0(s)~=~\left\{ \begin{array}{ll} \tilde{\gamma }^{lc}_0(s)\;, &{} \text{ for } s\in [0,s_{1}]\\ \tilde{\gamma }^{rc}_0(s)\;, &{} \text{ for } s\in [s_{1},1] \end{array}\right. \end{aligned}$$
(8)

with \(c_{0j},d_{0j}\in \mathbb {R}^m\) and \(\tilde{\gamma }^{lc}_0(s)=c_{00}+c_{01}(s-s_{1})+c_{02}(s-s_{1})^2 +c_{03}(s-s_{1})^3\) and \(\tilde{\gamma }^{rc}_0(s)=d_{00}+d_{01}(s-s_{1})+d_{02}(s-s_{1})^2+d_{03}(s-s_{1})^3,\) the following holds:

$$\begin{aligned} \tilde{\gamma }^{lc}_0(0)=x_0\;, \quad \tilde{\gamma }^{lc}_0(s_{1})=\tilde{\gamma }^{rc}_0(s_{1})=x_{1}\;, \quad \tilde{\gamma }^{rc}_0(1)=x_{2}\;, \end{aligned}$$
(9)
$$\begin{aligned} \tilde{\gamma }^{lc''}_0(0)={\tilde{a}}_0\;, \quad \quad \tilde{\gamma }^{rc'}_0(1)={\tilde{v}}_{2}\;, \end{aligned}$$
(10)

together with the smoothness (\(C^1\) and \(C^2\)) constraints at \(s=s_{1}\) i.e.:

$$\begin{aligned} \tilde{\gamma }^{lc'}_0(s_{1})=\tilde{\gamma }^{rc'}_0(s_{1})\;, \quad \quad \tilde{\gamma }^{lc''}_0(s_{1})=\tilde{\gamma }^{rc''}_0(s_{1})\;. \end{aligned}$$
(11)

We may assume (upon shifting the origin of the coordinate system) that \(\tilde{x}_0=x_0-x_1\), \(\tilde{x}_1=\mathbf {0}\), \(\tilde{x}_2=x_2-x_1\) and therefore by (9) we have

$$\begin{aligned} \tilde{\gamma }^c_0(0)={\tilde{x}}_{0}\;, \quad \tilde{\gamma }^c_0(s_{1})={\mathbf {0}}\;, \quad \tilde{\gamma }^c_0(1)={\tilde{x}}_{2}\;. \end{aligned}$$
(12)

Next, combining (11) with the vanishing of \(\tilde{x}_1\) gives

$$\begin{aligned} \tilde{\gamma }^{lc}_{0}(s)= & {} c_{01}(s-s_{1})+c_{02}(s-s_{1})^2 +c_{03}(s-s_{1})^3\;, \nonumber \\ \tilde{\gamma }^{rc}_{0}(s)= & {} c_{01} (s-s_{1})+c_{02}(s-s_{1})^2+d_{03}(s-s_{1})^3\;, \end{aligned}$$
(13)

with \(c_{00}=d_{00}={\mathbf {0}}\). The unknown vectors \(c_{01},c_{02},c_{03},d_{03}\) are determined by solving the system of four linear vector equations obtained from (10) and (12):

$$\begin{aligned} \tilde{x}_{0}= & {} -c_{01}s_{1}+c_{02}s_{1}^2-c_{03}s_{1}^3\;,\nonumber \\ \tilde{x}_{2}= & {} c_{01}(1-s_{1})+c_{02}(1-s_{1})^2+d_{03}(1-s_{1})^3\;, \nonumber \\ \tilde{a}_0= & {} 2c_{02}-6c_{03}s_{1}\nonumber \;,\\ \tilde{v}_{2}= & {} c_{01}+2c_{02}(1-s_{1})+3d_{03}(1-s_{1})^2\;. \end{aligned}$$
(14)

An inspection reveals that:

$$\begin{aligned} c_{01}= & {} {s_1^2\tilde{a}_0+2s_1^3\tilde{a}_0-s_1^4\tilde{a}_0 +4s_1^2\tilde{v}_2 -4s_1^3\tilde{v}_2+6 \tilde{x}_0 - 12 s_1 \tilde{x}_0 +6s_1^2 \tilde{x}_0-12s_1^2\tilde{x}_2 \over 2s_1(s_1^2+ 2s_1-3)}\;,\nonumber \\ \nonumber \\ c_{02}= & {} -{-s_1^2\tilde{a}_0 + s_1^3\tilde{a}_0-3s_1\tilde{v}_2 + 3s_1^2\tilde{v}_2 + 6 \tilde{x}_0 - 6 s_1 \tilde{x}_0 + 9 s_1 \tilde{x}_2 \over s_1(s_1-1)(s_1+3)}\;,\nonumber \\ \nonumber \\ c_{03}= & {} -{-s_1\tilde{a}_0 + s_1^3\tilde{a}_0 - 2s_1\tilde{v}_2 +2s_1^2\tilde{v}_2 + 4\tilde{x}_0 - 4s_1\tilde{x}_0+6 s_1 \tilde{x}_2 \over 2s_1^2(s_1^2+ 2s_1 -3)}\;,\nonumber \\ \nonumber \\ d_{03}= & {} -{s_1^2\tilde{a}_0-2s_1^3\tilde{a}_0+ s_1^4\tilde{a}_0+ 6s_1\tilde{v}_2 -8s_1^2\tilde{v}_2 + 2s_1^3\tilde{v}_2 - 6\tilde{x}_0 + 12s_1\tilde{x}_0 -6s_1^2\tilde{x}_0 \over 2s_1(s_1+3)(s_1-1)^3}\nonumber \\ \nonumber \\&-{- 12s_1\tilde{x}_2 + 8s_1^2\tilde{x}_2\over 2s_1(s_1+3)(s_1-1)^3} \end{aligned}$$
(15)

satisfy (14) (as functions in \(s_1\)). As \(\Vert \tilde{\gamma }^{lc''}_0(s)\Vert ^2=4\Vert c_{02}\Vert ^2 +24\langle c_{02}| c_{03}\rangle (s-s_{1})+36\Vert c_{03}\Vert ^2(s-s_{1})^2\), \(\Vert \tilde{\gamma }^{rc''}_0(s)\Vert ^2=4\Vert c_{02}\Vert ^2 +24\langle c_{02}| d_{03}\rangle (s-s_{1})+36\Vert d_{03}\Vert ^2(s-s_{1})^2\):

$$\begin{aligned} \tilde{\mathcal{E}}_0(s_{1})=\int _{0}^{s_{1}}\Vert \tilde{\gamma }^{lc''}_0(s)\Vert ^2ds +\int _{s_{1}}^{1}\Vert \tilde{\gamma }^{rc''}_0(s)\Vert ^2ds=I_1+I_2, \end{aligned}$$

where \(I_1=4(\Vert c_{02}\Vert ^2s_{1}-3\langle c_{02}|c_{03}\rangle s_{1}^2+3\Vert c_{03}\Vert ^2s_{1}^3)\) and \(I_2=4(\Vert c_{02}\Vert ^2(1-s_{1})+3\langle c_{02}|d_{03}\rangle (1-s_{1})^2+3\Vert d_{03}\Vert ^2(1-s_{1})^3)\). The latter combined with \(a_0={\mathbf {0}}\) (and \(\tilde{a}_0={\mathbf {0}}\) - see (6)), (13) and (15) yields (with Mathematica Integrate and FullSimplify):

$$\begin{aligned} \tilde{\mathcal{E}}_0(s_{1})= & {} {-1\over (s_1+3)s_1^2(s_1-1)^3} (12(-\Vert \tilde{x}_0\Vert ^2(s_1-1)^3 + s_1(\Vert \tilde{x}_2\Vert ^2(3-2s_1)s_1\nonumber \\& +\,\Vert \tilde{v}_2\Vert ^2(s_1-1)^2s_1 +(s_1-1)^3 \langle \tilde{v}_2|\tilde{x}_0\rangle -(s_1-3)(s_1-1)s_1 \langle \tilde{v}_2|\tilde{x}_2 \rangle \nonumber \\& +\, 3(s_1-1)^2\langle \tilde{x}_0|\tilde{x}_2\rangle )))\;. \end{aligned}$$
(16)
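For scalar data (m = 1) the closed form (16) with \(\tilde{a}_0={\mathbf {0}}\) is straightforward to evaluate numerically. The Python sketch below is an illustration (not the paper's Mathematica code); it exhibits the blow-up of the energy near both endpoints and its vanishing for sample co-linearly ordered data with \(k=-1\) (i.e. \(\tilde{x}_0=-1\), \(\tilde{x}_2=1\), \(\tilde{v}_2=2\)).

```python
# Illustration: the energy (16) for scalar data with tilde_a0 = 0.

def tilde_E0(s, x0, x2, v2):
    """Evaluate (16): inner products reduce to ordinary products for m = 1."""
    inner = (-(x0 * x0) * (s - 1) ** 3
             + s * ((x2 * x2) * (3 - 2 * s) * s
                    + (v2 * v2) * (s - 1) ** 2 * s
                    + (s - 1) ** 3 * (v2 * x0)
                    - (s - 3) * (s - 1) * s * (v2 * x2)
                    + 3 * (s - 1) ** 2 * (x0 * x2)))
    return -12.0 * inner / ((s + 3) * s * s * (s - 1) ** 3)
```

For the sampled collinear data the energy vanishes at \(s_1=\hat{s}_1^{cc}=0.5\), stays positive elsewhere and tends to \(+\infty \) as \(s_1\rightarrow 0^+\), in line with the discussion above.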

Note also that \(\lim _{s_1\rightarrow 0^+}\tilde{\mathcal{E}}_0(s_1) =(12\Vert \tilde{x}_0\Vert ^2/0^+)=+\infty \) and \(\lim _{s_1\rightarrow 1^-}\tilde{\mathcal{E}}_0(s_1) =(12\Vert \tilde{x}_2\Vert ^2/0^+)=+\infty \). Hence, as \(\tilde{\mathcal{E}}_0\ge 0\) and \(\tilde{\mathcal{E}}_0\in C^1\), the global minimum \(s_1^{opt}\in (0,1)\) exists (at one of the critical points of \(\tilde{\mathcal{E}}_0\)). Note that \(x_{i}\ne x_{i+1}\) yields \(\Vert \tilde{x}_0\Vert \ne 0\) and \(\Vert \tilde{x}_2\Vert \ne 0\). An inspection shows that (or differentiate \(\tilde{\mathcal{E}}_0\) symbolically in Mathematica and use FullSimplify):

$$\begin{aligned} \tilde{\mathcal{E}}_0'(s_1)= & {} {1\over (s_1-1)^4s_1^3(s_1+3)^2} (12(2\Vert \tilde{v}_2\Vert ^2(s_1-1)^2s_1^3(s_1+1) -6\Vert \tilde{x}_2\Vert ^2 s_1^3(s_1^2-3)\nonumber \\& -\,3\Vert \tilde{x}_0\Vert ^2(s_1-1)^4(s_1+2)+(s_1-1)^4s_1(2s_1+3)\langle \tilde{v}_2|\tilde{x}_0\rangle \nonumber \\& -\,2(s_1-1)s_1^3(-6+(-3+ s_1)s_1)\langle \tilde{v}_2|\tilde{x}_2\rangle \nonumber \\& +\,3(s_1-1)^2s_1(-3 + s_1(4+3s_1))\langle \tilde{x}_0|\tilde{x}_2\rangle ))\;. \end{aligned}$$
(17)

The numerator of (17) is a polynomial of degree 6, i.e. \(N_0(s_{1})=b_0^0+b_1^0s_{1}+b_2^0s_{1}^2+b_3^0s_{1}^3 +b_4^0s_{1}^4+b_5^0s_{1}^5+b_6^0s_{1}^6\), with the coefficients \(b_j^0\in \mathbb {R}\) (for \(j=0,1,\dots ,6\)) equal to (use e.g. the Mathematica functions Factor and CoefficientList): \({b_0^0\over 12}=-6\Vert \tilde{x}_0\Vert ^2\), \({b_1^0\over 12}=21\Vert \tilde{x}_0\Vert ^2 + 3\langle \tilde{v}_2|\tilde{x}_0\rangle - 9\langle \tilde{x}_0|\tilde{x}_2\rangle \), \({b_2^0\over 12}=-24 \Vert \tilde{x}_0\Vert ^2 - 10 \langle \tilde{v}_2|\tilde{x}_0\rangle + 30 \langle \tilde{x}_0|\tilde{x}_2\rangle \), \({b_3^0\over 12}= 2 \Vert \tilde{v}_2\Vert ^2 + 6 \Vert \tilde{x}_0\Vert ^2 + 18 \Vert \tilde{x}_2\Vert ^2 + 10 \langle \tilde{v}_2|\tilde{x}_0\rangle - 12 \langle \tilde{v}_2|\tilde{x}_2\rangle - 24 \langle \tilde{x}_0|\tilde{x}_2\rangle \), \({b_4^0\over 12}=-2 \Vert \tilde{v}_2\Vert ^2 + 6 \Vert \tilde{x}_0\Vert ^2 + 6 \langle \tilde{v}_2|\tilde{x}_2\rangle - 6 \langle \tilde{x}_0|\tilde{x}_2\rangle \), \({b_5^0\over 12}=-2\Vert \tilde{v}_2\Vert ^2 - 3\Vert \tilde{x}_0\Vert ^2 - 6\Vert \tilde{x}_2\Vert ^2 - 5 \langle \tilde{v}_2|\tilde{x}_0\rangle + 8 \langle \tilde{v}_2|\tilde{x}_2\rangle + 9 \langle \tilde{x}_0|\tilde{x}_2\rangle \) and \({b_6^0\over 12}=2 \Vert \tilde{v}_2\Vert ^2 + 2 \langle \tilde{v}_2|\tilde{x}_0\rangle - 2 \langle \tilde{v}_2|\tilde{x}_2\rangle \).

In a search for a global optimum of \(\tilde{\mathcal{E}}_0\), instead of performing an iterative optimization scheme (relying on an initial guess), one can invoke the Mathematica function Solve, which easily finds all roots (real and complex) of a given low-order polynomial in one variable. Upon computing the roots of \(N_0(s_{1})\) we select only those which are real and belong to (0, 1). Next we evaluate \(\tilde{\mathcal{E}}_0\) at each critical point \(s_{1}^{crit}\in (0,1)\) and choose the \(s_{1}^{crit}\) with minimal energy \(\tilde{\mathcal{E}}_0\) as optimal. Again, this particular property of optimizing \(\tilde{\mathcal{E}}_0\) is very useful for the Leap-Frog Algorithm, as compared with optimizing the multivariate function (5). We analyze in the next section the character of the energy \(\tilde{\mathcal{E}}_0\) for the special case (18).
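The root-selection procedure can be mimicked without Mathematica. The Python sketch below is an illustration (scalar data, \(\tilde{a}_0={\mathbf {0}}\)): it assembles the coefficients \(b_j^0\) listed above and locates the real roots of \(N_0\) in (0, 1) by a sign-change scan with bisection (tangential roots of even multiplicity would be missed by such a scan).

```python
# Illustration: real roots of N_0 in (0,1) for scalar data with tilde_a0 = 0.

def N0_roots_in_01(x0, x2, v2, grid=2000):
    """Scan (0,1) for sign changes of N_0 and refine each bracket by bisection."""
    b = [12 * (-6 * x0 * x0),
         12 * (21 * x0 * x0 + 3 * v2 * x0 - 9 * x0 * x2),
         12 * (-24 * x0 * x0 - 10 * v2 * x0 + 30 * x0 * x2),
         12 * (2 * v2 * v2 + 6 * x0 * x0 + 18 * x2 * x2
               + 10 * v2 * x0 - 12 * v2 * x2 - 24 * x0 * x2),
         12 * (-2 * v2 * v2 + 6 * x0 * x0 + 6 * v2 * x2 - 6 * x0 * x2),
         12 * (-2 * v2 * v2 - 3 * x0 * x0 - 6 * x2 * x2
               - 5 * v2 * x0 + 8 * v2 * x2 + 9 * x0 * x2),
         12 * (2 * v2 * v2 + 2 * v2 * x0 - 2 * v2 * x2)]
    N0 = lambda s: sum(bj * s ** j for j, bj in enumerate(b))
    roots = []
    for i in range(1, grid + 1):
        a, c = (i - 1) / grid, i / grid
        if N0(a) == 0.0:                      # exact hit on a grid point
            roots.append(a)
        elif N0(a) * N0(c) < 0.0:             # bracketed sign change
            for _ in range(80):
                m = 0.5 * (a + c)
                if N0(a) * N0(m) <= 0.0:
                    c = m
                else:
                    a = m
            roots.append(0.5 * (a + c))
    return roots
```

For the collinear sample \(\tilde{x}_0=-1\), \(\tilde{x}_2=1\), \(\tilde{v}_2=2\) (i.e. \(k=-1\)) the unique critical point in (0, 1) is the cumulative chord \(\hat{s}_1^{cc}=0.5\), as predicted in Sect. 4.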

The case of first velocity and last acceleration given (covering the last three interpolation points \(x_{n-2},x_{n-1},x_n\in \mathcal{M}\)) is symmetric and as such is omitted.

4 Special Conditions for Non-generic Case of Leap-Frog

For a global minimum of \(\tilde{\mathcal{E}}_0\) the analysis of \(\tilde{\mathcal{E}}_0'(s_1)=0\) reduces to finding all real roots of the sixth-order polynomial \(N_0(s_{1})\). Consider a special case of \(\tilde{x}_{0}, \tilde{x}_{1}, \tilde{x}_{2}\in \mathbb {E}^m\) and \(\tilde{v}_{2}\in \mathbb {R}^m\) which satisfy (for some \(k\ne 0\) as \(\tilde{x}_{0}\ne \mathbf {0}\)):

$$\begin{aligned} \tilde{x}_{2}-\tilde{x}_0=\tilde{v}_{2}\;,\quad \quad \tilde{x}_0=k\tilde{x}_{2}\;. \end{aligned}$$
(18)

Remark 1

The case of \(k<0\) yields the so-called co-linearly ordered data \(\tilde{x}_{0}, \tilde{x}_{1}, \tilde{x}_{2}\). Here the function \(x(s)=s\tilde{x}_{2}+(1-s)\tilde{x}_0\) (with \(s\in (0,1)\)) satisfies the required constraints (18), namely: \(x(0)=\tilde{x}_{0}\), \(x(1)=\tilde{x}_{2}\), \(x'(1)=\tilde{x}_{2}-\tilde{x}_{0}=\tilde{v}_{2}\), and \(x''(0)={\mathbf {0}}\). For \(k\ne 0\) the normalized cumulative chord reads

$$\begin{aligned} \hat{s}^{cc}_1=\hat{s}^{cc}_1(k)={\Vert \tilde{x}_0\Vert \over \Vert \tilde{x}_{2}\Vert +\Vert \tilde{x}_0\Vert }={|k|\over 1+|k|}. \end{aligned}$$
(19)

Notably, \(\Vert \tilde{x}_2\Vert \ne 0\) and \(\Vert \tilde{x}_0\Vert \ne 0\) as \(x_0\ne x_1\) and \(x_1\ne x_2\). Thus, since \(|k|=-k\) (for \(k<0\)), by (18) we have \(x\left( {|k|\over 1+|k|}\right) =\tilde{x}_{2}{|k|\over 1+|k|}+\tilde{x}_{0}{1\over 1+|k|}= \tilde{x}_{2}\left( {|k|\over 1+|k|}+{k\over 1+|k|}\right) ={\mathbf {0}}=\tilde{x}_{1}\). Hence the interpolation condition \(x(s_1)=\mathbf {0}\) is satisfied with \(s_1=\hat{s}_1^{cc}\). Consequently, since \(\Vert x''(s)\Vert =0\) over (0, 1), the energy (7) vanishes at \(s_1=\hat{s}^{cc}_1\). Thus \(\hat{s}^{cc}_1\) is a global minimizer of \(\tilde{\mathcal{E}}_0\) for any \(k<0\). This does not hold for \(k>0\). Note that for large n the case \(k>0\) prevails.    \(\square \)
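Remark 1 can be checked directly: for \(k<0\) the linear curve hits \(\tilde{x}_1={\mathbf {0}}\) exactly at the cumulative chord (19), while for \(k>0\) it misses it. A small Python illustration (not from the paper; scalar data):

```python
# Illustration: the cumulative chord (19) and the zero-acceleration curve of Remark 1.

def s1_cc(k):
    """Normalized cumulative chord (19) for data satisfying (18), k != 0."""
    return abs(k) / (1.0 + abs(k))

def x_linear(s, x2, k):
    """x(s) = s*x2 + (1-s)*x0 with x0 = k*x2 (scalar data)."""
    return s * x2 + (1.0 - s) * (k * x2)
```

Since \(x''\equiv {\mathbf {0}}\), the feasibility of this curve forces the energy (7) to vanish, so \(\hat{s}_1^{cc}\) is the global minimizer whenever \(k<0\); for \(k>0\) the interpolation condition at \(\hat{s}_1^{cc}\) fails.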

By (18) we have \(\Vert \tilde{x}_{0}\Vert ^2=k^2\Vert \tilde{x}_{2}\Vert ^2,\) \(\Vert \tilde{v}_{2}\Vert ^2=(1-k)^2\Vert \tilde{x}_{2}\Vert ^2,\) \(\langle \tilde{v}_{2}|\tilde{x}_{2}\rangle =(1-k)\Vert \tilde{x}_{2}\Vert ^2,\) \(\langle \tilde{v}_{2}|\tilde{x}_{0}\rangle =(k-k^2)\Vert \tilde{x}_{2}\Vert ^2,\) \(\langle \tilde{x}_{0}|\tilde{x}_{2}\rangle =k\Vert \tilde{x}_{2}\Vert ^2\). Substituting the latter into (16) yields (upon using e.g. FullSimplify in Mathematica) \(\tilde{\mathcal{E}}_0(s)=\tilde{\mathcal{E}}_0^d(s) =(-12(\Vert \tilde{x}_{2}\Vert (k + s(1-k)))^2)/((s-1)^3s^2(3 + s))\). The latter vanishes iff \(s=-k/(1-k)\), which for \(k<0\) reads \(s=|k|/(1+|k|)=\hat{s}_1^{cc}(k)\) (as previously, \(\hat{s}_1^{cc}\) can be treated as a function in k). In a search for other critical points of \(\tilde{\mathcal{E}}_0^d\), the respective derivative \(\tilde{\mathcal{E}}_0^{d'}\) (e.g. use symbolic differentiation in Mathematica and FullSimplify) reads:

$$\begin{aligned} \tilde{\mathcal{E}}_0^{d'}(s)= {24\Vert \tilde{x}_2\Vert ^2(k+s(1-k))(-3k + 6ks + s^2(4 - k)+ s^3(2 - 2k)) \over (s-1)^4s^3(3 + s)^2}\;. \end{aligned}$$
(20)

If \(k\ne 1\) (i.e. \(\tilde{x}_2\ne \tilde{x}_0\)) the first factor of the numerator of (20) yields exactly one root \(\hat{s}_1^{L}=-k/(1-k)\). Only for \(k<0\) do we have \(\hat{s}_1^{L}\in (0,1)\). Otherwise, for \(k>1\) we have \(\hat{s}_1^{L}>1\) and for \(0<k<1\) we have \(\hat{s}_1^{L}<0\). The analysis of \(k<0\) and \(k>0\) (with \(k\ne 1\)) for the second, cubic factor in (20) is performed below. Note that for \(k=1\) the first, linear factor in (20) has no roots and the cubic factor reduces to a quadratic with the roots \(\hat{s}_1^{\pm }=-1\pm \sqrt{2}\). Only \(\hat{s}_1=\hat{s}_1^+\in (0,1)\).
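The case split on the linear factor can be illustrated numerically. In the Python sketch below (an illustration, not from the paper) \(\hat{s}_1^L\) is the root of the factor \(k+s(1-k)\) and \(M_0\) denotes the cubic factor of the numerator of (20).

```python
# Illustration: the two numerator factors of (20).

def s1_L(k):
    """Root of the linear factor k + s(1 - k); requires k != 1."""
    return -k / (1.0 - k)

def M0(s, k):
    """Cubic factor M_0(s) = -3k + 6ks + (4 - k)s^2 + (2 - 2k)s^3."""
    return -3.0 * k + 6.0 * k * s + (4.0 - k) * s * s + (2.0 - 2.0 * k) * s ** 3
```

For \(k<0\) the root \(\hat{s}_1^L\) lies inside (0, 1), for \(0<k<1\) it is negative, and for \(k>1\) it exceeds 1; for \(k=1\) the quadratic remnant of \(M_0\) vanishes at \(\hat{s}_1^{+}=-1+\sqrt{2}\).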

(i) Assume that \(k<0\) (the data are co-linearly ordered). The latter shows that the cumulative chord \(\hat{s}_1^{cc}\) (see the linear factor in (20)) is a critical point of \(\tilde{\mathcal{E}}_0^d\).

To find the remaining critical points of \(\tilde{\mathcal{E}}_0^d\) the real roots of \(M_0(s)=-3k + 6ks + s^2(4 - k)+ s^3(2 - 2k)\) over (0, 1) are to be examined. Note that as \(M_0(0)=-3k>0\) (as \(k<0\)) and \(M_0(1)=6>0\), for the existence of exactly one critical point \(\hat{s}_1\) of \(\tilde{\mathcal{E}}_0^d\) (i.e. here with \(\hat{s}_1=\hat{s}_1^{cc}\)) it suffices to show that \(M_0\) is positive at any of its critical points \(\hat{u}_0\in (0,1)\) (i.e. at points where \(M_0'(\hat{u}_0)=0\) we have \(M_0(\hat{u}_0)>0\)). Indeed \(M_0'(u)=0\) iff \(3k + u(4-k) + u^2(3-3k)=0.\) The discriminant (as \(k<0\)) satisfies \(\bar{\varDelta }(k)=16 - 44k + 37k^2>0\) and thus there are two different real roots \(\hat{u}_0^-\) and \(\hat{u}_0^+\) (both depending on the parameter \(k<0\)). Since \((-k)/(k-1)<0\) (again as \(k<0\)), by Vieta’s formulas both roots are of opposite signs. Thus as

$$\begin{aligned} \hat{u}_0^{\pm }=\hat{u}_0^{\pm }(k)={k-4\pm \sqrt{\bar{\varDelta }(k)}\over 6(1-k)} \end{aligned}$$
(21)

we have \(\hat{u}_0^-<0\) (since for \(k<0\) we have \(k-4-\sqrt{\bar{\varDelta }(k)}<0\) and \(6(1-k)>0\)). Hence \(\hat{u}_0^+>0\) - this can be independently shown as being equivalent to the true inequality \(36k(k-1)>0\). But as \(M_0'(0)=6k<0\) and \(M_0'(1)=14-2k>0\), by the Intermediate Value Theorem we have \(\hat{u}_0^+\in (0,1)\) (and it is the unique root of \(M_0'(u)=0\) over (0, 1)). As also \(M_0'(u)<0\) for \(u\in (0,\hat{u}_0^+)\) and \(M_0'(u)>0\) for \(u\in (\hat{u}_0^+,1)\), the function \(M_0\) attains its minimum over (0, 1) at \(\hat{u}_0^{+}\). Evaluating \(M_0(\hat{u}_0^+(k))\) (see (21)) yields a function \(\bar{M}_0(k)=M_0(\hat{u}_0^+(k))\) in k (again use e.g. Mathematica FullSimplify and Factor) which reads: \(\bar{M}_0(k)={-1\over 54(k-1)^2}(-64+426 k - 606k^2 + 217 k^3 + 16\sqrt{\bar{\varDelta }(k)} - 44k\sqrt{\bar{\varDelta }(k)}+ 37k^2\sqrt{\bar{\varDelta }(k)})\). By Taylor’s expansion with the Lagrange remainder we have \(f(x)=\sqrt{1+x} =1+(1/2)x-(1/(8(1+\xi )^{3/2}))x^2\) (with \(0<\xi <x\) if \(x>0\) and \(x<\xi <0\) if \(x<0\)). Upon applying the latter to \(\sqrt{\bar{\varDelta }(k)}=4\sqrt{1+(37/16)k^2-(11/4)k}\) with \(x=(37/16)k^2-(11/4)k\) we arrive at \(\lim _{k\rightarrow 0^-}\bar{M}_0(k)=0\). Again by Taylor’s expansion, the dominating term (recall for \(k\approx 0^-\) we have \(|k| \ge |k|^{\alpha }\) with \(\alpha \ge 1\)) in the expression for \(\bar{M}_0(k)\) (inside the brackets) is the linear component \(426k-88k-176k=162k<0\), as \(k<0\) (here the constant 64 cancels). Thus \(\bar{M}_0(k)>0\), for sufficiently small \(k<0\). To show that \(\lim _{k\rightarrow -\infty }\bar{M}_0(k)=-\infty \), it suffices to show \(\lim _{k\rightarrow -\infty }((217k^3+37k^2\sqrt{\bar{\varDelta }(k)})/(k-1)^2) =+\infty \). The latter follows upon the observation that (for \(k<0\)) \(217k^3+37k^2\sqrt{\bar{\varDelta }(k)}> 217k^3+37k^2\sqrt{36k^2}= 217k^3+222k^2|k|=-5k^3\). The Mathematica function NSolve applied to \(\bar{M}_0(k)=0\) yields two real roots i.e. 
\(k_1=0\) (excluded as \(\tilde{x}_0=k\tilde{x}_2\ne {\mathbf {0}}\)) and \(k_2\approx -26.1326\). Next \(\bar{M}_0'(k)={1\over 54(k-1)^3} (-298+ 786k - 651 k^2 + 217k^3 + 34\sqrt{\bar{\varDelta }(k)} -89k\sqrt{\bar{\varDelta }(k)} + 37k^2\sqrt{\bar{\varDelta }(k)})\). Again, Solve applied to \(\bar{M}_0'(k)=0\) yields one root \(-4.61116\). Combining the latter with the plotted graph of \(\bar{M}_0(k)\) shows, for each \(k\in (k_2,0)\), that \(\bar{M}_0>0\) and thus \(M_0>0\). Hence, for each \(k\in (k_2,0)\) there is only one critical point of \(\tilde{\mathcal{E}}_0^d\) over (0, 1) - i.e. the cumulative chord \(\hat{s}_1^{cc}=|k|/(1+|k|)\). Hence if any iterative optimization scheme for \(\tilde{\mathcal{E}}_0^{d}\) is invoked a good initial guess can be an arbitrary number from the interval (0, 1) (due to the unimodality of \(\tilde{\mathcal{E}}_0^d\)). In case of small perturbations of (18) one expects that \(\tilde{\mathcal{E}}_0^{\delta }\) preserves a pattern similar to its unperturbed counterpart \(\tilde{\mathcal{E}}_0^{0}=\tilde{\mathcal{E}}^d_0\) (see Proposition 1). Thus, for arbitrary \(k_2<k<0\), a good initial guess to optimize \(\tilde{\mathcal{E}}_0^{\delta }\) should be chosen in the proximity of the cumulative chord \(\hat{s}_1^{cc}(k)\).
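The constant \(k_2\) can also be recovered without NSolve. The Python sketch below (an illustration, not the paper's code) evaluates the closed form of \(\bar{M}_0\) quoted above and brackets its negative root by bisection.

```python
# Illustration: locating k_2, the negative root of bar{M}_0.
import math

def M0_bar(k):
    """bar{M}_0(k) = M_0(hat{u}_0^+(k)), in the closed form quoted in the text."""
    d = 16.0 - 44.0 * k + 37.0 * k * k                    # bar{Delta}(k)
    p = -64.0 + 426.0 * k - 606.0 * k ** 2 + 217.0 * k ** 3
    # 16*sqrt(d) - 44k*sqrt(d) + 37k^2*sqrt(d) = d*sqrt(d)
    return -(p + d * math.sqrt(d)) / (54.0 * (k - 1.0) ** 2)

def find_k2(lo=-27.0, hi=-25.0):
    """Bisection on a bracket of the root reported as k_2 approx -26.1326."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if M0_bar(lo) * M0_bar(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

The sign pattern \(\bar{M}_0>0\) on \((k_2,0)\) and \(\bar{M}_0<0\) on \((-\infty ,k_2)\) matches the discussion above.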

Clearly for \(k=k_2\approx -26.1326\) (the second case \(k=0\) is excluded) \(M_0(\hat{u}_0^+(k_2))=\bar{M}_0(k_2)=0\) and \(M_0'(\hat{u}_0^+(k_2))=0\) (see (21)). Thus we have one additional critical point \(\hat{u}_0^+({k_2})\in (0,1)\) of \(\tilde{\mathcal{E}}_0^d\). Hence for \(k_2\) there are exactly two critical points of \(\tilde{\mathcal{E}}_0^d\) over (0, 1) - one is the cumulative chord \(\hat{s}_1^{cc}(k_2)\in (0,1)\) (a global minimum) and the second one is \(\hat{s}_1^0=\hat{u}_0^+(k_2)\in (0,1)\). Substituting \(k_2\approx -26.1326\) into (19) and (21) gives \(\hat{u}_0^{+}(k_2)\approx 0.813606<\hat{s}_1^{cc}(k_2)=(|k_2|/(1+|k_2|)) \approx 0.963144\). Of course \(\hat{u}_0^{+}(k_2)\) must be a saddle-like point of \(\tilde{\mathcal{E}}_0^d\) - recall here that \(\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc}(k_2))=0\) (the attained global minimum), which is smaller than \(\tilde{\mathcal{E}}_0^d(\hat{u}_0^+(k_2))\approx 12083.9\Vert \tilde{x}_2\Vert ^2>0\), and that \(\lim _{s\rightarrow 0^+}\tilde{\mathcal{E}}_0^d(s) =\lim _{s\rightarrow 1^-}\tilde{\mathcal{E}}_0^d(s)=+\infty \). This together with the non-negativity of \(\tilde{\mathcal{E}}_0^d \) and its smoothness over (0, 1) implies that \(\hat{u}_0^{+}(k_2)\) is a saddle-like point of \(\tilde{\mathcal{E}}_0^d\). Thus for \(k=k_2\), if any iterative optimization scheme for \(\tilde{\mathcal{E}}_0^d\) is to be invoked, a good initial guess can be an arbitrary number from the interval \((\hat{s}_1^{cc}(k_2),1)\). For small perturbations of (18) one expects the energy \(\tilde{\mathcal{E}}_0^{\delta }\) to have a pattern similar to its unperturbed counterpart \(\tilde{\mathcal{E}}_0^{0}=\tilde{\mathcal{E}}_0^d\). Thus, for \(k=k_2\), a good initial guess to optimize \(\tilde{\mathcal{E}}_0^{\delta }\) should also be taken close to the cumulative chord \(\hat{s}_1^{cc}(k_2)\) and preferably from \((\hat{s}_1^{cc}(k_2),1)\).

For \(k\in (-\infty ,k_2)\) we have \(M_0(\hat{u}_0^+(k))=\bar{M}_0(k)<0\). Thus for each \(k\in (-\infty ,k_2)\), as \(M_0(0)>0\), \(M_0(1)>0\), \(M_0(\hat{u}_0^+(k))<0\) and \(M_0'\) vanishes over (0, 1) only at \(\hat{u}_0^+(k)\in (0,1)\), there are exactly two different additional critical points \(\hat{s}_0^{1}(k),\hat{s}_0^2(k)\in (0,1)\) (say \(\hat{s}_0^{1}(k)<\hat{s}_0^{2}(k)\)) of \(\tilde{\mathcal{E}}_0^d\) (as \(M_0(\hat{s}_0^{1,2}(k))=0\)). Clearly the inequalities \(\hat{s}_0^1(k)<\hat{u}_0^+(k) <\hat{s}_0^2(k)\) hold. Recall that the global minimum of \(\tilde{\mathcal{E}}_0^d\) is attained at the cumulative chord \(\hat{s}_1^{cc}\), where \(\tilde{\mathcal{E}}_0^d\) vanishes. Note also that here \(\hat{s}_1^{cc}\approx 1\) as \(\hat{s}_1^{cc}(k)=|k|/(1+|k|)\in (|k_2|/(1+|k_2|),1) =(0.963144,1)\) for \(k\in (-\infty ,k_2)\). To show the latter use the facts that \(f(x)=x/(1+x)\) is increasing for \(x\ge 0\), with \(0 \le f(x)<1\) and \(\lim _{x \rightarrow +\infty }f(x)=1\). In addition, as \(M_0(\hat{s}_1^{cc}(k))=(3 - 4k)k/(k-1)^2\ne 0\) for \(k\in (-\infty ,k_2)\), and \(M_0(\hat{s}_0^{1,2}(k))=0\), we have \(\hat{s}_1^{cc}(k)\ne \hat{s}_0^{1,2}(k)\). In Remark 2 we will show that none of \(\hat{s}_0^{1,2}\) can be saddle-like points (for \(k\in (-\infty ,k_2)\)). Thus, as \(\lim _{s\rightarrow 0^{+}}\tilde{\mathcal{E}}_0^d(s) =\lim _{s\rightarrow 1^{-}}\tilde{\mathcal{E}}_0^d(s)=+\infty \) and \(\tilde{\mathcal{E}}_0^d\) attains its global minimum at \(\hat{s}_1^{cc}(k)\), both \(\hat{s}_0^{1,2}\) must lie on one side of \(\hat{s}_1^{cc}\) (as otherwise both \(\hat{s}_0^1\) and \(\hat{s}_0^2\) would be saddle-like points, since no more than 3 critical points of \(\tilde{\mathcal{E}}_0^{d}\) over (0, 1) exist). Thus to prove \(\hat{s}_0^{1,2}(k)<\hat{s}_1^{cc}(k)\) it suffices to show \(\hat{s}_0^1(k)<\hat{s}_1^{cc}(k)\). 
For the latter (as \(\hat{s}_0^1(k)<\hat{u}_0^+(k)\)) it is sufficient to prove \(\hat{u}_0^+(k)<\hat{s}_1^{cc}(k)\) (where \(k\in (-\infty ,k_2)\)). A simple inspection shows that \(\hat{u}_0^+(k)=(4-k-\sqrt{16-44k+37k^2})/(6(k-1))<\hat{s}_1^{cc}(k)=|k|/(1+|k|)\) holds as this requires \(4-7k>\sqrt{\bar{\varDelta }(k)}\), which is true since \(k(k-1)>0\) (with \(k<0\)). Thus at \(\hat{s}_0^1(k)\) (at \(\hat{s}_0^2(k)\)) we have a local minimum (maximum) of \(\tilde{\mathcal{E}}_0^d\). Again, if any iterative optimization scheme for \(\tilde{\mathcal{E}}_0^d\) is invoked a good initial guess can be an arbitrary number from the interval \((\hat{s}_1^{cc},1)\). For small perturbations of (18) one expects the energy \(\tilde{\mathcal{E}}_0^{\delta }\) to preserve the pattern of its unperturbed counterpart \(\tilde{\mathcal{E}}_0^{0}=\tilde{\mathcal{E}}_0^d\) (see Proposition 1). Consequently, for any \(k\in (-\infty ,k_2)\), a good initial guess can be taken close to \(\hat{s}_1^{cc}(k)\) and from the interval \((\hat{s}_1^{cc}(k),1)\).

(ii) Assume now that \(k>0\) (the data are co-linearly unordered). As here \(s_1^L=-k/(1-k)\notin (0,1)\) (see (20)), the critical points of \(\tilde{\mathcal{E}}^{d}_0\) coincide with the roots of the cubic \(N_c(s)=-3k + 6ks + s^2(4 - k)+ s^3(2 - 2k)\).

Evidently, for \(0<k<1\) there is only one sign change in the sequence of coefficients of \(N_c\) (three are positive and one is negative). By Descartes' rule of signs there exists at most one positive root of \(N_c(s)\). The analysis from Sect. 3 assures the existence of at least one critical point of \(\tilde{\mathcal{E}}_0\) over \((0,1)\) (\(\tilde{\mathcal{E}}^{d}_0\) is a special case of \(\tilde{\mathcal{E}}_0\)). Thus for \(0<k<1\) there exists exactly one critical point \(\hat{s}_1\in (0,1)\) (a global minimum) of \(\tilde{\mathcal{E}}^{d}_0\).

The case \(k=1\) (with \(\tilde{x}_2=\tilde{x}_0\)), already analyzed, yields one critical point \(\hat{s}_1=-1+\sqrt{2}\in (0,1)\) of \(\tilde{\mathcal{E}}^{d}_0\).

Finally, for \(k>1\), as \(N_c(0)=-3k<0\), \(N_c(1)=6>0\) and \(N_c\in C^{\infty }\), it suffices to show that \(N_c'(u)=0\) has at most one root over \(u\in (0,1)\). Of course, no roots of \(N_c'(u)=0\) yields \(N_c(s)\) strictly increasing over \((0,1)\) (as \(N_c(0)<0\) and \(N_c(1)>0\)) and hence exactly one critical point \(\hat{s}_1\in (0,1)\) (a global minimum) of \(\tilde{\mathcal{E}}^{d}_0\). The existence of exactly one root \(u_1\in (0,1)\) of \(N_c'(u)=0\) still yields (since \(N_c(0)<0\) and \(N_c(1)>0\)) exactly one root \(\hat{s}_1\in (0,1)\) of \(N_c(s)=0\) (this time \(N_c\) is not monotonic, except when \(\hat{s}_1=u_1\), i.e. when \(u_1\) is a saddle-like point of \(N_c\)). To show that \(N_c'(s)=6k+2s(4-k)+6s^2(1-k)=0\) has at most one root over \((0,1)\), note that as now \(1-k<0\) we have \(\lim _{s\rightarrow \pm \infty }N_c'(s)=-\infty \). The latter combined with \(N_c'(0)=6k>0\) assures the existence of exactly one negative and one positive root of \(N_c'(s)=0\). This completes the proof.

Thus for \(k>0\) the energy \(\tilde{\mathcal{E}}^{d}_0\) has exactly one critical point \(\hat{s}_1\in (0,1)\).
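This root count is easy to cross-check numerically. A small sketch (illustrative only; the function name is ours): the coefficients of \(N_c\) are passed to np.roots highest degree first, and a vanishing leading coefficient (as for \(k=1\)) is trimmed automatically.

```python
import numpy as np

def nc_roots_in_01(k):
    # N_c(s) = -3k + 6k s + (4 - k) s^2 + (2 - 2k) s^3, listed highest
    # degree first for np.roots (the zero leading coefficient at k = 1
    # is trimmed automatically); keep only real roots inside (0,1)
    r = np.roots([2.0 - 2.0 * k, 4.0 - k, 6.0 * k, -3.0 * k])
    return sorted(x.real for x in r if abs(x.imag) < 1e-10 and 0.0 < x.real < 1.0)

for k in (0.5, 1.0, 2.0, 5.0, 20.0):
    print(k, nc_roots_in_01(k))   # exactly one root in (0,1) for each k > 0
```

For \(k=1\) this returns \(\hat{s}_1=-1+\sqrt{2}\approx 0.414214\), and for \(k=1/2\) and \(k=5\) the values \(\approx 0.346272\) and \(\approx 0.556194\) reported in Example 1 below.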

5 Perturbed Special Case

The case when \(\{\tilde{x}_0,\tilde{x}_1=\mathbf {0},\tilde{x}_{2}\}\) and \(\tilde{v}_{2}\) do not satisfy (18) is considered now. Upon introducing a perturbation vector \(\delta =(\delta _1,\delta _2)\in \mathbb {R}^{2m}\), the question arises whether the unimodality of \(\tilde{\mathcal{E}}_0^d=\tilde{\mathcal{E}}_0^{\delta =0}\) (holding for the special data (18)) extends to the perturbed case, i.e. to \(\tilde{\mathcal{E}}_0^{\delta }\). In doing so, assume the following holds:

$$\begin{aligned} \tilde{x}_{2}-\tilde{x}_0-\tilde{v}_{2}=\delta _1\;,\quad \quad \tilde{x}_{0}-k\tilde{x}_{2}=\delta _2\;, \end{aligned}$$
(22)

(for some \(k\ne 0\)) with the corresponding \(\tilde{\mathcal{E}}_0^{\delta }\) derived as in (16) (see also (23)). For \(\delta _1=\delta _2=\mathbf {0}\in \mathbb {R}^m\) formulas (22) reduce to (18) (i.e. \(\tilde{\mathcal{E}}_0^0=\tilde{\mathcal{E}}_0^d\) derived for (18)). For an explicit formula of \(\tilde{\mathcal{E}}_0^{\delta }\) and \(\tilde{\mathcal{E}}_0^{\delta '}\) we resort to (recalling (22)): \(\Vert \tilde{x}_{0}\Vert ^2=k^2\Vert \tilde{x}_{2}\Vert ^2+\Vert \delta _2\Vert ^2 +2k\langle \tilde{x}_2|\delta _2\rangle \), \(\Vert \tilde{v}_{2}\Vert ^2=(1-k)^2\Vert \tilde{x}_{2}\Vert ^2+\Vert \delta _1\Vert ^2+\Vert \delta _2\Vert ^2 +2\langle \delta _1|\delta _2\rangle -2(1-k)\langle \tilde{x}_2|\delta _1\rangle -2(1-k)\langle \tilde{x}_2|\delta _2\rangle \), \(\langle \tilde{v}_{2}|\tilde{x}_{2}\rangle =(1-k)\Vert \tilde{x}_{2}\Vert ^2 -\langle \tilde{x}_2|\delta _1\rangle -\langle \tilde{x}_2|\delta _2\rangle \), \(\langle \tilde{v}_{2}|\tilde{x}_{0}\rangle =(k-k^2)\Vert \tilde{x}_{2}\Vert ^2 +(1-2k)\langle \tilde{x}_2|\delta _2\rangle -k\langle \tilde{x}_2|\delta _1\rangle -\langle \delta _1|\delta _2\rangle -\Vert \delta _2\Vert ^2\) and \(\langle \tilde{x}_{0}|\tilde{x}_{2}\rangle =k\Vert \tilde{x}_{2}\Vert ^2 +\langle \tilde{x}_2|\delta _2\rangle \). Substituting the latter into the formula for \(\tilde{\mathcal{E}}_0\) (here \(s_1=s\)) (see (16)) yields (we use here the Mathematica function FullSimplify)

$$\begin{aligned} \tilde{\mathcal{E}}_0^{\delta }(s)= & {} {-1\over (s-1)^3s^2(s+3)}(12(\Vert \delta _2\Vert ^2(s-1)^2+k^2\Vert \tilde{x}_2\Vert ^2(s-1)^2 \nonumber \\& +\,s(\langle \delta _1| \delta _2 \rangle (s-1)^2(1 + s)+ s(\Vert \tilde{x}_2\Vert ^2 +\Vert \delta _1\Vert ^2 (s-1)^2 +\langle \tilde{x}_2|\delta _1\rangle \nonumber \\& -\, s^2\langle \tilde{x}_2|\delta _1\rangle -2 \langle \tilde{x}_2|\delta _2 \rangle ) + 2 \langle \tilde{x}_2|\delta _2\rangle ) + k (s-1) (-2\Vert \tilde{x}_2\Vert ^2\,s \nonumber \\& +\,(s-1)(s(1 + s) \langle \tilde{x}_2|\delta _1\rangle + 2 \langle \tilde{x}_2|\delta _2\rangle )))) \end{aligned}$$
(23)

which upon simplification yields \(\tilde{\mathcal{E}}_0^{\delta }(s)=(-12M_0^{\delta }(s))/((s-1)^3s^2(s+3))\), where \(M_0^{\delta }\) is the 4-th order polynomial in s with the coefficients (use the Mathematica functions Factor and CoefficientList): \(a_0^{0,\delta }=\Vert \delta _2\Vert ^2 + k^2\Vert \tilde{x}_2\Vert ^2 + 2k\langle \tilde{x}_2|\delta _2\rangle \), \(a_1^{0,\delta }=\langle \delta _1|\delta _2 \rangle - 2 \Vert \delta _2\Vert ^2 + 2 k\Vert \tilde{x}_2\Vert ^2 -2k^2\Vert \tilde{x}_2\Vert ^2 + k\langle \tilde{x}_2|\delta _1\rangle + 2\langle \tilde{x}_2|\delta _2\rangle -4 k \langle \tilde{x}_2|\delta _2\rangle \), \(a_2^{0,\delta }= -\langle \delta _1|\delta _2\rangle + \Vert \delta _1\Vert ^2 + \Vert \delta _2\Vert ^2 + \Vert \tilde{x}_2\Vert ^2 - 2 k \Vert \tilde{x}_2\Vert ^2 + k^2\Vert \tilde{x}_2\Vert ^2 + \langle \tilde{x}_2|\delta _1\rangle - k\langle \tilde{x}_2|\delta _1\rangle - 2\langle \tilde{x}_2|\delta _2\rangle + 2k\langle \tilde{x}_2|\delta _2\rangle \), \(a_3^{0,\delta }=-\langle \delta _1|\delta _2\rangle - 2 \Vert \delta _1\Vert ^2 - k \langle \tilde{x}_2|\delta _1\rangle \) and \(a_4^{0,\delta }=\langle \delta _1|\delta _2\rangle + \Vert \delta _1\Vert ^2 - \langle \tilde{x}_2|\delta _1\rangle + k \langle \tilde{x}_2|\delta _1\rangle \), with the corresponding derivative (use symbolic differentiation in Mathematica and apply Factor and CoefficientList) \(\tilde{\mathcal{E}}_0^{\delta '}(s)=(12N_0^{\delta }(s))/((s-1)^4s^3(s+3)^2)\), where \(N_0^{\delta }\) is the 6-th order polynomial in s with the coefficients: \(b_0^{0,\delta }=-6\Vert \delta _2\Vert ^2 - 6k^2\Vert \tilde{x}_2\Vert ^2- 12k\langle \tilde{x}_2|\delta _2\rangle \), \(b_1^{0,\delta }=-3 \langle \delta _1|\delta _2\rangle + 18\Vert \delta _2\Vert ^2 - 6 k \Vert \tilde{x}_2\Vert ^2 + 18k^2\Vert \tilde{x}_2\Vert ^2 - 3k\langle \tilde{x}_2|\delta _1\rangle -6 \langle \tilde{x}_2|\delta _2\rangle + 36 k \langle \tilde{x}_2|\delta _2\rangle \), \(b_2^{0,\delta }=10 \langle \delta _1|\delta _2\rangle - 14\Vert \delta _2\Vert ^2 + 20 k\Vert \tilde{x}_2\Vert ^2 - 14k^2\Vert \tilde{x}_2\Vert ^2 + 10k\langle \tilde{x}_2|\delta _1\rangle + 20\langle \tilde{x}_2|\delta _2\rangle - 28 k\langle \tilde{x}_2|\delta _2\rangle \), \(b_3^{0,\delta }=-6 \langle \delta _1|\delta _2\rangle + 2\Vert \delta _1\Vert ^2 - 2\Vert \delta _2\Vert ^2 + 8\Vert \tilde{x}_2\Vert ^2 - 6k\Vert \tilde{x}_2\Vert ^2 - 2k^2\Vert \tilde{x}_2\Vert ^2 + 8\langle \tilde{x}_2|\delta _1\rangle -6 k \langle \tilde{x}_2|\delta _1\rangle - 6 \langle \tilde{x}_2|\delta _2\rangle - 4 k \langle \tilde{x}_2|\delta _2\rangle \), \(b_4^{0,\delta }=-4 \langle \delta _1|\delta _2\rangle - 2\Vert \delta _1\Vert ^2 + 4\Vert \delta _2\Vert ^2 +4\Vert \tilde{x}_2\Vert ^2 - 8k\Vert \tilde{x}_2\Vert ^2 + 4k^2\Vert \tilde{x}_2\Vert ^2 -2 \langle \tilde{x}_2|\delta _1\rangle - 4 k \langle \tilde{x}_2|\delta _1\rangle - 8\langle \tilde{x}_2|\delta _2 \rangle + 8 k \langle \tilde{x}_2|\delta _2\rangle \), \(b_5^{0,\delta }=\langle \delta _1|\delta _2\rangle - 2\Vert \delta _1\Vert ^2 - 4 \langle \tilde{x}_2|\delta _1\rangle + k \langle \tilde{x}_2|\delta _1\rangle \) and \(b_6^{0,\delta }=2 \langle \delta _1|\delta _2\rangle + 2\Vert \delta _1\Vert ^2 - 2 \langle \tilde{x}_2|\delta _1\rangle + 2 k \langle \tilde{x}_2|\delta _1\rangle \).
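Coefficient lists of this length are easy to mistranscribe, so a numerical cross-check is worthwhile. The sketch below (illustrative only; function names are ours) compares the quartic \(M_0^{\delta }\) assembled from \(a_0^{0,\delta },\dots ,a_4^{0,\delta }\) with the bracketed expression of (23) at random data, and verifies that for \(\delta =0\) the quartic collapses to \(\Vert \tilde{x}_2\Vert ^2(k+s-ks)^2\), the numerator of the special-case energy \(\tilde{\mathcal{E}}_0^d\).

```python
import numpy as np

def m_coeffs(x2, d1, d2, k):
    # a_0^{0,delta}, ..., a_4^{0,delta} of M_0^delta, transcribed from the text
    X, p21, p22 = x2 @ x2, x2 @ d1, x2 @ d2
    p12, n1, n2 = d1 @ d2, d1 @ d1, d2 @ d2
    return [n2 + k**2*X + 2*k*p22,
            p12 - 2*n2 + 2*k*X - 2*k**2*X + k*p21 + 2*p22 - 4*k*p22,
            -p12 + n1 + n2 + X - 2*k*X + k**2*X + p21 - k*p21 - 2*p22 + 2*k*p22,
            -p12 - 2*n1 - k*p21,
            p12 + n1 - p21 + k*p21]

def bracket23(s, x2, d1, d2, k):
    # the bracketed expression of (23), without the overall factor 12
    X, p21, p22 = x2 @ x2, x2 @ d1, x2 @ d2
    p12, n1, n2 = d1 @ d2, d1 @ d1, d2 @ d2
    return (n2*(s - 1)**2 + k**2*X*(s - 1)**2
            + s*(p12*(s - 1)**2*(1 + s)
                 + s*(X + n1*(s - 1)**2 + p21 - s**2*p21 - 2*p22) + 2*p22)
            + k*(s - 1)*(-2*X*s + (s - 1)*(s*(1 + s)*p21 + 2*p22)))

rng = np.random.default_rng(7)
x2, d1, d2 = rng.normal(size=(3, 3))   # random sample data in R^3
k = -4.0
a = m_coeffs(x2, d1, d2, k)
for s in np.linspace(0.05, 0.95, 19):
    assert abs(sum(c * s**i for i, c in enumerate(a)) - bracket23(s, x2, d1, d2, k)) < 1e-9

# for delta = 0 the quartic collapses to ||x_2||^2 (k + s - k s)^2
z = np.zeros(3)
a0 = m_coeffs(x2, z, z, k)
for s in (0.3, 0.7):
    assert abs(sum(c * s**i for i, c in enumerate(a0)) - (x2 @ x2)*(k + s - k*s)**2) < 1e-10
print("coefficient lists consistent with (23)")
```

An analogous check of \(b_0^{0,\delta },\dots ,b_6^{0,\delta }\) can be run against a finite-difference derivative of \(\tilde{\mathcal{E}}_0^{\delta }\).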

The following result holds (the proof is straightforward):

Proposition 1

Assume that for the unperturbed data (18) the corresponding energy \(\tilde{\mathcal{E}}_0^0\) has exactly one critical point \(\hat{s}_0\in (0,1)\) at which \(\tilde{\mathcal{E}}_0^{0''}(\hat{s}_0)>0 \). Then there exists a sufficiently small \(\varepsilon _0>0\) such that for all \(\Vert \delta \Vert <\varepsilon _0\) (where \(\delta =(\delta _1,\delta _2)\in \mathbb {R}^{2m}\)) the perturbed data (22) yield the energy \(\tilde{\mathcal{E}}_0^{\delta }\) with exactly one critical point \(\hat{s}_0^{\delta }\in (0,1)\) (a global minimum of \(\tilde{\mathcal{E}}_0^{\delta }\), sufficiently close to \(\hat{s}_0\)).

Remark 2

A saddle-like point of \(\tilde{\mathcal{E}}_0^0=\tilde{\mathcal{E}}_0^d\) requires \(\tilde{\mathcal{E}}_0^{0'}(s)=\tilde{\mathcal{E}}_0^{0''}(s)=0\) (which, as shown, happens e.g. for \(k=k_2\approx -26.1326\)). In a search for other possible saddle-like points with \(k\le k_2\) varying, we eliminate the variable s from \(\tilde{\mathcal{E}}_0^{0'}(s)=0\) and \(\tilde{\mathcal{E}}_0^{0''}(s)=0\) by resorting to the Mathematica function Eliminate. Indeed, upon symbolic differentiation of \(\tilde{\mathcal{E}}^{\delta '}_0\) (and then setting \(\delta =0\)) with the Mathematica function FullSimplify we obtain \(\tilde{\mathcal{E}}_0^{0''}(s)={-24 \Vert \tilde{x}_2\Vert ^2\over (s-1)^5s^4(s+3)^3} (27k^2 + (18k - 102k^2)s + (-72k + 120k^2)s^2 +(96k-12k^2)s^3 + (46+ 28k - 53k^2)s^4+( 40- 50k + 10k^2)s^5+(10- 20k+ 10 k^2)s^6)\). Applying Eliminate to \(\tilde{\mathcal{E}}_0^{0'}(s)=\tilde{\mathcal{E}}_0^{0''}(s)=0\) (in fact to the respective numerators of the first and the second derivatives of \(\tilde{\mathcal{E}}_0^{0}\)) leads to \(576k^3 - 4209k^4 + 10636k^5 - 11277k^6 + 4152k^7 + 176k^8 = 0\), which factorizes (use e.g. Mathematica Factor) into \(k^3(-3+ 4k)^2(64 - 297k + 276k^2 + 11k^3)=0\). The Mathematica function NSolve yields four non-negative roots \(k=0\), \(k=3/4\), \(k=0.300288\) and \(k=0.741423\) (all excluded as \(k\le k_2\)) and one negative root \(k=-26.1326=k_2\). The latter is not only consistent with the previous analysis but also implies that a saddle-like point can only occur for \(k=k_2\). This fact was used when we analyzed the critical points of \(\tilde{\mathcal{E}}_0^{0}\) for different \(k<0\).    \(\square \)

Fig. 1.

The graph of \(\tilde{\mathcal{E}}_0^d\) for different \(\tilde{x}_0, \tilde{x}_{1}, \tilde{x}_2\in \mathbb {E}^m\) co-linearly ordered with varying \(k\in (k_2,0)\) and \(\Vert \tilde{x}_2\Vert =1\): (a) \(k=-4\) and a global minimum at \(\hat{s}_1^{cc}(k)=4/5=0.8\), (b) \(k=-15\) and a global minimum at \(\hat{s}_1^{cc}(k)=15/16\approx 0.9375\), (c) \(k=-25\) and a global minimum at \(\hat{s}_1^{cc}(k)=25/26\approx 0.961538\).

Example 1

Let \(\tilde{x}_0,\tilde{x}_1=\mathbf {0}, \tilde{x}_2\in \mathbb {E}^m\) be co-linearly ordered, i.e. \(\tilde{x}_0=k\tilde{x}_2\), for some \(k<0\) and \(\Vert \tilde{x}_2\Vert =1\). The energy \(\tilde{\mathcal{E}}_0^d\) with \(\Vert \tilde{x}_2\Vert =1\) reads here as \(\tilde{\mathcal{E}}_0^d(s)=(-12((k + s - ks)^2))/((s-1)^3s^2(3 + s))\). The plot of \(\tilde{\mathcal{E}}_0^d\) with \(k=-4,-15,-25\in (k_2\approx -26.1326,0)\) is shown in Fig. 1. As already proved, in this case there is only one critical point of \(\tilde{\mathcal{E}}_0^d\), attained at the cumulative chords \(\hat{s}_1^{cc}(k)=4/5=0.8\), \(\hat{s}_1^{cc}(k)=15/16\approx 0.9375\) and \(\hat{s}_1^{cc}(k)=25/26\approx 0.961538\), respectively (where \(\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc}(k))=0\)). Similarly, the plot of the corresponding energy \(\tilde{\mathcal{E}}_0^d\) with \(k=k_2\approx -26.1326\) is shown in Fig. 2a). Here there are two critical points of \(\tilde{\mathcal{E}}_0^d\), i.e. a global minimum at the cumulative chord \(\hat{s}_1^{cc}(k_2)\approx 26.1326/27.1326\approx 0.963144\) (where \(\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc}(k_2))=0\)) and a saddle-like point \(\hat{s}_1^0(k_2)\approx 0.813607\) (with \(\tilde{\mathcal{E}}_0^d(\hat{s}_1^0(k_2))\approx 12083.9\)). Finally, the plot of \(\tilde{\mathcal{E}}_0^d\) with \(k=-35,-65\in (-\infty , k_2)\) is shown in Fig. 2b)–c). As established above, a single global minimum of \(\tilde{\mathcal{E}}_0^d\) is again attained at the cumulative chord \(\hat{s}_1^{cc}(k)=35/36\approx 0.972222\) (or at \(\hat{s}_1^{cc}(k)=65/66\approx 0.984848\)) with \(\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc}(k))=0\).
There are two other critical points: a local maximum at \(\hat{s}_0^2(k)\approx 0.897748\) with \(\tilde{\mathcal{E}}_0^d(\hat{s}_0^2(k))\approx 25683.8\) (or at \(\hat{s}_0^2(k)\approx 0.950547\) with \(\tilde{\mathcal{E}}_0^d(\hat{s}_0^2(k))\approx 142466\)) and a local minimum at \(\hat{s}_0^1(k)\approx 0.743991\) with \(\tilde{\mathcal{E}}_0^d(\hat{s}_0^1(k))\approx 23297\) (or at \(\hat{s}_0^1(k)\approx 0.711383\) with \(\tilde{\mathcal{E}}_0^d(\hat{s}_0^1(k))\approx 86569.7\)). Note that for \(k=-35,-65\), as already proved, the critical points and the cumulative chord \(\hat{s}_1^{cc}(k)\) satisfy \(\hat{s}_0^{1,2}(k)<\hat{s}_1^{cc}(k)\) and \(\hat{s}_1^{cc}(k)\approx 1\).

Fig. 2.

The graph of \(\tilde{\mathcal{E}}_0^d\) for \(\tilde{x}_0, \tilde{x}_1, \tilde{x}_2\in \mathbb {E}^m\) co-linearly ordered with varying \(k\in (-\infty ,k_2]\) and \(\Vert \tilde{x}_2\Vert =1\): (a) \(k=k_2\) and a global minimum at \(\hat{s}_1^{cc}(k_2)=(|k_2|/(1+|k_2|))\approx 0.963144\) and saddle-like point at \(\hat{s}_1^0(k_2)\approx 0.813607\), (b) \(k=-35\) and a global minimum at \(\hat{s}_1^{cc}(k)=35/36\approx 0.972222\) and two other critical points i.e. with local maximum \(\hat{s}_0^2 \approx 0.897748\) and with local minimum \(\hat{s}_0^1\approx 0.743991\), (c) \(k=-65\) and global minimum at \(\hat{s}_1^{cc}(k)=65/66\approx 0.984848\) and two other critical points i.e. with local maximum \(\hat{s}_0^2\approx 0.950547\) and with local minimum \(\hat{s}_0^1\approx 0.711383\).

Fig. 3.

The graph of \(\tilde{\mathcal{E}}_0^d\) for \(\tilde{x}_0, \tilde{x}_1, \tilde{x}_2\in \mathbb {E}^m\) co-linearly unordered with varying \(k\in (0,+\infty )\) and \(\Vert \tilde{x}_2\Vert =1\): (a) \(k=1/2\) and a global minimum at \(\hat{s}_1\approx 0.346272\) different from \(\hat{s}_1^{cc}(k)=(|k|/(1+|k|))=1/3\approx 0.333333\), (b) \(k=1\) and a global minimum at \(\hat{s}_1=-1+\sqrt{2}\approx 0.414214\) different from \(\hat{s}_1^{cc}(k)=1/2\), (c) \(k=5\) and a global minimum at \(\hat{s}_1\approx 0.556194\) different from \(\hat{s}_1^{cc}(k)=5/6\approx 0.833333\).

Let now \(\tilde{x}_0,\tilde{x}_1=\mathbf {0}, \tilde{x}_2\in \mathbb {E}^m\) be co-linearly unordered, i.e. \(\tilde{x}_0=k\tilde{x}_2\), for some \(k>0\) (here also \(\Vert \tilde{x}_2\Vert =1\)). The corresponding energy \(\tilde{\mathcal{E}}_0^d\) coincides with the one derived for \(k<0\). The plot of \(\tilde{\mathcal{E}}_0^d\) with \(k=1/2,1,5\) is shown in Fig. 3. As proved, for \(k>0\) there is only one critical point (a global minimum) of \(\tilde{\mathcal{E}}_0^d\), different from the cumulative chords \(\hat{s}_1^{cc}(k)=1/3\approx 0.333333\), \(\hat{s}_1^{cc}(k)=1/2\) and \(\hat{s}_1^{cc}(k)=5/6\approx 0.833333\), respectively. For \(k=0.5\) (see Fig. 3a)) the global minimum \(\hat{s}_1\approx 0.346272\) yields \(\tilde{\mathcal{E}}_0^d(\hat{s}_1)\approx 48.5065 <\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc})=48.6\). For \(k=1\) (see Fig. 3b)) the global minimum \(\hat{s}_1\approx 0.414214\) yields \(\tilde{\mathcal{E}}_0^d(\hat{s}_1)\approx 101.912 <\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc})\approx 109.714\). Note that here, as proved, \(\hat{s}_1=-1+\sqrt{2}\). Finally, for \(k=5\) (see Fig. 3c)) the global minimum \(\hat{s}_1\approx 0.556194\) yields \(\tilde{\mathcal{E}}_0^d(\hat{s}_1)\approx 961.081 <\tilde{\mathcal{E}}_0^d(\hat{s}_1^{cc})\approx 2704.7\).

Consider the unperturbed planar data \(\tilde{x}_2=(3/5,4/5)\), \(\tilde{x}_0=(-12/5,-16/5)\), \(\tilde{v}_2=(3,4)\) clearly satisfying (18) with \(k=-4\in (k_2\approx -26.1326,0)\). The graph of \(\tilde{\mathcal{E}}_0^{0}\) is shown in Fig. 1a). We perturb now the co-linearity of \(\tilde{x}_0,\tilde{x}_1=\mathbf {0}, \tilde{x}_2\) by taking \(\tilde{x}_0^{\delta _2}=k\tilde{x}_2+\delta _2=\tilde{x}_0+\delta _2\), for some \(\delta _2\in \mathbb {R}^2\) (and any fixed \(k<0\)). In this example the second interpolation point \(\tilde{x}_2\) remains fixed with \(\Vert \tilde{x}_2\Vert =1\). Similarly we violate the first condition in (18) by choosing a new velocity \(\tilde{v}_2^{\delta _1,\delta _2}\) and perturbation \(\delta _1\in \mathbb {R}^2\) such that \(\tilde{x}_2-\tilde{x}_0^{\delta _2}-\tilde{v}_2^{\delta _1,\delta _2}=\delta _1\) holds (i.e. \(\tilde{v}_2^{\delta _1,\delta _2}=\tilde{v}_2-(\delta _1+\delta _2)\)). For \(k=-4\) and \(\delta _1=(-1/5,1)\), \(\delta _2=(1,-2/5)\) (a small perturbation with \((\Vert \delta _1\Vert ,\Vert \delta _2\Vert ) =(\sqrt{26/25},\sqrt{29/25})\)) the data \(\tilde{x}_2\), \(\tilde{x}_0^{\delta _2}=(-1{2\over 5},-3{3\over 5})\) and \(\tilde{v}_2^{\delta _1,\delta _2}=(11/5,17/5)\) satisfy (22). Similarly, for \(k=-4\) and \(\delta _1=(-4,1)\), \(\delta _2=(7,-2)\) (a big perturbation with \((\Vert \delta _1\Vert ,\Vert \delta _2\Vert )=(\sqrt{17},\sqrt{53})\)) the data \(\tilde{x}_2\), \(\tilde{x}_0^{\delta _2}=(4{3\over 5},-5{1\over 5})\) and \(\tilde{v}_2^{\delta _1,\delta _2}=(0,5)\) satisfy (22). Lastly, for \(k=-4\) and \(\delta _1=(-12,1)\), \(\delta _2=(-11,-8)\) (a large perturbation with \((\Vert \delta _1\Vert ,\Vert \delta _2\Vert )=(\sqrt{145},\sqrt{185})\)) the data \(\tilde{x}_2\), \(\tilde{x}_0^{\delta _2}=(-13{2\over 5},-11{1\over 5})\) and \(\tilde{v}_2^{\delta _1,\delta _2}=(26,11)\) satisfy (22). Comparing the graph of \(\tilde{\mathcal{E}}_0^{d}\) from Fig. 1a) with the graphs of \(\tilde{\mathcal{E}}_0^{\delta }\) from Fig. 4 shows that the unimodality of \(\tilde{\mathcal{E}}_0^{d}\) is preserved for substantial perturbations \(\delta \ne {\mathbf {0}}\). This trend repeats for other \(k\in (k_2,0)\cup (0,\infty )\), which indicates that in practice the perturbation \(\delta =(\delta _1,\delta _2)\) from Proposition 1 can be taken reasonably large.    \(\square \)
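The perturbed planar case can be reproduced numerically from the coefficient form of \(\tilde{\mathcal{E}}_0^{\delta }\) given above. A minimal sketch (illustrative; the grid resolution and variable names are ours) for the small perturbation \(\delta _1=(-1/5,1)\), \(\delta _2=(1,-2/5)\) with \(k=-4\):

```python
import numpy as np

k  = -4.0
x2 = np.array([3/5, 4/5])              # fixed second point, ||x2|| = 1
d1 = np.array([-1/5, 1.0])             # the "small" perturbation of the example
d2 = np.array([1.0, -2/5])

X, p21, p22 = x2 @ x2, x2 @ d1, x2 @ d2
p12, n1, n2 = d1 @ d2, d1 @ d1, d2 @ d2
a = [n2 + k**2*X + 2*k*p22,            # a_0^{0,delta}, ..., a_4^{0,delta}
     p12 - 2*n2 + 2*k*X - 2*k**2*X + k*p21 + 2*p22 - 4*k*p22,
     -p12 + n1 + n2 + X - 2*k*X + k**2*X + p21 - k*p21 - 2*p22 + 2*k*p22,
     -p12 - 2*n1 - k*p21,
     p12 + n1 - p21 + k*p21]

def E(s):
    # E_0^delta(s) = -12 M_0^delta(s) / ((s-1)^3 s^2 (s+3))
    M = sum(c * s**i for i, c in enumerate(a))
    return -12.0 * M / ((s - 1.0)**3 * s**2 * (s + 3.0))

s = np.linspace(1e-4, 1.0 - 1e-4, 400000)
v = E(s)
s_min = s[np.argmin(v)]                 # global minimum of E_0^delta
d0 = np.linalg.norm(k * x2 + d2)        # ||x_0^{delta_2} - x_1||
s_cc = d0 / (d0 + np.linalg.norm(x2))   # cumulative chord
print(round(s_min, 4), round(s_cc, 4))
```

This recovers the global minimum \(\hat{s}_1(k)\approx 0.76615\ne \hat{s}_1^{cc}(k)\approx 0.79435\) of Fig. 4a), and confirms that the perturbed energy remains unimodal over \((0,1)\).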

Fig. 4.

The graph of \(\tilde{\mathcal{E}}_0^{\delta }\) for \(k=-4\), \(\tilde{x}_0^{\delta _2}, \tilde{x}_2, \tilde{v}_2^{\delta _1,\delta _2}\in \mathbb {R}^2\) and for (a) \(\delta _1=(-1/5,1)\) and \(\delta _2=(1,-2/5)\) yields a global minimum at \(\hat{s}_1(k)\approx 0.76615\ne \hat{s}_1^{cc}(k)\approx 0.79435\), (b) \(\delta _1=(-4,1)\) and \(\delta _2=(7,-2)\) yields a global minimum at \(\hat{s}_1(k)\approx 0.755816\ne \hat{s}_1^{cc}(k)\approx 0.874097\), (c) \(\delta _1=(-12,1)\) and \(\delta _2=(-11,-8)\) yields a global minimum at \(\hat{s}_1(k)\approx 0.654924\ne \hat{s}_1^{cc}(k)\approx 0.945841\), and two other critical points, i.e. a local minimum at \(\hat{s}_0^1(k)\approx 0.944104\) and a local maximum at \(\hat{s}_0^2(k)\approx 0.922\).

6 Conclusions

The optimization task (1) is reformulated into (5) (and (16)) to minimize a highly non-linear multivariate function \(\mathcal{J}_0\) depending on the knots \({\mathcal T}_{int}\). One of the numerical schemes to handle the latter is the Leap-Frog algorithm. The generic case of this algorithm is studied in [5]. Here, we complement the latter by analyzing the non-generic case of Leap-Frog and formulate sufficient conditions preserving the unimodality of (16). In doing so, first the special case of data (18) is addressed. Subsequently its perturbed analogue (22) is covered. Example 1 shows that the unimodality for (18) (if it occurs) is in practice preserved under large perturbations (22). The performance of Leap-Frog compared with Newton's and secant methods is reported in [2, 3, 7]. More applications of Leap-Frog are discussed in [10,11,12]. For related work on fitting (sparse or dense) reduced data \(\mathcal{M}_n\) see e.g. [4, 6, 8].