1 Introduction

Tyre models are essential for evaluating the behaviour of one of the most influential components of a car: they calculate the forces generated in the tyre–ground contact. Probably the most accurate model, and the one most widely used by the community of automotive engineers, is the so-called Pacejka magic formula tyre model [7, 8] and [9] (see Sect. 2.1).

Due to the nonlinear behaviour of the tyre, the optimization required to fit the parameters of such models to test data is not a simple problem: the contact forces depend on slip, slip angle, normal load and camber angle, so the mathematical problem is a nonlinear multivariate optimization. The complexity of the mathematical formulation of the model affects both the ease of computing the model and the convergence properties of the nonlinear optimization process. The Pacejka magic formula uses complex nested inverse tangent functions.

The authors of this paper have been looking for a simpler expression, quicker and easier to evaluate both during the optimization and during the direct computation of the model, and therefore more suitable for real-time applications.

The new polynomial model presented and validated in this article is obtained from the magic formula expression using approximation theory, expanding the magic formula in a series of Jacobi orthogonal polynomials. The following section summarizes how that expansion was obtained.

This article validates the new model with real test data and analyses the convergence properties of the model during the optimization process used to calculate the values of its parameters. A multivariate model including the influence of camber angle and normal load is also proposed.

This work is part of a more general line of research whose goal is to obtain fast-computing solutions of the nonlinear vehicle equations by expanding them in orthogonal polynomial series (Chebyshev and Jacobi polynomials). The target application is saving computing time in pre-collision situations for active safety devices (see the two PhD theses of the authors [13] and the papers of Amirouche [4] and Ferrara [5, 6]).

2 Theoretical background

2.1 The magic formula tyre model

The well-known tyre model proposed by Bakker, Nyborg and Pacejka [7, 8] and [9] is a semi-empirical tyre model based on the “magic” formula:

$$\begin{aligned} Y \!=\! D\cdot \sin [C\cdot \arctan (BX\!-\!E\cdot [BX\!-\!\arctan (BX)])] \end{aligned}$$

This model is widely used and accepted by the community of automotive engineers and is also considered the most accurate. For that reason, we use it as the reference in this paper.

The shape of the curve is controlled by four parameters: \(B, C, D\) and \(E\). The equation can calculate the following:

  • Lateral forces in a tyre, \(F_y\), as a function of the slip angle of the tyre, \(\alpha \) (in degrees)

  • Braking force, \(F_x\), as a function of longitudinal slip \(K\) (%).

  • Self-aligning torque, \(M_z\), as a function of the slip angle \(\alpha \).

Figure 3 shows the shape of this magic formula model in the case of a longitudinal force.

\(B, C, D\) and \(E\) are constants that describe the inclination of the curve at the origin (\(BCD\)), the peak value (\(D\)), the curvature (\(E\)) and the basic form (\(C\)) for each case (lateral, braking or self-aligning torque). In addition, the curve can have vertical (\(Sv\)) or horizontal (\(Sh\)) shifts at the origin. The full expression is as follows:

$$\begin{aligned} Y&= D\cdot \sin [C\cdot \arctan (B(X+Sh)- E\cdot [B(X+Sh)\\&-\arctan (B(X+Sh))])] + Sv \end{aligned}$$

Coefficients \(B, D\) and \(E\) are functions of the vertical load in the tyre, \(F_z\):

$$\begin{aligned}&d=a_1 \cdot F_z^2 +a_2 \cdot F_z; \quad B=BCD/(C\cdot d) ; \\&E= a_6 \cdot F_z^2 +a_7 \cdot F_z +a_8 ;\\&BCD_1 =\frac{a_3 \cdot F_z^2 +a_4 \cdot F_z }{e^{a_5 \cdot F_z }}; \\&BCD_2 = a_3 \cdot \sin (a_4 \cdot \arctan (a_5 \cdot F_z)); \end{aligned}$$

\(BCD_{1}\) is valid for the longitudinal force and the self-aligning torque with \(C=1.65\) and \(C=2.4\), respectively.

\(BCD_{2}\) is valid for the lateral force with \(C=1.3\).

The camber angle \(\gamma \) of the wheel modifies the shifts \(S_h\) and \(S_v\) and the stiffness \(BCD\):

$$\begin{aligned}&\varvec{\Delta } S_h =a_9 \cdot \gamma ; \quad \varvec{\Delta } S_v =(a_{10} \cdot F_z^2 +a_{11} \cdot F_z )\cdot \gamma ;\\&\varvec{\Delta } B=-a_{12} \cdot \left| \gamma \right| \cdot B; \quad E_1 =\frac{E_0 }{1-a_{13} \cdot \left| \gamma \right| } \end{aligned}$$

\(E_{1}\) is the \(E\) value modified by the camber angle in the self-aligning torque calculation.

In the next two sections, we explain how to obtain a polynomial approximation to this magic formula model.

2.2 Approximation of a function in Chebyshev series

The Chebyshev polynomials of the first kind, see [10], are defined by \(T_n (x)=\cos [n \arccos (x)]\) and are orthogonal with respect to the weight function \(w(x)\!=\!\left( {1\!-\!x^{2}} \right) ^{-1/2}\) on the interval \([-1,1]\).

To work on a different interval \([a,b]\), shifted polynomials obtained with the following change of variable must be used:

$$\begin{aligned} t=1\big /2[(b-a)x+a+b]. \end{aligned}$$

Their general expression [11] is the following:

$$\begin{aligned}&T_n (x)=\frac{n}{2}\sum _{m=0}^{\left\lfloor {n/2} \right\rfloor } {(-1)^{m}\frac{(n-m-1)!}{m!(n-2m)!}(2x)^{n-2m}} ; \\&n=1,2,3\ldots ; T_0 (x)=1 \end{aligned}$$

where \(\left\lfloor {n/2} \right\rfloor \) is the largest integer \(\le n/2\). They satisfy the following recurrence relation:

$$\begin{aligned} T_{n+1} (x)=2xT_n (x)-T_{n-1} (x) ; \quad n=1,2,\ldots \end{aligned}$$

Chebyshev polynomials can be computed and manipulated using the MAPLE Orthopoly library. The expansion of a function in Chebyshev series (ACh) has the following form:

$$\begin{aligned} f(x)=\sum _{n=0}^\infty {}^{\prime }{a_n T_n (x)}, \end{aligned}$$

The prime on the summation sign indicates that the first term must be divided by 2.

This expansion usually converges faster than the power series, and the coefficients take the values:

$$\begin{aligned} a_n =\frac{1}{r_n }\int _{-1}^1 {w(x)\cdot f(x) T_n (x) dx} \end{aligned}$$

where \(w(x)\!=\!\left( {1\!-\!x^{2}} \right) ^{-1/2}\) is the weight function. If we truncate the series at degree \(N\), we obtain an approximation to the function that becomes more accurate as \(N\) grows. Due to the properties of Chebyshev polynomials, truncating at degree \(N-1\) gives the best polynomial approximation of degree \(N-1\) to the degree-\(N\) expansion. \(r_{n}\) is the norm of the polynomials (\(\pi /2\) for Chebyshev polynomials).

The coefficients \(a_{n}\) can be evaluated by direct integration for some functions but, in general, this is not possible and the integral must be approximated by a quadrature formula. This research has been implemented in MAPLE, whose quadrature algorithms first analyse the singularities and then use Clenshaw–Curtis quadrature [12, 13]; if the result is not satisfactory, adaptive Newton–Cotes formulae are used. All of this is carried out by the chebpade function of the MAPLE numapprox library for the approximation of functions.
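As a minimal illustration of this coefficient computation (a sketch only, not the MAPLE chebpade routine), the integral defining \(a_n\) can be approximated with Gauss–Chebyshev quadrature, whose nodes absorb the weight \((1-x^{2})^{-1/2}\) exactly; the function expanded here (the exponential) is just a placeholder example:

```python
import numpy as np

def chebyshev_coeffs(f, N, degree):
    """Approximate a_n = (2/pi) * int_{-1}^{1} w(x) f(x) T_n(x) dx using
    N-point Gauss-Chebyshev quadrature (the weight (1-x^2)^(-1/2) is built in)."""
    k = np.arange(1, N + 1)
    x = np.cos((2 * k - 1) * np.pi / (2 * N))      # Gauss-Chebyshev nodes
    fx = f(x)
    coeffs = []
    for n in range(degree + 1):
        Tn = np.cos(n * np.arccos(x))              # T_n(x) = cos(n arccos x)
        coeffs.append((2.0 / N) * np.dot(fx, Tn))  # quadrature weight pi/N times 2/pi
    return np.array(coeffs)

def chebyshev_eval(coeffs, x):
    """Evaluate sum' a_n T_n(x); the prime means the first term is halved."""
    total = 0.5 * coeffs[0] * np.ones_like(x)
    for n in range(1, len(coeffs)):
        total += coeffs[n] * np.cos(n * np.arccos(x))
    return total

# Example: a degree-8 truncation of exp(x) on [-1, 1]
c = chebyshev_coeffs(np.exp, N=64, degree=8)
x = np.linspace(-1, 1, 5)
print(np.max(np.abs(chebyshev_eval(c, x) - np.exp(x))))
```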

Chebyshev–Padé approximations are good, but they do not have the minimum maximum error (the so-called minimax property). To find the latter, the Remez algorithm [14] is used, which fine-tunes the result by numerical iterations and converges to an improved minimax approximation.

The Remez algorithm produces optimal approximations. It allows the calculation of the minimax approximation of any given function \(f(t)\) weighted with any weight term \(w(t)\); if \(w(t)=1/\vert f(t)\vert \) is used, the minimum relative error is obtained. These methods are described in any good book on approximation theory [15].

In MAPLE, the Remez algorithm is implemented by the minimax function included in the Numapprox library of approximation of functions.

Next, we introduce Jacobi polynomials because they add flexibility to the approximation.

2.3 Expansion in series of Jacobi polynomials

Within the families of classical orthogonal polynomials generated from the Sturm–Liouville differential equation, from which the Chebyshev polynomials also derive, we now consider the Jacobi polynomials, see [16]. Jacobi polynomials can also be computed and manipulated with the MAPLE Orthopoly library. The expansion of a function in a series of Jacobi polynomials uses the Jacobi weight function; the integral must be programmed explicitly, since MAPLE does not provide a library for expansions in Jacobi series.

$$\begin{aligned}&f(x)\approx \sum _{k=0}^n {a_k \cdot J_k (x)} ; \\&a_n =\frac{1}{r_n }\int _{-1}^1 {w(x)\cdot f(x) J_n (x) dx} \end{aligned}$$

The Jacobi weight function in this type of orthogonal polynomials is the following:

$$\begin{aligned} w(x)=(1-x)^{\delta }\cdot (1+x)^{\gamma } \end{aligned}$$

This function is controlled by two parameters \(\delta \) and \(\gamma \) that allow choosing the area of best approximation within the orthogonality interval. In practice, this is very useful, since it allows us to reduce the error in any region of the longitudinal force, lateral force or self-aligning torque curves, depending on the application: for instance, a smaller error for slip values close to zero, near the maximum force, or at the full-slip point (100 %), (see Fig. 3).

The norm \(r_{n}\) of the Jacobi polynomials is not constant; it is a function of \(\delta \), \(\gamma \) and the degree \(n\) of the polynomial.

$$\begin{aligned} r_n =\frac{2^{\delta +\gamma +1}\Gamma (n+\delta +1) \cdot \Gamma (n+\gamma +1)}{n! \cdot (2n+\delta +\gamma +1) \cdot \Gamma (n+\delta +\gamma +1)} \end{aligned}$$

The recurrence relation seen for the Chebyshev polynomials now takes a more general expression in the case of Jacobi polynomials:

$$\begin{aligned}&J_{n+1}^{(\delta ,\gamma )} (x)=(a_n \cdot x +b_n )\cdot J_n^{(\delta ,\gamma )} (x)-c_n \cdot J_{n-1}^{(\delta ,\gamma )} (x) ;\\&\quad n=1,2,\ldots \end{aligned}$$

where the recurrence coefficients are now:

$$\begin{aligned}&a_n =\frac{(2n+1+\delta +\gamma )(2n+2+\delta +\gamma )}{2(n+1)(n+1+\delta +\gamma )}\\&b_n =\frac{(\delta ^{2}-\gamma ^{2})(2n+1+\delta +\gamma )}{2(n+1)(2n+\delta +\gamma )(n+1+\delta +\gamma )}\\&c_n =\frac{(n+\delta )(n+\gamma )(2n+2+\delta +\gamma )}{(n+1)(n+1+\delta +\gamma )(2n+\delta +\gamma )} \end{aligned}$$
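A short sketch of this recurrence, useful for checking the coefficients above (the optional cross-check against SciPy's eval_jacobi assumes SciPy is available):

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_eval(n, delta, gamma, x):
    """Evaluate the Jacobi polynomial J_n^(delta,gamma)(x) with the three-term
    recurrence J_{n+1} = (a_n x + b_n) J_n - c_n J_{n-1}."""
    x = np.asarray(x, dtype=float)
    if n == 0:
        return np.ones_like(x)
    J_prev = np.ones_like(x)                                          # J_0
    J_curr = 0.5 * (delta - gamma) + 0.5 * (delta + gamma + 2.0) * x  # J_1
    for k in range(1, n):
        s = 2 * k + delta + gamma
        a_k = (s + 1) * (s + 2) / (2 * (k + 1) * (k + 1 + delta + gamma))
        b_k = (delta**2 - gamma**2) * (s + 1) / (2 * (k + 1) * s * (k + 1 + delta + gamma))
        c_k = (k + delta) * (k + gamma) * (s + 2) / ((k + 1) * (k + 1 + delta + gamma) * s)
        J_prev, J_curr = J_curr, (a_k * x + b_k) * J_curr - c_k * J_prev
    return J_curr

# Cross-check against SciPy (same weight convention (1-x)^delta (1+x)^gamma)
x = np.linspace(-1, 1, 5)
print(np.max(np.abs(jacobi_eval(4, 0.5, 1.5, x) - eval_jacobi(4, 0.5, 1.5, x))))
```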

3 The new polynomial tyre model

3.1 General description

As a result of the expansion of the magic formula in series of Jacobi polynomials, the authors obtained a very simple mathematical expression to calculate longitudinal and lateral forces in a tyre [17]:

$$\begin{aligned} F=A_0 +A_1\cdot \frac{x}{x+b}+A_2\cdot \left( \frac{x}{x+b}\right) ^{2}+ A_3 \cdot \left( \frac{x}{x+b}\right) ^{3} ; \end{aligned}$$
(1)

It is a simple degree \(N=3\) polynomial in the rational function \(x/(x+b)\), where:

  • \(F\): lateral (\(F_y\)) or longitudinal (\(F_x\)) force; the expression is valid for both. For the self-aligning torque, a degree-four polynomial should be used to obtain good accuracy.

  • \(x\): longitudinal slip (\(s\)) or slip angle (\(\alpha \)), depending on which force we are considering.

  • \(A_i\) and \(b\) are the basic parameters of the model. Usual values of \(b\) are between 3 and 8; values around 5 are very common.

This model shows excellent agreement with the original magic formula (the maximum difference is lower than 1 % with \(N=3\)), both for \(F_x\) and \(F_y\); the self-aligning torque requires a degree-four polynomial. The model has excellent analytical properties: the position of the extrema, the asymptotes, and the analytic derivatives and integrals of the expression can be obtained easily (the last is not possible with the original magic formula). Finally, its main advantage is the ease of processing (tests showed processing times 20 times faster than the magic formula tyre model); the nested inverse tangent functions of the magic formula are clearly inefficient in terms of computation.

The term \(v=\frac{x}{x+b}\) has to be computed only once and then inserted in the polynomial; for more efficiency, the Horner form of the polynomial can be used:

$$\begin{aligned} F&= A_0 +A_1\cdot v +A_2\cdot v^{2} +A_3\cdot v^{3}\\&= A_0 +(A_1 +(A_2 + A_3\cdot v)\cdot v)\cdot v \end{aligned}$$
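A one-line implementation of this evaluation, shown here as a sketch in Python with the longitudinal-force coefficients that will be fitted in Sect. 7:

```python
def tyre_force(x, A0, A1, A2, A3, b):
    """Polynomial tyre model F = A0 + A1*v + A2*v^2 + A3*v^3, v = x/(x+b),
    evaluated in Horner form so that v is computed only once."""
    v = x / (x + b)
    return A0 + (A1 + (A2 + A3 * v) * v) * v

# Coefficients of the longitudinal-force fit reported in Sect. 7 (slip in %)
print(tyre_force(15.0, -6.33, 2199.78, 28102.83, -26462.59, 5.391))
```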

The work [17] was based on previous papers of the authors [18, 19].

In [17], we published the theoretical polynomial formula and compared it with the mathematical expression of the magic formula, but without any validation against real test data.

In the present paper, we tackle the problem of nonlinear optimization, that is, how to obtain the parameters of our model from test data, and we analyse the convergence of our model, comparing it with the speed of convergence of the MF tyre model (this had not been analysed in [17]).

The approximating function proposed by the authors (1) has the typical shape shown in Fig. 1.

Fig. 1 Curve of the proposed polynomial model. Lateral or longitudinal force versus slip angle or slip

The horizontal axis represents the longitudinal slip or the slip angle, and the vertical axis shows the longitudinal force \(F_x\) or the lateral force \(F_y\). The model is valid for both, obviously with different values of the parameters.

The different curve branches are shown in Fig. 1. Obviously, and regarding the tyre model, only the branch from the minimum point to the right is used.

In this useful region, the curve shows two local extrema in the interval 0–100: a typical maximum around \(x=15\) (for the longitudinal force) and a minimum close to the origin. Depending on the values of the coefficients, this minimum point can lie in any of the four quadrants. As the model is a polynomial in a rational function, the curve changes very quickly near the minimum; therefore, we must be very careful in the process of fitting to test data in order to keep the curve to the right of the minimum. We will see how to achieve this in Sect. 5.

Depending on the coefficients, the position of the two inflection points allows a very flexible adaptation to the curvature, not only on the ascending branch to the right of the minimum but also in the nearly horizontal region to the right of the maximum.

The use of symmetry allows symmetric or asymmetric branches, describing equal or different behaviours in traction and braking, or asymmetric lateral behaviour to the right and left.

Both the vertical and horizontal shifts of the magic formula can be applied in a natural form (they are already integrated in the equation itself), and in a more flexible way: when working with two equations, one for each side of the symmetry, an inflection point can be kept in the rising section of each branch if the tyre's behaviour requires it.

Let us see now the mathematical analysis of the curve.

3.2 Function derivatives

$$\begin{aligned} F&= A_0 +A_1\cdot u+A_2\cdot u^{2}+A_3\cdot u^{3} \end{aligned}$$

where

$$\begin{aligned}&u=\frac{x}{x+b}; \quad u^{\prime }=\frac{b}{(x+b)^{2}}; \quad u^{\prime \prime }=\frac{-2\cdot b}{(x+b)^{3}}\\&F^{\prime }=(A_1 +2\cdot A_2 \cdot u+3\cdot A_3 \cdot u^{2})\cdot u^{\prime }\\&F^{\prime \prime }=(2\cdot A_2 \cdot u^{\prime }+6\cdot A_3 \cdot u\cdot u^{\prime })\cdot u^{\prime }\\&\quad \quad \quad +(A_1 +2\cdot A_2 \cdot u+3\cdot A_3 \cdot u^{2})\cdot u^{\prime \prime }\\&F_0 =A_0 ; \quad F^{\prime }_0 =\frac{A_1 }{b}; \quad F^{\prime \prime }_0 =\frac{2}{b^{2}}(A_2 -A_1 );\\&u^{\prime }_0 =\frac{1}{b}; \quad u^{\prime \prime }_0 =-\frac{2}{b^{2}} \end{aligned}$$

3.3 Maximum and minimum values

The position of the extrema as a function of the coefficients is easily calculated as follows:

$$\begin{aligned}&u_{\max } =\frac{-A_2 \pm \sqrt{A^{2}_2 -3A_1 A_3 }}{3A_3 }; \quad x_{\max } =\frac{b\cdot u_{\max } }{(1-u_{\max } )}\\&F_{\max } =A_0 +A_1\cdot u_{\max } +A_2\cdot u_{\max }^2 +A_3\cdot u_{\max }^3 \end{aligned}$$

The positive value of the root corresponds to the local minimum close to \(x=0\) and the negative one to the maximum close to \(x=15\).
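These closed-form expressions translate directly into code; a minimal sketch (again using the coefficients fitted in Sect. 7 purely as an example):

```python
import math

def model_extrema(A0, A1, A2, A3, b):
    """Closed-form extrema of F(u) = A0 + A1*u + A2*u^2 + A3*u^3, u = x/(x+b).
    The + root of F'(u)=0 gives the local minimum near x=0, the - root the maximum."""
    disc = math.sqrt(A2**2 - 3.0 * A1 * A3)
    results = {}
    for label, sign in (("minimum", +1.0), ("maximum", -1.0)):
        u = (-A2 + sign * disc) / (3.0 * A3)
        x = b * u / (1.0 - u)
        F = A0 + A1 * u + A2 * u**2 + A3 * u**3
        results[label] = (x, F)
    return results

# With the longitudinal-force coefficients fitted in Sect. 7
print(model_extrema(-6.33, 2199.78, 28102.83, -26462.59, 5.391))
```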

3.4 Inflection points

The position of the inflection points as a function of the polynomial coefficients is the following:

$$\begin{aligned}&F^{\prime \prime }=0=(2\cdot A_2 +6\cdot A_3\cdot u)\cdot u^{\prime 2}\\&\quad \quad \quad +(A_1 +2\cdot A_2\cdot u+3\cdot A_3 \cdot u^{2})\cdot u^{\prime \prime };\\&(2\cdot A_2 +6\cdot A_3 \cdot u)\cdot \frac{b^{2}}{(x+b)^{4}}\\&\quad =(A_1 +2\cdot A_2 \cdot u+3\cdot A_3 \cdot u^{2})\cdot \frac{2\cdot b}{(x+b)^{3}}\\&R=-(A_1 \!+\!2A_2 \!+\!3A_3 ); \quad S\!=\!b\cdot (3A_3 \!-\!A_2 \!-\!2A_1 );\\&T=b^{2}(A_2 -A_1 ); \quad R\cdot x^{2}+S\cdot x+T=0\\&x_{\inf } =\frac{-S\pm \sqrt{S^{2}-4\cdot R\cdot T}}{2\cdot R} \end{aligned}$$

The negative root corresponds to an inflection point placed in the ascending section on the right of the minimum, and the positive one to the point on the right of the maximum. \(R, S\) and \(T\) are intermediate auxiliary variables used in order to simplify the expressions, but without any conceptual interest.

3.5 Asymptotes

The curve has a vertical asymptote at \(x=-b\) and a horizontal asymptote, to the right of the origin, at \(F=A_{0}+A_{1}+A_{2}+A_{3}\).

3.6 Symmetries and shifts

The symmetric curve in the second quadrant, which will be called \(F2\), is obtained by simply changing the sign of parameter b, making \(u=x/(x-b)\). The symmetric curve in the third quadrant is \(-F2\), and the symmetric function in the fourth quadrant is \(-F\). If the behaviour of the tyre is symmetric, the same equation (with the same coefficients) can be used; if it is asymmetric, coefficients can be changed.

The application of shifts Sx and Sy is also very easy:

$$\begin{aligned} F\textit{shifted}=F(x+Sx)+Sy \end{aligned}$$

4 Getting coefficients from tests

4.1 Introduction

In [17], this polynomial formulation was derived from the magic formula and the approximation theory, implemented in symbolic calculation programs, in particular MAPLE. In this paper, we validate the model with real test data, using nonlinear optimization methods in a multivariate domain, taking into account not only the slip or slip angle but also the camber angle and the normal load.

At this point, we review the different optimization methods available in the literature and explain their application both to the proposed polynomial model and to the magic formula tyre model.

The main nonlinear optimization methods used for fitting tyre models to test data can be classified as follows:

  • Newton’s methods

    • Newton’s method

    • Gauss–Newton method and the Marquardt–Levenberg variant

    • Quasi-Newton methods

  • SQP methods

  • Iterative methods from the simplex method (Nelder–Mead)

  • Genetic algorithms

We now describe how those methods work. All of them have been applied to the estimation of the parameters of the presented polynomial tyre model with good convergence results, and they have also been applied to the magic formula tyre model.

4.2 The Newton’s method

The basic idea of the Newton method for nonlinear least-squares optimization is the following [20].

Suppose we have a set of \(m\) test points \((x_{i},y_{i})\), where, in general, \(y_{i}\) is the longitudinal or lateral force and \(x_{i}\) the corresponding slip or slip angle, such as the \(m=55\) test points shown in Fig. 3 of Sect. 7.

The differences between the values predicted by our model and those measured in the test make up a residual vector \(r\) (of size 55 in the proposed example), where every residual has the form:

\(r_i =y_i -F_i =y_i -(A_0 +A_1 \cdot u_i +A_2 \cdot u^{2}_i +A_3 \cdot u^{3}_i )\), where \(u_{i}=x_{i}/(x_{i}+b)\) and \(i= 1,\ldots ,m\) (55 points in our example).

If \(\varvec{\beta }=(\beta _{1},{\ldots },\beta _{n})\) is the vector of model parameters (\(n=5\) in this case, with \(\beta _{1}=A_{0}, \beta _{2}=A_{1},{\ldots }, \beta _{5}=b\)), the sum of the quadratic deviations is a function of \(\varvec{\beta }\):

$$\begin{aligned} S(\varvec{\beta } )=\sum _{i=1}^m {r_i^2 (\varvec{\beta } )} \end{aligned}$$
(2)

The Newton’s method starts from the Taylor series expansion of the function, and for simplicity, we assume that the function depends only on a unique parameter \(\beta \) at every point \(i\):

$$\begin{aligned} F(x_i ,\beta _1 )&\approx F(x_i ,\beta _1^k )+F^{\prime }(x_i ,\beta _1^k )\varvec{\Delta } \beta \\&+\frac{1}{2}F^{\prime \prime }(x_i ,\beta _1^k )\cdot (\varvec{\Delta } \beta )^{2}+\cdots \end{aligned}$$

where \(\varvec{\Delta } \beta =(\beta _1 -\beta _1^k )\).

The Newton’s method establish that the function reaches its extrema when its derivative with respect to \(\Delta \beta =0\), that means:

$$\begin{aligned} F^{\prime }(x_i ,\beta _1^k )+F^{\prime \prime }(x_i ,\beta _1^k )\cdot \varvec{\Delta } \beta =0 \end{aligned}$$

Since, in our model, the vector \(\varvec{\beta }\) contains several parameters, the previous expression becomes:

$$\begin{aligned} \varvec{G}+\varvec{H}\cdot \varvec{\Delta \beta } =\mathbf{0} \end{aligned}$$

From the previous equation, we obtain the step of the parameters' vector in every iteration, the so-called Newton step,

$$\begin{aligned} \varvec{\Delta \beta } =-\varvec{H}^{-\mathbf{1}}\cdot \varvec{G} \end{aligned}$$
(3)

and calculate the value of \(\varvec{\beta }\) in the next iteration,

$$\begin{aligned} \varvec{\beta }^{(s+1)}=\varvec{\beta }^{s}+\varvec{\Delta \beta } \end{aligned}$$

\(G\) is the gradient vector of S(\(\varvec{\beta }\)), whose terms are as follows:

$$\begin{aligned} G_j =2\sum _{i=1}^m {r_i \frac{\partial r_i }{\partial \beta _j }} \end{aligned}$$

\(H\) is the Hessian matrix of \(S\), obtained by differentiating the terms of the gradient:

$$\begin{aligned} H_{jk} =2\sum _{i=1}^m {\left( {\frac{\partial r_i }{\partial \beta _j }\frac{\partial r_i }{\partial \beta _k }+r_i \frac{\partial ^{2}r_i }{\partial \beta _j \partial \beta _k }} \right) } \end{aligned}$$

This method and all the methods derived from it are iterative and need an initial value of the parameters' vector. The quality of the final result depends notably on the goodness of this initial value.

4.3 Gauss–Newton and Marquardt–Levenberg methods

The Gauss–Newton method approximates the Hessian matrix by neglecting the second term of the previous equation:

$$\begin{aligned} H_{jk} \approx 2\sum _{i=1}^m {J_{ij} \cdot J_{ik} } ; \quad J_{ij} =\frac{\partial r_i }{\partial \beta _j } \end{aligned}$$

\(J_{ij}\) is the Jacobian matrix, which contains the partial derivatives of the vector of residuals with respect to the parameters of the model. If we write \(G\) and \(H\) in matrix notation, we obtain:

$$\begin{aligned}&{\varvec{G}}=\mathbf{2}\cdot {\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} \cdot {\varvec{r}}; \quad \varvec{H}\approx \mathbf{2}{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} \cdot {\varvec{J}}_{r} ;\\&{\varvec{G}}+{\varvec{H}}\cdot \varvec{\varDelta \beta } ={\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} \cdot {\varvec{r}}+{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} \cdot {{\varvec{J}}}_{{\varvec{r}}} \cdot \varvec{\Delta \beta } =0;\\&{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} \cdot {{\varvec{J}}}_{{\varvec{r}}} \cdot \varvec{\Delta \beta } =-{{\varvec{J}}}_{{\varvec{r}}}^{{\varvec{T}}}\cdot {\varvec{r}} \end{aligned}$$

In every iteration, the new value of the parameters’ vector is the following:

$$\begin{aligned} \begin{aligned}&\varvec{\Delta \beta } =-({\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}}\cdot {\varvec{J}}_{{\varvec{r}}} )^{-\mathbf{1}}{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}}\cdot {\varvec{r}};\\&\varvec{\beta }^{(s+1)}=\varvec{\beta }^{(s)}-({\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} {\varvec{J}}_{{\varvec{r}}} )^{-\mathbf{1}}{\varvec{J}}_{r}^{T} \cdot {\varvec{r}} \end{aligned} \end{aligned}$$
(4)

In the most basic polynomial tyre model, which initially includes five parameters, the Jacobian is an (NPt \(\times \) 5) matrix, where NPt is the number of data points (see the data vector in the example of Sect. 7). For this tyre model, the rows of the Jacobian matrix have the following form:

$$\begin{aligned}&\frac{\partial r_i }{\partial A_0 }\!=\!-1; \quad \! \frac{\partial r_i }{\partial A_1 }\!=\!-\!u_i ; \,\,\! \frac{\partial r_i }{\partial A_2 }\!=\!-u_i^2 ; \quad \! \frac{\partial r_i }{\partial A_3 }\!=\!-u_i^3 ;\\&\frac{\partial r_i }{\partial b}\!=\!\frac{1}{(x_i +b)}\left[ {A_1 \cdot u_i +2\cdot A_2 \cdot u_i^2 +3\cdot A_3\cdot u_i^3 } \right] \end{aligned}$$
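As an illustrative sketch (not the authors' original code), the Gauss–Newton iteration (4) with exactly these Jacobian rows can be written as follows; the number of iterations and the initial point are up to the user:

```python
import numpy as np

def gauss_newton_poly(x, y, beta0, n_iter=20):
    """Gauss-Newton fit of F = A0 + A1*u + A2*u^2 + A3*u^3, u = x/(x+b),
    to test points (x_i, y_i). beta = [A0, A1, A2, A3, b]."""
    beta = np.asarray(beta0, dtype=float)
    for _ in range(n_iter):
        A0, A1, A2, A3, b = beta
        u = x / (x + b)
        F = A0 + (A1 + (A2 + A3 * u) * u) * u
        r = y - F                                   # residual vector
        J = np.empty((x.size, 5))
        J[:, 0] = -1.0                              # dr/dA0
        J[:, 1] = -u                                # dr/dA1
        J[:, 2] = -u**2                             # dr/dA2
        J[:, 3] = -u**3                             # dr/dA3
        J[:, 4] = (A1 * u + 2 * A2 * u**2 + 3 * A3 * u**3) / (x + b)  # dr/db
        delta = np.linalg.solve(J.T @ J, -J.T @ r)  # Newton step, Eq. (4)
        beta = beta + delta
    return beta
```

It could be called, for example, with the longitudinal-force data of Sect. 7 and a rough starting guess such as [0, 2000, 25000, -25000, 5] (an assumed value for illustration, not one taken from the paper).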

The Jacobian matrix terms in the magic formula are the following:

$$\begin{aligned}&\frac{\partial r_i }{\partial S_v }=-1; \quad \frac{\partial r_i }{\partial d}=-\sin (V_i );\\&\frac{\partial r_i }{\partial C}=[-d\cdot \cos (V_i )]\cdot \arctan (W_i )\\&\frac{\partial r_i }{\partial B}=\frac{\partial r_i }{\partial V_i }\cdot \frac{\partial V_i }{\partial W_i }\cdot \frac{\partial W_i }{\partial B}= \left[ {-d\cdot \cos \left( {V_i} \right) } \right] \cdot \left[ {\frac{C}{1+W_i^2 }} \right] \\&\qquad \cdot \left[ x_i +S_h -E\left( {x_i +S_h -\frac{x_i +S_h }{1+B^{2}\cdot (x_i +S_h )^{2}}} \right) \right] \\&\frac{\partial r_i }{\partial E}=\frac{\partial r_i }{\partial V_i }\cdot \frac{\partial V_i }{\partial W_i }\cdot \frac{\partial W_i }{\partial E}= \left[ {-d\cdot \cos \left( {V_i} \right) } \right] \cdot \left[ {\frac{C}{1+W_i^2 }} \right] \\&\qquad \cdot \left[ -B\left( {x_i +S_h } \right) +\arctan \left( {B\cdot \left( {x_i +S_h } \right) } \right) \right] ;\\&\frac{\partial r_i }{\partial S_h }=\frac{\partial r_i }{\partial V_i }\cdot \frac{\partial V_i }{\partial W_i }\cdot \frac{\partial W_i }{\partial S_h }= \left[ {-d\cdot \cos \left( {V_i} \right) } \right] \cdot \left[ {\frac{C}{1+W_i^2 }} \right] \\&\qquad \cdot \left[ B-E\left( {B-\frac{B}{1+B^{2}\cdot (x_i +S_h )^{2}}} \right) \right] ; \end{aligned}$$

where:

$$\begin{aligned} W_i&= B\left( {x_i +S_h } \right) -E\cdot \left[ B\left( {x_i +S_h } \right) \right. \\&\left. \!-\!\arctan \!\left( {B\left( {x_i \!+\!S_h } \right) } \right) \right] \hbox { and } V_i \!=\!C\cdot \arctan (W_i) \end{aligned}$$

If convergence problems appear, several modifications of the Gauss–Newton method can be used. The first and simplest consists of reducing the length of the step of the parameters' vector \(\Delta \beta \), multiplying it by a constant \(\alpha \) lower than 1:

$$\begin{aligned} \varvec{\beta }^{({s}+1)}=\varvec{\beta }^{({s})}-\upalpha \cdot ({\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} {{\varvec{J}}}_{{\varvec{ r}}} )^{-\mathbf{1}}{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} {{\varvec{r}}} \end{aligned}$$

In this manner, we can handle situations in which the step of the parameters' vector \(\varvec{\Delta \beta }\) points in the right direction (it reduces the sum of quadratic deviations) but is too long.

The second method is the so-called Marquardt–Levenberg method [21], in which the step \(\varvec{\Delta \beta }\) is modified by adding the term \(\lambda \cdot D\), where D is a positive diagonal matrix and \(\lambda \) is the so-called Marquardt parameter. It is also called a trust-region method. The addition of this term rotates the vector \(\varvec{\Delta \beta }\) towards the direction of steepest descent:

$$\begin{aligned} \varvec{\beta }^{({s}+1)}=\varvec{\beta }^{({s})}-({\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} {\varvec{J}}_{{\varvec{r}}} +\varvec{\lambda } \cdot {\varvec{D}})^{-1}{\varvec{J}}_{{\varvec{r}}}^{{\varvec{T}}} {\varvec{r}} \end{aligned}$$
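In practice, a readily available implementation of these damped/trust-region variants can also be used; the sketch below relies on SciPy's least_squares (its 'lm' option wraps a Levenberg–Marquardt implementation) applied to the five-parameter polynomial residuals, with a small subset of the Sect. 7 test data and an assumed initial point:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(beta, x, y):
    """Residuals r_i = y_i - F_i of the five-parameter polynomial tyre model."""
    A0, A1, A2, A3, b = beta
    u = x / (x + b)
    return y - (A0 + (A1 + (A2 + A3 * u) * u) * u)

# A few longitudinal-force test points from Sect. 7 (slip %, Fx in N)
x = np.array([0, 2, 4, 6, 8, 10, 15, 20, 40, 100], dtype=float)
y = np.array([276, 1742, 4146, 5244, 5693, 5987, 6253, 6226, 5633, 4796], dtype=float)
beta0 = [0.0, 2000.0, 25000.0, -25000.0, 5.0]   # assumed starting guess

res = least_squares(residuals, beta0, args=(x, y), method='lm')
print(res.x)   # fitted [A0, A1, A2, A3, b]
```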

In Sect. 7, we will see the convergence of the Gauss–Newton method for our tyre example, which is very fast. The convergence of our model is compared with the convergence of the magic formula tyre model.

4.4 Quasi-Newton methods

According to [22], in order to estimate the parameters of the magic formula tyre model, the research team of TNO (the Dutch applied research organization) used the so-called quasi-Newton method [23] and [24], implemented in the E04FDF subroutine of the NAG (Numerical Algorithms Group) library [25] (see http://www.nag.co.uk/). This method is similar to the Gauss–Newton method, but it is not specific to least-squares problems; its field of application is wider, since it can be used to optimize any function.

The family of quasi-Newton methods avoids the inversion of the Hessian matrix H in Eq. (3) by directly building the inverse of a pseudo-Hessian matrix B, which is obtained from successive approximations of the gradient G in a generalization of the secant method to the multivariate domain. In this way, these methods improve the computational efficiency of the whole calculation. Different iterative algorithms have been published over the years to estimate the pseudo-Hessian B: DFP (Davidon–Fletcher–Powell) [26, 27], BFGS (Broyden–Fletcher–Goldfarb–Shanno), see [20], SR1 (Symmetric Rank 1), see [28] and [20], and the class of Broyden methods, see [29, 30] and [20]. All of them use the Sherman–Morrison formula to invert the pseudo-Hessian matrix B, see [20].

4.5 The sequential quadratic programming (SQP) method

The SQP method [31] poses the general problem of nonlinear optimization of a given target function S(\(\varvec{\beta }\)) of a parameters' vector \(\varvec{\beta }\), now with a set of constraint equations \(\mathbf{G}(\varvec{\beta })\ge 0\), which can be equality or inequality functions of the parameters' vector:

$$\begin{aligned} \mathbf{G}(\varvec{\beta }) = (G_1(\varvec{\beta }),\ldots ,G_m(\varvec{\beta })) \end{aligned}$$

SQP is an iterative method, and it models the nonlinear problem for a given iteration by a quadratic programming (QP) sub-problem, solves that QP sub-problem and then uses the solution to find a new parameters’ vector \(\varvec{\beta }^{(\mathbf{s+1})}\).

To find the solution, SQP uses the Lagrangian function, which properly combines the objective function S(\(\varvec{\beta }\)) and the constraints G(\(\varvec{\beta }\)). The Lagrangian function of our problem is the following:

$$\begin{aligned} \mathbf{L}(\varvec{\beta }, \mathbf{u}) = \mathbf{S}(\varvec{\beta }) - \mathbf{u}^{\mathbf{T}}\mathbf{G}(\varvec{\beta }) \end{aligned}$$

where u is the vector of Lagrange multipliers of the nonlinear problem. SQP replaces the objective function S(\(\varvec{\beta }\)) by its local quadratic approximation, expanding it in a Taylor series, and the constraint functions G(\(\varvec{\beta }\)) by their local linear approximations. This construction is done in such a way that the sequence generated by the algorithm converges to a local minimum. Modern optimization textbooks have chapters devoted to SQP methods, see [31].

4.6 The Nelder–Mead method

This method was proposed in [32], see also [33, 34] and [35]; it minimizes a target function in a multidimensional space. The method uses the “simplex” concept: elements of \(N+1\) vertices in an \(N\)-dimensional space. In a one-dimensional space, the simplex is just a line segment; in a two-dimensional space, it is a triangle; in a three-dimensional space, a tetrahedron, and so on.

The algorithm generates a new test position by extrapolating the behaviour of the target function at the vertices of the simplex. One of these vertices is replaced with a new point, and the search progresses in this way. The simplest step is to replace the worst point with its reflection across the centroid of the N remaining points. If this new point is better than the current best point, we can try to extend the simplex outwards along this line; if it is not better than the previous one, we are probably in a valley and we should contract the simplex towards a better point. It is known that the method can converge to nonstationary points. This method is implemented in the fminsearch function of MATLAB.
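For completeness, a derivative-free counterpart of MATLAB's fminsearch is available in SciPy; a minimal sketch (re-using the data and starting point of the previous Levenberg–Marquardt sketch) is:

```python
import numpy as np
from scipy.optimize import minimize

def sum_sq(beta, x, y):
    """Objective S(beta): sum of squared residuals of the polynomial tyre model."""
    A0, A1, A2, A3, b = beta
    u = x / (x + b)
    r = y - (A0 + (A1 + (A2 + A3 * u) * u) * u)
    return float(np.dot(r, r))

# x, y and beta0 as in the Levenberg-Marquardt sketch of Sect. 4.3
x = np.array([0, 2, 4, 6, 8, 10, 15, 20, 40, 100], dtype=float)
y = np.array([276, 1742, 4146, 5244, 5693, 5987, 6253, 6226, 5633, 4796], dtype=float)
beta0 = [0.0, 2000.0, 25000.0, -25000.0, 5.0]

res = minimize(sum_sq, beta0, args=(x, y), method='Nelder-Mead')
print(res.x, res.fun)
```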

4.7 Genetic algorithms

Recently, the group of the Department of Mechanical Engineering of the University of Malaga (IMMA) has developed a new optimization method to calculate the coefficients of the magic formula tyre model, see [36] and [37]. They propose the use of genetic algorithms, which work with high accuracy and efficiency and avoid the need for initial values of the parameters. Genetic algorithm techniques were presented initially in [38] and [39]; the IMMA group was the first to apply them to tyre models.

An interesting hybrid approach can be found in [40] that combines genetic algorithms with classic gradient search methods applied to multiparametric nonlinear systems.

Several authors who have worked in nonlinear multivariate optimization in tyre models agree that when using Newton, Gauss–Newton, quasi-Newton and Nelder–Mead methods, the selection of an adequate initial point of the parameters’ vector is an important issue for the quality of the final solution, and minor variations in this initial point can produce different final results, see for example [22, 36, 37] and [20].

The reason for this is the nonlinear condition of the problem and the subsequent need for an iterative method with an initial point.

During this research work, we have programmed tyre model optimizations by hand (writing low-level code) using the Newton, Gauss–Newton and Marquardt–Levenberg methods, both for the polynomial and for the magic formula models, with and without constraints. We have also used the MAPLE and MATLAB libraries, which combine Nelder–Mead and quasi-Newton methods with the previous ones, and the results always coincide. We observed the aforementioned problem many times: for certain combinations of the parameters, a slight change in the initial point can yield different results.

It is very difficult to give a general rule for when the optimization will not converge. A first factor is how large the residuals are, that is, how far the initial-values curve is from the final curve. But this is not the only factor: the shape of the initial-values curve can also influence the possibility of convergence. In addition, the final result of the optimization can look good but actually be a local minimum; we could find a nearby combination of parameters with a lower sum of quadratic deviations if we started from different initial values. Most optimization methods find local minima. These problems grow when the number of parameters and variables increases, because the number of possible combinations is larger and closely spaced minima may appear. As usual, a good knowledge of the physical phenomenon (the tyre behaviour in this case) and previous experience in the optimization of similar tyres help a lot.

Therefore, this is not a well-solved problem or at least not in a fully automatic way.

From this point of view, the genetic algorithms method, whose approach is probabilistic, nondeterministic and very different from the rest, is very interesting because it does not need an initial point.

In addition to the previous methods, we can mention the work of [41] and [42], who estimate the values of the Pacejka magic formula tyre model using the so-called TS (two-stage) technique and compare it with different observation and parameter-estimation methods based on Kalman filters, using data obtained along the lifetime of the vehicle.

If we want to obtain a fast convergence in the optimization algorithms, the initial point should not be far from the optimum. The first step in a nonlinear optimization process is the search of a reasonably good initial point. We can use the previous results of a different tyre under similar load conditions to obtain this initial point.

5 Optimization with constraints to the model coefficients

Sometimes, it may be necessary to obtain the optimum curve under certain constraints: because we are more confident in some test points than in others, because we want to equate the value of the curve, or of its derivatives, on both sides of the origin, or because we want to give a fixed value to the derivative at the origin. If we have equality constraints, a direct and effective technique is to substitute the constraints into the original equation before starting the optimization process, as we show below, instead of adding explicit constraint equations and using SQP algorithms, which are perhaps more adequate for inequality constraints.

If no constraints are imposed, the optimization algorithms calculate the coefficients so that the sum of the quadratic deviations is minimal. As constraints are imposed, the quadratic deviation grows, but the curve complies with those constraints. As our model has five basic coefficients, in theory we could impose up to five constraints, although we must always keep in mind that the fewer constraints we impose, the better the adjustment to the test data and the smaller the sum of the quadratic deviations will be.

Let us see now some typical constraint examples. All of them have been tested both with our polynomial tyre model and with the magic formula tyre model. We have obtained a fast convergence and moderate variations of the sum of quadratic deviations.

R1. The curve passes exactly through point \((x_p, y_p)\).

$$\begin{aligned} y_{{p}}&= A_{0}+A_{1}\cdot u_{{p}}+ A_{2}\cdot u_{{p}}^{2}+A_{3}\cdot u_{{p}}^{3};\\&\hbox { being } u_{{p}}= x_{{p}}/(x_{{p}}+b) \end{aligned}$$

The resulting equation is the following:

$$\begin{aligned} r_{{i}}&= y_{{i}}\!-\! F_{{i}}= y_{{i}}- y_{{p}} \!+\! (A_{1}\cdot u_{{p}}\!+\!A_{2}\cdot u_{{p}}^{2}\!+\!A_{3}\cdot u_{{p}}^{3}) \\&- (A_{1}\cdot u_{{i}}+A_{2}\cdot u_{{i}}^{2}+ A_{3}\cdot u_{{i}}^{3}) \end{aligned}$$

The terms of the Jacobian matrix for our tyre model are modified as follows:

$$\begin{aligned} \frac{\partial r_i }{\partial A_1 }\!&= \!u_p \!-\!u_i ; \quad \! \frac{\partial r_i }{\partial A_2 }\!=\!u_p^2 \!-\!u_i^2 ; \quad \! \frac{\partial r_i }{\partial A_3 }\!=\!u_p^3 \!-\!u_i^3\\ \frac{\partial r_i }{\partial b}&= \frac{1}{(x_i +b)}\left[ {A_1\cdot u_i +2\cdot A_2\cdot u_i^2 +3\cdot A_3\cdot u_i^3 } \right] \\&-\frac{1}{(x_p \!+\!b)}\left[ {A_1\cdot u_p \!+\!2\cdot A_2\cdot u_p^2 \!+\!3\cdot A_3\cdot u_p^3 } \right] \end{aligned}$$

This constraint calculates the optimal value of the coefficients so that the curve passes through a given point, for example the end point (\(s=100\) %) or the first data point of the series at \(F_x=0\), which can be very reliable data from the test perspective: those two points can have better measurement reliability than the points between the maximum value and the full-slip point (\(s=100\) %).

It can also be interesting to make the curve pass exactly through the maximum test point. Generally, these three situations produce a small increase in the quadratic sum and adjust the curve accurately at those points.

Nevertheless, if the chosen point is the maximum one (\(x_\mathrm{max}, y_\mathrm{max}\)), this constraint does not guarantee that \(y_\mathrm{max}\) is the maximum value of the curve, as the curve could reach a higher value before or after that point.

The curve can be forced to pass exactly through up to five points, as it has five coefficients.

In particular, the curve can be forced to pass through \((0, f_y)\). Naturally, combined constraints can be imposed, for example passing through both \((x_p,y_p)\) and \((0,f_y)\), which is a combination of the two previous cases.
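A hedged sketch of how constraint R1 is embedded directly in the residual, as described above: \(A_0\) is eliminated, so the fit runs over \((A_1, A_2, A_3, b)\) only, and the point \((x_p, y_p)\) is whatever test point we trust most.

```python
import numpy as np

def residuals_through_point(beta, x, y, xp, yp):
    """Residuals of the model constrained to pass exactly through (xp, yp);
    A0 is eliminated via A0 = yp - (A1*up + A2*up**2 + A3*up**3)."""
    A1, A2, A3, b = beta
    poly = lambda v: A1 * v + A2 * v**2 + A3 * v**3
    up = xp / (xp + b)
    u = x / (x + b)
    return y - yp + poly(up) - poly(u)

# This residual can be handed to any of the least-squares routines above,
# e.g. scipy.optimize.least_squares(residuals_through_point, beta0,
#                                   args=(x, y, 100.0, 4796.0))
```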

R2. The curve takes \(F=y_\mathrm{max}\) as its maximum value (even if it does not reach it exactly at \(x_\mathrm{max}\)).

$$\begin{aligned} u_{\max }&= \frac{-A_2 -\sqrt{A_2^2 -3A_1 A_3 }}{3A_3 };\\ F_{\max }&= y_{\max } =A_0 \!+\!A_1\cdot u_{\max } \!+\!A_2 \cdot u_{\max }^2 \!+\!A_3 \cdot u_{\max }^3 \end{aligned}$$

The resulting equation will be the following:

$$\begin{aligned} r_i&= y_i -F_i =y_i -y_{\max } \\&+(A_1 \cdot u_{\max } +A_2 \cdot u_{\max } ^{2}+A_3 \cdot u_{\max } ^{3})\\&-(A_1 \cdot u_i +A_2 \cdot u_i ^{2}+A_3 \cdot u_i ^{3}) \end{aligned}$$

Let:

$$\begin{aligned}&K=\sqrt{A_2^2 -3\cdot A_1\cdot A_3 }= -A_2 -3\cdot A_3\cdot u_{\max } ; \\&\frac{\partial r_i }{\partial u_{\max } }= r^{\prime }_{i,u_{\max } } \\&\quad = A_1 +2\cdot A_2 \cdot u_{\max }+3A_3 \cdot u_{\max }^2\\&\frac{\partial u_{\max } }{\partial A_1 }=u^{\prime }_{\max ,A_1 } =\frac{1}{2\cdot K}; \quad u^{\prime }_{\max ,A_2 } =\frac{-K-A_2 }{3\cdot K\cdot A_3 };\\&u^{\prime }_{\max ,A_3 } =\frac{A_1 }{2\cdot K\cdot A_3 }+\frac{A_2+K}{3\cdot A_3^2 }\\&\frac{\partial r_i }{\partial u_i }=r^{\prime }_{i,u_i } =-(A_1 +2\cdot A_2 \cdot u_i +3A_3\cdot u_i^2 ); \\&\frac{\partial u_i }{\partial b}=u^{\prime }_{i,b} =\frac{-x_i}{(x_i+b)^{2}}. \end{aligned}$$

The terms of the Jacobian matrix will be as follows:

$$\begin{aligned}&\frac{\partial r_i }{\partial A_1 }=u_{\max } -u_i +r^{\prime }_{i,u_{\max } } \cdot u^{\prime }_{\max ,A_1 } ; \\&\frac{\partial r_i }{\partial A_2 }=u_{\max }^2 -u_i^2 +r^{\prime }_{i,u_{\max } } \cdot u^{\prime }_{\max ,A_2 }\\&\frac{\partial r_i }{\partial A_3 }=u_{\max }^3 -u_i^3 +r^{\prime }_{i,u_{\max } } \cdot u^{\prime }_{\max ,A_3 } ; \\&\frac{\partial r_i }{\partial b}=r^{\prime }_{i,u_i }\cdot u^{\prime }_{i,b} \end{aligned}$$

R3. The curve reaches its maximum value at \(x_\mathrm{max}\) (even if it does not take the value \(y_\mathrm{max}\)).

The resulting equation will be the following: \(r_{{i}}= y_{{i}}- F_{{i}}= y_{{i} }-(A_{0}+A_{1}\cdot u_{{i}}+A_{2}\cdot u_{{i}}^{2}+A_{3}\cdot u_{{i}}^{3}); b= - (2\cdot A_{2}+A_{1}+3\cdot A_{3})/(A_{2}+A_{1}+K); u_{{ i}} = x_{{i}}/(x_{{i}}+b)\)

$$\begin{aligned}&\frac{\partial u_i }{\partial A_1 }=u^{\prime }_{i,A_1 }\\&\;\;=\frac{-2-3\cdot (2\cdot A_2 +A_1 +3\cdot A_3 )\cdot (-2\cdot k+3\cdot A_3 )}{2(A_{2+} A_1 +K)\cdot K}\\&\frac{\partial u_i }{\partial A_2 }=u^{\prime }_{i,A_2 }\\&\;\;=\frac{(-A_1 +3\cdot A_3 )\cdot K+(6\cdot A_3 +A_2 )\cdot A_1 +\cdot A_3 \cdot A_2 }{(A_2 +A_1 +K)\cdot K}\\&\frac{\partial u_i }{\partial A_3 }=u^{\prime }_{i,A_3 }\\&\;\;=-\frac{3}{A_2 +A_1 +K}-\frac{3\cdot (2\cdot A_2 +A_1 +3\cdot A_3 )\cdot A_1 }{(A_2 +A_1 +K)^{2}\cdot K} \end{aligned}$$

The terms of the Jacobian matrix will be as follows:

$$\begin{aligned}&\frac{\partial r_i }{\partial A_0 }=-1;\\&\frac{\partial r_i }{\partial A_1 }=-u_i -(A_1 +2\cdot A_2 \cdot u_i +3\cdot A_3 \cdot u_i^2 )\cdot u^{\prime }_{i,A_1 } ;\\&\frac{\partial r_i }{\partial A_2 }=-u_i^2 -(A_1 +2\cdot A_2\cdot u_i +3\cdot A_3\cdot u_i^2 )\cdot u^{\prime }_{i,A_2 } ;\\&\frac{\partial r_i }{\partial A_3 }=-u_i^3 -(A_1 +2\cdot A_2\cdot u_i +3\cdot A_3\cdot u_i^2 )\cdot u^{\prime }_{i,A_3 } \end{aligned}$$

R4. The inflection point is located at the origin.

In general, the inflection point is located at \(x_{\inf }\) (see Sect. 3.4). The condition for the inflection point to be at the origin is as follows:

$$\begin{aligned}&(F^{\prime \prime }_0 =0);\quad A_2 =A_1 ;\\&F=A_0 + A_1 \cdot u\cdot (1+u)+A_3\cdot u^{3} \end{aligned}$$

The terms of the Jacobian matrix this time will be as follows:

$$\begin{aligned}&\frac{\partial r_i }{\partial A_0 }=-1; \quad \frac{\partial r_i }{\partial A_1 }=-u_i\cdot (1+u_i ); \quad \frac{\partial r_i }{\partial A_3 }=-u_i^3 ; \\&\frac{\partial r_i }{\partial b}=\left[ {A_1 +2\cdot A_1\cdot u_i +3\cdot A_3\cdot u_i^2 } \right] \cdot \frac{x_i }{(x_i +b)^{2}} \end{aligned}$$

R5. The slope at a given point \(x_{{p}}\) takes a given value \(y'_{{p}}\)

$$\begin{aligned}&y'_{p}=(A_{1 }+ 2\cdot A_{2}\cdot u_{{p}} + 3\cdot A_{3}\cdot u_{{p}}^{2})\cdot u_{{p}}^{\prime };\\&u_{{p}}=x_{p}/(x_{{p}}+b); \quad u_{{p}}^{\prime }=b/(x_{{p}}+b)^{2};\\&A_{1} = (y^{\prime }_{{p}}/u^{\prime }_{{p}}) - 2\cdot A_{2}\cdot u_{{p}}- 3\cdot A_{3}\cdot u_{p}^{2}\\&r_{{i} }= y_{{i} }- F_{{i}}= y_{{i}}- (A_{0}+ [(y^{\prime }_{{p}}/u^{\prime }_{{p}}) - 2\cdot A_{2}\cdot u_{{p}}\\&\qquad -3\cdot A_{3}\cdot u_{{p}}^{2} ]\cdot u_{{i}}+ A_{2}\cdot u_{{i}}^{2}+ A_{3}\cdot u_{{i}}^{3})\\&\frac{\partial r_i }{\partial u^{\prime }_p }=\frac{y^{\prime }_p \cdot u_i }{u^{\prime 2}_p }; \quad \frac{\partial u^{\prime }_p }{\partial b}=\frac{x_p -b}{(x_p +b)^{3}};\\&\frac{\partial r_i }{\partial u_p }=(2\cdot A_2 +6\cdot A_3 \cdot u_p )\cdot u_i ; \quad \frac{\partial u_p }{\partial b}=\frac{-x_p }{(x_p +b)^{2}}\\&\frac{\partial r_i }{\partial u_i }=-\frac{y^{\prime }_p }{u^{\prime }_p }+2\cdot A_2 \cdot u_p +3\cdot A_3 \cdot u_p^2 \\&\quad \quad \quad -2\cdot A_2\cdot u_i -3\cdot A_3 \cdot u_i^2 ;\\&\frac{\partial u_i }{\partial b}=\frac{-x_i }{(x_i +b)^{2}} \end{aligned}$$

The terms of the Jacobian matrix this time are:

$$\begin{aligned}&\frac{\partial r_i }{\partial A_0 }=-1; \quad \frac{\partial r_i }{\partial A_2 }=2\cdot u_p \cdot u_i -u_i^2 ; \\&\frac{\partial r_i }{\partial A_3 }=3\cdot u_p^2 \cdot u_i -u_i^3 ; \\&\frac{\partial r_i }{\partial b}=\frac{\partial r_i }{\partial u^{\prime }_p }\cdot \frac{\partial u^{\prime }_p }{\partial b}+\frac{\partial r_i }{\partial u_p }\cdot \frac{\partial u_p }{\partial b}+\frac{\partial r_i }{\partial u_i }\cdot \frac{\partial u_i }{\partial b} \end{aligned}$$

Particular case: the slope at the origin takes a given value.

\(F'_{0}=A_{1}/b\); In this case, the equation is \(F=A_{0}+b\cdot F'_{0}\cdot u+A_{2}\cdot u^{2}+A_{3}\cdot u^{3}\). The terms of the Jacobian matrix are slightly modified:

$$\begin{aligned} \frac{\partial r_i }{\partial b}&= \frac{-F^{\prime }_0\cdot x_i }{(x_i +b)}+\frac{b\cdot F^{\prime }_0\cdot x_i }{(x_i +b)^{2}}+\frac{2\cdot A_2\cdot x_i^2 }{(x_i +b)^{3}}\\&\quad +\frac{3\cdot A_3\cdot x_i^3 }{(x_i +b)^{4}} \end{aligned}$$

The rest of the terms remain the same as in the unconstrained equation, except that \(A_{1}\) disappears.

We could use this constraint to force the slope at the origin to take the same BCD value as the magic formula tyre model.

A combination could be imposed, so that the inflection point is at the origin and \(F'_0\) also takes a given value.

As a final consideration related to the imposition of constraints, it must be stated that the optimization algorithms converge when the imposed constraints make sense. If we impose conditions that are far from the test data and result in very distorted curve shapes, no final solution will be achieved.

6 Stability of the polynomial at the origin

The main drawback of using polynomials, especially rational polynomials, is their tendency to oscillate, in other words, their possible instability. If the distribution of real points shows an inflection point on the rising branch, the least-squares-optimal polynomial can have its minimum in the first quadrant (see Fig. 1), between two test points, something definitely unwanted (see Fig. 2).

Fig. 2 Oscillation at the origin

To avoid this, we impose as a constraint a given value of the derivative at the origin, \(A_{1}=F'_{0}\cdot b\). \(F'_{0}\) can be estimated by means of a parabolic regression over the first 4 points, taking the derivative of this parabola at the origin. This slightly increases the sum of quadratic deviations, but it solves the problem in a simple manner, because it moves the minimum point to a different quadrant and stabilizes the curve in the first quadrant. The correct value of \(F'_0\) may also be calculated by imposing the same value of the function, and of its derivative, on both sides of the origin, if force data are available on both sides. Figure 8 shows the solution in the multivariate case too.
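A minimal sketch of this estimate: fit a parabola to the first four test points by least squares and take its derivative at the origin (the four points below are the first ones of the longitudinal-force test in Sect. 7).

```python
import numpy as np

def slope_at_origin(x, y):
    """Least-squares parabola y = c0 + c1*x + c2*x^2 through the first 4 points;
    the slope at the origin is c1."""
    c2, c1, c0 = np.polyfit(x[:4], y[:4], deg=2)
    return c1

# First four longitudinal-force points of the test in Sect. 7
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([276.0, 824.0, 1742.0, 2930.0])
print(slope_at_origin(x, y))   # the value 408 used in Sect. 7
```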

7 Results of the model with real tests and accuracy

In [17], the adjustment of the proposed polynomial model was compared with the magic formula model in relation to both the longitudinal and lateral forces for different normal load values, and it could be seen that the deviations regarding that model were always very low, keeping a relative value below 1 %.

Now, we analyse the optimization against real test data, not the adjustment to a theoretical model. Real test results present the typical scatter caused by tyre material non-uniformities, instrument measurement uncertainty and the difficulty of stabilizing the longitudinal or lateral slip in some areas of the curve. For this reason, mathematical models are exactly that: models adjusted to each type of tyre; the measured points always present the variability inherent to testing, so individual deviations may appear at some points.

Next, some modelling results from tests on a real tyre are presented, showing the test points, the polynomial modelling proposed in this article and its comparison with the magic formula model.

Longitudinal force.

data \(=\) [[0, 276], [1, 824], [2, 1742], [3, 2930], [4, 4146], [5, 4913], [6, 5244], [7, 5492], [8, 5693], [9, 5844], [10, 5987], [11, 6097], [12, 6155], [13, 6193], [14, 6226], [15, 6253], [16, 6269], [17, 6277], [18, 6269], [19, 6243], [20, 6226], [21, 6202], [22, 6171], [23, 6152], [24, 6125], [25, 6101], [26, 6072], [27, 6048], [28, 6017], [29, 5978], [30, 5944], [31, 5918], [32, 5887], [33, 5860], [34, 5831], [35, 5797], [36, 5768], [37, 5723], [38, 5691], [39, 5662], [40, 5633], [41, 5597], [45, 5506], [50, 5404], [55, 5261], [60, 5186], [65, 5115], [70, 5028], [75, 4999], [80, 4954], [85, 4929], [90,4915], [95, 4882], [99, 4827], [100, 4796]]:

These are data from real longitudinal force tests on a 175/70 R13 tyre. Normal load = 6 kN. Camber angle = 0 and pure slip conditions.

In our model, we fix the slope at the origin to the value 408, obtained from a parabolic interpolation of the first 4 points. The result of the optimization is the following:

$$\begin{aligned} F_x&= - 6.33 + 2199.78\cdot u + 28102.83 \cdot u^{2}\\&- 26462.59 \cdot u^{3}; \hbox {being}\, u=s/(s+5.391) \end{aligned}$$

The adjustment to the magic formula is the following:

$$\begin{aligned}&fy=2035.56; fx=-2.055; B=0.13915; \\&d= 4226.87; E=0.7019; C=1.766\\&F_x \!=\! 4226.87^{*}\!\sin (1.766^{*}\!\arctan (0.13915^{*}(x\!-\!2.055) \\&\qquad \quad - 0.7019^{*}(0.13915^{*}(x-2.055) \\&\quad \qquad - \arctan (0.13915^{*}(x-2.055))))) + 2035.56 \end{aligned}$$
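For reference, a short sketch that evaluates both fitted curves at a few of the test points listed above (the coefficients are simply those reported in this section):

```python
import numpy as np

# A few of the longitudinal-force test points listed above (slip %, Fx in N)
s  = np.array([0, 2, 5, 10, 17, 30, 50, 75, 100], dtype=float)
fx = np.array([276, 1742, 4913, 5987, 6277, 5944, 5404, 4999, 4796], dtype=float)

def poly_model(s):
    u = s / (s + 5.391)
    return -6.33 + 2199.78 * u + 28102.83 * u**2 - 26462.59 * u**3

def magic_formula(s):
    w = 0.13915 * (s - 2.055)
    return 4226.87 * np.sin(1.766 * np.arctan(w - 0.7019 * (w - np.arctan(w)))) + 2035.56

print(np.abs(poly_model(s) - fx))      # absolute deviations, polynomial model
print(np.abs(magic_formula(s) - fx))   # absolute deviations, magic formula
```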

Let us now examine the convergence of both models using the Gauss–Newton method for our example tyre with the previous data.

In Table 1, we show the calculation of the parameters for the longitudinal braking force \(F_x\).

Table 1 Convergence of the polynomial tyre model parameters

As we can see in Table 2, the magic formula model also converges using the Gauss–Newton method, but much more slowly, and a step reduction factor \(\alpha = 1/5\) was needed in this case for correct convergence. Even so, the computing time remains small.

Table 2 Convergence of the magic formula tyre model parameters

We now show the curves with the results. First, consider the longitudinal force: Fig. 3 presents the test points together with the curves of our polynomial model (dotted thick line) and the magic formula tyre model (thin line). Both models approximate the test points quite well.

Fig. 3 Longitudinal force (N) interpolation

If we analyse the deviations in Fig. 4, we can observe that our polynomial model (circles) presents a better adjustment in the upper area of the curve and in the final part, i.e. for moderate, medium and high values of slip, whereas the Pacejka magic formula tyre model (cross marks) presents a better adjustment at the first points of the test. The behaviour of the polynomial tyre model is especially accurate in the area of the maximum value of the curve.

Fig. 4 Absolute deviations (N). Longitudinal force

Lateral Force

The test data for the same tyre are the following:

Slip angle = \(-15 \ldots 15^{\circ }\) (in this test, we have taken the right-hand side of the file, from 0 to \(15^{\circ }\), i.e. 0.262 rad).

Camber angle = \(0^{\circ }\). Normal load \(F_z = 6\) kN. Pure lateral force \(F_y\) is measured.

Data \(=\) [[0, \(-\)98], [0.004, 108], [0.009, 253], [0.013, 480], [0.017, 700], [0.022, 912], [0.026, 1094], [0.031, 1275], [0.035, 1441], [0.039, 1641], [0.044, 1820], [0.048, 1984], [0.052, 2163], [0.057, 2357], [0.061, 2531], [0.065, 2675], [0.07, 2823], [0.074, 2970], [0.079, 3108], [0.083, 3251], [0.087, 3373], [0.092, 3500], [0.096, 3603], [0.1, 3694], [0.105, 3790], [0.109, 3877], [0.113, 3957], [0.118, 4031], [0.122, 4097], [0.127, 4159], [0.131, 4203], [0.135, 4256], [0.14, 4293], [0.144, 4329], [0.148, 4366], [0.153, 4411], [0.157, 4438], [0.161, 4475], [0.166, 4505], [0.17, 4527], [0.174, 4566], [0.192, 4646], [0.209, 4725], [0.227, 4780], [0.244, 4773], [0.262, 4752]]:

As we can see in Fig. 5, the curves of both models show a good adjustment to the test data. In the polynomial model, we have not imposed constraints. If we look at the deviations in Fig. 6, we can see a slightly better result for the polynomial model, especially in the starting and final areas.

Fig. 5 Lateral force (N). Interpolation

Fig. 6 Absolute deviations (N). Lateral force

8 Multivariate analysis

The main variables influencing the behaviour of a tyre are typically the normal load (\(F_z\)), the camber angle \((a_{\mathrm{c}})\) and, of course, the longitudinal slip or slip angle. We have a set of \(m = m_{1}\cdot m_{2}\cdot m_{3}\) test points in the variables \(x, F_z\) and \(a_{\mathrm{c}}\), where \(x\) is the longitudinal slip or the slip angle. Every test point is a vector of four elements \((x_{{i}}, F_{{zi}}, a_{{ci}}, y_{{i}})\). From the observation of the cloud of test points, we build a model in which the normal load and the camber influence the peak value and the shape factor \(b\), which in the multivariate domain we denote \(B^*\).

$$\begin{aligned}&F_x (x,F_z ,a_\mathrm{c} )=F^{*}(F_z ,a_\mathrm{c} )\cdot \left( A_0 +A_1 \cdot \frac{x}{x+B^{*}(F_z ,a_\mathrm{c} )}\right. \nonumber \\&\quad \!\left. +\,A_2 \cdot \left( \frac{x}{x\!+\!B^{*}(F_z ,a_\mathrm{c} )}\right) ^{2}\!+\! A_3 \cdot \left( \frac{x}{x\!+\!B^{*}(F_z ,a_\mathrm{c})}\right) ^{\!3}\right) \! ;\nonumber \\&F^*(F_z ,a_\mathrm{c})=F_1 (F_z )\cdot F_2 (a_\mathrm{c}); \nonumber \\&B^{*}(F_z ,a_\mathrm{c})=B_1 (F_z)\cdot B_2 (a_\mathrm{c}) \end{aligned}$$
(5)

Initially, we assume a parabolic model for every factor of influence. After the analysis, the models of \(F_{1}\) and \(B_{1}\) could be simplified to linear ones, but we keep the mathematical development with degree 2. We have to avoid redundancy between the possible independent terms (\(d_{0}\), \(e_{0}\), \(b_{0}\) and \(g_{0}\)) and the \(A_{{i}}\) terms, because the optimization algorithms cannot converge if the parameters are redundant; for that reason, we finally propose the following models:

$$\begin{aligned} F_1 (F_z)&= 1+d_1 \cdot F_z +d_2 \cdot F_z^2 ;\\ F_2 (a_\mathrm{c})&= 1+e_1 \cdot a_\mathrm{c} +e_2 \cdot a_\mathrm{c}^2;\\ B_1 (F_z )&= 1+b_1 \cdot F_z +b_2 \cdot F_z^2 ; \\ B_2 (a_\mathrm{c})&= g_0 +g_1 \cdot a_\mathrm{c} +g_2 \cdot a_\mathrm{c}^2 ; \end{aligned}$$

The whole model includes 13 parameters.
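The complete multivariate model (5) is still straightforward to evaluate; a sketch with one possible ordering of the 13 parameters (the ordering itself is an arbitrary choice, not taken from the paper):

```python
import numpy as np

def multivariate_force(x, Fz, ac, p):
    """Multivariate polynomial tyre model (5).
    p = [A0, A1, A2, A3, d1, d2, e1, e2, b1, b2, g0, g1, g2] (13 parameters)."""
    A0, A1, A2, A3, d1, d2, e1, e2, b1, b2, g0, g1, g2 = p
    F1 = 1.0 + d1 * Fz + d2 * Fz**2       # normal-load factor of the peak value
    F2 = 1.0 + e1 * ac + e2 * ac**2       # camber factor of the peak value
    B1 = 1.0 + b1 * Fz + b2 * Fz**2       # normal-load factor of the shape factor
    B2 = g0 + g1 * ac + g2 * ac**2        # camber factor of the shape factor
    Fstar = F1 * F2
    Bstar = B1 * B2
    u = x / (x + Bstar)
    return Fstar * (A0 + (A1 + (A2 + A3 * u) * u) * u)
```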

There is no theoretical limit on the number of variables and parameters. The practical limit is imposed by the computation time and by the problem of finding a good initial point when the number of variables and parameters is large.

The residuals' vector in this case is \(r_{i}=y_{i}-F_i^*\), with \(i=1\ldots m\) (495 test points in our tyre example): 55 points for each of the three values of normal load (2, 4 and 6 kN) and each of the three values of camber angle (\(-5^{\circ }\), \(0^{\circ }\) and \(5^{\circ }\)), in total \(m = 55\cdot 3\cdot 3= 495\). The parameters' vector has 13 elements: \(\varvec{\beta }\,(\beta _{ 1},\ldots ,\beta _{ 13})\). The sum of quadratic deviations \(S(\varvec{\beta })\) has the same expression (2), but now with \(m=495\).

The expressions of the gradient, the Jacobian matrix and the Hessian matrix are also the same (but now with \(n=13\) parameters and \(m=495\) points), and expression (4) remains valid. All the methods described in Sect. 4 can be applied.

In Fig. 7, we present the results obtained with the same example tyre. The longitudinal force in pure slip conditions is presented for values of \(F_z\) equal to 2, 4 and 6 kN and camber angles of \(-5^{\circ }\), \(0^{\circ }\) and \(5^{\circ }\). Figure 8 shows the adjustment of the longitudinal force near the origin.

Fig. 7 Longitudinal force (N), in pure slip conditions, versus slip. Multivariate model

Fig. 8 Adjustment of the slope at the origin

The obtained values of the parameters are the following:

$$\begin{aligned}&A_{0} = 4.672; A_{1} = -11.387; A_{2} = 322.675; \\&A_{3} = -277.546;\\&b_{1}= -0.2928e-1; b_{2} = -0.86e-3;\\&e_{1} = -0.460e-2; e_{2} = 0.113e-2;\\&d_{1} = 18.455; d_{2} = -0.2288;\\&g_{0} = 6.22766; g_{1} = -0.4696e-1;\\&g_{2} = -0.2489e-1; \end{aligned}$$

The initial point has been the following:

$$\begin{aligned}&A_{0} = 5; A_{1} = -10; A_{2} = 300; A_{3} = -300;\\&b_{1} = b_{2} = 0;d_{1} = 20; d_{2} = e_{1} = e_{2} = g_{1} = g_{2} = 0; \\&g_{0}=10; \end{aligned}$$

The sum of quadratic deviations is \(2.695\cdot 10^{6}\).

To obtain those results, we have used quasi-Newton, Gauss–Newton, Nelder–Mead and genetic algorithms without constraints; all of them converge correctly, but the first three are sensitive to the initial point. The quadratic components of \(F_{1}(F_z)\) and \(B_{1}(F_z)\) are very small compared with the linear terms, so we could simplify the model, assuming linear behaviour for those two functions, by eliminating \(d_{2}\) and \(b_{2}\) and repeating the optimization; however, the complete quadratic formulation is presented for greater generality. We can observe a very good adjustment of the model to the tests with this real tyre. Next, we present the convergence for these data using the Gauss–Newton method without step modification.

As we can see in Table 3, the proposed model converges easily; at the eighth step, the error is already under 1 %. This is an additional advantage of the use of polynomials.

Table 3 Convergence of the multivariate polynomial tyre model

If we need to adjust the slope at the origin, as discussed in Sect. 6, we would apply the constraint R5 of Sect. 5, but for the multivariate case. The coefficient \(A_1\) is replaced using \(F'_{0}\), calculated from a parabolic regression over the first 4 points and taking the slope of that parabola at the origin. We repeat this three times, once for each value of the normal load with zero camber, and finally a linear regression of the slope at the origin as a function of \(F_z\) is obtained from the three values of the slope. We assume that the slope at the origin changes with the normal load \(F_z\) but not with the camber angle. A zoom of the slope at the origin is presented in Fig. 8.

We apply the condition of fixed positive derivative for every value of \(F_z\) in Eq. (5).

$$\begin{aligned}&\left( {\frac{\partial F_x }{\partial x}} \right) _{x=0} =\frac{F^*}{B^*}\cdot A_1 =F^{\prime }_0 \cdot F_z ; \quad A_1 =F^{\prime }_0 \cdot F_z \cdot \frac{B^*}{F^*} \end{aligned}$$

\(F'_{0}\cdot F_z = 90\cdot F_z\) is the estimate, for this tyre, of the slope at the origin obtained from the first 4 points of every curve (\(a_{\mathrm{c}}=0\)). The final model is the following:

$$\begin{aligned}&F_x (x,F_z ,a_\mathrm{c})\!=\!F^{*}\cdot \left( A_0 \!+\!F^{\prime }_0 \cdot F_z\cdot \frac{B^{*}}{F^{*}}\cdot \left( \frac{x}{x+B^{*}}\right) \right. \\&\quad \left. +A_2 \cdot \left( \frac{x}{x+B^{*}}\right) ^{2}+ A_3\cdot \left( \frac{x}{x+B^{*}}\right) ^{3}\right) ; \end{aligned}$$

This constraint adds very little computing load because \(B^*\) and \(F^*\) are used in the rest of the model anyway.

9 Conclusion

In this article, a new tyre model is validated using test data from real tyres. First, we presented the theoretical background on the approximation of functions that leads to the model, together with the mathematical analysis of the curve proposed by the authors as a tyre model: a degree-three polynomial in a simple rational function (1). Then, the article reviews the nonlinear numerical optimization methods used to calculate the model parameters from real test data. Initially, a basic model with five parameters was used; then, the complete model with 13 parameters, including the effect of normal load and camber angle, was also optimized. We observed a very fast convergence in both cases. The technique of optimization with constraints on the function or its derivatives is applied to the tyre fitting; it has to be used to avoid strange values of the slope at the origin.

As a conclusion, it can be stated that, during the process of nonlinear multivariate optimization used to calculate its parameters, the new polynomial tyre model presents very good convergence properties, faster and simpler than those of the magic formula tyre model.

As we saw in Sect. 4.3, there is a huge difference in simplicity and computational cost between the Jacobian matrix terms of the polynomial model and those of the magic formula model. This is why the observed convergence is much faster for the polynomial model (see for example Tables 1 and 2 in Sect. 7: the MF model requires 10 times more steps to obtain stable values of the parameters) and, of course, every step is computed much faster because the terms are much simpler.

This is an additional advantage added to its obvious properties of mathematical simplicity.

As the model is a polynomial, its computation is very fast and easy, as shown in Sects. 4.3 and 3.1; it is also very accurate, as shown in the figures of Sects. 7 and 8. The error relative to the magic formula tyre model is always very small: the small difference between the two mathematical formulae had been proven in [17], and in this paper we obtain similar optimization results against real test data, as shown in the figures of Sect. 7.