1 Introduction

Continuous-time (CT) system identification consists in building, from measured input-output data, mathematical models that characterize the behavior of a system. Several methods have been developed in the literature to identify CT systems using rational models [9,10,11, 20, 28, 35]. In many real applications, such as heat transfer, electrochemical processes and lithium-ion batteries, system behavior can be described more accurately with fractional models than with rational models [1, 8, 15,16,17, 21, 26, 34, 36, 37]. This greater accuracy is due to the global characterization provided by fractional differentiation. Consequently, various approaches have been developed in the literature to deal with system identification with fractional models (see [7, 22, 29, 31, 32, 38,39,40, 42] and references therein for an overview). Most of these approaches have focused on the identification of SISO (Single-Input-Single-Output) and MIMO (Multi-Input-Multi-Output) systems, and all of them assume that only the output measurements are noisy. When both input and output measurements are affected by additive noise, the system is said to have Errors-In-Variables (EIV). In that case, the classical approaches no longer yield consistent estimates. This problem has been solved in [41] by using Second-Order-Statistics (SOS) and in [4, 5] by using Higher-Order-Statistics (HOS), such as third-order and fourth-order cumulants.

One of the main difficulties in CT system identification is the computation of time derivatives. Since the goal is to obtain an estimate of a CT system, knowledge of the input and output derivatives is required. However, these derivatives cannot be computed exactly from sampled input-output data, and their computation may amplify the effect of measurement noise or errors. To overcome this problem, the simplified and refined instrumental variable (sriv) algorithm has been proposed. It was first developed for rational and fractional SISO system identification in the case where only the output signal is noisy [12, 13, 18, 19, 27, 32, 43]. It was then extended to system identification with fractional models in the EIV context, with an estimator based on fourth-order cumulants [2]. The obtained results were satisfactory: the estimator gave unbiased estimates with minimal variances in the presence of significant noise corrupting the input and output signals. In [33], it was developed for fractional MISO system identification in the case where only the output is noisy. In [25], the sriv estimator based on third-order cumulants was developed for MISO system identification with fractional models in the EIV framework, but the assumption of non-symmetry of the input signals on which that method relies is restrictive. To address this limitation, the ordinary least squares method based on fourth-order cumulants was proposed in early work [3]. It gave consistent results in the presence of noise corrupting both input and output measurements; however, the variance of the estimates becomes larger when the noise level becomes significant.

Hence, motivated by the works mentioned above, the main contribution of this paper is to develop a new sriv estimator based on fourth-order cumulants. Firstly, the differentiation orders are assumed to be known and only the linear coefficients are estimated. The developed algorithm is called fractional fourth-order cumulants based-simplified and refined instrumental variable (frac-foc-sriv). Secondly, all parameters of the MISO fractional model are estimated, assuming that the structures of all subsystems composing the MISO fractional system are identical and known a priori. Three cases are considered: in the first case, a global commensurate order is estimated; in the second case, the local commensurate orders of all SISO subsystems are estimated; in the third case, all differentiation orders are estimated. The proposed idea consists in combining frac-foc-sriv with a nonlinear optimization technique. The resulting algorithm is called frac-foc-sriv combined with order optimization (frac-foc-oosriv).

The remainder of this paper is organized as follows: the MISO fractional systems with Errors-In-Variables are presented in Section 2. Section 3 describes the problem statement of CT MISO system identification with fractional models in the EIV framework. In Sections 4 and 5 the developed algorithms are detailed. Section 6 concludes the paper.

Fig. 1 Fractional LTI MISO system with Errors-In-Variables

2 MISO fractional systems with errors-in-variables

As illustrated in Figure 1, a fractional MISO Linear-Time-Invariant (LTI) system with noise-affected input and output signals is represented by the following model:

$$\begin{aligned} ({\mathcal {H}})\left\{ \begin{array}{l} \left( {1+{\sum \limits _{n = 1}^{N_\ell } {a_{n,\ell } p^{\alpha _{n,\ell }}} }}\right) {{\tilde{y}}_{\ell }}(t) = \left( {{\sum \limits _{m = 0}^{M_\ell } {b_{m,\ell } p^{\beta _{m,\ell }}}}}\right) {{\tilde{u}}_{\ell }}(t),\,\,\ell = 1,...,L \\ \tilde{y}(t) = \sum \limits _{\ell = 1}^L {{{\tilde{y}}_{\ell }}(t)} \\ {u_\ell }({t_k}) = {{\tilde{u}}_{\ell }}({t_k}) + {e_{{{\tilde{u}}_{\ell }}}}({t_k}),\,\,\ell = 1,...,L \\ y({t_k}) = {{\tilde{y}}}({t_k}) + {e_{{\tilde{y}}}}({t_k}). \\ \end{array} \right. \end{aligned}$$
(2.1)

L is the number of SISO subsystems.

\(\left\{ {{\tilde{u}_{1}}(t),...,{{\tilde{u}}_{\ell }}(t),...,{{\tilde{u}}_{L}}(t)} \right\} \) and \(\left\{ {{{\tilde{y}}_{1}}(t),...,{{\tilde{y}}_{\ell }}(t),...,{\tilde{y}_{L}}(t)} \right\} \) denote, respectively, the noise-free input and output signals. The available discrete-time input and output measurements are \(\left\{ {{u_1}({t_k}),...,{u_\ell }({t_k}),...,{u_L}({t_k})} \right\} \) and \(y(t_k)\). \({e_{{{\tilde{u}}_{\ell }}}}({t_k})\) and \({e_{{\tilde{y}}}}({t_k})\) stand for the discrete-time noises affecting, respectively, the input and the output measurements. They are defined, respectively, by the following equations:

$$\begin{aligned} {e_{{{\tilde{u}}_{\ell }}}}({t_k})= & {} {H_{\tilde{u}_\ell }}(q^{-1})e_{{{\tilde{u}}_{\ell }}}^0({t_k}),\,\ell =1,...,L, \end{aligned}$$
(2.2)
$$\begin{aligned} {e_{{\tilde{y}}}}({t_k})= & {} {H_{{\tilde{y}}}}(q^{-1})e_{{\tilde{y}}}^0({t_k}), \end{aligned}$$
(2.3)

\(q^{-1}\) denotes the backward shift operator and \(e_{{\tilde{u}_{\ell }}}^0({t_k})\), \(e_{{\tilde{y}}}^0({t_k})\) are zero-mean white Gaussian noises.

The SISO fractional differential equation, relating the noise-free input \({\tilde{u}}_{\ell }(t)\) and the noise-free output \(\tilde{y}_{\ell }(t)\), is:

$$\begin{aligned} \left( {1+{\sum \limits _{n = 1}^{N_\ell } {a_{n,\ell } p^{\alpha _{n,\ell }}} }}\right) {{\tilde{y}}_{\ell }}(t) = \left( {{\sum \limits _{m = 0}^{M_\ell } {b_{m,\ell } p^{\beta _{m,\ell }}}}}\right) {{\tilde{u}}_{\ell }}(t), \end{aligned}$$
(2.4)

\((a_{n,\ell }, b_{m,\ell })\in {\mathbb {R}}^2\) are the linear coefficients. \(\displaystyle { p=D= \left( {\frac{d}{{dt}}} \right) }\) designates the differential operator in the time-domain and the differentiation orders \({\alpha _{n,\ell }}\) and \({\beta _{m,\ell }}\) are positive real numbers.

Definition 1

(Grünwald-Letnikov derivative approximation) The \(\upsilon ^{th}\) (\(\upsilon \in {\mathbb {R}}^{+}\)) derivative of a causal function f(t) (\(f(t)=0\) for \(t \le 0\)) is computed using the following equation [14]:

$$\begin{aligned} D^\upsilon f(t)\simeq \frac{1}{{h^\upsilon }}\sum \limits _{k = 0}^K {( - 1)^k } \left( {\begin{array}{c} \upsilon \\ k \end{array}} \right) f(t - kh), \end{aligned}$$
(2.5)

h and K denote, respectively, the sampling period and the number of samples. \(\displaystyle {\left( {\begin{array}{c} \upsilon \\ k \end{array}} \right) }\) is the Newton binomial coefficient generalized to fractional orders:

$$\begin{aligned} \left( {\begin{array}{c} \upsilon \\ k \end{array}} \right) = \frac{{\upsilon \,(\upsilon - 1)\,(\upsilon - 2)\, \cdots \,(\upsilon - k + 1)}}{{k!}}. \end{aligned}$$
(2.6)
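As an illustration, the Grünwald-Letnikov approximation (2.5)-(2.6) can be sketched in a few lines of NumPy. The recurrence used below to build the generalized binomial coefficients is an implementation convenience, not part of the definition, and the function name is illustrative:

```python
import numpy as np

def gl_derivative(f, upsilon, h):
    """Grunwald-Letnikov approximation (2.5) of the upsilon-th derivative
    of a causal sampled signal f (f[0] corresponds to t = 0)."""
    K = len(f)
    # signed generalized binomial coefficients (-1)^k * binom(upsilon, k),
    # built with the recurrence c_k = c_{k-1} * (k - 1 - upsilon) / k
    c = np.ones(K)
    for k in range(1, K):
        c[k] = c[k - 1] * (k - 1 - upsilon) / k
    # causal discrete convolution of the coefficients with the past samples
    return np.array([np.dot(c[:k + 1], f[k::-1]) for k in range(K)]) / h**upsilon
```

For \(\upsilon = 1\) the formula reduces to the backward difference \((f(t_k)-f(t_{k-1}))/h\), and for \(\upsilon = 0\) it returns the signal unchanged, which gives a quick sanity check.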

Definition 2

(Laplace transform) The Laplace transform of the \(\upsilon ^{th}\) derivative of a causal function f(t) is defined as in the rational case:

$$\begin{aligned} {\mathscr {L}}( D^{\upsilon }f(t) ) ={s^{\upsilon }}F(s), \end{aligned}$$
(2.7)

s denotes the Laplace variable.

Applying Definition 2 to equation (2.4) leads to the fractional SISO transfer function:

$$\begin{aligned} H_\ell (s)=\frac{{\sum \limits _{m = 0}^{M_\ell } {b_{m,\ell } s^{\beta _{m,\ell }}}}}{1+{\sum \limits _{n = 1}^{N_\ell } {a_{n,\ell } s^{\alpha _{n,\ell }}} }}. \end{aligned}$$
(2.8)

To ensure the identifiability of the MISO model, all differentiation orders \({\alpha _{n,\ell }}\) and \({\beta _{m,\ell }}\) are allowed to be positive real numbers and are ordered as follows:

$$\begin{aligned} 0< {\alpha _{1,\ell }}< ...< {\alpha _{N_\ell ,\ell }}\,\,\,\, ;\,\,\,\, 0 \le {\beta _{0,\ell }}< ... < {\beta _{M_\ell ,\ell }}. \end{aligned}$$

Definition 3

(Local commensurability) An \({\ell ^{th}}\) SISO subsystem has a local commensurate order, denoted \(\upsilon _\ell \), if all its differentiation orders are successive integer multiples of \(\upsilon _\ell \). In that case, equation (2.8) is rewritten as follows:

$$\begin{aligned} H_{\upsilon _\ell }(s) = \frac{{\sum \limits _{i = 0}^{n_{b,\ell }} {{\tilde{b}}_{i,\ell } s^{i \upsilon _\ell } } }}{{1 + \sum \limits _{j = 1}^{n_{a,\ell }} {{\tilde{a}}_{j,\ell } s^{j \upsilon _\ell } } }}, \end{aligned}$$
(2.9)

where \(\displaystyle {n_{b,\ell } = {\frac{\beta _{M_\ell ,\ell }}{\upsilon _\ell } }}\) and \(\displaystyle {n_{a,\ell } = \frac{\alpha _{N_\ell ,\ell }}{\upsilon _\ell }} \) are integers and:

$$\begin{aligned} {\left\{ \begin{array}{ll} {\tilde{b}}_{i,\ell } = b_{m,\ell }\text { if } \exists m \in \{0, 1, \ldots , M_\ell \} \text { such that } i \upsilon _\ell = \beta _{m,\ell } \\ {\tilde{b}}_{i,\ell } = 0 \text { otherwise }\\ {\tilde{a}}_{j,\ell } = a_{n,\ell }\text { if } \exists n \in \{1, \ldots , N_\ell \} \text { such that } j \upsilon _\ell = \alpha _{n,\ell } \\ {\tilde{a}}_{j,\ell } = 0 \text { otherwise. }\\ \end{array}\right. } \end{aligned}$$
(2.10)

Example 1

Let us take, as an example, the following SISO transfer function:

$$\begin{aligned} {H_1}(s) = \frac{{0.5 + {s^{0.3}}}}{{1 + 2{s^{0.3}} + {s^{0.6}}}}. \end{aligned}$$
(2.11)

The local commensurate order is \(\upsilon _1=0.3\). Indeed, according to Definition 3, equation (2.11) is rewritten as follows:

$$\begin{aligned} {H_{\upsilon _1}}(s) = \frac{{0.5 + {s^{1\times 0.3}}}}{{1 + 2{s^{1\times 0.3}} + {s^{2\times 0.3}}}}. \end{aligned}$$
(2.12)
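The equivalence of the two writings (2.11) and (2.12) can be checked numerically by evaluating both forms on the imaginary axis, taking the fractional powers on the principal branch (an assumption of this sketch):

```python
import numpy as np

def H1(s):
    # equation (2.11), with fractional powers on the principal branch
    return (0.5 + s**0.3) / (1 + 2 * s**0.3 + s**0.6)

def H1_commensurate(s, v=0.3):
    # equation (2.12): the same model written in integer powers of the
    # local commensurate order v = 0.3
    return (0.5 + s**(1 * v)) / (1 + 2 * s**(1 * v) + s**(2 * v))
```

Evaluating both functions on a logarithmic frequency grid \(s = j\omega\) gives identical frequency responses, as expected from Definition 3.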

Definition 4

(Global commensurability) Consider a fractional MISO system in which every SISO transfer function has a local commensurate order \(\upsilon _\ell \). If all these local orders are integer multiples of a same order, the system is said to be globally commensurate. This order is called the global commensurate order and is denoted \(\upsilon _G\).

For each \(\ell ^{th}\) SISO subsystem, a vector containing the fractional differentiation orders can be defined:

$$\begin{aligned} {\eta _\ell } = {\left[ {{\alpha _{1,\ell }},...,{\alpha _{{N_\ell },\ell }},{\beta _{0,\ell }},...,{\beta _{{M_\ell },\ell }}} \right] ^T}. \end{aligned}$$
(2.13)

The integers multiplying the global commensurate order are grouped in a vector as follows:

$$\begin{aligned} {\chi _\ell } = {\left[ {{\gamma _{1,\ell }},...,{\gamma _{{N_\ell },\ell }},{\delta _{0,\ell }},...,{\delta _{{M_\ell },\ell }}} \right] ^T}, \end{aligned}$$
(2.14)

where

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle {\mathrm{{if}}\,\,\,{a_{n,\ell }} \ne 0\,\,\,\,\,\mathrm{{then}}\,\,\,\,\,{\gamma _{n,\ell }} = \frac{{{\alpha _{n,\ell }}}}{{{\upsilon _G}}}\,\,}\displaystyle { \mathrm{{else}}\,\,\,{\gamma _{n,\ell }} = 0,\,\,\,\,n = 1,...,{N_\ell }}\,, \\ \displaystyle { \,\mathrm{{if}}\,\,\,{b_{m,\ell }} \ne 0\,\,\,\,\,\mathrm{{then}}\,\,\,\,\,{\delta _{m,\ell }} = \frac{{{\beta _{m,\ell }}}}{{{\upsilon _G}}}\,\,}\displaystyle {\mathrm{{else}}\,\,\,{\delta _{m,\ell }} = 0,\,\,\,\,m = 0,...,{M_\ell }}\,. \\ \end{array} \right. \end{aligned}$$
(2.15)

Example 2

Take the following example of MISO fractional system:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle { {H_1}(s) = \frac{1}{{1 + 0.5{s^{0.4}} + 2{s^{0.8}}}}} \\ \displaystyle { {H_2}(s) = \frac{{1.5}}{{1 + 2{s^{0.6}} + {s^{1.2}}}}} \\ \displaystyle {{H_3}(s) = \frac{3}{{1 + 1.5{s^{0.8}} + 0.5{s^{1.6}}}}}. \\ \end{array} \right. \end{aligned}$$
(2.16)

According to Definition 3, the local commensurate orders are:

$$\begin{aligned} \begin{array}{l} \upsilon _1 =0.4=2\times 0.2 \\ \upsilon _2 =0.6= 3\times 0.2 \\ \upsilon _3 =0.8=4\times 0.2. \\ \end{array} \end{aligned}$$
(2.17)

So, the global commensurate order is \(\upsilon _G=0.2\). Equation (2.16) can be rewritten as follows:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle { {H_1}(s) = \frac{1}{{1 + 0.5{s^{1\times \upsilon _1}} + 2{s^{2\times \upsilon _1}}}}=\frac{1}{{1 + 0.5{s^{2\times \upsilon _G}} + 2{s^{4\times \upsilon _G}}}}} \\ \displaystyle { {H_2}(s) = \frac{{1.5}}{{1 + 2{s^{1\times \upsilon _2}} + {s^{2\times \upsilon _2}}}}=\frac{{1.5}}{{1 + 2{s^{3\times \upsilon _G}} + {s^{6\times \upsilon _G}}}}} \\ \displaystyle {{H_3}(s) = \frac{3}{{1 + 1.5{s^{1\times \upsilon _3}} + 0.5{s^{2\times \upsilon _3}}}}=\frac{3}{{1 + 1.5{s^{4\times \upsilon _G}} + 0.5{s^{8\times \upsilon _G}}}}}. \\ \end{array} \right. \end{aligned}$$
(2.18)
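When the differentiation orders are rational, the global commensurate order of Example 2 can be recovered as the greatest common divisor of the local orders taken as exact fractions. A minimal sketch (function name illustrative), using \(\gcd(a/b, c/d) = \gcd(a,c)/\mathrm{lcm}(b,d)\) for fractions in lowest terms:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def global_commensurate_order(local_orders):
    """Greatest v_G such that every local commensurate order is an
    integer multiple of v_G (orders given as exact rationals)."""
    fracs = [Fraction(str(v)) for v in local_orders]
    num = reduce(gcd, (f.numerator for f in fracs))           # gcd of numerators
    den = reduce(lambda a, b: a * b // gcd(a, b),             # lcm of denominators
                 (f.denominator for f in fracs))
    return Fraction(num, den)

# Example 2: v_1 = 0.4, v_2 = 0.6, v_3 = 0.8
print(global_commensurate_order([0.4, 0.6, 0.8]))  # -> 1/5, i.e. v_G = 0.2
```

Constructing the fractions from decimal strings avoids binary floating-point artifacts in the rational arithmetic.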

Theorem 1

(Bounded-Input-Bounded-Output stability of fractional MISO systems) A fractional MISO system is stable in the Bounded-Input-Bounded-Output (BIBO) sense if and only if all its SISO subsystems are stable in the BIBO sense.

Consider an \(\ell ^{th}\) SISO fractional subsystem represented by equation (2.9). The latter can be rewritten as follows:

$$\begin{aligned} \displaystyle {H_{\upsilon _\ell }(s) = \frac{{{Z_{\upsilon _\ell } }(s)}}{{{P_{\upsilon _\ell } }(s)}}}. \end{aligned}$$
(2.19)

Solving equation \({P_{\upsilon _\ell } }(s)=0\) leads to its poles, denoted \(s_{k,\ell },\,\,k=1,...,n_{a,\ell }\).

Thus, the \(\ell ^{th}\) SISO subsystem, is BIBO stable if and only if [23]:

$$\begin{aligned} 0< \upsilon _\ell < 2, \end{aligned}$$
(2.20)

and

$$\begin{aligned} \forall {{s_{k,\ell }} \in {\mathbb {C}}},\,\,\,{P_{\upsilon _\ell } }({s_{k,\ell }}) = 0 \,\text{ such } \text{ that }\, \mid {\arg ({s_{k,\ell }})}\mid > \upsilon _\ell \frac{\pi }{2}. \end{aligned}$$
(2.21)
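For a commensurate subsystem, conditions (2.20)-(2.21) can be checked numerically by working in the \(w = s^{\upsilon _\ell }\) variable, a Matignon-type test: the roots of the integer-order polynomial \(1 + \sum _j {\tilde{a}}_{j,\ell } w^j\) must all satisfy \(|\arg (w)| > \upsilon _\ell \pi /2\). A sketch under these assumptions (function name and coefficient ordering illustrative):

```python
import numpy as np

def is_bibo_stable(a_coeffs, upsilon):
    """Stability test for P(s) = 1 + a_1 s^v + ... + a_n s^{n v}:
    stable iff 0 < v < 2 and every root w_k of 1 + a_1 w + ... + a_n w^n
    satisfies |arg(w_k)| > v * pi / 2."""
    if not 0 < upsilon < 2:
        return False
    # np.roots expects the highest-degree coefficient first: [a_n, ..., a_1, 1]
    roots = np.roots(list(reversed([1.0] + list(a_coeffs))))
    return all(abs(np.angle(w)) > upsilon * np.pi / 2 for w in roots)

# H_1 of Example 2: 1 + 0.5 s^0.4 + 2 s^0.8, with v_1 = 0.4
print(is_bibo_stable([0.5, 2.0], 0.4))  # True
```

Here the roots in \(w\) are \(-0.125 \pm 0.696j\), whose arguments (about \(\pm 1.75\) rad) exceed the threshold \(0.4\,\pi /2 \approx 0.63\) rad, so the subsystem is BIBO stable.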

3 Problem formulation

The aim is to estimate the parameters of the fractional MISO model (\({\mathcal {H}}\)) described by equation (2.1) using available noisy data measured at discrete-time intervals. Two different cases are studied in this paper: the first case assumes that all differentiation orders of all sub-models \(H_\ell (p)\left( \ell =1,...,L\right) \) are known a priori and only the linear coefficients are estimated. In the second case, all differentiation orders are assumed to be unknown and estimated at the same time as the linear coefficients.

When only the linear coefficients are estimated, the parameter vector is defined by:

$$\begin{aligned} \Theta = \left[ {{\theta _1},...,{\theta _\ell },...,{\theta _L}} \right] , \end{aligned}$$
(3.1)

where

$$\begin{aligned} {\theta _\ell } = {\left[ {{a_{1,\ell }},{a_{2,\ell }},...,{a_{{N_\ell },\ell }},{b_{0,\ell }},{b_{1,\ell }},...,{b_{{M_\ell },\ell }}} \right] ^T}. \end{aligned}$$
(3.2)

The dimension of this vector is \( {\sum \limits _{\ell = 1}^L \left( {{N_\ell } + {M_\ell } + 1}\right) }\).

When all parameters are estimated: coefficients and differentiation orders, there are three sub-cases to be distinguished:

\(\bullet \):

Sub-case 1: If the fractional MISO system has a global commensurate order, then the parameter vector is rewritten as follows:

$$\begin{aligned} \Theta = \left[ {{\theta _1},...,{\theta _\ell },...,{\theta _L}, \upsilon _G} \right] . \end{aligned}$$
(3.3)

The dimension of this vector equals \({1+ {\sum \limits _{\ell = 1}^L \left( {{N_\ell } + {M_\ell } + 1} \right) }}\).

\(\bullet \):

Sub-case 2: If the MISO system does not have a global commensurate order, or the latter is too small, then the local commensurate orders of all subsystems must be estimated. The parameter vector is rewritten as follows:

$$\begin{aligned} \Theta = \left[ {{\theta _1},...,{\theta _\ell },...,{\theta _L},\nu } \right] , \end{aligned}$$
(3.4)

where

$$\begin{aligned} \nu = \left[ { \upsilon _1,...,\upsilon _\ell ,...,\upsilon _L} \right] . \end{aligned}$$
(3.5)

\(\upsilon _\ell \,\{\ell =1,...,L\}\) denotes the local commensurate order of the subsystem \(H_\ell \).

The dimension of this vector equals \({\sum \limits _{\ell = 1}^L \left( {{N_\ell } + {M_\ell } + 2} \right) }\).

\(\bullet \):

Sub-case 3: If the MISO system is non-commensurate, then the parameter vector is rewritten as follows:

$$\begin{aligned} \Theta = \left[ {{\theta _1},...,{\theta _\ell },...,{\theta _L}, \eta } \right] , \end{aligned}$$
(3.6)

where

$$\begin{aligned} \eta = \left[ {{\eta _1},...,{\eta _\ell },...,{\eta _L}} \right] , \end{aligned}$$
(3.7)

and

$$\begin{aligned} {\eta _\ell } = {\left[ {{\alpha _{1,\ell }},...,{\alpha _{{N_\ell },\ell }},{\beta _{0,\ell }},...,{\beta _{{M_\ell },\ell }}} \right] ^T}. \end{aligned}$$
(3.8)

The dimension of this vector equals \({2\sum \limits _{\ell = 1}^L \left( {{N_\ell } + {M_\ell } + 1}\right) }\).

The fractional MISO system identification problem in the EIV context consists in estimating the parameter vector \(\Theta \) (equation (3.3), (3.4) or (3.6)) using \(N_t\) samples of noisy input and output data \(\left\{ {{u_1}({t_k}),...{u_\ell }({t_k}),...,{u_L}({t_k}),y({t_k})} \right\} _{k = 1}^{{N_t}}\). In this work, all sub-cases are handled.

The proposed idea consists in decomposing the fractional MISO system into L SISO subsystems. The parameters of the \(\ell ^{th}\) subsystem are estimated while assuming that the parameters of all other subsystems \(q\) \((q\ne \ell )\) are known.

The noisy output signal of the \(\ell ^{th}\) subsystem, denoted \(x_\ell (t_k)\), is obtained as follows:

$$\begin{aligned} {x_\ell }({t_k}) = y({t_k}) - \sum \limits _{q = 1,q \ne \ell }^L {{{\tilde{y}}_{q}}} ({t_k}). \end{aligned}$$
(3.9)

As is well-known for rational and fractional systems, classical system identification methods cannot be applied in the EIV framework. Indeed, the available input and output data are noisy, and this often leads to biased estimates. In this context, a consistent estimation may be obtained either by using identification methods based on second-order statistics or by using identification methods based on Higher-Order Statistics (such as third- and fourth-order cumulants based methods).

In this work, the use of fourth-order cross-cumulants (foc) is proposed because of their advantage of being immune to Gaussian noise, whether white or colored. Definitions and properties related to the foc are given in [3].

The following definition will be used in the developed algorithms.

Definition 5

(Fourth-order cross-cumulant estimation) The fourth-order cross-cumulant is estimated from \(N_t\) samples of processes \( {\chi _i(t_k )}, \,i=1,...,4\), by replacing the mathematical expectations by sample averages:

$$\begin{aligned}{} & {} {{{\hat{C}}}_{{\chi _1}{\chi _2}{\chi _3}{\chi _4}}}({{\mathcal {T}}_1,{\mathcal {T}}_2,{\mathcal {T}}_3}) \nonumber \\{} & {} \quad =\displaystyle {\frac{{{N_t} + 2}}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}} _1}){\chi _3}({t_k} + {{\mathcal {T}} _2}){\chi _4}({t_k} + {{\mathcal {T}}_3}) \nonumber \\{} & {} \quad \quad - \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}}_1})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} + {{\mathcal {T}}_2}){\chi _4}({t_k} + {{\mathcal {T}}_3})}\nonumber \\{} & {} \quad \quad - \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}} _2})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} + {{\mathcal {T}} _1}){\chi _4}({t_k} + {{\mathcal {T}}_3})} \nonumber \\{} & {} \quad \quad - \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}} _3})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} + {{\mathcal {T}}_1}){\chi _4}({t_k} + {{\mathcal {T}} _2})} . \nonumber \\ \end{aligned}$$
(3.10)

\({\mathcal {T}}_1\), \({\mathcal {T}}_2\) and \({\mathcal {T}}_3\) are the discrete-time gaps.
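For intuition, the quantity estimated by (3.10) is the sample version of the moment-to-cumulant expansion \(\mathrm{cum}(\chi _1,\chi _2,\chi _3,\chi _4) = E[\chi _1\chi _2\chi _3\chi _4] - E[\chi _1\chi _2]E[\chi _3\chi _4] - E[\chi _1\chi _3]E[\chi _2\chi _4] - E[\chi _1\chi _4]E[\chi _2\chi _3]\) for zero-mean processes. The sketch below implements that expansion directly; the specific finite-sample normalization of (3.10) is omitted here for clarity, and truncating the shifted samples to the valid index range is an implementation choice:

```python
import numpy as np

def foc_cross_cumulant(x1, x2, x3, x4, T1=0, T2=0, T3=0):
    """Sample fourth-order cross-cumulant of zero-mean signals,
    cum(x1(t), x2(t+T1), x3(t+T2), x4(t+T3)), for non-negative integer lags."""
    n = len(x1) - max(T1, T2, T3)   # keep only indices where all shifts exist
    a = x1[:n]
    b = x2[T1:T1 + n]
    c = x3[T2:T2 + n]
    d = x4[T3:T3 + n]
    E = lambda v: float(np.mean(v))
    return (E(a * b * c * d)
            - E(a * b) * E(c * d)
            - E(a * c) * E(b * d)
            - E(a * d) * E(b * c))
```

On a long Gaussian record the estimate is close to zero, as assumption A4 exploits, while a uniform (non-Gaussian) signal on \([-1,1]\) gives a cumulant near its theoretical value \(1/5 - 3/9 = -2/15\) at zero lags.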

Remark 1

The fourth-order cumulants (respectively cross-cumulants) depend on several variables, namely \(N_t\), \({\mathcal {T}}_1\), \({\mathcal {T}}_2\) and \({\mathcal {T}}_3\). These variables carry a large amount of information which may, as a consequence, be redundant in the cumulant estimation. The proposed idea is to use:

  • some parts of the time space: these parts depend on a single variable and are called cumulant lines. For example, a fourth-order cumulant line can be obtained by fixing \({\mathcal {T}}_2\) and \({\mathcal {T}}_3\) to arbitrarily chosen constants and letting \({\mathcal {T}}_1\) vary [5, 30].

  • a part of the sampled time data: a subset of the \(N_t\) data samples is sufficient to obtain unbiased cumulant (or cross-cumulant) estimates. The size of this subset is denoted \(N_h\).

When the same signal \(\chi \) is used, \(C_{\chi \chi \chi \chi } ({\mathcal {T}}_1 ,{\mathcal {T}} _2 ,{\mathcal {T}} _3 )\) is simply called a cumulant. To simplify the writing, \(({\mathcal {T}} _1 ,{\mathcal {T}} _2 ,{\mathcal {T}} _3 )\) will be replaced by \({\mathcal {T}}\) in the rest of the paper.

Remark 2

 

  • Taking into account Remark 1 and choosing \({\mathcal {T}}_2= {\mathcal {T}}_3=0\) leads to the following fourth-order cross-cumulant estimates:

    $$\begin{aligned} \begin{aligned} {{{\hat{C}}}_{{\chi _1}{\chi _2}{\chi _3}{\chi _4}}}({{\mathcal {T}}})&=\displaystyle {\frac{{{N_t} + 2}}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}}}){\chi _3}({t_k} ){\chi _4}({t_k}) \\&- \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k} + {{\mathcal {T}}})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} ){\chi _4}({t_k})} \\&- \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} + {{\mathcal {T}}}){\chi _4}({t_k})} \\&- \displaystyle {\frac{3}{{{N_t}({N_t} - 1)}}}\sum \limits _{k = 1}^{{N_t}} {{\chi _1}({t_k})} {\chi _2}({t_k})\sum \limits _{k = 1}^{{N_t}} {{\chi _3}({t_k} + {{\mathcal {T}}}){\chi _4}({t_k})} . \end{aligned} \end{aligned}$$
    (3.11)

    where \({\mathcal {T}}={\mathcal {T}}_1\) varies from 1 to \(N_h\).

  • The choice of the number of samples \(N_h\) depends essentially on the system, which is supposed to be unknown. The optimal value of \(N_h\) can therefore be chosen by experimental trials.

4 Linear coefficients estimation

With regard to the MISO fractional systems in the EIV framework, the following assumptions can be made.

Assumptions 1

 

  • A1: The structures of all SISO subsystems are supposed to be known and identical.

  • A2: The real orders \(\alpha _{n,\ell }(n=1,...,N_\ell )\) and \(\beta _{m,\ell }(m=0,...,M_\ell )\) for \(\{\ell =1,...,L\}\) are supposed to be known.

  • A3: The noise-free input signals \({\tilde{u}}_{\ell } \,\{\ell =1,...,L\}\) are zero mean stationary stochastic processes such that their fourth-order cumulants are non-zero. Their probability density function (pdf) cannot therefore be Gaussian.

  • A4: \(e_{{{\tilde{u}}_{\ell }}}({t_k}), \{\ell =1,...,L\}\), \(e_{{\tilde{y}}}({t_k})\) are stationary zero-mean random variables independent of \({\tilde{u}}_{\ell }\, \{l=1,...,L\}\) and \({\tilde{y}}\) and having a Gaussian pdf.

4.1 Fourth-order cumulants based-simplified and refined instrumental variable algorithm

Under assumptions A1-A4, the estimation of fourth-order cross-cumulants is consistent. Indeed, since the additive noises have a Gaussian pdf, their fourth-order cross-cumulants equal zero (assumption A4), so the fourth-order cross-cumulant estimates of the inputs and output are asymptotically noise-free [3]. Hence, fourth-order cross-cumulants of the inputs and output will be used instead of the noisy data.

As for SISO fractional systems, the \(\ell ^{th}\) \(\left( \ell =1,...,L\right) \) subsystem is described by the following model [2, 4]:

$$\begin{aligned} C_{{u_\ell } {x_\ell } {u_\ell }{u_\ell }} (\tau ) = \,\,\,\frac{{\sum \limits _{{m} = 0}^{M_\ell } {b_{m,\ell } p^{\beta _{m,\ell } } } }}{1+{\sum \limits _{n = 1}^{N_\ell } {a_{n,\ell } p^{\alpha _{n,\ell }}} } }\,\,C_{{u_\ell } {u_\ell } {u_\ell }{u_\ell }} (\tau ), \end{aligned}$$
(4.1)

where \(\tau \) is the continuous-time gap.

The use of fourth-order cross-cumulants estimates leads to:

$$\begin{aligned} {\hat{C}}_{{u_\ell } {x_\ell } {u_\ell }{u_\ell }} (\tau ) = \,\,\,\frac{{\sum \limits _{{m} = 0}^{M_\ell } {b_{m,\ell } p^{\beta _{m,\ell } } } }}{1+{\sum \limits _{n = 1}^{N_\ell } {a_{n,\ell } p^{\alpha _{n,\ell }}} } }\,\,{\hat{C}}_{{u_\ell } {u_\ell } {u_\ell }{u_\ell }} (\tau )+ \epsilon _\ell ({\tau }). \end{aligned}$$
(4.2)

An \(\ell ^{th}\) output error can therefore be defined as:

$$\begin{aligned} \epsilon _\ell ({\tau }) = {{\hat{C}}_{{u_\ell }{x_\ell }{u_\ell }{u_\ell }}}({\tau }) - \frac{{\mathcal {Z}}_\ell (p)}{{\mathcal {P}}_\ell (p) }{{\hat{C}}_{{u_\ell }{u_\ell }{u_\ell }{u_\ell }}}({\tau }). \end{aligned}$$
(4.3)

The polynomials \( {\mathcal {Z}}_\ell (p)\) and \({\mathcal {P}}_\ell (p)\) are defined respectively by:

$$\begin{aligned} \left\{ \begin{array}{l} {\mathcal {Z}}_\ell (p) = \sum \limits _{m = 0}^{{M_\ell }} {{b_{m,\ell }}{p^{{\beta _{m,\ell }}}}} \\ {\mathcal {P}}_\ell (p) = 1 + \sum \limits _{n = 1}^{{N_\ell }} {{a_{n,\ell }}{p^{{\alpha _{n,\ell }}}}}. \\ \end{array} \right. \end{aligned}$$
(4.4)

Equation (4.3) can be rewritten as follows:

$$\begin{aligned} {\epsilon _\ell }({\tau })= & {} {\mathcal {P}}_\ell (p)\displaystyle {\left( {\frac{{{{{\hat{C}}}_{{u_\ell }{x_\ell }{u_\ell }{u_\ell }}}({\tau })}}{{{\mathcal {P}}_\ell (p)}}} \right) } -\displaystyle { {\mathcal {Z}}_\ell (p)\left( {\frac{{{{{\hat{C}}}_{{u_\ell }{u_\ell }{u_\ell }{u_\ell }}}({\tau })}}{{{\mathcal {P}}_\ell (p)}}} \right) }\nonumber \\= & {} {\mathcal {P}}_\ell (p){{{\hat{C}}}_{{u_\ell }{x_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau }) -{\mathcal {Z}}_\ell (p){{{\hat{C}}}_{{u_\ell }{u_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau })\nonumber \\= & {} {{{\hat{C}}}_{{u_\ell }{x_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau })-\hat{\varphi }_{\ell ,f}^T(\tau )\theta _\ell , \end{aligned}$$
(4.5)

\(\theta _\ell \) is defined by equation (3.2) and \(\hat{\varphi }_{\ell ,f}^T(\tau )\) contains the fractional derivatives of the filtered fourth-order cross-cumulants:

$$\begin{aligned} \begin{array}{l} {\hat{\varphi }} _{\ell ,f}^T({\tau }) = \left[ - p^{{\alpha _{1,\ell }}}{{{{\hat{C}}}_{{u_\ell }x_{\ell ,f}{u_\ell }{u_\ell }}}({\tau }),...,-p^{{\alpha _{{N_\ell },\ell }}}{{{\hat{C}}}_{{u_\ell }x_{\ell ,f}{u_\ell }{u_\ell }}}({\tau }),} \right. \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left. {p^{{\beta _{0,\ell }}}{{{\hat{C}}}_{{u_\ell }u_{{\ell ,f}}{u_\ell }{u_\ell }}}({\tau }),...,{p^{{\beta _{{M_\ell },\ell }}}{{\hat{C}}}_{{u_\ell }u_{\ell ,f}{u_\ell }{u_\ell }}}({\tau })} \right] , \\ \end{array} \end{aligned}$$
(4.6)

\({{{\hat{C}}}_{{u_\ell }{x_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau })\) and \({{{\hat{C}}}_{{u_\ell }{u_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau })\) are computed using the following filter:

$$\begin{aligned} {F_\ell }(p) = \frac{1}{{{\mathcal {P}}_\ell (p)}}. \end{aligned}$$
(4.7)

This filter contains the true parameters of the \(\ell ^{th}\) subsystem, which are unknown but can be estimated. Thus, the proposed idea is to build the filter iteratively, using the estimates obtained at each iteration i:

$$\begin{aligned} F_{\ell }^{i}(p) = \frac{1}{{{{\mathcal {{\hat{P}}}_\ell }^{i-1}}(p)}} = \frac{1}{{1 + \sum \limits _{n = 1}^{{N_\ell }} {{\hat{a}}_{{n,\ell }}^{i-1}{p^{{\alpha _{n,\ell }}}}} }}. \end{aligned}$$
(4.8)

The optimal parameter vector, obtained at each iteration i, is the solution of the following problem:

$$\begin{aligned} {\hat{\theta }} _{\ell }^i = \arg \mathop {\min }\limits _{\theta _{\ell }^i} \left( {{J}(\theta _{\ell }^i)} \right) , \end{aligned}$$
(4.9)

obtained by minimizing the criterion:

$$\begin{aligned} {J}\left( {\theta _\ell ^i} \right) = \frac{1}{{{N_h}}}\sum \limits _{\tau = 1}^{{N_h}} {\frac{1}{2}} \left( {{{\left( {\epsilon _\ell ^i({\tau })} \right) }^T}\epsilon _\ell ^i({\tau })} \right) , \end{aligned}$$
(4.10)

\({\epsilon _\ell ^i({\tau })}\) denotes the output error of the \(\ell ^{th}\) \(\left( \ell =1,...,L\right) \) subsystem computed at iteration i using the estimates obtained at iteration \(i-1\).

The optimal solution defines the frac-foc-sriv estimator:

$$\begin{aligned} \begin{aligned} {\hat{\theta }} _\ell ^i = {\left( {\frac{1}{{{N_h}}}\sum \limits _{{\tau } = 1}^{{N_h}} {{\hat{\zeta }} _{\ell ,f}^i({\tau }){\hat{\varphi }} _{\ell ,f}^{i,T}({\tau })} } \right) ^{ - 1}}. \left( {\frac{1}{{{N_h}}}\sum \limits _{{\tau } = 1}^{{N_h}} {{\hat{\zeta }} _{\ell ,f}^i({\tau }){{{\hat{C}}}_{{u_\ell }{x_{\ell ,f}}{u_\ell }{u_\ell }}}({\tau })} } \right) , \end{aligned} \end{aligned}$$
(4.11)

for \(\ell =1,...,L\). \({\hat{\varphi }} _{\ell ,f}^{i,T}(\tau )\) and \({{\hat{\zeta }^i} _{\ell ,f}}(\tau )\) are defined, respectively, by equations (4.12) and (4.13):

$$\begin{aligned} \begin{array}{l} {\hat{\varphi }} _{\ell ,f}^{i,T}({\tau }) = \left[ - {{{{\hat{C}}}_{{u_\ell }\kappa ^{i,1}{u_\ell }{u_\ell }}}({\tau }),...,-{{{\hat{C}}}_{{u_\ell }\kappa ^{i,N_\ell }{u_\ell }{u_\ell }}}({\tau }),}\right. \\ \left. {{{{\hat{C}}}_{{u_\ell }\varrho ^{i,0}{u_\ell }{u_\ell }}}({\tau }),...,{{{\hat{C}}}_{{u_\ell }\varrho ^{i,M_\ell }{u_\ell }{u_\ell }}}({\tau })} \right] , \\ \end{array} \end{aligned}$$
(4.12)

where:

  • \(\kappa ^{i,n}={x_{\ell ,f}^{{i,\alpha _{n,\ell }}}},\,\{n=1,...,N_\ell \}\) is the fractional derivative of the \(\ell ^{th}\) filtered output signal,

  • \(\varrho ^{i,m}=u_{{\ell ,f}}^{{\beta _{m,\ell }}},\,\{m=0,...,M_\ell \}\) is the fractional derivative of the \(\ell ^{th}\) filtered input signal.

$$\begin{aligned} \begin{array}{l} {\hat{\zeta }} _{\ell ,f}^{i,T}({\tau }) = \left[ -{{{{\hat{C}}}_{{u_\ell }\hat{\kappa }^{i,1}{u_\ell }{u_\ell }}}({\tau }),...,-{{{\hat{C}}}_{{u_\ell }\hat{\kappa }^{i,N_\ell }{u_\ell }{u_\ell }}}({\tau }),}\right. \\ \left. {{{{\hat{C}}}_{{u_\ell }\varrho ^{i,0}{u_\ell }{u_\ell }}}({\tau }),...,{{{\hat{C}}}_{{u_\ell }\varrho ^{i,M_\ell }{u_\ell }{u_\ell }}}({\tau })} \right] , \\ \end{array} \end{aligned}$$
(4.13)

\({\hat{\kappa }}^{i,n}={{\hat{x}}_{\ell ,f}^{{i,\alpha _{n,\ell }}}},\,\{n=1,...,N_\ell \}\) is the fractional derivative of the \(\ell ^{th}\) estimated and filtered output signal.
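Stripped of the cumulant machinery, the update (4.11) is a standard instrumental-variable solve. A minimal sketch, assuming the instrument matrix `Zeta` (rows \({\hat{\zeta }} _{\ell ,f}^{i,T}(\tau )\)), the regressor matrix `Phi` (rows \({\hat{\varphi }} _{\ell ,f}^{i,T}(\tau )\)) and the right-hand-side vector of \({{{\hat{C}}}_{{u_\ell }{x_{\ell ,f}}{u_\ell }{u_\ell }}}(\tau )\) values have already been built from the cross-cumulant estimates (all names hypothetical):

```python
import numpy as np

def iv_solve(Zeta, Phi, rhs):
    """One frac-foc-sriv update, equation (4.11):
    theta = (Zeta^T Phi)^{-1} (Zeta^T rhs), averaged over the N_h lags.
    Zeta, Phi: (N_h, n_theta) instrument / regressor matrices,
    rhs: (N_h,) right-hand-side vector."""
    # the two 1/N_h factors of (4.11) cancel, so solve the normal
    # equations of the instrumental-variable problem directly
    return np.linalg.solve(Zeta.T @ Phi, Zeta.T @ rhs)
```

When the instruments are built from the estimated noise-free output, \({\hat{\zeta }}\) is asymptotically uncorrelated with the equation error, which is what removes the bias of the plain least-squares solution; on noise-free synthetic data with `Zeta = Phi` the solve recovers the true parameter vector exactly.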

The frac-foc-sriv algorithm is summarized in three steps:

(figure a: frac-foc-sriv algorithm)

4.2 Example 1

The following fractional system with three inputs and single output is used to analyze the performance of the developed frac-foc-sriv algorithm:

$$\begin{aligned} (\mathcal {H_\text {1}}):\left\{ \begin{array}{l} {{\tilde{y}}}(t) =H_1(p) {{\tilde{u}}_{1}}(t) + H_2(p){{\tilde{u}}_{2}}(t) + H_3(p){{\tilde{u}}_{3}}(t) \\ \displaystyle { {H_1}(p) = \frac{{{b_{0,1}}}}{{1 + {a_{1,1}}{p^{\alpha _{ 1,1}}}}} = \frac{{1.5}}{{1 + 3{p^{0.8}}}} }\\ \displaystyle {{H_2}(p) = \frac{{{b_{0,2}}}}{{1 + {a_{1,2}}{p^{\alpha _{ 1,2}}}}} = \frac{2}{{1 + {p^{0.2}}}}} \\ \displaystyle { {H_3}(p) = \frac{{{b_{0,3}}}}{{1 + {a_{1,3}}{p^{\alpha _{ 1,3}}}}} = \frac{1}{{1 + 0.5{p^{1.2}}}}} \\ {u_\ell }({t_k}) = {{\tilde{u}}_{\ell }}({t_k}) + {e_{{{\tilde{u}}_{\ell }}}}({t_k}), \ell =1,2,3 \\ y({t_k}) = {{\tilde{y}}}({t_k}) + {e_{{\tilde{y}}}}({t_k}). \\ \end{array} \right. \end{aligned}$$
(4.18)

The noise-free inputs are chosen as uncorrelated multisine signals with a non-Gaussian pdf. The available data, plotted in Figure 2, are sampled uniformly with a sampling period \(h=0.05\) sec. The number of samples is \(N_t=7000\).
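Such excitation can be generated, for instance, as a sum of sinusoids with random phases; the frequencies and amplitudes below are illustrative, not those used in the paper.

```python
import numpy as np

def multisine(freqs, amps, phases, h, n):
    # sum of sinusoids: a bounded, hence non-Gaussian, excitation signal
    t = np.arange(n) * h
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))

rng = np.random.default_rng(1)
u1 = multisine([0.1, 0.23, 0.57], [1.0, 1.0, 1.0],
               rng.uniform(0, 2 * np.pi, 3), h=0.05, n=7000)
```

A multisine has a bounded, platykurtic distribution (negative excess kurtosis), so its fourth-order cumulants do not vanish, as required by the cumulant-based estimators.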

Fig. 2: Three input and single output signals used for system identification

The performance is assessed through \(N_{mc}=100\) Monte Carlo runs with different noise realizations. Independent additive white noise sequences corrupt the input signals, with a Signal-to-Noise Ratio (SNR) denoted \(SNR_u\), and the output signal, with an SNR denoted \(SNR_y\). They are defined, respectively, by:

$$\begin{aligned} SN{R_u}= & {} 10\log \left( {\frac{{{\textrm{var}} ({u_\ell })}}{{{\textrm{var}} ({e_{{{\tilde{u}}_{\ell }}}})}}} \right) ,\,\,\ell = 1,...,L, \end{aligned}$$
(4.19)
$$\begin{aligned} SN{R_y}= & {} 10\log \left( {\frac{{{\textrm{var}} ({y})}}{{{\textrm{var}} ({e_{{{{\tilde{y}}}}}})}}} \right) , \end{aligned}$$
(4.20)

where \({\textrm{var}}\) denotes the variance.
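In simulation, the noise variance can be set from a target SNR by inverting these definitions. A sketch, where the noise-free signal is used for scaling and the function name is illustrative:

```python
import numpy as np

def add_noise(x, snr_db, rng):
    """Add white Gaussian noise e to x such that
    10*log10(var(x)/var(e)) = snr_db."""
    var_e = np.var(x) / 10.0 ** (snr_db / 10.0)
    return x + rng.normal(0.0, np.sqrt(var_e), x.shape)
```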

In this section, the structures of all the SISO subsystems making up the MISO system are assumed to be known a priori (assumption A1 satisfied).

4.2.1 Known differentiation orders

Assuming that all differentiation orders are known (assumption A2 satisfied), the performance of the frac-foc-sriv algorithm is compared with that obtained with the frac-foc-ls algorithm developed in [3] and used in the initialization step. The SVF cut-off frequency used in the frac-foc-ls algorithm is set to 5 rad/sec and \(N_h\) is set to 1000.

Both the frac-foc-ls and frac-foc-sriv algorithms are applied for each noise realization. Table 1 provides the results obtained for two noise levels, \(SNR_u=SNR_y=20\)dB and \(SNR_u=SNR_y=10\)dB. It contains the means of the estimates (\(\bar{\Theta }\)), their standard deviations (\({\hat{\sigma }}(\Theta )\)) and the Normalized Relative Quadratic Error (NRQE)

$$\begin{aligned} NRQE = \frac{1}{{{N_{mc}}}}\sqrt{\frac{{{{\left\| {{\hat{\Theta }} - {\Theta _0}} \right\| }^2}}}{{{{\left\| {{\Theta _0}} \right\| }^2}}}}, \end{aligned}$$
(4.21)

where \({\Theta _0}\) denotes the true parameter vector.
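A literal transcription of (4.21), with \(\hat{\Theta }\) taken as the stack of the \(N_{mc}\) estimates (the aggregation over runs is implicit in the paper's notation):

```python
import numpy as np

def nrqe(theta_hats, theta0, n_mc):
    # NRQE = (1/N_mc) * sqrt(||theta_hats - theta0||^2 / ||theta0||^2)
    d2 = np.sum((np.asarray(theta_hats) - np.asarray(theta0)) ** 2)
    return np.sqrt(d2 / np.sum(np.asarray(theta0) ** 2)) / n_mc
```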

Table 1 Parameter estimation obtained by applying the frac-foc-ls and the frac-foc-sriv algorithms in presence of white noise (\(SNR_u\)=\(SNR_y\)=20dB and 10dB)

Thanks to the iterative strategy of the instrumental variable estimator and the use of fourth-order cumulants, the frac-foc-sriv algorithm yields unbiased estimates. Indeed, the means of the estimated parameters are close to the true ones, and the variances are very low. The consistency of the frac-foc-sriv algorithm is confirmed by comparison with the frac-foc-ls algorithm: the latter also provides unbiased estimates in presence of high noise levels, but with larger variances.

4.2.2 Unknown differentiation orders

If the differentiation orders are unknown (assumption A2 not satisfied), as is the case for real systems, the estimation of the global commensurate order is crucial: it allows all differentiation orders to be deduced when the structures of all SISO sub-models are known (assumption A1 satisfied). Hence, the influence of the global commensurate order on the parameter estimation is analyzed. The frac-foc-ls and frac-foc-sriv algorithms are applied for different values of \(\upsilon _G\) chosen in the range 0.1 to 0.3 (for \(SNR_u=SNR_y=10\)dB).

The following \(\ell _2\)-norm (in dB) of the normalized output error is evaluated for each value of \(\upsilon _G\):

$$\begin{aligned} J = 10\log \left( {\frac{{{{\left\| {y(t) - {\hat{y}}(t)} \right\| }^2}}}{{{{\left\| {y(t)} \right\| }^2}}}} \right) , \end{aligned}$$
(4.22)

where y and \({\hat{y}}\) denote, respectively, the measured and the estimated output. The minimum value of J should be reached at the true global commensurate order and should approach the noise-to-signal ratio \(NSR=-SNR_y=-10\)dB.
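The criterion (4.22) is straightforward to evaluate for each candidate order; a sketch:

```python
import numpy as np

def output_error_db(y, y_hat):
    # J = 10*log10(||y - y_hat||^2 / ||y||^2), eq. (4.22)
    return 10.0 * np.log10(np.sum((y - y_hat) ** 2) / np.sum(y ** 2))
```

Scanning \(\upsilon _G\) over a grid, identifying the coefficients for each value and retaining the minimizer of J reproduces the procedure behind Figure 3.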

The variation of the cost function J versus the global commensurate order is plotted in Figure 3. The plot shows that the minimum value of J is obtained, by both algorithms, at the same value \(\upsilon _G=0.2\).

Fig. 3: Cost function versus the global commensurate order

The modeling error obtained by the frac-foc-ls algorithm equals 0.4dB (\(J_{frac-foc-ls} =-10.4\)dB) and that obtained by the frac-foc-sriv algorithm equals 0.36dB (\(J_{frac-foc-sriv}=-10.36\)dB).

The frac-foc-sriv algorithm gives the lowest modeling error for \(\upsilon _G=0.2\).

5 All parameters estimation

5.1 frac-foc-sriv combined with order optimization algorithm

In this section, only assumption A2 is not satisfied, and all fractional orders are estimated together with the linear coefficients. This amounts to dealing with the three sub-cases discussed in Section 3.

The parameter vector, given by equation (3.3), (3.4) or (3.6), is obtained by minimizing the following output error criterion:

$$\begin{aligned} J(\Theta ) = \frac{1}{2}\left\| {\varepsilon ({t})} \right\| _2^2, \end{aligned}$$
(5.1)

where

$$\begin{aligned} {\varepsilon ({t})}=y(t)-{\hat{y}}(t), \end{aligned}$$
(5.2)

and \({\hat{y}}(t)\) denotes the estimated output.

This error is linear with respect to the MISO model coefficients but nonlinear with respect to the differentiation orders. The proposed idea is therefore to estimate the linear coefficients using the frac-foc-sriv algorithm and the differentiation orders using a nonlinear optimization technique (such as Gauss-Newton or Levenberg-Marquardt) [3, 24].

Denote by \(\vartheta \) the vector of differentiation orders to be optimized:

\(\bullet \) Case 1: global commensurate order optimization:

$$\begin{aligned} \vartheta =\upsilon _G. \end{aligned}$$
(5.3)
\(\bullet \) Case 2: local commensurate orders optimization:

$$\begin{aligned} \vartheta =\nu =\left[ \upsilon _1,...,\upsilon _\ell ,...,\upsilon _L\right] . \end{aligned}$$
(5.4)
\(\bullet \) Case 3: all differentiation orders optimization:

$$\begin{aligned} \vartheta =\mu =\left[ \mu _1,...,\mu _\ell ,...,\mu _L\right] . \end{aligned}$$
(5.5)

The differentiation orders vector is estimated iteratively according to:

$$\begin{aligned} \vartheta ^{i + 1} = \vartheta ^i - \lambda \left[ {\tilde{H}}^{ - 1}G \right] _{{\vartheta } = \vartheta ^i}, \end{aligned}$$
(5.6)

where \({\tilde{H}}\) is the approximated Hessian [6]:

$$\begin{aligned} {\tilde{ H}} = \frac{{\partial {\varepsilon ^T}}}{{\partial {\vartheta }}}\frac{{\partial \varepsilon }}{{\partial {\vartheta }}}, \end{aligned}$$
(5.7)

G denotes the gradient, defined as follows:

$$\begin{aligned} G=\frac{{\partial J }}{{\partial {\vartheta }}}= \frac{{\partial {\varepsilon ^T}}}{{\partial {\vartheta }}}\varepsilon , \end{aligned}$$
(5.8)

and \(\lambda \) is a positive weighting constant used to ensure the decrease of J.
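Update (5.6) can be sketched generically with a finite-difference Jacobian; the damping value, iteration count and the toy model \(y(t)=t^{\alpha }\) below are illustrative, since the paper uses analytic sensitivity functions [24]:

```python
import numpy as np

def gauss_newton(theta0, residual, lam=0.5, n_iter=100, eps=1e-6):
    """Damped Gauss-Newton iteration (eqs. 5.6-5.8):
    theta <- theta - lam * (J^T J)^{-1} J^T epsilon."""
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    for _ in range(n_iter):
        r = residual(theta)
        J = np.empty((r.size, theta.size))
        for k in range(theta.size):
            d = np.zeros_like(theta)
            d[k] = eps
            # forward finite-difference column of the Jacobian
            J[:, k] = (residual(theta + d) - r) / eps
        theta = theta - lam * np.linalg.solve(J.T @ J, J.T @ r)
    return theta

# toy example: recover the order alpha in y(t) = t**alpha
t = np.linspace(0.1, 2.0, 50)
y = t ** 0.8
alpha = gauss_newton([0.3], lambda th: t ** th[0] - y)
```

Replacing \({\tilde{H}}\) by \({\tilde{H}}+\lambda I\) would turn this into the Levenberg-Marquardt variant mentioned above.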

The analytic computation of the output error sensitivity functions with respect to the differentiation orders is detailed in [24].

The frac-foc-sriv algorithm combined with order optimization (frac-foc-oosriv) is summarized in three steps:

[Algorithm b: the frac-foc-oosriv algorithm]

5.2 Example 2

The consistency of the developed frac-foc-oosriv algorithm is studied in this paragraph. The same data, generated by the fractional MISO model of equation (4.18), are used. Firstly, in accordance with Definition 4, the global commensurate order is estimated together with the linear coefficients. The frac-foc-oosriv performance is compared with that obtained by applying the output-error (oe) based optimization algorithm. The latter was developed in [24] to deal with fractional MISO system identification when only the output measurements are noisy. It is used in this paragraph to demonstrate the efficiency of the frac-foc-oosriv algorithm when both input and output measurements are noisy. Secondly, according to Definition 3, the local commensurate orders of all SISO sub-models are estimated along with the linear coefficients.

5.2.1 Global commensurate order estimation

To demonstrate the relevance of the oe and frac-foc-oosriv algorithms, a Monte Carlo simulation is performed for two Gaussian white noise levels affecting both input and output measurements: \(SNR_u=SNR_y=20\)dB and \(SNR_u=SNR_y=10\)dB. The parameters which must be initialized are chosen arbitrarily as follows:

  • for the oe algorithm:

    $$\begin{aligned} {\Theta ^0} = \left[ {a_{1,1}^0,b_{0,1}^0,a_{1,2}^0,b_{0,2}^0,a_{1,3}^0,b_{0,3}^0,\upsilon _G^0} \right] = \left[ {0.1\,,0.1,0.1,0.1,0.1,0.1,0.3} \right] , \end{aligned}$$
  • for the frac-foc-oosriv algorithm:

    $$\begin{aligned} \vartheta ^0 = \upsilon _G^0 = 0.3. \end{aligned}$$

A Monte Carlo simulation with \(N_{mc}=50\) runs and different noise realizations is carried out. Table 2 summarizes the results obtained by the oe and frac-foc-oosriv algorithms. The oe algorithm gives good results for \(SNR_u=SNR_y=20\)dB: the means of the estimates are close to the true values, the variances are minimal and the NRQE is low. However, for \(SNR_u=SNR_y=10\)dB, it gives biased estimates. Consequently, the oe algorithm can be used in the EIV context, but only in presence of low noise levels. On the other hand, the frac-foc-oosriv algorithm produces unbiased estimates with a low NRQE and low standard deviations. These results confirm the consistency of the frac-foc-oosriv algorithm in the EIV framework.

Table 2 Parameter estimation obtained by applying the oe and the frac-foc-oosriv algorithms in presence of white noise (\(SNR_u=SNR_y=20\)dB and 10dB): coefficients and global commensurate order estimation

For one Monte Carlo run, the cost function (equation (4.22)) is evaluated and plotted in Figure 4. The modeling error equals 0.0419dB for \(SNR_u=SNR_y=20\)dB and 0.4044dB for \(SNR_u=SNR_y=10\)dB. Even when the noise level is high, the modeling error remains small. As shown in Figure 5, the global commensurate order estimate converges to the true value.

Fig. 4: Cost function variation versus number of iterations: coefficients and global commensurate order estimation

Fig. 5: Global commensurate order versus number of iterations

5.2.2 Local commensurate orders estimation

In this paragraph, the local commensurate orders of all sub-models composing the MISO model (equation (4.18)) are estimated by applying the frac-foc-oosriv algorithm for:

  • white noise affecting input and output measurements (\(SNR_u=SNR_y=20\)dB and \(SNR_u=SNR_y=10\)dB);

  • white and colored noise affecting, respectively, input and output measurements (\(SNR_u=SNR_{e_{{\tilde{y}}}^0}=5\)dB).
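The colored output noise can be produced, for instance, by passing white noise through a low-pass filter; the first-order AR coloring below is an assumption, since the paper's coloring filter is not reproduced here:

```python
import numpy as np

def colored_noise(n, rng, a=0.9):
    """AR(1)-colored noise: c[k] = a*c[k-1] + e[k], with e white Gaussian."""
    e = rng.normal(size=n)
    c = np.empty(n)
    c[0] = e[0]
    for k in range(1, n):
        c[k] = a * c[k - 1] + e[k]
    return c
```

The resulting sequence has a lag-one autocorrelation close to a; it can then be rescaled to the desired SNR exactly as in (4.20).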

The initial parameter vector is chosen arbitrarily as follows:

$$\begin{aligned} \vartheta ^0 = \left[ \upsilon _1^0,\upsilon _2^0,\upsilon _3^0\right] =\left[ 0.5,0.5,0.5\right] . \end{aligned}$$

The Monte Carlo simulation results are summarized in Table 3. The frac-foc-oosriv algorithm accurately identifies the local commensurate orders and the linear coefficients. The means of the estimates are close to the true values, with very low standard deviations and NRQE. These results show that, even with a large number of parameters, the linear coefficients as well as all local commensurate orders can be estimated consistently.

Table 3 Parameter estimation obtained by applying the frac-foc-oosriv algorithm in presence of white and colored noise: coefficients and local commensurate orders estimation

Figures 6 and 7 respectively show the cost function and the local commensurate orders versus the number of iterations for one Monte Carlo run. All local commensurate orders converge to the true orders with a very low modeling error, which equals 0.039dB for \(SNR_u=SNR_y=20\)dB and 0.4013dB for \(SNR_u=SNR_y=10\)dB.

Fig. 6: Cost function variation versus the number of iterations in presence of white noise (for \(SNR_u=SNR_y=20\)dB (a) and for \(SNR_u=SNR_y=10\)dB (b)): coefficients and local commensurate order estimation

Fig. 7: Local commensurate orders variation versus the number of iterations in presence of white noise (for \(SNR_u=SNR_y=20\)dB (a) and for \(SNR_u=SNR_y=10\)dB (b))

5.3 Example 3

In this example, the data-generating system is not commensurate, and all fractional differentiation orders are estimated along with the linear coefficients.

Consider the following two-input-single-output fractional system:

$$\begin{aligned} (\mathcal {H_\text {2}}):\left\{ \begin{array}{l} \displaystyle { {\tilde{y}}(t) = {G_1}(p){{\tilde{u}}_{1}}(t) + {G_2}(p){{\tilde{u}}_{2}}(t)} \\ \displaystyle {{G_1}(p) = \frac{{{b_{0,1}}}}{{1 + {a_{1,1}}{p^{{\alpha _{1,1}}}} + {a_{2,1}}{p^{{\alpha _{2,1}}}}}} = \frac{{1.5}}{{1 + 1.5{p^{0.8}} + 2{p^{1.8}}}}} \\ \displaystyle {{G_2}(p) = \frac{{{b_{0,2}}}}{{1 + {a_{1,2}}{p^{{\alpha _{1,2}}}} + {a_{2,2}}{p^{{\alpha _{2,2}}}}}} = \frac{2}{{1 + 0.5{p^{0.6}} + {p^{1.5}}}}} \\ {u_1}({t_k}) = {{\tilde{u}}_{1}}({t_k}) + {e_{{{\tilde{u}}_{1}}}}({t_k}) \\ {u_2}({t_k}) = {{\tilde{u}}_{2}}({t_k}) + {e_{{{\tilde{u}}_{2}}}}({t_k}) \\ y({t_k}) = {{\tilde{y}}}({t_k}) + {e_{{\tilde{y}}}}({t_k}). \\ \end{array} \right. \end{aligned}$$
(5.9)

The noise-free input signals are chosen as uncorrelated multisine signals. The input-output data, represented in Figure 8, are uniformly sampled with a sampling period \(h=0.05\) sec. The number of samples is \(N_t=9000\), and the number of samples used for system identification is set to \(N_h=1000\).

Fig. 8: Two input and single output signals used for system identification

To provide representative results, a Monte Carlo simulation of \(N_{mc}=50\) runs is performed. For each run, a new noise realization on the input and output signals is generated and the frac-foc-oosriv algorithm is applied. Since the latter is iterative, an initialization step is necessary: the coefficients are initialized by applying the frac-foc-ls estimator (\(\omega _f=2\,\)rad/sec) and the differentiation orders are initialized arbitrarily as follows:

$$\begin{aligned} \vartheta ^0=\left[ 0.9,0.4,0.75,0.3\right] . \end{aligned}$$

The obtained results are presented in Table 4 for two levels of white noise affecting the input and output signals, and for one level of colored noise. The frac-foc-oosriv algorithm gives very accurate results: all estimated parameters are unbiased, the variances are very low and the NRQE remains low even in presence of a high noise level.

Table 4 Parameter estimation obtained by applying the frac-foc-oosriv algorithm in presence of white and colored noise: coefficients and all differentiation orders estimation
Fig. 9: Cost function versus the number of iterations in presence of white noise (\(SNR_u=SNR_y=10\)dB): coefficients and all orders estimation

Fig. 10: All orders variation versus the number of iterations in presence of white noise (\(SNR_u=SNR_y=10\)dB)

For one noise realization with \(SNR_u=SNR_y=10\)dB, the cost function and all order estimates versus the number of iterations are plotted in Figure 9 and Figure 10, respectively. All fractional orders converge to their true values with a small modeling error (0.4244dB).

6 Conclusions

Continuous-time MISO system identification with fractional models, when the available input-output data are noisy or contain measurement errors, has been studied in this paper. A new extension of the simplified and refined instrumental variable algorithm has been proposed. It involves using fourth-order cross-cumulant estimates, which are insensitive to Gaussian noise, instead of the noisy input and output signals. The resulting algorithm is called the fractional fourth-order cumulants based simplified and refined instrumental variable (frac-foc-sriv) algorithm. Assuming that the structures and differentiation orders of the whole system are known a priori, this estimator estimates only the linear coefficients. When only the structures are known and there is no prior knowledge of the differentiation orders, a combination of the frac-foc-sriv estimator with a nonlinear optimization technique has been proposed. Three cases have been highlighted: coefficients and global commensurate order estimation, coefficients and local commensurate orders estimation, and coefficients and all differentiation orders estimation. The latter case applies when the commensurability constraint is released. The performance of the developed algorithms has been analyzed by comparing them with the fractional fourth-order cumulants based least squares (frac-foc-ls) and the output error (oe) based optimization algorithms. The frac-foc-sriv and frac-foc-oosriv algorithms gave unbiased estimates in presence of white and colored noise. In future work, it will be interesting to study the identification of unstable MISO systems, and to develop a technique for selecting the unknown structures of the SISO subsystems.