1 Introduction

Engineering structures are often designed to behave linearly under normal operating conditions. Nevertheless, civil structures inevitably suffer a certain level of deterioration during their service life owing to corrosion and/or fatigue damage, aging of construction materials, long-term load effects, accidental events, and natural catastrophes. Such damage makes the structural properties nonlinear and time-varying, so structures may exhibit nonlinear behavior during their service life under operational and environmental loads. Typical sources of nonlinearity include cracks that develop during service and subsequently open and close under operational loading; loose connections, debonding between concrete and reinforcing steel, loose bolts, and interference fits that loosen because of material deformation; delamination in bonded, layered materials such as fiber-reinforced composite plates and shells; and material nonlinearities associated with excessive deformation, such as yielding of steel. Hence, parametric identification of these nonlinear systems is essential for structural health monitoring.

Nonlinear system identification is a highly challenging inverse engineering problem. It can be viewed as a succession of three steps: detection, characterization, and parameter estimation. Identification of nonlinear dynamical systems has been investigated extensively in recent years, and the literature is vast. To give a flavor of the range of techniques developed, a few of them that have met with reasonable success are discussed in the relevant subsections of this paper.

In this paper, we present a new methodology for nonlinear system identification covering all three stages mentioned above. In the first stage, we present a data-driven null-subspace method to robustly identify the presence and the degree of nonlinearity, in order to judge whether the effects of nonlinearity are significant enough to treat the structural behavior as nonlinear. Once the presence of nonlinearity is confirmed, in the second stage we use a technique based on the reverse path method to identify the spatial location of the nonlinearity. In the final stage, we identify the nonlinear system parameters by formulating the task as an inverse problem and solving the resulting complex optimization problem using a newly developed hybrid dynamic quantum particle swarm optimization (HDQPSO) algorithm.

Several numerically simulated examples have been solved to demonstrate the effectiveness of the proposed nonlinear system identification algorithm. The numerical investigations carried out in this paper clearly indicate that the null-subspace method is an effective tool for detecting both the presence and the degree of nonlinearity of the structure. The studies also reveal that the reverse path method precisely locates the nonlinearity in the structure, and that the HDQPSO algorithm identifies the nonlinear parameters with good accuracy.

2 Detection of the presence of nonlinearity

Several time- and frequency-domain methods exist in the literature for detecting the presence of nonlinearity. The techniques include the Hilbert transform, time–frequency analysis, spectral density analysis, time series models, higher-order spectral analysis, principal component analysis, and Volterra and Wiener approaches [13]. Our concern in this paper is civil engineering structures, for which ambient vibration data are far more convenient to measure than forced response data. Although a few techniques use only ambient vibration data, they are highly susceptible to measurement noise. In this paper, we use a null-subspace method that requires only time-domain ambient vibration data to detect the presence of nonlinearity.

The null-space method has earlier been used for damage detection [4] and sensor fault detection [5]. In this paper, it is proposed to explore this method to detect the presence of nonlinearity and also to assess its severity.

To illustrate the null-subspace-based approach, consider a structural system whose time history response is to be measured. Since the response is usually measured in the form of acceleration time histories, the structural system is instrumented with ‘m’ accelerometers. The acceleration response is measured periodically, resulting in several data sets; in the case of continuous online monitoring of the structure, the sampled data can be partitioned into several data subsets. For each data subset, a block Hankel matrix of the output data can be constructed as

$$\begin{aligned} \hat{H}_{1,2i} = \left[ \begin{array}{cccc} y_1 &\quad y_2 &\quad \cdots &\quad y_{j} \\ \vdots &\quad \vdots &\quad &\quad \vdots \\ y_i &\quad y_{i+1} &\quad \cdots &\quad y_{i+j-1} \\ y_{i+1} &\quad y_{i+2} &\quad \cdots &\quad y_{i+j} \\ \vdots &\quad \vdots &\quad &\quad \vdots \\ y_{2i} &\quad y_{2i+1} &\quad \cdots &\quad y_{2i+j-1} \\ \end{array} \right] \equiv \left[ \begin{array}{c} Y_{p} \\ Y_{f} \\ \end{array} \right] \equiv \begin{array}{c} \hbox {Past} \\ \hbox {Future} \\ \end{array} \end{aligned}$$
(1)

where 2i and j indicate the user-defined numbers of block rows and columns \((j=N-2i+1)\), and N indicates the length of the time history response. The Hankel matrix \(\hat{H}_{1,2i} \in \mathfrak {R}^{2mi\times j}\) is split into a “past” and a “future” part of ‘i’ block rows each. Performing the singular value decomposition (SVD) on the weighted Hankel matrix, we obtain:

$$\begin{aligned} \overline{H}_{p,q} = W_1 \hat{H}_{p,q} W_2 \approx \left[ \begin{array}{cc} U_{L1} &\quad U_{L0} \\ \end{array} \right] \left[ \begin{array}{cc} S_{L1} &\quad 0 \\ 0 &\quad 0 \\ \end{array} \right] \left[ \begin{array}{cc} V_{L1} &\quad V_{L0} \\ \end{array} \right]^{T} = U_{L} S_{L} V_{L}^{T} \end{aligned}$$
(2)

where \(W_{1}\) and \(W_{2}\) are weighting matrices chosen as identity matrices for simplicity. Due to the orthonormal property of matrices, we can always exactly verify the following relationship mathematically, for any data set,

$$\begin{aligned}&U_{L0}^{T} \hat{H}_{p,q} =0\end{aligned}$$
(3)
$$\begin{aligned}&U_{L0}^{T} (U_{L1} S_{L1} V_{L1}^{T} )=0\end{aligned}$$
(4)
$$\begin{aligned}&U_{L0}^{T} U_{L1} =0 \end{aligned}$$
(5)

where \(U_{L0}\) and \(U_{L1}\) refer to the column null subspace and the column active subspace of the weighted Hankel matrix \(\overline{H}_{p,q}\). To determine the sizes of these matrices, we scan through the singular values in \(S_{L}\) until they become zero or insignificant; the left singular vectors in \(U_{L}\) corresponding to the null singular values form \(U_{L0}\), and those corresponding to the active singular values form \(U_{L1}\). It can easily be realized that \(U_{L1}\), containing the first \(l\) active principal components, represents a hyperplane around which the response data are located. We can assess the state of the structural system (i.e., detect and quantify nonlinearity) by observing the changes in orthonormality between different data sets (i.e., rotation of the subspace) using the above orthonormal relationship. The subspace of the Hankel matrix remains unchanged, i.e., no rotation of the subspace takes place, unless there is either environmental variation or a change in the dynamic characteristics indicated by the presence of nonlinearity.
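As an illustration of this step, the following Python sketch (our own construction, not code from the paper) builds the block Hankel matrix of Eq. (1), performs the SVD of Eq. (2) with identity weights, and splits the left singular vectors into active and null subspaces using an energy threshold; the 99.5 % value follows the criterion used later in the numerical studies, and all function and variable names are assumptions.

```python
# Illustrative sketch: block Hankel matrix of Eq. (1) and its active/null subspaces.
import numpy as np

def hankel_subspaces(Y, i, energy=0.995):
    """Y: (m, N) array of measured outputs; i: number of block rows per half."""
    m, N = Y.shape
    j = N - 2 * i + 1                                    # number of block columns
    H = np.vstack([Y[:, k:k + j] for k in range(2 * i)]) # (2*m*i, j) block Hankel matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    # keep the leading singular values carrying the prescribed fraction of the energy
    n_active = np.searchsorted(np.cumsum(s) / s.sum(), energy) + 1
    U_active = U[:, :n_active]                           # U_L1: active (signal) subspace
    U_null = U[:, n_active:]                             # U_L0: null subspace
    return U_active, U_null
```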

Theoretically, if the response is linear, the orthonormal relationship between two different data sets (i.e., the column active subspace \(U_{L1}\) of the first data set and the column null subspace \(U_{L0}\) of the second data set) holds, and the product must be zero. In practice, however, it will not be zero because of environmental variations, variation of the ambient excitation, and noise in the measurement process. Therefore, the residue matrix \(R_{s}\) is defined as the product of the null-subspace matrix \((U_{{L0,r}}^T )\) of the baseline data and the active subspace matrix \((U_{{L1, c}})\) of the current data set.

$$\begin{aligned} {R}_s =U_{L0,r}^{T} U_{L1,c} \end{aligned}$$
(6)

The residue matrix \(R_{s}\) contains the information about how the newly acquired data have been altered. In this paper, we use two indices built from the residue matrix, the system state index (SSI) and the degree of nonlinearity (DoN), to detect and quantify the nonlinearity present in the system [6]. The system state index is defined as follows

$$\begin{aligned} \hbox {SSI}=\hbox {trace}(Q)/n_{\mathrm{ap}} \quad \hbox { with}\quad Q=R_s^{T} R_{s} \end{aligned}$$
(7)

where \(Q\) and \(n_{\mathrm{ap}}\) indicate the covariance of the residue matrix and the number of active principal components of the current data set, respectively. The value of SSI lies in the range [0, 1]. A large value of SSI indicates a change in the state of the system from linear to nonlinear. The DoN is given by

$$\begin{aligned} \hbox {DoN}=\frac{\left\| {\beta _C} \right\| }{\left\| {\beta _R} \right\| } \end{aligned}$$
(8)

where \(\beta _C\) and \(\beta _R\) are vectors derived from the residue matrices \(R_s\) of the current and reference data (i.e., data when the structure is in linear state), respectively, and can be defined for a residue matrix \(R_{s}\) of size \(m_{1} \times n_{1}\) as

$$\begin{aligned} {\beta }_i =\sum _{j=1}^{n_1 } {\left| {R_s^{(i,j)}} \right| } \quad \hbox { where } \quad i=1,2,\ldots , m_1 \end{aligned}$$
(9)

The degree of nonlinearity index is close to unity for a linear system; if it exceeds unity, the system is said to exhibit nonlinear behavior. The DoN value thus quantifies the severity of the nonlinearity present in the system.

As pointed out earlier, the residue matrix of a linear response cannot be zero owing to measurement noise, environmental variability, and other sources of error, and consequently, the system state index (SSI) also cannot be equal to zero. In view of this, a number of reference data sets with linear response at varied excitation levels are collected by taking measurements at different time instants and partitioning them into several sets. The SSI values calculated for all these data sets are used to obtain a limit point of linearity, the linear level indicator (LLI), defined as their mean value plus three times their standard deviation. When dealing with the current data set, the presence of nonlinearity in the structure is identified when the monitored SSI value exceeds the LLI.
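The indices of Eqs. (6)–(9) and the LLI threshold can then be sketched as follows; this is a hypothetical implementation with assumed names, building on the subspaces returned by the helper sketched above.

```python
# Sketch of residue matrix, SSI, DoN, and LLI (Eqs. 6-9 and the 3-sigma threshold).
import numpy as np

def residue(U_null_ref, U_active_cur):
    return U_null_ref.T @ U_active_cur                  # Eq. (6)

def ssi(R):
    Q = R.T @ R                                         # covariance of the residue matrix
    return np.trace(Q) / R.shape[1]                     # Eq. (7), n_ap active components

def beta(R):
    return np.abs(R).sum(axis=1)                        # Eq. (9), row-wise absolute sums

def don(R_cur, R_ref):
    return np.linalg.norm(beta(R_cur)) / np.linalg.norm(beta(R_ref))  # Eq. (8)

def lli(ssi_reference_values):
    v = np.asarray(ssi_reference_values)                # SSI of the linear reference subsets
    return v.mean() + 3.0 * v.std()                     # linear level indicator
```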

3 Identification of the spatial location

Since nonlinearity is generally present locally in structures, once its presence has been confirmed using the null-subspace-based approach and the degree of nonlinearity is found to be significant, it is desirable to identify the spatial location of the nonlinearity before identifying the nonlinear parameters. Information about the spatial location of the nonlinearity helps to separate the underlying linear system from the nonlinear parts and to create mathematical models for efficient parametric estimation. Hence, this step considerably reduces the complexity of nonlinear parametric identification. Procedures based on the restoring force surface method, test–analysis correlation, error localization in a linear model updating framework, pattern recognition, and methods using scanning laser vibrometry are some of the widely reported approaches for identifying the spatial location of nonlinearity [1, 2].

In this paper, we present a refined approach for identifying the spatial location of nonlinearity using the concept of the reverse path method [7]. A systematic search with MIMO (multiple-input/multiple-output) models is conducted to identify the exact spatial locations that render the system response nonlinear. However, the proposed approach requires knowledge of the possible form and type of nonlinearities present in the system in order to arrive at the exact spatial location. Hence, in this paper, we limit our problems of interest to systems having a polynomial form of nonlinearity; it is worth mentioning that various types of nonlinearities can be conveniently idealized by polynomials.

Any typical structural system can be described as a general MIMO system having a number of input signals \(F_{q}, q = 1, 2, \ldots , Q,\) and a number of output signals \(X_{p}, p = 1, 2, \ldots , P\), where the time notation is dropped for simplicity and the schematic view of the system is shown in Fig. 1.

Fig. 1 Multiple-input/multiple-output (MIMO) system

In the reverse path algorithm, the measured responses of the nonlinear system are used as external forces (artificial inputs) acting on an underlying linear system, and the actual external force applied is treated as the (artificial) output [8]. The time- and frequency-domain models of a generalized system with local nonlinearities treated as applied forces can be written as

$$\begin{aligned}&[M]\{ {\ddot{x}(t)} \}+[C]\{ {\dot{x}(t)} \}+[K]\{ {x(t)}\}\nonumber \\&\quad =\{ {f(t)} \}-\{ {g( {\{x\},\{ {\dot{x}}\},\{{\ddot{x}}\}})}\} \end{aligned}$$
(10)
$$\begin{aligned}&\{X(\omega )\}=[H(\omega )](\{F(\omega )\}-\{G(\omega )\})+\{N(\omega )\}\end{aligned}$$
(11)
$$\begin{aligned}&g_m (t)=\{q\}^{T} \cdot \{{y(t)}\}_m =\left[ \begin{array}{cccc} q_1 &\quad q_2 &\quad \cdots &\quad q_{Q-1} \\ \end{array} \right] \left[ \begin{array}{c} x_m^{2} (t) \\ x_m^{3} (t) \\ \vdots \\ x_{m}^{Q} (t) \\ \end{array} \right] \end{aligned}$$
(12)
$$\begin{aligned}&G_{m} (\omega )=\{q \}^{\mathrm{T}} \cdot \{{Y(\omega )}\}_m \end{aligned}$$
(13)

where M, C, and K represent the mass, damping, and stiffness matrices; the vectors \(f(t), x(t), g(\{x\},\{\dot{x}\},\{\ddot{x}\})\) represent the external force, the displacement, and the nonlinear restoring force, while \(F(\omega ),X(\omega ),G(\omega )\) denote the corresponding Fourier transforms. The nonlinear restoring force vector g(t) is composed of nonlinear functions of the response vectors and the polynomial coefficient vector {q}. The added term \(N(\omega )\) represents the measurement noise contaminating the response.

The reverse path model equation can be simplified by taking the column of the frequency response function corresponding to the applied force (DOF ‘k’) and that corresponding to the local nonlinearity (nonlinear DOF ‘m’) as

$$\begin{aligned} \left\lfloor {H_{k}} \right\rfloor \left\{ {F_{k}} \right\} -\left\lfloor {H_m} \right\rfloor \left\{ {G_m}\right\} =\{X\} \end{aligned}$$
(14)

Each row n can then be written as

$$\begin{aligned} H_{{nk}} ( \omega )\cdot F_{k} (\omega )-H_{{nm}} (\omega )\cdot G_m (\omega )=X_{n} (\omega ) \end{aligned}$$
(15)

After substituting Eq. (13) in Eq. (15), the reverse path model equation can be rewritten as

$$\begin{aligned}&\left[ \begin{array}{cc} H_{nk}^{-1} &\quad H_{nk}^{-1} H_{nm} (\omega )\{q\}^{T} \\ \end{array} \right] \cdot \left[ \begin{array}{c} X_{n} (\omega ) \\ \left\{ Y(\omega ) \right\} _m \\ \end{array} \right] + N_{k} (\omega )=F_{k} (\omega ) \end{aligned}$$
(16)

The MIMO model thus takes the displacement and all the nonlinear restoring force terms as inputs and the external force as output. The output predicted by the noise-free model is denoted \(U_{k}\), and the measured output including the noise \(N_{k}\) is \(F_{k}\). The underlying linear FRFs [8] of the MIMO system are given by

$$\begin{aligned}{}[B]^{T} = \left[ \begin{array}{cc} H_{nk}^{-1} &\quad H_{nk}^{-1} H_{nm} \{q\}^{T} \\ \end{array} \right] = [{G_{\mathrm{FX}}}]_{m,n} \cdot [{G_{\mathrm{XX}}}]_{m,n}^{-1} \end{aligned}$$
(17)
$$\begin{aligned}{}[{G_{\mathrm{FX}}}]_{m,n} = E\left[ F_{k} \cdot \left\{ \begin{array}{c} X_{n} \\ \{Y\}_m \\ \end{array} \right\} ^{H} \right] ;\quad \left[ G_{\mathrm{XX}} \right] _{m,n} = E\left[ \left\{ \begin{array}{c} X_{n} \\ \{Y\}_m \\ \end{array} \right\} \cdot \left\{ \begin{array}{c} X_{n} \\ \{Y\}_m \\ \end{array} \right\} ^{H} \right] \end{aligned}$$
(18)

where \([G_{\mathrm{FX}}]_{m,n}\) is the cross-spectral row vector between the output and all inputs, \([G_{\mathrm{XX}}]_{m,n}\) is the auto-spectral matrix of the inputs, and \(E[\cdot]\) denotes the expected value. Subscript m denotes the nonlinear DOF, while subscript n is the linear response DOF used in the MIMO model shown in Fig. 2.
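To make this estimation step concrete, the following Python sketch (our own construction, not code from the paper) estimates \([B]^T(\omega)\) of Eq. (17) from time data: the measured response at DOF n and the polynomial terms of the assumed nonlinear DOF m serve as artificial inputs, the applied force as the artificial output, and the spectra are obtained with Welch-type estimators from SciPy. All function and variable names are assumptions.

```python
# Sketch of the reverse-path MIMO estimate B^T = G_FX * inv(G_XX) of Eqs. (17)-(18).
import numpy as np
from scipy.signal import csd

def reverse_path_B(f_k, x_n, x_m, Q, fs, nperseg=1024):
    """Return frequencies and B^T(omega) with shape (n_freq, Q)."""
    inputs = [x_n] + [x_m ** q for q in range(2, Q + 1)]     # artificial inputs {X_n; Y_m}
    n_in = len(inputs)
    freqs, _ = csd(inputs[0], inputs[0], fs=fs, nperseg=nperseg)
    G_FX = np.zeros((len(freqs), n_in), dtype=complex)       # cross-spectra force/inputs
    G_XX = np.zeros((len(freqs), n_in, n_in), dtype=complex) # input auto-spectral matrix
    for a in range(n_in):
        _, G_FX[:, a] = csd(f_k, inputs[a], fs=fs, nperseg=nperseg)
        for b in range(n_in):
            _, G_XX[:, a, b] = csd(inputs[a], inputs[b], fs=fs, nperseg=nperseg)
    B_T = np.array([G_FX[w] @ np.linalg.pinv(G_XX[w]) for w in range(len(freqs))])
    return freqs, B_T
```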

Fig. 2 MIMO system depicting the reverse path model

The output force spectrum \(U(\omega)\), obtained by the reverse path analogy, can be written as

$$\begin{aligned} U=\lfloor {B_{k}}\rfloor \{X\} \end{aligned}$$
(19)

By multiplying Eq. (19) by its Hermitian transpose and taking the expected value of each term, the expression for the output power spectrum \(G_{UU}^{(m,n)}\), considering all possibilities with only a single grounded nonlinearity at DOF ‘m’, can be written as

$$\begin{aligned} \left[ G_{UU}^{(m,n)} \right] = [B]\left[ G_{XX}^{(m,n)} \right] [B]^{H} \end{aligned}$$
(20)
$$\begin{aligned} \left[ G_{UU}^{(m,n)} \right] _{k} = \left[ G_{F_{k},X} \right] \left( \left[ G_{XX}^{(m,n)} \right] ^{-1} \right) \left[ G_{F_{k},X} \right] ^{H} \end{aligned}$$
(21)

where ‘n’ indicates the linear response and ‘m’ is the location of the assumed nonlinear response. The residue matrix obtained for all the possible combinations of the nonlinear locations is given by

$$\begin{aligned} R_{{m,n}} =\sqrt{\frac{\Delta \omega }{2\pi }\sum _{\omega ={\omega }_1 }^{\omega ={\omega }_{r}} {G_{\mathrm{FF}}}}-\sqrt{\frac{\Delta \omega }{2\pi }\sum _{\omega ={\omega } _1}^{\omega ={\omega }_{r}} {G_{UU}^{({m,n})}}} \end{aligned}$$
(22)

where \(\Delta \omega \) denotes the frequency increment (in rad/s) between \(\omega _1\) and \(\omega _{r}\), and \(G_{\mathrm{FF}}\) represents the actual force spectrum. The frequency range should be chosen carefully so that the nonlinear response is predominant in this interval. We propose an index based on the residue matrix, called the ‘nonlinear location index’ (NLI), defined as follows.

$$\begin{aligned} \hbox {NLI}=1/\mu _m \quad \hbox {where } \mu _m =\frac{1}{n_{s}}\sum _{j=1}^{n_{s}} \left| R_{m,j} \right| \quad \hbox {and}\quad m=1,2,3,\ldots ,n_{s} \end{aligned}$$
(23)

where \(n_{s}\) indicates the number of sensors and m refers to the assumed nonlinear location (degree of freedom). The NLI index is calculated for all the output force measurements as per the reverse path analogy. The peak values in the NLI plots indicate the exact location of the nonlinear attachment element, because the true spatial location is the one that minimizes the error between the computed and the actual force spectra.
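A minimal sketch of Eqs. (22)–(23) is given below, assuming the actual force spectrum \(G_{\mathrm{FF}}\) and the predicted spectra \(G_{UU}^{(m,n)}\) have already been computed on a common frequency grid (for instance with the helper above); the array layout and names are ours.

```python
# Sketch of the residue of Eq. (22) and the nonlinear location index of Eq. (23).
import numpy as np

def nli(G_FF, G_UU, d_omega, band):
    """G_FF: (n_freq,) actual force spectrum; G_UU: (n_s, n_s, n_freq) predicted spectra
    for assumed nonlinear DOF m and linear response DOF n; band: boolean frequency mask."""
    scale = d_omega / (2.0 * np.pi)
    rms_F = np.sqrt(scale * G_FF[band].sum().real)
    n_s = G_UU.shape[0]
    R = np.empty((n_s, n_s))
    for m in range(n_s):
        for n in range(n_s):
            R[m, n] = rms_F - np.sqrt(scale * G_UU[m, n, band].sum().real)  # Eq. (22)
    mu = np.abs(R).mean(axis=1)        # Eq. (23): average residue per assumed location
    return 1.0 / mu                    # peaks indicate the nonlinear DOF
```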

4 Identification of nonlinear parameters

The task of identifying the nonlinear parameters becomes somewhat easier once the spatial location of the nonlinear attachment is known. Techniques for nonlinear parameter estimation include the Wiener and Volterra series approaches [9, 10], harmonic balance nonlinearity identification [11], the reverse path spectral method [12], and other time- and frequency-domain parametric identification techniques [13–16]. In addition, nonlinear parameter estimation techniques using stochastic search methods [17–19] have been developed successfully in recent years. In the present paper, we propose a new variant of quantum particle swarm optimization with dynamic subpopulations for much faster and more reliable convergence on problems involving complex nonlinear objective functions.

Once the spatial locations are identified using the approach outlined earlier, and with the polynomial form of the nonlinearity available, we can identify the nonlinear parameters by formulating the associated inverse problem as an optimization problem and solving it with the newly proposed hybrid dynamic quantum PSO algorithm. The equation of motion can be written as

$$\begin{aligned}&[K]\{ {x(t)} \}+[C]\{ {\dot{x}(t)} \}=\{ {f(t)} \}-[M ]\{ {\ddot{x}(t)} \}\nonumber \\&\quad -[L]\{ {g\left( {\{ x\},\{ {\dot{x}} \},\{ {\ddot{x}}\}} \right) } \}\end{aligned}$$
(24)
$$\begin{aligned}&[\tilde{H}]\{\theta \}=F(t), \quad \hbox { where }\,[\tilde{H}]=[K\, C], \{ \theta \}=\{{x\, \dot{ x}}\}^{T}\nonumber \\ \end{aligned}$$
(25)
$$\begin{aligned}&K=\mathop {A}\limits _{i=1}^{\mathrm{Nel}} \gamma _{i} k_{i}; \quad C=\alpha M+\beta K \nonumber \\&g(x) = \sum _{i=1}^{p} \psi _{i}^{j} e^{d_{i}^{j}} x_{j}^{i+1}; \quad g(\dot{x}) = \sum _{i=1}^{p} \psi _{i}^{j} e^{d_{i}^{j}} \dot{x}_{j}^{i+1}, \nonumber \\&\quad j=\hbox {location of nonlinear attachments} \end{aligned}$$
(26)

where \({\mathrm{A}}\) is the assembly operator. The damping matrix is computed using Rayleigh damping, which can be related to the damping ratios of any two selected modes; \(\alpha \) and \(\beta \) are the Rayleigh damping constants, while \(\gamma \) and \(\psi \) are the element stiffness coefficients and nonlinear parameter coefficients, respectively, that need to be identified through the proposed inverse formulation. Hence, \(\gamma =\left\{ \gamma _1 ,\gamma _2 ,\gamma _3 ,\ldots , \gamma _{\mathrm{Nel}} \right\} \in \mathfrak {R}^{\mathrm{Nel}}\), \(d=\left\{ d_1^{j} ,d_2^{j} ,d_3^{j} ,\ldots , d_{p}^{j} \right\} \in \mathfrak {R}^{P}\), and \(\psi =\left\{ \psi _1^{j} ,\psi _2^{j} ,\psi _3^{j} ,\ldots ,\psi _{p}^{j} \right\} \in \mathfrak {R}^{P}\) are taken as design variables in the optimization algorithm, with a constraint on \(\psi \) such that \(\psi _1 e^{d_1 } < \psi _2 e^{d_2 } < \psi _3 e^{d_3 } < \cdots < \psi _{p} e^{d_{p}}\). We also assume that the nonlinearities present in the system are either displacement based or velocity based, and that the two do not coexist in the system. This assumption is made to reduce the number of design variables; extending the formulation to systems with nonlinearities in both displacement and velocity is straightforward.

4.1 Objective function

The system parameters to be identified include the stiffness and damping properties of the structure and also the nonlinear parameters, in the form of polynomial coefficients, at the spatial locations where nonlinearity is present. In the present formulation, it is assumed that the mass properties, load history, and initial conditions of Eq. (24) are known a priori and that the mass is invariant in time. The objective of the identification procedure is to find the best estimates of the structural parameters \(\gamma =\left\{ \gamma _1 ,\gamma _2 ,\gamma _3 ,\ldots , \gamma _{\mathrm{Nel}} \right\} \in \mathfrak {R}^{\mathrm{Nel}}\) and the nonlinear polynomial coefficients \(d=\left\{ d_1^{j} ,d_2^{j} ,d_3^{j} , \ldots , d_{p}^{j} \right\} \in \mathfrak {R}^{P}\) and \(\psi =\left\{ \psi _1^{j} ,\psi _2^{j} ,\psi _3^{j} , \ldots , \psi _{p}^{j} \right\} \in \mathfrak {R}^{P}\) so as to minimize the error between the measured acceleration response and the response predicted (or computed) using the parameter set \(\gamma ,\psi \), and \(d\) over the entire time history. In identification problems that rely on dynamic measurements of the response, the objective function to be minimized can be formulated as the sum of the normalized mean square error between calculated and measured accelerations at the observation points over all measured samples in each record.

$$\begin{aligned} \chi (\gamma ,\varPsi ,d)=\frac{1}{\hbox {NR}\cdot \hbox {NT}}\sum _{i=1}^{\mathrm{NR}} \sum _{j=1}^{\mathrm{NT}} \sum _{k=1}^{\mathrm{NL}} \left( \frac{{}^{k}\ddot{x}_{i,j}^{m}-{}^{k}\ddot{x}_{i,j}^{e}}{\max \left\{ \left| {}^{k}\ddot{x}_{i,j}^{e} \right| ,\; j=1,2,\ldots ,\hbox {NT} \right\} } \right) ^{2} \end{aligned}$$
(27)

where NR refers to the number of records of measured time history data, NT is the total number of samples in each record, and NL refers to the spatial locations (degrees of freedom) at which measurements are available. The superscripts ‘m’ and ‘e’ denote measured and estimated values, respectively.
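As a concrete illustration, a minimal sketch of the objective of Eq. (27) is given below; the squaring follows the ‘mean square error’ wording above, the normalization uses the record-wise peak of the estimated acceleration as in Eq. (27), and the array names and shapes are assumptions.

```python
# Sketch of the objective function chi of Eq. (27).
import numpy as np

def objective(acc_meas, acc_est):
    """acc_meas, acc_est: arrays of shape (NR, NT, NL) of measured/estimated accelerations."""
    NR, NT, NL = acc_meas.shape
    # normalize by the peak estimated acceleration of each record and DOF
    peak = np.abs(acc_est).max(axis=1, keepdims=True)     # shape (NR, 1, NL)
    err = (acc_meas - acc_est) / peak
    return np.sum(err ** 2) / (NR * NT)
```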

The overall identification problem can then be summarized as follows: Find \((\gamma ,\varPsi ,d)=\{\gamma _1 ,\gamma _2 ,\gamma _3 ,\ldots ,\gamma _{\mathrm{Nel}} ,\psi _1^{j} ,\psi _2^{j} ,\psi _3^{j} ,\ldots ,\psi _{p}^{j} ,d_1^{j} ,d_2^{j} ,d_3^{j} ,\ldots ,d_{p}^{j} \}\in \varGamma \) such that \(\chi (\gamma ,\varPsi ,d)\) is minimum, where \(\varGamma \) is the feasible n-dimensional parameter search space:

$$\begin{aligned} \varGamma =\left\{ \begin{array}{l} \gamma \in \mathfrak {R}^{\mathrm{Nel}}\;\big |\;\gamma _{j}^{\min } \le \gamma _{j}\le \gamma _{j}^{\max } \quad \forall \,j=1,2,3,\ldots ,\hbox {Nel}; \\ \psi \in \mathfrak {R}^{P}\;\big |\;\psi _{i}^{\min } \le \psi _{i} \le \psi _{i}^{\max } \quad \forall \, i=1,2,3,\ldots ,p; \\ d\in \mathfrak {R}^{P}\;\big |\; d_{i}^{\min } \le d_{i} \le d_{i}^{\max } \quad \forall \, i=1,2,3,\ldots ,p \\ \end{array} \right\} \end{aligned}$$
(28)

where \((\gamma ,\varPsi ,d)\) are the parameters to be identified, and \(\gamma _{j}^{\min }\) and \(\gamma _{j}^{\max }\), \(\psi _{i}^{\min }\) and \(\psi _{i}^{\max }\), \(d_{i}^{\min }\) and \(d_{i}^{\max }\) are the lower and upper bounds of the j-th parameter of \(\gamma \) and the i-th parameters of \(\psi \) and d, respectively. The identification problem can thus be treated as a linearly constrained nonlinear optimization problem. In this paper, we solve this constrained nonlinear optimization problem, associated with the inverse problem of nonlinear parametric identification, using a variant of the recently developed quantum particle swarm optimization (QPSO) algorithm.

Fig. 3 Quantum PSO algorithm

PSO is an evolutionary-like algorithm developed by Eberhart and Kennedy [20]. It is a population-based search algorithm inspired by the natural behavior of bird flocking and fish schooling. In PSO, a swarm of particles moves through a D-dimensional search space. The particles are potential solutions that move around the defined search space with some velocity until the error is minimized or the solution is reached, as decided by the fitness function. The particles reach the desired solution by updating their positions and velocities according to the PSO equations. Each individual is treated as a volume-less particle in the D-dimensional space, with the velocity and position of the \(i\mathrm{th}\) particle updated as

$$\begin{aligned} v_{ij}^{k+1} = \omega v_{ij}^{k} + c_1 r_1 (\hbox {pbest}_{ij} -x_{ij}^{k})+c_2 r_2 (\hbox {gbest}_{j} -x_{ij}^{k}) \end{aligned}$$
(29)
$$\begin{aligned} x_{{ij}}^{{k}+1}= & {} x_{{ij}}^{{k}} +v_{{ij}}^{{k}+1} \end{aligned}$$
(30)

where \(v_{ij}\) is the particle velocity, \(x_{ij}\) is the current particle position (solution), and \(\omega, c_{1}\), and \(c_{2}\) are weight coefficients. The dependence of the PSO update on the swarm-shared best solution gbest assures interaction among agents. PSO is not a global-convergence-guaranteed optimization algorithm; therefore, Sun et al. [21] proposed QPSO, a global convergence search technique whose performance is superior to that of PSO. In the quantum model of PSO, the state of a particle is described by a wave function \(\psi (r,t)\) instead of position and velocity. The dynamic behavior of the particle differs widely from that of the particle in traditional PSO in that the exact values of position and velocity cannot be determined simultaneously. We can only learn the probability of the particle appearing at position x from the probability density function \(\left| \psi (r,t) \right| ^{2}\), the form of which depends on the potential field in which the particle lies. The complete theoretical details of QPSO can be found in Sun et al. [21, 22]. The QPSO algorithm can be implemented as

$$\begin{aligned} x_{ij}^{k+1} = p_{ij}^{k} +\beta \left| \hbox {mbest}_{ij}^{k} -x_{ij}^{k} \right| \ln (1/u_{ij}); \quad \hbox {if } R_{d} >0.50 \end{aligned}$$
(31)
$$\begin{aligned} x_{ij}^{k+1} = p_{ij}^{k} -\beta \left| \hbox {mbest}_{ij}^{k} -x_{ij}^{k} \right| \ln (1/u_{ij}); \quad \hbox {if } R_{d} \le 0.50 \end{aligned}$$
(32)

where \(R_{d}\) is a random number in the range [0, 1], \(\hbox {mbest}_{ij}\) is the mean best of all the particles in the \(j\mathrm{th}\) dimension, and \(u_{ij}\) is a random number uniformly distributed in [0, 1]. The subscripts i and j refer to the particle and the design variable, respectively. In the present work, the parameter \(\beta \) is varied linearly from 1.0 to 0.30 with the iteration number as

$$\begin{aligned} \beta ^{t}=\beta _{\max } -\frac{(\beta _{\max }-\beta _{\min })\, t}{\hbox {max}{\text {-}}\hbox {iterations}} \end{aligned}$$
(33)

\(p_{{ij}}^t \) is the local attractor and defined as:

$$\begin{aligned} p_{ij}^{t} =\varphi _{ij}^{t}\, \hbox {pbest}_{ij}^{t} +(1-\varphi _{ij}^{t} )\, \hbox {gbest}_{j}^{t} \end{aligned}$$
(34)

where \(\varphi _{{ij}}^t\) is a random number uniformly distributed in [0, 1]. The ‘mbest’ is the mean best position and is defined as the center of pbest positions of the swarm, and it can be written as:

$$\begin{aligned} \hbox {mbest}^{t}&=(\hbox {mbest}_{1}^{t} ,\hbox {mbest}_{2}^{t} ,\hbox {mbest}_{3}^{t} ,\ldots ,\hbox {mbest}_{D}^{t})\nonumber \\&=\left( \frac{1}{M}\sum _{i=1}^{M} P_{i1}^{t} ,\; \frac{1}{M}\sum _{i=1}^{M} P_{i2}^{t} ,\; \frac{1}{M}\sum _{i=1}^{M} P_{i3}^{t} ,\ldots ,\frac{1}{M}\sum _{i=1}^{M} P_{iD}^{t} \right) \end{aligned}$$
(35)

where M is the population size and \(P_{i}\) is the personal best position of particle i. The details of the QPSO algorithm are given in Fig. 3. The characteristics of QPSO are reflected mainly in two ways. First, the exponential distribution of positions makes QPSO search over a wide space. Second, the introduction of the mean best position is a further improvement: in standard PSO each particle converges to the global best position independently, whereas in QPSO a particle cannot converge to the global best position without considering its colleagues, because the distance between its current position and the mean best position determines the position distribution of the particle for the next iteration.
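For illustration, a compact sketch of one QPSO position update (Eqs. 31–35) is given below; it is our own minimal interpretation, not the authors' implementation, and the variable names are assumptions.

```python
# Sketch of a single QPSO position update using the mean best position.
import numpy as np

def qpso_step(X, pbest, gbest, beta, rng=np.random.default_rng()):
    """X: (M, D) swarm positions; pbest: (M, D) personal bests; gbest: (D,) global best."""
    M, D = X.shape
    mbest = pbest.mean(axis=0)                              # Eq. (35): mean best position
    phi = rng.random((M, D))
    p = phi * pbest + (1.0 - phi) * gbest                   # Eq. (34): local attractor
    u = rng.random((M, D))
    sign = np.where(rng.random((M, D)) > 0.5, 1.0, -1.0)    # selects Eq. (31) or Eq. (32)
    return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)
```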

4.2 Dynamic quantum PSO (DQPSO) algorithm

Although QPSO possesses better global search behavior than PSO, it may encounter premature convergence, a major problem also faced by GA, PSO, and other evolutionary algorithms in multimodal optimization, resulting in significant performance loss and suboptimal solutions. In QPSO, although the search space of an individual particle is the whole feasible solution space of the problem throughout the iterations, diversity loss of the whole population is still inevitable because of its collectiveness [23].

The dynamic quantum particle swarm optimizer is constructed on top of the QPSO algorithm with a new neighborhood topology that improves the diversification mechanism. In the proposed DQPSO algorithm, the swarms are dynamic and small: the whole population is divided into many small swarms (subswarms), and each subswarm uses its own members to search for better regions of the search space. These subswarms are regrouped frequently and dynamically using several regrouping schedules, so that information is exchanged among them. Since the small swarms search using their own best historical information, they are likely to converge to a local optimum because of typical PSO convergence characteristics. To prevent convergence to a suboptimal solution, information must be exchanged among the swarms, and while doing so, sufficient care must be taken to maintain large diversity within the subswarms. To accomplish this, we propose a shuffling schedule that gives the particles a dynamically changing neighborhood structure: after every user-defined ‘S’ generations, the population is shuffled, and the search continues with a new configuration of small swarms, as sketched below. In the proposed DQPSO algorithm, the search within each subswarm is based on quantum principles (QPSO), and the dynamic mixing of the results obtained through these parallel searches drives the population toward a global solution. Figure 4 depicts the proposed dynamic QPSO implementation.
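The regrouping step referred to above can be sketched as follows; the equal-size splitting, the `qpso_step` helper from the previous sketch, and the hypothetical `gbest_of_group` function are our own assumptions.

```python
# Sketch of the dynamic regrouping: after every S generations the particle indices
# are shuffled and redistributed into equal-sized subswarms.
import numpy as np

def regroup(n_particles, n_subswarms, rng=np.random.default_rng()):
    idx = rng.permutation(n_particles)
    return np.array_split(idx, n_subswarms)   # list of index arrays, one per subswarm

# Main loop (schematic):
# for t in range(max_iterations):
#     if t % S == 0:
#         groups = regroup(M, n_subswarms)                     # shuffle the population
#     for g in groups:
#         X[g] = qpso_step(X[g], pbest[g], gbest_of_group(g), beta_schedule(t))
```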

Fig. 4 Dynamic quantum PSO algorithm

4.3 Hybrid dynamic quantum PSO algorithm

Large diversity and fast convergence are always a trade-off. Since large diversity is achieved in the proposed dynamic quantum PSO algorithm, the convergence characteristics inherent in the algorithm may be weakened. To alleviate this problem and to build a stronger intensification mechanism into the algorithm, we construct a hybrid version of the DQPSO algorithm by integrating a strong neighborhood search algorithm. It is well known that meta-heuristic algorithms cannot compete with an effective neighborhood algorithm in terms of intensified search and locating the optimal value precisely. However, neighborhood search algorithms require a good starting point; otherwise, they falter and often end up in local optima. Here, the meta-heuristic algorithm supplies the initial seed solutions for the neighborhood algorithm to explore and locate the global optimum. Keeping this in view, we improve the intensification mechanism of the proposed DQPSO algorithm by integrating it with a gradient-free neighborhood search algorithm, the Nelder–Mead algorithm [24]. Since the Nelder–Mead algorithm works with multiple solutions simultaneously to improve the fitness, it is well suited for integration with population-based meta-heuristic algorithms. The Nelder–Mead algorithm implemented in the proposed hybrid DQPSO algorithm is given in Fig. 5.

Fig. 5 Nelder–Mead algorithm

In the present work, the neighborhood search with the NM algorithm is performed after each subswarm in the DQPSO algorithm has carried out a user-specified number (say ‘S’) of evolutions and after the regrouping stage. The NL best particles (solutions) obtained at the regrouping stage are given as input to the NM algorithm, which carries out the local search of Fig. 5 until convergence; a minimal sketch of this refinement step follows. Once the converged solutions from the NM algorithm are obtained, the particles with the improved solutions are regrouped to perform the QPSO evolutions. The proposed hybridization of NM with DQPSO is termed the hybrid dynamic quantum particle swarm optimization (HDQPSO) algorithm.
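A minimal sketch of the NM refinement step is given below, using SciPy's Nelder–Mead implementation as a stand-in for the algorithm of Fig. 5; all function names other than scipy.optimize.minimize are assumptions.

```python
# Sketch: refine the NL best particles with a Nelder-Mead local search and
# keep only the improved solutions.
import numpy as np
from scipy.optimize import minimize

def nm_refine(objective, swarm, fitness, n_best):
    """swarm: (M, D) positions; fitness: (M,) objective values; n_best: NL."""
    order = np.argsort(fitness)[:n_best]            # indices of the NL best particles
    for i in order:
        res = minimize(objective, swarm[i], method="Nelder-Mead",
                       options={"xatol": 1e-8, "fatol": 1e-8})
        if res.fun < fitness[i]:
            swarm[i], fitness[i] = res.x, res.fun   # write back improved solutions only
    return swarm, fitness
```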

4.4 Shrinking the search space

Shrinking the search space, which aims to increase the efficiency and possibly the accuracy of identification algorithms by reducing an initially large search space, is not a new concept in meta-heuristic algorithms; it has been implemented successfully in several earlier instances [25], showing substantial improvement in the final results when dealing with an initially large parameter search space. Hence, it is natural to adopt such a strategy within the proposed HDQPSO algorithm. The strategy is implemented as follows:

Fig. 6 Complete nonlinear system identification strategy of the proposed approach

Initially, the HDQPSO algorithm is executed with the initial search space and allowed to perform evolutions in each subswarm, following which the neighborhood search is performed using the NM algorithm. Once the improved solutions are obtained from the NM algorithm, we evaluate the weighted mean and weighted standard deviation of the identified parameters to arrive at the bounds of the new search space for each design variable. The weighted average and weighted standard deviation of the design variables are computed as

$$\begin{aligned} {\overline{\gamma }}_{j}&= \frac{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{\gamma }\,\gamma _{qj}}{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{\gamma }} \quad \hbox {where } j=1 \hbox { to Nel};\nonumber \\ {\overline{d}}_{j}&= \frac{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{d}\, d_{qj}}{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{d}} \quad \hbox {where } j=1 \hbox { to } p; \nonumber \\ {\overline{\psi }}_{j}&= \frac{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{\psi }\, \psi _{qj}}{\sum \nolimits _{q=1}^{\mathrm{NS}} W_{q}^{\psi }} \quad \hbox {where } j=1 \hbox { to } p \end{aligned}$$
(36)
$$\begin{aligned} \sigma _{\gamma {j}}= & {} \left[ {\frac{\sum \nolimits _{{ q=1}}^{\mathrm{NS}} {W_{{q}}^\gamma } \left( {\gamma _{{ qj}} -{\overline{\gamma }}_{{j}} } \right) ^{2}}{\sum \nolimits _{{ q=1}}^{\mathrm{NS}} {W_{{ q}}^\gamma }}} \right] ^{1/2}\quad \hbox {where} \,j=1 \hbox { to nel}; \nonumber \\ \sigma _{{dj}}= & {} \left[ {\frac{\sum \nolimits _{{q=1}}^{\mathrm{NS}} {W_{{q}}^{{d}} } \left( {d_{{qj}} -{\overline{d}}_{{j}}} \right) ^{2}}{\sum \nolimits _{{ q=1}}^{\mathrm{NS}} {W_{{q}}^{{d}}}}} \right] ^{1/2}\quad \hbox {where} \,j= 1 \hbox { to } p \nonumber \\ \sigma _{{\overline{\psi }}_{j}}= & {} \left[ {\frac{\sum \nolimits _{{ q=1}}^{\mathrm{NS}} {W_{{q}}^\psi } \left( {\psi _{{qj}} -{\overline{\psi }}_{{j}} } \right) ^{2}}{\sum \nolimits _{{q=1}}^{\mathrm{NS}} {W_{{q}}^\psi }}} \right] ^{1/2}\quad \hbox {where} \,j= 1 \hbox { to } p\nonumber \\ \end{aligned}$$
(37)

where NS is the number of solutions and \(W_{q}^\gamma , W_{q}^{d} , W_{q}^\psi \) are the fitness-based weights associated with the \(q\mathrm{th}\) solution for the three respective classes of design variables considered in the proposed inverse optimization process, given as:

$$\begin{aligned} W_{{q}}^\gamma =\frac{\hbox {fitness}(\gamma _{q} )}{\hbox {Best fitness}}; W_{{q}}^{{d}} =\frac{\hbox {fitness}(d_{{q}} )}{\hbox {Best fitness}}; W_{{q}}^\psi =\frac{\hbox {fitness}(\psi _{{ q}})}{\hbox {Best fitness}}\nonumber \\ \end{aligned}$$
(38)

Once the weighted average and standard deviation of each design variable are computed, we can update the search space by arriving at lower and upper limits of the reduced search space as follows:

$$\begin{aligned} \gamma _{{j}}^{\mathrm{UL}}= & {} {\overline{\gamma }}_{{j}} +\lambda _1 \sigma _{\gamma {{j}}}; \gamma _{{j}}^{\mathrm{LL}} ={\overline{\gamma }}_{{j}} -\lambda _1 \sigma _{\gamma {{j}}}\nonumber \\ d_{{j}}^{\mathrm{UL}}= & {} {\overline{d}}_{{j}} +\lambda _2 \sigma _{{dj}};d_{{j}}^{\mathrm{LL}} ={\overline{d}}_{{j}} -\lambda _2 \sigma _{{dj}}\nonumber \\ \psi _{{j}}^{\mathrm{UL}}= & {} {\overline{\psi }}_{{j}} +\lambda _3 \sigma _{\psi {j}}; \psi _{{j}}^{\mathrm{LL}} ={\overline{\psi }}_{{j}} -\lambda _3 \sigma _{\psi {j}} \end{aligned}$$
(39)

where \(\lambda _1 , \lambda _2\), and \(\lambda _3\) are carefully chosen positive integer values; they should not be so small as to force the evolutionary process to stagnate. The shrunk search space may sometimes exceed the limits originally specified by the user at the start of the algorithm; in that case, the new limits are capped so as not to exceed the original search space, as in the sketch below. The complete nonlinear identification process of the proposed approach is illustrated in Fig. 6.
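A compact sketch of Eqs. (36)–(39) for one class of design variables is given below; the clipping to the original bounds implements the capping just mentioned, and all names are our own.

```python
# Sketch of search-space shrinking: fitness-weighted mean/std define the new bounds.
import numpy as np

def shrink_bounds(solutions, weights, lam, orig_lo, orig_hi):
    """solutions: (NS, n_var) retained solutions; weights: (NS,) weights from Eq. (38)."""
    w = weights[:, None]
    mean = (w * solutions).sum(axis=0) / w.sum()                        # Eq. (36)
    std = np.sqrt((w * (solutions - mean) ** 2).sum(axis=0) / w.sum())  # Eq. (37)
    lo = np.maximum(mean - lam * std, orig_lo)                          # Eq. (39), capped
    hi = np.minimum(mean + lam * std, orig_hi)
    return lo, hi
```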

5 Numerical studies

Before carrying out the numerical investigations on the three-stage nonlinear parametric identification problem proposed in this paper, we first present a couple of realistic numerical examples to evaluate the performance of the proposed HDQPSO algorithm in parameter estimation of various kinds of nonlinear systems. For this purpose, two classical nonlinear problems are considered: a breathing crack problem and a chaotic nonlinear Duffing oscillator. The breathing crack problem is chosen specifically because breathing cracks are generally initiated by persistent cyclic (fatigue) loading on structures and continue to propagate once developed. Similarly, chaotic motions are commonly observed in several engineering systems, such as vibrations of a buckled beam, oscillations of articulated mooring towers, and vortex resonance of cables.

The equation of motion of the beam with a breathing crack [26] can be written as

$$\begin{aligned}&m\ddot{x}(t)+c\dot{x}(t)+g[x(t)]=f(t) \end{aligned}$$
(40)
$$\begin{aligned}&\hbox {restoring force}, g(x)\nonumber \\&\quad =\left\{ {{\begin{array}{lll} {\alpha kx} &{} {x\ge 0} &{} {\hbox {when the crack is open}} \\ {kx} &{} {x<0} &{} {\hbox {when the crack is closed}} \\ \end{array}}} \right. \end{aligned}$$
(41)

where m and c are the mass and damping, respectively; x(t) is the displacement; k is the stiffness; \(\alpha \) is known as the stiffness ratio or loss factor \((0 \le \alpha \le 1)\); and f(t) is the external force exciting the system. The loss factor \(\alpha \) is a function of the response: it is equal to one when the crack is closed (i.e., the linear state) and smaller than one when the crack is open (i.e., the system exhibits nonlinear behavior). Using the modal transformation \(\{x\}=[\varPhi ]\{q\}\), we can easily arrive at the single-degree-of-freedom equation for the fundamental mode of vibration as

$$\begin{aligned}&\ddot{q}_1 (t)+2\varepsilon _1 \sqrt{\alpha }\omega \dot{q}_1 (t)+\alpha \omega ^2 q(t)=f_1 (t)\end{aligned}$$
(42)
$$\begin{aligned}&\alpha =\left\{ {\begin{array}{ll} {\frac{\varPhi ^{T} k^{\prime }\varPhi }{{\omega }^{2}}}&{}\quad {\hbox {when the crack is open}} \\ 1&{}\quad {\hbox {when the crack is closed}} \\ \end{array}} \right. \end{aligned}$$
(43)

where \(\varPhi \) indicates the fundamental mode, \(\omega \) is the fundamental frequency, and \(k'\) represents the stiffness reduced by the crack. The system parameters considered are \(m=1\,\hbox {kg}\), \(k=0.1\,\hbox {N/m}\), \(c=0.01\,\hbox {Ns/m}\), and stiffness ratio (loss factor) \(\alpha =0.8\); the system is subjected to a harmonic excitation of about 1 N with a forcing frequency of about 0.75 Hz. The nonlinear vibration response of a beam with a breathing crack is well captured by analyzing the response of an equivalent bilinear oscillator. The bilinear frequency \(\omega _{B}\) of the bilinear oscillator is given by

$$\begin{aligned} {\omega }_{B}= & {} \frac{2{\omega }_0 {\omega }_1}{({\omega }_0 +{\omega }_1 )}; {\omega }_0 =\sqrt{\frac{k}{m}} \hbox { and }\, {\omega }_1 =\sqrt{\frac{k'}{m}}=\sqrt{\frac{\alpha k}{m}}\nonumber \\ \end{aligned}$$
(44)
$$\begin{aligned} {\omega }_{B}= & {} \frac{2\sqrt{\alpha }}{(1+\sqrt{\alpha })}\sqrt{\frac{k}{m}}=\frac{2\sqrt{\alpha }}{(1+\sqrt{\alpha })}{\omega }_0 \end{aligned}$$
(45)

where \(\omega _{0}\) and \(\omega _{1}\) are the natural frequencies of the uncracked and cracked beam, respectively, and k and \(k'\) are the stiffnesses corresponding to the uncracked and cracked states of the beam. The linear and bilinear (nonlinear) frequencies of the system considered are therefore 0.316 and 0.298 Hz, respectively. The time history responses are computed using Newmark’s (constant average acceleration) time integration scheme combined with the Newton–Raphson algorithm. The chosen sampling frequency is 100 Hz.
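As a quick numerical check (ours, not from the paper), Eqs. (44)–(45) with the parameters quoted above reproduce the reported linear and bilinear frequency values:

```python
# Bilinear frequency of the equivalent bilinear oscillator, Eqs. (44)-(45).
import numpy as np

m, k, alpha = 1.0, 0.1, 0.8
w0 = np.sqrt(k / m)                 # uncracked natural frequency
w1 = np.sqrt(alpha * k / m)         # cracked (reduced-stiffness) natural frequency
wB = 2.0 * w0 * w1 / (w0 + w1)      # bilinear frequency, Eq. (44)
print(f"{w0:.4f} {wB:.4f}")         # -> 0.3162 0.2986, the 0.316 and 0.298 quoted above
```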

For parameter estimation using HDQPSO, the number of subswarms is taken as 4 and the number of particles in each subswarm as 5, for a total swarm size of 20. The solution is assumed to have converged when the number of evolutions reaches 100 or when there is no improvement in the solution over the last five evolutions. The same parameters are used for the first two numerical simulations presented in this paper. The design variables for the breathing crack problem are the mass, stiffness, damping, and loss factor. The limits of the design variables associated with mass, stiffness, and damping are set as 0.1 to 10, and the loss factor is in the range [0, 1]. Using the proposed HDQPSO, the stiffness loss factor is identified as 0.8, matching the original value exactly, after 20 iterations with 25 evolutions; similarly, the mass, stiffness, and damping are identified with very high precision. The convergence characteristics of the proposed HDQPSO algorithm are presented in Fig. 7 and compared with the classical QPSO and DQPSO algorithms; the superior convergence of the proposed HDQPSO algorithm is clearly visible in Fig. 7.

Fig. 7 Convergence study—breathing crack

Fig. 8 Convergence study—chaotic oscillator

A nonlinear chaotic Duffing oscillator is considered as the second numerical example [27]. The equation of motion of the Duffing oscillator is given by

$$\begin{aligned} \ddot{x}(t)+a\dot{x}(t)-bx+cx^{3}=2.1\sin (1.8t)+u \end{aligned}$$
(46)

The system is in a chaotic state when \(u=0, a=0.4, b=1.1\), and \(c=1\), with initial state values \(x=\dot{x}=0\). The limits of all the design variables (a, b, and c) are selected in the range [0–1]. As chaotic systems are highly sensitive to initial conditions, we prefer to formulate the objective function in terms of the phase space rather than the time series; it is given as

$$\begin{aligned} \hbox {Objective function}=\sum _{i=1}^{n} \sum _{j=1}^{2} \left\| Y_{\mathrm{act}} (i,j)-Y_{\mathrm{est}} (i,j) \right\| \end{aligned}$$
(47)

where n is the length of the time series and Y is a matrix representing the Poincaré map, which depends on the displacement response. Alternatively, the difference between time series can be used as the objective function, but for chaotic systems a control term must then be added to the cost function [28]. More details on the choice of cost function for chaotic systems can be found in Jafari et al. [28, 29]. The identified parameters are found to match the actual parameters exactly. The superiority of the proposed HDQPSO algorithm can be clearly observed from the convergence plot shown in Fig. 8, along with those of the QPSO and DQPSO algorithms.
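For illustration, the following sketch (our own, under the parameter values stated above) simulates Eq. (46) with SciPy and evaluates a phase-space objective in the spirit of Eq. (47); note that the paper builds Y from a Poincaré map, whereas this simplified sketch uses the full (displacement, velocity) trajectory.

```python
# Sketch: simulate the Duffing oscillator of Eq. (46) and evaluate a
# phase-space distance between actual and estimated trajectories.
import numpy as np
from scipy.integrate import solve_ivp

def duffing(t, y, a, b, c, u=0.0):
    x, v = y
    return [v, -a * v + b * x - c * x ** 3 + 2.1 * np.sin(1.8 * t) + u]

def phase_trajectory(params, t_eval):
    sol = solve_ivp(duffing, (t_eval[0], t_eval[-1]), [0.0, 0.0],
                    t_eval=t_eval, args=params, rtol=1e-8, atol=1e-10)
    return np.column_stack([sol.y[0], sol.y[1]])    # columns: displacement, velocity

def objective(candidate, Y_actual, t_eval):
    Y_est = phase_trajectory(tuple(candidate), t_eval)   # candidate = (a, b, c)
    return np.linalg.norm(Y_actual - Y_est, axis=1).sum()
```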

To demonstrate the effectiveness of the proposed three-stage nonlinear parametric identification algorithm, we choose the extensively investigated nonlinear model [3]: a cantilever beam with a nonlinear stiffness attachment or, alternatively, a damper attachment at varied spatial locations.

Fig. 9 Cantilever beam

The cantilever beam model considered is shown in Fig. 9. The span of the beam is 1.0 m, and the cross-sectional dimensions are \(0.014 \times 0.014\) m. The material properties are: Young’s modulus \(E=2.1\hbox {e}11\) Pa and mass density \(\rho = 7800\hbox { kg/m}^{3}\). The linear damping matrix is constructed using Rayleigh damping. The first five eigenfrequencies of the linear beam, i.e., without the nonlinear element, are 11.73, 73.54, 205.96, 403.87, and 668.67 Hz, respectively. To simulate the nonlinear phenomena, the nonlinearity is introduced after 3500 time steps in the linear cantilever beam. The beam is excited at the free end with an ambient excitation of about 24 N RMS for about 0.4 s in the frequency band [1–800 Hz]. The acceleration time history data corresponding to the translational degrees of freedom of all active nodes are considered. White Gaussian noise with an SNR of 50 is added to the acceleration time histories before processing in all the problems; moreover, the noise sequences affecting different nodes are uncorrelated, so that severe experimental conditions are simulated. The response corresponding to a very low amplitude of excitation, i.e., 0.4 N RMS, which exhibits linear behavior, is taken as the reference data. The chosen sampling frequency is 40 kHz, i.e., 50 times the maximum frequency contained in the output signal.

Fig. 10 Singular value diagram of test case 1

Fig. 11 Nonlinear indicators. a Test case 1, b test case 2, c test case 3, d test case 4

In order to study the effectiveness of the proposed algorithm, the following four types of simulations of the above cantilever beam have been considered and investigated.

  • Test case 1: Cantilever Beam (discretized with 10 elements) with a quadratic damper nonlinear attachment at the end node (node no: 10) with \(F_{\mathrm{CNL}} =100\dot{x}^2 _{\mathrm{10}}\)

  • Test case 2: Cantilever beam (discretized with 10 elements) with a cubic stiffness nonlinear attachment at the end node (node no:10) with \({F}_\mathrm{KNL} =1\hbox {e}7 x_{10}^3\)

  • Test case 3: Cantilever beam (discretized with 10 elements) with cubic stiffness attachment at the end node (node no: 10) with \( F_\mathrm{KNL} =1\hbox {e}9x_{10}^3 \) and quadratic stiffness attachment at the \(8{\mathrm{th}}\) node \(({F}_\mathrm{KNL} =8\hbox {e}14x_{7}^2 )\) to demonstrate the identification of multiple nonlinear locations

  • Test case 4: Cantilever beam (discretized with 20 elements) with odd-order stiffness attachments at nodes 12, 17, and 20 with \(F_{\mathrm{KNL}} =2.2\hbox {e}4\,x_{12}^3 +8\hbox {e}6\,x_{12}^7\), \(F_{\mathrm{KNL}} =4\hbox {e}5\,x_{17}^3\), and \(F_{\mathrm{KNL}} =1\hbox {e}7\,x_{20}^3\), respectively

Fig. 12 Nonlinear location index—cantilever problem. a Test case 1, b test case 2, c test case 3, d test case 4

The nonlinear index (SSI and DoN) plots are generated for all the test cases of the cantilever beam problem, and the results obtained are shown in Figs. 10 and 11. The acceleration time history data of the nonlinear system are partitioned into 10 equal data subsets with a sample length of 1600 each, and an output data-driven Hankel matrix of size \(200 \times 200\) is constructed for the calculation of the nonlinear indices. The following observations can be made from the SSI and DoN plots shown in Figs. 10 and 11, respectively.

  (i) The singular value plot of test case 1 is shown in Fig. 10. It can be observed that the first 12 singular values meet the threshold limit of 99.5 % of the energy and hence are considered for evaluation of the residue matrix. A similar procedure of selecting singular values based on the energy criterion is followed for all data subsets of the current data in all the test cases presented in this paper.

  (ii) The nonlinear index plots in Fig. 11 show that the computed SSI and DoN values for all the test cases of the cantilever problem exceed the LLI after two data sets, indicating the exact time instant of incipience of nonlinearity.

  (iii) The DoN values for test case 2 are higher than those for test case 1 owing to the hardening behavior of the cubic stiffness attachment, while the DoN value of test case 1 remains constant or reduces because of the effect of damping in the system.

  (iv) The nonlinear indicator values of test case 3 are higher than those of test cases 1 and 2 because of the hardening behavior of the multiple nonlinear stiffness attachments, and similarly the values of test case 4 are higher than those of all the earlier test cases owing to the hardening behavior of the multiple odd-order nonlinear stiffness attachments.

Table 1 Nonlinear parameter identification using hybrid dynamic quantum PSO algorithm

For spatial identification of the nonlinearity, a trial function with \(Q=10\) is used for the analysis, and the nonlinear location index is estimated using the procedure outlined in the earlier section. The nonlinear location index (NLI) plots for all the test cases are shown in Fig. 12. The peak value of the computed NLI index is found to be at the \(10\mathrm{th}\) node for test cases 1 and 2. We can also observe from Fig. 12c and d that the NLI value reaches its maximum at the \(8\mathrm{th}\) and \(10\mathrm{th}\) nodes for test case 3, while for test case 4 it reaches its maximum at the \(12\mathrm{th}, 17\mathrm{th}\), and \(20\mathrm{th}\) nodes, respectively. From these investigations, we conclude that the proposed nonlinear location index, based on the reverse path concept, identifies the precise locations of the various nonlinear attachments. Once the spatial location of the nonlinearity is detected, we can use the proposed HDQPSO algorithm to identify the nonlinear parameters. The design variables for parameter estimation are the element stiffness coefficients \(\gamma =\left\{ \gamma _1 ,\gamma _2 ,\gamma _3 ,\ldots ,\gamma _{\mathrm{Nel}} \right\} \in \mathfrak {R}^{\mathrm{Nel}}\) and the nonlinear coefficients \(d=\left\{ d_1^{j} ,d_2^{j} ,d_3^{j} , \ldots ,d_{p}^{j} \right\} \in \mathfrak {R}^{P}\) and \(\psi =\left\{ \psi _1^{j} ,\psi _2^{j} ,\psi _3^{j} ,\ldots ,\psi _{p}^{j} \right\} \in \mathfrak {R}^{P}\). The lower limits of the design variables \(\gamma , \psi \), and \(d_{i}^{j}\) are initially taken as 0.1, 0.1, and 2, respectively; similarly, the upper limits are set as 1100 and 10, respectively, for \(\gamma , \psi \), and \(d_{i}^{j}\). For parameter estimation using HDQPSO, the number of subswarms is taken as 6 and the number of particles in each subswarm as 5, for a total swarm size of 30. The solution is assumed to have converged when the number of evolutions reaches 500 or when there is no improvement in the solution over the last five evolutions. The identified nonlinear parameters are shown in Table 1; it can be observed that the identified parameters of all the test cases compare well with the actual nonlinear parameters. It can also be observed from Fig. 13 that the convergence is monotonic and rapid, requiring only 16 iterations for test case 1, while convergence occurs after 51, 85, and 90 iterations for test cases 2, 3, and 4, respectively.

Fig. 13 Convergence study—cantilever problem—HDQPSO. a Test case 1, b test case 2, c test case 3, d test case 4

6 Conclusion

In this paper, we have presented an approach for nonlinear system identification of structures. The proposed method is organized into three stages. In the first stage, we use a null-subspace-based approach to identify the presence of nonlinearity and also the degree of nonlinearity. The nonlinear indices (SSI and DoN) are computed using the change in orthonormality between the column null subspace of the reference data and the column active subspace of the current data. While the SSI can be used to determine the presence of nonlinearity in the system, the DoN helps in determining its intensity. The DoN index can be used to decide whether linear damage indicators derived from the system are still valid or whether nonlinear damage indicators must be used for health monitoring of the structure. The numerical studies presented in this paper clearly indicate the robustness of the null-subspace method for identifying the presence of nonlinearity even with noisy measurements.

The major advantage of the null-subspace-based approach for detection of nonlinearity is that it can easily be extended to systems instrumented with very few sensors. The approach directly uses acceleration time history data and does not require any signal processing or transformations. Furthermore, the proposed technique is well suited for civil engineering applications, as it utilizes ambient vibration data, and for online health monitoring, as it identifies the exact instant of time at which the structure begins to exhibit nonlinear behavior.

The second stage of the parametric identification is to identify the precise location of the nonlinear element. Here we use an approach devised around the concept of the reverse path method. The major advantages of the proposed approach are that it is based on random excitation, that the force can even be applied at a single location in the structure (SIMO systems), and that, unlike several earlier approaches, there is no need to obtain responses at varied levels of excitation. Further, the proposed algorithm does not require any prior information about the corresponding linear system. The numerical simulation studies presented in this paper clearly indicate that the proposed algorithm can identify single as well as multiple locations of nonlinear elements very precisely, even with noisy measurements. It should be mentioned that a high sampling rate is required in order to suppress the leakage effect and the statistical errors due to noise in the estimators. The method can be improved further if the search space (i.e., the number of possible locations to search) can be reduced.

Structural parameter identification is a very challenging task from the computational point of view. In the third stage, we identify the parameters using the appropriate data subsets reflecting the nonlinear behavior, isolated in the first stage, together with the information about the precise locations of the nonlinear elements present in the current system. The nonlinear parametric identification problem is formulated as an inverse problem, and the resulting complex nonlinear optimization problem is solved using the newly developed HDQPSO algorithm. The numerical simulation studies carried out for all the test cases clearly indicate that the proposed meta-heuristic algorithm identifies the nonlinear parameters with minimal error. The robustness and computational efficiency of the proposed HDQPSO algorithm are demonstrated by solving two practical engineering problems, associated with a breathing crack and with chaos, and comparing it with the classical QPSO and DQPSO algorithms.