1 Introduction

Permanent magnet synchronous motors (PMSMs) are popular because of advantages such as low noise, low inertia, high efficiency, robustness, and low maintenance cost. Conventional proportional–integral (PI) and proportional–integral–derivative (PID) control methods usually work well only near a specific operating point [1]. However, owing to motor parameter variations and external disturbances, the conventional control methods cannot achieve fast and precise speed response, quick recovery of speed from disturbances, parameter insensitivity, and robustness over a wide speed range. To overcome these problems, many researchers have proposed various advanced control design methods, e.g., model predictive control [2–4], sliding mode control [5, 6], internal model control [7], adaptive control [8, 9], nonlinear feedback linearization control [10], nonlinear optimal control [11, 12], fuzzy control [13–16], and neural network control [17]. Recently, several researchers have introduced disturbance observers to compute load torque compensating terms, which are incorporated into conventional PI/PID controllers to reject load torque disturbances [18, 19]. However, all the PI/PID methods and most of the previous advanced controllers can assure perfect tracking performance only under the restrictive assumption that the PMSM parameters are accurately available.

Fig. 1

Block diagram of a field-oriented PMSM control system

This paper shows that, by including an additional simple learning feedforward term, a conventional PI control system can be made to achieve perfect tracking performance in the presence of repeating load torque and model parameter variations. Because PMSMs are used for repetitive tasks in many industrial applications such as robots and hard disk drives, this paper focuses on developing a simple PI-type controller for a PMSM with a repetitive desired trajectory. The proposed controller consists of a stabilizing part and an intelligent part. A conventional PI feedback control input term is used as the stabilizing part, and a feedforward term that compensates for repeating load torque and model parameter uncertainties is incorporated as the intelligent part. Since the proposed method constructs the feedforward compensating term using a simple learning rule, it does not require any load torque disturbance observer, unlike the previous disturbance-observer-based PMSM control methods of [18, 19]. The additional learning feedforward term requires no information on motor parameter and load torque values; thus, the proposed PI-type controller is insensitive to model parameter and load torque uncertainties. Additionally, stability and convergence of the proposed control system response are analytically proven. It should be noted that, unlike the previous learning-type control methods given in [7–9, 16, 17], our controller does not require any identification procedure. Simulation and experimental results using a prototype PMSM drive system demonstrate the effectiveness of the proposed learning controller under uncertainties such as motor parameter and load torque variations.

2 Problem formulation

A field-oriented vector-controlled PMSM can be represented by the following dynamic equation:

$$\begin{aligned} {\dot{\omega }}\left( t \right) = k_1 i_{\mathrm{qs}} \left( t \right) - k_2 \omega \left( t \right) - k_3 T_L \left( t \right) \end{aligned}$$
(1)

where \(\omega ={\dot{\theta }}\) is the electrical rotor angular speed, \(\theta \) is the electrical rotor angle, \(T_{L}\) represents the load torque disturbance input, and \(k_{i} > 0\), \(i = 1,\ldots ,3\) are the parameter values given by

$$\begin{aligned} k_1 = \frac{3}{2}\frac{1}{J}\frac{p^{2}}{4}\lambda _m, \quad k_2 = \frac{B}{J},\quad k_3 = \frac{p}{2J} \end{aligned}$$

and p is the number of poles, and J, B, and \(\lambda _{m}\) are the rotor inertia, the viscous friction coefficient, and the magnetic flux, respectively. The uncertainties in the parameters \(k_{i}\) as well as the load torque disturbance can severely deteriorate the control performance.
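As a quick illustration of how the lumped gains of (1) arise from the physical constants, the following Python sketch evaluates \(k_1\), \(k_2\), \(k_3\); the numeric motor data used in the example call are assumed values for illustration only, not the parameters of Table 1.

```python
# Sketch: evaluating the lumped model gains k1, k2, k3 of Eq. (1)
# from physical motor constants. The numeric values in the example
# call (J, B, p, lambda_m) are illustrative assumptions.
def pmsm_gains(J, B, p, lam_m):
    """Return (k1, k2, k3) for d(omega)/dt = k1*i_qs - k2*omega - k3*T_L."""
    k1 = 1.5 * (p ** 2 / 4.0) * lam_m / J  # (3/2)(1/J)(p^2/4)*lambda_m
    k2 = B / J
    k3 = p / (2.0 * J)
    return k1, k2, k3

# Example with assumed data: an 8-pole motor with J = 1e-4 kg m^2
k1, k2, k3 = pmsm_gains(J=1e-4, B=1e-5, p=8, lam_m=0.01)
```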

The following assumptions will be used:

A1 :

\(\omega \), \(i_{\mathrm{qs}}\), \(i_{\mathrm{d}s}\) are available.

A2 :

The desired trajectory and the load torque are T-periodic, i.e., \(\theta _{d}(t+T)=\theta _{d}(t), \omega _{d} (t+T)=\omega _{d} (t), {{\dot{\omega }}}_d ({t+T})={{\dot{\omega }}}_d (t), T_L \left( {t+T} \right) =T_L \left( t \right) .\)

It should be noted that most of the previous methods rely on the restrictive assumption that the parameters \(k_{i}\) are exactly known. The assumption A2 is not very restrictive because PMSMs are usually called upon to execute repetitive operations in many industrial applications such as robots and hard disk drives. Figure 1 illustrates a block diagram of a general field-oriented vector control system for a PMSM, in which the three-phase current commands are computed by converting the controller current commands \(i_{\mathrm{qsd}}\), \(i_{\mathrm{dsd}}\). The d axis reference current \(i_{\mathrm{dsd}}\) is usually set as \(i_{\mathrm{dsd}} = 0\). Thus, our problem can be formulated as proposing a simple learning control algorithm to generate the q axis reference current command \(i_{\mathrm{qsd}}\) for the system model (1) under the assumptions A1 and A2.

The following background results will be used to derive main results:

Definition 2.1

A vector \(f\left( t \right) \in R^{n}\) is said to satisfy \(f\left( t \right) \in L_2\) if and only if \(\left\| {f\left( t \right) } \right\| _2 = \sqrt{\int _0^\infty {\sum _{i=1}^n {f_i^2 \left( t \right) \mathrm{d}t}} }<\infty \), and \(f\left( t \right) \in R^{n}\) is said to satisfy \(f\left( t \right) \in L_\infty \) if and only if \(\sup _{t\ge 0} \max _{1\le i\le n} \left| {f_i \left( t \right) } \right| <\infty \).

Lemma 2.2

If \(y\left( t \right) \in L_2 \cap L_\infty \), and \({\dot{y}}\left( t \right) \) is bounded, then \(y\left( t \right) \) converges to zero [20].

3 Controller design and stability analysis

Let \(i_{\mathrm{qs}}\) consist of a PI feedback input term \(u_{\mathrm{fb}}\) as the stabilizing part and a feedforward compensating input term \(u_{\mathrm{ff}}\) as the intelligent part:

$$\begin{aligned} i_{\mathrm{qs}} \left( t \right) =u_{\mathrm{fb}} \left( t \right) +u_{\mathrm{ff}} \left( t \right) \end{aligned}$$
(2)

where

$$\begin{aligned}&u_{\mathrm{fb}} \left( t \right) =-\beta \sigma \left( t \right) \end{aligned}$$
(3)
$$\begin{aligned}&\sigma \left( t \right) =\gamma e_1 \left( t \right) +e_2 \left( t \right) \end{aligned}$$
(4)

and \(\beta > 0\), \(\gamma > 0\), \(e_1 =\int _0^t {e_2\, \mathrm{d}\tau } =\theta -\theta _d \), \(e_2 =\omega -\omega _d \), and \(u_{\mathrm{ff}}\) will be specified later. It should be noted that the positive constants \(\beta \) and \(\gamma \beta \) correspond to the P and I gains, respectively.
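In discrete time, the stabilizing part (3)–(4) can be sketched as below; the gains and sample time in the example are placeholder values, and the integral \(e_1\) is accumulated by forward Euler.

```python
# Sketch of the stabilizing part (3)-(4): u_fb = -beta*sigma with
# sigma = gamma*e1 + e2, where e1 is the integral of the speed error e2.
# beta, gamma, and dt below are illustrative, not the paper's values.
class PIFeedback:
    def __init__(self, beta, gamma, dt):
        self.beta, self.gamma, self.dt = beta, gamma, dt
        self.e1 = 0.0  # integral of the speed error (forward Euler)

    def step(self, omega, omega_d):
        e2 = omega - omega_d               # speed error
        self.e1 += e2 * self.dt            # e1 = int e2 dtau
        sigma = self.gamma * self.e1 + e2  # Eq. (4)
        return -self.beta * sigma          # Eq. (3)

fb = PIFeedback(beta=0.2, gamma=1.0, dt=0.01)
u = fb.step(omega=1.0, omega_d=0.0)
```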

Using the error vector \(e = [e_{1}, e_{2}]^{T}\), we can obtain the following error dynamics:

$$\begin{aligned} {\dot{e}}_1= & {} e_2\nonumber \\ {\dot{e}}_2= & {} -k_1 \beta \sigma +k_1 u_e -k_2 e_2 \end{aligned}$$
(5)

where \(u_e =u_{\mathrm{ff}} -u_{\mathrm{ff}}^*\) and \(u_{\mathrm{ff}}^*\) is given by

$$\begin{aligned} u_{\mathrm{ff}}^*\left( t \right) =\frac{1}{k_1 }\left[ {k_3 T_L \left( t \right) +{\dot{\omega }}_d \left( t \right) +k_2 \omega _d \left( t \right) } \right] \end{aligned}$$
(6)

Let us define the Lyapunov function \(V_{0}(t)\) as:

$$\begin{aligned} V_0 \left( t \right) =\zeta e_1^2 +\sigma ^{2} \end{aligned}$$
(7)

where \(\zeta = \gamma (\gamma -k_{2})\). It should be noted that if \(\gamma \) is sufficiently large, then \(\zeta > 0\) and \(V_{0}(t) \ge 0\). The time derivative of \(V_{0}(t)\) along the error dynamics (5) is given by

$$\begin{aligned} {\dot{V}}_0 \left( t \right)= & {} 2\zeta e_1 {\dot{e}}_1 +2\sigma {\dot{\sigma }}=2\zeta e_1 e_2 +2\sigma \left( {\gamma e_2 +{\dot{e}}_2} \right) \nonumber \\= & {} 2\zeta e_1 \left( {\sigma -\gamma e_1} \right) -2\left( {k_1 \beta -k_2 +\gamma } \right) \sigma ^{2}\nonumber \\&-\,2\zeta e_1 \sigma +2k_1 \sigma u_e\nonumber \\\le & {} -2q_r \left\| {e_r} \right\| _2^2 +2k_1 \sigma u_e \end{aligned}$$
(8)

where \(q_r = \min \left( \zeta \gamma , k_1 \beta -k_2 \right) \), \(e_{r} = [e_{1}, \sigma ]^{T}\), and the following equation is used

$$\begin{aligned} {\dot{\sigma }}=\gamma e_2 +{\dot{e}}_2 =-\left( {k_1 \beta -k_2 +\gamma } \right) \sigma -\zeta e_1 +k_1 u_e \end{aligned}$$

If the feedforward compensating input term is zero, then \(u_e =-u_{\mathrm{ff}}^*\) and the inequality (8) can be reduced to

$$\begin{aligned} {\dot{V}}_0 \left( t \right) \le -2q_r \left\| {e_r} \right\| _2^2 +2k_1 \eta \left\| {e_r} \right\| _2 \end{aligned}$$

where \(\eta = \max _{0\le t\le T} \left| {u_{\mathrm{ff}}^*\left( t \right) } \right| \). On the other hand, for the case of accurate feedforward compensation (i.e., \(u_e = 0\)), the inequality (8) can be reduced to

$$\begin{aligned} {\dot{V}}_0 \left( t \right) \le -2q_r \left\| {e_r} \right\| _2^2 \le 0 \end{aligned}$$

which implies that the perfect tracking response of the conventional PI control system can be guaranteed only under the restrictive assumption that accurate information on the motor parameter and load torque values is available. This motivates incorporating an effective compensation algorithm into the conventional PI control system to obtain good performance in the presence of motor parameter and load torque variations.

Now, let the feedforward control input \(u_{\mathrm{ff}}\) be updated by the following simple repetitive learning rule:

$$\begin{aligned} u_{\mathrm{ff}} \left( t \right) =u_{\mathrm{ff}} \left( {t-T} \right) -\delta \sigma \left( t \right) . \end{aligned}$$
(9)

where \(\delta > 0\) is the learning gain. Figure 2 shows the overall block diagram of the proposed learning control algorithm.
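In a sampled-data implementation, the repetitive learning rule (9) can be realized with a circular buffer holding one period of past feedforward values, as in the following sketch; the learning gain and buffer length are illustrative assumptions.

```python
# Sketch of the repetitive learning rule (9):
# u_ff(t) = u_ff(t - T) - delta*sigma(t), realized in discrete time
# with a circular buffer of one period (N samples) of past u_ff values.
# delta and samples_per_period below are illustrative.
class RepetitiveFeedforward:
    def __init__(self, delta, samples_per_period):
        self.delta = delta
        self.buf = [0.0] * samples_per_period  # u_ff over the last period
        self.k = 0                             # current sample index

    def step(self, sigma):
        u_ff = self.buf[self.k] - self.delta * sigma  # Eq. (9)
        self.buf[self.k] = u_ff      # store for reuse one period later
        self.k = (self.k + 1) % len(self.buf)
        return u_ff

ff = RepetitiveFeedforward(delta=0.2, samples_per_period=2)
u1 = ff.step(1.0)   # first period, sample 0
u2 = ff.step(0.5)   # first period, sample 1
u3 = ff.step(1.0)   # second period, sample 0: reuses u1
```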

Fig. 2

Block diagram of the proposed learning control algorithm

Theorem 3.1

Let \(i_{\mathrm{qs}}\) be given by (2) with (3) and (9). Assume that \(\gamma \) is sufficiently large to guarantee \(\gamma > k_{2}\). Then, the tracking error \(e_{1}\) converges to zero.

Proof

Let us define the Lyapunov functional as:

$$\begin{aligned} V\left( t \right) =V_0 \left( t \right) +\frac{k_1 }{\delta }\int _{t-T}^t {u_e^2 \left( \tau \right) \mathrm{d}\tau }. \end{aligned}$$

Its time derivative along the error dynamics (5) is given by

$$\begin{aligned} {\dot{V}}\left( t \right)= & {} {\dot{V}}_0 \left( t \right) +\frac{k_1 }{\delta }u_e^2 \left( t \right) -\frac{k_1 }{\delta }u_e^2 \left( {t-T} \right) \\\le & {} -2q_r \left\| {e_r \left( t \right) } \right\| _2^2 +2k_1 \sigma \left( t \right) u_e \left( t \right) \\&+\frac{k_1 }{\delta }\left[ {u_e \left( {t-T} \right) -\delta \sigma \left( t \right) } \right] ^{2}-\frac{k_1 }{\delta }u_e^2 \left( {t-T} \right) \\= & {} -2q_r \left\| {e_r \left( t \right) } \right\| _2^2 +2k_1 \sigma \left( t \right) \left[ {u_e \left( {t-T} \right) -\delta \sigma \left( t \right) } \right] \\&+\frac{k_1 }{\delta }\left[ {u_e \left( {t-T} \right) -\delta \sigma \left( t \right) } \right] ^{2}-\frac{k_1 }{\delta }u_e^2 \left( {t-T} \right) \\= & {} -2q_r \left\| {e_r \left( t \right) } \right\| _2^2 -k_1 \delta \sigma ^{2}\left( t \right) \le 0 \end{aligned}$$

which implies that \(e_{r} \in L_{2} \cap L_{\infty }\) (i.e., \(e_{1} \in L_{2} \cap L_{\infty }\), \(\sigma \in L_{2} \cap L_{\infty }\), \(e_{2} \in L_{2} \cap L_{\infty })\). Consequently, by Lemma 2.2 it can be concluded that the tracking error \(e_{1}\) converges to zero. \(\square \)

Instead of (9), the following repetitive learning rule can be used without losing the stability property

$$\begin{aligned} u_{\mathrm{ff}} \left( t \right) =u_{\mathrm{ff}} \left( {t-T} \right) -\delta \sigma \left( {t-T} \right) \end{aligned}$$
(10)

where \(2 \beta> \delta > 0\).

Corollary 3.2

Let \(i_{\mathrm{qs}}\) be given by (2) with (3) and (10). Assume that the control parameters \(\beta \), \(\delta \), and \(\gamma \) satisfy the following inequalities:

$$\begin{aligned} 2\beta>\delta>0, \gamma >k_2 \end{aligned}$$
(11)

Then, the tracking error \(e_{1}\) converges to zero.

Proof

Define a Lyapunov functional as:

$$\begin{aligned} V_c \left( t \right) =V_0 \left( t \right) +\frac{k_1 }{\delta }\int _t^{t+T} {u_e^2 \left( \tau \right) \mathrm{d}\tau } \end{aligned}$$

Then, the time derivative of \(V_{c}\) along the error dynamics (5) is given by

$$\begin{aligned} {\dot{V}}_c \left( t \right) ={\dot{V}}_0 \left( t \right) +\frac{k_1 }{\delta }u_e^2 \left( {t+T} \right) -\frac{k_1 }{\delta }u_e^2 \left( t \right) \end{aligned}$$
(12)

By referring to the inequality (8), the above Eq. (12) can be reduced to

$$\begin{aligned} {\dot{V}}_c \left( t \right)= & {} -2\zeta \gamma e_1^2 \left( t \right) -2\left( {k_1 \beta -k_2 +\gamma } \right) \sigma ^{2}\left( t \right) \nonumber \\&+\,2k_1 \sigma \left( t \right) u_e \left( t \right) +\frac{k_1 }{\delta }u_e^2 \left( {t+T} \right) -\frac{k_1 }{\delta }u_e^2 \left( t \right) \nonumber \\ \end{aligned}$$
(13)

The learning rule (10) implies \(u_{e}(t+T)=u_{e}(t)-\delta \sigma (t)\) and, thus, (13) can be rewritten as:

$$\begin{aligned} {\dot{V}}_c \left( t \right)= & {} -2\zeta \gamma e_1^2 \left( t \right) -2\left( {k_1 \beta -k_2 +\gamma } \right) \sigma ^{2}\left( t \right) \nonumber \\&+\,2k_1 \sigma \left( t \right) u_e \left( t \right) +\frac{k_1 }{\delta }\left[ {u_e \left( t \right) -\delta \sigma \left( t \right) } \right] ^{2}-\frac{k_1 }{\delta }u_e^2 \left( t \right) \nonumber \\= & {} -2\zeta \gamma e_1^2 \left( t \right) -\left[ {k_1 \left( {2\beta -\delta } \right) +2\left( {\gamma -k_2} \right) } \right] \sigma ^{2}\left( t \right) \end{aligned}$$

which implies that \(e_{1} \in L_{2} \cap L_{\infty }\), \(\sigma \in L_{2} \cap L_{\infty }\), and \(e_{2} \in L_{2} \cap L_{\infty }\) as long as the inequalities (11) hold. Consequently, by Lemma 2.2 it can be concluded that the tracking error \(e_{1}\) converges to zero. \(\square \)

Remark 3.3

If the control law (2) and learning rule (9) are replaced with

$$\begin{aligned} i_{\mathrm{qs}} \left( t \right)= & {} u_{\mathrm{fb}} \left( t \right) +\mathrm{Sat}\left[ {u_{\mathrm{ff}} \left( t \right) } \right] \end{aligned}$$
(14)
$$\begin{aligned} u_{\mathrm{ff}} \left( t \right)= & {} \mathrm{Sat}\left[ {u_{\mathrm{ff}} \left( {t-T} \right) } \right] -\delta \sigma \left( t \right) \end{aligned}$$
(15)

where

$$\begin{aligned} \mathrm{Sat}\left( x \right) =\left\{ {{\begin{array}{ll} {u^{*}}&{} \quad {x>u^{*}}\\ x&{}\quad {-u^{*}\le x\le u^{*}}\\ {-u^{*}}&{}\quad {x<-u^{*}}\end{array} }} \right. \end{aligned}$$
(16)

where \(u^{*}\) is a sufficiently large constant satisfying \(u^{*} \ge \max _{0\le t\le T} \left| {u_{\mathrm{ff}}^*\left( t \right) } \right| \), then using Lemma 2.2 and the fact that

$$\begin{aligned} \left( {\mathrm{Sat}\left[ {u_{\mathrm{ff}} \left( t \right) } \right] -u_{\mathrm{ff}}^*\left( t \right) } \right) ^{2}\le \left( {u_{\mathrm{ff}} \left( t \right) -u_{\mathrm{ff}}^*\left( t \right) } \right) ^{2} \end{aligned}$$

it can be shown that \(e_{r} \in L_{2} \cap L_{\infty }\), \(u_{\mathrm{ff}}(t) \in L_{\infty }\), and \({\dot{e}}_2 \left( t \right) \in L_{\infty }\); thus \(\lim _{t\rightarrow \infty } e_{2}(t) = 0\), and therefore \(e_{1}\) as well as \(e_{2}\) converges to zero.
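The saturation function (16) used in the modified law (14)–(15) is straightforward to implement; a minimal sketch follows, where the limit `u_star` stands for the assumed bound \(u^{*}\).

```python
# Sketch of the saturation function of Eq. (16): clamp x to the
# interval [-u_star, u_star]. u_star plays the role of u* in (16).
def sat(x, u_star):
    """Return x clamped to [-u_star, u_star]."""
    return max(-u_star, min(u_star, x))
```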

Remark 3.4

Because the positive constants \(\beta \) and \(\gamma \beta \) correspond to the P and I gains, we can easily design the constants \(\beta \) and \(\gamma \) using the PI tuning rules given in previous PI control methods such as [1]. The tuning rule of [1] implies \(\gamma =\omega _{s}/1.4\), where \(\omega _{s}\) is the bandwidth of the speed loop PI controller. Usually, \(\omega _{s}\) is much larger than 1 and, thus, we may assume that \(\gamma =\omega _{s}/1.4 \gg 1\). On the other hand, the viscous friction coefficient B is very small and, thus, we can regard \(k_{2}\) as a small constant. This implies that the stability condition \(\gamma =\omega _{s}/1.4 > k_{2}\) of Theorem 3.1 is trivially satisfied when the PI feedback control term \(u_{\mathrm{fb}}(t)\) is designed by the existing tuning method.
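The tuning of Remark 3.4 can be sketched as a small computation; the speed-loop bandwidth (20 Hz) and P gain used in the example are assumed values, not the paper's design.

```python
# Sketch of the tuning in Remark 3.4: gamma = omega_s/1.4 maps the
# speed-loop bandwidth omega_s to the parameter gamma of (4), with
# P gain beta and I gain gamma*beta. The numbers below are assumed.
import math

def speed_loop_gains(omega_s, beta):
    """Return (gamma, P gain, I gain) from the tuning rule of [1]."""
    gamma = omega_s / 1.4
    return gamma, beta, gamma * beta

# Assumed 20 Hz speed-loop bandwidth and P gain 0.2 (illustrative).
gamma, kp, ki = speed_loop_gains(omega_s=2 * math.pi * 20.0, beta=0.2)
```

Note that the resulting \(\gamma \approx 90\) easily satisfies \(\gamma > k_2\) for a small friction coefficient.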

4 Simulation and experiment

From the PMSM parameters given in Table 1 for simulation and experiment, we can derive the following dynamic equation:

$$\begin{aligned} {\dot{\omega }} = 1679.4\, i_{\mathrm{qs}} - 0.2837\, \omega - 3782.4\, T_L \end{aligned}$$
(17)
Table 1 PMSM parameters for simulation and experiment
Fig. 3

Overall block diagram of the proposed PMSM learning control system

Fig. 4

Simulation results of the first trial under +200 % variations of some parameters \((J, B, \lambda _{m}\), and \(T_{L})\)

Fig. 5

Simulation results of the fifth trial under +200 % variations of some parameters \((J, B, \lambda _{m}\), and \(T_{L})\)

Fig. 6

Simulation results about the speed tracking error under +200 % variations of some parameters \((J, B, \lambda _{m}\), and \(T_{L})\) (blue first trial, green second trial, red third trial, cyan fourth trial, and purple fifth trial)

Referring to the result given in the previous section, we can obtain the following current control law

$$\begin{aligned} i_{\mathrm{qsd}} \left( t \right) =-0.2\sigma \left( t \right) +u_{\mathrm{ff}} \left( t \right) \end{aligned}$$
(18)

where \(\sigma (t)\) is given by

$$\begin{aligned} \sigma \left( t \right) =\int _0^t {\left( {\omega -\omega _d} \right) \mathrm{d}\tau } +\left( {\omega -\omega _d} \right) =e_1 +e_2 \end{aligned}$$
(19)

and \(u_{\mathrm{ff}}(t)\) is updated by the following learning rule:

$$\begin{aligned} u_{\mathrm{ff}} \left( t \right) =u_{\mathrm{ff}} \left( {t-T} \right) -0.2\sigma \left( t \right) \end{aligned}$$
(20)
Table 2 Maximum speed tracking errors of each trial
Fig. 7

Experimental results under the same condition as Fig. 4. a \(\omega _{d}\), \(\omega \), and \(e_{2}\). b \(i_{\mathrm{qsd}}\), \(i_{\mathrm{qs}}\), and \(i_{\mathrm{d}s}\)

where \(T = 1\) and \(u_{\mathrm{ff}}(t) = 0\) for \(t \in [-T, 0]\). Figure 3 shows the overall block diagram of the proposed PMSM learning control system. All blocks inside the dotted line are implemented on a Texas Instruments TMS320F28335 floating-point DSP. Two stator currents \((i_{a}, i_{b})\) as well as a dc-link voltage \((V_{\mathrm{dc}})\) are measured for control, and their analog signals are precisely converted to digital values by a 12-bit ADC module with a built-in sample-and-hold circuit. The rotor position \((\theta )\) as well as the motor speed \((\omega )\) is also obtained. As shown in this figure, the control system uses a cascade control structure with two control loops: the proposed learning controller in the outer loop and a conventional PI current controller in the inner loop. The conventional PI current controller is used to evaluate the performance of the proposed learning controller, so the output of the proposed learning controller becomes the q axis current command \((i_{\mathrm{qsd}})\) of the PI current controller. In this paper, the switching frequency is chosen as 5 kHz, and a space vector pulse-width modulation (SVPWM) technique is used. In the simulations and experiments, the motor speed command \((\omega _{d})\) is changed from 125.7 to 251.3 rad/s. Figure 4 shows the simulation results \((\omega _{d}, \omega , e_{2}, i_{\mathrm{qsd}}, i_{\mathrm{qs}}, i_{\mathrm{d}s})\) of the first trial using MATLAB/Simulink under +200 % variations of the parameters \((J, B, \lambda _{m}\), and \(T_{L})\). It should be noted that the results of the first trial are equivalent to those of the following PI speed control law:

$$\begin{aligned} i_{\mathrm{qsd}}= & {} -0.06\int _0^t {\left( {\omega -\omega _d} \right) \mathrm{d}\tau } -0.2\left( {\omega -\omega _d} \right) \\= & {} -0.06e_1 -0.2e_2 \end{aligned}$$
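To see the trial-by-trial learning behavior numerically, the following Python sketch integrates the model (17) under the control law (18)–(20) by forward Euler; the inner current loop is idealized (\(i_{\mathrm{qs}} = i_{\mathrm{qsd}}\)), and the constant load torque, speed command, and zero initial conditions are assumed values rather than the experimental profile.

```python
# Hedged sketch: closed-loop simulation of the model (17) under the
# learning control law (18)-(20), forward-Euler integration. The inner
# current loop is idealized (i_qs == i_qsd); T_L, omega_d, and the
# initial state are illustrative assumptions.
def simulate(trials=5, T=1.0, dt=1e-4):
    k1, k2, k3 = 1679.4, 0.2837, 3782.4   # model gains of Eq. (17)
    beta = delta = 0.2                     # gains of (18) and (20)
    n = round(T / dt)                      # samples per period
    u_ff = [0.0] * n                       # u_ff(t) = 0 for t in [-T, 0]
    omega, e1 = 0.0, 0.0
    omega_d, T_L = 251.3, 0.05             # assumed command and load
    max_err = []
    for _ in range(trials):
        worst = 0.0
        for k in range(n):
            e2 = omega - omega_d
            e1 += e2 * dt
            sigma = e1 + e2                # Eq. (19)
            u_ff[k] -= delta * sigma       # Eq. (20), one-period buffer
            i_qs = -beta * sigma + u_ff[k] # Eq. (18), ideal current loop
            omega += dt * (k1 * i_qs - k2 * omega - k3 * T_L)  # Eq. (17)
            worst = max(worst, abs(e2))
        max_err.append(worst)              # max speed error of this trial
    return max_err

errs = simulate()
```

Under these assumptions, the per-period maximum speed error shrinks from the first trial to later trials, mirroring the trend summarized in Table 2.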
Fig. 8

Experimental results under the same condition as Fig. 5. a \(\omega _{d}\), \(\omega \), and \(e_{2}\). b \(i_{\mathrm{qsd}}\), \(i_{\mathrm{qs}}\), and \(i_{\mathrm{d}s}\)

Figure 5 shows the simulation results of the fifth trial under +200 % variations of the parameters \((J, B, \lambda _{m}\), and \(T_{L})\), and Fig. 6 shows the simulation results for the speed tracking error \((e_{2})\) under the same variations. As shown in Figs. 4, 5 and 6, the proposed learning controller is very insensitive to model parameter and load torque variations. Table 2 summarizes the maximum speed tracking error of each simulation trial. Figure 7 shows the experimental results under the same condition as Fig. 4: Fig. 7a illustrates the desired speed \((\omega _{d})\), measured speed \((\omega )\), and speed error \((e_{2})\), and Fig. 7b shows the desired q axis current \((i_{\mathrm{qsd}})\), measured q axis current \((i_{\mathrm{qs}})\), and measured d axis current \((i_{\mathrm{d}s})\). Also, Fig. 8 shows the experimental results under the same condition as Fig. 5.

The simulation and experimental results verify that the proposed PI-type controller gives remarkable control performance in that it can accurately control the speed of a PMSM without precise information about motor parameter and load torque values. Moreover, the simple controller guarantees fast convergence in the presence of repeating load torque and model parameter uncertainties. Table 3 summarizes the steady-state speed tracking errors of the first and fifth trials based on the simulation and experimental results.

Table 3 Speed tracking errors of the first trial and the fifth trial during steady state

5 Conclusion

This paper showed that, by adding a very simple learning feedforward term, a conventional PI control system can be made to achieve perfect tracking performance in the presence of model parameter and load torque variations. Convergence and stability of the proposed control system were proven by showing that the tracking error of the closed-loop system asymptotically goes to zero. To validate the practicality and feasibility of the proposed PI-type learning controller, simulations and experiments were carried out without any information on motor parameter and load torque values, using a conventional PI current controller with the proposed speed controller. The simulation and experimental results verified that, even though the proposed PI-type control algorithm is simple and easy to implement, it yields good control performance.