1 Introduction

Quadratic programming (QP) problems arise in many practical applications, such as portfolio optimization, least squares estimation (LSE) and sequential quadratic programming (SQP). Beyond this wide applicability, quadratic programming is also interesting from a methodological point of view, since several classes of nonlinear optimization problems reduce to it. Moreover, solving QP problems in real time is necessary in many practical applications [1,2,3].

In recent years, researchers have devoted considerable effort to optimization problems in microgrids (MGs) [4,5,6,7,8,9,10,11,12,13,14,15,16,17]. Urias et al. [18] presented a recurrent neural network (RNN) for linear programming and applied it to obtain the optimal solution of a microgrid. A new control strategy for the day-ahead dispatch of isolated power systems including a battery system was presented in [19]. In [20], Umeozor et al. presented a parametric programming-based approach for energy management in MGs. Yang et al. [21] investigated a discrete-time neural network for solving convex QP problems in constrained model predictive control and implemented it on a digital signal processor. In MGs equipped with energy storage systems and natural resources, demand response and thermal optimization are supported by monitoring strategies. Focusing on the dispatch problem of smart appliances, Li et al. [22] investigated sparse load shifting on the demand side using a distributed algorithm.

Over the years, solving constrained optimization problems with neural networks has been studied in depth. Combining a back-propagation neural network (BPNN) with a traditional genetic algorithm (GA), Huang et al. [23] proposed a new optimization method. Matallanas et al. [24] implemented a scheduler and a coordinator with neural networks in a distributed control system. In [25], Wang discussed a combined adaptive neural network control and nonlinear model predictive control approach for a class of multi-rate networked nonlinear systems with a double-layer architecture. In [26], Patel designed a quantum-based neural network classifier as a firewall to detect malicious Web requests. Zhang et al. [27] developed a class of neural networks for solving general nonlinear programming problems; this approach is based on the well-known Lagrange multiplier theory and handles constraints by folding modified constraint terms into the objective function, which reduces the computational difficulty.

The electrical MG under study is shown in Fig. 1. The MG is a low-voltage power system integrated with different power generation resources (PGRs), an energy storage system (ESS) and variable demands (VDs), which can be operated by the control center. The PGRs include renewable generation resources (RGRs) and conventional generation resources (CGRs). The RGRs consist of wind turbines (WTs) and photovoltaics (PVs), and the CGRs involve diesel generators (DGs), fuel cells (FCs) and microturbines (MTs). The ESS mainly refers to batteries with rapid charge–discharge capability. Heating equipment, air conditioning and ventilation in the VDs can be used to balance supply and demand. A power management strategy (PMS) is required in the MG to minimize the cost function under certain operational constraints by controlling the power flows between the system units.

Fig. 1 MG architecture

In this paper, the Lagrange programming neural network (LPNN) is introduced to obtain the optimal scheduling and minimize the cost function in a hybrid MG. Variable neurons and Lagrange neurons in the network are used to determine the optimal solution, which corresponds to the equilibrium point of the network, and to keep the neurons moving within the feasible region during the iteration process. Our contributions include the following aspects: (1) We obtain the LPNN solution for the optimal operating strategy, and the asymptotic stability condition of the neurodynamic model is also analyzed. Combining variable neurons with Lagrange neurons, the LPNN aims to minimize the cost function and maximize the energy generated by the WTs and PVs. (2) A radial basis function neural network (RBFNN), whose input vector is a set of sequences with a certain time interval, is first applied to obtain the predicted values of wind speed (WS), solar radiation (SR) and load demand with small MSE and ARE, showing good performance. (3) By defining a price to control emissions and integrating the emission cost with the fuel cost function, an objective function that fully accounts for the operating costs is proposed for the MG system.

The remainder of this paper is organized as follows. The system description is given in Sect. 2, and the variable demands are described in Sect. 3. In Sect. 4, the combined economic and emission dispatch (CEED) problem is stated. The RBFNN-based prediction of the RGRs and VDs is presented in Sect. 5. In Sect. 6, we introduce the LPNN and investigate the asymptotic stability of the proposed algorithm. The optimization solutions and simulation results are presented in Sect. 7. Finally, Sect. 8 concludes this paper.

2 System description

The MG is a low-voltage power system integrated with different PGRs, an ESS and VDs, which can be operated by the control center.

2.1 Power generation resources

Distributed generation units inside the MG control area comprise the microsources (MSs), consisting of the CGRs (DGs, MTs and FCs) and the RGRs (WTs and PVs).

  1. Diesel generator: The fuel cost function of a DG system is denoted by a quadratic polynomial with regard to the output power of the generator [28] (a combined numerical sketch of the five resource models is given after this list):

    $$ C_{1} \left( {P_{\text{DG}} \left( t \right)} \right) = \alpha_{\text{DG}} + \beta_{\text{DG}} P_{\text{DG}} \left( t \right) + \gamma_{\text{DG}} P_{\text{DG}}^{2} \left( t \right) $$
    (1)

    where \( \alpha_{\text{DG}} ,\beta_{\text{DG}} ,\gamma_{\text{DG}} \) are the positive parameters of DGs, \( P_{\text{DG}} \) is the generator power of the DGs and \( C_{1} \) is the fuel cost of DGs.

  2. Fuel cell: In previous work [29], the cost of the FCs was adopted as a first-order function of the power output. In our work, a more reasonable quadratic model is considered. The fuel cost of the FCs is calculated as:

    $$ C_{2} \left( {P_{\text{FC}} \left( t \right)} \right) = \alpha_{\text{FC}} + \beta_{\text{FC}} P_{\text{FC}} \left( t \right) + \gamma_{\text{FC}} P_{\text{FC}}^{2} \left( t \right) $$
    (2)

    where \( \alpha_{\text{FC}} ,\beta_{\text{FC}} ,\gamma_{\text{FC}} \) are the cost coefficients of the FCs, \( P_{\text{FC}} \) is the power generation of the FCs and \( C_{2} \) is the fuel cost of FCs.

  3. Microturbine: The cost expression of the MT is analogous to that of the FC; the difference between them is that the efficiency of the MT is positively correlated with the power generation [29, 30]. The fuel cost function of an MT system is expressed by the quadratic function (3):

    $$ C_{3} \left( {P_{\text{MT}} \left( t \right)} \right) = \alpha_{\text{MT}} + \beta_{\text{MT}} P_{\text{MT}} \left( t \right) + \gamma_{\text{MT}} P_{\text{MT}}^{2} \left( t \right) $$
    (3)

    where \( \alpha_{\text{MT}} ,\beta_{\text{MT}} ,\gamma_{\text{MT}} \) are the parameters of the MTs, \( P_{\text{MT}} \) is the generator power of the MT and \( C_{3} \) is the fuel cost of MTs.

  4. Wind energy system: The power generation of the WT unit can be described as a cubic polynomial with regard to the wind speed at the monitoring station (4):

    $$ P_{\text{WT}} = \frac{1}{2}\rho \pi R^{2} v^{3} C_{\text{P}} $$
    (4)

    where \( P_{\text{WT}} \) is the power generation of the system, \( \rho \) is the air density of the monitoring area, \( R \) is the radius of the paddle of wind turbines, \( v \) is the wind speed of power generation area and \( C_{\text{p}} \) is conversion efficiency of the wind power.

  5. Solar energy system: The generation power of the PV system with regard to solar radiation can be denoted by (5):

    $$ P_{\text{PV}} = P_{\text{STC}} \frac{{G_{\text{ING}} }}{{G_{\text{STC}} }}\left( {1 + k\left( {T_{\text{C}} - T_{\gamma } } \right)} \right) $$
    (5)

    where \( P_{\text{PV}} \) is the power generation of the system, \( G_{\text{ING}} \) is incident irradiance, \( P_{\text{STC}} \) is maximum power at standard test condition, \( G_{\text{STC}} \) is irradiance at standard test condition, \( k \) is the temperature coefficient, \( T_{\text{C}} \) is the module temperature and \( T_{\gamma } \) is the reference temperature.
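
For illustration, the following Python sketch evaluates the resource models (1)–(5). All numerical coefficients (fuel-cost parameters, rotor radius, module ratings, etc.) are hypothetical placeholders introduced for the example only and are not the data used in this paper.

```python
import numpy as np

# Illustrative evaluation of the resource models (1)-(5).
# All numerical coefficients below are hypothetical placeholders.

def quadratic_fuel_cost(p, alpha, beta, gamma):
    """Eqs. (1)-(3): C = alpha + beta*P + gamma*P^2 for a DG, FC or MT unit."""
    return alpha + beta * p + gamma * p ** 2

def wind_power(v, rho=1.225, radius=5.0, cp=0.4):
    """Eq. (4): P_WT = 0.5 * rho * pi * R^2 * v^3 * Cp (SI units, output in W)."""
    return 0.5 * rho * np.pi * radius ** 2 * v ** 3 * cp

def pv_power(g_inc, p_stc=10.0, g_stc=1000.0, k=-0.0045, t_mod=45.0, t_ref=25.0):
    """Eq. (5): P_PV = P_STC * (G_ING / G_STC) * (1 + k * (T_C - T_ref))."""
    return p_stc * (g_inc / g_stc) * (1.0 + k * (t_mod - t_ref))

print("DG fuel cost at 10 kW:", quadratic_fuel_cost(10.0, 1.2, 0.05, 0.01))
print("WT power at 8 m/s    :", round(wind_power(8.0) / 1e3, 2), "kW")
print("PV power at 600 W/m^2:", round(pv_power(600.0), 2), "kW (for a 10 kW module)")
```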

2.2 Battery storage

The power from the battery storage (BS) system is needed when the supply from the MSs cannot meet the VDs. On the other hand, the battery bank stores energy for later use when the supply exceeds the load demand. Figure 2 shows the charge and discharge process of the battery storage system, where the state of charge (SOC) is used to quantify the charge level of the battery as follows [31]:

$$ {\text{state of charge}}\quad {\text{SOC}} = 1 - Q_{e} /C\left( {0,\theta } \right) $$
(6)

in which

$$ {\text{Extracted charge}}\quad Q_{e} \left( t \right) = \int_{0}^{t} { - I_{m} } \left( \tau \right){\text{d}}\tau $$
(7)
$$ {\text{Battery capacity}}\quad C\left( {I,\theta } \right) = C_{0} \left( I \right)\left( {1 + \frac{\theta }{{ - \theta_{\text{f}} }}} \right)^{\varepsilon } \left( {\theta > \theta_{\text{f}} } \right) $$
(8)

Therefore

$$ \begin{aligned} C\left( {0,\theta } \right) & = C\left( {I,\theta } \right)_{I = 0} \\ & = C_{0} \left( 0 \right)\left( {1 + \frac{\theta }{{ - \theta_{\text{f}} }}} \right)^{\varepsilon } \left( {\theta > \theta_{\text{f}} } \right) \\ & = K_{c} C_{{0^{*} }} \left( {1 + \frac{\theta }{{ - \theta_{\text{f}} }}} \right)^{\varepsilon } \\ \end{aligned} $$
(9)

where \( C_{0} \left( I \right) \) is the battery capacity at 0 °C, \( K_{c} \) is a constant in lead-acid battery system, \( C_{{0^{*} }} \) is the load-free capacity when the temperature is zero, \( \theta_{\text{f}} \) is the electrolyte freezing temperature, \( \theta \) is the electrolyte temperature, \( I_{m} \) is the extracted current, \( \varepsilon \) is a constant parameter.
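
As a minimal sketch of Eqs. (6)–(9), the snippet below computes the SOC for a constant discharge current; the capacity parameters \( K_{c} \), \( C_{{0^{*} }} \), \( \theta_{\text{f}} \) and \( \varepsilon \) used here are hypothetical placeholder values, not the ones used in this paper.

```python
import numpy as np

# Minimal SOC sketch for Eqs. (6)-(9); all parameter values are hypothetical.

def extracted_charge(i_m, dt):
    """Eq. (7): Q_e(t) = integral of -I_m over time (rectangle-rule approximation)."""
    return -np.sum(i_m) * dt

def no_load_capacity(theta, kc=1.18, c0_star=100.0, theta_f=-40.0, eps=1.29):
    """Eq. (9): C(0, theta) = Kc * C0* * (1 + theta / (-theta_f))^eps, theta > theta_f."""
    return kc * c0_star * (1.0 + theta / (-theta_f)) ** eps

def state_of_charge(i_m, dt, theta):
    """Eq. (6): SOC = 1 - Q_e / C(0, theta)."""
    return 1.0 - extracted_charge(i_m, dt) / no_load_capacity(theta)

# One hour of constant 10 A discharge at 25 degrees C (discharge current is negative,
# so Q_e grows positive and the SOC decreases).
current = np.full(3600, -10.0)                              # A, one sample per second
soc = state_of_charge(current, dt=1.0 / 3600.0, theta=25.0) # dt in hours -> Q_e in Ah
print("SOC after 1 h of 10 A discharge:", round(soc, 3))
```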

Fig. 2 Lead-acid battery SOC

By means of Eq. (6), we can easily calculate \( {\text{SOC}} \), \( C\left( {0,\theta } \right) \) or \( Q_{e} \). During the optimization process, the output power of the battery \( P_{\text{BAT}} \) satisfies the bound constraints (10):

$$ P_{\text{BATmin}} \le P_{\text{BAT}} \le P_{\text{BATmax}} ,\quad t = 1,2, \ldots ,T $$
(10)

3 Variable demands

There are four kinds of demands in our MG: critical demand, controllable demand, price-sensitive demand and thermal demand [30].

  1. Critical demand (\( P_{\text{Dcr}} \)): Loads of basic processes that must be satisfied day and night.

  2. Controllable demand (\( P_{\text{Dco}} \)): Loads that have a preferred level and a flexible magnitude.

  3. Price-sensitive demand (\( P_{\text{Dps}} \)): Loads whose magnitude depends on the energy price, with a given price regarded as the margin price.

  4. Thermal demand (\( P_{\text{Dth}} \)): The thermal load supplied by the boiler and the recovered heat generated by the MTs.

In this way, the total load demand satisfies Eq. (11):

$$ P_{D} = P_{\text{Dcr}} + P_{\text{Dco}} + P_{\text{Dps}} + P_{\text{Dth}} $$
(11)

4 Problem statement

The CEED is one of the important optimization problems in an MG system; it aims to minimize the total cost while satisfying equality and inequality constraints. A single objective function, which combines the power generation cost with the emission-control cost, is constructed to perform CEED. Hence, the problem can be regarded as the minimization of this mixed function.

4.1 Objective function

In the MG system mentioned above, the total cost mainly includes four aspects: fuel cost, operation and maintenance cost, start-up cost and emission-control cost. The proposed objective function (12) of the MG system is based on minimizing the cost of the next day [32].

$$ \begin{aligned} \hbox{min} \, CF_{\text{MG}} & = F\left( {P_{i} } \right) + \rho E\left( {P_{i} } \right) \\ & = \sum\limits_{i = 1}^{N} {\left( {C_{i} F_{i} + {\text{OM}}_{i} + {\text{SC}}_{i} } \right)} + \sum\limits_{i = 1}^{N} {\rho_{i} E_{\text{mi}} } \\ \end{aligned} $$
(12)

where \( P_{i} \) is the power of the \( i{\text{th}} \) generating unit, with \( i = 1,2,3 \) denoting the DG, FC and MT, respectively; \( F\left( {P_{i} } \right) \) is the power generation cost of the \( i{\text{th}} \) generator; \( C_{i} \) the fuel cost of the \( i{\text{th}} \) generator; \( F_{i} \) the fuel consumption rate of the \( i{\text{th}} \) generator; \( {\text{OM}}_{i} \) the operation and maintenance cost of the \( i{\text{th}} \) generator; \( {\text{SC}}_{i} \) the start-up cost of the \( i{\text{th}} \) generator; \( E_{m,i} \) the total emission of the contaminants; \( \rho_{i} \) the price assigned to emissions, which represents the harmfulness of the emissions; and \( N \) the number of generating units, which equals three.

Equation (13) indicates that \( {\text{OM}}_{i} \) is positively correlated with the output power.

$$ {\text{OM}}_{i} = K_{{{\text{OM}} - i}} P_{i} $$
(13)

where \( P_{i} \) is the power generated by the DGs, MTs or FCs and \( K_{{{\text{OM}} - i}} \) is the proportionality coefficient.

The start-up cost depends on the time the unit has been off before it is started up once again [33]:

$$ {\text{SC}}_{i} = \sigma_{i} + \delta_{i} \left[ {1 - \exp \left( {\frac{{ - T_{{{\text{off}},i}} }}{{\tau_{i} }}} \right)} \right] $$
(14)

where \( \sigma_{i} \) is the hot start-up cost, \( \delta_{i} \) is the cold start-up cost, \( \tau_{i} \) is the cooling time and \( T_{{{\text{off}},i}} \) is the time a unit has been off.

Many researchers used second-order polynomial functions to express the total emission of the pollutants [34]:

$$ E_{m,i} = \alpha_{m,i} + \beta_{m,i} P_{i} + \gamma_{m,i} P_{i}^{2} $$
(15)
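
To make the cost structure of (12)–(15) concrete, the following sketch evaluates the cost terms for a single hypothetical unit. All coefficient values are illustrative placeholders (the coefficients used later are listed in the tables of Sect. 7), and a single emission price \( \rho \) is used for simplicity.

```python
import numpy as np

# Sketch of the cost terms in Eqs. (12)-(15); coefficient values are hypothetical.

def fuel_cost(p, alpha, beta, gamma):
    """Quadratic fuel cost of one unit, cf. Eqs. (1)-(3)."""
    return alpha + beta * p + gamma * p ** 2

def om_cost(p, k_om):
    """Eq. (13): operation and maintenance cost, proportional to output power."""
    return k_om * p

def startup_cost(t_off, sigma, delta, tau):
    """Eq. (14): start-up cost as a function of the off-time before restart."""
    return sigma + delta * (1.0 - np.exp(-t_off / tau))

def emission(p, alpha_m, beta_m, gamma_m):
    """Eq. (15): second-order emission model of one unit."""
    return alpha_m + beta_m * p + gamma_m * p ** 2

def total_cost(powers, units, rho):
    """Eq. (12): fuel + O&M + start-up + priced emission, summed over all units."""
    return sum(fuel_cost(p, *u["fuel"]) + om_cost(p, u["k_om"])
               + startup_cost(u["t_off"], *u["start"]) + rho * emission(p, *u["emis"])
               for p, u in zip(powers, units))

# One hypothetical unit (e.g. a DG) running at 10 kW after 2 h offline:
unit = {"fuel": (1.2, 0.05, 0.01), "k_om": 0.02, "t_off": 2.0,
        "start": (0.1, 0.3, 1.5), "emis": (0.5, 0.02, 0.001)}
print("Total cost:", round(total_cost([10.0], [unit], rho=0.3), 3))
```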

4.2 Objective constraints

  1. Equality constraint: All the power generated by the generators and the battery must equal the load demand. Hence, the equality constraint is

    $$ P_{\text{DG}} + P_{\text{MT}} + P_{\text{FC}} + P_{\text{PV}} + P_{\text{WT}} + P_{\text{BAT}} = P_{\text{D}} $$
    (16)
  2. Inequality constraints: For normal operation, the output power of each power generation unit must stay between its lower and upper limits; therefore, the inequality constraints are

    $$ \begin{aligned} P_{\text{DGmin}} \le P_{\text{DG}} \le P_{\text{DGmax}} \hfill \\ P_{\text{FCmin}} \le P_{\text{FC}} \le P_{\text{FCmax}} \hfill \\ P_{\text{MTmin}} \le P_{\text{MT}} \le P_{\text{MTmax}} \hfill \\ 0 \le P_{\text{WT}} \le P_{\text{WTmax}} \hfill \\ 0 \le P_{\text{PV}} \le P_{\text{PVmax}} \hfill \\ \end{aligned} $$
    (17)

    where \( P_{\text{DGmin}} \) and \( P_{\text{DGmax}} \) are the minimum and maximum power generated by the DG, \( P_{\text{FCmin}} \) and \( P_{\text{FCmax}} \) the minimum and maximum power generated by the FC, and \( P_{\text{MTmin}} \) and \( P_{\text{MTmax}} \) the minimum and maximum power generated by the MT. \( P_{\text{BATmin}} \) is the minimum charge power of the battery bank and \( P_{\text{BATmax}} \) is the maximum discharge power of the battery bank. \( P_{\text{WTmax}} \) and \( P_{\text{PVmax}} \) are the maximum power generated by the WT and PV, respectively. It should be emphasized that \( P_{\text{WTmax}} \) and \( P_{\text{PVmax}} \) are obtained by the prediction process in the next section.

5 Radial basis function neural network for prediction process

The values of WS, SR and VDs for the next intervals are obtained in the prediction process. The RBFNN of [35, 36], which can predict a set of sequences, is applied in this process. The RBFNN introduces Gaussian functions in its architecture. As shown in Fig. 3, the architecture is composed of an input layer, a hidden layer and an output layer, where the hidden layer is constructed using Gaussian functions.

Fig. 3 Frame of RBFNN

The RBFNN is a two-layer feedforward network whose input vector is a set of sequences with certain time intervals. The hidden layer corresponds to a set of radial basis functions, and the output of each hidden unit is as follows

$$ o_{k} = f\left( {x - \mu_{k} } \right) = \exp \left\{ { - \frac{{\left\| {x - \mu_{k} } \right\|^{2} }}{{2\sigma_{k}^{2} }}} \right\} $$
(18)

where \( \mu_{k} \) is the center (mathematical expectation) of the \( k{\text{th}} \) Gaussian function and \( \sigma_{k} \) is its width, which controls the spread of the function around the center.

The output of the \( j{\text{th}} \) output unit is:

$$ y_{j} = \sum\limits_{k = 0}^{N} {w_{k} o_{k} } = \sum\limits_{k = 0}^{N} {w_{k} f\left( {x - \mu_{k} } \right)} = \sum\limits_{k = 0}^{N} {w_{k} \exp \left\{ { - \frac{{\left\| {x - \mu_{k} } \right\|^{2} }}{{2\sigma_{k}^{2} }}} \right\}} $$
(19)

The RBFNN model is trained off-line in a parallel mode. The output power data of the WTs, PVs and VDs are taken as the delayed output. The mean square error (MSE) is applied to decide when the training process stops; it is required to be less than \( 8.5 \times 10^{ - 4} \) at the end of training. First, the raw data sequence is used to generate four training sequences, each with an interval delay. Then, the input layer receives the training sequences and the output layer receives the raw data sequence. With the spread parameter set to 1 and the goal error set to \( 8.5 \times 10^{ - 4} \), the prediction results can be obtained with the MATLAB tool "newrb". Figures 4, 5 and 6 show the three evolution curves of the MSE during the learning stage, in which the black line represents the goal MSE. Table 1 shows the iterations, MSE and SSE of the prediction series.
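
As a rough Python counterpart of this procedure (the paper itself uses MATLAB's "newrb"), the sketch below builds the Gaussian hidden layer of Eqs. (18)–(19), fits the linear output weights by least squares and reports the training MSE on a synthetic series; the series, the number of centers and the width are illustrative assumptions only.

```python
import numpy as np

# Simplified RBFNN sketch for Eqs. (18)-(19): Gaussian hidden units plus a linear
# output layer fitted by least squares (a stand-in for MATLAB's "newrb").

def rbf_hidden(X, centers, sigma):
    """Hidden outputs o_k = exp(-||x - mu_k||^2 / (2 * sigma^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(X, y, n_centers=20, sigma=1.0):
    """Choose centers from the training inputs and solve the output weights."""
    idx = np.linspace(0, len(X) - 1, n_centers).astype(int)
    centers = X[idx]
    w, *_ = np.linalg.lstsq(rbf_hidden(X, centers, sigma), y, rcond=None)
    return centers, w

# Synthetic "day" of 96 quarter-hour samples; each input is the previous four samples.
t = np.arange(96)
series = np.sin(2 * np.pi * t / 96) + 0.05 * np.random.default_rng(0).standard_normal(96)
X = np.stack([series[i:i + 4] for i in range(len(series) - 4)])
y = series[4:]

centers, w = fit_rbf(X, y)
y_hat = rbf_hidden(X, centers, 1.0) @ w
print("training MSE:", float(np.mean((y - y_hat) ** 2)))
```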

Fig. 4 MSE in the training process of wind power

Fig. 5 MSE in the training process of solar power

Fig. 6 MSE in the training process of load demand

Table 1 The iterations, MSE and SSE of the prediction series

To measure the performance, \( {\text{ARE}} \) is calculated as [37]:

$$ {\text{ARE}} = \left| {\frac{{y\left( n \right) - \hat{y}\left( n \right)}}{y\left( n \right)}} \right| $$
(20)

where \( \hat{y}\left( n \right) \) is the time series predicted by the RBFNN and \( y\left( n \right) \) is the real input series of the neural network model. As shown in Figs. 7, 8 and 9, the prediction values of the second day are obtained by training the RBFNN with the samples of the first day. The RBF neural network achieves a day-ahead forecast by learning the underlying characteristics of the historical data. Compared with the latest research [38], the prediction model captures the overall trend but does not predict every point precisely. The large errors are relative, and Figs. 7, 8 and 9 show that they occur at different points in the wind, solar and load demand predictions.

Fig. 7 Contrast curve of real and predicted wind power

Fig. 8 Contrast curve of real and predicted solar power

Fig. 9 Contrast curve of real and predicted load demand power

6 Lagrange neural network for quadratic programming

In this section, the solution for the operating strategy in the MG is first obtained, and the asymptotic stability condition of the neurodynamic model with two kinds of neurons is then analyzed.

6.1 LPNN approach

To deal with the problem with the equality and inequality constraints (16) and (17), we apply the LPNN approach proposed in [27]. The LPNN approach, which is based on the Lagrange multiplier method [39], constructs a Lagrange function and introduces variable neurons and Lagrange neurons. During the optimization process of the neurodynamic model, the variable neurons \( P \) and \( y \) decrease the Lagrangian function, while the Lagrange neurons keep the variable neurons moving within the feasible region and hold the Lagrange multipliers \( \lambda \) and \( \mu \) that connect the constraint functions with the objective function. The transient behavior of the two kinds of neurons is described as follows:

$$ \frac{{{\text{d}}P}}{{{\text{d}}t}} = - \frac{{{\partial }L\left( {P,y,\lambda ,\mu } \right)}}{{{\partial }P}} $$
(21)
$$ \frac{{{\text{d}}y}}{{{\text{d}}t}} = - \frac{{{\partial }L\left( {P,y,\lambda ,\mu } \right)}}{{{\partial }y}} $$
(22)
$$ \frac{{{\text{d}}\lambda }}{{{\text{d}}t}} = \frac{{{\partial }L\left( {P,y,\lambda ,\mu } \right)}}{{{\partial }\lambda }} $$
(23)
$$ \frac{{{\text{d}}\mu }}{{{\text{d}}t}} = \frac{{{\partial }L\left( {P,y,\lambda ,\mu } \right)}}{{{\partial }\mu }} $$
(24)

where \( \lambda \in R^{m} ,\mu \in R^{r} \) are the Lagrange multipliers and \( t \) is the time variable. The LPNN realization is shown in Fig. 10.

Fig. 10 Lagrange programming neural network

6.2 LPNN solution

The proposed objective function can be summarized as a QP problem with equality and inequality constraints as shown in (25)–(27).

$$ {\text{CF}} - {\text{const}} = \frac{1}{2}{\mathbf{P}}^{T} {\mathbf{QP}} + {\mathbf{C}}^{T} {\mathbf{P}} $$
(25)

subject to

$$ {\mathbf{H}} = {\mathbf{A}} \times {\mathbf{P}} - P_{D} = 0 $$
(26)
$$ {\mathbf{G}}\left( {\mathbf{P}} \right) = \left( {\begin{array}{*{20}c} {P_{\text{DG}} } & {P_{\text{DGmax}} } \\ \vdots & \vdots \\ {P_{\text{BAT}} } & {P_{\text{BATmax}} } \\ {P_{\text{DGmin}} } & {P_{\text{DG}} } \\ \vdots & \vdots \\ {P_{\text{BATmin}} } & {P_{\text{BAT}} } \\ \end{array} } \right) \cdot \left( {\begin{array}{*{20}c} 1 \\ { - 1} \\ \end{array} } \right) \le {\mathbf{0}}_{12 \times 1} $$
(27)

where

$$ {\mathbf{P}} = \left( {P_{\text{DG}} ,P_{\text{FC}} ,P_{\text{MT}} ,P_{\text{WT}} ,P_{\text{PV}} ,P_{\text{BAT}} } \right)^{T} $$
(28)
$$ {\mathbf{A}} = \left( {\begin{array}{*{20}c} 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array} } \right) $$
(29)
$$ {\text{const}} = \sum\limits_{i = 1}^{N} {\left( {d_{i} + \alpha_{i} \rho + {\text{SC}}_{i} } \right)} $$
(30)
$$ {\mathbf{Q}} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}_{{\mathbf{1}}} } & {{\mathbf{0}}_{3 \times 3} } \\ {{\mathbf{0}}_{3 \times 3} } & {{\mathbf{0}}_{3 \times 3} } \\ \end{array} } \right),\quad {\mathbf{C}} = \left( {\begin{array}{*{20}c} {{\mathbf{A}}_{{\mathbf{2}}} } \\ {{\mathbf{0}}_{3 \times 1} } \\ \end{array} } \right) $$
(31)

in which

$$ {\mathbf{A}}_{{\mathbf{1}}} = \left( {\begin{array}{*{20}c} {f_{1} + r_{1} \rho } & 0 & 0 \\ 0 & {f_{2} + r_{2} \rho } & 0 \\ 0 & 0 & {f_{3} + r_{3} \rho } \\ \end{array} } \right),\quad {\mathbf{A}}_{2} = \left( {\begin{array}{*{20}c} {e_{1} + K_{{{\text{OM}} - 1}} + \beta_{1} \rho } \\ {e_{2} + K_{{{\text{OM}} - 2}} + \beta_{2} \rho } \\ {e_{3} + K_{{{\text{OM}} - 3}} + \beta_{3} \rho } \\ \end{array} } \right) $$
(32)
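
For concreteness, the sketch below assembles the QP data of (25)–(32) for one dispatch interval in Python. The coefficient vectors \( f_{i} \), \( r_{i} \), \( e_{i} \), \( \beta_{i} \), \( K_{{{\text{OM}} - i}} \), the bounds and the demand are hypothetical placeholder values; in the paper they follow from the cost and emission models above and from the prediction step.

```python
import numpy as np

# QP data of Eqs. (25)-(32) for one interval; all numbers are hypothetical.
rho = 0.3                                   # emission price
f = np.array([0.02, 0.03, 0.025])           # quadratic fuel terms of DG, FC, MT
r = np.array([0.002, 0.001, 0.0015])        # quadratic emission terms
e = np.array([0.05, 0.04, 0.045])           # linear fuel terms
beta_m = np.array([0.02, 0.015, 0.018])     # linear emission terms
k_om = np.array([0.02, 0.02, 0.02])         # O&M proportionality constants

A1 = np.diag(f + r * rho)                   # Eq. (32)
A2 = e + k_om + beta_m * rho

Q = np.zeros((6, 6)); Q[:3, :3] = A1        # Eq. (31): only DG, FC, MT have quadratic terms
C = np.zeros(6);      C[:3] = A2
A = np.ones((1, 6))                         # Eq. (29): power balance row

# Decision vector order follows Eq. (28): (DG, FC, MT, WT, PV, BAT).
P_min = np.array([0.0, 0.0, 0.0, 0.0, 0.0, -4.0])   # lower bounds (battery may charge)
P_max = np.array([14.0, 8.0, 8.0, 6.5, 5.0, 4.0])   # upper bounds; WT/PV from prediction
P_D = 20.0                                           # predicted load demand

def G(P):
    """Eq. (27): stack P - P_max <= 0 and P_min - P <= 0 into a 12-dimensional vector."""
    return np.concatenate([P - P_max, P_min - P])
```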

Based on (25), the Lagrange function can be summarized as follows:

$$ \begin{aligned} L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right) & = \frac{1}{2}{\mathbf{P}}^{T} \cdot {\mathbf{Q}} \cdot {\mathbf{P}} + {\mathbf{C}}^{T} \cdot {\mathbf{P}} + \lambda \left( {{\mathbf{\rm A}} \cdot {\mathbf{P}} - P_{D} } \right) \\ & \quad + \,\sum\limits_{j = 1}^{r} {\mu_{j} } \cdot \left( {{\mathbf{G}}_{j} \left( {P_{j} } \right) + y_{j}^{2} } \right) \\ \end{aligned} $$
(33)

The transient behaviors of the neurons are calculated as:

$$ \begin{aligned} \frac{{{\text{d}}{\mathbf{P}}}}{{{\text{d}}t}} & = - \frac{{{\partial }L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right)}}{{{\partial }{\mathbf{P}}}} \\ \, & { = } - {\mathbf{Q}} \times {\mathbf{P}} - {\mathbf{C}} - \lambda \times {\mathbf{A}}^{T} \\ & \quad - \,\sum\limits_{j = 1}^{r} {\mu_{j} } \cdot {\nabla }_{{P_{j} }} {\mathbf{G}}_{j} \left( {P_{j} } \right) \\ \end{aligned} $$
(34)
$$ \begin{aligned} \frac{{{\text{d}}y}}{{{\text{d}}t}} & = - \frac{{{\partial }L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right)}}{{{\partial }y}} \\ \, & { = } - \sum\limits_{j = 1}^{r} {2\mu_{j} } y_{j} \\ \end{aligned} $$
(35)
$$ \begin{aligned} \frac{{{\text{d}}\lambda }}{{{\text{d}}t}} & = \frac{{{\partial }L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right)}}{{{\partial }\lambda }} \\ \, & { = }\left( {{\mathbf{AP}} - P_{D} } \right) \\ \end{aligned} $$
(36)
$$ \begin{aligned} \frac{{{\text{d}}\mu }}{{{\text{d}}t}} & = \frac{{{\partial }L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right)}}{{{\partial }{\varvec{\upmu}}}} \\ \, & { = }\sum\limits_{j = 1}^{r} {\left( {{\mathbf{G}}_{j} \left( {P_{j} } \right) + y_{j}^{2} } \right)} \\ \end{aligned} $$
(37)
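
Continuing the sketch above, the neuron dynamics (34)–(37) can be simulated with a simple forward-Euler discretization, as shown below. The step size, the number of iterations and the initial state are arbitrary illustrative choices, and no claim is made about convergence speed for these placeholder data; the sketch only illustrates how the right-hand sides of (34)–(37) are evaluated.

```python
# Forward-Euler integration of the neuron dynamics (34)-(37), reusing Q, C, A,
# P_D, G(.) and the bounds from the previous sketch. With the bound constraints
# of Eq. (27), the gradient of G with respect to P is +I for the upper-bound rows
# and -I for the lower-bound rows.
dG_dP = np.vstack([np.eye(6), -np.eye(6)])               # shape (12, 6)

def lpnn_step(P, y, lam, mu, dt=1e-3):
    dP   = -(Q @ P + C + lam * A.ravel() + dG_dP.T @ mu)  # Eq. (34)
    dy   = -2.0 * mu * y                                  # Eq. (35)
    dlam = (A @ P).item() - P_D                           # Eq. (36)
    dmu  = G(P) + y ** 2                                  # Eq. (37)
    return P + dt * dP, y + dt * dy, lam + dt * dlam, mu + dt * dmu

# A few integration steps from an arbitrary starting state.
P, y, lam, mu = np.full(6, 3.0), np.ones(12), 0.0, np.ones(12)
for _ in range(1000):
    P, y, lam, mu = lpnn_step(P, y, lam, mu)
print("current P:", np.round(P, 3), " power balance error:", round((A @ P).item() - P_D, 3))
```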

6.3 Stability analysis

The basic theory [27] of the LPNN is to ensure that the equilibrium point of the neural network is always a Kuhn–Tucker point of the proposed problem and that the Lagrange solution is an asymptotically stable point of the network. The equilibrium point is denoted by \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \), and the following matrices are first defined:

$$ \overline{{\mathbf{P}}} = \left( {{\mathbf{P}}^{T} ,{\mathbf{y}}^{T} } \right)^{T} ,\quad \overline{{\varvec{\uplambda}}} = \left( {\lambda ,{\varvec{\upmu}}^{T} } \right)^{T} $$
(38)
$$ \overline{{\mathbf{H}}} \left( {{\mathbf{P,y}}} \right) = \left( {{\mathbf{H}}\left( {{\mathbf{P,y}}} \right)^{T} ,{\mathbf{G}}\left( {{\mathbf{P,y}}} \right)^{T} } \right)^{T} $$
(39)

To guarantee local stability, the gradients of the constraint vector \( {\mathbf{G}}\left( {\mathbf{P}} \right) \) with respect to \( \overline{{\mathbf{P}}} \) at the equilibrium point \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) should be linearly independent.

Lemma 1

At the equilibrium point \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) of (34)–(37), the gradients of constraint vectors (27) with respect to \( \overline{{\mathbf{P}}} = \left( {{\mathbf{P}}^{T} ,{\mathbf{y}}^{T} } \right)^{T} \) are linearly independent.

Proof

$$ {\nabla }_{{\overline{P} }} {\mathbf{G}}_{j} \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} } \right) = \left[ {\begin{array}{*{20}c} {{\nabla }{\mathbf{G}}_{j} \left( {{\mathbf{P}}^{*} } \right)} \\ 0 \\ \vdots \\ 0 \\ {2y_{j}^{*} } \\ 0 \\ \vdots \\ 0 \\ \end{array} } \right] \, \quad j = 1,{ \ldots },r $$
(40)
$$ \nabla_{{\overline{P} }} {\mathbf{H}}\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} } \right) = \left[ {\begin{array}{*{20}c} {\nabla H\left( {{\mathbf{P}}^{*} } \right)} \\ 0 \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{A}}^{T} } \\ 0 \\ \end{array} } \right] \, $$
(41)

The gradients above can easily be verified to be linearly independent. Then, \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} } \right) \) is a regular point. Based on the conclusion of Lemma 1, [27] ensures that \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) is always a Kuhn–Tucker point. The following lemma then ensures that every equilibrium point corresponds to a stable system.

Lemma 2

Let \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) be a stationary point of the Lagrange function \( L\left( {{\mathbf{P}},{\mathbf{y}},\lambda ,{\varvec{\upmu}}} \right) \) such that \( \nabla_{\text{PP}}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) > 0 \). If \( {\mathbf{P}}^{*} \) is a regular point and satisfies the strict complementarity condition, then \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) is an asymptotically stable point of (34)–(37).

Proof

$$ \left[ {\begin{array}{*{20}c} {\frac{{{\text{d}}\overline{{\mathbf{P}}} }}{{{\text{d}}t}}} \\ {\frac{{{\text{d}}\overline{{\varvec{\uplambda}}} }}{{{\text{d}}t}}} \\ \end{array} } \right] = - {\mathbf{G}}^{*} \cdot \left[ {\begin{array}{*{20}c} {\overline{{\mathbf{P}}} - \overline{{\mathbf{P}}}^{*} } \\ {\overline{{\varvec{\uplambda}}} - \overline{{\varvec{\uplambda}}}^{*} } \\ \end{array} } \right]{ = } - \left[ {\begin{array}{*{20}c} {{\mathbf{B}}^{*} } & {{\mathbf{C}}^{*} } \\ {{\mathbf{D}}^{*} } & {\mathbf{0}} \\ \end{array} } \right] \cdot \left[ {\begin{array}{*{20}c} {\overline{{\mathbf{P}}} - \overline{{\mathbf{P}}}^{*} } \\ {\overline{{\varvec{\uplambda}}} - \overline{{\varvec{\uplambda}}}^{*} } \\ \end{array} } \right] $$
(42)

where

$$ {\mathbf{B}}^{*} = \nabla_{{\overline{\text{PP}} }}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) $$
(43)
$$ {\mathbf{C}}^{*} = \nabla_{{\overline{P} }} \overline{{\mathbf{H}}} \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} } \right) $$
(44)
$$ {\mathbf{D}}^{*} = - {\mathbf{C}}^{*T} $$
(45)

Additionally, we have

$$ \nabla_{{\overline{P} }} \overline{{\mathbf{H}}} \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} } \right) = \left[ {\begin{array}{*{20}c} {\nabla {\mathbf{H}}_{j} \left( {P^{*} } \right)} \\ {\mathbf{0}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{A}}^{{\mathbf{T}}} } \\ {\mathbf{0}} \\ \end{array} } \right] $$
(46)
$$ \nabla_{{\overline{\text{PP}} }}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) = \left[ {\begin{array}{*{20}c} {\nabla_{xx}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right)} & 0 & 0 & 0 \\ 0 & {2\mu_{k1}^{*} } & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {2\mu_{kJ}^{*} } \\ \end{array} } \right] $$
(47)

where

$$ \begin{aligned} \nabla_{\text{PP}}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \hfill \\ \quad = {\mathbf{Q}}^{*} + \sum\limits_{j = 1}^{m} {\mu_{j} } \nabla_{\text{PP}}^{2} G_{j} \left( {P_{j} } \right) \hfill \\ \quad = {\mathbf{Q}}^{*} \hfill \\ \end{aligned} $$
(48)

The positive definiteness of the matrix \( {\mathbf{G}}^{*} \) depends on the positive definiteness of \( \nabla_{\text{PP}}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) = {\mathbf{Q}}^{*} \); \( {\mathbf{G}}^{*} \) is strictly positive definite owing to the strict complementarity condition together with the strict positive definiteness of \( \nabla_{\text{PP}}^{2} L\left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \). Hence \( - {\mathbf{G}}^{*} \) is strictly negative definite, and the point \( \left( {{\mathbf{P}}^{*} ,{\mathbf{y}}^{*} ,\lambda^{*} ,{\varvec{\upmu}}^{*} } \right) \) is asymptotically stable.
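
The eigenvalue condition behind Lemma 2 can also be checked numerically, as in the sketch below. For brevity, only the equality-constrained core of the linearization (42) is formed, the blocks associated with inactive bounds are dropped, and a hypothetical diagonal \( {\mathbf{Q}}^{*} \) with strictly positive entries for all six units is assumed; a maximum eigenvalue real part below zero indicates an asymptotically stable linearization.

```python
import numpy as np

# Numerical check of the linearized dynamics (42) for a hypothetical, strictly
# positive definite Q* (equality-constrained core only; inactive bounds dropped).
Q_star = np.diag([0.021, 0.031, 0.026, 0.010, 0.010, 0.010])   # hypothetical values
A_eq = np.ones((1, 6))                                          # power balance gradient

B_star = Q_star                        # corresponds to Eq. (48) at the equilibrium
C_star = A_eq.T                        # constraint gradient block, cf. Eq. (44)
G_star = np.block([[B_star, C_star],
                   [-C_star.T, np.zeros((1, 1))]])              # cf. Eqs. (42)-(45)

eigs = np.linalg.eigvals(-G_star)
print("max real part of eig(-G*):", round(eigs.real.max(), 6))  # negative => stable
```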

7 Simulation results

7.1 The results of LPNN approach

As is well known, many optimization problems exist in an MG system, including optimal power dispatching, real-time price optimization and so on. In this paper, the LPNN is applied to the optimization problem of the PMS in an MG system. In Sect. 4, we introduced a single objective function which aims to minimize the total cost and maximize the power generated by the RGRs, including wind and solar energy. In this section, we conduct MATLAB simulations to obtain the optimal solutions of the proposed CEED problem using the LPNN. The RGRs, VDs and the SOC of the battery banks must be known before solving the CEED problem, as must the generation limits of the DG, MT, FC and battery. The following three points should be mentioned:

  1. Since the WTs and PVs produce power without emission-control cost, their power generation is regarded as a negative load.

  2. The SOC of the battery bank is monitored, and the battery works in the discharge state to provide energy when the RGRs are not sufficient to serve the VDs.

  3. Through the objective function, we can easily choose among the DG, MT and FC to serve the load.

In this part, simulation results based on the LPNN approach are presented. The first-day samples of the RGRs and VDs are used to adjust the parameters of the RBFNN, and the second-day samples are used to test the prediction performance. The predicted data of the second day in Figs. 7 and 8 serve as the upper limits of the RGRs in the optimization process. Figure 11 shows the VDs of the second day. To obtain the optimal solutions of the LPNN, the generation limits are set as:

$$ \begin{aligned} P_{\text{DGmin}} & = 0{\text{ kW}},\quad P_{\text{DGmax}} = 14{\text{ kW}} \\ P_{\text{MTmin}} & = 0{\text{ kW}},\quad P_{\text{MTmax}} = 8{\text{ kW}} \\ P_{\text{FCmin}} & = 0{\text{ kW}},\quad P_{\text{FCmax}} = 8{\text{ kW}} \\ \end{aligned} $$
(49)

Fig. 11 Real and predicted load demand power

Table 2 shows the values of the parameters in Eqs. (1)–(3) and the proportionality constants in Eq. (13). As shown in Fig. 2, the battery banks are initially charged to 80%. The meaning of SCB is described in the "Appendix"; for SCB = 2, Figs. 12, 13 and 14 show the optimal results of all system modules.

Table 2 Cost coefficients and proportional constants for the generation system
Fig. 12 Predicted and optimal renewable sources

Fig. 13 Optimal lead-acid battery power

Fig. 14 Optimal power product from DG, FC and MT

Figure 12 describes the optimal power of the RGRs, calculated by the LPNN. As mentioned before, the predicted power of the WT is the upper limit that the WT system can generate in the optimization process, which depends on the WS at the monitoring location; the lower limit of WT generation is set to zero. Likewise, the predicted power of the PV is the upper limit that the PV system can generate, which depends on the intensity of the SR at the monitoring location, and the lower limit of PV generation equals zero.

Figure 13 shows the optimal output power of the battery bank. From the figure, we can see that the BS system works all the time because the RGRs are not sufficient to serve the VDs. The maximum and minimum output powers of the battery bank are the upper and lower limits that the BS system can deliver or absorb in the optimization process, which are calculated by the algorithm in the "Appendix."

Figure 14 shows the optimal power of the DGs, MTs and FCs, and Fig. 15 describes the total real-time cost of the DGs, MTs and FCs, including the fuel cost and the operation and maintenance cost. The predicted power and zero power of the RGRs (the upper and lower limits of the WTs and PVs) and the output power limits of the DG, MT, FC and BS are treated as constant values in the LPNN optimization process.

Fig. 15 Optimal time-varying cost

Table 3 The comparison between the proposed LPNN and the PSO approach

In Fig. 12, the optimal output power of the RGRs coincides with the predicted power, which shows that the power of the RGRs is maximized. Comparing Fig. 14 with Fig. 15, we find that the best choice to reduce cost is to shut down the DGs and use the output power of the MTs, FCs and RGRs when the load demand is low; when the load is very high, the DGs are started up to satisfy the load. It should be mentioned that the figures above are all plotted with an interval of 15 min.

7.2 Comparison with the PSO approach

The problem (12) can be summarized as (50):

$$ \begin{aligned} {\text{minimize}} & \quad f\left( x \right) \\ {\text{subject to}} & \quad h\left( x \right) = 0 \\ & \quad g\left( x \right) \le 0 \\ \end{aligned} $$
(50)

By the penalty function method, we define the penalty function:

$$ F\left( x \right) = f\left( x \right) + C_{1} \sum\limits_{i = 1}^{k} {h_{i}^{2} } \left( x \right) + C_{2} \sum\limits_{j = 1}^{m} {\frac{1}{{g_{j} \left( x \right)}}} $$
(51)

where \( C_{1} ,C_{2} \) are the penalty factors. In this way, the problem is converted into an unconstrained one.
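
A minimal sketch of this penalization step is given below. Instead of the reciprocal term of (51), it uses the common exterior penalty \( \max \left( {0,g_{j} \left( x \right)} \right)^{2} \), which remains finite at infeasible points; this is a deliberately swapped-in variant for illustration, and the toy objective and constraints are hypothetical.

```python
import numpy as np

# Penalty-function sketch for problem (50). The inequality term uses the exterior
# quadratic penalty max(0, g_j(x))^2 rather than the reciprocal term of Eq. (51).

def penalized(f, h, g, c1=1e3, c2=1e3):
    """Return F(x) = f(x) + c1 * sum h_i(x)^2 + c2 * sum max(0, g_j(x))^2."""
    def F(x):
        x = np.asarray(x, dtype=float)
        return (f(x)
                + c1 * np.sum(np.asarray(h(x)) ** 2)
                + c2 * np.sum(np.maximum(0.0, np.asarray(g(x))) ** 2))
    return F

# Toy instance: minimize x1^2 + x2^2 subject to x1 + x2 = 1 and x1 <= 0.8.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: [x[0] + x[1] - 1.0]
g = lambda x: [x[0] - 0.8]
F = penalized(f, h, g)
print("F at the feasible point (0.5, 0.5):", F([0.5, 0.5]))  # penalty terms vanish
```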

The position of the \( i{\text{th}} \) particle, defined as \( L_{G}^{i} = \left( {L_{G,1}^{i} ,L_{G,2}^{i} , \ldots ,L_{G,n}^{i} } \right)^{T} \), is generated randomly, and \( V^{i} = \left( {V_{i,1} ,V_{i,2} , \ldots ,V_{i,n} } \right)^{T} \) represents the velocity of the \( i{\text{th}} \) particle, where \( n \) is the dimension of each particle. Define \( L_{\text{pb}}^{i} = \left( {L_{\text{pb,1}}^{i} , \ldots ,L_{{{\text{pb}},n}}^{i} } \right) \) as the personal best position corresponding to the best fitness value of the \( i{\text{th}} \) particle, and \( L_{\text{gb}} = \left( {L_{{{\text{gb}},1}} , \ldots ,L_{{{\text{gb}},n}} } \right) \) as the global best position of the group. The movement of the \( i{\text{th}} \) particle is adjusted by [40]

$$ V_{i,j} \left( {t + 1} \right) = wV_{i,j} \left( t \right) + c_{1} r_{1} \left( {L_{\text{pb}}^{i} - L_{G,j}^{i} \left( t \right)} \right) + c_{2} r_{2} \left( {L_{\text{gb}} - L_{G,j}^{i} \left( t \right)} \right) $$
(52)
$$ L_{G,j}^{i} \left( {t + 1} \right) = L_{G,j}^{i} \left( t \right) + V_{i,j} \left( {t + 1} \right) $$
(53)

where \( w \) is the inertia weight, \( c_{1} ,c_{2} \) are acceleration constants and \( r_{1} ,r_{2} \) are random numbers in the range [0, 1].
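
The sketch below applies the updates (52)–(53) to the penalized toy objective \( F \) defined in the previous sketch; the swarm size, the parameter values \( w \), \( c_{1} \), \( c_{2} \) and the iteration count are arbitrary illustrative choices rather than the settings used in the comparison of this section.

```python
# Basic PSO loop implementing the updates (52)-(53) on the penalized objective F
# defined in the previous sketch; all algorithmic settings here are illustrative.
rng = np.random.default_rng(0)
n, swarm = 2, 30
L = rng.uniform(-2.0, 2.0, size=(swarm, n))          # particle positions L_G^i
V = np.zeros((swarm, n))                              # particle velocities
pbest = L.copy()
pbest_val = np.array([F(x) for x in L])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(200):
    r1 = rng.random((swarm, n))
    r2 = rng.random((swarm, n))
    V = w * V + c1 * r1 * (pbest - L) + c2 * r2 * (gbest - L)   # Eq. (52)
    L = L + V                                                   # Eq. (53)
    vals = np.array([F(x) for x in L])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = L[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("best position:", np.round(gbest, 3), " best value:", round(float(F(gbest)), 4))
```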

We choose the \( 10{\text{th}},20{\text{th}}, \ldots ,90{\text{th}} \) intervals to compare the results of the proposed LPNN method and the basic PSO approach. The comparison is shown in Table 3 and Fig. 16. We define the ratio of the costs obtained by the LPNN and the PSO as LTP; it is easy to see that the optimal cost of the LPNN is lower than that of the PSO approach in every case. Considering that the PSO is weak at handling constrained problems, we also record the index \( \left\| {Ax - b} \right\| \) to show that the comparison is reliable. From Fig. 16, we can easily see that the LPNN performs better than the PSO method.

Fig. 16 Comparison between the LPNN and PSO

8 Conclusion

In this paper, the Lagrange programming neural network, based on the Lagrange multiplier method, is introduced to optimize the economic dispatch and minimize the proposed objective function in a hybrid microgrid. The LPNN is Lyapunov stable, and the stationary point of the Lagrange function is proved to be asymptotically stable. Simulations show that the LPNN approach can determine the optimal solutions of the power generation resources and the energy storage system. In addition, an RBF neural network is utilized to achieve day-ahead prediction of the renewable resources and the load demand.