
1 Introduction

Electrical energy is essential and has become a basic need for mankind. However, continued dependence on fossil fuels hinders the adoption of clean energy and contributes to global warming, climate change, etc. [1]. The 1997 adoption of the Kyoto Protocol to the United Nations Framework Convention on Climate Change marked a major turning point in the promotion of renewable energy [2, 3]. India is one of the few nations to have made the essential preparations for the production, transmission, and distribution of renewable energy. The availability of electrical energy also affects a nation's social and economic conditions [4]. Renewable sources such as solar energy play an important role in increasing energy independence. India has enormous solar potential: because of its location, it has access to solar energy throughout the year over a very large land area [5]. India stands fourth in the world in solar energy generation and has invested a huge sum of money under the National Solar Mission to generate 100 GW by 2022. It is therefore essential to identify the best locations for installing solar plants in the country, and research is currently being conducted in several Indian states [6]. Choosing a solar farm location can be problematic because it must also be checked whether the site can be extended in the future [7]. Thus, choosing a location for a solar farm is one of the most essential concerns in maximizing its overall effectiveness. India receives about 5000 trillion kWh of solar insolation annually, and the barren lands of the country could support up to 750 GW of solar capacity. According to the Government of India's Ministry of New and Renewable Energy (MNRE-GOI), the Thar Desert alone can generate up to 142 GW of solar energy [8]. In this paper, four Indian states and union territories, i.e., Chandigarh, Gujarat, Manipur, and Kerala, are taken as case studies. The research methodology of the presented work is explained in detail, and the multi-layer perceptron back-propagation (MLP-BP) and genetic algorithm (MLP-GA) optimization techniques are used for a comparative analysis.

2 Research Methodology

This section examines the pertinent research on choosing a solar farm site and outlines the knowledge gaps that must be filled for the project to be completed. Identification of the social, technical, economic, environmental, and political (STEEP) factors is the first and most crucial stage in laying the groundwork for a new solar farm. The architecture of STEEP is shown in Fig. 1.

Fig. 1 Major criteria influencing the location of the solar farm: a classification chart of the social, technical, economic, environmental, and political aspects, with the factors under each criterion

2.1 Detecting Solar Hotspots in India

Since 2005, India's power generation has grown at a compound annual growth rate (CAGR) of 5.2%. Still, more than 400 million people lack access to electricity. The Integrated Energy Policy Report (IEPR 2006) projects a requirement of more than 800,000 megawatts (MW) by 2032 [9]. An average of 66 MW of solar applications is installed in the country [10]. The goals outlined by “Solar India”, launched in 2010, are to meet the ambitious targets of 22,000 MW of grid-connected and 2000 MW of off-grid solar production by 2025. To realize the vision of a “Solar India”, it is crucial to locate the country's solar hotspots and assess the potential and variability of India's solar resources [11].

3 The Research Framework of the Present Study

Using a thorough literature review and recommendations from experts, the study begins with the selection of the STEEP criteria influencing the choice of solar farm sites in the Indian context. A decision hierarchy using the MLP-BP and MLP-GA techniques with the STEEP criteria and sub-criteria is then constructed. Experts are engaged once again to address any discrepancies that may appear in the MLP-BP results. Finally, after a set of potential locations is considered, the most suitable site is suggested based on the MLP-BP and MLP-GA results. Figure 2 shows the framework for the proposed investigation [12].

Fig. 2 Framework for the proposed investigation: a two-part flow diagram in which a decision compiler gathers data on the social, technical, economic, environmental, and political factors, and the selection criteria are then used to identify candidate farms and rank the sites by applying GA and MLP-BP

3.1 MLP-BP

MLP is a supervised network in which neurons are arranged in layers. As shown in Fig. 4, an MLP consists of three layers: an input layer, a hidden layer that acts as the network's brain, and an output layer. The network architecture is topologically organized into sections of neurons, and every neuron in the ith layer is linked to every neuron in the (i + 1)th layer. The weights of these connections are computed by back-propagation, the training procedure of the MLP; its working flowchart is given in Fig. 3. The back-propagation (BP) technique is based on the error-correction principle and is used to train the network. The weights of the hidden layer are updated using Eq. (1), and the detailed training procedure is carried out in the following steps.

$$v_{ij} \left( {n + 1} \right) = v_{ij} (n) + \eta {*}\partial_{j} \left( n \right){*}x_{i} \left( n \right),$$
(1)
$$w_{jk} \left( {n + 1} \right) = w_{jk} (n) + \eta {*}\partial_{k} \left( n \right){*}y_{j} \left( n \right).$$
(2)
Fig. 3 Working flowchart for MLP-BP: initialize the data, present a training sample, apply back-propagation, and repeat until all training samples are processed; once the minimum error is reached, perform the ranking based on the output-layer results

$$\partial_{k} = \left( {d_{k} - o_{k} } \right)*\left( {1 - o_{k} } \right)*o_{k} ,$$
(3)
$$\partial_{j}{\prime} = \mathop \sum \limits_{k} \partial_{k} {*}w_{jk} ,$$
(4)
$$\partial_{j} = \left( {1 - y_{j} } \right){*}y_{j} *\partial_{j}{\prime} .$$
(5)

Using the sigmoid function, the output of every layer of neurons is computed for every input. Equation (6) gives the hidden layer's output, and Eq. (7) gives the output layer's output (Fig. 4).

$$s_{1 } = {\text{sigmoid }}\left( {\mathop \sum \limits_{i,j} v_{ij} *x_{i} } \right),$$
(6)
$$s_{2 } = {\text{sigmoid}} \left( {\mathop \sum \limits_{j,k} w_{jk} *y_{j} } \right).$$
(7)
Fig. 4 MLP neural network: input nodes are connected to the hidden-layer nodes through the weights \(v_{ij}\), which in turn connect to the two output nodes through the weights \(w_{jk}\)

where \(v_{ij}\) denotes the weight from the ith input to the jth hidden neuron, \(\partial_{j}\) is the error signal produced by the jth hidden neuron, \(y_{j}\) is the output of the jth hidden neuron, η is the learning rate, \(o_{k}\) is the output of the kth output neuron, \(d_{k}\) is the desired output of the kth output neuron, \(\partial_{k}\) is the error signal of the kth output neuron, and \(x_{i}\) is the ith input [13].

Equations (1) and (2) are used to update the weights of every neuron in the hidden and output layers. In the BP method, the error signals computed at the output layer are propagated back to the hidden layer. The procedure is repeated over the complete input–output pattern until the error falls below a predetermined threshold. The squared-error cost function in Eq. (8) is used for this minimization.

$$E = \frac{1}{2}\mathop \sum \limits_{k} (d_{k} - o_{k} )^{2} .$$
(8)
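
As a worked illustration of Eqs. (1)–(8), the following minimal Python/NumPy sketch performs one back-propagation step for the 11–4–2 network used later in this study. The learning rate, random initialization, and sample data below are placeholders for illustration only, not the values used in the reported experiments.

```python
import numpy as np

def sigmoid(z):
    # Unipolar sigmoid activation used in Eqs. (6)-(7)
    return 1.0 / (1.0 + np.exp(-z))

def bp_step(x, d, V, W, eta=0.1):
    """One back-propagation step for a single sample of an 11-4-2 MLP.

    x : input vector, shape (11,)     d : desired output, shape (2,)
    V : input-to-hidden weights, shape (11, 4)
    W : hidden-to-output weights, shape (4, 2)
    """
    # Forward pass, Eqs. (6)-(7)
    y = sigmoid(x @ V)                       # hidden-layer outputs y_j
    o = sigmoid(y @ W)                       # output-layer outputs o_k

    # Error signals, Eqs. (3)-(5)
    delta_k = (d - o) * (1.0 - o) * o        # output-layer error signal
    delta_j = (1.0 - y) * y * (W @ delta_k)  # hidden-layer error signal

    # Weight updates, Eqs. (1)-(2)
    W += eta * np.outer(y, delta_k)
    V += eta * np.outer(x, delta_j)

    # Squared-error cost, Eq. (8)
    E = 0.5 * np.sum((d - o) ** 2)
    return V, W, E

# Hypothetical usage: one sample with target class "good" = (1, 0)
rng = np.random.default_rng(0)
V = rng.uniform(-0.5, 0.5, (11, 4))
W = rng.uniform(-0.5, 0.5, (4, 2))
x = rng.uniform(0.0, 1.0, 11)
d = np.array([1.0, 0.0])
V, W, E = bp_step(x, d, V, W)
```

In a full training run, this step is repeated over all input–output patterns until E falls below the chosen threshold, as described above.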

The different parameters used for the MLP-BP algorithm are tabulated in Table 1.

Table 1 Climatic parameters of specific sites

3.2 MLP-GA Algorithm

Figure 5 gives the flowchart of the MLP-GA implementation. An MLP is evolved by designating the weight list as the genotype of the GA. Each weight can be represented by a binary number, so each solution is a bit string encoding the weights of the connections between the layers of the ANN. In this work, each training input is of size 11, and there are four hidden neurons and two output neurons. The total number of weights (TW) is determined by these sizes and is given by Eq. (9).

$${\text{TW}}\, = \,(I\, * \,{\text{AB}}\, + \,{\text{AB}}\, * \,{\text{QN}}),$$
(9)
Fig. 5 Working flowchart for MLP-GA: randomly generate m initial individuals, calculate each individual's fitness, and check whether the termination criterion is satisfied; if it is, stop the GA optimization, decode the parameters, and perform the ranking, otherwise apply selection, crossover, and mutation to produce new individuals and recalculate the fitness

where I denotes the size of the input pattern, AB is the number of hidden neurons, and QN is the number of output neurons. The total number of weights in the current work is 52. The length of the gene (LG) is given by:

$${\text{LG}}\, = \,[P\, * \,(I\, * \,{\text{AB}}\, + \,{\text{AB}}\, * \,{\text{QN}})].$$
(10)

where P is the number of bits per weight. Each weight is represented using a 16-bit binary number, i.e., P = 16; hence, the length of the gene is LG = 832. Equation (11) shows the reconstruction of the phenotype from the genotype:

$$y_{m} = \mathop \sum \limits_{k = 1}^{P} b_{mk} \,2^{ - k} ,$$
(11)

where \(b_{mk}\) is the kth bit of the mth weight.

$$w_{S } = y_{m} *D\, + \,F,$$
(12)

where \(w_{S}\) is the weight value decoded from the string, D is the scaling factor, and F is the shifting factor. The hidden layer's and the output layer's outputs are given as follows.

$$s_{{1{ }}} = \left( {\mathop \sum \limits_{m,j} v_{jm} {*}x_{pm} } \right),$$
(13)

where \(y_{j} = {\text{sigmoid}}\left( {s_{1} } \right)\) and \(x_{pm}\) is the input:

$$s_{{2{ }}} = \left( {\mathop \sum \limits_{j,k} w_{kj} {*}y_{j} } \right).$$
(14)

The error is finally computed using Eq. (15):

$$E = \frac{1}{2}\mathop \sum \limits_{k} (d_{k} - o_{k} )^{2} ,$$
(15)

where \(o_{k} = {\text{sigmoid}}\left( {s_{2} } \right)\) (the unipolar sigmoid is used as the activation function), \(d_{k}\) is the previously determined desired output, and N represents the number of training samples; the fitness of a string is given by \(\frac{1 - E}{N}\).
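
To make Eqs. (9)–(15) concrete, the sketch below shows one plausible decoding of the genotype into the 52 network weights and the resulting fitness evaluation. It assumes NumPy, reads Eq. (11) as a binary fraction (bits \(b_{mk}\) weighted by \(2^{-k}\)), and uses hypothetical scaling and shifting factors D and F; the GA operators themselves (selection, crossover, and mutation) are omitted.

```python
import numpy as np

I, AB, QN, P = 11, 4, 2, 16      # inputs, hidden neurons, output neurons, bits per weight
TW = I * AB + AB * QN            # Eq. (9): total weights = 52
LG = P * TW                      # Eq. (10): gene length = 832

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def decode(gene, D=20.0, F=-10.0):
    """Decode a bit-string genotype into scaled network weights.

    Eq. (11): y_m = sum_k b_mk * 2^(-k);  Eq. (12): w_S = y_m * D + F.
    D and F here are hypothetical values mapping each weight into [-10, 10).
    """
    bits = gene.reshape(TW, P)               # one row of P bits per weight
    powers = 2.0 ** -np.arange(1, P + 1)
    y = bits @ powers                        # fractional value in [0, 1)
    w = y * D + F
    V = w[:I * AB].reshape(I, AB)            # input-to-hidden weights
    W = w[I * AB:].reshape(AB, QN)           # hidden-to-output weights
    return V, W

def fitness(gene, X, T):
    """String fitness (1 - E) / N, using the squared error E of Eq. (15)."""
    V, W = decode(gene)
    E = 0.0
    for x, d in zip(X, T):
        y = sigmoid(x @ V)                   # Eq. (13) passed through the sigmoid
        o = sigmoid(y @ W)                   # Eq. (14) passed through the sigmoid
        E += 0.5 * np.sum((d - o) ** 2)      # Eq. (15)
    return (1.0 - E) / len(X)

# Hypothetical usage: a random individual scored on random data with target "good"
rng = np.random.default_rng(0)
gene = rng.integers(0, 2, LG)
X = rng.uniform(0.0, 1.0, (5, I))
T = np.tile([1.0, 0.0], (5, 1))
print(fitness(gene, X, T))
```

A GA would evaluate this fitness for each individual in the population and apply selection, crossover, and mutation until the termination criterion shown in Fig. 5 is met.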

4 Results and Analysis

Table 1 shows the MLP-GA and MLP-BP parameters: 11 inputs, 4 hidden neurons, and 2 output neurons. The target output is set as (1, 0) for good, (0, 1) for fair, and (0, 0) for bad. Table 1 also shows the data for the solar power plant sites selected in the present study. After applying the proposed MLP-GA and the conventional MLP-BP, the results obtained are given in Table 2. The weights obtained after effective implementation of the algorithms for 40,000 iterations are listed below:

Table 2 Ranking and comparison of MLP-BP and MLP-GA

For MLP-BP:

Weights “W” (for 40,000 iterations) in the output layer are: −11.250484, −6.032527, −5.925765, −5.783226, −5.201148 and 5.376030, −4.364461, −4.718610, −4.377911, 5.598781.

Weights for connections in the hidden layer are: −0.310785, −0.372383, −0.397257, 0.095255, −0.406114, −0.531399, −0.327321, 0.506233, 0.277953, 0.512685, −0.456857 and 0.143809, −0.552357, 0.374949, 0.110471, 0.015784, −0.061495, −0.159856, −0.614426, −0.127249, 0.211235, 0.327446 and −0.163731, −0.132746, −0.136705, 0.392167, −0.311036, −0.442995, 0.156651, −0.154446, −0.030061, −0.233577, −0.083521 and 0.371313, −0.296220, 0.133337, −0.136436, −0.304520, −0.255625, −0.298221, −0.038948, −0.331417, −0.081242, −0.137536.

For MLP-GA:

Weights “W” (for 40,000 iterations) in the output layer are: 5.003967, −3.188782, −0.614929, 4.794006, 3.460999, and −7.992859, 9.524231, −8.113098, 3.690796, 4.251099.

Weights for connections in the hidden layer are:

2.979126, −0.974121, 0.061646, −4.470215, −6.751404, −1.930542, −4.538269, −5.680542, −5.046082, 5.933533, 9.702148, −1.342163, −1.699219, 9.031982, 7.217712, 9.011230, −5.486450, 6.130371, −3.273926, 6.184692 and 2.531433, −4.640808, 6.295166, −2.701721, −5.285339, 9.144287, −9.393616, 2.718811, 5.208740, 4.013367, −9.048157, 5.948486, −4.125366, −5.840454, −7.821655, 9.327087, 4.200745, −4.916992, 4.541321, 4.348145 and −1.596985, 8.518372, 7.958069, −3.184814, 5.506897, 1.583862, −8.864136, 7.969360, −5.873108, 7.953491, −9.496460, 8.467102, 1.188660, 1.690979, −4.476318, 2.637634, 0.491028, −0.689697, 6.616211, 9.660339 and 2.225342, 9.621887, 1.385193, −9.605713, 1.617737, 7.885437, 0.466919, −0.914307, 1.846008, −9.912720, −2.230835, 6.101990, −5.264893, 6.053162, 4.118652, 9.774475, 6.222534, 4.756470, −0.056458, 1.686096.

5 Conclusion

In this paper, both the quantitative and qualitative features required for the site selection of a solar power plant are discussed. The proposed MLP-GA demonstrates its ability to correctly rank appropriate sites for the development of solar energy projects. Our findings are objective, and several crucial factors, including both quantitative and qualitative details of the proposed solar power facility, were taken into account. As the needs of the business sector change, considerations such as the internal rate of return (IRR) and systemic advantages may be included in the future. However, the focus of the current study is on environmental effects, which are crucial to long-term sustainability. The work discussed in this study can be extended by combining MLP-BP and fuzzy MLP-GA to determine the best places to build solar power plants.