Abstract
This study investigates the applicability of three soft computing methods, least square support vector regression (LSSVR), multivariate adaptive regression splines (MARS), and the M5 model tree (M5-Tree), in forecasting SO2 concentration. These models were applied to monthly data obtained from Janakpuri, Nizamuddin, and Shahzadabad, located in Delhi, India. The models were compared with each other using the cross validation method with respect to root mean square error, mean absolute error, and correlation coefficient. According to the comparison, LSSVR provided better accuracy than the other models, while the MARS model was found to be the second best in forecasting monthly SO2 concentration. The applied models gave better forecasting accuracy at the Janakpuri station than at the other stations. The results were also compared with previous studies, and all three methods yielded satisfactory results in modeling SO2 concentrations.
Introduction
Soft computing consists of different techniques that are helpful in solving uncertain and complex problems (Corchado et al. 2011; Corchado and Herrero 2011; Vaidya et al. 2012; Kisi and Parmar 2016). It is used to investigate, simulate, and analyze complex issues and phenomena in an attempt to solve real-world problems, and it is useful where precise scientific tools cannot give an analytic, low-cost, and complete solution. Air pollution is among the most pressing of these problems, and it affects developing and developed countries alike. Air pollutants consist of gaseous pollutants (SO2, NO2, CO, etc.), odors, and suspended particulate matter (SPM) such as fumes, dust, smoke, and mist. High concentrations of air pollutants in and near urban regions cause severe harm to the surroundings. Sulfur dioxide is a pungent, toxic gas present in the atmosphere. It also harms society by causing acid rain, which damages the environment (Rizwan et al. 2013). Sulfur dioxide reacts in the atmosphere to form aerosol particles, which can create outbreaks of haze and other climate problems. The main sources of SO2 are volcanic emissions and anthropogenic emissions from burning sulfur-contaminated fossil fuels and refining sulfide ores (Seinfeld and Pandis 2006). According to a new analysis of data from NASA's Aura satellite, emissions of sulfur dioxide from power plants in India increased by more than 60% between 2005 and 2012 (Krotkov et al. 2016). In 2010, India surpassed the USA as the world's second largest emitter of SO2 after China (EPA 2015a, b). The capital of India, Delhi, is considered among the most polluted megacities of the world (Gurjar et al. 2010). In the past, several studies assessed the air quality of Delhi (Aneja et al. 2001; Goyal 2003; Gurjar et al. 2004; Mohan and Kandya 2007; Soni et al. 2014).
Recently, Krotkov et al. (2016) studied ozone and the major atmospheric pollutant gases nitrogen dioxide (NO2) and sulfur dioxide (SO2) using the Ozone Monitoring Instrument (OMI) onboard NASA's Aura satellite. They examined changes in SO2 and NO2 over some of the world's most polluted industrialized regions during the first decade of OMI observations and found that in India, SO2 and NO2 levels from coal power plants and smelters are growing at a fast pace, increasing by more than 100 and 50%, respectively, from 2005 to 2015.
Advanced soft computing techniques such as artificial neural networks (ANNs), adaptive-network-based fuzzy inference systems (ANFIS), genetic algorithms (GA), fuzzy inference systems (FIS), decision trees, and support vector machines have been successfully applied to modeling over the last decade (Kisi 2009a, b; Guven and Kisi 2011; Voukantsis et al. 2011; Kisi and Cengiz 2013; Antanasijević et al. 2013; Kisi and Tombul 2013; Gennaro et al. 2013; Goyal et al. 2014; Parmar and Bhardwaj 2014; Wanga et al. 2015; Kisi et al. 2016). Etemad-Shahidi and Mahjoobi (2009) used the M5 algorithm for prediction of wave height and compared the results of model trees with those of artificial neural networks. In some applications, generalized regression neural networks (GRNN), multilayer perceptron neural networks (MLP), and support vector machines (SVM) are used to calculate predicted values (Kim et al. 2012); in comparison with empirical and MLR models, ANN models performed better. Kisi (2015) applied multivariate adaptive regression splines (MARS), least square support vector regression (LSSVR), and the M5 model tree (M5Tree) to pan evaporation at the Antalya and Mersin stations in Turkey; the LSSVR model was more accurate when local inputs and outputs were used, while the MARS model performed better in the second case. Kim et al. (2015) reported daily pan evaporation prediction using soft computing models. Recently, Shafaei and Kisi (2016) employed three hybrid methods, WANFIS (wavelet-ANFIS), WSVR (wavelet-SVR), and WARMA (wavelet-ARMA), for estimating monthly lake-level changes and found that the three hybrid models forecasted more accurately than the single models.
However, little of the scientific literature discusses robust forecasting methods based on soft computing techniques for air pollution modeling. The present paper covers the MARS, LSSVR, and M5 model tree (M5Tree) techniques. Each of these algorithms is discussed separately, and their results are compared to emphasize their advantages as well as their disadvantages. To the best of the authors' knowledge, this is the first time that such an analysis involving LSSVR, MARS, and M5Tree has been performed for the air pollutants of Delhi. This constitutes a real challenge, as the urban pollution gets mixed with desert dust aerosols during the pre-monsoon and summer seasons over Delhi (Singh et al. 2005; Prasad et al. 2007; Soni et al. 2015; Parmar et al. 2016), whereas the winters are extremely polluted, with high concentrations of black carbon aerosols from vehicular and other anthropogenic pollution sources leading to foggy and hazy conditions over Delhi (Ganguly et al. 2006; Singh et al. 2010).
Delhi is the second most populous urban agglomeration in India and the third largest urban area in the world. NASA's report underlines the importance of investigating pollutant levels at different sites (residential and industrial) in Delhi. Air pollutants directly affect the health of residents; this research is important because 19 million people have to breathe this air, and air quality is directly related to health.
Data and methodology
Delhi is located in northern India, 715 ft above sea level (Fig. 1). The region has a semi-arid (steppe) climate, with extremely hot summers, heavy rainfall in the monsoon months, and cold winters. There are dust storms in summer and foggy mornings in winter. Temperatures gradually rise to 46 °C in summer and fall to 4 °C in winter. In the winter months, temperature inversion and low wind speed are the main causes of the accumulation of airborne pollutants in Delhi. Industries, vehicular activities, power plants, and frequent dust storms are the major contributors to the high concentration of pollutants in Delhi. Central Pollution Control Board (CPCB) SO2 data over three sites, of which two are residential (Janakpuri and Nizamuddin) and one industrial (Shahzadabad), are utilized in the present study. The long-term ambient air quality data used in the present study cover the period 1993–2012 and were obtained from the CPCB. The monthly statistical parameters of the data set for the Janakpuri, Nizamuddin, and Shahzadabad stations are given in Table 1.
Least square support vector regression
Vladimir Vapnik and his co-workers developed support vector machine models at AT&T Bell Laboratories, which are applied to capture the nonlinear relationship between input and output variables with least error (Cortes and Vapnik 1995; Suykens 2001; Smola 2004). LSSVR is derived from support vector regression (SVR), an effective technique for solving real-life problems that combines regression, function estimation, and classification. SVR is built on the principle of structural risk minimization (SRM), which minimizes error in forecasting problems. It is particularly suitable for signal processing, pattern recognition, and nonlinear regression estimation.
The LSSVR model was first proposed by Suykens and Vandewalle (1999) and applied to chaotic time series forecasting. The main difference between LSSVR and SVR lies in the formulation: during the training phase, LSSVR solves a set of linear equations, while SVR requires quadratic optimization. Other conventional models such as back propagation neural networks (BPNN), partial least square regression (PLS), and multivariate linear regression (MLR) are computationally more expensive than LSSVR, so this model is easier to apply than the others.
Consider a given training set \( \{p_k, q_k\}_{k=1}^{N} \) with input data \( p_k \in \mathbb{R}^n \) and output data \( q_k \in \mathbb{R} \); for classification, the labels are \( q_k \in \{-1, +1\} \) and the linear classifier is

\[ q(p) = \operatorname{sign}\left[ w^{T} p + b \right] \]

When the data of the two classes are separable, one can write

\[ w^{T} p_k + b \ge +1 \quad \text{if } q_k = +1 \]
\[ w^{T} p_k + b \le -1 \quad \text{if } q_k = -1 \]
These two sets of inequalities can be combined into a single set as follows:

\[ q_k \left[ w^{T} p_k + b \right] \ge 1, \quad k = 1, \ldots, N \]
SVR is formulated using convex optimization theory. The problem is first posed as a constrained optimization problem; the Lagrangian is then formed, the conditions for optimality are taken, and the problem is finally solved in the dual space of Lagrange multipliers, with the resulting classifier

\[ q(p) = \operatorname{sign}\left[ \sum_{k=1}^{N} \alpha_k q_k \, p_k^{T} p + b \right] \]
Cortes and Vapnik (1995) extended this linear SVR classifier to the non-separable case by introducing an additional slack variable into the problem formulation. With the slack variables, the set of inequalities becomes

\[ q_k \left[ w^{T} p_k + b \right] \ge 1 - \xi_k, \quad \xi_k \ge 0, \quad k = 1, \ldots, N \]
In classic SVR, inequality-type constraints are considered, whereas LSSVR uses equality-type constraints. This simplifies the problem, as the LSSVR solution is obtained directly by solving a set of linear equations instead of a convex quadratic program. The LSSVR classifier in the primal space is formulated as

\[ \min_{w, b, e} \; \frac{1}{2} w^{T} w + \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 \quad \text{subject to} \quad q_k \left[ w^{T} \phi(p_k) + b \right] = 1 - e_k, \quad k = 1, \ldots, N \]
where b is a real constant. For nonlinear classification, the LSSVR classifier in the dual space takes the form

\[ q(p) = \operatorname{sign}\left[ \sum_{k=1}^{N} \alpha_k q_k K(p_k, p) + b \right] \qquad (7) \]
In Eq. (7), the α_k are positive real constants and b is a real constant; in general, K(p_k, p) = 〈ϕ(p_k), ϕ(p)〉, where 〈•, •〉 is the inner product and ϕ(p) is the nonlinear map from the original space to a high-dimensional space. For function estimation, the LSSVR model takes the form

\[ f(p) = \sum_{k=1}^{N} \alpha_k K(p_k, p) + b \]
When the radial basis function (RBF) kernel is used, two tuning parameters (γ, α) are introduced, where γ is the regularization constant and α is the width of the RBF kernel:

\[ K(p_k, p) = \exp\left( -\frac{\| p - p_k \|^2}{\alpha^2} \right) \]
In the current research work, LSSVR is used for modeling air pollutant levels in Delhi. The LSSVR model minimizes the output prediction error; compared with conventional models, it is robust to noise and reduces the computational effort. Because of these benefits, conventional models can be replaced with LSSVR, which is useful for forecast modeling in many areas of research.
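The linear-system character of LSSVR can be made concrete with a short NumPy sketch that fits an RBF-kernel LSSVR regressor by solving the dual system directly. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the toy sine data, and the parameter values are our own.

```python
import numpy as np

def rbf_kernel(A, B, width):
    # Gram matrix of the RBF kernel K(p, p') = exp(-||p - p'||^2 / width^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width ** 2)

def lssvr_fit(P, q, gamma=100.0, width=5.0):
    # LSSVR dual: one linear system instead of a quadratic program.
    # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; q]
    N = len(q)
    K = rbf_kernel(P, P, width)
    M = np.zeros((N + 1, N + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate([[0.0], q])
    sol = np.linalg.solve(M, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvr_predict(P_train, alpha, b, width, P_new):
    # f(p) = sum_k alpha_k K(p_k, p) + b
    return rbf_kernel(P_new, P_train, width) @ alpha + b

# Toy example: fit a sine curve (stand-in for a monthly SO2 series)
P = np.linspace(0.0, 6.0, 30).reshape(-1, 1)
q = np.sin(P).ravel()
alpha, b = lssvr_fit(P, q, gamma=100.0, width=1.0)
pred = lssvr_predict(P, alpha, b, 1.0, P)
```

With a large regularization constant the model nearly interpolates the training data, illustrating why only the two parameters (γ, α) need tuning.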
Multivariate adaptive regression splines
The multivariate adaptive regression splines model is a non-parametric regression model applied to predict continuous numeric outcomes. MARS was developed by Friedman (1991) as a flexible procedure for organizing relationships that are nearly additive or involve interactions among variables. The main attraction of this model is that MARS makes no assumptions about the underlying functional relationship between dependent and independent variables, allowing it to estimate general functions of high-dimensional arguments given sparse data (Friedman 1991), and it can capture complex nonlinear relations between predictor and response variables. Unlike other conventional models, it works with both forward and backward stepwise procedures: the forward stepwise procedure selects the appropriate input variables, while the backward stepwise procedure removes unnecessary variables from the previously selected set, an elimination that improves prediction accuracy (Andres et al. 2010).
At each inflection point (knot) along the range of an input, MARS defines a pair of basis functions; the new variable Y is mapped from the variable X as below:

\[ Y = \max(0, X - c), \qquad Y = \max(0, c - X) \]
where c represents the threshold (knot) value. Two adjacent splines intersect at a knot to maintain the continuity of the basis functions (Sephton 2001; Bera et al. 2006). The MARS model has many applications in research, such as prediction modeling, financial management, and time series analysis. Here, the MARS model is applied to estimate the level of air pollution at different sites in Delhi, India.
M5 model tree
Quinlan (1992) developed a model for continuous-class learning known as the M5 model tree. The model is built on a binary decision tree: to relate the independent and dependent variables, linear regression functions are fitted at the terminal (leaf) nodes (Mitchell 1997).
Conventional decision tree models are commonly limited to categorical data; the main advantage of the M5 model tree, which sets it apart from other tree models, is its ability to handle quantitative (continuous) data. Building the tree has two steps. In the first step, the data are divided into subsets to generate a decision tree (Solomatine and Xue 2004). In the second step, the standard deviation of the class values reaching a node is used as the splitting criterion: the expected reduction in error from testing each attribute at the node is measured, and the standard deviation reduction (SDR) is computed (Pal and Deswal 2009) as

\[ SDR = sd(T) - \sum_i \frac{|T_i|}{|T|} \, sd(T_i) \]

where sd denotes the standard deviation, T represents the set of examples that reaches the node, and T_i is the subset corresponding to the i-th outcome of the potential split. After splitting, the standard deviation of the data in the child nodes is lower than that of the parent node. At this stage, a large tree generalizes poorly and tends to overfit. Quinlan (1992) suggests a remedy: the overgrown tree is pruned, and the pruned subtrees are replaced with linear regression functions. This process considerably improves the accuracy and reliability of the model tree. The M5 model tree is applied in this research work for decision making about the air quality level in Delhi, India.
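The SDR splitting criterion described above can be computed in a few lines. This is an illustrative sketch with our own function names and toy data; a split that separates the two clusters yields a large reduction.

```python
import math

def sd(values):
    # Population standard deviation of a list of class values.
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

def sdr(parent, subsets):
    # Standard deviation reduction: sd(T) - sum(|T_i|/|T| * sd(T_i))
    n = len(parent)
    return sd(parent) - sum(len(s) / n * sd(s) for s in subsets)

# Splitting [1, 2, 8, 9] into its two tight clusters removes most spread.
reduction = sdr([1, 2, 8, 9], [[1, 2], [8, 9]])
```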
Application and results
Monthly SO2 at three different sites, Janakpuri, Nizamuddin, and Shahzadabad, located in Delhi, India, was modeled using three different heuristic methods: LSSVR, MARS, and M5Tree. Three previous lags were used as inputs to the models to forecast the 1-month-ahead SO2 concentration. The cross validation method was applied to each model by dividing the data into four subsets. Table 2 reports the training and test data sets of each model; in this table, M1 indicates model 1, and so on. The evaluation criteria used in the applications are the root mean square error (RMSE), mean absolute error (MAE), and correlation coefficient (R). The RMSE and MAE statistics can be given as

\[ RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( SO_{2\,i,o} - SO_{2\,i,e} \right)^2 }, \qquad MAE = \frac{1}{N} \sum_{i=1}^{N} \left| SO_{2\,i,o} - SO_{2\,i,e} \right| \]
where N is the number of data points, SO2_{i,o} is the observed SO2 value, and SO2_{i,e} is the model's estimate.
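The two error statistics are straightforward to compute; the following sketch (with our own function names and toy values) mirrors the definitions above.

```python
import math

def rmse(obs, est):
    # Root mean square error between observed and estimated series.
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def mae(obs, est):
    # Mean absolute error between observed and estimated series.
    return sum(abs(o - e) for o, e in zip(obs, est)) / len(obs)

# One large miss inflates RMSE more than MAE, since errors are squared.
observed = [1.0, 2.0, 3.0]
estimated = [1.0, 2.0, 5.0]
```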
For each LSSVR model and each data set, various parameters were tried, and those giving the minimum RMSE in the test period were selected. The parameters of the optimal LSSVR models for each combination at the Janakpuri, Nizamuddin, and Shahzadabad stations are provided in Table 2. In this table, (100, 5) indicates the regularization constant and the RBF kernel width of the LSSVR model, respectively. Test results of the optimal LSSVR models for each station and each data set are given in Table 3. The table clearly shows that the LSSVR models give different accuracies for different data sets. For Janakpuri, the average accuracies reveal that the best results were generally obtained for the third input combination. It is clear from Table 3 that the LSSVR model provides the worst results in forecasting SO2 at the three stations for the M4 data set (1987–1995). The basic reason for this might be that the data range of this test data set is very different from those of M1, M2, and M3 (see Table 1). The maximum values (x max = 24.6, 36.6, and 42.7) of the M4 test data set are higher than those of the other test data sets for the Janakpuri, Nizamuddin, and Shahzadabad stations, respectively; training with the M1, M2, and M3 data sets therefore causes some extrapolation difficulties for the applied LSSVR models. The standard deviation of the M4 data set is also higher than those of the others. For Janakpuri, the LSSVR model provides the best accuracy for the M1 data set with the second and third input combinations, while the M3 and M2 data sets with the third input combination give the best results for the Nizamuddin and Shahzadabad stations, respectively. Figure 2 illustrates the observed and predicted SO2 values using the LSSVR model for each data set. The models' accuracies seem to be better in forecasting the SO2 of Janakpuri station than those of the other stations, which is also confirmed by the comparison statistics reported in Table 3.
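The three-antecedent-lag input construction used throughout these experiments can be sketched as follows. The helper below is an illustrative assumption of ours, not code from the paper: each target month is paired with the three preceding monthly values.

```python
def make_lagged(series, lags=3):
    # Build (inputs, target) pairs: the `lags` previous monthly values
    # serve as inputs to forecast the next month's concentration.
    X, y = [], []
    for t in range(lags, len(series)):
        X.append(series[t - lags:t])
        y.append(series[t])
    return X, y

# A five-month toy series yields two training patterns.
X, y = make_lagged([1, 2, 3, 4, 5], lags=3)
```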
This may be due to the difference in the SO2 data range of each station: Janakpuri has a lower data range (x min = 3 and x max = 24.6) than the Nizamuddin (x min = 2.1 and x max = 36.6) and Shahzadabad (x min = 3.2 and x max = 42.7) stations.
Table 4 compares the accuracy of the different MARS models for the different test data sets. Unlike the previous application, the models generally yield better results for input combination (ii) at Janakpuri. The best MARS model was obtained from M1 with the second inputs at Janakpuri, while M1 with the third inputs provided the best results at the Nizamuddin and Shahzadabad stations. The observed and forecasted SO2 values of the MARS models are shown in Fig. 3 in the form of scatter plots; similar to the LSSVR, less scattered forecasts were obtained for Janakpuri relative to the other two stations. The SO2 modeling accuracy of the optimal M5-Tree models is provided in Table 5. In contrast to the LSSVR and MARS models, the M5-Tree model gives the best accuracy for M2 with the third inputs at Janakpuri, while M3 and M2 with the first inputs provide the best results at the Nizamuddin and Shahzadabad stations, respectively. The scatter plots given in Fig. 4 clearly show that the M5-Tree model forecasts the SO2 of Janakpuri better than those of the other stations. Comparison with Figs. 2 and 3 indicates that the M5-Tree model gives more scattered forecasts than the LSSVR and MARS models; the reason may be that the linear structure of the M5-Tree model prevents it from accurately predicting the highly nonlinear SO2 series. Comparison of the average statistics provided in Tables 3, 4, and 5 shows that the LSSVR models are generally more successful than the MARS and M5-Tree models in forecasting SO2.
Sahin et al. (2005) modeled SO2 distribution in Istanbul using artificial neural networks (ANNs) and non-linear regression (NLR), and they found that the optimal ANNs and NLR provided RMSE = 23.13 μg/m3 and 22.35 μg/m3, MAE = 14.97 μg/m3 and 18.41 μg/m3, and R = 0.528 and 0.638, respectively. Akkoyunlu et al. (2010) used the ANN-based approach for the prediction of urban SO2 concentrations and found correlation coefficients of about 0.770, 0.744, and 0.751 for the winter, summer, and overall data, respectively. Sahin et al. (2011) used cellular neural network (CNN) and the statistical persistence method (PER) to model SO2 concentrations of Istanbul, and they found RMSE = 14.2 and 13.9, MAE = 9.9 and 7.8, and R = 0.85 and 0.83 for the CNN and PER models, respectively. It is clear from Tables 3, 4, and 5 that the applied LSSVR, MARS, and M5-Tree models in this study generally provide satisfactory results in modeling SO2 concentrations.
Conclusions
The ability of three different soft computing methods, LSSVR, MARS, and M5-Tree, in forecasting SO2 concentration was evaluated. Data from three stations, Janakpuri, Nizamuddin, and Shahzadabad, located in Delhi, India, were used in the applications. The cross validation method was employed to demonstrate the generality of the applied models. LSSVR outperformed the other models in forecasting monthly SO2 concentration, and MARS was found to be the second best method. Because of its linear nature, M5-Tree provided worse results than the nonlinear LSSVR and MARS models. All the models provided better accuracy in forecasting the SO2 of Janakpuri station than those of the other stations because of its lower data range. Comparison with previous studies showed that the soft computing models applied in this study generally provide satisfactory results in modeling SO2 concentrations.
References
Akkoyunlu A, Yetilmezsoy K, Erturk F, Oztemel E (2010) A neural network-based approach for the prediction of urban SO2 concentrations in the Istanbul metropolitan area. Int J Environ Pollut 40(4):301–321
Andres JD, Lorca P, Juez FJ, Sánchez-Lasheras F (2010) Bankruptcy forecasting: a hybrid approach using Fuzzy c-means clustering and Multivariate Adaptive Regression Splines (MARS). Expert Syst Appl 38:1866–1875
Aneja VP, Agarwal A, Roelle PA, Phillips SB, Tong Q, Watkins N, Yablonsky R (2001) Measurements and analysis of criteria pollutants in New Delhi, India. Environ Int 27(1):35–42
Antanasijević DZ, Pocajt VV, Povrenović DS, Ristić MD, Perić-Grujić AA (2013) PM10 emission forecasting using artificial neural networks and genetic algorithm input variable optimization. Sci Total Environ 443:511–519
Bera P, Prasher SO, Patel RM, Madani A, Lacroix R, Gaynor JD, Tan SC, Kim SH (2006) Application of MARS in simulating pesticide concentrations in soil. Trans Am Soc Agric Eng 49:297–307
Corchado E, Herrero E (2011) Neural visualization of network traffic data for intrusion detection. Appl Soft Comput 11(2):2042–2056
Corchado E, Arroyo A, Tricio V (2011) Soft computing models to identify typical meteorological days. Logic J IGPL 19(2):373–383
Cortes C, Vapnik V (1995) Support vector networks. Mach Learn 20:273–297
Etemad-Shahidi A, Mahjoobi J (2009) Comparison between M5′ model tree and neural networks for prediction of significant wave height in Lake Superior. Ocean Eng 36:1175–1181. doi:10.1016/j.oceaneng.2009.08.008
EPA (2015a) Environmental Protection Agency Clean Air Interstate Rule. http://earthobservatory.nasa.gov/IOTD/view.php?id=87182. Accessed 10 Dec 2015
EPA (2015b) Environmental Protection Agency Acid Rain Program. http://earthobservatory.nasa.gov/IOTD/view.php?id=82626. Accessed 10 Dec 2015
Friedman JH (1991) Multivariate adaptive regression splines. Ann Stat 19:1–67
Ganguly D, Jayaraman A, Rajesh TA, Gadhavi H (2006) Wintertime aerosol properties during foggy and nonfoggy days over Urban Center Delhi and their implications for shortwave radiative forcing. J Geophys Res 111:D15217. doi:10.1029/2005JD007029
Gennaro G, Trizio L, Gilio A, Pey J, Pérez N, Cusack M, Alastuey A, Querol X (2013) Neural network model for the prediction of PM10 daily concentrations in two sites in the Western Mediterranean. Sci Total Environ 463-464:875–883
Goyal P (2003) Present scenario of air quality in Delhi: a case study of CNG implementation. Atmos Environ 37(38):5423–5431
Goyal MK, Bharti B, Quilty J, Adamowski J, Pandey A (2014) Modeling of daily pan evaporation in sub-tropical climates using ANN, LS-SVR, Fuzzy Logic, and ANFIS. Expert Syst Appl 41(11):5267–5276
Gurjar BR, Van Aardenne JA, Lelieveld J, Mohan M (2004) Emission estimates and trends (1990–2000) for megacity Delhi and implications. Atmos Environ 38(33):5663–5681
Gurjar BR, Jain A, Sharma A, Agarwal A, Gupta P, Nagpure AS, Lelieveld J (2010) Human health risks in megacities due to air pollution. Atmos Environ 44(36):4606–4613
Guven A, Kisi O (2011) Daily pan evaporation modeling using linear genetic programming technique. Irrig Sci 29(2):135–145
Kim S, Shiri J, Kisi O (2012) Pan evaporation modeling using neural computing approach for different climatic zones. Water Resour Manag 26(11):3231–3249
Kim S, Shiri J, Singh VP, Kisi Ö, Landeras G (2015) Predicting daily pan evaporation by soft computing models with limited climatic data. Hydrol Sci J 60(6):1120–1136
Kisi O (2009a) Daily pan evaporation modelling using multi-layer perceptrons and radial basis neural networks. Hydrol Process 23(2):213–223
Kisi O (2009b) Fuzzy genetic approach for modeling reference evapotranspiration. J Irrig Drain Eng 136(3):175–183
Kisi O (2015) Pan evaporation modeling using least square support vector machine, multivariate adaptive regression splines and M5 model tree. J Hydrol 528:312–320
Kisi O, Cengiz TM (2013) Fuzzy genetic approach for estimating reference evapotranspiration of Turkey: Mediterranean region. Water Resour Manag 27:3541–3553
Kisi O, Parmar KS (2016) Application of least square support vector machine and multivariate adaptive regression spline models in long term prediction of river water pollution. J Hydrol
Kisi O, Tombul M (2013) Modeling monthly pan evaporations using fuzzy genetic approach. J Hydrol 477:203–212
Kisi O, Genc O, Dinc S, Zounemat-Kermani M (2016) Daily pan evaporation modeling using chi-squared automatic interaction detector, neural networks, classification and regression tree. Comput Elect Agr 122:112–117
Krotkov NA, McLinden CA, Li C, Lamsal LN, Celarier EA, Marchenko SV, Swartz WH, Bucsela EJ, Joiner J, Duncan BN, Boersma KF, Veefkind JP, Levelt PF, Fioletov VE, Dickerson RR, He H, Lu Z, Streets DG (2016) Aura OMI observations of regional SO2 and NO2 pollution changes from 2005 to 2015. Atmos Chem Phys 16:4605–4629. doi:10.5194/acp-16-4605-2016
Mitchell TM (1997) Machine learning. The McGraw-Hill Companies, Inc., New York
Mohan M, Kandya A (2007) An analysis of the annual and seasonal trends of air quality index of Delhi. Environ Monit Assess 131(1–3):267–277
Pal M, Deswal S (2009) M5 model tree based modelling of reference evapotranspiration. Hydrol Process 23:1437–1443
Parmar KS, Bhardwaj R (2014) River water prediction modeling using neural networks, fuzzy and wavelet coupled model. Water Resour Manage 29(1):17–33
Parmar KS, Soni K, Kumar N, Kapoor S (2016) Statistical variability comparison in MODIS and AERONET derived aerosol optical depth over Indo-Gangetic Plains using time series modeling. Sci Total Environ 553
Prasad AK, Singh S, Chauhan SS, Srivastava MK, Singh RP, Singh R (2007) Aerosol radiative forcing over the Indo-Gangetic Plains during major dust storms. Atmos Environ 41:6289–6301
Quinlan JR (1992) Learning with continuous classes. In proceedings of the Fifth Australian Joint Conference on Artificial Intelligence, Hobart, Australia, 16–18 November. World Scientific, Singapore, pp 343–348
Rizwan SA, Nongkynrith B, Gupta SK (2013) Air pollution in Delhi: its magnitude and effects on health. Indian J Community Med 38:4–8 http://www.ijcm.org.in/text.asp?2013/38/1/4/106617
Sahin U, Ucan ON, Bayat C, Oztorun N (2005) Modeling of SO2 distribution in Istanbul using artificial neural networks. Environ Model Assess 10:135–142
Sahin UA, Ucan ON, Bayat C, Tolluoglu O (2011) A new approach to prediction of SO2 and PM10 concentrations in Istanbul, Turkey: cellular neural network (CNN). Environ Forensic 12(3):253–269
Seinfeld JH, Pandis SN (2006) Atmospheric chemistry and physics: from air pollution to climate change, vol 2006, 2nd edn. John Wiley & Sons, Hoboken
Sephton P (2001) Forecasting recessions: can we do better on MARS? Federal Reserve Bank of St. Louis Rev 83:39–49
Shafaei M, Kisi O (2016) Lake level forecasting using wavelet-SVR, wavelet-ANFIS and wavelet-ARMA conjunction models. Water Resour Manag 30(1):79–97. doi:10.1007/s11269-015-1147-z
Singh S, Nath S, Kohli R, Singh R (2005) Aerosols over Delhi during pre-monsoon months: characteristics and effects on surface radiation forcing. Geophys Res Lett 32:L13808
Singh S, Soni K, Bano T, Tanwar RS, Nath S, Arya BC (2010) Clear-sky direct aerosol radiative forcing variations over mega-city Delhi. Ann Geophys 28:1157–1166
Smola AJ, Schölkopf B (2004) A tutorial on support vector regression. Stat Comput 14:199–222
Solomatine DP, Xue Y (2004) M5 model trees compared to neural networks: application to flood forecasting in the upper reach of the Huai River in China. J Hydrol Eng 9:491–501
Soni K, Kapoor S, Parmar KS, Kaskaoutis DG (2014) Statistical analysis of aerosols over the Gangetic–Himalayan region using ARIMA model based on long-term MODIS observations. Atmos Res 149:174–119. doi:10.1016/j.atmosres.2014.05.025
Soni K, Parmar KS, Kapoor S (2015) Time series model prediction and trend variability of aerosol optical depth over coal mines in India. Environ Sci Pollut Res 22:3652–3671
Suykens JAK (2001) Support vector machines: a nonlinear modeling and control perspective. Eur J Control 7:311–327
Suykens JAK, Vandewalle J (1999) Least square support vector machine classifiers. Neural Process Lett 9:293–300
Vaidya V, Park JH, Arabnia HR, Pedrycz W, Peng S (2012) Bio-inspired computing for hybrid information technology. Soft Comput 16(3):367–368
Voukantsis D, Karatzas K, Kukkonen J, Räsänen T, Karppinen A, Kolehmainen M (2011) Intercomparison of air quality data using principal component analysis, and forecasting of PM10 and PM2.5 concentrations using artificial neural networks, in Thessaloniki and Helsinki. Sci Total Environ 409:1266–1276
Wanga P, Liu Y, Qin Z, Zhang G (2015) A novel hybrid forecasting model for PM10 and SO2 daily concentrations. Sci Total Environ 505:1202–1212
Acknowledgements
The authors are thankful to the Central Pollution Control Board (CPCB), Government of India, for providing the research data and Dr. B. R. Ambedkar National Institute of Technology, Jalandhar (Government of India) and IKG Punjab Technical University (Government of Punjab) for providing research facilities. The second author is also thankful to Prof. Rashmi Bhardwaj, Guru Gobind Singh Indraprastha University, for her motivation and astute guidance. The third author (KS) is grateful to the Director, CSIR-NPL.
Kisi, O., Parmar, K.S., Soni, K. et al. Modeling of air pollutants using least square support vector regression, multivariate adaptive regression spline, and M5 model tree models. Air Qual Atmos Health 10, 873–883 (2017). https://doi.org/10.1007/s11869-017-0477-9