Assessment of Fine Particulate Matter for Port City of Eastern Peninsular India Using Gradient Boosting Machine Learning Model
Abstract
1. Introduction
2. Literature Review
- (1)
- Analysis of the concentration of PM2.5 and air pollutants in the air of the eastern peninsular port city of India, on yearly and seasonal bases;
- (2)
- Evaluation of the correlation between PM2.5 concentration, air pollutants and meteorological parameters for the port city of Visakhapatnam;
- (3)
- Identification of the CatBoost model as the most efficient of the evaluated prediction models for assessing and predicting the concentration of PM2.5 in the air;
- (4)
- Analysis of high PM2.5 concentration prediction results for the period under observation.
3. Materials and Methods
3.1. Study Area
3.2. Data Description
3.3. Machine Learning Model Description
- (a)
- Gradient boosting decision tree regression models: LGBM, XGB and CB;
- (b)
- Ensemble regression models: VR and RF (tree-based bagging ensemble technique);
- (c)
- Penalized regression models: LR–LA (lasso model fitted with the least-angle regression algorithm), LAR (lasso regularization, which can shrink coefficients exactly to zero), MTE (multi-task elastic net, regularizing across tasks with combined lasso and ridge norms) and RR (ridge regularization, which shrinks weights toward the origin);
- (d)
- Linear regression models: LR (ordinary linear regression, i.e., a linear relation between the inputs and a single target) and BRR (Bayesian ridge regression, i.e., linear regression with a probability distribution over the weights);
- (e)
- Miscellaneous regression models: MLP (supervised neural network model), PLS (covariance-based statistical approach), QR (evaluates the median or other quantiles of target variable conditional on feature variables) and KNN (k-nearest neighbors algorithm).
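The gradient-boosted models in group (a) all share one core idea: fit a sequence of small trees, each to the residuals of the ensemble built so far, and add them with a shrinkage factor. A minimal, purely illustrative sketch of this idea with depth-1 stumps on a single feature (a toy implementation of the generic principle, not the actual algorithm of LGBM, XGB or CB):

```python
# Toy gradient boosting with depth-1 regression stumps under squared loss.
# Illustrative only; library implementations add histograms, regularization,
# categorical handling, etc.

LEARNING_RATE = 0.3

def fit_stump(xs, residuals):
    """Return (split, left_value, right_value) minimising squared error."""
    best = None
    for split in sorted(set(xs))[:-1]:  # last value would leave the right side empty
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    return best[1], best[2], best[3]

def boost(xs, ys, n_rounds=50):
    """Fit an additive model: constant base plus a sum of shrunken stumps."""
    base = sum(ys) / len(ys)
    pred = [base] * len(ys)
    stumps = []
    for _ in range(n_rounds):
        # Residuals are the negative gradient of squared loss.
        residuals = [y - p for y, p in zip(ys, pred)]
        split, lv, rv = fit_stump(xs, residuals)
        stumps.append((split, lv, rv))
        pred = [p + LEARNING_RATE * (lv if x <= split else rv)
                for x, p in zip(xs, pred)]
    return base, stumps

def predict(base, stumps, x):
    return base + sum(LEARNING_RATE * (lv if x <= s else rv)
                      for s, lv, rv in stumps)
```

On a step-shaped target (e.g. `ys = [1.0]*5 + [10.0]*5` over `xs = range(10)`) the ensemble converges to the two plateau values within a few dozen rounds.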
3.4. CatBoost (Based on Gradient Boosting Algorithm) Model Description
- (a)
- Categorical features: The model can handle categorical features natively. In conventional gradient boosting decision tree algorithms, categorical features are replaced by their mean label value; characterizing features by mean values alone gives rise to conditional shift [35]. CB instead employs an approach known as greedy target statistics and adds prior values to those statistics. This technique reduces overfitting with minimal information loss;
- (b)
- Combining features: CB greedily combines categorical features and their interactions in the current tree when forming each new split. Every split already present in the tree is treated as a category with two disjoint values and can participate in these combinations;
- (c)
- The CB models are fast scorers: they are built on oblivious decision trees, which are balanced and less prone to overfitting.
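The greedy target-statistic encoding described in point (a) replaces each categorical value with a smoothed mean of the target, where a prior keeps low-frequency categories from overfitting. A simplified stdlib-only sketch (CatBoost itself additionally computes these statistics in an "ordered" fashion over random permutations of the data to avoid target leakage; function and parameter names here are ours):

```python
from collections import defaultdict

def target_statistic(categories, targets, prior, a=1.0):
    """Encode each category c as (sum of targets in c + a * prior) / (count(c) + a).

    Simplified greedy target statistic with a prior term; the prior damps
    the estimate for rare categories toward a global expectation.
    """
    sums, counts = defaultdict(float), defaultdict(int)
    for c, y in zip(categories, targets):
        sums[c] += y
        counts[c] += 1
    return {c: (sums[c] + a * prior) / (counts[c] + a) for c in counts}
```

For example, `target_statistic(["port", "port", "city"], [1, 0, 1], prior=0.5)` encodes "port" as (1 + 0.5)/3 = 0.5 and the singleton "city" as (1 + 0.5)/2 = 0.75, pulled toward the prior rather than pinned at its raw mean of 1.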
4. Performance Measurement Indicators
5. Analysis of Results
5.1. Correlation of PM2.5 Concentration with Air Pollutants and Meteorological Parameters
5.2. Seasonal and Annual Behaviour of PM2.5 Concentration with Air Pollutants and Meteorological Parameters
5.2.1. Seasonal Behavior of PM2.5 Concentration with Air Pollutants and Meteorological Parameters
5.2.2. Annual Behavior of PM2.5 Concentration, Air Pollutants and Meteorological Parameters
5.3. Machine Learning-Based PM2.5 Concentration Estimation
5.3.1. Performance of Regression Models
5.3.2. Impact of Input Variables (Air Pollutant Concentration and Meteorological Parameters) on the CB Model
5.4. Evaluation of High PM2.5 Concentration
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- WHO. 2 May 2018. Available online: https://www.who.int/news/item/02-05-2018-9-out-of-10-people-worldwide-breathe-polluted-air-but-more-countries-are-taking-action (accessed on 25 November 2021).
- Central Pollution Control Board, India. Available online: https://www.cpcb.nic.in/ (accessed on 5 October 2021).
- EEA. Healthy Environment, Healthy Lives. 2019. Available online: https://www.eea.europa.eu/ (accessed on 26 November 2021).
- EEA. Air Pollution: How It Affects Our Health. 2021. Available online: https://www.eea.europa.eu/ (accessed on 26 November 2021).
- Masood, A.; Ahmad, K. A model for particulate matter (PM2.5) prediction for Delhi based on machine learning approaches. Procedia Comput. Sci. 2020, 167, 2101–2110. [Google Scholar] [CrossRef]
- Deters, J.K.; Zalakeviciute, R.; Gonzalez, M.; Rybarczyk, Y. Modeling PM2.5 Urban Pollution Using Machine Learning and Selected Meteorological Parameters. J. Electr. Comput. Eng. 2017, 2017, 5106045. [Google Scholar] [CrossRef] [Green Version]
- Moisan, S.; Herrera, R.; Clements, A. A dynamic multiple equation approach for forecasting PM2.5 pollution in Santiago, Chile. Int. J. Forecast. 2018, 34, 566–581. [Google Scholar] [CrossRef] [Green Version]
- Jiang, X.; Luo, Y.; Zhang, B. Prediction of PM2.5 Concentration Based on the LSTM-TSLightGBM Variable Weight Combination Model. Atmosphere 2021, 12, 1211. [Google Scholar] [CrossRef]
- Suleiman, A.; Tight, M.; Quinn, A. Applying machine learning methods in managing urban concentrations of traffic related particulate matter (PM10 and PM2.5). Atmos. Pollut. Res. 2019, 10, 134–144. [Google Scholar] [CrossRef]
- Kumar, S.; Mishra, S.; Singh, S.K. A machine learning-based model to estimate PM2.5 concentration levels in Delhi’s atmosphere. Heliyon 2020, 6, e05618. [Google Scholar] [CrossRef]
- Chandu, K.; Dasari, M. Variation in Concentrations of PM2.5 and PM10 during the Four Seasons at the Port City of Visakhapatnam, Andhra Pradesh, India. Nat. Environ. Pollut. Technol. 2020, 19, 1187–1193. [Google Scholar] [CrossRef]
- Ferov, M.; Modry, M. Enhancing lambdaMART using oblivious trees. arXiv 2016, arXiv:1609.05610. [Google Scholar]
- Rao, K.S.; Devi, G.L.; Ramesh, N. Air Quality Prediction in Visakhapatnam with LSTM based Recurrent Neural Networks. Int. J. Intell. Syst. Appl. 2019, 11, 18–24. [Google Scholar] [CrossRef] [Green Version]
- Prasad, N.K.; Sarma, M.; Sasikala, P.; Raju, N.M.; Madhavi, N. Regression Model to Analyse Air Pollutants Over a Coastal Industrial Station Visakhapatnam (India). Int. J. Data Sci. 2020, 1, 107–113. [Google Scholar] [CrossRef]
- Devi Golagani, L.; Rao Kurapati, S. Modelling and Predicting Air Quality in Visakhapatnam using Amplified Recurrent Neural Networks. In Proceedings of the International Conference on Time Series and Forecasting, Universidad de Granada, Granada, Spain, 25–27 September 2019; Volume 1, pp. 472–482. [Google Scholar]
- Andhra Pradesh Pollution Control Board. Available online: https://pcb.ap.gov.in (accessed on 5 October 2021).
- Census-India (2012) Census of India. The Government of India, New Delhi. 2011. Available online: https://censusindia.gov.in/census.website/ (accessed on 28 November 2021).
- Indian Meteorological Department. Government of India. Available online: https://mausam.imd.gov.in/ (accessed on 23 October 2021).
- Dietterich, T.G. Ensemble Methods in Machine Learning. In Multiple Classifier Systems; Springer: Berlin/Heidelberg, Germany, 2000; pp. 1–15. [Google Scholar]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. Catboost: Unbiased boosting with categorical features. In Advances in Neural Information Processing Systems; Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2018; Volume 31, pp. 6638–6648. [Google Scholar]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
- Chen, T.; Guestrin, C. Xgboost: A Scalable Tree Boosting System. In Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar]
- Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least Angle Regression. Ann. Stat. 2004, 32, 407–499. [Google Scholar] [CrossRef] [Green Version]
- Breiman, L. Random Forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- Haykin, S. Neural Networks and Learning Machines; Pearson: Upper Saddle River, NJ, USA, 2009; Volume 3. [Google Scholar]
- Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar]
- Abdi, H. Partial least square regression (PLS regression). In Encyclopedia of Measurement and Statistics; Salkind, N.J., Ed.; Sage: Thousand Oaks, CA, USA, 2007. [Google Scholar]
- Koenker, R.; Hallock, K.F. Quantile regression. J. Econ. Perspect. 2001, 15, 143–156. [Google Scholar] [CrossRef]
- Liu, J.; Ji, S.; Ye, J. Multi-task feature learning via efficient l 2, 1-norm minimization. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence; AUAI Press: Arlington, VA, USA, 2009; pp. 339–348. [Google Scholar]
- Kidwell, J.S.; Brown, L.H. Ridge Regression as a Technique for Analyzing Models with Multicollinearity. J. Marriage Fam. 1982, 44, 287–299. [Google Scholar] [CrossRef]
- Bayesian Ridge Regression. Available online: https://scikit-learn.org/stable/modules/linear_model.html#bayesian-ridge-regression (accessed on 13 December 2021).
- Fehrmann, L.; Lehtonen, A.; Kleinn, C.; Tomppo, E. Comparison of linear and mixed-effect regression models and k-nearest neighbour approach for estimation of single-tree biomass. Can. J. For. Res. 2008, 38, 1–9. [Google Scholar] [CrossRef]
- Seber, G.A.; Lee, A.J. Linear Regression Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2012; Volume 329. [Google Scholar]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A. CatBoost: Unbiased boosting with categorical features. In Proceedings of the Advances in Neural Information Processing Systems, Montréal, QC, Canada, 3–8 December 2018; pp. 6638–6648. [Google Scholar]
- Zhang, K.; Schölkopf, B.; Muandet, K.; Wang, Z. Domain adaptation under target and conditional shift. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 819–827. [Google Scholar]
- The Gazette of India, Part III–Section 4, NAAQS CPCB Notification. 2009. Available online: https://cpcb.nic.in/ (accessed on 17 December 2021).
- Tai, A.P.K.; Mickley, L.J.; Jacob, D.J. Correlations between fine particulate matter (PM2.5) and meteorological variables in the United States: Implications for the sensitivity of PM2.5 to climate change. Atmos. Environ. 2010, 44, 3976–3984. [Google Scholar] [CrossRef]
- Khillare, P.S.; Sarkar, S. Airborne inhalable metals in residential areas of Delhi, India: Distribution, source apportionment and health risks. Atmos. Pollut. Res. 2012, 3, 46–54. [Google Scholar] [CrossRef] [Green Version]
- Xu, R.; Tang, G.; Wang, Y.; Tie, X. Analysis of a long-term measurement of air pollutants (2007–2011) in North China Plain (NCP); Impact of emission reduction during the Beijing Olympic Games. Chemosphere 2016, 159, 647–658. [Google Scholar] [CrossRef]
- Jian, L.; Zhao, Y.; Zhu, Y.-P.; Zhang, M.-B.; Bertolatti, D. An application of ARIMA model to predict submicron particle concentrations from meteorological factors at a busy roadside in Hangzhou, China. Sci. Total Environ. 2012, 426, 336–345. [Google Scholar] [CrossRef]
- Wang, J.; Ogawa, S. Effects of Meteorological Conditions on PM2.5 Concentrations in Nagasaki, Japan. Int. J. Environ. Res. Public Health 2015, 8, 9089–9101. [Google Scholar] [CrossRef] [PubMed]
- Lundberg, S.M.; Lee, S.I. A Unified Approach to Interpreting Model Predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774. Available online: https://arxiv.org/abs/1705.07874 (accessed on 20 December 2021).
Variable | Unit | Mean | Std. Deviation | Min. Value | Max. Value |
---|---|---|---|---|---|
PM2.5 | µg/m3 | 48.63 | 30.05 | 3.06 | 202.52 |
NO | µg/m3 | 13.4 | 11.47 | 0.45 | 99.32 |
NO2 | µg/m3 | 38.37 | 16.26 | 0.56 | 131.53 |
NOx | ppb | 31.82 | 15.97 | 0.31 | 129.9 |
NH3 | µg/m3 | 10.96 | 6.31 | 0.16 | 63.62 |
SO2 | µg/m3 | 12.05 | 7.42 | 1.32 | 60.83 |
CO | mg/m3 | 0.78 | 0.27 | 0.15 | 2.18 |
C6H6 | µg/m3 | 4.46 | 1.75 | 0.01 | 11.69 |
C7H8 | µg/m3 | 10.15 | 5.33 | 0.01 | 54.29 |
Temperature | °C | 29.26 | 1.95 | 23.36 | 42.68 |
RH | % | 71.81 | 5.57 | 43.77 | 86.56 |
WS | m/s | 2.32 | 0.83 | 0.47 | 5.56 |
WD | degree | 213.80 | 50.44 | 79.45 | 351.46 |
SR | W/m2 | 137.39 | 53.48 | 6.00 | 520.41 |
BP | mmHg | 744.19 | 13.82 | 701.00 | 766.58 |
[Table: pros and cons of each candidate regression model — LGBM, XGB, CB, VR, RF, LR-LA, LAR, MTE, RR, LR, BRR, MLP, PLS, QR and KNN; the cell contents did not survive extraction.]
 | Observed/Actual: Yes | Observed/Actual: No |
---|---|---|
Forecast/Predicted: Yes | Hit (a) | False Alarm (b) |
Forecast/Predicted: No | Miss (c) | Correct Rejection (d) |
Parameter | Significance | Mathematical Representation |
---|---|---|
HR | Used to measure accurate forecasts of events. Its value ranges between 0 and 1. A value close to 1 indicates excellent prediction performance. | HR = a/(a + c) |
FAR | Measures the ratio of false alarms and indicates the occurrence of a forecast event when there is no event. Its value ranges between 0 and 1; a value close to 0 indicates better prediction. | FAR = b/(a + b) |
CSI | Takes hits, misses and false alarms into account together. Its value ranges between 0 and 1. A value close to 1 indicates excellent prediction performance. | CSI = a/(a + b + c) |
TSS | Determines the ability of the model to distinguish between "Yes" and "No" cases. Its value ranges between −1 and 1, with 1 indicating a perfect forecast, 0 a standard forecast, and a negative value a below-standard forecast. | TSS = a/(a + c) − b/(b + d) |
R2 Score/Coefficient of determination | Indicates the proportion of variance in the dependent variable that is explained by the model. | R2 = 1 − Σ(yᵢ − ŷᵢ)² / Σ(yᵢ − ȳ)² |
MedAE | Provides the median value of the absolute difference between forecasted and true values. The MedAE is least influenced by outliers. | MedAE = median(\|yᵢ − ŷᵢ\|) |
MAPE | Measures the accuracy of a regression model as the relative percentage error of the predicted value against the true value. | MAPE = (1/n) Σ \|yᵢ − ŷᵢ\|/\|yᵢ\| |
RMSE | Used to evaluate the standard deviation of prediction errors. | RMSE = √((1/n) Σ(yᵢ − ŷᵢ)²) |
MAE | Evaluates the mean of the absolute differences between predicted and true values. | MAE = (1/n) Σ \|yᵢ − ŷᵢ\| |
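All of these indicators follow directly from the contingency counts (a, b, c, d) and the prediction errors, so they can be computed with the standard library alone. A sketch with variable names of our choosing:

```python
import statistics

def skill_scores(a, b, c, d):
    """Categorical forecast scores from a 2x2 contingency table:
    a = hits, b = false alarms, c = misses, d = correct rejections."""
    hr = a / (a + c)                      # hit rate
    far = b / (a + b)                     # false alarm ratio
    csi = a / (a + b + c)                 # critical success index
    tss = a / (a + c) - b / (b + d)       # true skill statistic
    return hr, far, csi, tss

def regression_metrics(y_true, y_pred):
    """Continuous error metrics; MAPE is returned as a fraction (x100 for %)."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n
    rmse = (sum(e * e for e in errors) / n) ** 0.5
    medae = statistics.median(abs(e) for e in errors)
    mape = sum(abs(e / t) for e, t in zip(errors, y_true)) / n
    mean_t = sum(y_true) / n
    r2 = 1 - sum(e * e for e in errors) / sum((t - mean_t) ** 2 for t in y_true)
    return {"MAE": mae, "RMSE": rmse, "MedAE": medae, "MAPE": mape, "R2": r2}
```

For instance, a table with 8 hits, 2 false alarms, 2 misses and 8 correct rejections gives HR = 0.8, FAR = 0.2, CSI ≈ 0.67 and TSS = 0.6.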
Parameter | Pearson’s Correlation Coefficient | Parameter | Pearson’s Correlation Coefficient |
---|---|---|---|
NO | 0.26 | C7H8 | −0.01 |
NO2 | 0.54 | Temperature | 0.05 |
NOx | 0.44 | RH | −0.25 |
NH3 | 0.33 | WS | −0.49 |
SO2 | 0.16 | SR | −0.16 |
CO | 0.60 | BP | 0.42 |
C6H6 | 0.25 |
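Pearson's correlation coefficient used in the table above is the covariance of the two series normalised by the product of their standard deviations. A stdlib-only sketch (function name is ours):

```python
def pearson_r(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Perfectly linear series give r = ±1; values near 0 (as for C7H8 and temperature above) indicate little linear association with PM2.5.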
Parameter | PM2.5 (µg/m3) sum. | PM2.5 win. | PM2.5 mon. | CO (mg/m3) sum. | CO win. | CO mon. | NH3 (µg/m3) sum. | NH3 win. | NH3 mon. | NO2 (µg/m3) sum. | NO2 win. | NO2 mon. |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Mean Value | 33.64 | 69.84 | 34.49 | 0.67 | 0.90 | 0.75 | 10.37 | 12.83 | 8.75 | 35.08 | 42.85 | 35.54 |
Std. Deviation | 14.21 | 34.50 | 12.36 | 0.19 | 0.29 | 0.25 | 6.35 | 6.70 | 4.48 | 11.55 | 20.87 | 10.51 |
Median Value | 31.24 | 67.15 | 33.61 | 0.67 | 0.84 | 0.72 | 9.44 | 12.54 | 8.33 | 33.83 | 38.59 | 37.65 |
Min. Value | 9.09 | 3.06 | 6.19 | 0.27 | 0.41 | 0.15 | 1.83 | 0.16 | 0.70 | 5.39 | 0.56 | 9.23 |
Max. Value | 95.90 | 202.52 | 71.84 | 1.59 | 2.18 | 1.66 | 42.93 | 63.62 | 40.08 | 72.02 | 131.53 | 58.19 |
Parameter | C6H6 (µg/m3) sum. | C6H6 win. | C6H6 mon. | NOx (ppb) sum. | NOx win. | NOx mon. | SO2 (µg/m3) sum. | SO2 win. | SO2 mon. | NO (µg/m3) sum. | NO win. | NO mon. |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Mean Value | 3.92 | 4.49 | 5.16 | 26.62 | 35.17 | 30.90 | 11.77 | 12.90 | 11.04 | 9.92 | 15.43 | 14.83 |
Std. Deviation | 1.65 | 1.77 | 1.58 | 8.00 | 21.51 | 11.21 | 6.84 | 8.14 | 6.86 | 4.67 | 15.04 | 10.15 |
Median Value | 3.87 | 4.38 | 5.06 | 25.71 | 30.17 | 30.55 | 10.69 | 10.32 | 10.66 | 9.37 | 11.18 | 12.69 |
Min. Value | 0.86 | 0.01 | 0.51 | 6.44 | 0.31 | 7.56 | 1.97 | 1.84 | 1.32 | 1.22 | 0.45 | 1.48 |
Max. Value | 10.41 | 11.69 | 9.90 | 52.37 | 129.9 | 92.99 | 40.47 | 60.83 | 46.36 | 27.86 | 99.32 | 93.45 |
Parameter | C7H8 (µg/m3) sum. | C7H8 win. | C7H8 mon. | WS (m/s) sum. | WS win. | WS mon. | WD (degree) sum. | WD win. | WD mon. | RH (%) sum. | RH win. | RH mon. |
---|---|---|---|---|---|---|---|---|---|---|---|---|
Mean Value | 8.38 | 9.33 | 13.91 | 2.79 | 1.85 | 2.45 | 239.96 | 180.80 | 231.95 | 71.89 | 70.00 | 74.68 |
Std. Deviation | 4.40 | 4.66 | 5.71 | 0.73 | 0.66 | 0.82 | 46.43 | 39.28 | 41.00 | 4.55 | 6.18 | 4.48 |
Median Value | 7.78 | 8.53 | 13.03 | 2.82 | 1.78 | 2.41 | 223.95 | 177.03 | 226.91 | 72.85 | 70.81 | 74.51 |
Min. Value | 1.28 | 0.01 | 2.84 | 0.72 | 0.47 | 0.55 | 84.74 | 79.45 | 116.00 | 48.18 | 43.77 | 63.06 |
Max. Value | 26.93 | 34.76 | 54.29 | 5.56 | 4.97 | 5.28 | 351.46 | 313.86 | 315.59 | 82.62 | 86.56 | 85.23 |
Parameter | SR (W/m2) sum. | SR win. | SR mon. | Temperature (°C) sum. | Temp. win. | Temp. mon. | BP (mmHg) sum. | BP win. | BP mon. |
---|---|---|---|---|---|---|---|---|---|
Mean Value | 167.18 | 120.51 | 124.12 | 29.16 | 29.75 | 28.59 | 745.58 | 751.36 | 730.54 |
Std. Deviation | 56.02 | 38.33 | 54.04 | 1.08 | 2.61 | 1.31 | 6.51 | 12.44 | 13.52 |
Median Value | 172.88 | 122.86 | 129.13 | 29.07 | 29.44 | 28.44 | 746.82 | 754.14 | 734.16 |
Min. Value | 6.50 | 6.00 | 6.00 | 24.50 | 23.36 | 24.21 | 730.29 | 701.00 | 701.33 |
Max. Value | 520.41 | 409.73 | 257.24 | 34.18 | 42.68 | 36.93 | 757.18 | 766.58 | 753.38 |
Parameter | PM2.5 (µg/m3) 2018 | PM2.5 2019 | CO (mg/m3) 2018 | CO 2019 | NH3 (µg/m3) 2018 | NH3 2019 | NO2 (µg/m3) 2018 | NO2 2019 |
---|---|---|---|---|---|---|---|---|
Mean | 49.97 | 47.32 | 0.71 | 0.86 | 11.94 | 10.01 | 38.88 | 37.86 |
Std. | 28.53 | 31.45 | 0.30 | 0.22 | 5.55 | 6.86 | 15.53 | 16.94 |
Median | 43.88 | 37.15 | 0.62 | 0.83 | 11.07 | 8.50 | 37.44 | 36.04 |
Min. | 5.04 | 3.06 | 0.15 | 0.38 | 0.16 | 0.70 | 0.56 | 5.39 |
Max. | 202.52 | 201.85 | 2.18 | 1.75 | 63.63 | 42.93 | 93.54 | 131.53 |
Parameter | C6H6 (µg/m3) 2018 | C6H6 2019 | NOx (ppb) 2018 | NOx 2019 | SO2 (µg/m3) 2018 | SO2 2019 | NO (µg/m3) 2018 | NO 2019 |
---|---|---|---|---|---|---|---|---|
Mean | 4.81 | 4.12 | 30.72 | 31.63 | 11.13 | 12.96 | 12.60 | 14.18 |
Std. | 1.41 | 1.97 | 16.04 | 15.92 | 6.68 | 7.99 | 11.42 | 11.48 |
Median | 4.63 | 4.08 | 27.84 | 29.22 | 10.58 | 10.74 | 10.16 | 11.57 |
Min. | 0.01 | 0.51 | 0.31 | 6.44 | 1.32 | 1.97 | 0.45 | 1.95 |
Max. | 11.69 | 10.41 | 129.9 | 105.43 | 40.47 | 60.83 | 99.32 | 93.45 |
Parameter | WS (m/s) 2018 | WS 2019 | WD (degree) 2018 | WD 2019 | RH (%) 2018 | RH 2019 |
---|---|---|---|---|---|---|
Mean | 2.24 | 2.40 | 234.25 | 193.71 | 70.70 | 72.91 |
Std. | 0.83 | 0.82 | 58.95 | 28.63 | 6.01 | 4.87 |
Median | 2.19 | 2.34 | 244.56 | 199.72 | 71.99 | 73.07 |
Min. | 0.47 | 0.55 | 79.45 | 84.74 | 43.77 | 57.72 |
Max. | 4.97 | 5.56 | 351.46 | 263.23 | 85.23 | 86.56 |
Parameter | SR (W/m2) 2018 | SR 2019 | Temperature (°C) 2018 | Temp. 2019 | BP (mmHg) 2018 | BP 2019 |
---|---|---|---|---|---|---|
Mean | 143.01 | 131.87 | 29.12 | 29.39 | 745.87 | 742.54 |
Std. | 57.60 | 48.54 | 1.52 | 2.30 | 9.77 | 16.73 |
Median | 141.97 | 131.34 | 29.14 | 28.95 | 748.18 | 747.07 |
Min. | 6.00 | 6.00 | 23.36 | 24.50 | 701.00 | 701.33 |
Max. | 520.41 | 261.92 | 38.33 | 42.68 | 766.58 | 765.57 |
Sr. No | Regression Model | R2 Score | MedAE (µg/m3) | MAPE | RMSE (µg/m3) | MAE (µg/m3) |
---|---|---|---|---|---|---|
1 | VR (GB + RF + LR) | 0.73 | 7.98 | 0.31 | 13.74 | 10.23 |
2 | CB | 0.81 | 6.95 | 0.29 | 11.42 | 9.07 |
3 | LGBM | 0.76 | 8.05 | 0.29 | 12.94 | 9.85 |
4 | XGB | 0.71 | 7.8 | 0.30 | 14.03 | 10.34 |
5 | LR-LA | 0.57 | 8.75 | 0.43 | 17.26 | 13.10 |
6 | RF | 0.52 | 9.90 | 0.49 | 18.22 | 14.19 |
7 | MLP | 0.69 | 8.96 | 0.37 | 14.65 | 11.12 |
8 | LAR | 0.57 | 8.71 | 0.42 | 17.24 | 13.09 |
9 | PLS | 0.57 | 10.25 | 0.43 | 17.31 | 13.07 |
10 | QR | 0.47 | 10.33 | 0.46 | 19.18 | 14.21 |
11 | MTE | 0.58 | 8.35 | 0.43 | 17.09 | 12.86 |
12 | RR | 0.56 | 8.61 | 0.43 | 17.36 | 13.18 |
13 | BRR | 0.57 | 8.53 | 0.43 | 17.26 | 13.02 |
14 | KNN | 0.50 | 9.75 | 0.43 | 18.58 | 13.37 |
15 | LR | 0.56 | 8.65 | 0.43 | 17.38 | 13.20 |
Variable | HR | FAR | CSI | TSS |
---|---|---|---|---|
PM2.5 | 0.85 | 0.02 | 0.80 | 0.83 |
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sharma, M.; Kumar, N.; Sharma, S.; Jangra, V.; Mehandia, S.; Kumar, S.; Kumar, P. Assessment of Fine Particulate Matter for Port City of Eastern Peninsular India Using Gradient Boosting Machine Learning Model. Atmosphere 2022, 13, 743. https://doi.org/10.3390/atmos13050743