Detecting Burned Vegetation Areas by Merging Spectral and Texture Features in a ResNet Deep Learning Architecture
Highlights
- A ResNet-50-based deep learning framework (ResNet-IST) was developed to integrate spectral and spatial texture features from satellite imagery for the detection and classification of wildfire-affected vegetation.
- Evaluated across ten global sites, the ResNet-IST model outperformed the established VASTI and NBR indices in identifying burned areas.
- The model replaces traditional fixed-formula indices with an adaptive deep learning approach, enabling more accurate and reliable detection of burned vegetation on a global scale.
- By improving detection accuracy over current methods (VASTI and NBR), the model provides crucial data for precise post-fire evaluation, which underpins the coordination of restoration efforts and the management of ecosystem health.
Abstract
1. Introduction
2. Study Area and Data
2.1. Satellite Dataset of Anomalous Vegetation
2.2. Data Processing
3. Methodology
3.1. ResNet-IST Framework
3.2. Extraction of the Input Features
3.2.1. Spectral Indices
3.2.2. Texture Features
3.2.3. Vegetation Anomaly Spectral Texture Index
3.3. ResNet Algorithm
3.4. Comparison of the Normalized Burn Ratio Index
3.5. Evaluation Method
3.6. Deep Learning Model Interpretability Analysis
4. Results
4.1. Parameters Setting of GLCM
4.2. Performance of the ResNet-IST Model
4.3. Visual Evaluation Results of ResNet-IST, VASTI, and NBR
4.4. Validation and Comparison of ResNet-IST, VASTI, and NBR
5. Discussion
5.1. Impact of Different Kinds of Input Features on ResNet-IST
5.2. Capability of the ResNet-IST Framework for Detecting Fire-Affected Vegetation
5.3. The Uncertainties of the ResNet-IST Model
5.4. Merits and Limitations of the ResNet-IST Model
6. Conclusions
- (1) We developed a vegetation anomaly detection model, named ResNet-IST, by merging spectral and texture features in a deep learning (DL) algorithm based on the ResNet-50 architecture. Unlike traditional mathematical models, ResNet-IST leverages far greater computational capacity, and its iterative training improves robustness.
- (2) In most study areas, the ResNet-IST model showed significant improvement in validation, with accuracies mostly 3% higher than those of VASTI and 5–15% higher than those of NBR, demonstrating superior performance in anomaly detection tasks. In addition, the recognition accuracy of VASTI is notably higher than that of NBR, indicating that incorporating texture features (TFs) compensates for the limitations of spectral features (SFs).
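The per-site accuracy comparisons summarized above can be illustrated with a minimal sketch. The paper's exact metric definitions are given in its Evaluation Method section; this assumes simple overall accuracy between a predicted burned/unburned mask and reference labels, with entirely hypothetical toy values.

```python
# Minimal sketch of an overall-accuracy comparison between two binary
# burned-area maps and a reference mask (assumption: overall accuracy is
# the fraction of matching pixels; values below are illustrative only).

def overall_accuracy(predicted, reference):
    """Fraction of pixels whose burned/unburned label matches the reference."""
    assert len(predicted) == len(reference)
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)

# Toy flattened 0/1 burn masks (hypothetical, not the paper's data):
reference  = [1, 1, 0, 0, 1, 0, 1, 0]
resnet_ist = [1, 1, 0, 0, 1, 0, 1, 1]   # 7/8 pixels correct
nbr        = [1, 0, 0, 1, 1, 0, 1, 1]   # 5/8 pixels correct

print(overall_accuracy(resnet_ist, reference))  # 0.875
print(overall_accuracy(nbr, reference))         # 0.625
```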
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Flannigan, M.D.; Amiro, B.D.; Logan, K.A.; Stocks, B.J.; Wotton, B.M. Forest Fires and Climate Change in the 21ST Century. Mitig. Adapt. Strateg. Glob. Chang. 2006, 11, 847–859. [Google Scholar] [CrossRef]
- Pew, K.L.; Larsen, C.P.S. GIS analysis of spatial and temporal patterns of human-caused wildfires in the temperate rain forest of Vancouver Island, Canada. For. Ecol. Manag. 2001, 140, 1–18. [Google Scholar] [CrossRef]
- Carmenta, R.; Parry, L.; Blackburn, A.; Vermeylen, S.; Barlow, J. Understanding Human-Fire Interactions in Tropical Forest Regions: A Case for Interdisciplinary Research across the Natural and Social Sciences. Ecol. Soc. 2011, 16, 22. [Google Scholar] [CrossRef]
- Tanase, M.A.; Kennedy, R.; Aponte, C. Fire severity estimation from space: A comparison of active and passive sensors and their synergy for different forest types. Int. J. Wildland Fire 2015, 24, 1062–1075. [Google Scholar] [CrossRef]
- Vhengani, L.; Frost, P.; Lai, C.; Booi, N.; Dool, R.v.D.; Raath, W. Multitemporal burnt area mapping using Landsat 8: Merging multiple burnt area indices to highlight burnt areas. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Milan, Italy, 26–31 July 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
- Brewer, C.K.; Winne, J.C.; Redmond, R.L.; Opitz, D.W. Classifying and mapping wildfire severity: A comparison of methods. Photogramm. Eng. Remote Sens. 2005, 71, 1311–1320. [Google Scholar] [CrossRef]
- Poulos, H.M.; Barton, A.M.; Koch, G.W.; Kolb, T.E.; Thode, A.E. Wildfire severity and vegetation recovery drive post-fire evapotranspiration in a southwestern pine-oak forest, Arizona, USA. Remote Sens. Ecol. Conserv. 2021, 7, 579–591. [Google Scholar] [CrossRef]
- Chuvieco, E.; Congalton, R.G. Mapping and inventory of forest fires from digital processing of tm data. Geocarto Int. 1988, 3, 41–53. [Google Scholar] [CrossRef]
- Pascolini-Campbell, M.; Lee, C.; Stavros, N.; Fisher, J.B. ECOSTRESS reveals pre-fire vegetation controls on burn severity for Southern California wildfires of 2020. Glob. Ecol. Biogeogr. 2022, 31, 1976–1989. [Google Scholar] [CrossRef]
- Joshi, R.C.; Jensen, A.; Pascolini-Campbell, M.; Fisher, J.B. Coupling between evapotranspiration, water use efficiency, and evaporative stress index strengthens after wildfires in New Mexico, USA. Int. J. Appl. Earth Obs. Geoinf. 2024, 135, 12. [Google Scholar] [CrossRef]
- Veraverbeke, S.; Gitas, I.; Katagis, T.; Polychronaki, A.; Somers, B.; Goossens, R. Assessing post-fire vegetation recovery using red-near infrared vegetation indices: Accounting for background and vegetation variability. ISPRS J. Photogramm. Remote Sens. 2012, 68, 28–39. [Google Scholar] [CrossRef]
- Tucker, C.J. Red and Photographic Infrared Linear Combinations for Monitoring Vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
- Lhermitte, S.; Verbesselt, J.; Verstraeten, W.W.; Veraverbeke, S.; Coppin, P. Assessing intra-annual vegetation regrowth after fire using the pixel based regeneration index. ISPRS J. Photogramm. Remote Sens. 2011, 66, 17–27. [Google Scholar] [CrossRef]
- Fernández-García, V.; Kull, C.A. Refining historical burned area data from satellite observations. Int. J. Appl. Earth Obs. Geoinf. 2023, 120, 12. [Google Scholar] [CrossRef]
- Pinty, B.; Verstraete, M.M. GEMI—A nonlinear index to monitor global vegetation from satellites. Vegetatio 1992, 101, 15–20. [Google Scholar] [CrossRef]
- Grigorov, B. GEMI—A Possible Tool for Identification of Disturbances in Coniferous Forests in Pernik Province (Western Bulgaria). Civ. Environ. Eng. Rep. 2022, 32, 116–122. [Google Scholar] [CrossRef]
- Meena, S.V.; Dhaka, V.S.; Sinwar, D. Exploring the Role of Vegetation Indices in Plant Diseases Identification. In Proceedings of the 2020 Sixth International Conference on Parallel, Distributed and Grid Computing (PDGC), Waknaghat, India, 6–8 November 2020. [Google Scholar]
- García, M.J.L.; Caselles, V. Mapping burns and natural reforestation using thematic Mapper data. Geocarto Int. 1991, 6, 31–37. [Google Scholar] [CrossRef]
- Gerard, F.; Plummer, S.; Wadsworth, R.; Sanfeliu, A.F.; Iliffe, L.; Balzter, H.; Wyatt, B. Forest fire scar detection in the boreal forest with multitemporal SPOT-VEGETATION data. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2575–2585. [Google Scholar] [CrossRef]
- Chuvieco, E.; Martín, M.P. Cartografía de Grandes Incendios Forestales en la Península Ibérica a Partir de Imágenes NOAA-AVHRR; Universidad de Alcalá de Henares: Alcalá de Henares, Spain, 1998. [Google Scholar]
- Majdar, R.S.; Ghassemian, H. A probabilistic SVM approach for hyperspectral image classification using spectral and texture features. Int. J. Remote Sens. 2017, 38, 4265–4284. [Google Scholar] [CrossRef]
- Li, C.; Liu, Q.; Li, B.R.; Liu, L. Investigation of Recognition and Classification of Forest Fires Based on Fusion Color and Textural Features of Images. Forests 2022, 13, 19. [Google Scholar] [CrossRef]
- Mitri, G.H.; Gitas, I.Z. A semi-automated object-oriented model for burned area mapping in the Mediterranean region using Landsat-TM imagery. Int. J. Wildland Fire 2004, 13, 367–376. [Google Scholar] [CrossRef]
- Shama, A.; Zhang, R.; Zhan, R.Q.; Wang, T.; Xie, L.; Bao, X.; Lv, J. A Burned Area Extracting Method Using Polarization and Texture Feature of Sentinel-1A Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 5. [Google Scholar] [CrossRef]
- Fan, J.H.; Yao, Y.J.; Tang, Q.X.; Zhang, X.; Xu, J.; Yu, R.; Liu, L.; Xie, Z.; Ning, J.; Zhang, L. A Hybrid Index for Monitoring Burned Vegetation by Combining Image Texture Features with Vegetation Indices. Remote Sens. 2024, 16, 1539. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet Classification with Deep Convolutional Neural Networks. Commun. Acm 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.Q.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: New York, NY, USA, 2015. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Identity Mappings in Deep Residual Networks. In Proceedings of the 14th European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 8–16 October 2016; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
- Lin, G.S.; Milan, A.; Shen, C.H.; Reid, I. RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- Zhao, H.S.; Shi, J.P.; Qi, X.J.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1904–1916. [Google Scholar] [CrossRef] [PubMed]
- Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366. [Google Scholar] [CrossRef]
- Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Zeiler, M.D.; Krishnan, D.; Taylor, G.W.; Fergus, R. Deconvolutional Networks. In Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; IEEE Computer Society: Los Alamitos, CA, USA, 2010. [Google Scholar]
- Yu, K.; Lin, Y.Q.; Lafferty, J. Learning Image Representations from the Pixel Level via Hierarchical Sparse Coding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; IEEE: New York, NY, USA, 2011. [Google Scholar]
- Salakhutdinov, R.; Hinton, G. Deep Boltzmann Machines. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, Clearwater Beach, FL, USA, 16–18 April 2009; pp. 448–455. [Google Scholar]
- Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.-A. Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML 2008), Helsinki, Finland, 5–9 June 2008. [Google Scholar]
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
- Li, H.; Hu, B.X.; Li, Q.; Jing, L. CNN-Based Individual Tree Species Classification Using High-Resolution Satellite Imagery and Airborne LiDAR Data. Forests 2021, 12, 1697. [Google Scholar] [CrossRef]
- Wang, M.; Zhang, X.; Niu, X.; Wang, F.; Zhang, X. Scene Classification of High-Resolution Remotely Sensed Image Based on ResNet. J. Geovisualization Spat. Anal. 2019, 3, 16. [Google Scholar] [CrossRef]
- Park, M.; Tran, D.Q.; Lee, S.; Wang, F.; Zhang, X. Multilabel Image Classification with Deep Transfer Learning for Decision Support on Wildfire Response. Remote Sens. 2021, 13, 3985. [Google Scholar] [CrossRef]
- Zhang, L.; Wang, M.Y.; Fu, Y.J.; Ding, Y. A Forest Fire Recognition Method Using UAV Images Based on Transfer Learning. Forests 2022, 13, 975. [Google Scholar] [CrossRef]
- Dogan, S.; Barua, P.D.; Kutlu, H.; Baygin, M.; Fujita, H.; Tuncer, T.; Acharya, U. Automated accurate fire detection system using ensemble pretrained residual network. Expert Syst. Appl. 2022, 203, 9. [Google Scholar] [CrossRef]
- Tsalera, E.; Papadakis, A.; Voyiatzis, I.; Samarakou, M. CNN-based, contextualized, real-time fire detection in computational resource-constrained environments. Energy Rep. 2023, 9, 247–257. [Google Scholar] [CrossRef]
- Hu, X.K.; Zhang, P.Z.; Ban, Y.F.; Rahnemoonfar, M. GAN-based SAR and optical image translation for wildfire impact assessment using multi-source remote sensing data. Remote Sens. Environ. 2023, 289, 13. [Google Scholar] [CrossRef]
- Andela, N.; Morton, D.C.; Giglio, L.; Paugam, R.; Chen, Y.; Hantson, S.; van der Werf, G.R. Global Fire Atlas with Characteristics of Individual Fires Size, Duration, Speed and Direction. Earth Syst. Sci. Data 2019, 11, 529–552. [Google Scholar] [CrossRef]
- Irons, J.R.; Dwyer, J.L.; Barsi, J.A. The next Landsat satellite: The Landsat Data Continuity Mission. Remote Sens. Environ. 2012, 122, 11–21. [Google Scholar] [CrossRef]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
- Moghadam, P.; Ward, D.; Goan, E.; Jayawardena, S.; Sikka, P.; Hernandez, E. Plant Disease Detection using Hyperspectral Imaging. In Proceedings of the International Conference on Digital Image Computing—Techniques and Applications (DICTA), Sydney, Australia, 29 November–1 December 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- Wu, G.S.; Fang, Y.L.; Jiang, Q.Y.; Cui, M.; Li, N.; Ou, Y.; Diao, Z.; Zhang, B. Early identification of strawberry leaves disease utilizing hyperspectral imaging combing with spectral features, multiple vegetation indices and textural features. Comput. Electron. Agric. 2023, 204, 12. [Google Scholar] [CrossRef]
- Carlson, T.N.; Ripley, D.A. On the relation between NDVI, fractional vegetation cover, and leaf area index. Remote Sens. Environ. 1997, 62, 241–252. [Google Scholar] [CrossRef]
- Liu, H.Q.; Huete, A. A feedback based modification of the ndvi to minimize canopy background and atmospheric noise. IEEE Trans. Geosci. Remote Sens. 1995, 33, 457–465. [Google Scholar] [CrossRef]
- Pearson, R.L.; Miller, L.D. Remote Mapping of Standing Crop Biomass for Estimation of Productivity of the Shortgrass Prairie. Remote Sens. Environ. VIII 1972, 2, 1357–1381. [Google Scholar] [CrossRef]
- Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
- Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2001, 76, 156–172. [Google Scholar] [CrossRef]
- Jordan, C.F. Derivation of leaf-area index from quality of light on forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
- Galvao, L.S.; Formaggio, A.R.; Tisot, D.A. Discrimination of sugarcane varieties in southeastern brazil with EO-1 hyperion data. Remote Sens. Environ. 2005, 94, 523–534. [Google Scholar] [CrossRef]
- Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
- Gitelson, A.A.; Viña, A.; Arkebauer, T.J.; Rundquist, D.C.; Keydan, G.; Leavitt, B. Remote estimation of leaf area index and green leaf biomass in maize canopies. Geophys. Res. Lett. 2003, 30, 4. [Google Scholar] [CrossRef]
- Pu, R.L.; Gong, P.; Yu, Q. Comparative analysis of EO-1 ALI and Hyperion, and Landsat ETM+ data for mapping forest crown closure and leaf area index. Sensors 2008, 8, 3744–3766. [Google Scholar] [CrossRef] [PubMed]
- Rao, N.R.; Garg, P.K.; Ghosh, S.K.; Dadhwal, V.K. Estimation of leaf total chlorophyll and nitrogen concentrations using hyperspectral satellite imagery. J. Agric. Sci. 2008, 146, 65–75. [Google Scholar]
- Smith, J.R.; Chang, S.F. Automated binary texture feature sets for image retrieval. In Proceedings of the Acoustics, Speech, and Signal Processing, Atlanta, GA, USA, 9 May 1996. [Google Scholar]
- Baraldi, A.; Parmiggiani, F. An investigation of the textural characteristics associated with gray-level cooccurrence matrix statistical parameters. IEEE Trans. Geosci. Remote Sens. 1995, 33, 293–304. [Google Scholar] [CrossRef]
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC3, 610–621. [Google Scholar] [CrossRef]
- Conners, R.W.; Trivedi, M.M.; Harlow, C.A. Segmentation of a high-resolution urban scene using texture operators. Comput. Vis. Graph. Image Process. 1984, 25, 273–310. [Google Scholar] [CrossRef]
- Yuan, J.Y.; Wang, D.L.; Li, R.X. Remote Sensing Image Segmentation by Combining Spectral and Texture Features. IEEE Trans. Geosci. Remote Sens. 2014, 52, 16–24. [Google Scholar] [CrossRef]
- Zhou, Y.; Wang, Z.; Zheng, S.; Zhou, L.; Dai, L.; Luo, H.; Zhang, Z.; Sui, M. Optimization of automated garbage recognition model based on ResNet-50 and weakly supervised CNN for sustainable urban development. Alex. Eng. J. 2024, 108, 415–427. [Google Scholar] [CrossRef]
- Xie, S.N.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- Lu, Z.Y.; Lu, J.; Ge, Q.B.; Zhan, T. Multi-object Detection Method based on YOLO and ResNet Hybrid Networks. In Proceedings of the 4th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Osaka, Japan, 3–5 July 2019; IEEE: New York, NY, USA, 2019. [Google Scholar]
- Pan, T.S.; Huang, H.C.; Lee, J.C.; Chen, C.H. Multi-scale ResNet for real-time underwater object detection. Signal Image Video Process. 2021, 15, 941–949. [Google Scholar] [CrossRef]
- Pohlen, T.; Hermans, A.; Mathias, M.; Leibe, B. Full-Resolution Residual Networks for Semantic Segmentation in Street Scenes. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017. [Google Scholar]
- Xia, K.J.; Yin, H.S.; Zhang, Y.D. Deep Semantic Segmentation of Kidney and Space-Occupying Lesion Area Based on SCNN and ResNet Models Combined with SIFT-Flow Algorithm. J. Med. Syst. 2019, 43, 12. [Google Scholar] [CrossRef]
- Lipovetsky, S.; Conklin, M. Analysis of regression in game theory approach. Appl. Stoch. Models Bus. Ind. 2001, 17, 319–330. [Google Scholar] [CrossRef]
- Fujimoto, K.; Kojadinovic, I.; Marichal, J.-L. Axiomatic characterizations of probabilistic and cardinal-probabilistic interaction indices. Games Econ. Behav. 2006, 55, 72–99. [Google Scholar] [CrossRef]
- Lundberg, S.M.; Erion, G.G.; Lee, S.I. Consistent Individualized Feature Attribution for Tree Ensembles. arXiv 2018, arXiv:1802.03888. [Google Scholar]
- Tan, M.; Pang, R.; Le, Q.V. Efficientdet: Scalable and efficient object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
- Doshi-Velez, F.; Kim, B. Towards A Rigorous Science of Interpretable Machine Learning. arXiv 2017, arXiv:1702.08608. [Google Scholar] [CrossRef]
- Fuentes, A.; Yoon, S.; Kim, S.C.; Park, D.S. A Robust Deep-Learning-Based Detector for Real-Time Tomato Plant Diseases and Pests Recognition. Sensors 2017, 17, 2022. [Google Scholar] [CrossRef] [PubMed]
- Roy, S.K.; Manna, S.; Song, T.C.; Bruzzone, L. Attention-Based Adaptive Spectral-Spatial Kernel ResNet for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2021, 59, 7831–7843. [Google Scholar] [CrossRef]
- Gu, J.X.; Wang, Z.H.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377. [Google Scholar] [CrossRef]
- Balling, J.; Herold, M.; Reiche, J. How textural features can improve SAR-based tropical forest disturbance mapping. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 14. [Google Scholar] [CrossRef]
- Alzubaidi, L.; Zhang, J.L.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 74. [Google Scholar] [CrossRef]
- Hu, X.Y.; Tao, C.V.; Prenzel, B. Automatic segmentation of high-resolution satellite imagery by integrating texture, intensity, and color features. Photogramm. Eng. Remote Sens. 2005, 71, 1399–1406. [Google Scholar] [CrossRef]
- Ghasemi, N.; Sahebi, M.R.; Mohammadzadeh, A. Biomass Estimation of a Temperate Deciduous Forest Using Wavelet Analysis. IEEE Trans. Geosci. Remote Sens. 2013, 51, 765–776. [Google Scholar] [CrossRef]
- Shafiq, M.; Gu, Z.Q. Deep Residual Learning for Image Recognition: A Survey. Appl. Sci. 2022, 12, 8972. [Google Scholar] [CrossRef]
- Rawat, W.; Wang, Z.H. Deep Convolutional Neural Networks for Image Classification: A Comprehensive Review. Neural Comput. 2017, 29, 2352–2449. [Google Scholar] [CrossRef]
- Kumar, D.; Taylor, G.W.; Wong, A. Opening the black box of financial AI with CLEAR-Trade: A class-enhanced attentive response approach for explaining and visualizing deep learning-driven stock market prediction. arXiv 2017, arXiv:1709.01574. [Google Scholar] [CrossRef]
- Xu, L.; Chen, Q. Remote-Sensing Image Usability Assessment Based on ResNet by Combining Edge and Texture Maps. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2019, 12, 1825–1834. [Google Scholar] [CrossRef]
- Drozdzal, M.; Vorontsov, E.; Chartrand, G.; Kadoury, S.; Pal, C. The Importance of Skip Connections in Biomedical Image Segmentation. In Proceedings of the 2nd International Workshop on Deep Learning in Medical Image Analysis (DLMIA)/1st International Workshop on Large-Scale Annotation of Biomedical Data and Expert Label Synthesis (LABELS), Athens, Greece, 21 October 2016; Springer International Publishing: Cham, Switzerland, 2016. [Google Scholar]
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2014, arXiv:1409.0473. [Google Scholar]
- Shrestha, A.; Mahmood, A. Review of Deep Learning Algorithms and Architectures. IEEE Access 2019, 7, 53040–53065. [Google Scholar] [CrossRef]
| Region | Site Name | Longitude | Latitude | Land Cover Type | Number of Samples | Sample Size |
|---|---|---|---|---|---|---|
| (a) | Los Gatos | 121.8273°W | 37.1111°N | Evergreen Needleleaf forest | 52 | 200 × 200 |
| (b) | El Dorado | 120.8235°W | 38.9682°N | Mixed forest | 163 | |
| (c) | Inyo | 117.9242°W | 36.0431°N | Evergreen Needleleaf forest | 147 | |
| (d) | Apure | 69.2507°W | 7.2310°N | Grasslands | 186 | |
| (e) | Kahemba | 18.8686°E | 7.2313°S | Woody savannas | 217 | |
| (f) | Barh Azoum | 21.3424°E | 11.5677°N | Savannas | 223 | |
| (g) | Kusti | 32.0963°E | 11.5679°N | Croplands | 252 | |
| (h) | Liangshan | 100.7287°E | 27.4312°N | Evergreen Needleleaf forest | 29 | |
| (i) | Mpika | 31.8310°E | 11.5675°S | Woody savannas | 236 | |
| (j) | Dirico | 21.2766°E | 17.3446°S | Savannas | 267 |
| VIs | Formula | Ref |
|---|---|---|
| NDVI | [53] | |
| EVI | [54] | |
| RVI | [55] | |
| GNDVI | [56] | |
| TVI | [57] | |
| DVI | [58] | |
| DSWI | [59] | |
| MSAVI | [60] | |
| GCVI | [61] | |
| MSR | [62] | |
| PBI | [63] | |
| GEMI | [15] |
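The formula column above was rendered as images in the source and is not recoverable here. As a hedged illustration, the following sketch implements the standard textbook definitions of three of the listed indices (NDVI, GNDVI, EVI) from the cited literature; the paper's exact Landsat 8 band choices are not reproduced, and reflectance inputs are assumed to be surface reflectance in [0, 1].

```python
# Standard definitions of three spectral indices from the table above
# (a sketch based on the cited literature, not the paper's exact formulas).

def ndvi(nir, red):
    # Tucker (1979): normalized difference of NIR and red reflectance
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    # Gitelson et al. (1996): green-band analogue of NDVI
    return (nir - green) / (nir + green)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    # Liu & Huete (1995): enhanced index with aerosol and background terms
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

# Illustrative reflectances for a healthy vegetation pixel:
print(round(ndvi(0.45, 0.05), 3))  # 0.8
```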
| Texture | Formula |
|---|---|
| Mean | |
| Standard Deviation | |
| Contrast | |
| Dissimilarity | |
| Homogeneity | |
| Energy | |
| Correlation | |
| Autocorrelation | |
| Entropy |
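The texture formulas above were likewise rendered as images in the source. As a hedged sketch, the following computes a gray-level co-occurrence matrix (GLCM) and three of the listed Haralick-style features (contrast, homogeneity, energy) in their standard form; the paper's GLCM parameters (window size, offset, gray levels) are set in Section 4.1 and are not assumed here, so a single horizontal offset on a tiny quantized image is used.

```python
# GLCM and three standard Haralick-style texture features (a sketch with
# an assumed (0, 1) horizontal offset, not the paper's exact configuration).

def glcm(image, levels):
    """Normalized co-occurrence counts for horizontally adjacent pixel pairs."""
    counts = [[0.0] * levels for _ in range(levels)]
    total = 0
    for row in image:
        for a, b in zip(row, row[1:]):
            counts[a][b] += 1
            total += 1
    return [[c / total for c in row] for row in counts]

def contrast(p):
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

def homogeneity(p):
    n = len(p)
    return sum(p[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))

def energy(p):
    return sum(v * v for row in p for v in row)

# A 4 × 4 image quantized to 4 gray levels:
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
p = glcm(img, levels=4)
print(round(contrast(p), 3))     # 0.583
print(round(homogeneity(p), 3))  # 0.808
print(round(energy(p), 3))       # 0.167
```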
| Layer Name | Output Size | ResNet50 |
|---|---|---|
| Conv 1 | 112 × 112 | 7 × 7, 64, stride 2 |
| Conv 2_x | 56 × 56 | 3 × 3 max pool, stride 2; [1 × 1, 64; 3 × 3, 64; 1 × 1, 256] × 3 |
| Conv 3_x | 28 × 28 | [1 × 1, 128; 3 × 3, 128; 1 × 1, 512] × 4 |
| Conv 4_x | 14 × 14 | [1 × 1, 256; 3 × 3, 256; 1 × 1, 1024] × 6 |
| Conv 5_x | 7 × 7 | [1 × 1, 512; 3 × 3, 512; 1 × 1, 2048] × 3 |
| | 1 × 1 | Average pool, 1000-d fc, softmax |
| FLOPs | 3.8 × 10⁹ | |
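The spatial output sizes in the ResNet-50 table follow from the standard convolution/pooling output-size formula, out = ⌊(in + 2·pad − kernel) / stride⌋ + 1, assuming the usual 224 × 224 input of He et al. (2016). The sketch below just replays that arithmetic (approximating each downsampling stage as a stride-2, padding-1, 3 × 3 operation); it is not the paper's implementation.

```python
# Replaying the output-size arithmetic behind the ResNet-50 table
# (a sketch assuming a 224 x 224 input, as in He et al. 2016).

def conv_out(size, kernel, stride, pad):
    return (size + 2 * pad - kernel) // stride + 1

size = 224
size = conv_out(size, kernel=7, stride=2, pad=3)   # Conv 1 -> 112
stages = [size]
size = conv_out(size, kernel=3, stride=2, pad=1)   # max pool -> 56 (Conv 2_x)
stages.append(size)
for _ in range(3):                                  # Conv 3_x..5_x each halve
    size = conv_out(size, kernel=3, stride=2, pad=1)
    stages.append(size)

print(stages)  # [112, 56, 28, 14, 7]
```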
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fan, J.; Yao, Y.; Li, Y.; Zhang, X.; Chen, J.; Fisher, J.B.; Zhang, X.; Jiang, B.; Liu, L.; Xie, Z.; et al. Detecting Burned Vegetation Areas by Merging Spectral and Texture Features in a ResNet Deep Learning Architecture. Remote Sens. 2025, 17, 3665. https://doi.org/10.3390/rs17223665

