FBC-ANet: A Semantic Segmentation Model for UAV Forest Fire Images Combining Boundary Enhancement and Context Awareness
Abstract
1. Introduction
The main contributions of this work are as follows:
1. Xception [27] is used as the backbone network of the encoder. In the decoder, a boundary enhancement module is proposed to generate enhanced features that restore boundary information, improving the effectiveness of semantic segmentation for forest fires.
2. In the bottleneck, the proposed contextual information awareness module is used during segmentation; it enhances the feature learning ability for fire pixels and makes feature extraction more robust.
3. Experiments verify that the FBC-ANet model achieves a prediction accuracy of 0.9219, an IoU of 0.8308, and an F1 score of 0.9076 on the FLAME dataset.
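The efficiency of the Xception backbone cited above comes from replacing standard convolutions with depthwise separable convolutions. As a minimal illustration (not code from the paper), the parameter savings can be computed directly; the function names here are my own:

```python
# Illustrative parameter count: standard vs. depthwise separable convolution,
# the building block of the Xception backbone used in FBC-ANet's encoder.
# Bias terms are omitted for simplicity.

def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution mixes space and channels jointly.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    # Depthwise (k x k per input channel) followed by pointwise (1 x 1) conv.
    return k * k * c_in + c_in * c_out

# Example: a 3 x 3 layer with 728 input and 728 output channels, the channel
# width used in Xception's middle-flow blocks.
std = standard_conv_params(3, 728, 728)   # 4,769,856 parameters
sep = separable_conv_params(3, 728, 728)  # 536,536 parameters
print(std, sep, round(std / sep, 1))      # roughly a 9x reduction
```

The same factorization is what keeps the deep encoder in Table 1 tractable at high input resolutions.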
2. Related Works
3. Materials and Methods
3.1. Datasets
3.2. Feature Extraction Module (FEM)
3.3. Boundary Enhancement Module (BEM)
3.4. Contextual Information Awareness (CIA) Module
3.5. Loss Function
3.6. Overall Architecture of the FBC-ANet Model
4. Results and Discussion
4.1. Evaluation Metrics
4.2. Parameter Settings and Ablation Experiments
Comparison with Other Segmentation Methods
4.3. Visualization of Segmentation Results
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning |
---|---|
UAV | Unmanned aerial vehicle |
TP | True positive |
TN | True negative |
FP | False positive |
FN | False negative |
IoU | Intersection over union |
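The confusion-matrix counts abbreviated above (TP, FP, FN) are what the evaluation metrics in Section 4.1 are built from. A minimal sketch, with made-up pixel counts purely for illustration:

```python
# Standard segmentation metrics computed from confusion-matrix counts.
# The pixel counts below are hypothetical, not from the paper.

def precision(tp, fp):
    # Fraction of predicted fire pixels that are truly fire.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of true fire pixels that were detected.
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def iou(tp, fp, fn):
    # Intersection over union for the fire class.
    return tp / (tp + fp + fn)

tp, fp, fn = 900, 100, 50  # hypothetical pixel counts
print(round(precision(tp, fp), 3), round(recall(tp, fn), 3),
      round(f1_score(tp, fp, fn), 3), round(iou(tp, fp, fn), 3))
```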
References
1. Dimitropoulos, S. Fighting fire with science. Nature 2019, 576, 328–329.
2. Aytekin, E. Wildfires Ravaging Forestlands in Many Parts of Globe. 2021. Available online: https://www.aa.com.tr/en/world/wildfires-ravaging-forestlands-in-many-parts-of-globe/2322512 (accessed on 20 February 2023).
3. Huang, Q.; Razi, A.; Afghah, F.; Fule, P. Wildfire Spread Modeling with Aerial Image Processing. In Proceedings of the 2020 IEEE 21st International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), Cork, Ireland, 31 August–3 September 2020; pp. 335–340.
4. Friedlingstein, P.; Jones, M.; O’Sullivan, M.; Andrew, R.; Hauck, J.; Peters, G.; Peters, W.; Pongratz, J.; Sitch, S.; Le Quéré, C.; et al. Global carbon budget 2019. Earth Syst. Sci. Data 2019, 11, 1783–1838.
5. Erdelj, M.; Natalizio, E.; Chowdhury, K.R.; Akyildiz, I.F. Help from the sky: Leveraging UAVs for disaster management. IEEE Pervasive Comput. 2017, 16, 24–32.
6. Shamsoshoara, A.; Afghah, F.; Razi, A.; Mousavi, S.; Ashdown, J.; Turk, K. An Autonomous Spectrum Management Scheme for Unmanned Aerial Vehicle Networks in Disaster Relief Operations. IEEE Access 2020, 8, 58064–58079.
7. Mousavi, S.; Afghah, F.; Ashdown, J.D.; Turck, K. Use of a quantum genetic algorithm for coalition formation in large-scale UAV networks. Ad Hoc Netw. 2019, 87, 26–36.
8. Mahmudnia, D.; Arashpour, M.; Bai, Y.; Feng, H. Drones and Blockchain Integration to Manage Forest Fires in Remote Regions. Drones 2022, 6, 331.
9. Saffre, F.; Hildmann, H.; Karvonen, H.; Lind, T. Monitoring and Cordoning Wildfires with an Autonomous Swarm of Unmanned Aerial Vehicles. Drones 2022, 6, 301.
10. Gaur, A.; Singh, A.; Kumar, A.; Kumar, A.; Kapoor, K. Video flame and smoke based fire detection algorithms: A literature review. Fire Technol. 2020, 56, 1943–1980.
11. Ghali, R.; Jmal, M.; Souidene Mseddi, W.; Attia, R. Recent advances in fire detection and monitoring systems: A review. In Proceedings of the 18th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT’18), Genoa, Italy, 20–22 December 2018; pp. 332–340.
12. Huang, L.; Liu, G.; Wang, Y.; Yuan, H.; Chen, T. Fire detection in video surveillances using convolutional neural networks and wavelet transform. Eng. Appl. Artif. Intell. 2022, 110, 104737.
13. Ahmad Khan, Z.; Hussain, T.; Min Ullah, F.U.; Gupta, S.K.; Lee, M.Y.; Baik, W.S. Randomly Initialized CNN with Densely Connected Stacked Autoencoder for Efficient Fire Detection. Eng. Appl. Artif. Intell. 2022, 116, 105403.
14. Lin, J.; Lin, H.; Wang, F. STPM_SAHI: A Small-Target Forest Fire Detection Model Based on Swin Transformer and Slicing Aided Hyper Inference. Forests 2022, 13, 1603.
15. Harkat, H.; Nascimento, J.; Bernardino, A.; Thariq Ahmed, H.F. Fire images classification based on a handcraft approach. Expert Syst. Appl. 2023, 212, 118594.
16. Guede-Fernández, F.; Martins, L.; de Almeida, R.V.; Gamboa, H.; Vieira, P. A Deep Learning Based Object Identification System for Forest Fire Detection. Fire 2021, 4, 75.
17. Alipour, M.; La Puma, I.; Picotte, J.; Shamsaei, K.; Rowell, E.; Watts, A.; Kosovic, B.; Ebrahimian, H.; Taciroglu, E. A Multimodal Data Fusion and Deep Learning Framework for Large-Scale Wildfire Surface Fuel Mapping. Fire 2023, 6, 36.
18. Ghali, R.; Akhloufi, M.A.; Jmal, M.; Souidene Mseddi, W.; Attia, R. Wildfire Segmentation Using Deep Vision Transformers. Remote Sens. 2021, 13, 3527.
19. Harkat, H.; Nascimento, J.M.P.; Bernardino, A.; Thariq Ahmed, H.F. Assessing the Impact of the Loss Function and Encoder Architecture for Fire Aerial Images Segmentation Using Deeplabv3+. Remote Sens. 2022, 14, 2023.
20. Toulouse, T.; Rossi, L.; Campana, A.; Celik, T.; Akhloufi, M.A. Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Saf. J. 2017, 92, 188–194.
21. Shamsoshoara, A.; Afghah, F.; Razi, A.; Zheng, L.; Fulé, P.Z.; Blasch, E. Aerial Imagery Pile Burn Detection Using Deep Learning: The FLAME Dataset. Comput. Netw. 2021, 193, 142–149.
22. Avazov, K.; Mukhiddinov, M.; Makhmudov, F.; Cho, Y.I. Fire Detection Method in Smart City Environments Using a Deep-Learning-Based Approach. Electronics 2022, 11, 73.
23. Norkobil Saydirasulovich, S.; Abdusalomov, A.; Jamil, M.K.; Nasimov, R.; Kozhamzharova, D.; Cho, Y.-I. A YOLOv6-Based Improved Fire Detection Approach for Smart City Environments. Sensors 2023, 23, 3161.
24. Guan, Z.; Miao, X.; Mu, Y.; Sun, Q.; Ye, Q.; Gao, D. Forest fire segmentation from aerial imagery data using an improved instance segmentation model. Remote Sens. 2022, 14, 3159.
25. Huang, Z.; Huang, L.; Gong, Y.; Huang, C.; Wang, X. Mask scoring R-CNN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 6409–6418.
26. Ghali, R.; Akhloufi, M.A.; Mseddi, W.S. Deep Learning and Transformers Approaches for UAV Based Wildfire Detection and Segmentation. Sensors 2022, 22, 1977.
27. Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
28. Shelhamer, E.; Long, J.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
29. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
30. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6230–6239.
31. Lin, G.; Milan, A.; Shen, C.; Reid, I.D. RefineNet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5168–5177.
32. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 833–851.
33. Mottaghi, R.; Chen, X.; Liu, X.; Cho, N.G.; Lee, S.W.; Fidler, S.; Urtasun, R.; Yuille, A. The role of context for object detection and semantic segmentation in the wild. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 891–898.
34. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223.
35. Zhou, B.; Zhao, H.; Puig, X.; Fidler, S.; Barriuso, A.; Torralba, A. Scene parsing through ADE20K dataset. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5122–5130.
36. Caesar, H.; Uijlings, J.; Ferrari, V. COCO-Stuff: Thing and stuff classes in context. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1209–1218.
37. Wu, H.; Zhang, J.; Huang, K.; Liang, K.; Yu, Y. FastFCN: Rethinking dilated convolution in the backbone for semantic segmentation. arXiv 2019, arXiv:1903.11816.
38. Allison, R.S.; Johnston, J.M.; Craig, G.; Jennings, S. Airborne optical and thermal remote sensing for wildfire detection and monitoring. Sensors 2016, 16, 1310.
39. Valero, M.M.; Rios, O.; Mata, C.; Pastor, E.; Planas, E. An integrated approach for tactical monitoring and data-driven spread forecasting of wildfires. Fire Saf. J. 2017, 91, 835–844.
40. Paul, S.E.; Salvaggio, C. A polynomial regression approach to subpixel temperature extraction from a single-band thermal infrared image. Proc. SPIE 2011, 8013, 801302.
41. DJI. Phantom 3 Professional. Available online: https://www.dji.com/phantom-3-pro (accessed on 16 April 2023).
42. DJI. Matrice 200 V1. Available online: https://www.dji.com/matrice-200-series/info#specs (accessed on 16 April 2023).
43. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826.
44. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1–9.
45. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456.
46. Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 2015, 115, 211–252.
47. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
48. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-Local Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7794–7803.
49. Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Jorge Cardoso, M. Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer International Publishing: Cham, Switzerland, 2017; pp. 240–248.
50. Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
Type | Format | Palette | Duration | FPS | Resolution | Label | Shot by |
---|---|---|---|---|---|---|---|
Video | MOV | WhiteHot | 89 s | 30 | 640 × 512 | - | Vue Pro R, FLIR |
Video | MOV | GreenHot | 305 s | 30 | 640 × 512 | - | Vue Pro R, FLIR |
Video | MOV | Fusion | 25 min | 30 | 640 × 512 | - | Vue Pro R, FLIR |
Video | MOV | RGB | 17 min | 30 | 3840 × 1920 | - | Phantom, DJI |
Image | JPEG | RGB | 2003 frames | - | 3840 × 1920 | Fire | Phantom, DJI |
Mask | PNG | Binary | 2003 frames | - | 3840 × 1920 | Fire | - |
Type | Horizontal Speed | Remote Distance | Wheelbase | Weight |
---|---|---|---|---|
PHANTOM 3 | <61.2 km/h | 3500 m to 5000 m | 350 mm | 1.28 kg |
MATRICE 200 | <57.6 km/h | 4000 m to 7000 m | 643 mm | 3.80 kg |
(Data augmentation examples: raw images and their masks alongside the flipped, rotated, translated, and clipped versions. The example images in this table are not reproduced in this text version.)
Block | Operation | Kernel Size | Stride/Padding | Output Size |
---|---|---|---|---|
➀ | Conv + ReLU | 3 × 3 × 32 | 2 × 2/1 | 1920 × 1080 × 32 |
 | Conv + ReLU | 3 × 3 × 64 | 1 × 1/2 | 1920 × 1080 × 64 |
➁ | Residual | 1 × 1 × 128 | 2 × 2/0 | 960 × 540 × 128 |
 | SeparableConv | 3 × 3 × 128 | 1 × 1/2 | 1920 × 1080 × 128 |
 | ReLU + SeparableConv | 3 × 3 × 128 | 1 × 1/2 | 1920 × 1080 × 128 |
 | MaxPooling | 3 × 3 × 128 | 2 × 2/1 | 960 × 540 × 128 |
➂ | Residual | 1 × 1 × 256 | 2 × 2/0 | 480 × 270 × 256 |
 | ReLU + SeparableConv | 3 × 3 × 256 | 1 × 1/2 | 960 × 540 × 256 |
 | ReLU + SeparableConv | 3 × 3 × 256 | 1 × 1/2 | 960 × 540 × 256 |
 | MaxPooling | 3 × 3 × 256 | 2 × 2/1 | 480 × 270 × 256 |
➃ | Residual | 1 × 1 × 728 | 2 × 2/0 | 240 × 135 × 728 |
 | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 480 × 270 × 728 |
 | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 480 × 270 × 728 |
 | MaxPooling | 3 × 3 × 728 | 2 × 2/1 | 240 × 135 × 728 |
➄–⑫ | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
 | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
 | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
⑬ | Residual | 1 × 1 × 1024 | 2 × 2/0 | 120 × 67 × 1024 |
 | ReLU + SeparableConv | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
 | ReLU + SeparableConv | 3 × 3 × 1024 | 1 × 1/2 | 240 × 135 × 1024 |
 | MaxPooling | 3 × 3 × 1024 | 2 × 2/1 | 120 × 67 × 1024 |
⑭ | SeparableConv + ReLU | 3 × 3 × 1536 | 2 × 2/1 | 120 × 67 × 1536 |
 | SeparableConv + ReLU | 3 × 3 × 2048 | 1 × 1/2 | 120 × 67 × 2048 |
⑮ | UpSampling | – | – | 240 × 135 × 2048 |
 | SeparableConv + ReLU | 3 × 3 × 1024 | 2 × 2/1 | 240 × 135 × 1024 |
 | Residual | 1 × 1 × 1024 | 1 × 1/0 | 240 × 135 × 1024 |
 | SeparableConv + ReLU | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
 | SeparableConv + ReLU | 3 × 3 × 728 | 1 × 1/2 | 240 × 135 × 728 |
⑯ | UpSampling | – | – | 480 × 270 × 728 |
 | SeparableConv + ReLU | 3 × 3 × 728 | 2 × 2/1 | 480 × 270 × 728 |
 | Residual | 1 × 1 × 728 | 1 × 1/0 | 480 × 270 × 728 |
 | SeparableConv + ReLU | 3 × 3 × 728 | 1 × 1/2 | 480 × 270 × 728 |
 | SeparableConv + ReLU | 3 × 3 × 728 | 1 × 1/2 | 480 × 270 × 728 |
⑰ | UpSampling | – | – | 960 × 540 × 728 |
 | SeparableConv + ReLU | 3 × 3 × 256 | 2 × 2/1 | 960 × 540 × 256 |
 | Residual | 1 × 1 × 256 | 1 × 1/0 | 960 × 540 × 256 |
 | SeparableConv + ReLU | 3 × 3 × 256 | 1 × 1/2 | 960 × 540 × 256 |
 | SeparableConv + ReLU | 3 × 3 × 256 | 1 × 1/2 | 960 × 540 × 256 |
⑱ | UpSampling | – | – | 1920 × 1080 × 256 |
 | SeparableConv + ReLU | 3 × 3 × 128 | 2 × 2/1 | 1920 × 1080 × 128 |
 | Residual | 1 × 1 × 128 | 1 × 1/0 | 1920 × 1080 × 128 |
 | SeparableConv + ReLU | 3 × 3 × 128 | 1 × 1/2 | 1920 × 1080 × 128 |
 | SeparableConv + ReLU | 3 × 3 × 128 | 1 × 1/2 | 1920 × 1080 × 128 |
⑲ | Conv + ReLU | 3 × 3 × 64 | 1 × 1/2 | 1920 × 1080 × 64 |
 | UpSampling | – | – | 3840 × 2160 × 64 |
 | Conv + ReLU | 3 × 3 × 32 | 1 × 1/2 | 3840 × 2160 × 32 |
 | Conv + Sigmoid | 3 × 3 × 2 | 1 × 1/2 | 3840 × 2160 × 2 |
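The spatial sizes in the table above follow the standard convolution/pooling output formula, out = floor((in + 2·padding − kernel) / stride) + 1. A short sketch checking a few of the listed dimensions (the exact padding used per layer is an assumption on my part, inferred from the listed input/output sizes):

```python
import math

def conv_out(size, kernel, stride, padding):
    # Output spatial dimension of a convolution or pooling layer.
    return math.floor((size + 2 * padding - kernel) / stride) + 1

# Block 1: a 3x3 conv with stride 2 and padding 1 halves 3840 x 2160
# to 1920 x 1080, matching the first row of the table.
print(conv_out(3840, 3, 2, 1), conv_out(2160, 3, 2, 1))  # 1920 1080

# Block 13 max pooling: a 3x3 window with stride 2 and no padding
# maps the height 135 to 67, matching the 120 x 67 output listed.
print(conv_out(135, 3, 2, 0))  # 67
```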
Environment | Type |
---|---|
Operating System | Ubuntu 18.04 |
Framework | TensorFlow 2.6.0 and Keras 2.6.0 |
Language | Python 3.7 |
CPU | Intel(R) Xeon(R) Silver 4110 |
GPU | GeForce RTX 2080Ti |
Configuration | Value |
---|---|
Batch Size | 32 |
Optimizer | Adam |
Learning Rate | 1 × 10−3 |
UpSampling | bi-linear |
Epochs | 50 |
Model | FEM | BEM | CIA | Precision (%) | Recall (%) | F1 Score (%) | IoU (%) |
---|---|---|---|---|---|---|---|
0 | | | | 87.89 | 85.02 | 85.91 | 76.43 |
1 | ✔ | | | 91.62 | 87.59 | 89.56 | 81.09 |
2 | ✔ | ✔ | | 91.91 | 86.94 | 89.36 | 80.76 |
3 | ✔ | | ✔ | 91.76 | 87.48 | 89.57 | 81.11 |
4 | ✔ | ✔ | ✔ | 92.19 | 89.37 | 90.76 | 83.08 |
UpSampling | Loss Function | Precision (%) | Recall (%) | F1 Score (%) | IoU (%) |
---|---|---|---|---|---|
Bi-Linear | | 91.93 | 89.17 | 90.53 | 82.70 |
 | | 91.92 | 89.53 | 90.71 | 83.01 |
 | | 92.19 | 89.37 | 90.76 | 83.08 |
DeConv | | 91.83 | 89.06 | 90.44 | 82.56 |
 | | 91.99 | 89.19 | 90.57 | 82.77 |
 | | 92.02 | 89.57 | 90.74 | 83.11 |
Batch Size | Precision (%) | Recall (%) | F1 Score (%) | IoU (%) |
---|---|---|---|---|
4 | 92.08 | 88.22 | 90.11 | 81.99 |
8 | 92.13 | 89.02 | 90.54 | 82.76 |
16 | 91.92 | 89.45 | 90.67 | 82.94 |
32 | 92.19 | 89.37 | 90.76 | 83.08 |
Method | Precision (%) | Recall (%) | F1 Score (%) | IoU (%) |
---|---|---|---|---|
UNet | 84.75 | 76.22 | 80.82 | 67.23 |
SegNet [50] | 85.21 | 78.65 | 81.80 | 71.12 |
RefineNet | 88.80 | 82.95 | 85.78 | 76.22 |
PSPNet | 87.89 | 85.02 | 85.91 | 76.43 |
DeepLab | 90.01 | 85.10 | 87.01 | 80.20 |
FLAME | 91.99 | 83.88 | 87.75 | 78.17 |
MaskSU R-CNN (w/o DSA) | 88.63 | 88.89 | 88.76 | 80.77 |
MaskSU R-CNN (w/DSA) | 91.85 | 88.81 | 90.30 | 82.31 |
FBC-ANet | 92.19 | 89.37 | 90.76 | 83.08 |
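For a single foreground class, the F1 (Dice) score and IoU in the table above are algebraically linked: IoU = F1 / (2 − F1). A quick consistency check of the reported FBC-ANet scores (agreement is up to rounding in the published figures):

```python
# Consistency check between the reported precision/recall and the derived
# F1 and IoU for FBC-ANet, using the binary-segmentation identity
#   IoU = F1 / (2 - F1).

def f1_from_pr(p, r):
    # F1 is the harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

def iou_from_f1(f1):
    # Holds for a single foreground class (Dice vs. Jaccard).
    return f1 / (2 - f1)

p, r = 0.9219, 0.8937            # reported precision and recall
f1 = f1_from_pr(p, r)
print(round(f1, 4), round(iou_from_f1(f1), 4))  # 0.9076 0.8308
```

The same identity can be used to sanity-check the other rows of the comparison table.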
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhang, L.; Wang, M.; Ding, Y.; Wan, T.; Qi, B.; Pang, Y. FBC-ANet: A Semantic Segmentation Model for UAV Forest Fire Images Combining Boundary Enhancement and Context Awareness. Drones 2023, 7, 456. https://doi.org/10.3390/drones7070456