CMFPNet: A Cross-Modal Multidimensional Frequency Perception Network for Extracting Offshore Aquaculture Areas from MSI and SAR Images
Abstract
1. Introduction
- We address the limitations of single-modal remote sensing imagery by incorporating two heterogeneous data sources, Sentinel-1 and Sentinel-2, into the offshore aquaculture area extraction task. This enriches the feature representation of aquaculture areas and provides the model with more diverse and comprehensive land-cover information for training. To avoid data redundancy, spectral feature indices are computed from the MSI imagery. The result is a multimodal remote sensing dataset comprising visible-light imagery, spectral-index imagery, and SAR imagery, which is used for model training (a construction sketch follows this list);
- We propose a cross-modal multidimensional frequency perception network (CMFPNet) to improve the accuracy of offshore aquaculture area extraction. In experiments on datasets covering six typical offshore aquaculture areas, CMFPNet outperforms all compared models, reducing false negatives, false positives, and adhesion between adjacent targets, so that the extracted shapes closely match the actual aquaculture areas. This demonstrates the potential of CMFPNet for accurate, large-scale offshore aquaculture mapping;
- In CMFPNet, we design the local–global perception block (LGPB), which combines local and global semantic information so that the model captures both scene-level context and fine-grained target details, improving robustness across different sea environments. We also design the multidimensional adaptive frequency filtering attention block (MAFFAB), which dynamically filters information across frequency domains to retain and enhance the components that help identify aquaculture areas, and which efficiently aggregates semantic features from the different modalities to further improve recognition accuracy (simplified sketches of both ideas follow this list).
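As a concrete illustration of the multimodal input described in the first point above, the minimal sketch below stacks visible bands, a water index, and SAR backscatter into a single training array. The band names, the choice of NDWI as the spectral index, and the helper functions are illustrative assumptions, not the exact preprocessing used in the paper.

```python
import numpy as np

def normalized_difference(a: np.ndarray, b: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Generic normalized-difference index, e.g., NDWI = (Green - NIR) / (Green + NIR)."""
    return (a - b) / (a + b + eps)

def build_multimodal_stack(s2: dict, s1: dict) -> np.ndarray:
    """Stack visible bands, a spectral index, and SAR backscatter into one (C, H, W) array.

    s2: Sentinel-2 reflectance arrays keyed by band name ('B2', 'B3', 'B4', 'B8').
    s1: Sentinel-1 backscatter arrays keyed by polarization ('VV', 'VH').
    All arrays are assumed to be co-registered and resampled to the same grid.
    """
    ndwi = normalized_difference(s2["B3"], s2["B8"])      # water index from Green and NIR
    layers = [
        s2["B4"], s2["B3"], s2["B2"],                     # visible-light imagery (R, G, B)
        ndwi,                                             # spectral-index imagery
        s1["VV"], s1["VH"],                               # SAR imagery
    ]
    return np.stack(layers, axis=0).astype(np.float32)
```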
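The second sketch, also illustrative only, captures the two ideas behind LGPB and MAFFAB in toy form: a local branch (depthwise convolution) fused with a global branch (self-attention), and adaptive filtering of feature maps in the frequency domain via an FFT with learnable per-frequency weights. Shapes, layer choices, and class names are assumptions and do not reproduce the published blocks.

```python
import torch
import torch.nn as nn

class LocalGlobalMixer(nn.Module):
    """Toy local-global block: depthwise conv (local) + self-attention (global)."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()  # channels must be divisible by heads
        self.local = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                                  # fine-grained detail
        tokens = self.norm(x.flatten(2).transpose(1, 2))       # (B, H*W, C)
        global_, _ = self.attn(tokens, tokens, tokens)         # scene-level context
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_                             # residual fusion

class FrequencyFilterBlock(nn.Module):
    """Toy frequency-domain filter: FFT -> learnable reweighting -> inverse FFT."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable complex weight per channel and frequency bin (stored as real pairs).
        self.filter = nn.Parameter(torch.randn(channels, height, width // 2 + 1, 2) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")                # complex spectrum of (B, C, H, W)
        spec = spec * torch.view_as_complex(self.filter)       # emphasize/suppress frequency bands
        out = torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")
        return x + out                                         # residual connection
```

In a cross-modal setting, filtered features of this kind from the MSI and SAR branches could then be summed or concatenated before decoding; the actual aggregation strategy is described in Section 3.3.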
2. Materials
2.1. Study Area
2.2. Dataset and Processing
3. Methodology
3.1. Architecture of the Proposed CMFPNet
3.2. Local–Global Perception Block (LGPB)
3.3. Multidimensional Adaptive Frequency Filtering Attention Block (MAFFAB)
3.4. Experimental Setups
3.5. Evaluation Metrics
4. Results
4.1. Comparative Experiments
4.1.1. Quantitative Results
4.1.2. Qualitative Results
4.2. Ablation Experiments
5. Discussion
5.1. Influence of Different Band Data Combinations
5.2. Application of the Model
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| FRA | Floating raft aquaculture |
| CA | Cage aquaculture |
| CMFPNet | Cross-modal multidimensional frequency perception network |
| LGPB | Local–global perception block |
| MAFFAB | Multidimensional adaptive frequency filtering attention block |
| MSI | Multispectral image |
| SAR | Synthetic aperture radar |
| GEE | Google Earth Engine |
| IRB | Inverted residual block |
Appendix A
References
- FAO. Fishery and Aquaculture Statistics—Yearbook 2020; FAO Yearbook of Fishery and Aquaculture Statistics; FAO: Rome, Italy, 2023.
- Zhang, C.; Meng, Q.; Chu, J.; Liu, G.; Wang, C.; Zhao, Y.; Zhao, J. Analysis on the status of mariculture in China and the effectiveness of mariculture management in the Bohai Sea. Mar. Environ. Sci. 2021, 40, 887–894.
- Costello, C.; Cao, L.; Gelcich, S.; Cisneros-Mata, M.Á.; Free, C.M.; Froehlich, H.E.; Golden, C.D.; Ishimura, G.; Maier, J.; Macadam-Somer, I.; et al. The future of food from the sea. Nature 2020, 588, 95–100.
- Long, L.; Liu, H.; Cui, M.; Zhang, C.; Liu, C. Offshore aquaculture in China. Rev. Aquac. 2024, 16, 254–270.
- Yucel-Gier, G.; Eronat, C.; Sayin, E. The impact of marine aquaculture on the environment; the importance of site selection and carrying capacity. Agric. Sci. 2019, 10, 259–266.
- Dunne, A.; Carvalho, S.; Morán, X.A.G.; Calleja, M.L.; Jones, B. Localized effects of offshore aquaculture on water quality in a tropical sea. Mar. Pollut. Bull. 2021, 171, 112732.
- Simone, M.; Vopel, K. The need for proactive environmental management of offshore aquaculture. Rev. Aquac. 2024, 16, 603–607.
- Rubio-Portillo, E.; Villamor, A.; Fernandez-Gonzalez, V.; Anton, J.; Sanchez-Jerez, P. Exploring changes in bacterial communities to assess the influence of fish farming on marine sediments. Aquaculture 2019, 506, 459–464.
- Chen, G.; Bai, J.; Bi, C.; Wang, Y.; Cui, B. Global greenhouse gas emissions from aquaculture: A bibliometric analysis. Agric. Ecosyst. Environ. 2023, 348, 108405.
- Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote sensing for wetland classification: A comprehensive review. GISci. Remote Sens. 2018, 55, 623–658.
- Sun, W.; Chen, C.; Liu, W.; Yang, G.; Meng, X.; Wang, L.; Ren, K. Coastline extraction using remote sensing: A review. GISci. Remote Sens. 2023, 60, 2243671.
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
- Kang, J.; Sui, L.; Yang, X.; Liu, Y.; Wang, Z.; Wang, J.; Yang, F.; Liu, B.; Ma, Y. Sea surface-visible aquaculture spatial-temporal distribution remote sensing: A case study in Liaoning province, China from 2000 to 2018. Sustainability 2019, 11, 7186.
- Hou, T.; Sun, W.; Chen, C.; Yang, G.; Meng, X.; Peng, J. Marine floating raft aquaculture extraction of hyperspectral remote sensing images based decision tree algorithm. Int. J. Appl. Earth Obs. Geoinf. 2022, 111, 102846.
- Fu, Y.; Zhang, W.; Bi, X.; Wang, P.; Gao, F. TCNet: A Transformer–CNN Hybrid Network for Marine Aquaculture Mapping from VHSR Images. Remote Sens. 2023, 15, 4406.
- Ai, B.; Xiao, H.; Xu, H.; Yuan, F.; Ling, M. Coastal aquaculture area extraction based on self-attention mechanism and auxiliary loss. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 16, 2250–2261.
- Amani, M.; Mohseni, F.; Layegh, N.F.; Nazari, M.E.; Fatolazadeh, F.; Salehi, A.; Ahmadi, S.A.; Ebrahimy, H.; Ghorbanian, A.; Jin, S.; et al. Remote sensing systems for ocean: A review (part 2: Active systems). IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 1421–1453.
- Gao, L.; Wang, C.; Liu, K.; Chen, S.; Dong, G.; Su, H. Extraction of floating raft aquaculture areas from Sentinel-1 SAR images by a dense residual U-Net model with pre-trained ResNet34 as the encoder. Remote Sens. 2022, 14, 3003.
- Zhang, Y.; Wang, C.; Chen, J.; Wang, F. Shape-constrained method of remote sensing monitoring of marine raft aquaculture areas on multitemporal synthetic Sentinel-1 imagery. Remote Sens. 2022, 14, 1249.
- Xiao, S.; Wang, P.; Diao, W.; Rong, X.; Li, X.; Fu, K.; Sun, X. MoCG: Modality Characteristics-Guided Semantic Segmentation in Multimodal Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–18.
- Li, J.; Hong, D.; Gao, L.; Yao, J.; Zheng, K.; Zhang, B.; Chanussot, J. Deep learning in multimodal remote sensing data fusion: A comprehensive review. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102926.
- Wu, X.; Hong, D.; Chanussot, J. Convolutional neural networks for multimodal remote sensing data classification. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10.
- Ma, M.; Ma, W.; Jiao, L.; Liu, X.; Li, L.; Liu, F.; Feng, Z.; Yang, S. A multimodal hyper-fusion transformer for remote sensing image classification. Inf. Fusion 2023, 96, 66–79.
- Li, Y.; Zhou, Y.; Zhang, Y.; Zhong, L.; Wang, J.; Chen, J. DKDFN: Domain knowledge-guided deep collaborative fusion network for multimodal unitemporal remote sensing land cover classification. ISPRS J. Photogramm. Remote Sens. 2022, 186, 170–189.
- Fan, X.; Zhou, W.; Qian, X.; Yan, W. Progressive adjacent-layer coordination symmetric cascade network for semantic segmentation of multimodal remote sensing images. Expert Syst. Appl. 2024, 238, 121999.
- Li, G.; Liu, C.; Liu, Y.; Yang, J.; Zhang, X.; Guo, K. Effects of climate, disturbance and soil factors on the potential distribution of Liaotung oak (Quercus wutaishanica Mayr) in China. Ecol. Res. 2012, 27, 427–436.
- Hu, J.; Huang, M.; Yu, H.; Li, Q. Research on extraction method of offshore aquaculture area based on Sentinel-2 remote sensing imagery. Mar. Environ. Sci. 2022, 41, 619–627.
- Hafner, S.; Ban, Y.; Nascetti, A. Unsupervised domain adaptation for global urban extraction using Sentinel-1 SAR and Sentinel-2 MSI data. Remote Sens. Environ. 2022, 280, 113192.
- Mullissa, A.; Vollrath, A.; Odongo-Braun, C.; Slagter, B.; Balling, J.; Gou, Y.; Gorelick, N.; Reiche, J. Sentinel-1 SAR backscatter analysis ready data preparation in Google Earth Engine. Remote Sens. 2021, 13, 1954.
- Zhang, Y.; Wang, C.; Ji, Y.; Chen, J.; Deng, Y.; Chen, J.; Jie, Y. Combining segmentation network and nonsubsampled contourlet transform for automatic marine raft aquaculture area extraction from Sentinel-1 images. Remote Sens. 2020, 12, 4182.
- Wang, D.; Han, M. SA-U-Net++: SAR marine floating raft aquaculture identification based on semantic segmentation and ISAR augmentation. J. Appl. Remote Sens. 2021, 15, 016505.
- Gao, B.C. NDWI—A normalized difference water index for remote sensing of vegetation liquid water from space. Remote Sens. Environ. 1996, 58, 257–266.
- Wu, Q.; Wang, M.; Shen, Q.; Yao, Y.; Li, J.; Zhang, F.; Zhou, Y. Small water body extraction method based on Sentinel-2 satellite multi-spectral remote sensing image. Natl. Remote Sens. Bull. 2022, 26, 781–794.
- Yan, P.; Zhang, Y.; Zhang, Y. A study on information extraction of water system in semi-arid regions with the enhanced water index (EWI) and GIS based noise remove techniques. Remote Sens. Inf. 2007, 6, 62–67.
- Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520.
- Ni, Y.; Liu, J.; Chi, W.; Wang, X.; Li, D. CGGLNet: Semantic Segmentation Network for Remote Sensing Images Based on Category-Guided Global-Local Feature Interaction. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–17.
- Song, W.; Zhou, X.; Zhang, S.; Wu, Y.; Zhang, P. GLF-Net: A Semantic Segmentation Model Fusing Global and Local Features for High-Resolution Remote Sensing Images. Remote Sens. 2023, 15, 4649.
- Liu, D.; Zhang, J.; Li, T.; Qi, Y.; Wu, Y.; Zhang, Y. A Lightweight Object Detection and Recognition Method Based on Light Global-Local Module for Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 1–5.
- Khan, A.; Sohail, A.; Zahoora, U.; Qureshi, A.S. A survey of the recent architectures of deep convolutional neural networks. Artif. Intell. Rev. 2020, 53, 5455–5516.
- Cong, S.; Zhou, Y. A review of convolutional neural network architectures and their optimizations. Artif. Intell. Rev. 2023, 56, 1905–1969.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. 2022, 54, 1–41.
- Mehta, S.; Rastegari, M. MobileViT: Light-weight, general-purpose, and mobile-friendly vision transformer. arXiv 2021, arXiv:2110.02178.
- Wadekar, S.N.; Chaurasia, A. MobileViTv3: Mobile-friendly vision transformer with simple and effective fusion of local, global and input features. arXiv 2022, arXiv:2209.15159.
- Yang, Y.; Jiao, L.; Li, L.; Liu, X.; Liu, F.; Chen, P.; Yang, S. LGLFormer: Local-Global Lifting Transformer for Remote Sensing Scene Parsing. IEEE Trans. Geosci. Remote Sens. 2023, 62, 1–13.
- Xue, J.; He, D.; Liu, M.; Shi, Q. Dual network structure with interweaved global-local feature hierarchy for transformer-based object detection in remote sensing image. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2022, 15, 6856–6866.
- Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861.
- Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X.; Feng, J.; Yan, S. MetaFormer is actually what you need for vision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 10819–10829.
- Hafner, S.; Ban, Y.; Nascetti, A. Semi-Supervised Urban Change Detection Using Multi-Modal Sentinel-1 SAR and Sentinel-2 MSI Data. Remote Sens. 2023, 15, 5135.
- Zheng, A.; He, J.; Wang, M.; Li, C.; Luo, B. Category-wise fusion and enhancement learning for multimodal remote sensing image semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12.
- Liu, X.; Zou, H.; Wang, S.; Lin, Y.; Zuo, X. Joint Network Combining Dual-Attention Fusion Modality and Two Specific Modalities for Land Cover Classification Using Optical and SAR Images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 17, 3236–3250.
- Wu, W.; Guo, S.; Shao, Z.; Li, D. CroFuseNet: A semantic segmentation network for urban impervious surface extraction based on cross fusion of optical and SAR images. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2023, 16, 2573–2588.
- Qin, Z.; Zhang, P.; Wu, F.; Li, X. FcaNet: Frequency channel attention networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 783–792.
- Ruan, J.; Xie, M.; Xiang, S.; Liu, T.; Fu, Y. MEW-UNet: Multi-axis representation learning in frequency domain for medical image segmentation. arXiv 2022, arXiv:2210.14007.
- Zhang, S.; Li, H.; Li, L.; Lu, J.; Zuo, Z. A high-capacity steganography algorithm based on adaptive frequency channel attention networks. Sensors 2022, 22, 7844.
- Duhamel, P.; Vetterli, M. Fast Fourier transforms: A tutorial review and a state of the art. Signal Process. 1990, 19, 259–299.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Loshchilov, I.; Hutter, F. SGDR: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983.
- Ruby, U.; Yendapalli, V. Binary cross entropy with deep learning technique for image classification. Int. J. Adv. Trends Comput. Sci. Eng. 2020, 9, 4.
- Sudre, C.H.; Li, W.; Vercauteren, T.; Ourselin, S.; Jorge Cardoso, M. Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, September 14, 2017, Proceedings 3; Springer: Cham, Switzerland, 2017; pp. 240–248.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18; Springer: Cham, Switzerland, 2015; pp. 234–241.
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818.
- Wang, J.; Sun, K.; Cheng, T.; Jiang, B.; Deng, C.; Zhao, Y.; Liu, D.; Mu, Y.; Tan, M.; Wang, X.; et al. Deep high-resolution representation learning for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3349–3364.
- Wang, L.; Li, R.; Zhang, C.; Fang, S.; Duan, C.; Meng, X.; Atkinson, P.M. UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS J. Photogramm. Remote Sens. 2022, 190, 196–214.
- Ma, X.; Zhang, X.; Pun, M.O.; Liu, M. A multilevel multimodal fusion transformer for remote sensing semantic segmentation. IEEE Trans. Geosci. Remote Sens. 2024, 62, 1–15.
- Guo, S.; Wu, W.; Shao, Z.; Teng, J.; Li, D. Extracting urban impervious surface based on optical and SAR images cross-modal multi-scale features fusion network. Int. J. Digit. Earth 2024, 17, 2301675.
- Cai, B.; Shao, Z.; Huang, X.; Zhou, X.; Fang, S. Deep learning-based building height mapping using Sentinel-1 and Sentinel-2 data. Int. J. Appl. Earth Obs. Geoinf. 2023, 122, 103399.
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 618–626.
| Area | Image Size (pixels) | Geographical Scope | Image Acquisition Dates |
|---|---|---|---|
| Gaoling Town | 5749 × 3456 | 119.85–120.46°E, 39.92–40.21°N | February and October 2023 |
| Bayuquan District | 2582 × 6453 | 121.86–122.15°E, 39.94–40.40°N | February and October 2023 |
| Pulandian Bay | 4400 × 2451 | 121.27–121.67°E, 39.16–39.38°N | February and October 2023 |
| Jinshitan Bay | 3899 × 2933 | 121.91–122.26°E, 38.93–39.19°N | February and October 2023 |
| Changhai County | 6643 × 2795 | 122.26–122.86°E, 39.12–39.37°N | February and October 2023 |
| Shicheng Township | 5007 × 3011 | 122.84–123.29°E, 39.32–39.59°N | February and October 2023 |
| Method | FRA IoU | FRA F1 | FRA Kappa | CA IoU | CA F1 | CA Kappa | Mean IoU | Mean F1 | Mean Kappa |
|---|---|---|---|---|---|---|---|---|---|
| UNet [61] | 88.75 | 94.04 | 93.94 | 77.76 | 87.49 | 85.08 | 83.26 | 90.77 | 89.51 |
| DeepLabV3+ [62] | 80.32 | 89.08 | 88.90 | 69.95 | 82.32 | 78.97 | 75.14 | 85.70 | 83.94 |
| HRNet [63] | 82.74 | 90.56 | 90.40 | 76.91 | 86.95 | 84.44 | 79.83 | 88.76 | 87.42 |
| UNetFormer [64] | 82.10 | 90.17 | 90.01 | 79.77 | 88.75 | 86.59 | 80.94 | 89.46 | 88.30 |
| CMFFNet [66] | 83.09 | 90.77 | 90.61 | 81.58 | 89.85 | 87.91 | 82.34 | 90.31 | 89.26 |
| BHENet [67] | 82.01 | 90.12 | 89.95 | 77.07 | 87.05 | 84.59 | 79.54 | 88.59 | 87.27 |
| JoiTriNet [51] | 81.69 | 89.92 | 89.75 | 81.10 | 89.56 | 87.51 | 81.40 | 89.74 | 88.63 |
| FTransUNet [65] | 89.08 | 94.23 | 94.13 | 83.83 | 91.20 | 89.51 | 86.46 | 92.72 | 91.82 |
| CMFPNet (ours) | 90.29 | 94.90 | 94.81 | 85.02 | 91.91 | 90.36 | 87.66 | 93.41 | 92.59 |
| Method | FRA IoU | FRA F1 | FRA Kappa | CA IoU | CA F1 | CA Kappa | Mean IoU | Mean F1 | Mean Kappa | Params (M) | FLOPs (G) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Baseline | 88.15 | 93.71 | 93.60 | 81.70 | 89.61 | 87.61 | 84.93 | 91.66 | 90.61 | 11.70 | 20.50 |
| Baseline + LGPB | 89.07 | 94.22 | 94.12 | 83.74 | 91.15 | 89.46 | 86.41 | 92.69 | 91.79 | 16.92 | 21.17 |
| Baseline + LGPB + MAFFAB | 90.29 | 94.90 | 94.81 | 85.02 | 91.91 | 90.36 | 87.66 | 93.41 | 92.59 | 22.82 | 21.28 |
| Group | Number of Bands | FRA IoU | FRA F1 | FRA Kappa | CA IoU | CA F1 | CA Kappa | Mean IoU | Mean F1 | Mean Kappa |
|---|---|---|---|---|---|---|---|---|---|---|
| Group 1 | 4 | 88.00 | 93.61 | 93.50 | 83.52 | 91.02 | 89.31 | 85.76 | 92.32 | 91.41 |
| Group 2 | 11 | 88.58 | 93.94 | 93.84 | 84.44 | 91.56 | 89.89 | 86.51 | 92.75 | 91.87 |
| Group 3 | 7 | 90.29 | 94.90 | 94.81 | 85.02 | 91.91 | 90.36 | 87.66 | 93.41 | 92.59 |
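The tables above report IoU, F1, and Kappa on a 0–100 scale. For reference, the sketch below shows one conventional way to compute these scores from a binary prediction mask and its ground truth; the function name and the scaling are illustrative assumptions rather than the authors' evaluation code.

```python
import numpy as np

def binary_segmentation_scores(pred: np.ndarray, target: np.ndarray) -> dict:
    """IoU, F1, and Cohen's kappa for one binary class, scaled to 0-100."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    n = tp + fp + fn + tn

    iou = tp / (tp + fp + fn + 1e-12)
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-12)
    po = (tp + tn) / n                                                  # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)      # chance agreement
    kappa = (po - pe) / (1 - pe + 1e-12)

    return {"IoU": 100 * iou, "F1": 100 * f1, "Kappa": 100 * kappa}
```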
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Yu, H.; Wang, F.; Hou, Y.; Wang, J.; Zhu, J.; Cui, Z. CMFPNet: A Cross-Modal Multidimensional Frequency Perception Network for Extracting Offshore Aquaculture Areas from MSI and SAR Images. Remote Sens. 2024, 16, 2825. https://doi.org/10.3390/rs16152825