Cropland Extraction Based on PlanetScope Images and a Newly Developed CAFM-Net Model
Highlights
- A novel dual-branch fusion network is developed for cropland extraction, integrating local spatial details and global contextual information.
- The CAFM module with edge-assisted supervision enhances boundary delineation and small cropland detection (see the sketch after this list).
- The developed CAFM-Net achieves accurate, fine-scale cropland mapping in high-resolution remote sensing images.
- Improved cropland boundary and small parcel detection accuracy supports agricultural monitoring, cropland protection, and precision land management.
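To make the dual-branch design summarized in these highlights concrete, the sketch below shows one plausible way to wire a CNN branch (local spatial detail), a Transformer branch (global context), a cross-attention fusion block standing in for CAFM, and an auxiliary edge head in PyTorch. It is a minimal illustration under our own assumptions: the class names (`CNNBranch`, `TransformerBranch`, `CAFMBlock`, `CAFMNetSketch`), channel sizes, depths, and the exact fusion scheme are not taken from the authors' implementation.

```python
# Minimal sketch of a dual-branch (CNN + Transformer) encoder with a cross-attention
# fusion block standing in for CAFM and an auxiliary edge head. Class names, channel
# sizes, depths, and the fusion scheme are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNBranch(nn.Module):
    """Local-detail branch: a small convolutional stem that downsamples by 4."""

    def __init__(self, in_ch: int = 4, dim: int = 64):  # 4 bands: blue, green, red, NIR
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.stem(x)  # (B, dim, H/4, W/4)


class TransformerBranch(nn.Module):
    """Global-context branch: patch embedding followed by standard Transformer layers."""

    def __init__(self, in_ch: int = 4, dim: int = 64, depth: int = 2, heads: int = 4, patch: int = 4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        t = self.embed(x)  # (B, dim, H/4, W/4)
        b, c, h, w = t.shape
        tokens = self.encoder(t.flatten(2).transpose(1, 2))  # (B, H*W/16, dim)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class CAFMBlock(nn.Module):
    """Cross-attention fusion: CNN features query the Transformer tokens, plus a residual."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, f_cnn, f_trans):
        b, c, h, w = f_cnn.shape
        q = f_cnn.flatten(2).transpose(1, 2)     # queries from the local branch
        kv = f_trans.flatten(2).transpose(1, 2)  # keys/values from the global branch
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)             # residual connection keeps local detail
        return fused.transpose(1, 2).reshape(b, c, h, w)


class CAFMNetSketch(nn.Module):
    """Dual-branch encoder + fusion, with a segmentation head and an auxiliary edge head."""

    def __init__(self, in_ch: int = 4, dim: int = 64, n_cls: int = 2):
        super().__init__()
        self.cnn = CNNBranch(in_ch, dim)
        self.vit = TransformerBranch(in_ch, dim)
        self.fuse = CAFMBlock(dim)
        self.seg_head = nn.Conv2d(dim, n_cls, 1)  # cropland vs. background
        self.edge_head = nn.Conv2d(dim, 1, 1)     # edge-assisted auxiliary supervision

    def forward(self, x):
        fused = self.fuse(self.cnn(x), self.vit(x))
        up = F.interpolate(fused, scale_factor=4, mode="bilinear", align_corners=False)
        return self.seg_head(up), self.edge_head(up)


if __name__ == "__main__":
    seg, edge = CAFMNetSketch()(torch.randn(1, 4, 256, 256))
    print(seg.shape, edge.shape)  # torch.Size([1, 2, 256, 256]) torch.Size([1, 1, 256, 256])
```

A training loop would typically combine a segmentation loss on `seg` with a weighted auxiliary loss on `edge` against boundary labels derived from the cropland masks; the loss weighting used by the paper is not reproduced here.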
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. Data Sources and Preprocessing
2.3. Cropland Extraction Model
2.3.1. Dual-Branch Encoder of CNN–Transformer
2.3.2. The Improved Dual-Branch Network Model: CAFM-Net
2.4. Ablation Experiment
2.5. Comparison Experiment
2.6. Accuracy Evaluation
3. Results
3.1. Results of Ablation Experiments
3.1.1. Self-Built Dataset
3.1.2. GID Public Dataset
3.2. Results of Comparison Experiments
3.2.1. Model Efficiency and Computational Complexity Analysis
3.2.2. Self-Built Dataset
3.2.3. GID Public Dataset
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Chen, A.; He, H.; Wang, J.; Li, M.; Guan, Q.; Hao, J. A study on the arable land demand for food security in China. Sustainability 2019, 11, 4769. [Google Scholar] [CrossRef]
- He, X.; Liu, W. Coupling coordination between agricultural eco-efficiency and urbanization in China considering food security. Agriculture 2024, 14, 781. [Google Scholar] [CrossRef]
- Ma, E.; Cai, J.; Lin, J.; Guo, H.; Han, Y.; Liao, L. Spatiotemporal evolution and influencing factors of global food security pattern from 2000 to 2014. Acta Geogr. Sin. 2020, 75, 332–347. (In Chinese) [Google Scholar]
- Sun, X.; Xiang, P.; Cong, K. Research on early warning and control measures for arable land resource security. Land Use Policy 2023, 128, 106601. [Google Scholar] [CrossRef]
- Liao, Y.; Lu, X.; Liu, J.; Huang, J.; Qu, Y.; Qiao, Z.; Xie, Y.; Liao, X.; Liu, L. Integrated Assessment of the Impact of Cropland Use Transition on Food Production Towards the Sustainable Development of Social–Ecological Systems. Agronomy 2024, 14, 2851. [Google Scholar] [CrossRef]
- Bren d’Amour, C.; Reitsma, F.; Baiocchi, G.; Barthel, S.; Güneralp, B.; Erb, K.; Haberl, H.; Creutzig, F.; Seto, K.C. Future urban land expansion and implications for global croplands. Proc. Natl. Acad. Sci. USA 2017, 114, 8939–8944. [Google Scholar] [CrossRef]
- Song, D.; Ding, W.; Zhou, W. Temporal and spatial variation characteristics and sustainable utilization strategy of main cropland reserve resources in China. J. Plant Nutr. Fert. 2024, 30, 1437–1446. (In Chinese) [Google Scholar]
- Zhao, S.; Yin, M. Change of urban and rural construction land and driving factors of arable land occupation. PLoS ONE 2023, 18, e0286248. [Google Scholar] [CrossRef]
- Li, H.; Song, W. Spatial transformation of changes in global cropland. Sci. Total Environ. 2023, 859, 160194. [Google Scholar] [CrossRef]
- Cai, Z.; Hu, Q.; Zhang, X.; Yang, J.; Wei, H.; He, Z.; Song, Q.; Wang, C.; Yin, G.; Xu, B. An adaptive image segmentation method with automatic selection of optimal scale for extracting cropland parcels in smallholder farming systems. Remote Sens. 2022, 14, 3067. [Google Scholar] [CrossRef]
- Hossain, M.; Chen, D. Segmentation for Object-Based Image Analysis (OBIA): A review of algorithms and challenges from remote sensing perspective. ISPRS J. Photogramm. Remote Sens. 2019, 150, 115–134. [Google Scholar] [CrossRef]
- Yang, Y.; Meng, Z.; Zu, J.; Cai, W.; Wang, J.; Su, H.; Yang, J. Fine-scale mangrove species classification based on UAV multispectral and hyperspectral remote sensing using machine learning. Remote Sens. 2024, 16, 3093. [Google Scholar] [CrossRef]
- Agnoletti, M.; Cargnello, G.; Gardin, L.; Santoro, A.; Bazzoffi, P.; Sansone, L.; Pezza, L.; Belfiore, N. Traditional landscape and rural development: Comparative study in three terraced areas in northern, central and southern Italy to evaluate the efficacy of GAEC standard 4.4 of cross compliance. Ital. J. Agron. 2011, 6, 121–139. [Google Scholar] [CrossRef]
- Martínez-Casasnovas, J.; Ramos, M.; Cots-Folch, R. Influence of the EU CAP on terrain morphology and vineyard cultivation in the Priorat region of NE Spain. Land Use Policy 2010, 27, 11–21. [Google Scholar] [CrossRef]
- Zhao, B.; Ma, N.; Yang, J.; Li, Z.; Wang, Q. Extracting features of soil and water conservation measures from remote sensing images of different resolution levels: Accuracy analysis. Bull. Soil Water Conserv. 2012, 32, 154–157. [Google Scholar]
- Li, X.; Li, Y.; Ai, J.; Shu, Z.; Xia, J.; Xia, Y. Semantic segmentation of UAV remote sensing images based on edge feature fusing and multi-level upsampling integrated with Deeplabv3+. PLoS ONE 2023, 18, e0279097. [Google Scholar] [CrossRef]
- Han, H.; Feng, Z.; Du, W.; Guo, S.; Wang, P.; Xu, T. Remote sensing image classification based on multi-spectral cross-sensor super-resolution combined with texture features: A case study in the Liaohe planting area. IEEE Access 2024, 12, 16830–16843. [Google Scholar] [CrossRef]
- Hofmann, P.; Blaschke, T.; Strobl, J. Quantifying the robustness of fuzzy rule sets in object-based image analysis. Int. J. Remote Sens. 2011, 32, 7359–7381. [Google Scholar] [CrossRef]
- Huang, X.; Zhang, L. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2012, 51, 257–272. [Google Scholar] [CrossRef]
- Yan, S.; Yao, X.; Zhu, D.; Liu, D.; Zhang, L.; Yu, G.; Gao, B.; Yang, J.; Yun, W. Large-scale crop mapping from multi-source optical satellite imageries using machine learning with discrete grids. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102485. [Google Scholar] [CrossRef]
- Go, S.H.; Park, J.H. Improving field crop classification accuracy using GLCM and SVM with UAV-acquired images. Korean J. Remote Sens. 2024, 40, 93–101. [Google Scholar]
- Wang, M.; Huang, L.; Tang, B.H.; Yu, Y.; Zhang, Z.; Wu, Q.; Cheng, J. Mapping cropland in Yunnan Province during 1990–2020 using multi-source remote sensing data with the Google Earth Engine Platform. Geocarto Int. 2024, 39, 2392848. [Google Scholar] [CrossRef]
- Saini, R. Integrating vegetation indices and spectral features for vegetation mapping from multispectral satellite imagery using AdaBoost and random forest machine learning classifiers. Geomat. Environ. Eng. 2023, 17, 57–74. [Google Scholar] [CrossRef]
- Wan, L.; Kendall, A.D.; Rapp, J.; Hyndman, D.W. Mapping agricultural tile drainage in the US Midwest using explainable random forest machine learning and satellite imagery. Sci. Total Environ. 2024, 950, 175283. [Google Scholar] [CrossRef]
- Thanh Noi, P.; Kappas, M. Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery. Sensors 2017, 18, 18. [Google Scholar] [CrossRef]
- Moharram, M.A.; Sundaram, D.M. Spatial–spectral hyperspectral images classification based on Krill Herd band selection and edge-preserving transform domain recursive filter. J. Appl. Remote Sens. 2022, 16, 044508. [Google Scholar] [CrossRef]
- Aziz, N.; Minallah, N.; Hasanat, M.; Ajmal, M. Geographic Object-based Image Analysis for Small Farmlands using Machine Learning Techniques on Multispectral Sentinel-2 Data. Proc. Pak. Acad. Sci. A Phys. Comput. Sci. 2024, 61, 41–49. [Google Scholar] [CrossRef]
- Rangel, R.; Lourenço, V.; Oldoni, L.; Bonamigo, A.; Santos, W.; Oliveira, B.; Barreto, M. A unified framework for cropland field boundary detection and segmentation. In Proceedings of the 2024 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), Waikoloa, HI, USA, 4–8 January 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 636–644. [Google Scholar]
- Shen, Q.; Deng, H.; Wen, X.; Chen, Z.; Xu, H. Statistical texture learning method for monitoring abandoned suburban cropland based on high-resolution remote sensing and deep learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3060–3069. [Google Scholar] [CrossRef]
- Papadopoulou, E.; Mallinis, G.; Siachalou, S.; Koutsias, N.; Thanopoulos, A.; Tsaklidis, G. Agricultural land cover mapping through two deep learning models in the framework of EU’s CAP activities using Sentinel-2 multitemporal imagery. Remote Sens. 2023, 15, 4657. [Google Scholar] [CrossRef]
- Li, H.; Du, Y.; Xiao, X.; Chen, Y. Remote Sensing Identification Method of Cropland at Hill County of Sichuan Basin Based on Deep Learning. Smart Agric. 2024, 6, 34. [Google Scholar]
- Voelsen, M.; Lauble, S.; Rottensteiner, F.; Heipke, C. Transformer Models for Multi-Temporal Land Cover Classification Using Remote Sensing Images. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, 10, 981–990. [Google Scholar] [CrossRef]
- Yang, S. Performance and Analysis of FCN, U-Net, and SegNet in Remote Sensing Image Segmentation Based on the LoveDA Dataset. ITM Web Conf. 2025, 70, 03023. [Google Scholar] [CrossRef]
- Liu, Y.; Bai, X.; Wang, J.; Li, G.; Li, J.; Lv, Z. Image semantic segmentation approach based on DeepLabV3 plus network with an attention mechanism. Eng. Appl. Artif. Intell. 2024, 127, 107260. [Google Scholar] [CrossRef]
- Gao, X.; Liu, L.; Gong, H. MMUU-Net: A robust and effective network for farmland segmentation of satellite imagery. J. Phys. Conf. Ser. 2020, 1651, 012189. [Google Scholar] [CrossRef]
- Hu, L.; Qin, M.; Zhang, F.; Du, Z.; Liu, R. RSCNN: A CNN-based method to enhance low-light remote-sensing images. Remote Sens. 2020, 13, 62. [Google Scholar] [CrossRef]
- Popel, M.; Bojar, O. Training tips for the transformer model. arXiv 2018, arXiv:1804.00247. [Google Scholar] [CrossRef]
- Dosovitskiy, A. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Qi, L.; Zuo, D.; Wang, Y.; Tao, Y.; Tang, R.; Shi, J.; Gong, J.; Li, B. Convolutional neural network-based method for agriculture plot segmentation in remote sensing images. Remote Sens. 2024, 16, 346. [Google Scholar] [CrossRef]
- Lingwal, S.; Bhatia, K.; Singh, M. Semantic segmentation of landcover for cropland mapping and area estimation using Machine Learning techniques. Data Intell. 2023, 5, 370–387. [Google Scholar] [CrossRef]
- Zhang, H. Automatic Extraction of Non-grain and Non-agriculturalization Use Patterns of Cultivated Land Based on Satellite Remote Sensing Images. Geomat. Spat. Inf. Technol. 2025, 6, 87–90. (In Chinese) [Google Scholar]
- Xie, Y.; Zeng, H.; Tian, F.; Zhang, M.; Hu, Y. Study on sample dependence and model space extrapolation of crop remote sensing classification. Nat. Remote Sens. Bull. 2024, 28, 2878–2895. (In Chinese) [Google Scholar]
- Zhang, X.; Li, S.; Wang, X.; Song, K.; Chen, Z.; Zheng, K. Quantitative remote sensing retrieval of soil total nitrogen in Suihua City, Heilongjiang Province based on Sentinel-2 satellite image. Trans. Chin. Soc. Agric. Eng. 2023, 39, 144–151. (In Chinese) [Google Scholar]
- Peng, Z.; Huang, W.; Gu, S.; Xie, L.; Wang, Y.; Jiao, J.; Ye, Q. Conformer: Local features coupling global representations for visual recognition. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 367–376. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 770–778. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 10012–10022. [Google Scholar]
- Huang, J.; Fang, Y.; Wu, Y.; Wu, H.; Gao, Z.; Li, Y.; Del Ser, J.; Xia, J.; Yang, G. Swin transformer for fast MRI. Neurocomputing 2022, 493, 281–304. [Google Scholar] [CrossRef]
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
- Xia, L.; Mi, S.; Zhang, J.; Luo, J.; Shen, Z.; Cheng, Y. Dual-stream feature extraction network based on CNN and transformer for building extraction. Remote Sens. 2023, 15, 2689. [Google Scholar] [CrossRef]
- Hu, S.; Gao, F.; Zhou, X.; Dong, J.; Du, Q. Hybrid convolutional and attention network for hyperspectral image denoising. IEEE Geosci. Remote Sens. Lett. 2024, 21, 5504005. [Google Scholar] [CrossRef]
- Yang, Y.; Zhou, Y.; Chen, Y.; Zhang, Z.; Ma, Z.; Yuan, C.; Li, B.; Song, L.; Gao, J.; Li, P.; et al. DetailFusion: A Dual-branch Framework with Detail Enhancement for Composed Image Retrieval. arXiv 2025, arXiv:2505.17796. [Google Scholar]
- Pu, M.; Huang, Y.; Liu, Y.; Guan, Q.; Ling, H. EDTER: Edge detection with transformer. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 21–24 June 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1402–1412. [Google Scholar]
- Singh, N.J.; Nongmeikapam, K. Semantic segmentation of satellite images using deep-unet. Arab. J. Sci. Eng. 2023, 48, 1193–1205. [Google Scholar] [CrossRef]
- Zhao, H.; Shi, J.; Qi, X.; Wang, H.; Jia, J. Pyramid scene parsing network. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 2881–2890. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Computer Vision—ECCV 2018; Springer: Cham, Switzerland, 2018; pp. 801–818. [Google Scholar]
- Chen, J.; Mei, J.; Li, X.; Lu, Y.; Yu, Q.; Wei, Q.; Luo, X.; Xie, Y.; Adeli, E.; Wang, Y. TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers. Med. Image Anal. 2024, 97, 103280. [Google Scholar] [CrossRef]
- Paszke, A.; Chaurasia, A.; Kim, S.; Culurciello, E. Enet: A deep neural network architecture for real-time semantic segmentation. arXiv 2016, arXiv:1606.02147. [Google Scholar] [CrossRef]
- Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Baillard, C.; Benitez, S.; Breitkopf, U. The ISPRS Benchmark on Urban Object Classification and 3D Building Reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci. 2012, I-3, 293–298. [Google Scholar] [CrossRef]
- Zhang, J.; He, Y.; Yuan, L.; Liu, P.; Zhou, X.; Huang, Y. Machine learning-based spectral library for crop classification and status monitoring. Agronomy 2019, 9, 496. [Google Scholar] [CrossRef]
- Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187. [Google Scholar] [CrossRef]
- Li, W.; Dong, R.; Fu, H.; Yu, L. Large-scale oil palm tree detection from high-resolution satellite images using two-stage convolutional neural networks. Remote Sens. 2019, 11, 11. [Google Scholar] [CrossRef]
- Xu, Y.; Xue, X.; Sun, Z.; Gu, W.; Cui, L.; Jin, Y.; Lan, Y. Deriving agricultural field boundaries for crop management from satellite images using semantic feature pyramid network. Remote Sens. 2023, 15, 2937. [Google Scholar] [CrossRef]
- Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Computer Vision—ECCV 2022 Workshops; Springer: Cham, Switzerland, 2022; pp. 205–218. [Google Scholar]
- Liu, B.; Wang, W.; Wu, Y.; Gao, X. Attention Swin Transformer UNet for Landslide Segmentation in Remotely Sensed Images. Remote Sens. 2024, 16, 44–64. [Google Scholar] [CrossRef]
| Country of Origin | Orbit | Spectral Band | Spatial Resolution | Revisit Period | Swath Width |
|---|---|---|---|---|---|
| United States | Sun-synchronous orbit (465–700 km); International Space Station orbit (about 420 km) | Blue: 420–530 nm; Green: 500–590 nm; Red: 610–700 nm; Near-infrared: 760–860 nm | 3–5 m | 1–2 days | 24 km |
| Ablation Experiment | CNN | CAFM | EH | OA (%) | Precision (%) | Recall (%) | F1_Score (%) | Dice (%) | IOU (%) |
|---|---|---|---|---|---|---|---|---|---|
| Test 1 | √ | | | 93.60 | 92.11 | 95.33 | 93.75 | 93.75 | 88.24 |
| Test 2 | | √ | | 96.71 | 96.10 | 97.43 | 96.76 | 96.76 | 93.73 |
| Test 3 | √ | | √ | 93.73 | 92.72 | 95.00 | 93.85 | 93.85 | 88.41 |
| Test 4 | | √ | √ | 96.75 | 96.27 | 97.32 | 96.80 | 96.80 | 93.79 |
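For reference, the accuracy metrics reported in this and the following tables can be written in terms of per-pixel true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). Assuming these standard definitions, the identity below also explains why the F1_Score and Dice columns coincide for a binary cropland mask:

```latex
\begin{aligned}
\mathrm{OA} &= \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN},\\[4pt]
F_1 &= \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
     = \frac{2\,TP}{2\,TP + FP + FN} = \mathrm{Dice}, \qquad
\mathrm{IoU} = \frac{TP}{TP + FP + FN}.
\end{aligned}
```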
| Ablation Experiment | CNN | CAFM | EH | OA (%) | Precision (%) | Recall (%) | F1_Score (%) | Dice (%) | IOU (%) |
|---|---|---|---|---|---|---|---|---|---|
| Test 1 | √ | | | 92.20 | 89.56 | 94.53 | 94.13 | 94.13 | 88.92 |
| Test 2 | | √ | | 94.54 | 93.45 | 95.05 | 94.25 | 94.25 | 89.12 |
| Test 3 | √ | | √ | 92.38 | 89.95 | 94.33 | 92.09 | 92.09 | 85.34 |
| Test 4 | | √ | √ | 94.58 | 94.97 | 93.42 | 94.19 | 94.19 | 89.02 |
| Comparative Experiment | Parameters (M) | FLOPs (G) | Inference Time (ms) |
|---|---|---|---|
| UNet | 31.4 | 49.8 | 21.6 |
| PSPNet | 47.2 | 172.6 | 46.3 |
| Deeplabv3+ | 41.1 | 142.9 | 38.7 |
| Swin Transformer | 28.3 | 92.4 | 33.1 |
| TransUNet | 102.7 | 196.8 | 63.9 |
| CAFM-Net | 69.8 | 153.6 | 44.5 |
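The efficiency figures above (parameters in millions, FLOPs, and per-image inference time) are the kind of numbers obtained by summing a model's trainable parameters and timing repeated forward passes at a fixed input size; FLOPs additionally require a profiler such as fvcore or thop. The snippet below is a generic sketch of that procedure, not the paper's measurement protocol: the input size, warm-up count, run count, and device handling are assumptions.

```python
# Generic sketch for the "Parameters (M)" and "Inference Time (ms)" columns; the
# benchmark input size, warm-up count, run count, and device handling are assumptions.
import time
import torch


def count_parameters_m(model: torch.nn.Module) -> float:
    """Trainable parameters in millions."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6


@torch.no_grad()
def average_inference_ms(model: torch.nn.Module, input_size=(1, 4, 256, 256),
                         warmup: int = 10, runs: int = 100) -> float:
    """Mean forward-pass time per image, in milliseconds, on a fixed random input."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_size, device=device)
    for _ in range(warmup):          # warm-up passes stabilise GPU clocks and caches
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()     # ensure queued kernels finish before timing starts
    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs * 1e3
```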
| Comparative Experiment | F1_Score (%) | Dice (%) | IOU (%) | OA (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|
| UNet | 87.82 | 87.82 | 78.61 | 89.74 | 90.01 | 86.31 |
| PSPNet | 77.79 | 77.79 | 64.81 | 82.51 | 83.82 | 75.61 |
| Deeplabv3+ | 86.79 | 86.79 | 76.83 | 81.50 | 83.87 | 90.74 |
| Swin Transformer | 76.77 | 76.77 | 62.61 | 78.03 | 76.33 | 79.79 |
| TransUNet | 90.23 | 90.23 | 82.41 | 91.66 | 91.79 | 89.10 |
| CAFM-Net | 96.80 | 96.80 | 93.79 | 96.75 | 96.27 | 97.32 |
| Comparative Experiment | F1_Score (%) | Dice (%) | IOU (%) | OA (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|---|---|
| UNet | 89.95 | 89.95 | 81.82 | 90.48 | 89.80 | 90.12 |
| PSPNet | 77.79 | 77.79 | 64.81 | 82.51 | 83.82 | 75.43 |
| Deeplabv3+ | 89.04 | 89.04 | 80.59 | 86.48 | 85.99 | 92.96 |
| Swin Transformer | 81.96 | 81.96 | 69.99 | 83.25 | 82.30 | 82.07 |
| TransUNet | 83.11 | 83.11 | 71.22 | 83.70 | 82.68 | 83.99 |
| CAFM-Net | 94.19 | 94.19 | 89.02 | 94.58 | 94.97 | 93.42 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
MDPI and ACS Style
Ren, J.; Jing, Y.; Zheng, X.; Li, S.; Li, K.; Mu, G. Cropland Extraction Based on PlanetScope Images and a Newly Developed CAFM-Net Model. Remote Sens. 2026, 18, 646. https://doi.org/10.3390/rs18040646
AMA Style
Ren J, Jing Y, Zheng X, Li S, Li K, Mu G. Cropland Extraction Based on PlanetScope Images and a Newly Developed CAFM-Net Model. Remote Sensing. 2026; 18(4):646. https://doi.org/10.3390/rs18040646
Chicago/Turabian Style
Ren, Jianhua, Yating Jing, Xingming Zheng, Sijia Li, Kai Li, and Guangyi Mu. 2026. "Cropland Extraction Based on PlanetScope Images and a Newly Developed CAFM-Net Model" Remote Sensing 18, no. 4: 646. https://doi.org/10.3390/rs18040646
APA Style
Ren, J., Jing, Y., Zheng, X., Li, S., Li, K., & Mu, G. (2026). Cropland Extraction Based on PlanetScope Images and a Newly Developed CAFM-Net Model. Remote Sensing, 18(4), 646. https://doi.org/10.3390/rs18040646

