
Improving Remote Sensing Crop Mapping and Yield Estimation by New Techniques

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Environmental Remote Sensing".

Deadline for manuscript submissions: 31 March 2026 | Viewed by 5757

Special Issue Editors

State Key Laboratory of Black Soils Conservation and Utilization, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun 130102, China
Interests: agricultural remote sensing; crop classification; yield estimation and prediction; machine learning; deep learning; vegetation mapping

Guest Editor
School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun 130012, China
Interests: machine learning; deep learning; scene classification; semantic segmentation of remote sensing; image classification

Guest Editor
School of Computer Technology and Engineering, Changchun Institute of Technology, Changchun 130012, China
Interests: machine learning; deep learning; scene classification; remote sensing classification; image classification; semantic segmentation

Special Issue Information

Dear Colleagues,

Food security continues to be a global focus. The second of the United Nations’ Sustainable Development Goals (SDGs) is to eradicate hunger. Remote sensing technology, owing to advantages such as large-scale coverage, has shown tremendous potential in addressing food security issues and has made significant contributions to global food security monitoring.

Remote sensing plays two main roles in supporting global food security. First, remote sensing-based crop mapping and classification can quickly determine the spatial distribution of different crops, providing strong support for field crop management. Second, remote sensing technology enables the rapid and accurate estimation of crop yield, laying the foundation for government macro-level decision-making. The recent continuous launch of multiple high-resolution satellites has further enhanced the capability of classifying crops and estimating yield using remote sensing technology.

However, although the capacity to acquire remote sensing data has greatly improved, existing methods cannot fully meet the demands of related analyses. The agricultural ecosystem is a highly dynamic system affected by both human and natural factors, making accurate crop classification and yield estimation very challenging. The use of high-resolution remote sensing data further increases this difficulty. This necessitates the development of new technical algorithms/methods by researchers and practitioners to further enhance the capabilities of remote sensing technology for crop mapping and yield estimation.

Fortunately, recent breakthroughs in artificial intelligence (AI) have provided new opportunities to address the challenges of crop mapping and yield estimation. For example, the development of deep learning has significantly improved the accuracy of remote sensing crop mapping, and some studies have also applied deep learning techniques to enhance crop yield estimation accuracy. However, the generality of the current technical methods still requires further validation, and there is considerable room for improvement.

Therefore, this Special Issue is dedicated to outlining new technologies and methods for crop classification/mapping and yield estimation. We welcome the submission of research that applies new technologies and methods (including, but not limited to, AI) to crop mapping and yield estimation. We also welcome work that applies new technologies and methods to land cover/land use mapping, as crops are themselves a type of land use.

Dr. Huapeng Li
Dr. Hua Zhang
Prof. Dr. Xin Pan
Dr. Ce Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • crop classification/mapping
  • yield estimation
  • high-resolution remote sensing
  • artificial intelligence
  • deep learning
  • semantic segmentation
  • precision agriculture
  • food security
  • agricultural sustainability
  • big data
  • data analytics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

31 pages, 8699 KiB  
Article
Transformer-Based Dual-Branch Spatial–Temporal–Spectral Feature Fusion Network for Paddy Rice Mapping
by Xinxin Zhang, Hongwei Wei, Yuzhou Shao, Haijun Luan and Da-Han Wang
Remote Sens. 2025, 17(12), 1999; https://doi.org/10.3390/rs17121999 - 10 Jun 2025
Viewed by 394
Abstract
Deep neural network fusion approaches utilizing multimodal remote sensing are essential for crop mapping. However, challenges such as insufficient spatiotemporal feature extraction and ineffective fusion strategies still exist, leading to a decrease in mapping accuracy and robustness when these approaches are applied across spatial–temporal regions. In this study, we propose a novel rice mapping approach based on dual-branch transformer fusion networks, named RDTFNet. Specifically, we implemented a dual-branch encoder that is based on two improved transformer architectures. One is a multiscale transformer block used to extract spatial–spectral features from a single-phase optical image, and the other is a Restormer block used to extract spatial–temporal features from time-series synthetic aperture radar (SAR) images. Both extracted features were then combined into a feature fusion module (FFM) to generate fully fused spatial–temporal–spectral (STS) features, which were finally fed into the decoder of the U-Net structure for rice mapping. The model’s performance was evaluated through experiments with the Sentinel-1 and Sentinel-2 datasets from the United States. Compared with conventional models, the RDTFNet model achieved the best performance, and the overall accuracy (OA), intersection over union (IoU), precision, recall and F1-score were 96.95%, 88.12%, 95.14%, 92.27% and 93.68%, respectively. The comparative results show that the OA, IoU, precision, recall and F1-score improved by 1.61%, 5.37%, 5.16%, 1.12% and 2.53%, respectively, over those of the baseline model, demonstrating its superior performance for rice mapping. Furthermore, in subsequent cross-regional and cross-temporal tests, RDTFNet outperformed other classical models, achieving improvements of 7.11% and 12.10% in F1-score, and 11.55% and 18.18% in IoU, respectively. These results further confirm the robustness of the proposed model.
Therefore, the proposed RDTFNet model can effectively fuse STS features from multimodal images and exhibit strong generalization capabilities, providing valuable information for governments in agricultural management. Full article
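The core fusion idea described in this abstract, concatenating features from two encoder branches and projecting them into a joint representation, can be caricatured in a few lines. The sketch below is a minimal NumPy illustration only, not the authors' RDTFNet: all shapes, feature dimensions, and weights are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: S spectral features per pixel from the optical branch,
# T temporal features per pixel from the SAR time-series branch.
n_pixels, s_dim, t_dim, fused_dim = 4, 8, 6, 5

optical_feats = rng.standard_normal((n_pixels, s_dim))  # spatial-spectral branch output
sar_feats = rng.standard_normal((n_pixels, t_dim))      # spatial-temporal branch output

# Feature fusion module (FFM) sketch: concatenate both branches and apply a
# learned linear projection (the weights here are random placeholders).
w_fuse = rng.standard_normal((s_dim + t_dim, fused_dim))
fused = np.concatenate([optical_feats, sar_feats], axis=1) @ w_fuse

print(fused.shape)  # (4, 5): one fused spatial-temporal-spectral vector per pixel
```

In the actual network the projection would be a learned convolutional block and the fused features would feed a U-Net decoder; the sketch only shows the concatenate-then-project pattern.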

21 pages, 5887 KiB  
Article
Meta-Features Extracted from Use of kNN Regressor to Improve Sugarcane Crop Yield Prediction
by Luiz Antonio Falaguasta Barbosa, Ivan Rizzo Guilherme, Daniel Carlos Guimarães Pedronette and Bruno Tisseyre
Remote Sens. 2025, 17(11), 1846; https://doi.org/10.3390/rs17111846 - 25 May 2025
Viewed by 512
Abstract
Accurate crop yield prediction is essential for sugarcane growers, as it enables them to predict harvested biomass, guiding critical decisions regarding acquiring agricultural inputs such as fertilizers and pesticides, the timing and execution of harvest operations, and cane field renewal strategies. This study is based on an experiment conducted by researchers from the Commonwealth Scientific and Industrial Research Organisation (CSIRO), who employed a UAV-mounted LiDAR and multispectral imaging sensors to monitor two sugarcane field trials subjected to varying nitrogen (N) fertilization regimes in the Wet Tropics region of Australia. The predictive performance of models utilizing multispectral features, LiDAR-derived features, and a fusion of both modalities was evaluated against a benchmark model based on the Normalized Difference Vegetation Index (NDVI). This work utilizes the dataset produced by this experiment, incorporating other regressors and features derived from those collected in the field. Typically, crop yield prediction relies on features derived from direct field observations, either gathered through sensor measurements or manual data collection. However, enhancing prediction models by incorporating new features extracted through regressions executed on the original dataset features can potentially improve predictive outcomes. These extracted features, termed meta-features (MFs) in this work, are obtained through regressions with different regressors on the original features, incorporated into the dataset as new predictors, and can then be utilized in further regression analyses to optimize crop yield prediction. This study investigates the potential of generating MFs as an innovation to enhance sugarcane crop yield predictions. MFs were generated based on the values obtained by different regressors applied to the features collected in the field, allowing for evaluating which approaches offered superior predictive performance within the dataset.
The kNN meta-regressor outperforms other regressors because it takes advantage of the proximity of MFs, which was checked through a projection where the dispersion of points can be measured. A comparative analysis is presented with a projection based on the Uniform Manifold Approximation and Projection (UMAP) algorithm, showing that MFs had more proximity than the original features when projected, which demonstrates that MFs revealed a clear formation of well-defined clusters, with most points within each group sharing the same color, suggesting greater uniformity in the predicted values. Incorporating these MFs into subsequent regression models demonstrated improved performance, with R̄² values higher than 0.9 for MF Grad Boost M3, MF GradientBoost M5, and all kNN MFs and reduced error margins compared to field-measured yield values. The R̄² values obtained in this work ranged above 0.98 for the AdaBoost meta-regressor applied to MFs, which were obtained from kNN regression on five models created by the researchers of CSIRO, and around 0.99 for the kNN meta-regressor applied to MFs obtained from kNN regression on these five models. Full article
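The meta-feature idea, as described, amounts to appending out-of-sample predictions of a base regressor as new feature columns. The sketch below is an illustrative stand-in, not the authors' pipeline: a pure-NumPy leave-one-out kNN regressor applied to synthetic data, with the resulting predictions appended as one meta-feature column.

```python
import numpy as np

def knn_regress(train_X, train_y, query_X, k=3):
    """Predict each query as the mean target of its k nearest training points."""
    preds = []
    for q in query_X:
        dists = np.linalg.norm(train_X - q, axis=1)
        nearest = np.argsort(dists)[:k]
        preds.append(train_y[nearest].mean())
    return np.array(preds)

rng = np.random.default_rng(1)
X = rng.uniform(size=(30, 4))  # stand-in field features (e.g. spectral, LiDAR-derived)
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.05, size=30)  # toy yield

# Meta-feature: leave-one-out kNN prediction for each sample, so no sample's
# own target leaks into its meta-feature; appended as a new column.
mf = np.array([
    knn_regress(np.delete(X, i, axis=0), np.delete(y, i), X[i:i + 1])[0]
    for i in range(len(X))
])
X_with_mf = np.column_stack([X, mf])
print(X_with_mf.shape)  # (30, 5): original features plus one kNN meta-feature
```

A downstream meta-regressor would then be trained on `X_with_mf`; the leave-one-out step is the part that keeps the meta-feature honest.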

28 pages, 17294 KiB  
Article
Detail and Deep Feature Multi-Branch Fusion Network for High-Resolution Farmland Remote-Sensing Segmentation
by Zhankui Tang, Xin Pan, Xiangfei She, Jing Ma and Jian Zhao
Remote Sens. 2025, 17(5), 789; https://doi.org/10.3390/rs17050789 - 24 Feb 2025
Viewed by 684
Abstract
Currently, the demand for refined crop monitoring through remote sensing is increasing rapidly. Due to the similar spectral and morphological characteristics of different crops and vegetation, traditional methods often rely on deeper neural networks to extract meaningful features. However, deeper networks face a key challenge: while extracting deep features, they often lose some boundary details and small-plot characteristics, leading to inaccurate farmland boundary classifications. To address this issue, we propose the Detail and Deep Feature Multi-Branch Fusion Network for High-Resolution Farmland Remote-Sensing Segmentation (DFBNet). DFBNet introduces a new three-branch structure based on the traditional UNet. This structure enhances the detail of ground objects, deep features across multiple scales, and boundary features. As a result, DFBNet effectively preserves the overall characteristics of farmland plots while retaining fine-grained ground object details and ensuring boundary continuity. In our experiments, DFBNet was compared with five traditional methods and demonstrated significant improvements in overall accuracy and boundary segmentation. On the Hi-CNA dataset, DFBNet achieved 88.34% accuracy, 89.41% pixel accuracy, and an IoU of 78.75%. On the Netherlands Agricultural Land Dataset, it achieved 90.63% accuracy, 91.6% pixel accuracy, and an IoU of 83.67%. These results highlight DFBNet’s ability to accurately delineate farmland boundaries, offering robust support for agricultural yield estimation and precision farming decision-making. Full article
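The pixel accuracy and IoU figures quoted in this abstract are standard segmentation metrics and are straightforward to reproduce. A small NumPy sketch on a toy 4×4 farmland mask (invented data, not the Hi-CNA dataset):

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == truth).mean())

def iou(pred, truth, cls=1):
    """Intersection over union for one class (here, farmland = 1)."""
    p, t = pred == cls, truth == cls
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return float(inter) / float(union) if union else 0.0

# Toy 4x4 masks (1 = farmland, 0 = background); two farmland pixels are missed.
truth = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]])
pred  = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 0]])

print(pixel_accuracy(pred, truth))  # 0.875  (14 of 16 pixels correct)
print(iou(pred, truth))             # 0.75   (intersection 6 / union 8)
```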

26 pages, 6704 KiB  
Article
Hyperspectral Band Selection for Crop Identification and Mapping of Agriculture
by Yulei Tan, Jingtao Gu, Laijun Lu, Liyuan Zhang, Jianyu Huang, Lin Pan, Yan Lv, Yupeng Wang and Yang Chen
Remote Sens. 2025, 17(4), 663; https://doi.org/10.3390/rs17040663 - 15 Feb 2025
Cited by 1 | Viewed by 886
Abstract
Different crops, as well as the same crop at different growth stages, display distinct spectral and spatial characteristics in hyperspectral images (HSIs) due to variations in their chemical composition and structural features. However, the narrow bandwidth and closely spaced spectral channels of HSIs result in significant data redundancy, posing challenges to crop identification and classification. Therefore, the dimensionality reduction in HSIs is crucial. Band selection, as a widely used method for reducing dimensionality, has been extensively applied in research on crop identification and mapping. In this paper, a crop superpixel-based affinity propagation (CS-AP) band selection method is proposed for crop identification and mapping in agriculture using HSIs. The approach begins by gathering crop superpixels; then, a spectral band selection criterion is developed by analyzing the variations in the spectral and spatial characteristics of crop superpixels. Finally, crop identification bands are determined through an efficient clustering approach, AP. Two typical agricultural hyperspectral data sets, the Salinas Valley data set and the Indian Pines data set, each containing 16 crop classes, are selected for validation. The experimental results show that the proposed CS-AP method achieves a mapping accuracy of 92.4% for the Salinas Valley data set and 88.6% for the Indian Pines data set. When compared to using all bands, two unsupervised band selection techniques, and three semi-supervised band selection techniques, the proposed method outperforms the others with improvements of 3.1% and 4.3% for the Salinas Valley and Indian Pines data sets, respectively. These results indicate that the proposed CS-AP method achieves superior mapping accuracy by selecting fewer bands with greater crop identification capability than the other band selection methods.
This research’s significant results demonstrate the potential of this approach in precision agriculture, offering a more cost-effective and timely solution for large-scale crop mapping and monitoring in the future. Full article
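The redundancy problem that motivates band selection can be illustrated without the full CS-AP machinery. The sketch below substitutes a simple greedy correlation filter for affinity propagation, on synthetic bands built from three underlying signals; it is an illustrative stand-in for the clustering step, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 12 bands derived from 3 underlying signals, giving 3 groups of
# highly redundant bands (mimicking closely spaced HSI channels).
signals = rng.standard_normal((200, 3))
bands = np.repeat(signals, 4, axis=1) + 0.05 * rng.standard_normal((200, 12))

# Absolute inter-band correlation matrix (bands as columns).
corr = np.abs(np.corrcoef(bands, rowvar=False))

# Greedy exemplar selection: keep a band only if it is not highly
# correlated (> 0.9) with any band already selected.
selected = []
for b in range(bands.shape[1]):
    if all(corr[b, s] < 0.9 for s in selected):
        selected.append(b)

print(selected)  # one representative band per redundant group: [0, 4, 8]
```

Affinity propagation would instead choose exemplars by message passing over the similarity matrix, but the goal is the same: a few representative bands in place of many redundant ones.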

25 pages, 15211 KiB  
Article
MHRA-MS-3D-ResNet-BiLSTM: A Multi-Head-Residual Attention-Based Multi-Stream Deep Learning Model for Soybean Yield Prediction in the U.S. Using Multi-Source Remote Sensing Data
by Mahdiyeh Fathi, Reza Shah-Hosseini, Armin Moghimi and Hossein Arefi
Remote Sens. 2025, 17(1), 107; https://doi.org/10.3390/rs17010107 - 31 Dec 2024
Cited by 2 | Viewed by 1258
Abstract
Accurate prediction of soybean yield is important for safeguarding food security and improving agricultural management. Recent advances have highlighted the effectiveness and ability of Machine Learning (ML) models in analyzing Remote Sensing (RS) data for this purpose. However, most of these models do not fully consider multi-source RS data for prediction, as processing these increases complexity and limits their accuracy and generalizability. In this study, we propose the Multi-Residual Attention-Based Multi-Stream 3D-ResNet-BiLSTM (MHRA-MS-3D-ResNet-BiLSTM) model, designed to integrate various RS data types, including Sentinel-1/2 imagery, Daymet climate data, and soil grid information, for improved county-level U.S. soybean yield prediction. Our model employs a multi-stream architecture to process diverse data types concurrently, capturing complex spatio-temporal features effectively. The 3D-ResNet component utilizes 3D convolutions and residual connections for pattern recognition, complemented by Bidirectional Long Short-Term Memory (BiLSTM) for enhanced long-term dependency learning by processing data arrangements in forward and backward directions. An attention mechanism further refines the model’s focus by dynamically weighting the significance of different input features for efficient yield prediction. We trained the MHRA-MS-3D-ResNet-BiLSTM model using multi-source RS datasets from 2019 and 2020 and evaluated its performance with U.S. soybean yield data for 2021 and 2022. The results demonstrated the model’s robustness and adaptability to unseen data, achieving an R² of 0.82 and a Mean Absolute Percentage Error (MAPE) of 9% in 2021, and an R² of 0.72 and MAPE of 12% in 2022. This performance surpassed some of the state-of-the-art models like 3D-ResNet-BiLSTM and MS-3D-ResNet-BiLSTM, and other traditional ML methods like Random Forest (RF), XGBoost, and LightGBM.
These findings highlight the methodology’s capability to handle multiple RS data types and its role in improving yield predictions, which can be helpful for sustainable agriculture. Full article
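R² and MAPE, the two evaluation metrics quoted in this abstract, are standard and can be computed as below; the yield values are invented toy numbers, not the study's data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - residual SS / total SS."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Toy county-level yields and model predictions.
y_true = np.array([50.0, 55.0, 60.0, 48.0, 52.0])
y_pred = np.array([51.0, 54.0, 58.0, 50.0, 53.0])

print(round(r_squared(y_true, y_pred), 3))  # 0.875
print(round(mape(y_true, y_pred), 2))       # 2.65
```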

18 pages, 10786 KiB  
Article
TDMSANet: A Tri-Dimensional Multi-Head Self-Attention Network for Improved Crop Classification from Multitemporal Fine-Resolution Remotely Sensed Images
by Jian Li, Xuhui Tang, Jian Lu, Hongkun Fu, Miao Zhang, Jujian Huang, Ce Zhang and Huapeng Li
Remote Sens. 2024, 16(24), 4755; https://doi.org/10.3390/rs16244755 - 20 Dec 2024
Viewed by 791
Abstract
Accurate and timely crop distribution data are crucial for governments, in order to make related policies to ensure food security. However, agricultural ecosystems are spatially and temporally dynamic systems, which poses a great challenge for accurate crop mapping using fine spatial resolution (FSR) imagery. This research proposed a novel Tri-Dimensional Multi-head Self-Attention Network (TDMSANet) for accurate crop mapping from multitemporal fine-resolution remotely sensed images. Specifically, three sub-modules were designed to extract spectral, temporal, and spatial feature representations, respectively. All three sub-modules adopted a multi-head self-attention mechanism to assign higher weights to important features. In addition, positional encoding was adopted by both the temporal and spatial submodules to learn the sequence relationships between the features in a feature sequence. The proposed TDMSANet was evaluated on two sites utilizing FSR SAR (UAVSAR) and optical (RapidEye) images, respectively. The experimental results showed that TDMSANet consistently achieved significantly higher crop mapping accuracy than the benchmark models across both sites, with an average overall accuracy improvement of 1.40%, 3.35%, and 6.42% over CNN, Transformer, and LSTM, respectively. The ablation experiments further showed that the three sub-modules all contributed to TDMSANet, and the Spatial Feature Extraction Module exerted a larger impact than the remaining two sub-modules. Full article
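Multi-head self-attention, the mechanism shared by all three TDMSANet sub-modules, can be sketched in NumPy. Random weights stand in for learned parameters, and the sequence axis can be read as temporal, spectral, or spatial depending on the sub-module; this is an illustration of the generic mechanism, not the authors' code.

```python
import numpy as np

def multi_head_self_attention(x, n_heads, rng):
    """Scaled dot-product multi-head self-attention over a feature sequence.

    x: (seq_len, d_model). Projection weights are random stand-ins for
    learned parameters; d_model must be divisible by n_heads.
    """
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        wq, wk, wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
        q, k, v = x @ wq, x @ wk, x @ wv
        scores = q @ k.T / np.sqrt(d_head)               # (seq_len, seq_len)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
        heads.append(weights @ v)                        # attention-weighted values
    return np.concatenate(heads, axis=1)                 # (seq_len, d_model)

rng = np.random.default_rng(3)
x = rng.standard_normal((5, 8))  # e.g. 5 acquisition dates, 8 features each
out = multi_head_self_attention(x, n_heads=2, rng=rng)
print(out.shape)  # (5, 8)
```

The softmax rows are the "higher weights to important features" mentioned in the abstract: each sequence position attends more strongly to the positions most relevant to it.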
