Search Results (65)

Search Parameters:
Keywords = spatial and temporal adaptive reflectance fusion model (STARFM)

19 pages, 4257 KB  
Article
High-Accuracy Identification of Cropping Structure in Irrigation Districts Using Data Fusion and Machine Learning
by Xinli Hu, Changming Cao, Ziyi Zan, Kun Wang, Meng Chai, Lingming Su and Weifeng Yue
Remote Sens. 2026, 18(1), 101; https://doi.org/10.3390/rs18010101 - 27 Dec 2025
Viewed by 116
Abstract
Persistent cloud cover during the growing season and mosaic cropping patterns introduce temporal gaps and mixed pixels, undermining the reliability of large-scale crop identification and acreage statistics. To address these issues, we develop a high spatiotemporal-resolution remote-sensing approach tailored to heterogeneous farmlands. First, an improved Spatiotemporal Adaptive Reflectance Fusion Model (STARFM) is used to fuse Landsat, Sentinel-2, and MODIS observations, reconstructing a continuous Normalized Difference Vegetation Index (NDVI) time series at 30 m spatial and 8-day temporal resolution. Second, at the field scale, we derive phenological descriptors from the reconstructed series—key phenophase timing, amplitude, temporal trend, and growth rate—and use a Random Forest (RF) classifier for detailed crop discrimination. We further integrate SHapley Additive exPlanations (SHAP) to quantify each feature's class-discriminative contribution and signed effect, thereby guiding feature-set optimization and threshold refinement. Finally, we generate a 2024 crop distribution map and conduct comparative evaluations. Relative to baselines without fusion or without phenological variables, the fused series mitigates single-sensor limitations under frequent cloud/rain and irregular acquisitions, enhances NDVI continuity and robustness, and reveals inter-crop temporal phase shifts that, when jointly exploited, reduce early-season confusion and improve identification accuracy. Independent validation yields an overall accuracy (OA) of 90.78% and a Cohen's kappa (κ) coefficient of 0.882. Coupling dense NDVI reconstruction with phenology-aware constraints and SHAP-based interpretability demonstrably improves the accuracy and reliability of cropping-structure extraction in complex agricultural regions and provides a reusable pathway for regional-scale precision agricultural monitoring. Full article
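Since STARFM recurs throughout these results, its per-pixel core can be sketched in a few lines: the fine-resolution base image is carried forward by the temporal change observed at the coarse resolution. This is a deliberately minimal sketch with toy values; the real model additionally weights spectrally and spatially similar neighbours inside a moving window.

```python
def starfm_pixelwise(fine_t0, coarse_t0, coarse_tp):
    # Simplified STARFM core: add the coarse-scale temporal change
    # observed between base date t0 and prediction date tp onto the
    # fine-scale base image. The full model also applies similar-pixel
    # weighting in a moving window, omitted here for brevity.
    return [[f + (cp - c0) for f, c0, cp in zip(fr, c0r, cpr)]
            for fr, c0r, cpr in zip(fine_t0, coarse_t0, coarse_tp)]

# Toy 2x2 reflectance grids (values are illustrative only).
fine_t0   = [[0.20, 0.25], [0.22, 0.27]]
coarse_t0 = [[0.25, 0.25], [0.25, 0.25]]
coarse_tp = [[0.35, 0.35], [0.35, 0.35]]   # +0.10 green-up between dates

pred = starfm_pixelwise(fine_t0, coarse_t0, coarse_tp)
```

The prediction keeps the fine-scale spatial pattern of the base image while shifting every pixel by the coarse-scale change of +0.10.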

20 pages, 32621 KB  
Article
A Novel Rapeseed Mapping Framework Integrating Image Fusion, Automated Sample Generation, and Deep Learning in Southwest China
by Ruolan Jiang, Xingyin Duan, Song Liao, Ziyi Tang and Hao Li
Land 2025, 14(1), 200; https://doi.org/10.3390/land14010200 - 19 Jan 2025
Cited by 2 | Viewed by 2159
Abstract
Rapeseed mapping is crucial for refined agricultural management and food security. However, existing remote sensing-based methods for rapeseed mapping in Southwest China are severely limited by insufficient training samples and persistent cloud cover. To address the above challenges, this study presents an automatic rapeseed mapping framework that integrates multi-source remote sensing data fusion, automated sample generation, and deep learning models. The framework was applied in Santai County, Sichuan Province, Southwest China, which has typical topographical and climatic characteristics. First, MODIS and Landsat data were used to fill the gaps in Sentinel-2 imagery, creating time-series images through the object-level processing version of the spatial and temporal adaptive reflectance fusion model (OL-STARFM). In addition, a novel spectral phenology approach was developed to automatically generate training samples, which were then input into the improved TS-ConvNeXt ECAPA-TDNN (NeXt-TDNN) deep learning model for accurate rapeseed mapping. The results demonstrated that the OL-STARFM approach was effective in rapeseed mapping. The proposed automated sample generation method proved effective in producing reliable rapeseed samples, achieving a low Dynamic Time Warping (DTW) distance (<0.81) when compared to field samples. The NeXt-TDNN model showed an overall accuracy (OA) of 90.12% and a mean Intersection over Union (mIoU) of 81.96% in Santai County, outperforming other models such as random forest, XGBoost, and UNet-LSTM. These results highlight the effectiveness of the proposed automatic rapeseed mapping framework in accurately identifying rapeseed. This framework offers a valuable reference for monitoring other crops in similar environments. Full article
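The Dynamic Time Warping (DTW) distance used above to validate automatically generated samples against field samples is the classic dynamic program over two 1-D profiles. A minimal sketch, with illustrative NDVI-like sequences rather than the paper's data:

```python
def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two 1-D sequences
    # (e.g., a candidate vs. a reference NDVI profile), cost = |a_i - b_j|.
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Illustrative curves: same shape, shifted by one time step.
ref  = [0.2, 0.4, 0.7, 0.5, 0.3]
cand = [0.2, 0.2, 0.4, 0.7, 0.5]
d = dtw_distance(cand, ref)
```

Because DTW aligns the shifted peak, the distance stays small despite the temporal offset, which is exactly why it suits phenology-based sample screening.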

22 pages, 10004 KB  
Article
High-Resolution Dynamic Monitoring of Rocky Desertification of Agricultural Land Based on Spatio-Temporal Fusion
by Xin Zhao, Zhongfa Zhou, Guijie Wu, Yangyang Long, Jiancheng Luo, Xingxin Huang, Jing Chen and Tianjun Wu
Land 2024, 13(12), 2173; https://doi.org/10.3390/land13122173 - 13 Dec 2024
Cited by 6 | Viewed by 1425
Abstract
The current research on rocky desertification primarily prioritizes large-scale surveillance, with minimal attention given to internal agricultural areas. This study offers a comprehensive framework for bedrock extraction in agricultural areas, employing spatial constraints and spatio-temporal fusion methodologies. Utilizing the high resolution and capabilities of Gaofen-2 imagery, we first delineate agricultural land, use these boundaries as spatial constraints to compute the agricultural land bedrock response Index (ABRI), and apply the spatial and temporal adaptive reflectance fusion model (STARFM) to achieve spatio-temporal fusion of Gaofen-2 imagery and Sentinel-2 imagery from multiple time periods, resulting in a high-spatio-temporal-resolution bedrock discrimination index (ABRI*) for analysis. This work demonstrates the pronounced rocky desertification phenomenon in the agricultural land in the study area. The ABRI* effectively captures this phenomenon, with the classification accuracy for the bedrock, based on the ABRI* derived from Gaofen-2 imagery, reaching 0.86. The bedrock exposure area in the farmland showed a decreasing trend from 2019 to 2021, a significant increase from 2021 to 2022, and a gradual decline from 2022 to 2024. Cultivation activities have a significant impact on rocky desertification within agricultural land. The ABRI significantly enhances the capabilities for the dynamic monitoring of rocky desertification in agricultural areas, providing data support for the management of specialized farmland. For vulnerable areas, timely adjustments to planting schemes and the prioritization of intervention measures such as soil conservation, vegetation restoration, and water resource management could help to improve the resilience and stability of agriculture, particularly in karst regions. Full article

20 pages, 10820 KB  
Article
Mapping Crop Evapotranspiration by Combining the Unmixing and Weight Image Fusion Methods
by Xiaochun Zhang, Hongsi Gao, Liangsheng Shi, Xiaolong Hu, Liao Zhong and Jiang Bian
Remote Sens. 2024, 16(13), 2414; https://doi.org/10.3390/rs16132414 - 1 Jul 2024
Cited by 1 | Viewed by 1442
Abstract
The demand for freshwater is increasing with population growth and rapid socio-economic development. Crop evapotranspiration (ET) data with a high spatiotemporal resolution are therefore increasingly important for refined irrigation water management in agricultural regions. We propose the unmixing–weight ET image fusion model (UWET), which integrates the advantages of the unmixing method in spatial downscaling and the weight-based method in temporal prediction to produce daily ET maps with a high spatial resolution. The Landsat-ET and MODIS-ET datasets for the UWET fusion data are retrieved from Landsat and MODIS images based on the surface energy balance model. The UWET model considers the effects of crop phenology, precipitation, and land cover in the process of the ET image fusion. The UWET results are evaluated against ET measured by eddy covariance at the Luancheng station, with an average MAE of 0.57 mm/day. The image results of UWET show fine spatial details and capture the dynamic ET changes. The seasonal ET values of winter wheat from the ET map mainly range from 350 to 660 mm in 2019–2020 and from 300 to 620 mm in 2020–2021. The average seasonal ET in 2019–2020 is 499.89 mm, and in 2020–2021, it is 459.44 mm. The performance of UWET is compared with two other fusion models: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and the Spatial and Temporal Reflectance Unmixing Model (STRUM). UWET performs better in the spatial details than the STARFM and is better in the temporal characteristics than the STRUM. The results indicate that UWET is suitable for generating ET products with a high spatial–temporal resolution in agricultural regions. Full article
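The unmixing half of an approach like UWET rests on the idea that each coarse ET pixel is an area-weighted mixture of per-class ET values, which can be recovered by least squares over a window of coarse pixels. The sketch below is a generic two-class version under that assumption, not the paper's actual formulation; class names, fractions, and values are hypothetical.

```python
def unmix_two_classes(fractions, coarse_vals):
    # Solve coarse = f1*e1 + f2*e2 for the per-class values (e1, e2)
    # by ordinary least squares, using the 2x2 normal equations.
    # fractions: list of (f1, f2) per coarse pixel in the window.
    s11 = sum(f1 * f1 for f1, _ in fractions)
    s12 = sum(f1 * f2 for f1, f2 in fractions)
    s22 = sum(f2 * f2 for _, f2 in fractions)
    b1 = sum(f1 * y for (f1, _), y in zip(fractions, coarse_vals))
    b2 = sum(f2 * y for (_, f2), y in zip(fractions, coarse_vals))
    det = s11 * s22 - s12 * s12
    e1 = (s22 * b1 - s12 * b2) / det
    e2 = (s11 * b2 - s12 * b1) / det
    return e1, e2

# Hypothetical window where the true class ET is 5 and 2 mm/day.
fracs  = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (0.7, 0.3)]
coarse = [5.0, 2.0, 3.5, 4.1]   # exact mixtures, for illustration
e_crop, e_bare = unmix_two_classes(fracs, coarse)
```

With noise-free mixtures the solver recovers the class values exactly; on real data the residuals carry the sub-pixel variability that the weighting step then redistributes.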

17 pages, 13688 KB  
Technical Note
Fast Fusion of Sentinel-2 and Sentinel-3 Time Series over Rangelands
by Paul Senty, Radoslaw Guzinski, Kenneth Grogan, Robert Buitenwerf, Jonas Ardö, Lars Eklundh, Alkiviadis Koukos, Torbern Tagesson and Michael Munk
Remote Sens. 2024, 16(11), 1833; https://doi.org/10.3390/rs16111833 - 21 May 2024
Cited by 6 | Viewed by 3509
Abstract
Monitoring ecosystems at regional or continental scales is paramount for biodiversity conservation, climate change mitigation, and sustainable land management. Effective monitoring requires satellite imagery with both high spatial resolution and high temporal resolution. However, there is currently no single, freely available data source that fulfills these needs. A seamless fusion of data from the Sentinel-3 and Sentinel-2 optical sensors could meet these monitoring requirements as Sentinel-2 observes at the required spatial resolution (10 m) while Sentinel-3 observes at the required temporal resolution (daily). We introduce the Efficient Fusion Algorithm across Spatio-Temporal scales (EFAST), which interpolates Sentinel-2 data into smooth time series (both spatially and temporally). This interpolation is informed by Sentinel-3’s temporal profile such that the phenological changes occurring between two Sentinel-2 acquisitions at a 10 m resolution are assumed to mirror those observed at Sentinel-3’s resolution. The EFAST consists of a weighted sum of Sentinel-2 images (weighted by a distance-to-clouds score) coupled with a phenological correction derived from Sentinel-3. We validate the capacity of our method to reconstruct the phenological profile at a 10 m resolution over one rangeland area and one irrigated cropland area. The EFAST outperforms classical interpolation techniques over both rangeland (−72% in the mean absolute error, MAE) and agricultural areas (−43% MAE); it presents a performance comparable to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) (+5% MAE in both test areas) while being 140 times faster. The computational efficiency of our approach and its temporal smoothing enable the creation of seamless and high-resolution phenology products on a regional to continental scale. Full article
(This article belongs to the Section Ecological Remote Sensing)
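One reading of the EFAST idea in the abstract above, reduced to a single pixel: interpolate between two Sentinel-2 acquisitions, then add the residual of the co-located Sentinel-3 profile as a phenological correction. This is a simplified sketch under that interpretation (the real algorithm derives its weights from a distance-to-clouds score); all values are illustrative.

```python
def efast_pixel(s2_a, s2_b, s3_a, s3_b, s3_p, w_a):
    # Weighted interpolation between two Sentinel-2 observations, plus a
    # phenological correction from the Sentinel-3 temporal profile:
    # whatever non-linear change the coarse sensor sees between the two
    # anchor dates is added back to the fine-scale interpolation.
    w_b = 1.0 - w_a
    interp = w_a * s2_a + w_b * s2_b                 # plain interpolation
    correction = s3_p - (w_a * s3_a + w_b * s3_b)    # coarse-scale residual
    return interp + correction

# Mid-way prediction where the coarse profile shows a non-linear green-up.
pred_ndvi = efast_pixel(s2_a=0.30, s2_b=0.60,
                        s3_a=0.32, s3_b=0.58, s3_p=0.50, w_a=0.5)
```

Plain interpolation would return 0.45 here; the Sentinel-3 correction lifts the prediction to 0.50, matching the observed green-up timing.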

19 pages, 7992 KB  
Article
Improving the STARFM Fusion Method for Downscaling the SSEBOP Evapotranspiration Product from 1 km to 30 m in an Arid Area in China
by Jingjing Sun, Wen Wang, Xiaogang Wang and Luca Brocca
Remote Sens. 2023, 15(22), 5411; https://doi.org/10.3390/rs15225411 - 18 Nov 2023
Viewed by 3090
Abstract
Continuous evapotranspiration (ET) data with high spatial resolution are crucial for water resources management in irrigated agricultural areas in arid regions. Many global ET products are available now but with a coarse spatial resolution. Spatial-temporal fusion methods, such as the spatial and temporal adaptive reflectance fusion model (STARFM), can help to downscale coarse spatial resolution ET products. In this paper, the STARFM model is improved by incorporating the temperature vegetation dryness index (TVDI) into the data fusion process, and we propose a spatial and temporal adaptive evapotranspiration downscaling method (STAEDM). The modified method STAEDM was applied to the 1 km SSEBOP ET product to derive a downscaled 30 m ET for irrigated agricultural fields of Northwest China. The STAEDM exhibits a significant improvement compared to the original STARFM method for downscaling SSEBOP ET on Landsat-unavailable dates, with an increase in the squared correlation coefficients (r2) from 0.68 to 0.77 and a decrease in the root mean square error (RMSE) from 10.28 mm/10 d to 8.48 mm/10 d. The ET based on the STAEDM additionally preserves more spatial details than STARFM for heterogeneous agricultural fields and can better capture the ET seasonal dynamics. The STAEDM ET can better capture the temporal variation of 10-day ET during the whole crop growing season than SSEBOP. Full article
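The r² and RMSE figures reported above are standard validation metrics; for reference, both can be computed directly. A minimal sketch with illustrative 10-day ET totals, not the paper's data:

```python
import math

def rmse(obs, pred):
    # Root mean square error between observed and predicted values.
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_squared(obs, pred):
    # Squared Pearson correlation coefficient, as reported in the abstract.
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    vo = sum((o - mo) ** 2 for o in obs)
    vp = sum((p - mp) ** 2 for p in pred)
    return cov * cov / (vo * vp)

# Illustrative 10-day ET totals (mm/10 d).
obs  = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 19.0, 33.0, 38.0]
```

Here rmse(obs, pred) is about 2.12 mm/10 d and r_squared(obs, pred) about 0.97, the same units and scale the abstract uses (8.48 mm/10 d, r² = 0.77).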

17 pages, 3425 KB  
Article
Adaptability Evaluation of the Spatiotemporal Fusion Model of Sentinel-2 and MODIS Data in a Typical Area of the Three-River Headwater Region
by Mengyao Fan, Dawei Ma, Xianglin Huang and Ru An
Sustainability 2023, 15(11), 8697; https://doi.org/10.3390/su15118697 - 27 May 2023
Cited by 7 | Viewed by 2376
Abstract
The study of surface vegetation monitoring in the “Three-River Headwaters” Region (TRHR) relies on satellite data with high spatial and temporal resolutions. The spatial and temporal fusion method for multiple data sources can effectively overcome the limitations of weather, the satellite return period, and funding on research data to obtain data with higher spatial and temporal resolutions. This paper explores the spatial and temporal adaptive reflectance fusion model (STARFM), the enhanced spatial and temporal adaptive reflectance fusion model (ESTARFM), and the flexible spatiotemporal data fusion (FSDAF) method applied to Sentinel-2 and MODIS data in a typical area of the TRHR. In this study, the control variable method was used to analyze the parameter sensitivity of the models and explore the adaptation parameters of the Sentinel-2 and MODIS data in the study area. To test whether the spatiotemporal fusion models can be applied directly to vegetation index products, this study used NDVI fusion as an example and set up a comparison experiment (experiment I first performed the band spatiotemporal fusion and then calculated the vegetation index; experiment II calculated the vegetation index first and then performed the spatiotemporal fusion) to explore the feasibility and applicability of the two methods for the vegetation index fusion. The results showed the following. (1) The three spatiotemporal fusion models generated high spatial resolution and high temporal resolution data based on the fusion of Sentinel-2 and MODIS data; the STARFM and FSDAF models had higher fusion accuracy, and the R2 values after fusion were higher than 0.8, showing greater applicability. (2) The fusion accuracy of each model was affected by the model parameters.
The errors between the STARFM, ESTARFM, and FSDAF fusion results and the validation data all showed a decreasing trend with an increase in the size of the sliding window or the number of similar pixels, stabilizing once the sliding window became larger than 50 and the number of similar pixels exceeded 80. (3) The comparative experimental results showed that the spatiotemporal fusion models can be applied directly to vegetation index products, and higher-quality vegetation index data can be obtained by calculating the vegetation index first and then performing the spatiotemporal fusion. The high spatial and temporal resolution data obtained using a suitable spatial and temporal fusion model are important for the identification and monitoring of surface cover types in the TRHR. Full article
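The fuse-first versus index-first comparison in this abstract hinges on NDVI being a non-linear function of the bands, so the two orders genuinely differ. A toy single-pixel sketch of both experiments, using an additive fusion stand-in and hypothetical reflectances:

```python
def ndvi(nir, red):
    # Normalized Difference Vegetation Index.
    return (nir - red) / (nir + red)

def fuse(fine0, coarse0, coarse_p):
    # Toy additive spatiotemporal fusion of a single pixel value
    # (a stand-in for STARFM/ESTARFM/FSDAF, for illustration only).
    return fine0 + (coarse_p - coarse0)

# Hypothetical reflectances: base date (fine + coarse), prediction date (coarse).
nir_f0, red_f0 = 0.40, 0.10
nir_c0, red_c0 = 0.38, 0.12
nir_cp, red_cp = 0.45, 0.09

# Experiment I: fuse the bands first, then compute NDVI.
ndvi_i = ndvi(fuse(nir_f0, nir_c0, nir_cp), fuse(red_f0, red_c0, red_cp))
# Experiment II: compute NDVI first, then fuse the NDVI itself.
ndvi_ii = fuse(ndvi(nir_f0, red_f0), ndvi(nir_c0, red_c0), ndvi(nir_cp, red_cp))
```

Even with identical inputs the two orders give slightly different predictions (about 0.741 versus 0.747 here), which is why the paper evaluates them as separate experiments.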

24 pages, 11742 KB  
Article
Tree Species Classification over Cloudy Mountainous Regions by Spatiotemporal Fusion and Ensemble Classifier
by Liang Cui, Shengbo Chen, Yongling Mu, Xitong Xu, Bin Zhang and Xiuying Zhao
Forests 2023, 14(1), 107; https://doi.org/10.3390/f14010107 - 5 Jan 2023
Cited by 10 | Viewed by 2323
Abstract
Accurate mapping of tree species is critical for the sustainable development of the forestry industry. However, the lack of cloud-free optical images makes it challenging to map tree species accurately in cloudy mountainous regions. In order to improve tree species identification in this context, a classification method using spatiotemporal fusion and an ensemble classifier is proposed. The applicability of three spatiotemporal fusion methods, i.e., the spatial and temporal adaptive reflectance fusion model (STARFM), the flexible spatiotemporal data fusion (FSDAF), and the spatial and temporal nonlocal filter-based fusion model (STNLFFM), in fusing MODIS and Landsat 8 images was investigated. The fusion results in Helong City show that the STNLFFM algorithm generated the best fused images. The correlation coefficients between the fusion images and actual Landsat images on May 28 and October 19 were 0.9746 and 0.9226, respectively, with an average of 0.9486. Dense Landsat-like time series at 8-day time intervals were generated using this method. This time series imagery and topography-derived features were used as predictor variables. Four machine learning methods, i.e., K-nearest neighbors (KNN), random forest (RF), artificial neural networks (ANNs), and light gradient boosting machine (LightGBM), were selected for tree species classification in Helong City, Jilin Province. An ensemble classifier combining these classifiers was constructed to further improve the accuracy. The ensemble classifier consistently achieved the highest accuracy in almost all classification scenarios, with a maximum overall accuracy improvement of approximately 3.4% compared to the best base classifier. Compared to only using a single temporal image, utilizing dense time series and the ensemble classifier can improve the classification accuracy by about 20%, and the overall accuracy reaches 84.32%. In conclusion, using spatiotemporal fusion and the ensemble classifier can significantly enhance tree species identification in cloudy mountainous areas with poor data availability. Full article
(This article belongs to the Special Issue Mapping Forest Vegetation via Remote Sensing Tools)
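The simplest way to combine the four base classifiers named above (KNN, RF, ANN, LightGBM) is per-pixel majority voting. The paper's ensemble may be more elaborate; this sketch shows only the basic idea, with hypothetical species labels:

```python
from collections import Counter

def majority_vote(predictions):
    # Combine the labels that several classifiers assigned to one pixel
    # by simple majority (ties resolved by first-seen order via Counter).
    return Counter(predictions).most_common(1)[0][0]

# Hypothetical labels from KNN, RF, ANN, LightGBM for three pixels.
per_pixel = [
    ["larch", "larch", "oak",   "larch"],
    ["oak",   "oak",   "oak",   "birch"],
    ["birch", "larch", "birch", "birch"],
]
labels = [majority_vote(p) for p in per_pixel]
```

Voting only helps when the base classifiers make partly uncorrelated errors, which is the rationale for mixing distance-based, tree-based, and neural models.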

18 pages, 4946 KB  
Article
A Spatio-Temporal Fusion Framework of UAV and Satellite Imagery for Winter Wheat Growth Monitoring
by Yan Li, Wen Yan, Sai An, Wanlin Gao, Jingdun Jia, Sha Tao and Wei Wang
Drones 2023, 7(1), 23; https://doi.org/10.3390/drones7010023 - 29 Dec 2022
Cited by 24 | Viewed by 5174
Abstract
Accurate and continuous monitoring of crop growth is vital for the development of precision agriculture. Unmanned aerial vehicle (UAV) and satellite platforms have considerable complementarity in high spatial resolution (centimeter-scale) and fixed revisit cycle. It is meaningful to optimize the cross-platform synergy for agricultural applications. Considering the characteristics of UAV and satellite platforms, a spatio-temporal fusion (STF) framework of UAV and satellite imagery is developed. It includes registration, radiometric normalization, preliminary fusion, and reflectance reconstruction. The proposed STF framework significantly improves the fusion accuracy with both better quantitative metrics and visualized results compared with four existing STF methods with different fusion strategies. Especially for the prediction of object boundary and spatial texture, the absolute values of Robert’s edge (EDGE) and local binary pattern (LBP) decreased by a maximum of more than 0.25 and 0.10, respectively, compared with the spatial and temporal adaptive reflectance fusion model (STARFM). Moreover, the STF framework enhances the temporal resolution to daily, although the satellite imagery is discontinuous. Further, its application potential for winter wheat growth monitoring is explored. The daily synthetic imagery with UAV spatial resolution describes the seasonal dynamics of winter wheat well. The synthetic Normalized Difference Vegetation Index (NDVI) and Enhanced Vegetation Index 2 (EVI2) are consistent with the observations. However, the error in NDVI and EVI2 at boundary changes is relatively large, which needs further exploration. This research provides an STF framework to generate very dense and high-spatial-resolution remote sensing data at a low cost. It not only contributes to precision agriculture applications, but also is valuable for land-surface dynamic monitoring. Full article
(This article belongs to the Section Drones in Agriculture and Forestry)
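The radiometric-normalization step in the framework above is commonly done by fitting a linear gain and offset between co-located samples from the two platforms; the sketch below assumes that common formulation (the paper does not spell out its exact method), with illustrative reflectance samples:

```python
def linear_normalization(src, ref):
    # Fit ref ~ gain * src + offset by ordinary least squares over
    # co-located samples, then return the gain/offset pair that maps
    # the source sensor (e.g., UAV) onto the reference (e.g., satellite).
    n = len(src)
    ms, mr = sum(src) / n, sum(ref) / n
    gain = (sum((s - ms) * (r - mr) for s, r in zip(src, ref))
            / sum((s - ms) ** 2 for s in src))
    offset = mr - gain * ms
    return gain, offset

# Illustrative co-located reflectance samples (UAV vs. satellite).
uav = [0.10, 0.20, 0.30, 0.40]
sat = [0.12, 0.21, 0.30, 0.39]
g, o = linear_normalization(uav, sat)
normalized = [g * v + o for v in uav]
```

After normalization the UAV samples match the satellite's radiometry (gain 0.9, offset 0.03 on this toy data), which is what makes the subsequent fusion step consistent across platforms.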

23 pages, 11564 KB  
Article
Applicability Assessment of a Spatiotemporal Geostatistical Fusion Model for Disaster Monitoring: Two Cases of Flood and Wildfire
by Yeseul Kim
Remote Sens. 2022, 14(24), 6204; https://doi.org/10.3390/rs14246204 - 7 Dec 2022
Cited by 2 | Viewed by 2078
Abstract
A spatial time series geostatistical deconvolution/fusion model (STGDFM), one of the spatiotemporal data fusion models, combines Dense time series data with a Coarse-scale (i.e., DC data) and Sparse time series data with a Fine-scale (i.e., SF data) to generate Synthetic Dense time series data with a Fine-scale (i.e., SDF data). Specifically, STGDFM uses geostatistics-based spatial time series modeling to capture the temporal trends included in time series DC data. This study evaluated the prediction performance of STGDFM for abrupt changes in reflectance due to disasters in spatiotemporal data fusion, and a spatial and temporal adaptive reflectance fusion model (STARFM) and an enhanced STARFM (ESTARFM) were selected as comparative models. For the applicability assessment, flood and wildfire were selected as case studies. In the case of flood, MODIS-like data (240 m spatial resolution, generated by spatially degrading Landsat data) and Landsat data (30 m) were used as DC and SF data, respectively. In the case of wildfire, MODIS and Landsat data were used as DC and SF data, respectively. The case study results showed that among the three spatiotemporal fusion models, STGDFM presented the best prediction performance, with structural similarity values of 0.894 to 0.979 and R-squared values of 0.760 to 0.872 in the flood- and wildfire-affected areas. Unlike STARFM and ESTARFM, which adopt assumptions about reflectance changes, STGDFM combines the temporal trends using time series DC data. Therefore, STGDFM could capture the abrupt changes in reflectance due to the flood and wildfire. These results indicate that STGDFM can be used for cases where satellite images of appropriate temporal and spatial resolution are difficult to acquire for disaster monitoring. Full article

23 pages, 4335 KB  
Article
An Object-Based Weighting Approach to Spatiotemporal Fusion of High Spatial Resolution Satellite Images for Small-Scale Cropland Monitoring
by Soyeon Park, No-Wook Park and Sang-il Na
Agronomy 2022, 12(10), 2572; https://doi.org/10.3390/agronomy12102572 - 19 Oct 2022
Cited by 3 | Viewed by 2239
Abstract
Continuous crop monitoring often requires a time-series set of satellite images. Since satellite images have a trade-off in spatial and temporal resolution, spatiotemporal image fusion (STIF) has been applied to construct time-series images at a consistent scale. With the increased availability of high spatial resolution images, it is necessary to develop a new STIF model that can effectively reflect the properties of high spatial resolution satellite images for small-scale crop field monitoring. This paper proposes an advanced STIF model using a single image pair, called high spatial resolution image fusion using object-based weighting (HIFOW), for blending high spatial resolution satellite images. The four-step weighted-function approach of HIFOW includes (1) temporal relationship modeling, (2) object extraction using image segmentation, (3) weighting based on object information, and (4) residual correction to quantify temporal variability between the base and prediction dates and also represent both spectral patterns at the prediction date and spatial details of fine-scale images. The specific procedures tailored for blending fine-scale images are the extraction of object-based change and structural information and their application to weight determination. The potential of HIFOW was evaluated from the experiments on agricultural sites using Sentinel-2 and RapidEye images. HIFOW was compared with three existing STIF models, including the spatial and temporal adaptive reflectance fusion model (STARFM), flexible spatiotemporal data fusion (FSDAF), and Fit-FC. Experimental results revealed that the HIFOW prediction could restore detailed spatial patterns within crop fields and clear crop boundaries with less spectral distortion, which was not represented in the prediction results of the other three models. Consequently, HIFOW achieved the best prediction performance in terms of accuracy and structural similarity for all the spectral bands. 
Other than the reflectance prediction, HIFOW also yielded superior prediction performance for blending normalized difference vegetation index images. These findings indicate that HIFOW could be a potential solution for constructing high spatial resolution time-series images in small-scale croplands. Full article
(This article belongs to the Special Issue Use of Satellite Imagery in Agriculture)

20 pages, 12124 KB  
Article
A Sensor Bias Correction Method for Reducing the Uncertainty in the Spatiotemporal Fusion of Remote Sensing Images
by Hongwei Zhang, Fang Huang, Xiuchao Hong and Ping Wang
Remote Sens. 2022, 14(14), 3274; https://doi.org/10.3390/rs14143274 - 7 Jul 2022
Cited by 7 | Viewed by 3519
Abstract
With the development of multisource satellite platforms and the deepening of remote sensing applications, the growing demand for high-spatial-resolution and high-temporal-resolution remote sensing images has aroused extensive interest in spatiotemporal fusion research. However, reducing the uncertainty of fusion results caused by sensor inconsistencies and input data preprocessing is one of the challenges in spatiotemporal fusion algorithms. Here, we propose a novel sensor bias correction method that corrects the input data of the spatiotemporal fusion model through a machine learning technique that learns the bias between different sensors. Taking the normalized difference vegetation index (NDVI) images with low spatial resolution (MODIS) and high spatial resolution (Landsat) as the basic data, we generated the neighborhood gray matrices from the MODIS image and established the image bias pairs of MODIS and Landsat. The light gradient boosting machine (LGBM) regression model was used for the nonlinear fitting of the bias pairs to correct MODIS NDVI images. For three different landscape areas with various spatial heterogeneities, the fusion of the bias-corrected MODIS NDVI and Landsat NDVI was conducted by using the spatiotemporal adaptive reflectance fusion model (STARFM) and the flexible spatiotemporal data fusion method (FSDAF), respectively. The results show that the sensor bias correction method can enhance the spatially detailed information in the input data, significantly improve the accuracy and robustness of the spatiotemporal fusion technology, and extend the applicability of the spatiotemporal fusion models. Full article
(This article belongs to the Section Remote Sensing Image Processing)
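The bias-correction pipeline described in this abstract (neighborhood features built from the coarse image, then a regression fitted on MODIS-Landsat bias pairs) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: an ordinary least-squares fit stands in for the LGBM regressor, and all function names are hypothetical.

```python
import numpy as np

def neighborhood_features(coarse, size=3):
    """Stack each pixel's size x size neighborhood into a feature row
    (edges handled by reflection padding)."""
    pad = size // 2
    padded = np.pad(coarse, pad, mode="reflect")
    rows, cols = coarse.shape
    feats = np.empty((rows * cols, size * size))
    k = 0
    for i in range(rows):
        for j in range(cols):
            feats[k] = padded[i:i + size, j:j + size].ravel()
            k += 1
    return feats

def fit_bias_model(coarse_ndvi, fine_ndvi_agg):
    """Fit a regressor mapping coarse-pixel neighborhoods to the sensor
    bias (aggregated fine NDVI minus coarse NDVI). A linear least-squares
    fit is used here as a stand-in for the paper's LGBM model."""
    X = neighborhood_features(coarse_ndvi)
    y = (fine_ndvi_agg - coarse_ndvi).ravel()
    X1 = np.column_stack([X, np.ones(len(X))])  # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def correct(coarse_ndvi, coef):
    """Apply the learned bias to a coarse NDVI image before fusion."""
    X1 = np.column_stack([neighborhood_features(coarse_ndvi),
                          np.ones(coarse_ndvi.size)])
    bias = (X1 @ coef).reshape(coarse_ndvi.shape)
    return coarse_ndvi + bias
```

The corrected coarse image, rather than the raw MODIS NDVI, would then be passed to STARFM or FSDAF as in the study.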
16 pages, 5202 KB  
Article
Quantitative Evaluation of Grassland SOS Estimation Accuracy Based on Different MODIS-Landsat Spatio-Temporal Fusion Datasets
by Yungang Cao, Puying Du, Min Zhang, Xueqin Bai, Ruodan Lei and Xiuchun Yang
Remote Sens. 2022, 14(11), 2542; https://doi.org/10.3390/rs14112542 - 26 May 2022
Cited by 5 | Viewed by 2319
Abstract
Estimating the start of growing season (SOS) of grassland at the global scale is an important scientific issue, since it reflects the response of the terrestrial ecosystem to environmental changes and determines the start time of grazing. However, most remote sensing data have coarse temporal and spatial resolution, resulting in low accuracy of remote-sensing-based SOS retrieval. In recent years, much research has focused on multi-source data fusion to improve the spatio-temporal resolution of remote sensing information, providing a feasible path toward high-accuracy remote sensing inversion of SOS. Nevertheless, a quantitative evaluation of the accuracy of these data fusion methods for SOS estimation is still lacking. Therefore, in this study, SOS estimation accuracy is quantitatively evaluated using daily spatio-temporal fusion datasets produced by the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) and other models in Xilinhot City, Inner Mongolia, China. The results show that: (1) the accuracy of SOS estimation based on spatio-temporal fusion datasets is slightly improved; the average root mean square error (RMSE) of SOS based on 8-day composite datasets is 11.1 d, and the best is 9.7 d (fstarfm8); (2) the estimation accuracy based on 8-day composite datasets (mean RMSE = 11.1 d) is better than that based on daily fusion datasets (mean RMSE = 18.2 d); (3) a lack of Landsat data during the SOS period degrades the quality of the fusion datasets, which ultimately reduces the accuracy of the SOS estimation: the mean RMSE of SOS based on all three models increases by 11.1 d, with STARFM least affected, increasing by only 2.7 d. The results highlight the potential of spatio-temporal data fusion for high-accuracy grassland SOS estimation. They also show that the 8-day composite dataset fused with the STARFM algorithm is better suited for SOS estimation. Full article
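For context, a common way to extract SOS from a fused NDVI time series is an amplitude-threshold rule: the first day the curve rises above a fixed fraction of its seasonal amplitude. The sketch below illustrates that generic heuristic; it is not necessarily the retrieval method used in this paper, and the 20% fraction is an assumed default.

```python
import numpy as np

def sos_amplitude_threshold(doy, ndvi, frac=0.2):
    """Start of season as the first day-of-year at which NDVI rises above
    min + frac * (max - min), with linear interpolation between the two
    bracketing observations."""
    ndvi = np.asarray(ndvi, float)
    thresh = ndvi.min() + frac * (ndvi.max() - ndvi.min())
    i = np.nonzero(ndvi >= thresh)[0][0]
    if i == 0:
        return float(doy[0])
    # interpolate the crossing between samples i-1 and i
    t = (thresh - ndvi[i - 1]) / (ndvi[i] - ndvi[i - 1])
    return float(doy[i - 1] + t * (doy[i] - doy[i - 1]))
```

Applied per pixel to an 8-day composite series, such a rule yields the SOS maps whose RMSE against reference dates is compared in the abstract.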
16 pages, 5880 KB  
Article
Improved Daily Evapotranspiration Estimation Using Remotely Sensed Data in a Data Fusion System
by Yun Yang, Martha Anderson, Feng Gao, Jie Xue, Kyle Knipper and Christopher Hain
Remote Sens. 2022, 14(8), 1772; https://doi.org/10.3390/rs14081772 - 7 Apr 2022
Cited by 25 | Viewed by 5381
Abstract
Evapotranspiration (ET) represents crop water use and is a key indicator of crop health. Accurate estimation of ET is critical for agricultural irrigation and water resource management. ET retrieval using energy balance methods, with remotely sensed thermal infrared data as the key input, has been widely applied for irrigation scheduling, yield prediction, and drought monitoring. However, limitations on the spatial and temporal resolution of available thermal satellite data, combined with the effects of cloud contamination, constrain the amount of detail that a single satellite can provide. Fusing data from satellites with different spatial and temporal resolutions can provide a more continuous estimate of daily ET at field scale. In this study, we applied an ET fusion modeling system that uses a surface energy balance model to retrieve ET from both Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) data and then fuses the Landsat and MODIS ET retrieval time series using the Spatial-Temporal Adaptive Reflectance Fusion Model (STARFM). In this paper, we compared different STARFM ET fusion implementation strategies over various croplands in central California. In particular, the use of one versus two Landsat-MODIS image pairs to constrain the fusion is explored in cases of rapidly changing crop conditions, such as frequently harvested alfalfa fields, along with an improved dual-pair method. The daily 30 m ET retrievals are evaluated against flux tower observations and analyzed by land cover type. This study demonstrates the improvement of the new dual-pair STARFM method over the standard one-pair STARFM method in estimating daily field-scale ET for all the major crop types in the study area. Full article
(This article belongs to the Special Issue Smart Farming and Land Management Enabled by Remotely Sensed Big Data)
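The one-pair versus dual-pair distinction rests on the core STARFM assumption that the fine-scale image changes by roughly the same amount as the coarse-scale image between the pair date and the prediction date. The pixel-wise sketch below is a deliberate simplification (it omits STARFM's spectral, spatial, and similar-pixel weighting) and shows one plausible way a dual-pair blend might weight the two base predictions; the weighting scheme here is an assumption for illustration.

```python
import numpy as np

def predict_from_pair(fine_t0, coarse_t0, coarse_tp):
    """Single-pair prediction: add the coarse-scale temporal change to
    the fine image from the pair date (the core STARFM assumption)."""
    return fine_t0 + (coarse_tp - coarse_t0)

def dual_pair_predict(fine_a, coarse_a, fine_b, coarse_b, coarse_tp,
                      eps=1e-6):
    """Blend the two single-pair predictions, weighting each pair
    inversely by the coarse-scale change it must bridge, so the pair
    closer in state to the prediction date dominates."""
    pred_a = predict_from_pair(fine_a, coarse_a, coarse_tp)
    pred_b = predict_from_pair(fine_b, coarse_b, coarse_tp)
    w_a = 1.0 / (np.abs(coarse_tp - coarse_a) + eps)
    w_b = 1.0 / (np.abs(coarse_tp - coarse_b) + eps)
    return (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
```

With rapidly changing cover such as harvested alfalfa, bracketing the prediction date with two pairs lets the nearer-in-state pair dominate, which is consistent with the improvement the abstract reports for the dual-pair strategy.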
21 pages, 3947 KB  
Article
Land Surface Phenology Retrieval through Spectral and Angular Harmonization of Landsat-8, Sentinel-2 and Gaofen-1 Data
by Jun Lu, Tao He, Dan-Xia Song and Cai-Qun Wang
Remote Sens. 2022, 14(5), 1296; https://doi.org/10.3390/rs14051296 - 7 Mar 2022
Cited by 15 | Viewed by 4692
Abstract
Land surface phenology is an important characteristic of vegetation and can be informative of its response to climate change. However, satellite-based identification of vegetation transition dates is hindered by inconsistencies among observation platforms, including band settings, viewing angles, and scale effects. Therefore, time-series data with high consistency are necessary for monitoring vegetation phenology. This study proposes a data harmonization approach involving band conversion and bidirectional reflectance distribution function (BRDF) correction to create, from Landsat-8, Sentinel-2A, and Gaofen-1 (GF-1) satellite data, normalized reflectance with the same spectral bands and illumination-viewing geometry as the Moderate Resolution Imaging Spectroradiometer (MODIS) Nadir BRDF-Adjusted Reflectance (NBAR). The harmonized data are then fed into the spatial and temporal adaptive reflectance fusion model (STARFM) to produce time-series data with high spatio-temporal resolution. Finally, the transition dates of typical vegetation were estimated using regular 30 m spatial resolution data. The results show that the proposed data harmonization method improves the consistency of observations acquired under different viewing angles. The STARFM fusion result improved after differences in the input data were eliminated, and the accuracy of the remote-sensing-based vegetation transition date improved when the fused time-series curve was driven by harmonized data: the root mean square error (RMSE) of the estimated vegetation transition date decreased by 9.58 days. We conclude that data harmonization eliminates the viewing-angle effect and is essential for time-series vegetation monitoring through improved data fusion. Full article
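Band conversion of the kind described here is typically a per-band linear adjustment fitted against coincident observations of the reference sensor. A minimal sketch follows; the slope/offset coefficients and the `BAND_COEFFS` table are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical per-band (slope, offset) pairs mapping a source sensor's
# reflectance toward a MODIS-like reference; real coefficients would be
# fitted by regression on coincident, co-located observations.
BAND_COEFFS = {"red": (0.98, 0.004), "nir": (1.02, -0.003)}

def convert_band(reflectance, band):
    """Apply a linear spectral band adjustment: rho_ref = a * rho_src + b."""
    slope, offset = BAND_COEFFS[band]
    return slope * np.asarray(reflectance, float) + offset
```

BRDF normalization to a common illumination-viewing geometry would follow this spectral step before the harmonized series is passed to STARFM.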