Search Results (147)

Search Parameters:
Keywords = multi-sensor earth observation

22 pages, 6539 KiB  
Article
Development of a Multi-Sensor GNSS-IoT System for Precise Water Surface Elevation Measurement
by Jun Wang, Matthew C. Garthwaite, Charles Wang and Lee Hellen
Sensors 2025, 25(11), 3566; https://doi.org/10.3390/s25113566 - 5 Jun 2025
Viewed by 573
Abstract
The Global Navigation Satellite System (GNSS), Internet of Things (IoT) and cloud computing technologies enable high-precision positioning with flexible data communication, making real-time/near-real-time monitoring more economical and efficient. In this study, a multi-sensor GNSS-IoT system was developed for measuring precise water surface elevation (WSE). The system, which includes ultrasonic and accelerometer sensors, was deployed on a floating platform in Googong reservoir, Australia, over a four-month period in 2024. WSE data derived from the system were compared against independent reference measurements from the reservoir operator, achieving an accuracy of 7 mm for 6 h averaged solutions and 28 mm for epoch-by-epoch solutions. The results demonstrate the system’s potential for remote, autonomous WSE monitoring and its suitability for validating satellite Earth observation data, particularly from the Surface Water and Ocean Topography (SWOT) mission. Despite environmental challenges such as moderate gale conditions, the system maintained robust performance, with over 90% of solutions meeting quality assurance standards. This study highlights the advantages of combining the GNSS with IoT technologies and multiple sensors for cost-effective, long-term WSE monitoring in remote and dynamic environments. Future work will focus on optimizing accuracy and expanding applications to diverse aquatic settings. Full article
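The 7 mm figure above comes from condensing epoch-by-epoch WSE estimates into 6 h windows. A minimal sketch of such windowed averaging with quality-assurance filtering (window length from the abstract; all names, data, and the QA mechanism are illustrative, not the paper's implementation):

```python
import numpy as np

def average_wse(times_s, wse_m, qa_ok, window_s=6 * 3600):
    """Average epoch-by-epoch water surface elevation (WSE) solutions
    into fixed-length windows, skipping epochs that fail QA.
    Returns {window start time (s): mean WSE (m)}."""
    times_s = np.asarray(times_s, dtype=float)
    wse_m = np.asarray(wse_m, dtype=float)
    qa_ok = np.asarray(qa_ok, dtype=bool)
    bins = (times_s // window_s).astype(int)
    out = {}
    for b in np.unique(bins):
        sel = (bins == b) & qa_ok          # epochs in this window passing QA
        if sel.any():
            out[b * window_s] = float(wse_m[sel].mean())
    return out

# Synthetic hourly epochs over 12 h: constant 650.00 m level, one epoch
# flagged as bad by QA (e.g. a multipath outlier).
t = np.arange(0, 12 * 3600, 3600.0)
z = np.full(t.size, 650.0)
z[3] = 700.0
qa = np.ones(t.size, dtype=bool)
qa[3] = False
means = average_wse(t, z, qa)
```

The same pattern extends directly to the epoch-by-epoch product: just shrink `window_s`.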

10 pages, 1880 KiB  
Data Descriptor
Historical Bolide Infrasound Dataset (1960–1972)
by Elizabeth A. Silber and Rodney W. Whitaker
Data 2025, 10(5), 71; https://doi.org/10.3390/data10050071 - 9 May 2025
Viewed by 457
Abstract
We present the first fully curated, publicly accessible archive of infrasonic records from ten large bolide events documented by the U.S. Air Force Technical Applications Center’s global microbarometer network between 1960 and 1972. Captured on analog strip-chart paper, these waveforms predate modern digital arrays and space-based sensors, making them a unique window on meteoroid activity in the mid-twentieth century. Prior studies drew important scientific conclusions from the records but released only limited artifacts, chiefly period–amplitude tables and unprocessed scans, leaving the underlying data inaccessible for independent study. The present release transforms those limited excerpts into a research-ready resource. By capturing ten large events in the mid-20th century, the dataset constitutes a critical reference point for assessing bolide activity before the advent of modern space-based and digital ground-based monitoring. The multi-year coverage and worldwide distribution of events provide a valuable reference for comparing past and more recent detections, facilitating assessments of long-term flux and the dynamics of acoustic wave propagation in Earth’s atmosphere. The dataset’s availability in a consolidated format ensures straightforward access to waveforms and derived measurements, supporting a wide range of scientific inquiries into bolide physics and infrasound monitoring. By preserving these historical acoustic observations, the collection maintains a significant record of mid-20th-century meteoroid entries. It thereby establishes a basis for further refinement of impact hazard evaluations, contributes to historical continuity in atmospheric observation, and enriches the study of meteoroid-generated infrasound signals on a global scale. Full article

30 pages, 5699 KiB  
Article
Mission Sequence Model and Deep Reinforcement Learning-Based Replanning Method for Multi-Satellite Observation
by Peiyan Li, Peixing Cui and Huiquan Wang
Sensors 2025, 25(6), 1707; https://doi.org/10.3390/s25061707 - 10 Mar 2025
Cited by 1 | Viewed by 987
Abstract
With the rapid increase in the number of Earth Observation Satellites (EOSs), research on autonomous mission scheduling has become increasingly critical for optimizing satellite sensor operations. While most existing studies focus on static environments or initial planning states, few address the challenge of dynamic request replanning for real-time sensor management. In this paper, we tackle the problem of multi-satellite rapid mission replanning under dynamic batch-arrival observation requests. The objective is to maximize overall observation revenue while minimizing disruptions to the original scheme. We propose a framework that integrates stochastic master-satellite mission allocation with single-satellite replanning, supported by reactive scheduling policies trained via deep reinforcement learning. Our approach leverages mission sequence modeling with attention mechanisms and time-attitude-aware rotary positional encoding to guide replanning. Additionally, scalable embeddings are employed to handle varying volumes of dynamic requests. The mission allocation phase efficiently generates assignment solutions using a pointer network, while the replanning phase introduces a hybrid action space for direct task insertion. Both phases are formulated as Markov Decision Processes (MDPs) and optimized using the PPO algorithm. Extensive simulations demonstrate that our method significantly outperforms state-of-the-art approaches, achieving a 15.27% higher request insertion revenue rate and a 3.05% improvement in overall mission revenue rate, while maintaining a 1.17% lower modification rate and achieving faster computational speeds. This demonstrates the effectiveness of our approach in real-world satellite sensor applications. Full article
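The replanning phase above inserts dynamic requests into an existing schedule while keeping the modification rate low. As a rough illustration of that objective only — a greedy stand-in, not the paper's learned PPO policy — a zero-modification insertion check might look like:

```python
def try_insert(schedule, req_start, req_end, duration, revenue):
    """Place a dynamic request into the first idle gap inside its
    visibility window without moving any scheduled task (zero
    modifications), or reject it. `schedule` is a list of
    (start, end, revenue) tuples; times are illustrative units."""
    busy = sorted(schedule)
    cursor = req_start
    for s, e, _ in busy:
        # A gap before this task that fits the request ends the search.
        if s - cursor >= duration and cursor + duration <= req_end:
            break
        cursor = max(cursor, e)            # skip past the busy interval
    if cursor + duration > req_end:
        return schedule, False             # no feasible slot: reject
    new = sorted(busy + [(cursor, cursor + duration, revenue)])
    return new, True

sched = [(0, 10, 5.0), (20, 30, 4.0)]
sched2, ok = try_insert(sched, 5, 25, 8, 3.0)   # fits in the 10-20 gap
```

The learned policy in the paper additionally weighs revenue against disruption when insertion without modification is impossible; that trade-off is outside this sketch.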
(This article belongs to the Section Remote Sensors)

28 pages, 60546 KiB  
Article
Adapting Cross-Sensor High-Resolution Remote Sensing Imagery for Land Use Classification
by Wangbin Li, Kaimin Sun and Jinjiang Wei
Remote Sens. 2025, 17(5), 927; https://doi.org/10.3390/rs17050927 - 5 Mar 2025
Viewed by 1292
Abstract
High-resolution visible remote sensing imagery, as a fundamental contributor to Earth observation, has found extensive application in land use classification. However, the heterogeneous array of optical sensors, distinguished by their unique design architectures, exhibit disparate spectral responses and spatial distributions when observing ground objects. These discrepancies between multi-sensor data present a significant obstacle to the widespread application of intelligent methods. In this paper, we propose a method tailored to accommodate these disparities, with the aim of achieving a smooth transfer for the model across diverse sets of images captured by different sensors. Specifically, to address the discrepancies in spatial resolution, a novel positional encoding has been incorporated to capture the correlation between the spatial resolution details and the characteristics of ground objects. To tackle spectral disparities, random amplitude mixup augmentation is introduced to mitigate the impact of feature anisotropy resulting from discrepancies in low-level features between multi-sensor images. Additionally, we integrate convolutional neural networks and Transformers to enhance the model’s feature extraction capabilities, and employ a fine-tuning strategy with dynamic pseudo-labels to reduce the reliance on annotated data from the target domain. In the experimental section, the Gaofen-2 images (4 m) and the Sentinel-2 images (10 m) were selected as training and test datasets to simulate cross-sensor model transfer scenarios. Also, Google Earth images of Suzhou City, Jiangsu Province, were utilized for further validation. The results indicate that our approach effectively mitigates the degradation in model performance attributed to image source inconsistencies. Full article
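The random amplitude mixup mentioned above targets low-level spectral statistics. One common realization mixes the Fourier amplitude spectra of two images while keeping the source phase; the details below are illustrative, not the paper's exact augmentation:

```python
import numpy as np

def amplitude_mixup(src, ref, lam=0.5):
    """Blend the FFT amplitude of `src` with that of `ref` (mixing
    weight `lam`), keep the phase of `src`, and invert. This perturbs
    sensor-style statistics while preserving spatial structure."""
    fs, fr = np.fft.fft2(src), np.fft.fft2(ref)
    amp = (1 - lam) * np.abs(fs) + lam * np.abs(fr)
    mixed = amp * np.exp(1j * np.angle(fs))
    return np.real(np.fft.ifft2(mixed))

rng = np.random.default_rng(0)
a = rng.random((8, 8))                 # stand-in for a source-sensor patch
b = rng.random((8, 8))                 # stand-in for a reference-sensor patch
out = amplitude_mixup(a, b, lam=0.0)   # lam=0 reproduces the source image
```

In training, `lam` would typically be drawn at random per sample, which is where the "random" in the augmentation's name comes from.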

20 pages, 20159 KiB  
Article
High-Accuracy Mapping of Soil Organic Carbon by Mining Sentinel-1/2 Radar and Optical Time-Series Data with Super Ensemble Model
by Zhibo Cui, Songchao Chen, Bifeng Hu, Nan Wang, Jiaxiang Zhai, Jie Peng and Zijin Bai
Remote Sens. 2025, 17(4), 678; https://doi.org/10.3390/rs17040678 - 17 Feb 2025
Cited by 1 | Viewed by 1024
Abstract
Accurate digital soil organic carbon (SOC) mapping is of great significance for regulating the global carbon cycle and addressing climate change. With the advent of the remote sensing big data era, multi-source and multi-temporal remote sensing techniques have been extensively applied in Earth observation. However, how to fully mine multi-source remote sensing time-series data for high-accuracy digital SOC mapping remains a key challenge. To address this challenge, this study introduced a new idea for mining multi-source remote sensing time-series data. We used 413 topsoil organic carbon samples from southern Xinjiang, China, as an example. By mining multi-source (Sentinel-1/2) remote sensing time-series data from 2017 to 2023, we revealed the temporal variation pattern of the correlation between Sentinel-1/2 time-series data and SOC, thereby identifying the optimal time window for monitoring SOC using Sentinel-1/2 data. By integrating environmental covariates and a super ensemble model, we achieved high-accuracy mapping of SOC in southern Xinjiang, China. The results showed the following aspects: (1) The optimal time windows for monitoring SOC using Sentinel-1/2 data were July–September and July–August, respectively; (2) the modeling accuracy using multi-source sensor data integrated with environmental covariates was superior to using single-source sensor data integrated with environmental covariates alone. In the optimal model based on multi-source data, the cumulative contribution rate of Sentinel-2 data is 51.71% higher than that of Sentinel-1 data; (3) the stacking super ensemble model outperformed the weight-average and simple-average ensemble models. Therefore, mining the optimal time windows of multi-source remote sensing data and environmental covariates, driven by a super ensemble model, represents a high-accuracy strategy for digital SOC mapping. Full article
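The stacking ensemble contrasted above with weight-average and simple-average ensembles fits a meta-learner on the base models' predictions. A minimal sketch with a linear least-squares meta-learner (the paper's actual base learners and meta-model are not reproduced here; all data are synthetic):

```python
import numpy as np

def stack_predict(base_preds_train, y_train, base_preds_test):
    """Stacking sketch: fit a linear meta-learner (least squares with
    intercept) on the base models' training predictions, then combine
    their predictions on new data."""
    y_train = np.asarray(y_train, dtype=float)
    Xtr = np.column_stack([np.ones(len(y_train)), *base_preds_train])
    w, *_ = np.linalg.lstsq(Xtr, y_train, rcond=None)
    Xte = np.column_stack([np.ones(base_preds_test[0].shape[0]),
                           *base_preds_test])
    return Xte @ w

# Two imperfect base models: one biased by +1, one scaled by 2. The
# meta-learner can undo both and recover the true values exactly.
y_tr = np.array([1.0, 2.0, 3.0, 4.0])
p1_tr, p2_tr = y_tr + 1.0, 2.0 * y_tr
y_te = np.array([5.0, 6.0])
p1_te, p2_te = y_te + 1.0, 2.0 * y_te
pred = stack_predict([p1_tr, p2_tr], y_tr, [p1_te, p2_te])
```

In practice the meta-learner is fit on out-of-fold predictions to avoid leaking the training targets; that cross-validation step is omitted here for brevity.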

31 pages, 15726 KiB  
Article
Multi-Objective Manoeuvring Optimization for Multi-Satellite Responsive Earth Observation
by Annarita Argirò, Nicola Cimmino, Giorgio Isoletta, Roberto Opromolla and Giancarmine Fasano
Aerospace 2025, 12(2), 143; https://doi.org/10.3390/aerospace12020143 - 13 Feb 2025
Viewed by 785
Abstract
Many space missions require that an area of interest on the ground is observed in a timely manner. Several approaches have been proposed in the literature for this purpose, which involve modifying the ground track of an in-orbit satellite to overfly one or more Earth sites. Multi-satellite systems can clearly provide advantages for addressing this task in terms of responsiveness. In this context, this paper proposes a decision-making architecture to select the optimal manoeuvring or non-manoeuvring solution that enables a set of multiple sensor-equipped satellites in low Earth orbit to observe an area of interest in a timely fashion. For satellites that do not overfly the Earth site within the specified time period, dual coplanar impulsive manoeuvres are designed by applying a sensor-aware ground-track adjustment method. In particular, sensor footprints and percentage coverage of the assumed areas of interest are explicitly taken into account. A multi-objective optimization problem is then solved to determine which satellite provides the best solution to cover the area of interest in terms of fuel consumption (if ground-track adjustment is required) and time to overflight. Both simulated and real-world scenarios are considered to numerically validate the proposed methodology. Full article
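Selecting among candidate satellites with two competing objectives (fuel consumption and time to overflight) typically starts from the non-dominated set. An illustrative selection step only — the paper couples this with per-satellite manoeuvre design, which is not reproduced here:

```python
def pareto_front(solutions):
    """Keep the non-dominated (fuel, time-to-overflight) candidates:
    a solution is dominated if another is no worse in both objectives
    and strictly better in at least one."""
    front = []
    for i, (f_i, t_i) in enumerate(solutions):
        dominated = any(
            (f_j <= f_i and t_j <= t_i) and (f_j < f_i or t_j < t_i)
            for j, (f_j, t_j) in enumerate(solutions) if j != i
        )
        if not dominated:
            front.append((f_i, t_i))
    return front

# (fuel [kg], time to overflight [h]) per candidate; values are made up.
cands = [(0.0, 12.0), (5.0, 3.0), (6.0, 4.0), (2.0, 8.0)]
best = pareto_front(cands)   # (6.0, 4.0) is dominated by (5.0, 3.0)
```

Note that a non-manoeuvring solution appears naturally as a zero-fuel candidate with a (possibly long) time to overflight.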
(This article belongs to the Special Issue Deep Space Exploration)

25 pages, 6944 KiB  
Article
Representation Learning of Multi-Spectral Earth Observation Time Series and Evaluation for Crop Type Classification
by Andrea González-Ramírez, Clement Atzberger, Deni Torres-Roman and Josué López
Remote Sens. 2025, 17(3), 378; https://doi.org/10.3390/rs17030378 - 23 Jan 2025
Cited by 2 | Viewed by 1212
Abstract
Remote sensing (RS) spectral time series provide a substantial source of information for the regular and cost-efficient monitoring of the Earth’s surface. Important monitoring tasks include land use and land cover classification, change detection, forest monitoring and crop type identification, among others. To develop accurate solutions for RS-based applications, supervised shallow/deep learning algorithms are often used. However, such approaches usually require fixed-length inputs and large labeled datasets. Unfortunately, RS images acquired by optical sensors are frequently degraded by aerosol contamination, clouds and cloud shadows, resulting in missing observations and irregular observation patterns. To address these issues, efforts have been made to implement frameworks that generate meaningful representations from the irregularly sampled data streams and alleviate the deficiencies of the data sources and supervised algorithms. Here, we propose a conceptually and computationally simple representation learning (RL) approach based on autoencoders (AEs) to generate discriminative features for crop type classification. The proposed methodology includes a set of single-layer AEs with a very limited number of neurons, each one trained with the mono-temporal spectral features of a small set of samples belonging to a class, resulting in a model capable of processing very large areas in a short computational time. Importantly, the developed approach remains flexible with respect to the availability of clear temporal observations. The signal derived from the ensemble of AEs is the reconstruction difference vector between input samples and their corresponding estimations, which are averaged over all cloud-/shadow-free temporal observations of a pixel location. This averaged reconstruction difference vector is the base for the representations and the subsequent classification. Experimental results show that the proposed extremely light-weight architecture indeed generates separable features for competitive performances in crop type classification, as distance metric scores achieved with the derived representations significantly outperform those obtained with the initial data. Conventional classification models were trained and tested with representations generated from a widely used Sentinel-2 multi-spectral multi-temporal dataset, BreizhCrops. Our method achieved 77.06% overall accuracy, which is 6% higher than that achieved using original Sentinel-2 data within conventional classifiers and even 4% better than complex deep models such as OmniScaleCNN. Compared to extremely complex and time-consuming models such as Transformer and long short-term memory (LSTM), only a 3% reduction in overall accuracy was noted. Our method uses only 6.8k parameters, i.e., 400x fewer than OmniScaleCNN and 27x fewer than Transformer. The results prove that our method is competitive in terms of classification performance compared with state-of-the-art methods while substantially reducing the computational load. Full article
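The core representation described above — the reconstruction difference vector of each class-specific single-layer AE, averaged over a pixel's cloud-/shadow-free dates — can be sketched as follows. Weights are assumed pre-trained; all shapes, names, and data are illustrative:

```python
import numpy as np

def reconstruction_representation(pixels_t_b, clear_t, encoders):
    """For each class-specific linear AE (W_enc, W_dec), reconstruct the
    per-date spectra, take input minus reconstruction, and average the
    difference over the clear (cloud-/shadow-free) dates only.
    `pixels_t_b` is (dates, bands); returns one feature vector."""
    reps = []
    for W_enc, W_dec in encoders:
        clear_obs = pixels_t_b[clear_t]
        recon = clear_obs @ W_enc @ W_dec       # single-layer AE pass
        diff = clear_obs - recon
        reps.append(diff.mean(axis=0))          # average over clear dates
    return np.concatenate(reps)

rng = np.random.default_rng(1)
T, B, H = 10, 4, 2                              # dates, bands, hidden units
x = rng.random((T, B))
clear = np.array([0, 2, 5, 9])                  # indices of cloud-free dates
aes = [(rng.random((B, H)), rng.random((H, B))) for _ in range(3)]
feat = reconstruction_representation(x, clear, aes)
```

A pixel whose spectra a class's AE reconstructs well contributes a near-zero block to the feature vector, which is what makes the representation discriminative.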
(This article belongs to the Collection Sentinel-2: Science and Applications)

21 pages, 3681 KiB  
Article
Optimizing Deep Learning Models for Fire Detection, Classification, and Segmentation Using Satellite Images
by Abdallah Waleed Ali and Sefer Kurnaz
Fire 2025, 8(2), 36; https://doi.org/10.3390/fire8020036 - 21 Jan 2025
Cited by 2 | Viewed by 2807
Abstract
Earth observation (EO) satellites offer significant potential in wildfire detection and assessment due to their ability to provide fine spatial, temporal, and spectral resolutions. Over the past decade, satellite data have been systematically utilized to monitor wildfire dynamics and evaluate their impacts, leading to substantial advancements in wildfire management strategies. The present study contributes to this field by enhancing the frequency and accuracy of wildfire detection through advanced techniques for detecting, classifying, and segmenting wildfires using satellite imagery. Publicly available multi-sensor satellite data, such as Landsat, Sentinel-1, and Sentinel-2, from 2018 to 2020 were employed, providing temporal observation frequencies of up to five days, which represents a 25% increase compared to traditional monitoring approaches. Sophisticated algorithms were developed and implemented to improve the accuracy of fire detection while minimizing false alarms. The study evaluated the performance of three distinct models: an autoencoder, a U-Net, and a convolutional neural network (CNN), comparing their effectiveness in predicting wildfire occurrences. The results indicated that the CNN model demonstrated superior performance, achieving a fire detection accuracy of 82%, which is approximately 10% higher than the best-performing model in similar studies. This accuracy, coupled with the model’s ability to balance various performance metrics and learnable weights, positions it as a promising tool for real-time wildfire detection. The findings underscore the significant potential of optimized machine learning approaches in predicting extreme events, such as wildfires, and improving fire management strategies. Achieving 82% detection accuracy in real-world applications could drastically reduce response times, minimize the damage caused by wildfires, and enhance resource allocation for firefighting efforts, emphasizing the importance of continued research in this domain. Full article
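The two quantities traded off in this abstract — detection accuracy and false alarms — come straight from a confusion matrix. A minimal scoring sketch (the CNN itself is not reproduced; inputs are flat 0/1 fire/no-fire labels, and the example values are synthetic):

```python
def detection_scores(pred, truth):
    """Return (accuracy, false-alarm rate) for binary fire predictions.
    The false-alarm rate is false positives over all true negatives
    plus false positives."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    acc = (tp + tn) / len(truth)
    far = fp / (fp + tn) if (fp + tn) else 0.0
    return acc, far

acc, far = detection_scores([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```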

22 pages, 33216 KiB  
Article
Characterizing Sparse Spectral Diversity Within a Homogenous Background: Hydrocarbon Production Infrastructure in Arctic Tundra near Prudhoe Bay, Alaska
by Daniel Sousa, Latha Baskaran, Kimberley Miner and Elizabeth Josephine Bushnell
Remote Sens. 2025, 17(2), 244; https://doi.org/10.3390/rs17020244 - 11 Jan 2025
Viewed by 1180
Abstract
We explore a new approach for the parsimonious, generalizable, efficient, and potentially automatable characterization of spectral diversity of sparse targets in spectroscopic imagery. The approach focuses on pixels which are not well modeled by linear subpixel mixing of the Substrate, Vegetation and Dark (S, V, and D) endmember spectra which dominate spectral variance for most of Earth’s land surface. We illustrate the approach using AVIRIS-3 imagery of anthropogenic surfaces (primarily hydrocarbon extraction infrastructure) embedded in a background of Arctic tundra near Prudhoe Bay, Alaska. Computational experiments further explore sensitivity to spatial and spectral resolution. Analysis involves two stages: first, computing the mixture residual of a generalized linear spectral mixture model; and second, nonlinear dimensionality reduction via manifold learning. Anthropogenic targets and lakeshore sediments are successfully isolated from the Arctic tundra background. Dependence on spatial resolution is observed, with substantial degradation of manifold topology as images are blurred from 5 m native ground sampling distance to the simulated 30 m ground projected instantaneous field of view of a hypothetical spaceborne sensor. Degrading spectral resolution to mimic the Sentinel-2A MultiSpectral Imager (MSI) also results in loss of information but is less severe than spatial blurring. These results inform spectroscopic characterization of sparse targets using spectroscopic images of varying spatial and spectral resolution. Full article
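The first stage described above, the mixture residual, amounts to fitting a linear combination of the S, V, and D endmember spectra to each pixel and keeping what the fit cannot explain. A sketch under stated assumptions — the endmember spectra below are synthetic stand-ins, and the paper's generalized model includes refinements not shown here:

```python
import numpy as np

def mixture_residual(pixel, endmembers):
    """Fit the pixel spectrum as a least-squares linear mixture of the
    endmember spectra and return the residual (pixel minus best fit).
    Sparse targets show up as pixels with a large residual."""
    E = np.column_stack(endmembers)          # bands x endmembers
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    return pixel - E @ f

# Toy 6-band spectra (not real S/V/D libraries).
S = np.ones(6)                # flat "substrate"
V = np.linspace(0.4, 2.5, 6)  # sloped "vegetation"
D = np.full(6, 0.01)          # near-zero "dark"
mixed = 0.6 * S + 0.3 * V     # a pure S/V mixture -> near-zero residual
r = mixture_residual(mixed, [S, V, D])
```

A pixel containing an anthropogenic material would leave a structured residual spectrum, which is then passed to the manifold-learning stage.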

20 pages, 18304 KiB  
Article
Assessment of Radiometric Calibration Consistency of Thermal Emissive Bands Between Terra and Aqua Moderate-Resolution Imaging Spectroradiometers
by Tiejun Chang, Xiaoxiong Xiong, Carlos Perez Diaz, Aisheng Wu and Hanzhi Lin
Remote Sens. 2025, 17(2), 182; https://doi.org/10.3390/rs17020182 - 7 Jan 2025
Viewed by 749
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) sensors onboard the Terra and Aqua spacecraft have been in orbit for over 24 and 22 years, respectively, providing continuous observations of the Earth’s surface. Among the instrument’s 36 bands, 16 of them are thermal emissive bands (TEBs) with wavelengths that range from 3.75 to 14.24 μm. Routine post-launch calibrations are performed using the sensor’s onboard blackbody and space view port, the moon, and vicarious targets that include the ocean, Dome Concordia (Dome C) in Antarctica, and quasi-deep convective clouds (DCC). The calibration consistency between the satellite measurements from the two instruments is essential in generating a multi-year data record for the long-term monitoring of the Earth’s Level 1B (L1B) data. This paper presents the Terra and Aqua MODIS TEB comparison for the upcoming Collection 7 (C7) L1B products using measurements over Dome C and the ocean, as well as the double difference via simultaneous nadir overpasses with the Infrared Atmospheric Sounding Interferometer (IASI) sensor. The mission-long trending of the Terra and Aqua MODIS TEB is presented, and their cross-comparison is also presented and discussed. Results show that the calibration of the two MODIS sensors and their respective Earth measurements are generally consistent and within their design specifications. Due to the electronic crosstalk contamination, the PV LWIR bands show slightly larger drifts for both MODIS instruments across different Earth measurements. These drifts also have an impact on the Terra-to-Aqua calibration consistency. This thorough assessment serves as a robust record containing a summary of the MODIS calibration performance and the consistency between the two MODIS sensors over Earth view retrievals. Full article
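The double difference via simultaneous nadir overpasses (SNO) mentioned above cancels the common IASI reference, isolating the Terra-to-Aqua bias. A one-line sketch with made-up brightness temperatures:

```python
def double_difference(terra_bt, aqua_bt, iasi_bt_terra, iasi_bt_aqua):
    """Difference each MODIS brightness temperature (K) against the
    IASI measurement from its own simultaneous nadir overpass, then
    difference the results; the common reference cancels."""
    return (terra_bt - iasi_bt_terra) - (aqua_bt - iasi_bt_aqua)

# Illustrative values only: Terra reads 0.30 K above IASI, Aqua 0.10 K
# above IASI, so the Terra-to-Aqua double difference is 0.20 K.
dd = double_difference(270.30, 270.10, 270.00, 270.00)
```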

21 pages, 5465 KiB  
Article
Deep Learning Approaches for Wildfire Severity Prediction: A Comparative Study of Image Segmentation Networks and Visual Transformers on the EO4WildFires Dataset
by Dimitris Sykas, Dimitrios Zografakis and Konstantinos Demestichas
Fire 2024, 7(11), 374; https://doi.org/10.3390/fire7110374 - 23 Oct 2024
Cited by 1 | Viewed by 3432
Abstract
This paper investigates the applicability of deep learning models for predicting the severity of forest wildfires, utilizing an innovative benchmark dataset called EO4WildFires. EO4WildFires integrates multispectral imagery from Sentinel-2, SAR data from Sentinel-1, and meteorological data from NASA Power annotated with EFFIS data for forest fire detection and size estimation. These data cover 45 countries with a total of 31,730 wildfire events from 2018 to 2022. All of these various sources of data are archived into data cubes, with the intention of assessing wildfire severity by considering both current and historical forest conditions, utilizing a broad range of data including temperature, precipitation, and soil moisture. The experimental setup has been arranged to test the effectiveness of different deep learning architectures in predicting the size and shape of wildfire-burned areas. This study incorporates both image segmentation networks and visual transformers, employing a consistent experimental design across various models to ensure the comparability of the results. Adjustments were made to the training data, such as the exclusion of empty labels and very small events, to refine the focus on more significant wildfire events and potentially improve prediction accuracy. The models’ performance was evaluated using metrics like F1 score, IoU score, and Average Percentage Difference (aPD). These metrics offer a multi-faceted view of model performance, assessing aspects such as precision, sensitivity, and the accuracy of the burned area estimation. Through extensive testing of the final model, utilizing LinkNet and ResNet-34 as backbones, we obtained the following metric results on the test set: 0.86 F1 score, 0.75 IoU, and 70% aPD. These results were obtained when all of the available samples were used. When the empty labels were absent during the training and testing, the model increased its performance significantly: 0.87 F1 score, 0.77 IoU, and 44.8% aPD. This indicates that the number of samples, as well as their respective size (area), tends to have an impact on the model’s robustness. This restriction is well known in the remote sensing domain, as accessible, accurately labeled data may be limited. Visual transformers like TeleViT showed potential but underperformed compared to segmentation networks in terms of F1 and IoU scores. Full article
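The three metrics reported in this abstract can be computed directly from binary burned-area masks. A minimal sketch (flat 0/1 sequences stand in for the masks; the aPD definition here — absolute percentage difference of burned-pixel counts — is a plausible reading, not quoted from the paper):

```python
def burn_metrics(pred, truth):
    """F1, IoU, and absolute Percentage Difference (aPD) of burned
    area for binary masks given as flat 0/1 sequences."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    apd = abs(sum(pred) - sum(truth)) / sum(truth) * 100
    return f1, iou, apd

f1, iou, apd = burn_metrics([1, 1, 1, 0], [1, 1, 0, 0])
```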

19 pages, 6418 KiB  
Article
Evaluating Sugarcane Yield Estimation in Thailand Using Multi-Temporal Sentinel-2 and Landsat Data Together with Machine-Learning Algorithms
by Jaturong Som-ard, Savittri Ratanopad Suwanlee, Dusadee Pinasu, Surasak Keawsomsee, Kemin Kasa, Nattawut Seesanhao, Sarawut Ninsawat, Enrico Borgogno-Mondino and Filippo Sarvia
Land 2024, 13(9), 1481; https://doi.org/10.3390/land13091481 - 13 Sep 2024
Cited by 1 | Viewed by 3657
Abstract
Updated and accurate crop yield maps play a key role in the agricultural environment. Their application enables the support for sustainable agricultural practices and the formulation of effective strategies to mitigate the impacts of climate change. Farmers can apply the maps to gain an overview of the yield variability, improving farm management practices and optimizing inputs, such as fertilizers, to increase productivity and sustainability. Earth observation (EO) data make it possible to map crop yield estimations over large areas, although this will remain challenging for specific crops such as sugarcane. Yield data collection is an expensive and time-consuming practice that often limits the number of samples collected. In this study, the sugarcane yield estimation based on a small number of training datasets within smallholder crop systems in the Tha Khan Tho District, Thailand for the year 2022 was assessed. Specifically, multi-temporal satellite datasets from multiple sensors, including Sentinel-2 and Landsat 8/9, were involved. Moreover, in order to generate the sugarcane yield estimation maps, only 75 sampling plots were selected and surveyed to provide training and validation data for several powerful machine-learning algorithms, including multiple linear regression (MLR), stepwise multiple regression (SMR), partial least squares regression (PLS), random forest regression (RFR), and support vector regression (SVR). Among these algorithms, the RFR model demonstrated outstanding performance, yielding an excellent result compared to existing techniques, achieving an R-squared (R2) value of 0.79 and a root mean square error (RMSE) of 3.93 t/ha (per 10 m × 10 m pixel). Furthermore, the mapped yields across the region closely aligned with the official statistical data from the Office of the Cane and Sugar Board (with a range value of 36,000 tons). Finally, the sugarcane yield estimation model was applied to over 2100 sugarcane fields in order to provide an overview of the current state of the yield and total production in the area. In this work, the different yield rates at the field level were highlighted, providing a powerful workflow for mapping sugarcane yields across large regions, supporting sugarcane crop management and facilitating decision-making processes. Full article
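The R2 and RMSE figures used above to rank the regression models have standard definitions. A small scoring sketch with synthetic plot-level yields (the values are illustrative, not from the study):

```python
import numpy as np

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root mean square error
    (RMSE), in the units of the inputs (t/ha in the study)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return float(1.0 - ss_res / ss_tot), rmse

# Toy yields (t/ha): errors of +/-2 t/ha around the observed values.
r2, rmse = r2_rmse([50.0, 60.0, 70.0], [52.0, 58.0, 72.0])
```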
23 pages, 12771 KiB  
Article
Harmonized Landsat and Sentinel-2 Data with Google Earth Engine
by Elias Fernando Berra, Denise Cybis Fontana, Feng Yin and Fabio Marcelo Breunig
Remote Sens. 2024, 16(15), 2695; https://doi.org/10.3390/rs16152695 - 23 Jul 2024
Cited by 10 | Viewed by 10650
Abstract
Continuous and dense time series of satellite remote sensing data are needed for several land monitoring applications, including vegetation phenology, in-season crop assessments, and improving land use and land cover classification. Supporting such applications at medium to high spatial resolution may be challenging with a single optical satellite sensor, as the frequency of good-quality observations can be low. To optimize good-quality data availability, some studies propose harmonized databases. This work aims at developing an ‘all-in-one’ Google Earth Engine (GEE) web-based workflow to produce harmonized surface reflectance data from Landsat-7 (L7) ETM+, Landsat-8 (L8) OLI, and Sentinel-2 (S2) MSI top of atmosphere (TOA) reflectance data. Six major processing steps to generate a new source of near-daily Harmonized Landsat and Sentinel (HLS) reflectance observations at 30 m spatial resolution are proposed and described: band adjustment, atmospheric correction, cloud and cloud shadow masking, view and illumination angle adjustment, co-registration, and reprojection and resampling. The HLS processing is applied to six equivalent spectral bands, resulting in a surface nadir BRDF-adjusted reflectance (NBAR) time series gridded to a common pixel resolution, map projection, and spatial extent. The spectrally corresponding bands and the derived Normalized Difference Vegetation Index (NDVI) were compared, and their sensor differences were quantified by regression analyses. Examples of HLS time series are presented for two potential applications: agricultural and forest phenology. The HLS product is also validated against ground measurements of NDVI, achieving very similar temporal trajectories and magnitudes of values (R2 = 0.98). The workflow and script presented in this work may be useful for the scientific community aiming to take advantage of multi-sensor harmonized time series of optical data. Full article
(This article belongs to the Section Forest Remote Sensing)
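The band adjustment and NDVI steps of such a harmonization workflow can be sketched as follows. The linear adjustment coefficients and reflectance values below are illustrative placeholders, not the coefficients derived or used in the article.

```python
import numpy as np

def band_adjust(reflectance, slope, intercept):
    """Linear cross-sensor band adjustment: r_adjusted = slope * r + intercept.
    Coefficients here are made up for illustration."""
    return slope * reflectance + intercept

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Hypothetical Sentinel-2 reflectance values for one vegetated pixel
s2_red, s2_nir = 0.05, 0.40

# Adjust the S2 bands toward Landsat-8 OLI equivalents (invented coefficients)
l8_like_red = band_adjust(s2_red, 0.98, 0.002)
l8_like_nir = band_adjust(s2_nir, 1.01, -0.001)

ndvi_s2 = ndvi(s2_nir, s2_red)
ndvi_l8_like = ndvi(l8_like_nir, l8_like_red)
print(f"NDVI (S2): {ndvi_s2:.3f}, NDVI (L8-adjusted): {ndvi_l8_like:.3f}")
```

In the actual GEE workflow these operations run per band over whole image collections; the scalar version above only makes the arithmetic of the harmonization step visible.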
20 pages, 10101 KiB  
Article
An Invariant Filtering Method Based on Frame Transformed for Underwater INS/DVL/PS Navigation
by Can Wang, Chensheng Cheng, Chun Cao, Xinyu Guo, Guang Pan and Feihu Zhang
J. Mar. Sci. Eng. 2024, 12(7), 1178; https://doi.org/10.3390/jmse12071178 - 13 Jul 2024
Cited by 3 | Viewed by 1729
Abstract
Underwater vehicles heavily depend on the integration of inertial navigation with the Doppler Velocity Log (DVL) for fusion-based localization. Given the constraints imposed by sensor costs, ensuring the optimization ability and robustness of fusion algorithms is of paramount importance. While filtering-based techniques such as the Extended Kalman Filter (EKF) offer mature solutions to nonlinear problems, their reliance on linearization approximations may compromise final accuracy. Recently, Invariant EKF (IEKF) methods based on the concept of smooth manifolds have emerged to address this limitation. However, optimization on matrix Lie groups must satisfy the “group affine” property to ensure state independence, which constrains the applicability of the IEKF to high-precision positioning in underwater multi-sensor fusion. In this study, an alternative state-independent underwater fusion invariant filtering approach is proposed, based on a two-frame group using a DVL, an Inertial Measurement Unit (IMU), and an Earth-Centered Earth-Fixed (ECEF) configuration. This methodology circumvents the necessity for the group affine property in the presence of biases. We account for inertial biases and DVL lever-arm effects, achieving convergence in an imperfect IEKF using either fixed-frame or body-frame observation information. Through simulations and time-synchronized real datasets, we demonstrate the effectiveness and robustness of the proposed algorithm. Full article
(This article belongs to the Special Issue Autonomous Marine Vehicle Operations—2nd Edition)
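Two ingredients mentioned in this abstract, lever-arm compensation and expressing a body-frame DVL velocity in a fixed frame, can be sketched minimally as below. The attitude, lever arm, and angular-rate values are invented, and only a yaw rotation is used for brevity; the article's actual method operates on a two-frame matrix Lie group, which this snippet does not attempt to reproduce.

```python
import numpy as np

def rot_z(yaw):
    """Rotation about the z-axis (a full INS attitude uses roll/pitch/yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical DVL velocity measured in the vehicle body frame (m/s)
v_body = np.array([1.5, 0.1, -0.05])

# Hypothetical lever arm from IMU to DVL (m) and body angular rate (rad/s)
lever_arm = np.array([0.3, 0.0, -0.1])
omega = np.array([0.0, 0.0, 0.02])

# Lever-arm compensation: v_imu = v_dvl - omega x r
v_compensated = v_body - np.cross(omega, lever_arm)

# Express the compensated velocity in a fixed (e.g. ECEF-aligned) frame
R = rot_z(np.deg2rad(30.0))
v_fixed = R @ v_compensated
print(v_fixed)
```

Note that the rotation preserves the speed of the vehicle, which is a quick sanity check on any frame transformation of this kind.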
21 pages, 10773 KiB  
Article
A Synthetic Aperture Radar-Based Robust Satellite Technique (RST) for Timely Mapping of Floods
by Meriam Lahsaini, Felice Albano, Raffaele Albano, Arianna Mazzariello and Teodosio Lacava
Remote Sens. 2024, 16(12), 2193; https://doi.org/10.3390/rs16122193 - 17 Jun 2024
Cited by 7 | Viewed by 2376
Abstract
Satellite data have been widely utilized for flood detection and mapping tasks, and in recent years there has been growing interest in using Synthetic Aperture Radar (SAR) data due to the increased availability of recent missions with enhanced temporal resolution. This capability, combined with the inherent advantages of SAR technology over optical sensors, such as spatial resolution and independence from weather conditions, allows for timely and accurate information on flood event dynamics. In this study, we present an innovative automated approach, SAR-RST-FLOOD, for mapping flooded areas using SAR data. Based on a multi-temporal analysis of Sentinel-1 data, this approach allows for robust and automatic identification of flooded areas. To assess its reliability and accuracy, we analyzed five case studies in areas where floods caused significant damage. Performance metrics, such as overall accuracy (OA), user's accuracy (UA), and producer's accuracy (PA), as well as the Kappa index (K), were used to evaluate the methodology against several reference flood maps. The results demonstrate a user's accuracy exceeding 0.78 for each test map when compared to the observed flood data. Additionally, the overall accuracy values surpassed 0.96, and the Kappa index values exceeded 0.78 when compared to the mapping processes from observed data or other reference datasets from the Copernicus Emergency Management System. Considering these results and the fact that the proposed approach has been implemented within the Google Earth Engine framework, its potential for global-scale applications is evident. Full article
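The accuracy metrics used in this evaluation (OA, UA, PA, and Kappa) are all derived from a confusion matrix, as sketched below. The pixel counts are hypothetical and are not taken from the article's case studies.

```python
import numpy as np

def flood_map_metrics(cm):
    """Accuracy metrics from a 2x2 confusion matrix.
    Rows: reference classes (non-flood, flood); columns: mapped classes."""
    total = cm.sum()
    oa = np.trace(cm) / total                     # overall accuracy
    ua = np.diag(cm) / cm.sum(axis=0)             # user's accuracy, per mapped class
    pa = np.diag(cm) / cm.sum(axis=1)             # producer's accuracy, per reference class
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)                # Cohen's Kappa
    return oa, ua, pa, kappa

# Hypothetical pixel counts for one flood map vs. a reference map
cm = np.array([[900, 30],
               [20, 150]])
oa, ua, pa, kappa = flood_map_metrics(cm)
print(f"OA={oa:.3f}, UA(flood)={ua[1]:.3f}, PA(flood)={pa[1]:.3f}, K={kappa:.3f}")
```

User's accuracy answers "how often is a mapped flood pixel really flooded?" while producer's accuracy answers "how much of the true flood was detected?", which is why both are reported alongside OA and Kappa.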
