Search Results (976)

Search Parameters:
Keywords = operational land imager

22 pages, 4169 KiB  
Article
Multi-Scale Differentiated Network with Spatial–Spectral Co-Operative Attention for Hyperspectral Image Denoising
by Xueli Chang, Xiaodong Wang, Xiaoyu Huang, Meng Yan and Luxiao Cheng
Appl. Sci. 2025, 15(15), 8648; https://doi.org/10.3390/app15158648 - 5 Aug 2025
Abstract
Hyperspectral image (HSI) denoising is a crucial step in image preprocessing as its effectiveness has a direct impact on the accuracy of subsequent tasks such as land cover classification, target recognition, and change detection. However, existing methods suffer from limitations in effectively integrating multi-scale features and adaptively modeling complex noise distributions, making it difficult to construct effective spatial–spectral joint representations. This often leads to issues like detail loss and spectral distortion, especially when dealing with complex mixed noise. To address these challenges, this paper proposes a multi-scale differentiated denoising network based on spatial–spectral cooperative attention (MDSSANet). The network first constructs a multi-scale image pyramid using three downsampling operations and independently models the features at each scale to better capture noise characteristics at different levels. Additionally, a spatial–spectral cooperative attention module (SSCA) and a differentiated multi-scale feature fusion module (DMF) are introduced. The SSCA module effectively captures cross-spectral dependencies and spatial feature interactions through parallel spectral channel and spatial attention mechanisms. The DMF module adopts a multi-branch parallel structure with differentiated processing to dynamically fuse multi-scale spatial–spectral features and incorporates a cross-scale feature compensation strategy to improve feature representation and mitigate information loss. The experimental results show that the proposed method outperforms state-of-the-art methods across several public datasets, exhibiting greater robustness and superior visual performance in tasks such as handling complex noise and recovering small targets. Full article
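As a concrete illustration of the pyramid construction step described in the abstract, the sketch below builds a three-level multi-scale pyramid from an HSI cube via repeated 2×2 average-pooling downsampling. The pooling operator and function names are assumptions for illustration; the paper does not specify its downsampling operation here.

```python
import numpy as np

def downsample2x(img):
    """Halve height and width by 2x2 average pooling (assumes even H, W)."""
    h, w = img.shape[:2]
    return img.reshape(h // 2, 2, w // 2, 2, *img.shape[2:]).mean(axis=(1, 3))

def build_pyramid(cube, levels=3):
    """Return [full-res, 1/2, 1/4, 1/8] views of an (H, W, bands) HSI cube."""
    pyramid = [cube]
    for _ in range(levels):
        pyramid.append(downsample2x(pyramid[-1]))
    return pyramid
```

Each level can then be denoised independently, as the network does, before the multi-scale features are fused.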
(This article belongs to the Special Issue Remote Sensing Image Processing and Application, 2nd Edition)

36 pages, 9354 KiB  
Article
Effects of Clouds and Shadows on the Use of Independent Component Analysis for Feature Extraction
by Marcos A. Bosques-Perez, Naphtali Rishe, Thony Yan, Liangdong Deng and Malek Adjouadi
Remote Sens. 2025, 17(15), 2632; https://doi.org/10.3390/rs17152632 - 29 Jul 2025
Abstract
One of the persistent challenges in multispectral image analysis is the interference caused by dense cloud cover and its resulting shadows, which can significantly obscure surface features. This becomes especially problematic when attempting to monitor surface changes over time using satellite imagery, such as from Landsat-8. In this study, rather than simply masking visual obstructions, we aimed to investigate the role and influence of clouds within the spectral data itself. To achieve this, we employed Independent Component Analysis (ICA), a statistical method capable of decomposing mixed signals into independent source components. By applying ICA to selected Landsat-8 bands and analyzing each component individually, we assessed the extent to which cloud signatures are entangled with surface data. This process revealed that clouds contribute to multiple ICA components simultaneously, indicating their broad spectral influence. With this influence on multiple wavebands, we managed to configure a set of components that could perfectly delineate the extent and location of clouds. Moreover, because Landsat-8 lacks cloud-penetrating wavebands, such as those in the microwave range (e.g., SAR), the surface information beneath dense cloud cover is not captured at all, making it physically impossible for ICA to recover what is not sensed in the first place. Despite these limitations, ICA proved effective in isolating and delineating cloud structures, allowing us to selectively suppress them in reconstructed images. Additionally, the technique successfully highlighted features such as water bodies, vegetation, and color-based land cover differences. These findings suggest that while ICA is a powerful tool for signal separation and cloud-related artifact suppression, its performance is ultimately constrained by the spectral and spatial properties of the input data. Future improvements could be realized by integrating data from complementary sensors, especially those operating in cloud-penetrating wavelengths, or by using higher spectral resolution imagery with narrower bands. Full article
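The per-band decomposition described above can be sketched with scikit-learn's FastICA, treating each pixel's spectral vector as one mixed observation and returning one source image per component. The band stacking and function name are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_components(bands, n_components=None, seed=0):
    """Unmix co-registered band images into independent source images.

    bands: array of shape (n_bands, H, W); each pixel's spectral vector
    is treated as one mixed observation.
    """
    n_bands, h, w = bands.shape
    X = bands.reshape(n_bands, -1).T                        # (pixels, bands)
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    sources = ica.fit_transform(X)                          # (pixels, components)
    return sources.T.reshape(-1, h, w)                      # one image per source
```

Inspecting each returned component image separately is how one would then judge which components carry cloud signatures.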
(This article belongs to the Section Environmental Remote Sensing)

18 pages, 3315 KiB  
Article
Real-Time Geo-Localization for Land Vehicles Using LIV-SLAM and Referenced Satellite Imagery
by Yating Yao, Jing Dong, Songlai Han, Haiqiao Liu, Quanfu Hu and Zhikang Chen
Appl. Sci. 2025, 15(15), 8257; https://doi.org/10.3390/app15158257 - 24 Jul 2025
Abstract
Existing Simultaneous Localization and Mapping (SLAM) algorithms provide precise local pose estimation and real-time scene reconstruction, and are widely applied in autonomous navigation for land vehicles. However, the odometry of SLAM algorithms exhibits localization drift and error divergence over long-distance operations due to the lack of inherent global constraints. In this paper, we propose a real-time geo-localization method for land vehicles, which relies only on LiDAR-inertial-visual SLAM (LIV-SLAM) and a referenced image. The proposed method enables long-distance navigation without requiring GPS or loop closure, while eliminating accumulated localization errors. To achieve this, the local map constructed by SLAM is projected in real time onto a downward-view image, and a highly efficient cross-modal matching algorithm is proposed to estimate the global position by aligning the projected local image to a geo-referenced satellite image. The cross-modal algorithm leverages dense texture orientation features, ensuring robustness against cross-modal distortion and local scene changes, and supports efficient correlation in the frequency domain for real-time performance. We also propose a novel adaptive Kalman filter (AKF) to integrate the global position provided by the cross-modal matching and the pose estimated by LIV-SLAM. The proposed AKF is designed to effectively handle observation delays and asynchronous updates while rejecting the impact of erroneous matches through an Observation-Aware Gain Scaling (OAGS) mechanism. We verify the proposed algorithm on the R3LIVE and NCLT datasets, demonstrating superior computational efficiency, reliability, and accuracy compared to existing methods. Full article
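The gain-scaling idea can be illustrated with a minimal scalar-measurement Kalman update in which the gain is shrunk once the normalized innovation squared exceeds a gate. This is a sketch of the concept only: the paper's actual OAGS rule, state dimensions, and delay handling are not given here, and all names are hypothetical.

```python
import numpy as np

def kf_update_oags(x, P, z, H, R, gate=9.0):
    """Scalar-measurement Kalman update with innovation-gated gain scaling.

    Illustrative sketch: when the normalized innovation squared (NIS) exceeds
    `gate`, the Kalman gain is shrunk in proportion, down-weighting
    likely-erroneous global-position fixes. Not the paper's exact OAGS rule.
    """
    x = np.asarray(x, dtype=float)
    P = np.asarray(P, dtype=float)
    H = np.asarray(H, dtype=float).reshape(1, -1)
    y = (z - H @ x).item()                 # innovation
    S = (H @ P @ H.T + R).item()           # innovation variance
    nis = y * y / S
    scale = 1.0 if nis <= gate else gate / nis
    K = (P @ H.T / S) * scale              # scaled Kalman gain, shape (n, 1)
    x_new = x + K.ravel() * y
    P_new = (np.eye(x.size) - K @ H) @ P
    return x_new, P_new
```

A wildly inconsistent match (huge innovation) then barely moves the state, while consistent fixes are fused at full gain.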
(This article belongs to the Special Issue Navigation and Positioning Based on Multi-Sensor Fusion Technology)

22 pages, 16984 KiB  
Article
Small Ship Detection Based on Improved Neural Network Algorithm and SAR Images
by Jiaqi Li, Hongyuan Huo, Li Guo, De Zhang, Wei Feng, Yi Lian and Long He
Remote Sens. 2025, 17(15), 2586; https://doi.org/10.3390/rs17152586 - 24 Jul 2025
Abstract
Synthetic aperture radar (SAR) images can be used for ship target detection. However, because ship outlines in SAR images are unclear, noise and land background factors increase the difficulty and reduce the accuracy of ship detection, especially for small target ships. Therefore, based on the YOLOv5s model, this paper improves its backbone network and feature fusion network to increase the accuracy of ship target recognition. First, the LSKModule is used to improve the backbone network of YOLOv5s: by adaptively aggregating features extracted by large-size convolution kernels, it fully obtains context information while enhancing key features and suppressing noise interference. Secondly, multiple Depthwise Separable Convolution layers are added to the SPPF (Spatial Pyramid Pooling-Fast) structure; although a small number of parameters and calculations are introduced, features of different receptive fields can be extracted. Third, the feature fusion network of YOLOv5s is improved based on BiFPN, and the shallow feature map is used to optimize small target detection performance. Finally, the CoordConv module is added before the detect head of YOLOv5, with two coordinate channels added during the convolution operation to further improve the accuracy of target detection. The mAP50 of this method reached 97.6% on the SSDD dataset and 91.7% on the HRSID dataset, and it was compared with a variety of advanced target detection models. The results show that the detection accuracy of this method is higher than that of other similar target detection algorithms. Full article

19 pages, 51503 KiB  
Article
LSANet: Lightweight Super Resolution via Large Separable Kernel Attention for Edge Remote Sensing
by Tingting Yong and Xiaofang Liu
Appl. Sci. 2025, 15(13), 7497; https://doi.org/10.3390/app15137497 - 3 Jul 2025
Abstract
In recent years, remote sensing imagery has become indispensable for applications such as environmental monitoring, land use classification, and urban planning. However, the physical constraints of satellite imaging systems frequently limit the spatial resolution of these images, impeding the extraction of fine-grained information critical to downstream tasks. Super-resolution (SR) techniques thus emerge as a pivotal solution to enhance the spatial fidelity of remote sensing images via computational approaches. While deep learning-based SR methods have advanced reconstruction accuracy, their high computational complexity and large parameter counts restrict practical deployment in real-world remote sensing scenarios, particularly on edge or low-power devices. To address this gap, we propose LSANet, a lightweight SR network customized for remote sensing imagery. The core of LSANet is the large separable kernel attention mechanism, which efficiently expands the receptive field while retaining low computational overhead. By integrating this mechanism into an enhanced residual feature distillation module, the network captures long-range dependencies more effectively than traditional shallow residual blocks. Additionally, a residual feature enhancement module, leveraging contrast-aware channel attention and hierarchical skip connections, strengthens the extraction and integration of multi-level discriminative features. This design preserves fine textures and ensures smooth information propagation across the network. Extensive experiments on public datasets such as UC Merced Land Use and NWPU-RESISC45 demonstrate LSANet’s competitive or superior performance compared to state-of-the-art methods. On the UC Merced Land Use dataset, LSANet achieves a PSNR of 34.33, outperforming the best baseline, HSENet (34.23), by 0.1 dB. For SSIM, LSANet reaches 0.9328, closely matching HSENet’s 0.9332 while demonstrating excellent metric-balancing performance. On the NWPU-RESISC45 dataset, LSANet attains a PSNR of 35.02, marking a significant improvement over prior methods, and an SSIM of 0.9305, maintaining strong competitiveness. These results, combined with a notable reduction in parameters and floating-point operations, highlight the superiority of LSANet in remote sensing image super-resolution tasks. Full article

18 pages, 3896 KiB  
Article
The Contribution of Meteosat Third Generation–Flexible Combined Imager (MTG-FCI) Observations to the Monitoring of Thermal Volcanic Activity: The Mount Etna (Italy) February–March 2025 Eruption
by Carolina Filizzola, Giuseppe Mazzeo, Francesco Marchese, Carla Pietrapertosa and Nicola Pergola
Remote Sens. 2025, 17(12), 2102; https://doi.org/10.3390/rs17122102 - 19 Jun 2025
Abstract
The Flexible Combined Imager (FCI) aboard the Meteosat Third Generation (MTG-I) geostationary satellite, launched in December 2022 and operational since September 2024, provides shortwave infrared (SWIR), medium infrared (MIR) and thermal infrared (TIR) data with an image refresh time of 10 min and a spatial resolution ranging from 500 m in the high-resolution (HR) mode to 1–2 km in the normal-resolution (NR) mode, making it a very promising instrument for monitoring thermal volcanic activity from space, including in operational contexts. In this work, we assess this potential by investigating the recent Mount Etna (Italy, Sicily) eruption of February–March 2025 through the analysis of daytime and night-time SWIR observations in the NR mode. The time series of a normalized hotspot index retrieved over Mt. Etna indicates that the effusive eruption started on 8 February at 13:40 UTC (14:40 LT), i.e., before information from independent sources. This observation is corroborated by the analysis of the MIR signal performed using an adapted Robust Satellite Technique (RST) approach, which also reveals the occurrence of less intense thermal activity over the Mt. Etna area a few hours earlier (10:50 UTC) than the possible start of lava effusion. By analyzing changes in total SWIR radiance (TSR), calculated from hot pixels detected using the preliminary NHI algorithm configuration tailored to FCI data, we inferred information about variations in thermal volcanic activity. The results show that the Mt. Etna eruption was particularly intense during 17–19 February, when the radiative power was estimated at around 1–3 GW from other sensors. These outcomes, which are consistent with Multispectral Instrument (MSI) and Operational Land Imager (OLI) observations at a higher spatial resolution that provide accurate information about the areas inundated by lava, demonstrate that the FCI may make a relevant contribution to the near-real-time monitoring of Mt. Etna activity. The usage of FCI data in the HR mode may further improve the timely identification of high-temperature features in early warning contexts devoted to mitigating the social, environmental and economic impacts of effusive eruptions, especially over less monitored volcanic areas. Full article
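For reference, the SWIR formulation of the Normalized Hotspot Index compares radiances near 2.2 µm and 1.6 µm; hot targets push the index positive. The sketch below assumes that published formulation and a simple positive-index hotspot mask; the TSR-style sum is an illustrative proxy, not the exact FCI configuration used in the paper.

```python
import numpy as np

def nhi_swir(rad_22, rad_16):
    """Normalized Hotspot Index from SWIR radiances (~2.2 um and ~1.6 um).

    NHI_SWIR = (L2.2 - L1.6) / (L2.2 + L1.6); values > 0 indicate pixels
    whose 2.2 um radiance exceeds the 1.6 um radiance, typical of hot targets.
    """
    rad_22 = np.asarray(rad_22, dtype=float)
    rad_16 = np.asarray(rad_16, dtype=float)
    return (rad_22 - rad_16) / (rad_22 + rad_16)

def total_swir_radiance(rad_22, hot_mask):
    """Sum the 2.2 um radiance over detected hot pixels (a TSR-style proxy)."""
    return float(np.asarray(rad_22, dtype=float)[hot_mask].sum())
```

Tracking this sum scene by scene is what yields a time series of thermal activity like the one discussed above.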

42 pages, 29424 KiB  
Article
Mapping of Flood Impacts Caused by the September 2023 Storm Daniel in Thessaly’s Plain (Greece) with the Use of Remote Sensing Satellite Data
by Triantafyllos Falaras, Anna Dosiou, Stamatina Tounta, Michalis Diakakis, Efthymios Lekkas and Issaak Parcharidis
Remote Sens. 2025, 17(10), 1750; https://doi.org/10.3390/rs17101750 - 16 May 2025
Abstract
Floods caused by extreme weather events critically impact human and natural systems. Remote sensing can be a very useful tool in mapping these impacts. However, processing and analyzing satellite imagery covering extensive periods is computationally intensive and time-consuming, especially when data from different sensors need to be integrated, hampering its operational use. To address this issue, the present study focuses on mapping flooded areas and analyzing the impacts of the 2023 Storm Daniel flood in the Thessaly region (Greece), utilizing Earth Observation and GIS methods. The study uses multiple Sentinel-1, Sentinel-2, and Landsat 8/9 satellite images, applying backscatter histogram statistics thresholding for SAR and the Modified Normalized Difference Water Index (MNDWI) for multispectral images to delineate the extent of the flooded areas. Cloud computing on the Google Earth Engine (GEE) platform is utilized to process satellite image acquisitions and track floodwater evolution dynamics until the complete drainage of the area, making the process significantly faster. The study examines the usability and transferability of the approach to evaluate flood impact through land cover, linear infrastructure, buildings, and population-related geospatial datasets. The results highlight the vital role of the proposed approach of integrating remote sensing and geospatial analysis for effective emergency response, disaster management, and recovery planning. Full article
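The multispectral water delineation relies on the MNDWI (Xu, 2006), computed from the green and SWIR bands; for Landsat 8/9 OLI these are bands 3 and 6. The sketch below uses the standard formulation with a zero threshold as a common default; the study tunes its thresholds per scene, so the fixed threshold and function names here are illustrative.

```python
import numpy as np

def mndwi(green, swir1, eps=1e-12):
    """Modified Normalized Difference Water Index (Xu, 2006).

    For Landsat 8/9 OLI: green = band 3, swir1 = band 6.
    Water pixels tend toward positive MNDWI.
    """
    green = np.asarray(green, dtype=float)
    swir1 = np.asarray(swir1, dtype=float)
    return (green - swir1) / (green + swir1 + eps)

def water_mask(green, swir1, threshold=0.0):
    """Binary water map; a zero threshold is a common default, tuned per scene."""
    return mndwi(green, swir1) > threshold
```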

20 pages, 4617 KiB  
Article
Rapid Probabilistic Inundation Mapping Using Local Thresholds and Sentinel-1 SAR Data on Google Earth Engine
by Jiayong Liang, Desheng Liu, Lihan Feng and Kangning Huang
Remote Sens. 2025, 17(10), 1747; https://doi.org/10.3390/rs17101747 - 16 May 2025
Abstract
Traditional inundation mapping often relies on deterministic methods that offer only binary outcomes (inundated or not) based on satellite imagery analysis. While widely used, these methods do not convey the level of confidence in inundation classifications to account for ambiguity or uncertainty, limiting their utility in operational decision-making and rapid response contexts. To address these limitations, we propose a rapid probabilistic inundation mapping method that integrates local thresholds derived from Sentinel-1 SAR images and land cover data to estimate surface water probabilities. Tested on different flood events across five continents, this approach proved both efficient and effective, particularly when deployed via the Google Earth Engine (GEE) platform. The performance metrics—Brier Scores (0.05–0.07), Logarithmic Loss (0.1–0.2), Expected Calibration Error (0.03–0.04), and Reliability Diagrams—demonstrated reliable accuracy. VV (vertical transmit and vertical receive) polarization, given appropriate samples, yielded strong results. Additionally, the influence of different land cover types on the performance was also observed. Unlike conventional deterministic methods, this probabilistic framework allows for the estimation of inundation likelihood while accounting for variations in SAR signal characteristics across different land cover types. Moreover, it enables users to refine local thresholds or integrate on-the-ground knowledge, providing enhanced adaptability over traditional methods. Full article
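Two of the calibration metrics quoted above are straightforward to compute from a probability map and a reference flood mask. The sketch below implements the Brier score and a reliability-diagram-style expected calibration error over equal-width bins of the predicted water probability; the binning choice is an illustrative assumption.

```python
import numpy as np

def brier_score(probs, labels):
    """Mean squared error between predicted probabilities and 0/1 labels."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return float(np.mean((probs - labels) ** 2))

def expected_calibration_error(probs, labels, n_bins=10):
    """Weighted mean |observed frequency - mean probability| per probability bin."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    bins = np.minimum((probs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        sel = bins == b
        if sel.any():
            ece += sel.mean() * abs(labels[sel].mean() - probs[sel].mean())
    return float(ece)
```

Lower values of both metrics indicate better-calibrated inundation probabilities, matching the 0.05–0.07 Brier and 0.03–0.04 ECE ranges reported.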

30 pages, 3489 KiB  
Article
Assessing the Robustness of Multispectral Satellite Imagery with LiDAR Topographic Attributes and Ancillary Data to Predict Vertical Structure in a Wet Eucalypt Forest
by Bechu K. V. Yadav, Arko Lucieer, Gregory J. Jordan and Susan C. Baker
Remote Sens. 2025, 17(10), 1733; https://doi.org/10.3390/rs17101733 - 15 May 2025
Abstract
Remote sensing approaches can be cost-effective for estimating forest structural attributes. This study aims to use airborne LiDAR data to assess the robustness of multispectral satellite imagery and topographic attributes derived from DEMs to predict the density of three vegetation layers in a wet eucalypt forest in Tasmania, Australia. We compared the predictive capacity of medium-resolution Landsat-8 Operational Land Imager (OLI) surface reflectance and three pixel sizes from high-resolution WorldView-3 satellite imagery. These datasets were combined with topographic attributes extracted from resampled LiDAR-derived DEMs and a geology layer and validated with vegetation density layers extracted from high-density LiDAR. Using spectral bands, indices, texture features, a geology layer, and topographic attributes as predictor variables, we evaluated the predictive power of 13 data schemes at three different pixel sizes (1.6 m, 7.5 m, and 30 m). The schemes of the 30 m Landsat-8 (OLI) dataset provided better model accuracy than the WorldView-3 dataset across all three pixel sizes (R2 values from 0.15 to 0.65) and all three vegetation layers. The model accuracies increased with an increase in the number of predictor variables. For predicting the density of the overstorey vegetation, spectral indices (R2 = 0.48) and texture features (R2 = 0.47) were useful, and when both were combined, they produced higher model accuracy (R2 = 0.56) than either dataset alone. The best models for mid-storey (R2 = 0.46) and understorey (R2 = 0.44) vegetation had lower predictive capacity than those for the overstorey. Models validated using an independent dataset confirmed this robustness. The spectral indices and texture features derived from the Landsat data products, integrated with the low-density LiDAR data, can provide valuable information on the forest structure of larger geographical areas for sustainable management and monitoring of the forest landscape. Full article

21 pages, 6759 KiB  
Article
Changes in Land Use and Land Cover Patterns in Two Desert Basins Using Remote Sensing Data
by Abdullah F. Alqurashi and Omar A. Alharbi
Geosciences 2025, 15(5), 178; https://doi.org/10.3390/geosciences15050178 - 15 May 2025
Abstract
Land use and land cover (LULC) changes can potentially impact natural ecosystems and are considered key components of global environmental change. The majority of LULC changes are related to human activities. Anthropogenic modifications have resulted in significant changes in the structure and fragmentation of landscapes. This research aimed to analyze LULC changes using satellite images in the following two main basins in the Makkah region: the Wadi Fatimah and Wadi Uranah fluvial systems. First, image classification was conducted using remote sensing data from different satellite platforms, namely the Multispectral Scanner, the Landsat Thematic Mapper, the Enhanced Thematic Mapper Plus, and the Operational Land Imager. Images from these platforms were acquired for the years 1972, 1985, 1990, 2000, 2014, and 2022. A combination of object-based image analysis and a support vector machine classifier was used to produce LULC thematic maps. The obtained results were then used to calculate landscape metrics to quantify landscape patterns and fragmentation. The results showed that the landscape has undergone remarkable changes over the past 46 years. Built-up areas exhibited the most significant increase, while vegetation cover was the most dynamic land cover type. This was attributed mainly to the dry climatic conditions in the study area. These results suggest that LULC changes have influenced the natural environment in the studied area and are likely to contribute to further environmental impacts in the future. Measuring the spatial LULC distribution will help planners and ecologists to develop sustainable management strategies to mitigate future environmental consequences. Full article

28 pages, 32576 KiB  
Article
Machine Learning Algorithms of Remote Sensing Data Processing for Mapping Changes in Land Cover Types over Central Apennines, Italy
by Polina Lemenkova
J. Imaging 2025, 11(5), 153; https://doi.org/10.3390/jimaging11050153 - 12 May 2025
Abstract
This work presents the use of remote sensing data for land cover mapping, with the Central Apennines, Italy, as a case study. The data include 8 Landsat 8-9 Operational Land Imager/Thermal Infrared Sensor (OLI/TIRS) satellite images over a six-year period (2018–2024). The operational workflow included satellite image processing, in which the images were classified into raster maps with 10 automatically detected classes of land cover types over the study area. The approach was implemented using a set of modules in the Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS). To classify the remote sensing (RS) data, two types of approaches were carried out. The first is unsupervised classification based on the MaxLike approach and clustering, which extracted Digital Numbers (DN) of landscape features based on the spectral reflectance of signals; the second is supervised classification performed using several methods of Machine Learning (ML), technically realised with GRASS GIS scripting. The latter included four ML algorithms from Python’s Scikit-Learn library. These classifiers were implemented to detect subtle changes in land cover types derived from the satellite images showing different vegetation conditions in the spring and autumn periods in the Central Apennines, Italy. Full article

17 pages, 8295 KiB  
Article
CGLCS-Net: Addressing Multi-Temporal and Multi-Angle Challenges in Remote Sensing Change Detection
by Ke Liu, Hang Xue, Caiyi Huang, Jiaqi Huo and Guoxuan Chen
Sensors 2025, 25(9), 2836; https://doi.org/10.3390/s25092836 - 30 Apr 2025
Abstract
Currently, deep learning networks based on architectures such as CNN and Transformer have achieved significant advances in remote sensing image change detection, effectively addressing the issue of false changes due to spectral and radiometric discrepancies. However, when handling remote sensing image data from multiple sensors, different viewing angles, and extended periods, these models show limitations in modelling dynamic interactions and feature representations in change regions, restricting their ability to model the integrity and precision of irregular change areas. We propose the Context-Aware Global-Local Subspace Attention Change Detection Network (CGLCS-Net) to resolve these issues and introduce the Global-Local Context-Aware Selector (GLCAS) and the Subspace-based Self-Attention Fusion (SSAF) module. GLCAS dynamically selects receptive fields at different feature extraction stages through a joint pooling attention mechanism and depthwise separable convolution, enhancing global context and local feature extraction capabilities and improving feature representation for multi-scale and irregular change regions. The SSAF module establishes dynamic interactions between dual-temporal features via feature decomposition and self-attention mechanisms, focusing on semantic change areas to address challenges such as sensor viewpoint variations and the texture and spectral inconsistencies caused by long periods. Compared to ChangeFormer, CGLCS-Net achieved improvements in the IoU metric of 0.95%, 9.23%, and 13.16% on the three public datasets, i.e., LEVIR-CD, SYSU-CD, and S2Looking, respectively. Additionally, it reduced model parameters by 70.05%, floating-point operations by 7.5%, and inference time by 11.5%. These improvements enhance its applicability for continuous land use and land cover change monitoring. Full article
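The IoU figures quoted above compare predicted and reference change masks. For reference, a minimal implementation of the metric (the empty-mask convention here is a common choice, not taken from the paper):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union between two binary change masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```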
(This article belongs to the Section Sensing and Imaging)

16 pages, 5514 KiB  
Article
Crop-Free-Ridge Navigation Line Recognition Based on the Lightweight Structure Improvement of YOLOv8
by Runyi Lv, Jianping Hu, Tengfei Zhang, Xinxin Chen and Wei Liu
Agriculture 2025, 15(9), 942; https://doi.org/10.3390/agriculture15090942 - 26 Apr 2025
Abstract
This study is set against the background of shortages in the agricultural labor force and cultivated land. To improve the intelligence and operational efficiency of agricultural machinery and address the difficulty of recognizing navigation lines, and the lack of real-time performance, of transplanters in the crop-free ridge environment, we propose a crop-free-ridge navigation line recognition method based on an improved YOLOv8 segmentation algorithm. First, the method reduces the parameters and computational complexity of the model by replacing the YOLOv8 backbone network with MobileNetV4 and the feature extraction module C2f with ShuffleNetV2, thereby improving the real-time segmentation of crop-free ridges. Second, we use the least-squares method to fit the obtained point set to accurately obtain navigation lines. Finally, the method is applied to testing and analysis on field experimental ridges. The results showed that the improved neural network model achieved an average precision of 90.4%, with 1.8 M parameters, 8.8 G FLOPs, and 49.5 FPS. The model maintains high accuracy while significantly outperforming Mask-RCNN, YOLACT++, YOLOv8, and YOLO11 in computational speed; the detection frame rate increased significantly, improving real-time detection performance. The method fits the ridge contour feature points in the lower 55% of the image with least squares, and the fitted navigation line shows no large deviation from the image ridge centerline, a better result than RANSAC fitting. These results indicate that the method significantly reduces model size and improves recognition speed, providing a more efficient solution for the autonomous navigation of intelligent agricultural machinery. Full article
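The least-squares fitting step can be sketched with `numpy.polyfit` over the extracted ridge points. For near-vertical navigation lines one would typically regress x on y instead; the function name and point format here are assumptions.

```python
import numpy as np

def fit_navigation_line(points):
    """Least-squares line fit through ridge-contour points.

    points: iterable of (x, y) pixel coordinates. Returns (slope, intercept)
    of the fitted line y = slope * x + intercept.
    """
    pts = np.asarray(points, dtype=float)
    slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], deg=1)
    return float(slope), float(intercept)
```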
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

27 pages, 10030 KiB  
Article
Enhancing Deforestation Detection Through Multi-Domain Adaptation with Uncertainty Estimation
by Luiz Fernando de Moura, Pedro Juan Soto Vega, Gilson Alexandre Ostwald Pedro da Costa and Guilherme Lucio Abelha Mota
Forests 2025, 16(5), 742; https://doi.org/10.3390/f16050742 - 26 Apr 2025
Viewed by 590
Abstract
Deep learning models have shown great potential in scientific research, particularly in remote sensing for monitoring natural resources, environmental changes, land cover, and land use. Deep semantic segmentation techniques enable land cover classification, change detection, object identification, and vegetation health assessment, among other applications. However, their effectiveness relies on large labeled datasets, which are costly and time-consuming to obtain. Domain adaptation (DA) techniques address this challenge by transferring knowledge from a labeled source domain to one or more unlabeled target domains. While most DA research focuses on single-target single-source problems, multi-target and multi-source scenarios remain underexplored. This work proposes a deep learning approach that uses Domain Adversarial Neural Networks (DANNs) for deforestation detection in multi-domain settings. Additionally, an uncertainty estimation phase is introduced to guide human review in high-uncertainty areas. Our approach is evaluated on a set of Landsat-8 images from the Amazon and Brazilian Cerrado biomes. In the multi-target experiments, a single source domain contains labeled data, while samples from the target domains are unlabeled. In multi-source scenarios, labeled samples from multiple source domains are used to train the deep learning models, later evaluated on a single target domain. The results show significant accuracy improvements over lower-bound baselines, as indicated by F1-Score values, and the uncertainty-based review showed a further potential to enhance performance, reaching upper-bound baselines in certain domain combinations. As our approach is independent of the semantic segmentation network architecture, we believe it opens new perspectives for improving the generalization capacity of deep learning-based deforestation detection methods. 
Furthermore, from an operational point of view, it has the potential to enable deforestation detection in areas around the world that lack accurate reference data to adequately train deep learning models for the task. Full article
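The core of the DANN approach mentioned above is a gradient reversal layer (GRL): an identity map in the forward pass that flips (and scales) the gradient in the backward pass, so the feature extractor learns domain-invariant features while the domain classifier still learns to separate domains. The sketch below is a framework-free NumPy illustration under assumed toy shapes, not the paper's network; all names are hypothetical.

```python
import numpy as np

def grl_forward(x):
    """Gradient Reversal Layer: identity in the forward pass."""
    return x

def grl_backward(grad_output, lam=1.0):
    """Backward pass: multiply the incoming gradient by -lam, reversing
    the domain-classifier gradient before it reaches the extractor."""
    return -lam * grad_output

# Toy check: linear feature extractor z = W @ x feeding a domain
# classifier logit d = v @ z with a logistic domain loss.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(size=(4, 3))
v = rng.normal(size=4)

z = grl_forward(W @ x)
d = v @ z                       # domain logit
p = 1.0 / (1.0 + np.exp(-d))    # P(domain = source)
y = 1.0                         # true domain label

dL_dz = (p - y) * v             # gradient at the classifier input
dL_dz_rev = grl_backward(dL_dz, lam=1.0)
dL_dW = np.outer(dL_dz_rev, x)  # reversed gradient reaching the extractor
```

In an autograd framework the same effect is usually obtained with a custom op whose backward method negates the gradient; the scale `lam` is typically ramped up during training.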
(This article belongs to the Special Issue Modeling Forest Dynamics)

17 pages, 7946 KiB  
Article
Optical Camera Characterization for Feature-Based Navigation in Lunar Orbit
by Pierluigi Federici, Antonio Genova, Simone Andolfo, Martina Ciambellini, Riccardo Teodori and Tommaso Torrini
Aerospace 2025, 12(5), 374; https://doi.org/10.3390/aerospace12050374 - 26 Apr 2025
Viewed by 574
Abstract
Accurate localization is a key requirement for deep-space exploration, enabling spacecraft operations with limited ground support. Upcoming commercial and scientific missions to the Moon are designed to extensively use optical measurements during low-altitude orbital phases, descent and landing, and high-risk operations, due to the versatility and suitability of these data for onboard processing. Navigation frameworks based on optical data analysis have been developed to support semi- or fully-autonomous onboard systems, enabling precise relative localization. To achieve high-accuracy navigation, optical data have been combined with complementary measurements using sensor fusion techniques. Absolute localization is further supported by integrating onboard maps of cataloged surface features, enabling position estimation in an inertial reference frame. This study presents a navigation framework for optical image processing aimed at supporting the autonomous operations of lunar orbiters. The primary objective is a comprehensive characterization of the navigation camera’s properties and performance to ensure orbit determination uncertainties remain below 1% of the spacecraft altitude. In addition to an analysis of measurement noise, which accounts for both hardware and software contributions and is evaluated across multiple levels consistent with prior literature, this study emphasizes the impact of process noise on orbit determination accuracy. The mismodeling of orbital dynamics significantly degrades orbit estimation performance, even in scenarios involving high-performing navigation cameras. To evaluate the trade-off between measurement and process noise, representing the relative accuracy of the navigation camera and the onboard orbit propagator, numerical simulations were carried out in a synthetic lunar environment using a near-polar, low-altitude orbital configuration. 
Under nominal conditions, the optical measurement noise was set to 2.5 px, corresponding to a ground resolution of approximately 160 m based on the focal length, pixel pitch, and altitude of the modeled camera. With a conservative process noise model, position errors of about 200 m are observed in both transverse and normal directions. The results demonstrate the estimation framework’s robustness to modeling uncertainties, adaptability to varying measurement conditions, and potential to support increased onboard autonomy for small spacecraft in deep-space missions. Full article
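The mapping from the 2.5 px measurement noise to the quoted ~160 m ground resolution follows from the pinhole ground sample distance, GSD = altitude × pixel pitch / focal length. The sketch below illustrates the arithmetic only; the altitude, pixel pitch, and focal length are hypothetical values chosen to reproduce the abstract's numbers, not the modeled camera's actual parameters.

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance (m/px) of a nadir-pointing pinhole camera."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Hypothetical camera: 100 km altitude, 5.5 um pitch, 8.6 mm focal length.
gsd = ground_sample_distance(altitude_m=100_000.0,
                             pixel_pitch_m=5.5e-6,
                             focal_length_m=8.6e-3)
ground_noise_m = 2.5 * gsd  # 2.5 px of image noise projected to the ground
```

With these assumed values the GSD comes out near 64 m/px, so 2.5 px of image-plane noise projects to roughly 160 m on the lunar surface, consistent with the figure quoted in the abstract.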
(This article belongs to the Special Issue Planetary Exploration)
