Search Results (380)

Search Parameters:
Keywords = 1 m pixel

27 pages, 8755 KiB  
Article
Mapping Wetlands with High-Resolution Planet SuperDove Satellite Imagery: An Assessment of Machine Learning Models Across the Diverse Waterscapes of New Zealand
by Md. Saiful Islam Khan, Maria C. Vega-Corredor and Matthew D. Wilson
Remote Sens. 2025, 17(15), 2626; https://doi.org/10.3390/rs17152626 - 29 Jul 2025
Viewed by 366
Abstract
(1) Background: Wetlands are ecologically significant ecosystems that support biodiversity and contribute to essential environmental functions such as water purification, carbon storage and flood regulation. However, these ecosystems face increasing pressures from land-use change and degradation, prompting the need for scalable and accurate classification methods to support conservation and policy efforts. In this research, our motivation was to test whether high-spatial-resolution PlanetScope imagery can be used with pixel-based machine learning to support the mapping and monitoring of wetlands at a national scale. (2) Methods: This study compared four machine learning classification models—Random Forest (RF), XGBoost (XGB), Histogram-Based Gradient Boosting (HGB) and a Multi-Layer Perceptron Classifier (MLPC)—to detect and map wetland areas across New Zealand. All models were trained using eight-band SuperDove satellite imagery from PlanetScope, with a spatial resolution of ~3 m, and ancillary geospatial datasets representing topography and soil drainage characteristics, each of which is available globally. (3) Results: All four machine learning models performed well in detecting wetlands from SuperDove imagery and environmental covariates, with varying strengths. The highest accuracy was achieved using all eight image bands alongside features created from supporting geospatial data. For binary wetland classification, the highest F1 scores were recorded by XGB (0.73) and RF/HGB (both 0.72) when including all covariates. MLPC also showed competitive performance (wetland F1 score of 0.71), despite its relatively lower spatial consistency. However, each model over-predicted total wetland area at the national level, an issue that could be reduced by increasing the classification probability threshold and applying spatial filtering. (4) Conclusions: The comparative analysis highlights the strengths and trade-offs of RF, XGB, HGB and MLPC models for wetland classification. 
While all four methods are viable, RF offers some key advantages, including ease of deployment and transferability, positioning it as a promising candidate for scalable, high-resolution wetland monitoring across diverse ecological settings. Further work is required for verification of small-scale wetlands (<~0.5 ha) and the addition of fine-spatial-scale covariates. Full article
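The probability-threshold adjustment the authors describe, raising the wetland cut-off above the default 0.5 to curb over-predicted area, can be sketched as follows. The data, model settings, and the 0.7 cut-off here are illustrative stand-ins, not the paper's pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in for eight SuperDove bands plus two ancillary covariates.
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0.8).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
proba = clf.predict_proba(X)[:, 1]  # per-pixel "wetland" probability

# Default 0.5 cut-off vs. a stricter cut-off to curb area over-prediction.
area_default = int((proba >= 0.5).sum())
area_strict = int((proba >= 0.7).sum())
```

Because the stricter threshold only keeps a subset of the default-threshold pixels, the mapped wetland area can only shrink, which is the direction of correction the abstract reports.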

30 pages, 92065 KiB  
Article
A Picking Point Localization Method for Table Grapes Based on PGSS-YOLOv11s and Morphological Strategies
by Jin Lu, Zhongji Cao, Jin Wang, Zhao Wang, Jia Zhao and Minjie Zhang
Agriculture 2025, 15(15), 1622; https://doi.org/10.3390/agriculture15151622 - 26 Jul 2025
Viewed by 266
Abstract
During the automated picking of table grapes, the automatic recognition and segmentation of grape pedicels, along with the positioning of picking points, are vital components for all subsequent operations of the harvesting robot. In the actual scene of a grape plantation, however, it is extremely difficult to accurately and efficiently identify and segment grape pedicels and then reliably locate the picking points. This is attributable to the low distinguishability between grape pedicels and surrounding structures such as branches, as well as the impact of other conditions like weather, lighting, and occlusion, coupled with the requirements for model deployment on edge devices with limited computing resources. To address these issues, this study proposes a novel picking point localization method for table grapes based on an instance segmentation network called Progressive Global-Local Structure-Sensitive Segmentation (PGSS-YOLOv11s) and a simple combination strategy of morphological operators. More specifically, PGSS-YOLOv11s is composed of the original YOLOv11s-seg backbone, a spatial feature aggregation module (SFAM), an adaptive feature fusion module (AFFM), and a detail-enhanced convolutional shared detection head (DE-SCSH). PGSS-YOLOv11s was trained on a new grape segmentation dataset called Grape-⊥, which includes 4455 grape pixel-level instances annotated with ⊥-shaped regions. After PGSS-YOLOv11s segments the ⊥-shaped regions of grapes, morphological operations such as erosion, dilation, and skeletonization are combined to effectively extract grape pedicels and locate picking points. Finally, several experiments were conducted to confirm the validity, effectiveness, and superiority of the proposed method. 
Compared with the other state-of-the-art models, the main metrics F1 score and mask mAP@0.5 of PGSS-YOLOv11s reached 94.6% and 95.2% on the Grape-⊥ dataset, as well as 85.4% and 90.0% on the Winegrape dataset. Multi-scenario tests indicated that the success rate of positioning the picking points reached up to 89.44%. In orchards, real-time tests on the edge device demonstrated the practical performance of our method. Nevertheless, for grapes with short or occluded pedicels, the designed morphological algorithm sometimes failed to compute picking points. In future work, we will enrich the grape dataset by collecting images under different lighting conditions, from various shooting angles, and including more grape varieties to improve the method’s generalization performance. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
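The morphological post-processing described above (erosion and dilation on the segmented ⊥-shaped region, then picking a point on the pedicel) can be sketched with SciPy. The toy mask, the default 3×3 structuring element, and the centroid rule for the picking point are illustrative assumptions, not the authors' exact algorithm, which also employs skeletonization:

```python
import numpy as np
from scipy import ndimage

# Toy binary mask standing in for a segmented ⊥-shaped pedicel region.
mask = np.zeros((20, 20), dtype=bool)
mask[2:15, 9:12] = True   # thin vertical pedicel
mask[14:17, 4:17] = True  # horizontal bar at the top of the grape bunch

# Opening (erosion then dilation) removes speckle while keeping the region.
cleaned = ndimage.binary_dilation(ndimage.binary_erosion(mask))

# Use the centroid of the pedicel pixels above the bar as the picking point.
rows, cols = np.nonzero(cleaned[:14])
pick_row, pick_col = int(rows.mean()), int(round(cols.mean()))
```

The opening step is what makes the localization robust to stray pixels from the segmentation network; the picking point then only depends on the cleaned pedicel geometry.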

15 pages, 7876 KiB  
Article
Fine-Scale Risk Mapping for Dengue Vector Using Spatial Downscaling in Intra-Urban Areas of Guangzhou, China
by Yunpeng Shen, Zhoupeng Ren, Junfu Fan, Jianpeng Xiao, Yingtao Zhang and Xiaobo Liu
Insects 2025, 16(7), 661; https://doi.org/10.3390/insects16070661 - 25 Jun 2025
Viewed by 593
Abstract
Generating fine-scale risk maps for mosquito-borne disease vectors is an essential tool for guiding spatially targeted vector control interventions in urban settings, given the limited public health resources. This study aimed to generate fine-scale risk maps for dengue vectors using routine vector surveillance data collected at the township scale. We integrated monthly township-specific Breteau Index (BI) data from Guangzhou city (2019 to 2020) with covariates extracted from remote sensing imagery and other geospatial datasets to develop an original random forest (RF) model for predicting hotspot areas (BI ≥ 5). We implemented three data resampling techniques (undersampling, oversampling, and hybrid sampling) to improve the model’s performance and evaluated it using the ROC-AUC, recall, specificity, and G-mean metrics. Finally, we generated downscaled risk maps for BI hotspot areas at a 1000 m grid scale by applying the optimal model to fine-scale input data. Our findings indicate the following: (1) data resampling techniques significantly improved the prediction accuracy of the original RF model, demonstrating robust spatial downscaling capabilities for fine-scale grids; (2) the spatial distribution of BI hotspot areas within townships exhibits significant heterogeneity. The fine-scale risk mapping approach overcomes the limitations of previous coarse-scale risk maps and provides critical evidence for policymakers to better understand the distribution of BI hotspot areas, facilitating pixel-level spatially targeted vector control interventions in intra-urban areas. Full article
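The oversampling variant of the resampling step (balancing rare BI hotspots against non-hotspots before model fitting) can be sketched as follows; the feature matrix, the ~15% hotspot rate, and the seeds are synthetic stand-ins for the surveillance data:

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.default_rng(0)
# Synthetic township records: five covariates and an imbalanced hotspot label.
X = rng.normal(size=(200, 5))
y = (rng.random(200) < 0.15).astype(int)  # roughly 15% hotspots (BI >= 5)

X_maj, X_min = X[y == 0], X[y == 1]
# Oversample the minority (hotspot) class up to the majority class size.
X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=0)

X_bal = np.vstack([X_maj, X_min_up])
y_bal = np.concatenate([np.zeros(len(X_maj)), np.ones(len(X_min_up))])
```

Undersampling and hybrid sampling work the same way in the opposite or both directions; the balanced set is then what the RF model is trained on.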

17 pages, 9212 KiB  
Article
Monolithically Integrated THz Detectors Based on High-Electron-Mobility Transistors
by Adam Rämer, Edoardo Negri, Eugen Dischke, Serguei Chevtchenko, Hossein Yazdani, Lars Schellhase, Viktor Krozer and Wolfgang Heinrich
Sensors 2025, 25(11), 3539; https://doi.org/10.3390/s25113539 - 4 Jun 2025
Viewed by 447
Abstract
We present THz direct detectors based on an AlGaN/GaN high electron mobility transistor (HEMT), featuring excellent optical sensitivity and low noise-equivalent power (NEP). These detectors are monolithically integrated with various antenna designs and exhibit state-of-the-art performance at room temperature. Their architecture enables straightforward scaling to two-dimensional formats, paving the way for terahertz focal plane arrays (FPAs). In particular, for one detector type, a fully realized THz FPA has been demonstrated in this paper. Theoretical and experimental characterizations are provided for both single-pixel detectors (0.1–1.5 THz) and the FPA (0.1–1.1 THz). The broadband single detectors achieve optical sensitivities exceeding 20 mA/W up to 1 THz and NEP values below 100 pW/Hz. The best optical NEP is below 10 pW/Hz at 175 GHz. The reported sensitivity and NEP values were achieved including antenna and optical coupling losses, underlining the excellent overall performance of the detectors. Full article
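The relation behind the quoted figures is the standard definition NEP = noise current density / responsivity. A sketch assuming a Johnson-noise-limited readout; the channel resistance is an illustrative value and not from the paper, while 0.02 A/W matches the 20 mA/W responsivity quoted above:

```python
import math

# Standard relation: NEP = i_noise / responsivity. The readout is assumed
# Johnson-noise-limited here; R_channel is an illustrative value.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 295.0            # room temperature, K
R_channel = 10e3     # assumed HEMT channel resistance, ohm
responsivity = 0.02  # 20 mA/W, as quoted for the broadband detectors

i_noise = math.sqrt(4 * k_B * T / R_channel)  # Johnson noise, A/sqrt(Hz)
nep = i_noise / responsivity                  # W/sqrt(Hz)
```

With these assumed values the result lands in the tens of pW/√Hz, consistent with the sub-100 pW/√Hz range reported in the abstract.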

18 pages, 4439 KiB  
Article
Combining Infrared Thermography with Computer Vision Towards Automatic Detection and Localization of Air Leaks
by Ângela Semitela, João Silva, André F. Girão, Samuel Verdasca, Rita Futre, Nuno Lau, José P. Santos and António Completo
Sensors 2025, 25(11), 3272; https://doi.org/10.3390/s25113272 - 22 May 2025
Viewed by 635
Abstract
This paper proposes an automated system integrating infrared thermography (IRT) and computer vision for air leak detection and localization in end-of-line (EOL) testing stations. The system consists of (1) a leak tester for the detection and quantification of leaks, (2) an infrared camera for real-time thermal image acquisition, and (3) an algorithm for automatic leak localization. The Python-based algorithm acquires thermal frames from the camera’s streaming video, identifies potential leak regions by selecting a region of interest, mitigates environmental interferences via image processing, and pinpoints leaks by employing pixel intensity thresholding. A closed circuit with an embedded leak system simulated relevant leakage scenarios, varying leak apertures (ranging from 0.25 to 3 mm) and camera–leak system distances (0.2 and 1 m). Results confirmed that (1) the leak tester effectively detected and quantified leaks, with larger apertures generating higher leak rates; (2) the IRT performance was highly dependent on leak aperture and camera–leak system distance, confirming that shorter distances improve localization accuracy; and (3) the algorithm localized all leaks in both lab and industrial environments, regardless of the camera–leak system distance, mostly achieving accuracies higher than 0.7. Overall, the combined system demonstrated great potential for long-term implementation in EOL leakage stations in the manufacturing sector, offering an effective and cost-efficient alternative to manual inspections. Full article
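The pixel-intensity-thresholding step can be sketched as follows. The synthetic frame, the assumption that escaping air cools the leak region, and the median-minus-offset threshold are all illustrative choices, not the paper's calibrated procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic thermal frame: ambient temperatures with sensor noise, plus a
# cooler patch where expanding air escapes (assumed signature of the leak).
frame = 25.0 + rng.normal(scale=0.2, size=(120, 160))
frame[40:48, 70:80] -= 5.0

# Pixel intensity thresholding: flag pixels well below the frame median.
leak_mask = frame < np.median(frame) - 2.0

# Report the leak location as the centroid of the flagged pixels.
ys, xs = np.nonzero(leak_mask)
leak_y, leak_x = int(ys.mean()), int(xs.mean())
```

Using the frame median as the reference makes the threshold robust to slow shifts in ambient temperature between acquisitions.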

29 pages, 6039 KiB  
Article
Tree Species Detection and Enhancing Semantic Segmentation Using Machine Learning Models with Integrated Multispectral Channels from PlanetScope and Digital Aerial Photogrammetry in Young Boreal Forest
by Arun Gyawali, Mika Aalto and Tapio Ranta
Remote Sens. 2025, 17(11), 1811; https://doi.org/10.3390/rs17111811 - 22 May 2025
Viewed by 905
Abstract
The precise identification and classification of tree species in young forests during their early development stages are vital for forest management and silvicultural efforts that support their growth and renewal. However, achieving accurate geolocation and species classification through field-based surveys is often a labor-intensive and complicated task. Remote sensing technologies combined with machine learning techniques present an encouraging solution, offering a more efficient alternative to conventional field-based methods. This study aimed to detect and classify young forest tree species using remote sensing imagery and machine learning techniques. The study involved two objectives: first, tree species detection using the latest version of You Only Look Once (YOLOv12), and second, semantic segmentation (classification) using random forest, Categorical Boosting (CatBoost), and a Convolutional Neural Network (CNN). To the best of our knowledge, this is the first study to use YOLOv12 for tree species identification and the first to integrate digital aerial photogrammetry with Planet imagery for semantic segmentation in young forests. The study used two remote sensing datasets: RGB imagery from unmanned aerial vehicle (UAV) ortho photography and RGB-NIR from PlanetScope. For YOLOv12-based tree species detection, only RGB from ortho photography was used, while semantic segmentation was performed with three sets of data: (1) Ortho RGB (3 bands), (2) Ortho RGB + canopy height model (CHM) + Planet RGB-NIR (8 bands), and (3) Ortho RGB + CHM + Planet RGB-NIR + 12 vegetation indices (20 bands). With three models applied to these datasets, nine machine learning models were trained and tested using 57 images (1024 × 1024 pixels) and their corresponding mask tiles. 
The YOLOv12 model achieved 79% overall accuracy, with Scots pine performing best (precision: 97%, recall: 92%, mAP50: 97%, mAP75: 80%) and Norway spruce showing slightly lower accuracy (precision: 94%, recall: 82%, mAP50: 90%, mAP75: 71%). For semantic segmentation, the CatBoost model with 20 bands outperformed other models, achieving 85% accuracy, 80% Kappa, and 81% MCC, with CHM, EVI, NIRPlanet, GreenPlanet, NDGI, GNDVI, and NDVI being the most influential variables. These results indicate that a simple boosting model like CatBoost can outperform more complex CNNs for semantic segmentation in young forests. Full article
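Two of the influential covariates named above, NDVI and GNDVI, follow the usual normalized-difference form, NDVI = (NIR − R)/(NIR + R) and GNDVI = (NIR − G)/(NIR + G). A sketch on synthetic reflectance bands (the band values are illustrative, not PlanetScope data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy surface-reflectance bands standing in for PlanetScope channels.
red = rng.uniform(0.05, 0.20, size=(64, 64))
green = rng.uniform(0.05, 0.25, size=(64, 64))
nir = rng.uniform(0.30, 0.60, size=(64, 64))

eps = 1e-9  # guard against division by zero on dark pixels
ndvi = (nir - red) / (nir + red + eps)       # NDVI = (NIR - R) / (NIR + R)
gndvi = (nir - green) / (nir + green + eps)  # GNDVI = (NIR - G) / (NIR + G)
```

Each index is bounded in (−1, 1) by construction, which is what makes such bands easy to stack with raw reflectances as extra model inputs.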

33 pages, 13161 KiB  
Article
Using Landscape Metrics of Pixel Scale Land Cover Extracted from High Spatial Resolution Images to Classify Block-Level Urban Land Use
by Haofeng Luo, Xiaomei Yang, Zhihua Wang, Yueming Liu, Huifang Zhang, Ku Gao and Qingyang Zhang
Land 2025, 14(5), 1100; https://doi.org/10.3390/land14051100 - 18 May 2025
Viewed by 451
Abstract
Block-level urban land use classification (BLULUC), like residential and commercial classification, is highly useful for urban planners. It can be achieved with high-frequency full coverage and without biases based on high-spatial-resolution remote sensing images (HSRRSIs), which social sensing data like POI data or mobile phone data cannot provide. However, at present, the extraction of quantitative features from HSRRSIs for BLULUC primarily relies on computer vision or deep learning methods based on image signal characteristics rather than land cover patterns, like vegetation, water, or buildings, thus disconnecting existing knowledge between landscape patterns and their functions and greatly hindering BLULUC from HSRRSIs. Well-known landscape metrics could play an important connecting role, but these also encounter the scale selection issue, i.e., whether the optimal spatial unit is an image pixel or a segmented image object. Here, we use the task of BLULUC with 2 m satellite images in Beijing as a case study. The results show the following: (1) Pixel-based classification can achieve higher accuracy than segmented object-based classification, by an average of 3% overall and by up to 10% for some land use types, such as commercial land. (2) At the pixel scale, if the quantity metrics at the class level, such as the number of patches, and the proportion metrics at the landscape level, such as vegetation proportion, are removed, the accuracy is greatly reduced. Moreover, removing landscape-level metrics leads to a more significant reduction in accuracy than removing class-level metrics. This indicates that to achieve higher accuracy in BLULUC from HSRRSIs, landscape-level land cover metrics at the pixel scale, including patch numbers and proportions, should be used instead of object-scale metrics. Full article
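The two metric families the results single out, class-level patch counts and landscape-level proportions, can be computed at the pixel scale with connected-component labeling. The toy cover map below is illustrative, not the Beijing data:

```python
import numpy as np
from scipy import ndimage

# Toy pixel-scale land cover map for one block: 1 = vegetation, 0 = other.
cover = np.zeros((30, 30), dtype=int)
cover[2:8, 2:8] = 1      # vegetation patch 1
cover[15:20, 10:18] = 1  # vegetation patch 2
cover[25:28, 25:29] = 1  # vegetation patch 3

# Class-level quantity metric: number of vegetation patches in the block.
_, n_patches = ndimage.label(cover)

# Landscape-level proportion metric: vegetation share of the block.
veg_proportion = cover.mean()
```

Both quantities come straight from the pixel-level cover map, so no image segmentation into objects is needed, matching the finding that pixel-scale metrics suffice.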

21 pages, 7212 KiB  
Article
Combining Cirrus and Aerosol Corrections for Improved Reflectance Retrievals over Turbid Waters from Visible Infrared Imaging Radiometer Suite Data
by Bo-Cai Gao, Rong-Rong Li, Marcos J. Montes and Sean C. McCarthy
Oceans 2025, 6(2), 28; https://doi.org/10.3390/oceans6020028 - 14 May 2025
Viewed by 502
Abstract
The multi-band atmospheric correction algorithms, now referred to as remote sensing reflectance (Rrs) algorithms, have been implemented on a NASA computing facility for global remote sensing of ocean color and atmospheric aerosol parameters from data acquired with several satellite instruments, including the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi spacecraft platform. These algorithms are based on the 2-band version of the SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) algorithm. The bands centered near 0.75 and 0.865 μm are used for atmospheric corrections. In order to obtain high-quality Rrs values over Case 1 waters (deep clear ocean waters), strict masking criteria are implemented inside these algorithms to mask out thin clouds and very turbid water pixels. As a result, Rrs values are often not retrieved over bright Case 2 waters. Through our analysis of VIIRS data, we have found that spatial features of bright Case 2 waters are observed in VIIRS visible band images contaminated by thin cirrus clouds. In this article, we describe methods of combining cirrus and aerosol corrections to improve spatial coverage in Rrs retrievals over Case 2 waters. One method is to remove cirrus cloud effects using our previously developed operational VIIRS cirrus reflectance algorithm and then to perform atmospheric corrections with our updated version of the spectrum-matching algorithm, which uses shortwave IR (SWIR) bands above 1 μm for retrieving atmospheric aerosol parameters and extrapolates the aerosol parameters to the visible region to retrieve water-leaving reflectances of VIIRS visible bands. Another method is to remove the cirrus effect first and then make empirical atmospheric and sun glint corrections for water-leaving reflectance retrievals. The two methods produce comparable retrieved results, but the second method is about 20 times faster than the spectrum-matching method. 
We compare our retrieved results with those obtained from the NASA VIIRS Rrs algorithm. We will show that the assumption of zero water-leaving reflectance for the VIIRS band centered at 0.75 μm (M6) over Case 2 waters with the NASA Rrs algorithm can sometimes result in slight underestimates of water-leaving reflectances of visible bands over Case 2 waters, where the M6 band water-leaving reflectances are actually not equal to zero. We will also show conclusively that the assumption of thin cirrus clouds as ‘white’ aerosols during atmospheric correction processes results in overestimates of aerosol optical thicknesses and underestimates of aerosol Ångström coefficients. Full article
(This article belongs to the Special Issue Ocean Observing Systems: Latest Developments and Challenges)

21 pages, 7835 KiB  
Article
Extraction of Cropland Based on Multi-Source Remote Sensing and an Improved Version of the Deep Learning-Based Segment Anything Model (SAM)
by Kunjian Tao, He Li, Chong Huang, Qingsheng Liu, Junyan Zhang and Ruoqi Du
Agronomy 2025, 15(5), 1139; https://doi.org/10.3390/agronomy15051139 - 6 May 2025
Viewed by 742
Abstract
Fine extraction of cropland parcels is an essential prerequisite for achieving precision agriculture. Remote sensing technology, due to its large-scale and multi-dimensional characteristics, can effectively enhance the efficiency of collecting information on agricultural land parcels. Currently, semantic segmentation models based on high-resolution remote sensing imagery utilize limited spectral information and rely heavily on a large amount of fine data annotation, while pixel classification models based on medium-to-low-resolution multi-temporal remote sensing imagery are limited by the mixed pixel problem. To address this, the study utilizes GF-2 high-resolution imagery and Sentinel-2 multi-temporal data, in conjunction with the basic image segmentation model SAM, by additionally introducing a prompt generation module (Box module and Auto module) to achieve automatic fine extraction of cropland parcels. The research results indicate the following: (1) The mIoU of SAM with the Box module is 0.711, and the OA is 0.831, showing better performance, while the mIoU of SAM with the Auto module is 0.679, and the OA is 0.81, yielding higher-quality cropland masks; (2) The combination of various prompts (box, point, and mask), along with the hierarchical extraction strategy, can effectively improve the performance of Box module SAM; (3) Employing a more accurate prompt data source can significantly boost model performance. The mIoU of the superior-performing Box module SAM is increased to 0.920, and the OA is raised to 0.958. Overall, the improved SAM, while reducing the demand for mask annotation and model training, can achieve high-precision extraction results for cropland parcels. Full article
(This article belongs to the Section Precision and Digital Agriculture)
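The mIoU and OA figures reported above follow the standard definitions: OA is the fraction of correctly classified pixels, and mIoU averages intersection-over-union across classes. A sketch on toy cropland masks (values illustrative):

```python
import numpy as np

# Toy ground-truth and predicted cropland masks (1 = cropland).
gt = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1]])
pred = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]])

oa = (gt == pred).mean()  # overall accuracy

# Mean IoU over the two classes (cropland / non-cropland).
ious = []
for cls in (0, 1):
    inter = np.logical_and(gt == cls, pred == cls).sum()
    union = np.logical_or(gt == cls, pred == cls).sum()
    ious.append(inter / union)
miou = float(np.mean(ious))
```

Because IoU penalizes both missed and spurious pixels per class, mIoU is typically lower than OA on imbalanced masks, as in the 0.711 vs. 0.831 pairing above.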

30 pages, 19525 KiB  
Article
Disease Monitoring and Characterization of Feeder Road Network Based on Improved YOLOv11
by Ying Fan, Kun Zhi, Haichao An, Runyin Gu, Xiaobing Ding and Jianhua Tang
Electronics 2025, 14(9), 1818; https://doi.org/10.3390/electronics14091818 - 29 Apr 2025
Viewed by 669
Abstract
In response to the challenges of low accuracy and high misdetection and omission rates in disease detection on feeder roads, an improved Rural-YOLO (SAConv-C2f+C2PSA_CAA+MCSAttention+WIOU) disease detection algorithm is proposed in this paper. It is an enhanced target detection framework based on the YOLOv11 architecture for the identification of common diseases in complex feeder road environments. The proposed methodology introduces four key innovations: (1) Switchable Atrous Convolution (SAConv) is introduced into the backbone network to enhance multiscale disease feature extraction under occlusion conditions. (2) Multi-Channel and Spatial Attention (MCSAttention) is constructed in the feature fusion process, where adaptive weight redistribution adjusts the weighting of multiscale diseases and improves the model’s sensitivity to subtle disease features. (3) To enhance the model’s ability to discriminate between different disease types, Cross Stage Partial with Parallel Spatial Attention and Channel Adaptive Aggregation (C2PSA_CAA) is constructed at the end of the backbone network. (4) To mitigate category imbalance issues, Weighted Intersection over Union loss (WIoU_loss) is introduced, which helps optimize the bounding box regression process and improves the detection of relevant diseases. Based on experimental validation, Rural-YOLO demonstrated superior performance with minimal computational overhead: only 0.7 M additional parameters are required, and an 8.4% improvement in recall and a 7.8% increase in mAP50 were achieved compared to the initial models. The optimized architecture also reduced the model size by 21%. The test results showed that the proposed model achieved 3.28 M parameters with a computational complexity of 5.0 GFLOPs, meeting the requirements for lightweight deployment scenarios. 
Cross-validation on multi-scenario public datasets was carried out, confirming the model’s robustness across diverse road conditions. In the quantitative experiments, the center skeleton method and the maximum inscribed circle method were used to calculate crack width, and the pixel occupancy ratio method was used to assess the damaged area of potholes and other diseases. The measurements were converted to actual physical dimensions using a calibrated scale of 0.081:1. Full article
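The pixel-to-physical conversion with the calibrated 0.081:1 scale, together with a pixel occupancy ratio, can be sketched as follows. The mask, the 12 px crack width, and the assumption that the scale maps pixels to centimetres are illustrative; the abstract does not state the physical unit:

```python
import numpy as np

SCALE_PER_PX = 0.081  # calibrated pixel-to-physical scale (0.081:1);
                      # the physical unit is assumed to be centimetres here.

# Toy pothole segmentation mask within a detection crop.
mask = np.zeros((100, 100), dtype=bool)
mask[30:60, 20:70] = True

# Pixel occupancy ratio: damaged pixels over the crop area.
occupancy = mask.sum() / mask.size

# Convert an illustrative 12 px crack width to physical units.
width_px = 12
width_cm = width_px * SCALE_PER_PX
```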

17 pages, 7946 KiB  
Article
Optical Camera Characterization for Feature-Based Navigation in Lunar Orbit
by Pierluigi Federici, Antonio Genova, Simone Andolfo, Martina Ciambellini, Riccardo Teodori and Tommaso Torrini
Aerospace 2025, 12(5), 374; https://doi.org/10.3390/aerospace12050374 - 26 Apr 2025
Viewed by 563
Abstract
Accurate localization is a key requirement for deep-space exploration, enabling spacecraft operations with limited ground support. Upcoming commercial and scientific missions to the Moon are designed to extensively use optical measurements during low-altitude orbital phases, descent and landing, and high-risk operations, due to the versatility and suitability of these data for onboard processing. Navigation frameworks based on optical data analysis have been developed to support semi- or fully-autonomous onboard systems, enabling precise relative localization. To achieve high-accuracy navigation, optical data have been combined with complementary measurements using sensor fusion techniques. Absolute localization is further supported by integrating onboard maps of cataloged surface features, enabling position estimation in an inertial reference frame. This study presents a navigation framework for optical image processing aimed at supporting the autonomous operations of lunar orbiters. The primary objective is a comprehensive characterization of the navigation camera’s properties and performance to ensure orbit determination uncertainties remain below 1% of the spacecraft altitude. In addition to an analysis of measurement noise, which accounts for both hardware and software contributions and is evaluated across multiple levels consistent with prior literature, this study emphasizes the impact of process noise on orbit determination accuracy. The mismodeling of orbital dynamics significantly degrades orbit estimation performance, even in scenarios involving high-performing navigation cameras. To evaluate the trade-off between measurement and process noise, representing the relative accuracy of the navigation camera and the onboard orbit propagator, numerical simulations were carried out in a synthetic lunar environment using a near-polar, low-altitude orbital configuration. 
Under nominal conditions, the optical measurement noise was set to 2.5 px, corresponding to a ground resolution of approximately 160 m based on the focal length, pixel pitch, and altitude of the modeled camera. With a conservative process noise model, position errors of about 200 m are observed in both transverse and normal directions. The results demonstrate the estimation framework’s robustness to modeling uncertainties, adaptability to varying measurement conditions, and potential to support increased onboard autonomy for small spacecraft in deep-space missions. Full article
(This article belongs to the Special Issue Planetary Exploration)
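The quoted ~160 m ground resolution for 2.5 px of measurement noise follows from pinhole-camera geometry, GSD = altitude × pixel pitch / focal length. The camera values below are assumptions chosen to reproduce that figure, since the abstract does not list them:

```python
# Ground sample distance from pinhole-camera geometry:
#   gsd = altitude * pixel_pitch / focal_length
# All three camera values are assumptions, picked so that 2.5 px of noise
# projects to roughly the 160 m quoted in the abstract.
altitude_m = 100_000.0   # assumed 100 km low lunar orbit
pixel_pitch_m = 5.5e-6   # assumed 5.5 um detector pitch
focal_length_m = 8.6e-3  # assumed 8.6 mm focal length

gsd_m = altitude_m * pixel_pitch_m / focal_length_m  # metres per pixel
ground_error_m = 2.5 * gsd_m  # 2.5 px of noise projected on the ground
```

Any other combination with the same altitude-to-focal-length ratio and pitch product gives the same ground error, which is why the abstract can state the 160 m figure without fixing each parameter individually.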

22 pages, 1077 KiB  
Article
SECrackSeg: A High-Accuracy Crack Segmentation Network Based on Proposed UNet with SAM2 S-Adapter and Edge-Aware Attention
by Xiyin Chen, Yonghua Shi and Junjie Pang
Sensors 2025, 25(9), 2642; https://doi.org/10.3390/s25092642 - 22 Apr 2025
Cited by 1 | Viewed by 797
Abstract
Crack segmentation is essential for structural health monitoring and infrastructure maintenance, playing a crucial role in early damage detection and safety risk reduction. Traditional methods, including digital image processing techniques, have limitations in complex environments. Deep learning-based methods have shown potential but still face challenges such as poor generalization with limited samples, insufficient extraction of fine-grained features, feature loss during upsampling, and inadequate capture of crack edge details. This study proposes SECrackSeg, a high-accuracy crack segmentation network that integrates an improved UNet architecture, Segment Anything Model 2 (SAM2), MI-Upsampling, and an Edge-Aware Attention mechanism. The key innovations include: (1) using a SAM2 S-Adapter with a frozen backbone to enhance generalization in low-data scenarios; (2) employing a Multi-Scale Dilated Convolution (MSDC) module to promote multi-scale feature fusion; (3) introducing MI-Upsampling to reduce feature loss during upsampling; and (4) implementing an Edge-Aware Attention mechanism to improve crack edge segmentation precision. Additionally, a custom loss function incorporating weighted binary cross-entropy and weighted IoU loss is utilized to emphasize challenging pixels. This function also applies Multi-Granularity Supervision by optimizing segmentation outputs at three different resolution levels, ensuring better feature consistency and improved model robustness across varying image scales. Experimental results show that SECrackSeg achieves higher precision, recall, F1-score, and mIoU scores on the CFD, Crack500, and DeepCrack datasets than state-of-the-art models, demonstrating its excellent performance in fine-grained feature recognition, edge segmentation, and robustness. Full article
(This article belongs to the Collection Sensors and Sensing Technology for Industry 4.0)
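The combined loss the abstract describes can be sketched in NumPy. This is a generic weighted BCE + weighted soft-IoU formulation: the edge-emphasis weighting (deviation from a local box-filter mean) and its coefficients are illustrative assumptions, not SECrackSeg's exact definition:

```python
import numpy as np

def weighted_bce_iou_loss(pred, target, eps=1e-7):
    """Weighted binary cross-entropy + weighted soft-IoU loss (sketch).

    pred, target: float arrays in [0, 1] of shape (H, W).
    Pixels whose label deviates from the local neighbourhood mean
    (i.e. pixels near crack edges) receive larger weights.
    """
    # Edge-emphasis weights: 1 + 5 * |target - 15x15 local mean|.
    k, pad = 15, 7
    padded = np.pad(target, pad, mode="edge")
    H, W = target.shape
    local_mean = np.zeros_like(target)
    for i in range(H):
        for j in range(W):
            local_mean[i, j] = padded[i:i + k, j:j + k].mean()
    w = 1.0 + 5.0 * np.abs(target - local_mean)

    # Weighted binary cross-entropy.
    bce = -(target * np.log(pred + eps) + (1 - target) * np.log(1 - pred + eps))
    wbce = (w * bce).sum() / w.sum()

    # Weighted soft IoU: 1 - weighted intersection / weighted union.
    inter = (w * pred * target).sum()
    union = (w * (pred + target)).sum() - inter
    wiou = 1.0 - (inter + eps) / (union + eps)
    return wbce + wiou
```

In the paper this objective is additionally applied at three output resolutions (Multi-Granularity Supervision); the sketch shows a single level.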

25 pages, 18221 KiB  
Article
Self-Supervised Feature Contrastive Learning for Small Weak Object Detection in Remote Sensing
by Zheng Li, Xueyan Hu, Jin Qian, Tianqi Zhao, Dongdong Xu and Yongcheng Wang
Remote Sens. 2025, 17(8), 1438; https://doi.org/10.3390/rs17081438 - 17 Apr 2025
Cited by 1 | Viewed by 865
Abstract
Despite advances in remote sensing object detection, accurately identifying small, weak objects remains challenging. Their limited pixel representation often fails to capture distinctive features, making them susceptible to environmental interference. Current detectors frequently miss these subtle feature variations. To address these challenges, we propose FCDet, a feature contrast-based detector for small, weak objects. Our approach introduces: (1) a spatial-guided feature upsampler (SGFU) that aligns features by adaptive sampling based on spatial distribution, thus achieving fine-grained alignment during feature aggregation; (2) a feature contrast head (FCH) that projects GT and RoI features into an embedding space for discriminative learning; and (3) an instance-controlled label assignment (ICLA) strategy that optimizes sample selection for feature contrastive learning. We conduct comprehensive experiments on challenging datasets, with the proposed method achieving 73.89% mAP on DIOR, 95.04% mAP on NWPU VHR-10, and 26.4% AP on AI-TOD, demonstrating its effectiveness and superior performance. Full article
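The feature contrast head projects GT and RoI features into an embedding space for discriminative learning. A generic InfoNCE-style formulation of that idea is sketched below; the pairing convention (row i of the RoI batch matches row i of the GT batch, all other pairs are negatives) and the temperature value are assumptions, not the paper's exact FCH implementation:

```python
import numpy as np

def feature_contrast_loss(roi_emb, gt_emb, temperature=0.1):
    """InfoNCE-style contrast between RoI and ground-truth embeddings (sketch).

    roi_emb, gt_emb: (N, D) arrays; row i of roi_emb is the positive
    match for row i of gt_emb, every other row acts as a negative.
    """
    # L2-normalise so the dot product is cosine similarity.
    roi = roi_emb / np.linalg.norm(roi_emb, axis=1, keepdims=True)
    gt = gt_emb / np.linalg.norm(gt_emb, axis=1, keepdims=True)
    logits = roi @ gt.T / temperature              # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs lie on the diagonal; minimise their negative log-prob.
    return -np.mean(np.diag(log_prob))
```

Minimising this pulls each RoI embedding toward its matching GT embedding and pushes it away from the others, which is the discriminative behaviour the abstract attributes to the FCH.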

23 pages, 5613 KiB  
Article
Neural Network Based on Dynamic Collaboration of Flows for Temporal Downscaling
by Junkai Wang, Lianlei Lin, Yu Zhang, Zongwei Zhang, Sheng Gao and Hanqing Zhao
Remote Sens. 2025, 17(8), 1434; https://doi.org/10.3390/rs17081434 - 17 Apr 2025
Viewed by 473
Abstract
Temporal downscaling is one of the most challenging topics in remote sensing and meteorological data processing. Traditional methods often suffer from high computing cost and poor generalization ability. Deep-learning-based frame interpolation provides a new approach to the temporal downscaling of meteorological data. This paper designs a deep neural network for the temporal downscaling of multivariate meteorological data. It estimates the kernel weights and offset vector of each target pixel independently for each meteorological variable and generates output frames guided by the feature space. Compared with other methods, this model can handle a wide range of complex meteorological motions. Downscaling experiments at a 2 h interval for 2 m temperature, surface pressure, and 1000 hPa specific humidity show that the MAE of the proposed method is reduced by about 14%, 25%, and 18%, respectively, compared with advanced methods such as AdaCoF and Zooming Slow-Mo. Performance fluctuates little over time, varying by only about 1% on average across all metrics. Even in the downscaling experiment with a 6 h interval, the proposed model maintains a clear performance lead, indicating not only good accuracy and robustness but also excellent scalability and transferability in multivariate meteorological downscaling tasks. Full article
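The per-pixel kernel-weight and offset estimation described above can be illustrated with a simplified synthesis step. In this sketch the offsets are shared integer displacements and sampling is nearest-neighbour with edge clamping; the actual network presumably predicts per-pixel, sub-pixel offsets for each variable, so treat this only as the shape of the computation:

```python
import numpy as np

def adaptive_pixel_synthesis(frame, weights, offsets):
    """Adaptive-kernel frame synthesis (simplified sketch).

    frame:   (H, W) input field, e.g. one meteorological variable
    weights: (H, W, K) kernel weights per target pixel
    offsets: (K, 2) integer (dy, dx) sampling offsets, shared for brevity
    Each output pixel is a weighted sum of input pixels sampled at the offsets.
    """
    H, W = frame.shape
    out = np.zeros((H, W))
    ys, xs = np.mgrid[0:H, 0:W]
    for k, (dy, dx) in enumerate(offsets):
        sy = np.clip(ys + dy, 0, H - 1)   # clamp samples to the frame
        sx = np.clip(xs + dx, 0, W - 1)
        out += weights[:, :, k] * frame[sy, sx]
    return out
```

With a single zero offset and unit weights this reduces to the identity, which makes the role of the learned weights and offsets easy to see: they decide where each target pixel looks in the source frame and how strongly.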

18 pages, 12759 KiB  
Article
Validation of Inland Water Surface Elevation from SWOT Satellite Products: A Case Study in the Middle and Lower Reaches of the Yangtze River
by Yao Zhao, Jun’e Fu, Zhiguo Pang, Wei Jiang, Pengjie Zhang and Zixuan Qi
Remote Sens. 2025, 17(8), 1330; https://doi.org/10.3390/rs17081330 - 8 Apr 2025
Cited by 2 | Viewed by 1788
Abstract
The Surface Water and Ocean Topography (SWOT) satellite mission, jointly developed by NASA and several international collaboration agencies, aims to achieve high-resolution two-dimensional observations of global surface water. Equipped with the advanced Ka-band radar interferometer (KaRIn), it significantly enhances the ability to monitor surface water and provides a new data source for obtaining large-scale water surface elevation (WSE) data at high temporal and spatial resolution. However, the accuracy and applicability of its scientific data products for inland water bodies still require validation. This study obtained three scientific data products from the SWOT satellite between August 2023 and December 2024: the Level 2 KaRIn high-rate river single-pass vector product (L2_HR_RiverSP), the Level 2 KaRIn high-rate lake single-pass vector product (L2_HR_LakeSP), and the Level 2 KaRIn high-rate water mask pixel cloud product (L2_HR_PIXC). These were compared with in situ water level data to validate their accuracy in retrieving inland water levels across eight different regions in the middle and lower reaches of the Yangtze River (MLRYR) and to evaluate the applicability of each product. The experimental results show the following: (1) The inversion accuracy of L2_HR_RiverSP and L2_HR_LakeSP varies significantly across different regions. In some areas, the extracted WSE aligns closely with the in situ water level trend, with a coefficient of determination (R2) exceeding 0.9, while in other areas, the R2 is lower (less than 0.8), and the error compared to in situ water levels is larger (with Root Mean Square Error (RMSE) greater than 1.0 m). (2) This study proposes a combined denoising method based on the Interquartile Range (IQR) and Adaptive Statistical Outlier Removal (ASOR). 
Compared to the L2_HR_RiverSP and L2_HR_LakeSP products, the L2_HR_PIXC product, after denoising, shows significant improvements in all accuracy metrics for water level inversion, with R2 greater than 0.85, Mean Absolute Error (MAE) less than 0.4 m, and RMSE less than 0.5 m. Overall, the SWOT satellite demonstrates the capability to monitor inland water bodies with high precision, especially through the L2_HR_PIXC product, which shows broader application potential and will play an important role in global water dynamics monitoring and refined water resource management research. Full article
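The IQR stage of the combined denoising method applied to the L2_HR_PIXC heights can be sketched as follows. The Tukey fence factor k = 1.5 is the conventional default, not necessarily the paper's choice, and the ASOR (neighbourhood-statistics) stage is omitted:

```python
import numpy as np

def iqr_filter(heights, k=1.5):
    """Drop height samples outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey fences).

    heights: 1-D array of pixel-cloud water-surface heights (m).
    Returns the samples that survive the fence test.
    """
    q1, q3 = np.percentile(heights, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return heights[(heights >= lo) & (heights <= hi)]
```

A robust water-surface elevation can then be taken as, e.g., the median of the surviving samples; because the fences scale with the spread of the bulk of the data, isolated layover or land returns far from the true water level are rejected without tuning absolute thresholds.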