Search Results (105)

Search Parameters:
Keywords = small shadow images

22 pages, 15594 KiB  
Article
Seasonally Robust Offshore Wind Turbine Detection in Sentinel-2 Imagery Using Imaging Geometry-Aware Deep Learning
by Xike Song and Ziyang Li
Remote Sens. 2025, 17(14), 2482; https://doi.org/10.3390/rs17142482 - 17 Jul 2025
Viewed by 301
Abstract
Remote sensing has emerged as a promising technology for large-scale detection and updating of global wind turbine databases. High-resolution imagery (e.g., Google Earth) facilitates the identification of offshore wind turbines (OWTs) but offers limited offshore coverage due to the high cost of capturing vast ocean areas. In contrast, medium-resolution imagery, such as 10-m Sentinel-2, provides broad ocean coverage but depicts turbines only as small bright spots and shadows, making accurate detection challenging. To address these limitations, we propose a novel deep learning approach that captures the variability in OWT appearance and shadows caused by changes in solar illumination and satellite viewing geometry. Our method learns intrinsic, imaging geometry-invariant features of OWTs, enabling robust detection across multi-seasonal Sentinel-2 imagery. This approach is implemented using Faster R-CNN as the baseline, with three enhanced extensions: (1) direct integration of imaging parameters, where Geowise-Net incorporates solar and viewing angle information from satellite metadata to improve geometric awareness; (2) implicit geometry learning, where Contrast-Net employs contrastive learning on seasonal image pairs to capture variability in turbine appearance and shadows caused by changes in solar and viewing geometry; and (3) a Composite model that integrates the two geometry-aware models to exploit their complementary strengths. All four models were evaluated using Sentinel-2 imagery from offshore regions in China. The ablation experiments showed a progressive improvement in detection performance in the following order: Faster R-CNN < Geowise-Net < Contrast-Net < Composite. Seasonal tests demonstrated that, unlike the baseline, the proposed models maintained high performance on summer images, where turbine shadows are significantly shorter than in winter scenes.
The Composite model, in particular, showed only a 0.8% difference in the F1 score between the two seasons, compared to up to 3.7% for the baseline, indicating strong robustness to seasonal variation. By applying our approach to 887 Sentinel-2 scenes from China’s offshore regions (January 2023–March 2025), we built the China OWT Dataset, mapping 7369 turbines as of March 2025. Full article
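The Contrast-Net implementation is not included in this listing; as a rough sketch of contrastive learning on seasonal image pairs, an InfoNCE-style loss over paired summer/winter patch embeddings (array shapes, the temperature value, and the toy data are all assumptions) could look like:

```python
import numpy as np

def info_nce_loss(summer, winter, temperature=0.1):
    """InfoNCE loss over paired embeddings: summer[i] and winter[i]
    come from the same turbine site in different seasons (positives);
    all other cross-season pairs act as negatives."""
    # L2-normalise so dot products are cosine similarities
    s = summer / np.linalg.norm(summer, axis=1, keepdims=True)
    w = winter / np.linalg.norm(winter, axis=1, keepdims=True)
    logits = s @ w.T / temperature          # (N, N) similarity matrix
    # softmax cross-entropy with the diagonal as the correct class
    logits -= logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32))             # shared "turbine identity" signal
# Loss is low when seasonal pairs share identity, high for unrelated pairs
aligned = info_nce_loss(base + 0.01 * rng.normal(size=(8, 32)), base)
shuffled = info_nce_loss(rng.normal(size=(8, 32)), base)
```

Minimising such a loss pushes embeddings of the same site in different seasons together, which is the mechanism the abstract describes for learning geometry-invariant turbine features.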

29 pages, 5178 KiB  
Article
HASSDE-NAS: Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution for Hyperspectral Water Body Identification
by Feng Chen, Baishun Su and Zongpu Jia
Information 2025, 16(6), 495; https://doi.org/10.3390/info16060495 - 13 Jun 2025
Viewed by 424
Abstract
The accurate identification of water bodies in hyperspectral images (HSIs) remains challenging due to hierarchical representation imbalances in deep learning models (where shallow layers overly focus on spectral features), boundary ambiguities caused by the relatively low spatial resolution of satellite imagery, and limited detection capability for small-scale aquatic features such as narrow rivers. To address these challenges, this study proposes Heuristic–Adaptive Spectral–Spatial Neural Architecture Search with Dynamic Cell Evolution (HASSDE-NAS). The architecture integrates three specialized units: a spectral-aware dynamic band selection cell suppresses redundant spectral bands, while a geometry-enhanced edge attention cell refines fragmented spatial boundaries. Additionally, a bidirectional fusion alignment cell jointly optimizes spectral and spatial dependencies. A heuristic cell search algorithm optimizes the network architecture through architecture stability, feature diversity, and gradient sensitivity analysis, which improves search efficiency and model robustness. Evaluated on the Gaofen-5 datasets from the Guangdong and Henan regions, HASSDE-NAS achieves overall accuracies of 92.61% and 96%, respectively. This approach outperforms existing methods in delineating narrow river systems and resolving water bodies with weak spectral contrast under complex backgrounds, such as vegetation or cloud shadows. By adaptively prioritizing task-relevant features, the framework provides an interpretable solution for hydrological monitoring and advances neural architecture search in intelligent remote sensing. Full article
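The abstract does not specify how the three search criteria are combined; one hypothetical way to fold architecture stability, feature diversity, and gradient sensitivity into a single score for ranking candidate cells (the weights, scores, and cell names below are invented for illustration) is:

```python
import numpy as np

def cell_score(stability, diversity, grad_sensitivity, weights=(0.4, 0.3, 0.3)):
    """Hypothetical scalar score for ranking candidate cells: higher
    stability and feature diversity are rewarded, while large gradient
    sensitivity (training instability) is penalised."""
    w_s, w_d, w_g = weights
    return w_s * stability + w_d * diversity - w_g * grad_sensitivity

# Toy candidate cells with made-up criterion values in [0, 1]
candidates = {
    "band_select":  cell_score(0.9, 0.6, 0.2),
    "edge_attn":    cell_score(0.7, 0.8, 0.1),
    "fusion_align": cell_score(0.8, 0.7, 0.5),
}
best = max(candidates, key=candidates.get)
```

A real NAS controller would re-estimate these criteria each search round and evolve the cell population accordingly; the point here is only the multi-criterion ranking.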

17 pages, 1585 KiB  
Perspective
Hyperreflective Retinal Foci (HRF): Definition and Role of an Invaluable OCT Sign
by Luisa Frizziero, Giulia Midena, Luca Danieli, Tommaso Torresin, Antonio Perfetto, Raffaele Parrozzani, Elisabetta Pilotto and Edoardo Midena
J. Clin. Med. 2025, 14(9), 3021; https://doi.org/10.3390/jcm14093021 - 27 Apr 2025
Cited by 1 | Viewed by 1228
Abstract
Background: Hyperreflective retinal foci (HRF) are small, discrete, hyperreflective elements observed in the retina using optical coherence tomography (OCT). They appear in many retinal diseases and have been linked to disease progression, treatment response, and prognosis. However, their definition and clinical use vary widely, not just between different diseases, but also within a single disorder. Methods: This perspective is based on a review of peer-reviewed studies examining HRF across different retinal diseases. The studies included analyzed HRF morphology, distribution, and clinical relevance using OCT. Particular attention was given to histopathological correlations, disease-specific patterns, and advancements in automated quantification methods. Results: HRF distribution and features vary with disease type and even within the same disease. A variety of descriptions have been proposed with different characteristics in terms of dimensions, reflectivity, location, and association with back shadowing. Automated OCT analysis has enhanced HRF detection, enabling quantitative analysis that may expand their use in clinical practice. However, differences in software and methods can lead to inconsistent results between studies. HRF have been linked to microglial cells and may be defined as neuro-inflammatory cells (Inflammatory, I-HRF), migrating retinal pigment epithelium cells (Pigmentary, P-HRF), blood vessels (Vascular, V-HRF), and deposits of proteinaceous or lipid elements leaking from vessels (Exudative, E-HRF). Conclusions: HRF are emerging as valuable imaging biomarkers in retinal diseases. Four main types have been identified, with different morphological features, pathophysiological origin, and, therefore, different implications in the management of retinal diseases. Advances in imaging and computational analysis are promising for their incorporation into personalized treatment strategies. Full article
(This article belongs to the Section Ophthalmology)

22 pages, 5776 KiB  
Article
Using Pleiades Satellite Imagery to Monitor Multi-Annual Coastal Dune Morphological Changes
by Olivier Burvingt, Bruno Castelle, Vincent Marieu, Bertrand Lubac, Alexandre Nicolae Lerma and Nicolas Robin
Remote Sens. 2025, 17(9), 1522; https://doi.org/10.3390/rs17091522 - 25 Apr 2025
Viewed by 864
Abstract
In the context of rising sea levels, monitoring spatial and temporal topographic changes along coastal dunes is crucial for understanding their dynamics, since they represent natural barriers against coastal flooding and large sources of sediment that can mitigate coastal erosion. Different technologies are currently used to monitor coastal dune topographic changes (GNSS, UAV, airborne LiDAR, etc.). Satellites have recently emerged as a new source of topographic data by providing high-resolution images with a rather short revisit time at the global scale. Stereoscopic or tri-stereoscopic acquisition of some of these images enables the creation of 3D models using stereophotogrammetry methods. Here, the Ames Stereo Pipeline was used to produce digital elevation models (DEMs) from tri-stereo panchromatic and high-resolution Pleiades images along three 19 km long stretches of coastal dunes in SW France. The vertical errors of the Pleiades-derived DEMs were assessed by comparing them with DEMs produced from airborne LiDAR data collected a few months apart from the Pleiades images in 2017 and 2021 at the same three study sites. Results showed that the Pleiades-derived DEMs could reproduce the overall dune topography well, with averaged root mean square errors that ranged from 0.5 to 1.1 m for the six sets of tri-stereo images. The differences between DEMs also showed that Pleiades images can be used to monitor multi-annual coastal dune morphological changes. Strong erosion and accretion patterns over spatial scales ranging from hundreds of meters (e.g., blowouts) to tens of kilometers (e.g., dune retreat) were captured well and allowed changes to be quantified with reasonable errors (30%). Furthermore, relatively small averaged root mean square errors (0.63 m) can be obtained with a limited number of field-collected elevation points (five ground control points) to perform a simple vertical correction on the generated Pleiades DEMs.
Among different potential sources of error, shadow areas due to the steepness of the dune stoss slope and crest, along with planimetric errors that can also occur due to the steepness of the terrain, remain the major causes of error still limiting sufficiently accurate volumetric change assessment. However, ongoing improvements in stereo matching algorithms and in the spatial resolution of satellite sensors (e.g., Pleiades Neo) highlight the growing potential of Pleiades images as a cost-effective alternative to other techniques for mapping coastal dune topography. Full article
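The simple vertical correction described above, shifting the satellite-derived DEM by the mean elevation offset at a handful of ground control points, can be sketched as follows (the toy surface, bias, and GCP locations are made up):

```python
import numpy as np

def vertical_correct(dem, gcp_rc, gcp_z):
    """Shift a DEM by the mean elevation offset at ground control points
    (the simple vertical correction described in the abstract)."""
    rows, cols = zip(*gcp_rc)
    offset = np.mean(gcp_z - dem[rows, cols])
    return dem + offset, offset

def rmse(a, b):
    """Root mean square error between two elevation grids."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

truth = np.linspace(0, 20, 25).reshape(5, 5)    # toy reference (e.g., LiDAR) surface
dem = truth + 0.8                               # satellite DEM with a vertical bias
gcps = [(0, 0), (1, 3), (2, 2), (3, 1), (4, 4)] # five GCP pixel locations
corrected, offset = vertical_correct(dem, gcps, truth[tuple(zip(*gcps))])
```

With a pure vertical bias, five GCPs fully remove the offset; in practice the residual RMSE reflects terrain-dependent errors that a constant shift cannot fix.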

12 pages, 4616 KiB  
Article
Soil Moisture Monitoring Based on Deformable Convolution Unit Net Algorithm Combined with Water Area Changes
by Zihao Na, Zhonghua Guo and Yang Zhu
Electronics 2025, 14(5), 1011; https://doi.org/10.3390/electronics14051011 - 3 Mar 2025
Viewed by 796
Abstract
Existing soil moisture monitoring methods are significantly affected by surface roughness and the complex environment around water bodies, leaving room for improvement in the accuracy of soil moisture inversion. In response, a soil moisture detection algorithm based on a DCU-Net (Deformable Conv Unit-Net) water body extraction model is proposed, using the Ningxia region as the study area. The algorithm introduces the DCU (Deformable Conv Unit) module, which addresses the problem of extracting small water bodies at large scales with low resolution; reduces the probability of misjudgment during water body extraction caused by shadows from mountains, buildings, and other objects; and enhances the robustness and adaptability of the water body extraction algorithm. The method first creates a water body extraction dataset based on multi-year remote sensing images from Ningxia Province and trains the proposed DCU-Net model; then, it selects remote sensing images from certain areas for water body extraction; finally, it conducts regression analysis between the water body areas of Ningxia Province at different times and the corresponding measured soil moisture data to establish the intrinsic relationship between water body areas and soil moisture in the study area, achieving real-time regional soil moisture monitoring. The water body extraction performance of DCU-Net is verified based on extraction accuracy, with U-Net selected as the baseline network. The experimental results show that DCU-Net leads to improvements of 2.98%, 1.37%, 0.36%, and 1.49% in IoU, Precision, Recall, and F1, respectively. The algorithm is more sensitive to water body feature information, can identify water bodies more accurately, and extracts water body contours more precisely. Additionally, a soil moisture inversion method based on a cubic polynomial is constructed.
These results indicate that DCU-Net can precisely extract water body contours and accurately invert regional soil moisture, thereby providing support for the monitoring of large-scale soil moisture. Full article
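The cubic-polynomial inversion step, regressing measured soil moisture against extracted water body area, might look like this in outline (the paired observations are fabricated for illustration and are not the Ningxia data):

```python
import numpy as np

# Hypothetical paired observations: water body area (km^2) extracted by
# the segmentation model, and measured soil moisture (%) for the region.
area = np.array([10.0, 12.5, 15.0, 17.5, 20.0, 22.5, 25.0])
moisture = np.array([8.1, 9.4, 11.2, 13.5, 16.0, 19.2, 22.9])

# Fit the cubic polynomial used for inversion: m = a*A^3 + b*A^2 + c*A + d
coeffs = np.polyfit(area, moisture, deg=3)
predict = np.poly1d(coeffs)

# Coefficient of determination of the fit
ss_res = np.sum((moisture - predict(area)) ** 2)
ss_tot = np.sum((moisture - moisture.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

Once fitted, `predict` maps a newly extracted water area to an estimated regional soil moisture, which is what enables the real-time monitoring the abstract describes.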

21 pages, 9635 KiB  
Article
NTS-YOLO: A Nocturnal Traffic Sign Detection Method Based on Improved YOLOv5
by Yong He, Mengqi Guo, Yongchuan Zhang, Jun Xia, Xuelai Geng, Tao Zou and Rui Ding
Appl. Sci. 2025, 15(3), 1578; https://doi.org/10.3390/app15031578 - 4 Feb 2025
Cited by 2 | Viewed by 1727
Abstract
Accurate traffic sign recognition is one of the core technologies of intelligent driving systems, which at night face multiple challenges such as insufficient light and shadow interference. In this paper, we improve the YOLOv5 model for small, fuzzy, and partially occluded traffic sign targets at night and propose a high-precision nighttime traffic sign recognition method, “NTS-YOLO”. The method first preprocessed the traffic sign dataset with an unsupervised nighttime image enhancement method to improve image quality under low-light conditions; second, it introduced the Convolutional Block Attention Module (CBAM) attention mechanism, which focuses on the shape and color of the traffic sign by weighting the channel and spatial features inside the model, improving perception under complex background and uneven illumination conditions; and finally, it adopted the Optimal Transport Assignment (OTA) loss function, which improves bounding box prediction accuracy, and thus model performance, by minimizing the difference between two probability distributions. To evaluate the effectiveness of the method, 154 samples of typical traffic signs were collected under different nighttime lighting conditions, containing small, fuzzy, and partially occluded targets; the CBAM and OTA modifications were applied individually and in combination, and comparative experiments were conducted against the traditional YOLOv5 algorithm. The experimental results showed that “NTS-YOLO” achieved a significant performance improvement in nighttime traffic sign recognition, with a mean average accuracy improvement of 0.95% for the target detection of traffic signs and 0.17% for instance segmentation. Full article
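The abstract does not detail the unsupervised nighttime enhancement method; as a stand-in for the general idea of low-light preprocessing, plain histogram equalisation of an underexposed 8-bit channel (e.g., the V channel of an HSV image, here simulated with random data) looks like:

```python
import numpy as np

def equalize(channel):
    """Plain histogram equalisation of one 8-bit channel (a stand-in for
    the unspecified nighttime enhancement used in the paper)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                      # first non-empty bin
    # Map the occupied intensity range onto the full 0..255 range
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[channel]

rng = np.random.default_rng(1)
dark_v = rng.integers(0, 60, size=(64, 64), dtype=np.uint8)  # underexposed channel
bright_v = equalize(dark_v)
```

Stretching the compressed intensity range is what makes dim sign shapes separable from the background before detection; adaptive variants apply the same mapping per local tile.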

25 pages, 6944 KiB  
Article
Representation Learning of Multi-Spectral Earth Observation Time Series and Evaluation for Crop Type Classification
by Andrea González-Ramírez, Clement Atzberger, Deni Torres-Roman and Josué López
Remote Sens. 2025, 17(3), 378; https://doi.org/10.3390/rs17030378 - 23 Jan 2025
Cited by 2 | Viewed by 1266
Abstract
Remote sensing (RS) spectral time series provide a substantial source of information for the regular and cost-efficient monitoring of the Earth’s surface. Important monitoring tasks include land use and land cover classification, change detection, forest monitoring and crop type identification, among others. To develop accurate solutions for RS-based applications, often supervised shallow/deep learning algorithms are used. However, such approaches usually require fixed-length inputs and large labeled datasets. Unfortunately, RS images acquired by optical sensors are frequently degraded by aerosol contamination, clouds and cloud shadows, resulting in missing observations and irregular observation patterns. To address these issues, efforts have been made to implement frameworks that generate meaningful representations from the irregularly sampled data streams and alleviate the deficiencies of the data sources and supervised algorithms. Here, we propose a conceptually and computationally simple representation learning (RL) approach based on autoencoders (AEs) to generate discriminative features for crop type classification. The proposed methodology includes a set of single-layer AEs with a very limited number of neurons, each one trained with the mono-temporal spectral features of a small set of samples belonging to a class, resulting in a model capable of processing very large areas in a short computational time. Importantly, the developed approach remains flexible with respect to the availability of clear temporal observations. The signal derived from the ensemble of AEs is the reconstruction difference vector between input samples and their corresponding estimations, which are averaged over all cloud-/shadow-free temporal observations of a pixel location. This averaged reconstruction difference vector is the base for the representations and the subsequent classification. 
Experimental results show that the proposed extremely lightweight architecture indeed generates separable features for competitive performance in crop type classification, as distance metric scores achieved with the derived representations significantly outperform those obtained with the initial data. Conventional classification models were trained and tested with representations generated from a widely used Sentinel-2 multi-spectral multi-temporal dataset, BreizhCrops. Our method achieved 77.06% overall accuracy, which is 6% higher than that achieved using original Sentinel-2 data within conventional classifiers and even 4% better than complex deep models such as OmniScaleCNN. Compared to extremely complex and time-consuming models such as Transformer and long short-term memory (LSTM), only a 3% reduction in overall accuracy was noted. Our method uses only 6.8k parameters, i.e., 400× fewer than OmniScaleCNN and 27× fewer than Transformer. The results show that our method is competitive in classification performance with state-of-the-art methods while substantially reducing the computational load. Full article
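A single-layer linear autoencoder has a closed-form optimum spanned by the top principal components, so the reconstruction-difference representation described above can be sketched without iterative training (the class data, cloud mask, and dimensions below are invented):

```python
import numpy as np

def fit_linear_ae(samples, k=2):
    """Closed-form single-layer linear autoencoder: its optimal weights
    span the top-k principal components of the class samples."""
    mu = samples.mean(axis=0)
    _, _, vt = np.linalg.svd(samples - mu, full_matrices=False)
    return mu, vt[:k]                       # mean and orthonormal basis

def recon_diff(pixel_ts, clear_mask, mu, basis):
    """Reconstruction-difference vector averaged over the cloud-/shadow-
    free observations of one pixel's spectral time series."""
    x = pixel_ts[clear_mask] - mu
    recon = (x @ basis.T) @ basis           # project onto AE subspace and back
    return np.abs(x - recon).mean(axis=0)

rng = np.random.default_rng(2)
class_a = rng.normal(size=(100, 10)) @ rng.normal(size=(10, 10)) * 0.1
mu, basis = fit_linear_ae(class_a, k=3)

ts = rng.normal(size=(6, 10))               # 6 dates x 10 bands for one pixel
clear = np.array([True, True, False, True, False, True])  # cloud/shadow mask
feature = recon_diff(ts, clear, mu, basis)
```

Repeating this per class AE yields one residual vector per class; small residuals indicate class membership, and only clear-sky dates enter the average, which is how the method stays flexible with respect to missing observations.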
(This article belongs to the Collection Sentinel-2: Science and Applications)

16 pages, 1996 KiB  
Article
A Model for Detecting Xanthomonas campestris Using Machine Learning Techniques Enhanced by Optimization Algorithms
by Daniel-David Leal-Lara, Julio Barón-Velandia, Lina-María Molina-Parra and Ana-Carolina Cabrera-Blandón
Agriculture 2025, 15(3), 223; https://doi.org/10.3390/agriculture15030223 - 21 Jan 2025
Cited by 1 | Viewed by 1041
Abstract
The bacterium Xanthomonas campestris poses a significant threat to global agriculture due to its ability to infect leaves, fruits, and stems under various climatic conditions. Its rapid spread across large crop areas results in economic losses, compromises agricultural productivity, increases management costs, and threatens food security, especially in small-scale agricultural systems. To address this issue, this study developed a model that combines fuzzy logic and neural networks, optimized with intelligent algorithms, to detect symptoms of this foliar disease in 15 essential crop species under different environmental conditions using images. For this purpose, Sugeno-type fuzzy inference systems and adaptive neuro-fuzzy inference systems (ANFIS) were employed, configured with rules and clustering methods designed to address cases where diagnostic uncertainty arises due to the imprecision of different agricultural scenarios. The model achieved an accuracy of 93.81%, demonstrating robustness against variations in lighting, shadows, and capture angles, and proving effective in identifying patterns associated with the disease at early stages, enabling rapid and reliable diagnoses. This advancement represents a significant contribution to the automated detection of plant diseases, providing an accessible tool that enhances agricultural productivity and promotes sustainable practices in crop care. Full article
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)

33 pages, 12646 KiB  
Article
A Binocular Vision-Assisted Method for the Accurate Positioning and Landing of Quadrotor UAVs
by Jie Yang, Kunling He, Jie Zhang, Jiacheng Li, Qian Chen, Xiaohui Wei and Hanlin Sheng
Drones 2025, 9(1), 35; https://doi.org/10.3390/drones9010035 - 6 Jan 2025
Cited by 2 | Viewed by 1013
Abstract
This paper introduces a vision-based target recognition and positioning system for UAV mobile landing scenarios, addressing challenges such as target occlusion due to shadows and loss of the target from the field of view. A novel image preprocessing technique is proposed, utilizing finite adaptive histogram equalization in the HSV color space, to enhance UAV recognition and the detection of markers under shadow conditions. The system incorporates a Kalman filter-based target motion state estimation method and a binocular vision-based depth camera target height estimation method to achieve precise positioning. To tackle the problem of poor controller performance affecting UAV tracking and landing accuracy, a feedforward model predictive control (MPC) algorithm is integrated into a mobile landing control method. This enables the reliable tracking of both stationary and moving targets by the UAV. Additionally, considering the complexities of real-world flight environments, a mobile tracking and landing control strategy based on airspace division is proposed, significantly enhancing the success rate and safety of UAV mobile landings. The experimental results demonstrate a 100% target recognition success rate and high positioning accuracy, with x- and y-axis errors not exceeding 0.01 m at close range, and the x-axis relative error not exceeding 0.05 m and the y-axis error not exceeding 0.03 m at medium range. At long range, the relative errors for both axes do not exceed 0.05 m. Regarding tracking accuracy, both KF and EKF exhibit good following performance with small steady-state errors when the target is stationary. Under dynamic conditions, EKF outperforms KF, with better estimation results and a faster tracking speed. The landing accuracy is within 0.1 m, and the proposed method successfully accomplishes the mobile energy supply mission for the vehicle-mounted UAV system. Full article
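The Kalman filter-based target motion estimation can be illustrated with a textbook constant-velocity filter in 1D (the transition and noise matrices below are generic assumptions, not the paper's tuning):

```python
import numpy as np

# Constant-velocity Kalman filter for 1D target motion (a sketch of
# target motion state estimation; matrices and noise levels are assumed).
dt = 0.1
F = np.array([[1, dt], [0, 1]])        # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])             # we only measure position
Q = 1e-4 * np.eye(2)                   # process noise covariance
R = np.array([[0.05]])                 # measurement noise covariance

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial covariance

rng = np.random.default_rng(3)
true_pos = 0.5 * np.arange(100) * dt   # target moving at 0.5 m/s
meas = true_pos + rng.normal(0, 0.05, size=100)

for z in meas:
    # predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # update step
    y = np.array([[z]]) - H @ x        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

est_pos, est_vel = float(x[0, 0]), float(x[1, 0])
```

The filter recovers the unmeasured velocity from noisy position fixes; in the paper's 2D landing scenario the same predict/update cycle runs on the marker's planar position.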
(This article belongs to the Special Issue Swarm Intelligence in Multi-UAVs)

25 pages, 16474 KiB  
Article
The Mineral Composition and Grain Distribution of Difflugia Testate Amoebae: Through SEM-BEX Mapping and Software-Based Mineral Identification
by Jim Buckman and Vladimir Krivtsov
Minerals 2025, 15(1), 1; https://doi.org/10.3390/min15010001 - 24 Dec 2024
Cited by 2 | Viewed by 1170
Abstract
We tested a scanning electron microscope equipped with the newly developed Unity-BEX detector (SEM-BEX) system to study thirty-nine samples of the testate amoeba Difflugia. This produces fast single-scan backscattered electron (BSE) and combined elemental X-ray maps of selected areas, resulting in high-resolution, data-rich composite colour X-ray and combined BSE maps. Using a suitable user-defined elemental X-ray colour palette, minerals such as orthoclase, albite, quartz and mica were highlighted in blue, purple, magenta and green, respectively. Imaging was faster than comparable standard energy dispersive X-ray (EDX) analysis, of high quality, and did not suffer from problems associated with the analysis of rough surfaces by EDX, such as shadowing effects or working distance versus X-ray yield artifacts. In addition, we utilised the AZtecMatch v.6.1 software package to test its utility in identifying the mineral phases present on the Difflugia tests. Significantly, it was able to identify many of the minerals present but would require some further development due to the small size/thinness of many of the minerals analysed. The latter would also be further improved by the development of a bespoke mineral library based on actual collected X-ray data rather than simply on stoichiometry. The investigation illustrates that, in the case of the current material, minerals are preferentially selected and arranged on the test based upon their mineralogy and size, and likely upon inherent properties such as structural strength/flexibility and specific gravity. As with previous studies, mineral usage is ultimately controlled by source availability and therefore may be of limited taxonomic significance, although of value in areas such as palaeoenvironmental reconstruction. Full article
(This article belongs to the Section Biomineralization and Biominerals)

15 pages, 3524 KiB  
Article
Effective Detection of Cloud Masks in Remote Sensing Images
by Yichen Cui, Hong Shen and Chan-Tong Lam
Sensors 2024, 24(23), 7730; https://doi.org/10.3390/s24237730 - 3 Dec 2024
Viewed by 1154
Abstract
Effective detection of the contours of cloud masks and estimation of their distribution can be of practical help in studying weather changes and natural disasters. Existing deep learning methods are unable to extract the edges of clouds and backgrounds in a refined manner when detecting cloud masks (shadows) due to their unpredictable patterns, and they are also unable to accurately identify small targets such as thin and broken clouds. To address these problems, we propose MDU-Net, a multiscale dual up-sampling segmentation network based on an encoder–decoder–decoder structure. The model uses an improved residual module to capture the multi-scale features of clouds more effectively. MDU-Net first extracts feature maps using four residual modules at different scales and then sends them to the context information full flow module for the first up-sampling. This operation refines the edges of clouds and shadows, enhancing detection performance. Subsequently, the second up-sampling module concatenates feature map channels to fuse contextual spatial information, which effectively reduces the false detection rate of unpredictable targets hidden in cloud shadows. On a self-made cloud and cloud shadow dataset based on the Landsat8 satellite, MDU-Net achieves scores of 95.61% in PA and 84.97% in MIoU, outperforming other models in both metrics and in the resulting images. Additionally, we conduct experiments on the landcover.ai dataset to test the model’s generalization capability, showing that it also achieves excellent performance in the visualization results. Full article
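The two reported scores, pixel accuracy (PA) and mean intersection-over-union (MIoU), are both derived from a class confusion matrix; a minimal sketch on a tiny 3-class label map (the clear/cloud/shadow class assignment and the toy maps are assumptions):

```python
import numpy as np

def pixel_metrics(pred, truth, n_classes=3):
    """Pixel accuracy (PA) and mean IoU (MIoU) from label maps,
    the two scores reported for MDU-Net."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (truth.ravel(), pred.ravel()), 1)   # confusion matrix
    pa = np.trace(cm) / cm.sum()                      # fraction of correct pixels
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    return float(pa), float((inter / union).mean())

truth = np.array([[0, 0, 1], [1, 2, 2], [0, 1, 2]])   # 0 clear, 1 cloud, 2 shadow
pred  = np.array([[0, 0, 1], [1, 2, 1], [0, 1, 2]])   # one shadow pixel mislabelled
pa, miou = pixel_metrics(pred, truth)
```

MIoU penalises the single confused shadow pixel much more than PA does, which is why segmentation papers report both.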
(This article belongs to the Section Sensing and Imaging)

24 pages, 15074 KiB  
Article
The Standardized Spectroscopic Mixture Model
by Christopher Small and Daniel Sousa
Remote Sens. 2024, 16(20), 3768; https://doi.org/10.3390/rs16203768 - 11 Oct 2024
Cited by 4 | Viewed by 1115
Abstract
The standardized spectral mixture model combines the specificity of a physically based representation of a spectrally mixed pixel with the generality and portability of a spectral index. Earlier studies have used spectrally and geographically diverse collections of broadband and spectroscopic imagery to show that the reflectance of the majority of ice-free landscapes on Earth can be represented as linear mixtures of rock and soil substrates (S), photosynthetic vegetation (V) and dark targets (D) composed of shadow and spectrally absorptive/transmissive materials. However, both broadband and spectroscopic studies of the topology of spectral mixing spaces raise questions about the completeness and generality of the Substrate, Vegetation, Dark (SVD) model for imaging spectrometer data. This study uses a spectrally diverse collection of 40 granules from the EMIT imaging spectrometer to verify the generality and stability of the spectroscopic SVD model and characterize the SVD topology and plane of substrates to assess linearity of spectral mixing. New endmembers for soil and non-photosynthetic vegetation (NPV; N) allow the planar SVD model to be extended to a tetrahedral SVDN model to better accommodate the 3D topology of the mixing space. The SVDN model achieves smaller misfit than the SVD, but does so at the expense of implausible fractions beyond [0, 1]. However, a refined spectroscopic SVD model still achieves small (<0.03) RMS misfit, negligible sensitivity to endmember variability and strongly linear scaling over more than an order of magnitude range of spatial resolution. Full article
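Linear spectral unmixing against Substrate, Vegetation, and Dark endmembers reduces to a least-squares problem per pixel; a sketch with made-up 4-band endmember spectra (not the paper's EMIT-derived endmembers):

```python
import numpy as np

# Hypothetical 4-band endmember spectra for Substrate, Vegetation, Dark.
# Real SVD endmembers come from the mixing-space analysis in the paper.
E = np.array([[0.30, 0.05, 0.02],      # band 1
              [0.35, 0.08, 0.02],      # band 2
              [0.40, 0.45, 0.03],      # band 3 (NIR: vegetation bright)
              [0.45, 0.20, 0.03]])     # band 4

def unmix(pixel, endmembers):
    """Least-squares linear unmixing: endmember fractions and RMS misfit."""
    frac, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    misfit = endmembers @ frac - pixel
    return frac, float(np.sqrt(np.mean(misfit ** 2)))

# A pixel that is exactly 50% substrate, 30% vegetation, 20% dark
mixed = E @ np.array([0.5, 0.3, 0.2])
frac, rms = unmix(mixed, E)
```

For real spectra the RMS misfit is nonzero and the unconstrained fractions can leave [0, 1], which is exactly the trade-off the abstract describes between the SVD and SVDN models.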

21 pages, 11650 KiB  
Article
Livestock Detection and Counting in Kenyan Rangelands Using Aerial Imagery and Deep Learning Techniques
by Ian A. Ocholla, Petri Pellikka, Faith Karanja, Ilja Vuorinne, Tuomas Väisänen, Mark Boitt and Janne Heiskanen
Remote Sens. 2024, 16(16), 2929; https://doi.org/10.3390/rs16162929 - 9 Aug 2024
Cited by 4 | Viewed by 2161 | Correction
Abstract
Accurate livestock counts are essential for effective pastureland management. High spatial resolution remote sensing, coupled with deep learning, has shown promising results in livestock detection. However, challenges persist, particularly when the targets are small and in a heterogeneous environment, such as those in African rangelands. This study evaluated nine state-of-the-art object detection models, four variants each from YOLOv5 and YOLOv8, and Faster R-CNN, for detecting cattle in 10 cm resolution aerial RGB imagery in Kenya. The experiment involved 1039 images with 9641 labels for training from sites with varying land cover characteristics. The trained models were evaluated on 277 images and 2642 labels in the test dataset, and their performance was compared using Precision, Recall, and Average Precision (AP0.5–0.95). The results indicated that reduced spatial resolution, dense shrub cover, and shadows diminish the model’s ability to distinguish cattle from the background. The YOLOv8m architecture achieved the best AP0.5–0.95 accuracy of 39.6% with Precision and Recall of 91.0% and 83.4%, respectively. Despite its superior performance, YOLOv8m had the highest counting error of −8%. By contrast, YOLOv5m with AP0.5–0.95 of 39.3% attained the most accurate cattle count with RMSE of 1.3 and R2 of 0.98 for variable cattle herd densities. These results highlight that a model with high AP0.5–0.95 detection accuracy may struggle with counting cattle accurately. Nevertheless, these findings suggest the potential to upscale aerial-imagery-trained object detection models to satellite imagery for conducting cattle censuses over large areas. In addition, accurate cattle counts will support sustainable pastureland management by ensuring stock numbers do not exceed the forage available for grazing, thereby mitigating overgrazing.
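The count-accuracy metrics quoted above (percentage counting error, and RMSE and R² of predicted vs. true per-image counts) can be computed as follows; the per-image counts here are made up for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical per-image cattle counts: ground truth vs. detector output.
truth = np.array([12, 8, 25, 3, 17])
pred  = np.array([11, 8, 23, 3, 16])

# Overall counting error as a percentage of the true total (negative = undercount)
count_err = (pred.sum() - truth.sum()) / truth.sum() * 100

# Per-image RMSE and coefficient of determination (R²) of predicted counts
rmse = np.sqrt(np.mean((pred - truth) ** 2))
ss_res = np.sum((truth - pred) ** 2)
ss_tot = np.sum((truth - truth.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

This also illustrates the abstract's point that detection AP and counting accuracy are distinct: a detector can rank high on AP while systematically under- or over-counting, since AP weighs localization quality, not the net count.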
(This article belongs to the Section AI Remote Sensing)

22 pages, 1266 KiB  
Article
Multi-Branch Attention Fusion Network for Cloud and Cloud Shadow Segmentation
by Hongde Gu, Guowei Gu, Yi Liu, Haifeng Lin and Yao Xu
Remote Sens. 2024, 16(13), 2308; https://doi.org/10.3390/rs16132308 - 24 Jun 2024
Cited by 6 | Viewed by 1918
Abstract
In remote sensing image processing, the segmentation of clouds and their shadows is a fundamental and vital task. For cloud images, traditional deep learning methods often have weak generalization capabilities and are prone to interference from ground objects and noise, which not only results in poor boundary segmentation but also causes false and missed detections of small targets. To address these issues, we proposed a multi-branch attention fusion network (MAFNet). In the encoder section, the dual branches of ResNet50 and the Swin transformer extract features together. A multi-branch attention fusion module (MAFM) uses positional encoding to add position information. Additionally, multi-branch aggregation attention (MAA) in the MAFM fully fuses the same level of deep features extracted by ResNet50 and the Swin transformer, which enhances the boundary segmentation ability and small target detection capability. To address the challenge of detecting small cloud and shadow targets, an information deep aggregation module (IDAM) was introduced to perform multi-scale deep feature aggregation, which supplements high semantic information, improving small target detection. For the problem of rough segmentation boundaries, a recovery guided module (RGM) was designed in the decoder section, which enables the model to effectively allocate attention to complex boundary information, enhancing the network’s focus on boundary information. Experimental results on the Cloud and Cloud Shadow dataset, HRC-WHU dataset, and SPARCS dataset indicate that MAFNet surpasses existing advanced semantic segmentation techniques.
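As a rough illustration of the general idea of fusing same-level features from two encoder branches (this is a generic channel-attention-weighted sum in NumPy, not the paper's MAFM/MAA modules), the fusion step can be sketched as:

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(feat_cnn, feat_tr):
    """Weight two (C, H, W) feature maps per channel and sum them."""
    # Global average pooling of each branch -> (2, C) channel descriptors
    desc = np.stack([feat_cnn.mean(axis=(1, 2)), feat_tr.mean(axis=(1, 2))])
    # Softmax across the branch axis gives per-channel weights summing to 1
    w = softmax(desc, axis=0)
    return w[0, :, None, None] * feat_cnn + w[1, :, None, None] * feat_tr

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))   # e.g. ResNet50 branch features
b = rng.standard_normal((8, 16, 16))   # e.g. Swin transformer branch features
fused = fuse(a, b)
```

Because the per-channel weights are a convex combination, the fused map stays between the two branch responses elementwise, so neither branch can be drowned out entirely.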

14 pages, 5359 KiB  
Technical Note
Detection of Surface Rocks and Small Craters in Permanently Shadowed Regions of the Lunar South Pole Based on YOLOv7 and Markov Random Field Algorithms in SAR Images
by Tong Xia, Xuancheng Ren, Yuntian Liu, Niutao Liu, Feng Xu and Ya-Qiu Jin
Remote Sens. 2024, 16(11), 1834; https://doi.org/10.3390/rs16111834 - 21 May 2024
Cited by 2 | Viewed by 2261
Abstract
Excluding rough areas with surface rocks and craters is critical for the safety of landing missions, such as China’s Chang’e-7 mission, in the permanently shadowed region (PSR) of the lunar south pole. Binned digital elevation model (DEM) data can describe the undulating surface, but they can hardly resolve surface rocks because of median-averaging. High-resolution images from a synthetic aperture radar (SAR) can be used to map discrete rocks and small craters according to their strong backscattering. This study utilizes the You Only Look Once version 7 (YOLOv7) tool to detect varying-sized craters in SAR images. It also employs the Markov random field (MRF) algorithm to identify surface rocks, which are usually difficult to detect in DEM data. The results are validated by optical images and DEM data outside the PSRs. With the assistance of the DEM data, regions with slopes larger than 10° are excluded. YOLOv7 and MRF are then applied to detect craters and rocky surfaces, respectively, in the PSRs of the craters Shoemaker, Slater, and Shackleton. This study demonstrates that SAR images are feasible for selecting landing sites in the PSRs for future missions.
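A toy version of the MRF labeling idea can be sketched with iterated conditional modes (ICM) on a synthetic scene: each pixel's label trades a data term (distance to its class mean) against a smoothness term (disagreement with its 4-neighbours). This is a generic MRF sketch on invented data, not the paper's algorithm or its SAR imagery.

```python
import numpy as np

def icm(img, beta=0.05, n_iter=5):
    """Binary MRF labeling of bright scatterers (1) vs. background (0)."""
    labels = (img > img.mean()).astype(int)      # initial threshold
    mu = np.array([img[labels == 0].mean(), img[labels == 1].mean()])
    h, w = img.shape
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                nb = [labels[x, y] for x, y in
                      ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                      if 0 <= x < h and 0 <= y < w]
                costs = [(img[i, j] - mu[lab]) ** 2            # data term
                         + beta * sum(lab != n for n in nb)    # smoothness
                         for lab in (0, 1)]
                labels[i, j] = int(np.argmin(costs))
    return labels

# Synthetic scene: dark background with one bright "rock" patch plus noise
rng = np.random.default_rng(1)
scene = rng.normal(0.1, 0.02, (20, 20))
scene[8:12, 8:12] += 0.8
rocks = icm(scene)
```

The smoothness weight `beta` controls how aggressively isolated noisy pixels are absorbed into their neighbourhood's label; too large a value would also erode genuinely small bright targets.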
(This article belongs to the Special Issue Planetary Exploration Using Remote Sensing—Volume II)
