Search Results (96)

Search Parameters:
Keywords = optical multisensor system

27 pages, 4233 KB  
Article
Enhanced Calculation of Kd(PAR) Using Kd(490) Based on a Recently Compiled Large In Situ and Satellite Database
by Jorvin A. Zapata-Hinestroza, Eduardo Santamaría-del-Ángel, Alejandra Castillo-Ramírez, Sergio Cerdeira-Estrada, Adriana González-Silvera, Hansel Caballero-Aragón, Jesús A. Aguilar-Maldonado, Raúl Martell-Dubois, Laura Rosique-de-la-Cruz and María-Teresa Sebastiá-Frasquet
Remote Sens. 2025, 17(24), 3990; https://doi.org/10.3390/rs17243990 - 10 Dec 2025
Viewed by 131
Abstract
The vertical attenuation coefficient of photosynthetically active radiation (Kd(PAR)) is essential for characterizing the underwater light field and for operational marine monitoring. Although, for over a decade, there have been efforts to use the standard satellite light attenuation product at 490 nm (Kd(490)) to estimate Kd(PAR), earlier approaches were constrained by limited data. This study used a robust, globally representative database of in situ and satellite observations spanning diverse marine optical conditions and applied rigorous quality control. Three empirical models (linear, power, and a higher-order polynomial) were developed using four Kd(490) satellite variants, validated against an independent dataset, and benchmarked against six published algorithms (36 total approximations). Performance was assessed using a Model Performance Index (MPI), where values closer to 1 indicate a better model. The best model was a power regression driven by the standard satellite Kd(490), which yielded an MPI of 0.8704, indicating robust performance under a wide variability of marine optical conditions. These results highlight the value of multisensor products, which, with a rigorous quality control protocol, can be used to estimate Kd(PAR) from the standard satellite Kd(490). The objective of the proposed algorithm is to generate long-term Kd(PAR) time series. The algorithm is intended for operational implementation in marine ecosystem monitoring systems and can contribute to strengthening decision-making.
(This article belongs to the Section Ocean Remote Sensing)
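The best-performing model reported above is a power regression of Kd(PAR) on the satellite Kd(490). A minimal sketch of that functional form and its fitting is below; the coefficients and the matchup data are hypothetical stand-ins, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def kd_par_power(kd490, a, b):
    """Power-law estimator: Kd(PAR) = a * Kd(490)**b."""
    return a * np.power(kd490, b)

# synthetic, noise-free matchups generated with hypothetical
# coefficients a = 0.8, b = 0.9 (for illustration only)
kd490 = np.linspace(0.02, 1.0, 50)
kd_par = kd_par_power(kd490, 0.8, 0.9)

# fit recovers the generating coefficients
(a_fit, b_fit), _ = curve_fit(kd_par_power, kd490, kd_par)
print(a_fit, b_fit)
```

In practice the fit would be run on quality-controlled in situ/satellite matchups and scored with a metric such as the paper's MPI.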

18 pages, 2527 KB  
Article
Monitoring Wet-Snow Avalanche Risk in Southeastern Tibet with a UAV-Based Multi-Sensor Framework
by Shuang Ye, Min Huang, Zijun Chen, Wenyu Jiang, Xianghuan Luo and Jiasong Zhu
Remote Sens. 2025, 17(22), 3698; https://doi.org/10.3390/rs17223698 - 12 Nov 2025
Viewed by 348
Abstract
Wet-snow avalanches constitute a major geomorphic hazard in southeastern Tibet, where warm, humid climatic conditions and a steep, high-relief terrain generate failure mechanisms that are distinct from those in cold, dry snow environments. This study investigates the snowpack conditions underlying avalanche initiation in this region by integrating UAV-based multi-sensor surveys with field validation. Ground-penetrating radar (GPR), infrared thermography, and optical imaging were employed to characterize snow depth, stratigraphy, liquid water content (LWC), snow water equivalent (SWE), and surface temperature across an inaccessible avalanche channel. Calibration at representative wet-snow sites established an appropriate LWC inversion model and clarified the dielectric properties of avalanche-prone snow. Results revealed SWE up to 1092.98 mm and LWC exceeding 13.8%, well above the critical thresholds for wet-snow instability, alongside near-isothermal profiles and weak bonding at the snow–ground interface. Stratigraphic and UAV-based observations consistently showed poorly bonded, water-saturated snow layers with ice lenses. These findings provide new insights into the hydro-thermal controls of wet-snow avalanche release under monsoonal influence and demonstrate the value of UAV-based surveys for advancing the monitoring and early warning of snow-related hazards in high-relief mountain systems.

24 pages, 3366 KB  
Article
Study of the Optimal YOLO Visual Detector Model for Enhancing UAV Detection and Classification in Optoelectronic Channels of Sensor Fusion Systems
by Ildar Kurmashev, Vladislav Semenyuk, Alberto Lupidi, Dmitriy Alyoshin, Liliya Kurmasheva and Alessandro Cantelli-Forti
Drones 2025, 9(11), 732; https://doi.org/10.3390/drones9110732 - 23 Oct 2025
Cited by 1 | Viewed by 1266
Abstract
The rapid spread of unmanned aerial vehicles (UAVs) has created new challenges for airspace security, as drones are increasingly used for surveillance, smuggling, and potentially for attacks near critical infrastructure. A key difficulty lies in reliably distinguishing UAVs from visually similar birds in electro-optical surveillance channels, where complex backgrounds and visual noise often increase false alarms. To address this, we investigated recent YOLO architectures and developed an enhanced model named YOLOv12-ADBC, incorporating an adaptive hierarchical feature integration mechanism to strengthen multi-scale spatial fusion. This architectural refinement improves sensitivity to subtle inter-class differences between drones and birds. A dedicated dataset of 7291 images was used to train and evaluate five YOLO versions (v8–v12), together with the proposed YOLOv12-ADBC. Comparative experiments demonstrated that YOLOv12-ADBC achieved the best overall performance, with precision = 0.892, recall = 0.864, mAP50 = 0.881, mAP50–95 = 0.633, and per-class accuracy reaching 96.4% for drones and 80% for birds. In inference tests on three video sequences simulating realistic monitoring conditions, YOLOv12-ADBC consistently outperformed baselines, achieving a detection accuracy of 92.1–95.5% and confidence levels up to 88.6%, while maintaining real-time processing at 118–135 frames per second (FPS). These results demonstrate that YOLOv12-ADBC not only surpasses previous YOLO models but also offers strong potential as the optical module in multi-sensor fusion frameworks. Its integration with radar, RF, and acoustic channels is expected to further enhance system-level robustness, providing a practical pathway toward reliable UAV detection in modern airspace protection systems.
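The detection metrics quoted above (mAP50, mAP50–95) rest on intersection-over-union matching between predicted and ground-truth boxes. A minimal IoU helper is sketched below; the (x1, y1, x2, y2) corner convention is an assumption of this sketch, not something the paper specifies.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0 (identical boxes)
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0 (disjoint boxes)
```

For mAP50, a prediction typically counts as a true positive when its IoU with a ground-truth box is at least 0.5; mAP50–95 averages over thresholds from 0.5 to 0.95.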

34 pages, 3341 KB  
Review
Challenges and Opportunities in Predicting Future Beach Evolution: A Review of Processes, Remote Sensing, and Modeling Approaches
by Thierry Garlan, Rafael Almar and Erwin W. J. Bergsma
Remote Sens. 2025, 17(19), 3360; https://doi.org/10.3390/rs17193360 - 4 Oct 2025
Viewed by 921
Abstract
This review synthesizes the current knowledge of the various natural and human-caused processes that influence the evolution of sandy beaches and explores ways to improve predictions. Short-term storm-driven dynamics have been extensively studied, but long-term changes remain poorly understood due to a limited grasp of non-wave drivers, outdated topo-bathymetric (land–sea continuum digital elevation model) data, and an absence of systematic uncertainty assessments. In this study, we classify and analyze the various drivers of beach change, including meteorological, oceanographic, geological, biological, and human influences, and we highlight their interactions across spatial and temporal scales. We place special emphasis on the role of remote sensing, detailing the capacities and limitations of optical, radar, lidar, unmanned aerial vehicle (UAV), video systems and satellite Earth observation for monitoring shoreline change, nearshore bathymetry (or seafloor), sediment dynamics, and ecosystem drivers. A case study from the Langue de Barbarie in Senegal, West Africa, illustrates the integration of in situ measurements, satellite observations, and modeling to identify local forcing factors. Based on this synthesis, we propose a structured framework for quantifying uncertainty that encompasses data, parameter, structural, and scenario uncertainties. We also outline ways to dynamically update nearshore bathymetry to improve predictive ability. Finally, we identify key challenges and opportunities for future coastal forecasting and emphasize the need for multi-sensor integration, hybrid modeling approaches, and holistic classifications that move beyond wave-only paradigms.

19 pages, 4672 KB  
Article
Monocular Visual/IMU/GNSS Integration System Using Deep Learning-Based Optical Flow for Intelligent Vehicle Localization
by Jeongmin Kang
Sensors 2025, 25(19), 6050; https://doi.org/10.3390/s25196050 - 1 Oct 2025
Viewed by 1034
Abstract
Accurate and reliable vehicle localization is essential for autonomous driving in complex outdoor environments. Traditional feature-based visual–inertial odometry (VIO) suffers from sparse features and sensitivity to illumination, limiting robustness in outdoor scenes. Deep learning-based optical flow offers dense and illumination-robust motion cues. However, existing methods rely on simple bidirectional consistency checks that yield unreliable flow in low-texture or ambiguous regions. Global navigation satellite system (GNSS) measurements can complement VIO, but often degrade in urban areas due to multipath interference. This paper proposes a multi-sensor fusion system that integrates monocular VIO with GNSS measurements to achieve robust and drift-free localization. The proposed approach employs a hybrid VIO framework that utilizes a deep learning-based optical flow network, with an enhanced consistency constraint that incorporates local structure and motion coherence to extract robust flow measurements. The extracted optical flow serves as visual measurements, which are then fused with inertial measurements to improve localization accuracy. GNSS updates further enhance global localization stability by mitigating long-term drift. The proposed method is evaluated on the publicly available KITTI dataset. Extensive experiments demonstrate its superior localization performance compared to previous similar methods. The results show that the filter-based multi-sensor fusion framework with optical flow refined by the enhanced consistency constraint ensures accurate and reliable localization in large-scale outdoor environments.
(This article belongs to the Special Issue AI-Driving for Autonomous Vehicles)
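The baseline bidirectional check the paper improves on can be sketched as a forward–backward round trip: warp each pixel by the forward flow, sample the backward flow there, and require the round-trip displacement to be near zero. The nearest-neighbor sampling and tolerance below are simplifying assumptions of this sketch (real implementations typically use bilinear interpolation).

```python
import numpy as np

def forward_backward_consistency(flow_fw, flow_bw, tol=1.0):
    """Mask of pixels whose forward flow, followed by the backward flow
    sampled at the warped position, returns near the starting point."""
    H, W = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # coordinates after applying the forward flow
    x2 = xs + flow_fw[..., 0]
    y2 = ys + flow_fw[..., 1]
    # nearest-neighbor sample of the backward flow at the warped positions
    xi = np.clip(np.round(x2).astype(int), 0, W - 1)
    yi = np.clip(np.round(y2).astype(int), 0, H - 1)
    bw = flow_bw[yi, xi]
    # round-trip displacement magnitude; near zero for consistent flow
    err = np.hypot(flow_fw[..., 0] + bw[..., 0],
                   flow_fw[..., 1] + bw[..., 1])
    return err < tol

# demo: a uniform 1-pixel rightward shift whose backward flow undoes it
fw = np.zeros((8, 8, 2))
fw[..., 0] = 1.0
bw = -fw
mask = forward_backward_consistency(fw, bw)
print(mask.all())  # True: every pixel passes the check
```

The paper's enhancement augments this with local structure and motion-coherence terms rather than relying on the round-trip error alone.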

31 pages, 3129 KB  
Review
A Review on Gas Pipeline Leak Detection: Acoustic-Based, OGI-Based, and Multimodal Fusion Methods
by Yankun Gong, Chao Bao, Zhengxi He, Yifan Jian, Xiaoye Wang, Haineng Huang and Xintai Song
Information 2025, 16(9), 731; https://doi.org/10.3390/info16090731 - 25 Aug 2025
Cited by 1 | Viewed by 2665
Abstract
Pipelines play a vital role in material transportation within industrial settings. This review synthesizes detection technologies for early-stage small gas leaks from pipelines in the industrial sector, with a focus on acoustic-based methods, optical gas imaging (OGI), and multimodal fusion approaches. It encompasses detection principles, inherent challenges, mitigation strategies, and the state of the art (SOTA). Small leaks refer to low flow leakage originating from defects with apertures at millimeter or submillimeter scales, posing significant detection difficulties. Acoustic detection leverages the acoustic wave signals generated by gas leaks for non-contact monitoring, offering advantages such as rapid response and broad coverage. However, its susceptibility to environmental noise interference often triggers false alarms. This limitation can be mitigated through time-frequency analysis, multi-sensor fusion, and deep-learning algorithms—effectively enhancing leak signals, suppressing background noise, and thereby improving the system’s detection robustness and accuracy. OGI utilizes infrared imaging technology to visualize leakage gas and is applicable to the detection of various polar gases. Its primary limitations include low image resolution, low contrast, and interference from complex backgrounds. Mitigation techniques involve background subtraction, optical flow estimation, fully convolutional neural networks (FCNNs), and vision transformers (ViTs), which enhance image contrast and extract multi-scale features to boost detection precision. Multimodal fusion technology integrates data from diverse sensors, such as acoustic and optical devices. Key challenges lie in achieving spatiotemporal synchronization across multiple sensors and effectively fusing heterogeneous data streams. Current methodologies primarily utilize decision-level fusion and feature-level fusion techniques. Decision-level fusion offers high flexibility and ease of implementation but lacks inter-feature interaction; it is less effective than feature-level fusion when correlations exist between heterogeneous features. Feature-level fusion amalgamates data from different modalities during the feature extraction phase, generating a unified cross-modal representation that effectively resolves inter-modal heterogeneity. In conclusion, we posit that multimodal fusion holds significant potential for further enhancing detection accuracy beyond the capabilities of existing single-modality technologies and is poised to become a major focus of future research in this domain.
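As one concrete illustration of the OGI mitigation step mentioned above, background subtraction can be sketched as a running-average background model with frame differencing. The update rate, threshold, and toy infrared frames below are hypothetical, chosen only to show the mechanism.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Exponential running average of the scene background."""
    return (1.0 - alpha) * background + alpha * frame

def plume_mask(background, frame, thresh=10.0):
    """Pixels deviating from the background model, e.g. an emerging plume."""
    return np.abs(frame.astype(float) - background) > thresh

# toy IR frames: a flat background, then a small bright region appears
bg = np.full((32, 32), 100.0)
frame = bg.copy()
frame[10:14, 10:14] += 25.0      # simulated 4x4 gas plume
mask = plume_mask(bg, frame)     # detect before absorbing the change
bg = update_background(bg, frame)
print(mask.sum())  # 16 pixels flagged
```

A real OGI pipeline would combine this with optical-flow or learned features, since raw differencing is sensitive to the low contrast and background clutter the review describes.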

21 pages, 12036 KB  
Article
Temporal Analysis of Reservoirs, Lakes, and Rivers in the Euphrates–Tigris Basin from Multi-Sensor Data Between 2018 and 2022
by Omer Gokberk Narin, Roderik Lindenbergh and Saygin Abdikan
Remote Sens. 2025, 17(16), 2913; https://doi.org/10.3390/rs17162913 - 21 Aug 2025
Viewed by 3838
Abstract
Monitoring freshwater resources is essential for assessing the impacts of drought, water management and global warming. Spaceborne LiDAR altimeters allow researchers to obtain water height information, while water area and precipitation data can be obtained using different satellite systems. In our study, we examined 5 years (2018–2022) of data concerning the Euphrates–Tigris Basin (ETB), one of the most important freshwater resources of the Middle East, and the water bodies of both the ETB and the largest lake of Türkiye, Lake Van. This multi-sensor study aimed to detect and monitor water levels and water areas in this water-scarce basin. The ATL13 product of the Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) was used to determine water levels, while the normalized difference water index was applied to Sentinel-2 optical imagery to monitor the water area. Variations in both water level and area may be related to the time series of precipitation data from the ECMWF Reanalysis v5 (ERA5) product. In addition, our results were compared with global HydroWeb water level data. Consequently, it was observed that the water levels in the region decreased by 5–6 m in many reservoirs after 2019. It is noteworthy that there was a decrease of approximately 14 m in the water level and 684 km2 in the water area between July 2019 and July 2022 in Lake Therthar.
(This article belongs to the Special Issue Multi-Source Remote Sensing Data in Hydrology and Water Management)
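The normalized difference water index applied to Sentinel-2 here is, in its standard McFeeters form, a simple band ratio of green and near-infrared reflectance (B3 and B8 on Sentinel-2). A minimal sketch, with made-up toy reflectance values:

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters NDWI = (green - NIR) / (green + NIR); water tends to be > 0."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + eps)

# toy reflectances: a water pixel and a vegetated land pixel
print(ndwi(0.30, 0.05))  # clearly positive -> water
print(ndwi(0.10, 0.35))  # negative -> land
```

Thresholding the index (commonly near 0) then yields the water mask whose area is tracked through time.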

44 pages, 10149 KB  
Review
A Review of Machine Learning-Assisted Gas Sensor Arrays in Medical Diagnosis
by Yueting Yu, Xin Cao, Chenxi Li, Mingyue Zhou, Tianyu Liu, Jiang Liu and Lu Zhang
Biosensors 2025, 15(8), 548; https://doi.org/10.3390/bios15080548 - 20 Aug 2025
Cited by 3 | Viewed by 4378
Abstract
Volatile organic compounds (VOCs) present in human exhaled breath have emerged as promising biomarkers for non-invasive disease diagnosis. However, traditional VOC detection technology that relies on large instruments is not widely used due to high costs and cumbersome testing processes. Machine learning-assisted gas sensor arrays offer a compelling alternative by enabling the accurate identification of complex VOC mixtures through collaborative multi-sensor detection and advanced algorithmic analysis. This work systematically reviews the advanced applications of machine learning-assisted gas sensor arrays in medical diagnosis. The types and principles of sensors commonly employed for disease diagnosis are summarized, such as electrochemical, optical, and semiconductor sensors. Machine learning methods that can be used to improve the recognition ability of sensor arrays are systematically listed, including support vector machines (SVM), random forests (RF), artificial neural networks (ANN), and principal component analysis (PCA). In addition, the research progress of sensor arrays combined with specific algorithms in the diagnosis of respiratory, metabolism and nutrition, hepatobiliary, gastrointestinal, and nervous system diseases is also discussed. Finally, we highlight current challenges associated with machine learning-assisted gas sensors and propose feasible directions for future improvement.
(This article belongs to the Special Issue AI-Enabled Biosensor Technologies for Boosting Medical Applications)
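To illustrate how one of the listed methods, PCA, separates array responses, here is a toy sketch with synthetic six-sensor readings for two hypothetical gas classes; all sensor offsets and noise levels are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# hypothetical responses of a 6-sensor array to two gas classes:
# each class has a characteristic response pattern plus small noise
class_a = rng.normal(0.0, 0.1, (50, 6)) + [1.0, 0.2, 0.5, 0.1, 0.8, 0.3]
class_b = rng.normal(0.0, 0.1, (50, 6)) + [0.2, 1.0, 0.1, 0.7, 0.3, 0.9]
X = np.vstack([class_a, class_b])

# project the 6-D responses onto the two leading principal components
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_)
```

Because the between-class pattern dominates the variance, the first component separates the two gases; a classifier (SVM, RF, ANN) would then operate on these scores.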

34 pages, 1262 KB  
Review
Deep Learning-Based Fusion of Optical, Radar, and LiDAR Data for Advancing Land Monitoring
by Yizhe Li and Xinqing Xiao
Sensors 2025, 25(16), 4991; https://doi.org/10.3390/s25164991 - 12 Aug 2025
Cited by 2 | Viewed by 4671
Abstract
Accurate and timely land monitoring is crucial for addressing global environmental, economic, and societal challenges, including climate change, sustainable development, and disaster mitigation. While single-source remote sensing data offers significant capabilities, inherent limitations such as cloud cover interference (optical), speckle noise (radar), or limited spectral information (LiDAR) often hinder comprehensive and robust characterization of land surfaces. Recent advancements in synergistic harmonization technology for land monitoring, along with enhanced signal processing techniques and the integration of machine learning algorithms, have significantly broadened the scope and depth of geosciences. Therefore, it is essential to summarize the comprehensive applications of synergistic harmonization technology for geosciences, with a particular focus on recent advancements. Most of the existing review papers focus on the application of a single technology in a specific area, highlighting the need for a comprehensive review that integrates synergistic harmonization technology. This article comprehensively reviews advancements in land monitoring achieved through the synergistic harmonization of optical, radar, and LiDAR satellite technologies. It details the unique strengths and weaknesses of each sensor type, highlighting how their integration overcomes individual limitations by leveraging complementary information. This review analyzes current data harmonization and preprocessing techniques, various data fusion levels, and the transformative role of machine learning and deep learning algorithms, including emerging foundation models. Key applications across diverse domains such as land cover/land use mapping, change detection, forest monitoring, urban monitoring, agricultural monitoring, and natural hazard assessment are discussed, demonstrating enhanced accuracy and scope. Finally, this review identifies persistent challenges such as technical complexities in data integration, issues with data availability and accessibility, validation hurdles, and the need for standardization. It proposes future research directions focusing on advanced AI, novel fusion techniques, improved data infrastructure, integrated “space–air–ground” systems, and interdisciplinary collaboration to realize the full potential of multi-sensor satellite data for robust and timely land surface monitoring. Supported by deep learning, this synergy will improve our ability to monitor land surface conditions more accurately and reliably.

26 pages, 14923 KB  
Article
Multi-Sensor Flood Mapping in Urban and Agricultural Landscapes of the Netherlands Using SAR and Optical Data with Random Forest Classifier
by Omer Gokberk Narin, Aliihsan Sekertekin, Caglar Bayik, Filiz Bektas Balcik, Mahmut Arıkan, Fusun Balik Sanli and Saygin Abdikan
Remote Sens. 2025, 17(15), 2712; https://doi.org/10.3390/rs17152712 - 5 Aug 2025
Viewed by 1715
Abstract
Floods are among the most harmful natural disasters, and climate change has made their effects on urban structures and agricultural fields more severe. This research presents a comprehensive flood mapping approach that combines multi-sensor satellite data with a machine learning method to evaluate the July 2021 flood in the Netherlands. The research developed 25 different feature scenarios by combining Sentinel-1, Landsat-8, and Radarsat-2 imagery, using backscattering coefficients together with optical Normalized Difference Water Index (NDWI) and Hue, Saturation, and Value (HSV) images and Synthetic Aperture Radar (SAR)-derived Grey Level Co-occurrence Matrix (GLCM) texture features. The Random Forest (RF) classifier was optimized and then applied to two different flood-prone regions: Zutphen’s urban area and Heijen’s agricultural land. Results demonstrated that the multi-sensor fusion scenarios (S18, S20, and S25) achieved the highest classification performance, with overall accuracy reaching 96.4% (Kappa = 0.906–0.949) in Zutphen and 87.5% (Kappa = 0.754–0.833) in Heijen. Flood-class F1 scores across all scenarios varied from 0.742 to 0.969 in Zutphen and from 0.626 to 0.969 in Heijen. The addition of SAR texture metrics enhanced flood boundary identification in both urban and agricultural settings. Radarsat-2 provided limited benefits to the overall results, since the freely available Sentinel-1 and Landsat-8 data proved more effective. This study demonstrates that using SAR and optical features together with texture information creates a powerful and expandable flood mapping system, and RF classification performs well in diverse landscape settings.
(This article belongs to the Special Issue Remote Sensing Applications in Flood Forecasting and Monitoring)
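Pixel-wise flood classification with a Random Forest over stacked SAR/optical features can be sketched as below. The feature layout (VV and VH backscatter in dB, NDWI, a GLCM texture value) and all numeric values are hypothetical stand-ins for the paper's feature scenarios.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# synthetic per-pixel features: [VV_dB, VH_dB, NDWI, GLCM_contrast]
# flooded pixels: low backscatter, positive NDWI (values are invented)
X_flood = rng.normal([-18.0, -25.0, 0.4, 0.2], 1.0, size=(200, 4))
X_dry = rng.normal([-8.0, -14.0, -0.2, 0.5], 1.0, size=(200, 4))
X = np.vstack([X_flood, X_dry])
y = np.array([1] * 200 + [0] * 200)  # 1 = flood, 0 = non-flood

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict([[-18.0, -25.0, 0.4, 0.2]]))  # [1] -> flood
```

In the actual workflow the feature stack comes from coregistered Sentinel-1/Landsat-8/Radarsat-2 layers, and accuracy is reported per region via overall accuracy, Kappa, and flood-class F1.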

22 pages, 61181 KB  
Article
Stepwise Building Damage Estimation Through Time-Scaled Multi-Sensor Integration: A Case Study of the 2024 Noto Peninsula Earthquake
by Satomi Kimijima, Chun Ping, Shono Fujita, Makoto Hanashima, Shingo Toride and Hitoshi Taguchi
Remote Sens. 2025, 17(15), 2638; https://doi.org/10.3390/rs17152638 - 30 Jul 2025
Cited by 1 | Viewed by 1935
Abstract
Rapid and comprehensive assessment of building damage caused by earthquakes is essential for effective emergency response and rescue efforts in the immediate aftermath. Advanced technologies, including real-time simulations, remote sensing, and multi-sensor systems, can effectively enhance situational awareness and structural damage evaluations. However, most existing methods rely on isolated time snapshots, and few studies have systematically explored the continuous, time-scaled integration and update of building damage estimates from multiple data sources. This study proposes a stepwise framework that continuously updates time-scaled, single-damage estimation outputs using the best available multi-sensor data for estimating earthquake-induced building damage. We demonstrated the framework using the 2024 Noto Peninsula Earthquake as a case study and incorporated official damage reports from the Ishikawa Prefectural Government, real-time earthquake building damage estimation (REBDE) data, and satellite-based damage estimation data (ALOS-2-building damage estimation (BDE)). By integrating the REBDE and ALOS-2-BDE datasets, we created a composite damage estimation product (integrated-BDE). These datasets were statistically validated against official damage records. Our framework showed significant improvements in accuracy, as demonstrated by the mean absolute percentage error, when the datasets were integrated and updated over time: 177.2% for REBDE, 58.1% for ALOS-2-BDE, and 25.0% for integrated-BDE. Finally, for stepwise damage estimation, we proposed a methodological framework that incorporates social media content to further confirm the accuracy of damage assessments. Potential supplementary datasets, including data from Internet of Things-enabled home appliances, real-time traffic data, very-high-resolution optical imagery, and structural health monitoring systems, can also be integrated to improve accuracy. The proposed framework is expected to improve the timeliness and accuracy of building damage assessments, foster shared understanding of disaster impacts across stakeholders, and support more effective emergency response planning, resource allocation, and decision-making in the early stages of disaster management in the future, particularly when comprehensive official damage reports are unavailable.
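The mean absolute percentage error used to validate the damage estimates against official records is a standard metric; a minimal sketch with made-up counts:

```python
import numpy as np

def mape(estimated, observed):
    """Mean absolute percentage error, in percent."""
    estimated = np.asarray(estimated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 100.0 * np.mean(np.abs(estimated - observed) / np.abs(observed))

# toy example: estimated vs. officially reported damaged-building counts
print(mape([110, 90], [100, 100]))  # 10.0 (% error)
```

Lower MAPE means closer agreement with the official reports, which is how the integrated-BDE product's 25.0% improves on the 177.2% of REBDE alone.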

31 pages, 33353 KB  
Article
Assessment of the October 2024 Cut-Off Low Event Floods Impact in Valencia (Spain) with Satellite and Geospatial Data
by Ignacio Castro-Melgar, Triantafyllos Falaras, Eleftheria Basiou and Issaak Parcharidis
Remote Sens. 2025, 17(13), 2145; https://doi.org/10.3390/rs17132145 - 22 Jun 2025
Cited by 4 | Viewed by 7378
Abstract
The October 2024 cut-off low event triggered one of the most catastrophic floods recorded in the Valencia Metropolitan Area, exposing significant vulnerabilities in urban planning, infrastructure resilience, and emergency preparedness. This study presents a novel comprehensive assessment of the event, using a multi-sensor satellite approach combined with socio-economic and infrastructure data at the metropolitan scale. It provides a comprehensive spatial assessment of the flood’s impacts by integrating radar Sentinel-1 and optical Sentinel-2 and Landsat 8 imagery with datasets including population density, land use, and critical infrastructure layers. Approximately 199 km2 were inundated, directly affecting over 90,000 residents and compromising vital infrastructure such as hospitals, schools, transportation corridors, and agricultural lands. Results highlight the exposure of peri-urban zones and agricultural areas, reflecting the socio-economic risks associated with the rapid urban expansion into flood-prone plains. The applied methodology demonstrates the essential role of multi-sensor remote sensing in accurately delineating flood extents and assessing socio-economic impacts. This approach constitutes a transferable framework for enhancing disaster risk management strategies in other Mediterranean urban regions. As extreme hydrometeorological events become more frequent under changing climatic conditions, the findings underscore the urgent need for integrating remote sensing technologies, early warning systems, and nature-based solutions into regional governance to strengthen resilience, reduce vulnerabilities, and mitigate future flood risks.

15 pages, 744 KB  
Article
Validation of a Commercially Available IMU-Based System Against an Optoelectronic System for Full-Body Motor Tasks
by Giacomo Villa, Serena Cerfoglio, Alessandro Bonfiglio, Paolo Capodaglio, Manuela Galli and Veronica Cimolin
Sensors 2025, 25(12), 3736; https://doi.org/10.3390/s25123736 - 14 Jun 2025
Cited by 1 | Viewed by 2551
Abstract
Inertial measurement units (IMUs) have gained popularity as portable and cost-effective alternatives to optoelectronic motion capture systems for assessing joint kinematics. This study aimed to validate a commercially available multi-sensor IMU-based system against a laboratory-grade motion capture system across lower limb, trunk, and [...] Read more.
Inertial measurement units (IMUs) have gained popularity as portable and cost-effective alternatives to optoelectronic motion capture systems for assessing joint kinematics. This study aimed to validate a commercially available multi-sensor IMU-based system against a laboratory-grade motion capture system across lower limb, trunk, and upper limb movements. Fifteen healthy participants performed a battery of single- and multi-joint tasks while motion data were simultaneously recorded by both systems. Range of motion (ROM) values were extracted from the two systems and compared. The IMU-based system demonstrated high concurrent validity, with non-significant differences in most tasks, root mean square error values generally below 7°, percentage of similarity greater than 97%, and strong correlations (r ≥ 0.77) with the reference system. Systematic biases were trivial (≤3.9°), and limits of agreement remained within clinically acceptable thresholds. The findings indicate that the tested IMU-based system provides ROM estimates statistically and clinically comparable to those obtained with optical reference systems. Given its portability, ease of use, and affordability, the IMU-based system presents a promising solution for motion analysis in both clinical and remote rehabilitation contexts, although future research should extend validation to pathological populations and longer monitoring periods. Full article
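The agreement statistics reported above (RMSE, correlation, bias with limits of agreement) can be computed directly from paired ROM values. A minimal sketch follows; the ROM numbers are invented for illustration and do not come from the study.

```python
import numpy as np

def agreement_metrics(rom_imu, rom_ref):
    """Concurrent-validity metrics between an IMU system and an
    optoelectronic reference: RMSE, Pearson r, and Bland-Altman
    bias with 95% limits of agreement."""
    imu = np.asarray(rom_imu, dtype=float)
    ref = np.asarray(rom_ref, dtype=float)
    diff = imu - ref
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    r = float(np.corrcoef(imu, ref)[0, 1])
    bias = float(diff.mean())
    half_width = 1.96 * float(diff.std(ddof=1))
    return {"rmse": rmse, "r": r, "bias": bias,
            "loa": (bias - half_width, bias + half_width)}

# Hypothetical ROM values (degrees) for one task across five trials.
imu = [58.1, 61.3, 55.0, 60.2, 57.4]
ref = [57.0, 60.5, 54.2, 59.8, 56.1]
m = agreement_metrics(imu, ref)
```

Thresholds like RMSE below 7° or trivial bias (≤3.9°) are then simple comparisons against these outputs.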
(This article belongs to the Special Issue IMU and Innovative Sensors for Healthcare)
24 pages, 17094 KB  
Article
Multi-Camera Machine Learning for Salt Marsh Species Classification and Mapping
by Marco Moreno, Sagar Dalai, Grace Cott, Ben Bartlett, Matheus Santos, Tom Dorian, James Riordan, Chris McGonigle, Fabio Sacchetti and Gerard Dooly
Remote Sens. 2025, 17(12), 1964; https://doi.org/10.3390/rs17121964 - 6 Jun 2025
Cited by 2 | Viewed by 1174
Abstract
Accurate classification of salt marsh vegetation is vital for conservation efforts and environmental monitoring, particularly given the critical role these ecosystems play as carbon sinks. Understanding and quantifying the extent and types of habitats present in Ireland is essential to support national biodiversity goals and climate action plans. Unmanned Aerial Vehicles (UAVs) equipped with optical sensors offer a powerful means of mapping vegetation in these areas. However, many current studies rely on single-sensor approaches, which can constrain the accuracy of classification and limit our understanding of complex habitat dynamics. This study evaluates the integration of Red-Green-Blue (RGB), Multispectral Imaging (MSI), and Hyperspectral Imaging (HSI) to improve species classification compared to using individual sensors. UAV surveys were conducted with RGB, MSI, and HSI sensors, and the collected data were classified using Random Forest (RF), Spectral Angle Mapper (SAM), and Support Vector Machine (SVM) algorithms. The classification performance was assessed using Overall Accuracy (OA), Kappa Coefficient (k), Producer’s Accuracy (PA), and User’s Accuracy (UA), for both individual sensor datasets and the fused dataset generated via band stacking. The multi-camera approach achieved a 97% classification accuracy, surpassing the highest accuracy obtained by a single sensor (HSI, 92%). This demonstrates that data fusion and band reduction techniques improve species differentiation, particularly for vegetation with overlapping spectral signatures. The results suggest that multi-sensor UAV systems offer a cost-effective and efficient approach to ecosystem monitoring, biodiversity assessment, and conservation planning. Full article
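The band stacking described above amounts to concatenating each sensor's per-pixel features (RGB + MSI + HSI bands) into one vector before classification. A toy scikit-learn sketch with a Random Forest follows; the band counts, class means, and separability are all hypothetical, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stacked spectra: 3 RGB + 5 MSI + 20 HSI = 28 bands per
# pixel, for three marsh species with distinct mean reflectance.
def make_samples(mean, n=200, bands=28):
    return rng.normal(mean, 0.05, size=(n, bands))

X = np.vstack([make_samples(m) for m in (0.2, 0.4, 0.6)])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
oa = accuracy_score(y_te, pred)       # Overall Accuracy
kappa = cohen_kappa_score(y_te, pred) # Kappa Coefficient
```

Reporting both OA and Kappa, as the study does, guards against accuracy figures inflated by class imbalance.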
33 pages, 10200 KB  
Review
Unmanned Surface Vessels in Marine Surveillance and Management: Advances in Communication, Navigation, Control, and Data-Driven Research
by Zhichao Lv, Xiangyu Wang, Gang Wang, Xuefei Xing, Chenlong Lv and Fei Yu
J. Mar. Sci. Eng. 2025, 13(5), 969; https://doi.org/10.3390/jmse13050969 - 16 May 2025
Cited by 6 | Viewed by 7183
Abstract
Unmanned Surface Vehicles (USVs) have emerged as vital tools in marine monitoring and management due to their high efficiency, low cost, and flexible deployment capabilities. This paper presents a systematic review focusing on four core areas of USV applications: communication networking, navigation, control, and data-driven operations. First, the characteristics and challenges of acoustic, electromagnetic, and optical communication methods for USV networking are analyzed, with an emphasis on the future trend toward multimodal communication integration. Second, a comprehensive review of global navigation, local navigation, cooperative navigation, and autonomous navigation technologies is provided, highlighting their applications and limitations in complex environments. Third, the evolution of USV control systems is examined, covering group control, distributed control, and adaptive control, with particular attention given to fault tolerance, delay compensation, and energy optimization. Finally, the application of USVs in data-driven marine tasks is summarized, including multi-sensor fusion, real-time perception, and autonomous decision-making mechanisms. This study aims to reveal the interaction and coordination mechanisms among communication, navigation, control, and data-driven operations from a system integration perspective, providing insights and guidance for the intelligent operations and comprehensive applications of USVs in marine environments. Full article
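One of the simplest instances of the multi-sensor fusion the review surveys is a complementary filter for heading: a gyro is smooth but drifts, a compass is noisy but unbiased, and blending the two bounds the drift. The sketch below is a generic textbook example, not a method from any of the reviewed works; all rates and readings are invented.

```python
def fuse_heading(gyro_rates, compass, dt=0.1, alpha=0.98):
    """Minimal complementary filter: integrate the gyro rate (deg/s)
    for short-term smoothness, and pull toward the compass reading
    (deg) to bound long-term drift. alpha near 1 trusts the gyro."""
    heading = compass[0]
    out = []
    for rate, mag in zip(gyro_rates, compass):
        heading = alpha * (heading + rate * dt) + (1 - alpha) * mag
        out.append(heading)
    return out

# True heading is a constant 90 deg. The gyro reports a spurious
# 0.5 deg/s bias; the compass reads the true value.
est = fuse_heading([0.5] * 200, [90.0] * 200)
```

Pure integration of the same gyro would drift to 100° over these 200 steps; the compass correction keeps the fused estimate within a few degrees of truth.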