Search Results (158)

Search Parameters:
Keywords = satellite video

24 pages, 23817 KiB  
Article
Dual-Path Adversarial Denoising Network Based on UNet
by Jinchi Yu, Yu Zhou, Mingchen Sun and Dadong Wang
Sensors 2025, 25(15), 4751; https://doi.org/10.3390/s25154751 - 1 Aug 2025
Viewed by 234
Abstract
Digital image quality is crucial for reliable analysis in applications such as medical imaging, satellite remote sensing, and video surveillance. However, traditional denoising methods struggle to balance noise removal with detail preservation and lack adaptability to various types of noise. We propose a novel three-module architecture for image denoising, comprising a generator, a dual-path-UNet-based denoiser, and a discriminator. The generator creates synthetic noise patterns to augment training data, while the dual-path-UNet denoiser uses multiple receptive field modules to preserve fine details and dense feature fusion to maintain global structural integrity. The discriminator provides adversarial feedback to enhance denoising performance. This dual-path adversarial training mechanism addresses the limitations of traditional methods by simultaneously capturing both local details and global structures. Experiments on the SIDD, DND, and PolyU datasets demonstrate superior performance. We compare our architecture with the latest state-of-the-art GAN variants through comprehensive qualitative and quantitative evaluations. These results confirm the effectiveness of noise removal with minimal loss of critical image details. The proposed architecture enhances image denoising capabilities in complex noise scenarios, providing a robust solution for applications that require high image fidelity. By enhancing adaptability to various types of noise while maintaining structural integrity, this method provides a versatile tool for image processing tasks that require preserving detail. Full article
(This article belongs to the Section Sensing and Imaging)

24 pages, 19550 KiB  
Article
TMTS: A Physics-Based Turbulence Mitigation Network Guided by Turbulence Signatures for Satellite Video
by Jie Yin, Tao Sun, Xiao Zhang, Guorong Zhang, Xue Wan and Jianjun He
Remote Sens. 2025, 17(14), 2422; https://doi.org/10.3390/rs17142422 - 12 Jul 2025
Viewed by 266
Abstract
Atmospheric turbulence severely degrades high-resolution satellite videos through spatiotemporally coupled distortions, including temporal jitter, spatial-variant blur, deformation, and scintillation, thereby constraining downstream analytical capabilities. Restoring turbulence-corrupted videos poses a challenging ill-posed inverse problem due to the inherent randomness of turbulent fluctuations. While existing turbulence mitigation methods for long-range imaging demonstrate partial success, they exhibit limited generalizability and interpretability in large-scale satellite scenarios. Inspired by refractive-index structure constant (Cn2) estimation from degraded sequences, we propose a physics-informed turbulence signature (TS) prior that explicitly captures spatiotemporal distortion patterns to enhance model transparency. Integrating this prior into a lucky imaging framework, we develop a Physics-Based Turbulence Mitigation Network guided by Turbulence Signature (TMTS) to disentangle atmospheric disturbances from satellite videos. The framework employs deformable attention modules guided by turbulence signatures to correct geometric distortions, iterative gated mechanisms for temporal alignment stability, and adaptive multi-frame aggregation to address spatially varying blur. Comprehensive experiments on synthetic and real-world turbulence-degraded satellite videos demonstrate TMTS’s superiority, achieving 0.27 dB PSNR and 0.0015 SSIM improvements over the DATUM baseline while maintaining practical computational efficiency. By bridging turbulence physics with deep learning, our approach provides both performance enhancements and interpretable restoration mechanisms, offering a viable solution for operational satellite video processing under atmospheric disturbances. Full article

20 pages, 2149 KiB  
Article
Accelerating Facial Image Super-Resolution via Sparse Momentum and Encoder State Reuse
by Kerang Cao, Na Bao, Shuai Zheng, Ye Liu and Xing Wang
Electronics 2025, 14(13), 2616; https://doi.org/10.3390/electronics14132616 - 28 Jun 2025
Viewed by 417
Abstract
Single image super-resolution (SISR) aims to reconstruct high-quality images from low-resolution inputs, a persistent challenge in computer vision with critical applications in medical imaging, satellite imagery, and video enhancement. Traditional diffusion model-based (DM-based) methods, while effective in restoring fine details, suffer from computational inefficiency due to their iterative denoising process. To address this, we introduce the Sparse Momentum-based Faster Diffusion Model (SMFDM), designed for rapid and high-fidelity super-resolution. SMFDM integrates a novel encoder state reuse mechanism that selectively omits non-critical time steps during the denoising phase, significantly reducing computational redundancy. Additionally, the model employs a sparse momentum mechanism, enabling robust representation capabilities while utilizing only a fraction of the original model weights. Experiments demonstrate that SMFDM achieves an impressive 71.04% acceleration in the diffusion process, requiring only 15% of the original weights, while maintaining high-quality outputs with effective preservation of image details and textures. Our work highlights the potential of combining sparse learning and efficient sampling strategies to enhance the practical applicability of diffusion models for super-resolution tasks. Full article

17 pages, 1976 KiB  
Article
A Novel Reconfigurable Vector-Processed Interleaving Algorithm for a DVB-RCS2 Turbo Encoder
by Moshe Bensimon, Ohad Boxerman, Yehuda Ben-Shimol, Erez Manor and Shlomo Greenberg
Electronics 2025, 14(13), 2600; https://doi.org/10.3390/electronics14132600 - 27 Jun 2025
Viewed by 252
Abstract
Turbo Codes (TCs) are a family of convolutional codes that provide powerful Forward Error Correction (FEC) and operate near the Shannon limit for channel capacity. In the context of modern communication systems, such as those conforming to the DVB-RCS2 standard, Turbo Encoders (TEs) play a crucial role in ensuring robust data transmission over noisy satellite links. A key computational bottleneck in the Turbo Encoder is the non-uniform interleaving stage, where input bits are rearranged according to a dynamically generated permutation pattern. This stage often requires the intermediate storage of data, resulting in increased latency and reduced throughput, especially in embedded or real-time systems. This paper introduces a vector processing algorithm designed to accelerate the interleaving stage of the Turbo Encoder. The proposed algorithm is tailored for vector DSP architectures (e.g., CEVA-XC4500), and leverages the hardware’s SIMD capabilities to perform the permutation operation in a structured, phase-wise manner. Our method adopts a modular Load–Execute–Store design, facilitating efficient memory alignment, deterministic latency, and hardware portability. We present a detailed breakdown of the algorithm’s implementation, compare it with a conventional scalar (serial) model, and analyze its compatibility with the DVB-RCS2 specification. Experimental results demonstrate significant performance improvements, achieving a speed-up factor of up to 3.4× in total cycles, 4.8× in write operations, and 7.3× in read operations, relative to the baseline scalar implementation. The findings highlight the effectiveness of vectorized permutation in FEC pipelines and its relevance for high-throughput, low-power communication systems. Full article
(This article belongs to the Special Issue Evolutionary Hardware-Software Codesign Based on FPGA)
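The interleaving stage described in the abstract is, at its core, a data-dependent gather. A minimal NumPy sketch (not the paper's CEVA-XC4500 implementation; the permutation here is a random stand-in, since DVB-RCS2 derives it from standard-specific parameters) contrasts the scalar model with a single vectorized gather:

```python
import numpy as np

def interleave_scalar(bits, perm):
    """Serial model: one read and one write per bit."""
    out = np.empty_like(bits)
    for i, p in enumerate(perm):
        out[i] = bits[p]
    return out

def interleave_vector(bits, perm):
    """Vector model: one gather over the whole block,
    analogous to a SIMD load-execute-store pass."""
    return bits[perm]

rng = np.random.default_rng(0)
block = rng.integers(0, 2, size=1024, dtype=np.uint8)
perm = rng.permutation(1024)
assert np.array_equal(interleave_scalar(block, perm),
                      interleave_vector(block, perm))
```

The vector form maps naturally onto the Load-Execute-Store phases the paper describes: the block is loaded once, permuted in registers, and stored once, rather than incurring per-bit memory traffic.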

20 pages, 119066 KiB  
Article
Coarse-Fine Tracker: A Robust MOT Framework for Satellite Videos via Tracking Any Point
by Hanru Shi, Xiaoxuan Liu, Xiyu Qi, Enze Zhu, Jie Jia and Lei Wang
Remote Sens. 2025, 17(13), 2167; https://doi.org/10.3390/rs17132167 - 24 Jun 2025
Viewed by 284
Abstract
Traditional Multiple Object Tracking (MOT) methods in satellite videos mostly follow the Detection-Based Tracking (DBT) framework. However, the DBT framework assumes that all objects are correctly recognized and localized by the detector. In practice, the low resolution of satellite videos, small objects, and complex backgrounds inevitably lead to a decline in detector performance. To alleviate the impact of detector degradation on tracking, we propose Coarse-Fine Tracker, a framework that integrates the MOT framework with the Tracking Any Point (TAP) method CoTracker for the first time, leveraging TAP’s persistent point correspondence modeling to compensate for detector failures. In our Coarse-Fine Tracker, we divide the satellite video into sub-videos. For each sub-video, we first use ByteTrack to track the outputs of the detector, referred to as coarse tracking, which involves the Kalman filter and box-level motion features. Given the small size of objects in satellite videos, we treat each object as a point to be tracked. We then use CoTracker to track the center point of each object, referred to as fine tracking, by calculating the appearance feature similarity between each point and its neighboring points. Finally, the Consensus Fusion Strategy eliminates mismatched detections in coarse tracking results by checking their geometric consistency against fine tracking results and recovers missed objects via linear interpolation or linear fitting. This method is validated on the VISO and SAT-MTB datasets. Experimental results on VISO show that the tracker achieves a multi-object tracking accuracy (MOTA) of 66.9, a multi-object tracking precision (MOTP) of 64.1, and an IDF1 score of 77.8, surpassing the detector-only baseline by 11.1% in MOTA while reducing ID switches by 139. Comparative experiments with ByteTrack demonstrate the robustness of our tracking method when detector performance deteriorates. Full article
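The recovery step the abstract mentions ("recovers missed objects via linear interpolation") can be illustrated with a small sketch (function and variable names are mine, not the paper's): a box missing at frame t is interpolated from the nearest frames where the track is known.

```python
def interpolate_box(t, t0, box0, t1, box1):
    """Linearly interpolate an (x1, y1, x2, y2) box at frame t,
    t0 <= t <= t1, from the nearest known boxes around the gap."""
    w = (t - t0) / (t1 - t0)  # fractional position inside the gap
    return tuple(a + w * (b - a) for a, b in zip(box0, box1))

# A box drifting 10 px to the right over 2 frames: the missing
# middle frame lands exactly halfway between its neighbours.
mid = interpolate_box(1, 0, (0, 0, 10, 10), 2, (10, 0, 20, 10))
assert mid == (5.0, 0.0, 15.0, 10.0)
```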

39 pages, 22038 KiB  
Article
UIMM-Tracker: IMM-Based with Uncertainty Detection for Video Satellite Infrared Small-Target Tracking
by Yuanxin Huang, Xiyang Zhi, Zhichao Xu, Wenbin Chen, Qichao Han, Jianming Hu, Yi Sui and Wei Zhang
Remote Sens. 2025, 17(12), 2052; https://doi.org/10.3390/rs17122052 - 14 Jun 2025
Viewed by 412
Abstract
Infrared video satellites have the characteristics of wide-area long-duration surveillance, enabling continuous operation day and night compared to visible light imaging methods. Therefore, they are widely used for continuous monitoring and tracking of important targets. However, energy attenuation caused by long-distance radiation transmission reduces imaging contrast and leads to the loss of edge contours and texture details, posing significant challenges to target tracking algorithm design. This paper proposes an infrared small-target tracking method, the UIMM-Tracker, based on the tracking-by-detection (TbD) paradigm. First, detection uncertainty is measured and injected into the multi-model observation noise, transferring the distribution knowledge of the detection process to the tracking process. Second, a dynamic modulation mechanism is introduced into the Markov transition process of multi-model fusion, enabling the tracking model to autonomously adapt to targets with varying maneuvering states. Additionally, detection uncertainty is incorporated into the data association method, and a distance cost matrix between trajectories and detections is constructed based on scale and energy invariance assumptions, improving tracking accuracy. Finally, the proposed method achieves average performance scores of 68.5%, 45.6%, 56.2%, and 0.41 in IDF1, MOTA, HOTA, and precision metrics, respectively, across 20 challenging sequences, outperforming classical methods and demonstrating its effectiveness. Full article

22 pages, 4957 KiB  
Article
OITrack: Multi-Object Tracking for Small Targets in Satellite Video via Online Trajectory Completion and Iterative Expansion over Union
by Weishan Lu, Xueying Wang, Wei An, Chao Xiao, Qian Yin and Guoliang Zhang
Remote Sens. 2025, 17(12), 2042; https://doi.org/10.3390/rs17122042 - 13 Jun 2025
Viewed by 471
Abstract
Multi-object tracking (MOT) in satellite videos presents significant challenges, including small target sizes, dense distributions, and complex motion patterns. To address these issues, we propose OITrack, an improved tracking framework that integrates a Trajectory Completion Module (TCM), an Adaptive Kalman Filter (AKF), and an Iterative Expansion Intersection over Union (I-EIoU) strategy. Specifically, the TCM enhances temporal continuity by compensating for missing trajectories, the AKF improves tracking robustness by dynamically adjusting observation noise, and I-EIoU optimizes target association, leading to more accurate small-object matching. Experimental evaluations on the VIdeo Satellite Objects (VISO) dataset demonstrated that OITrack outperforms existing MOT methods across multiple key metrics, achieving a Multiple Object Tracking Accuracy (MOTA) of 57.0%, an Identity F1 Score (IDF1) of 67.5%, a reduction in False Negatives (FN) to 29,170, and a decrease in Identity Switches (ID switches) to 889. These results indicate that our method effectively improves tracking accuracy while minimizing identity mismatches, enhancing overall robustness. Full article
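The motivation for expansion-based association is easy to see in a sketch (a plain illustration under my own naming; the paper's exact I-EIoU formulation may differ): tiny boxes that narrowly miss each other have zero IoU, but gain a usable match score once both are grown by a margin.

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def expanded_iou(a, b, margin):
    """IoU after growing both boxes by `margin` on every side, so
    near-miss small targets still produce a nonzero match score."""
    grow = lambda x: (x[0] - margin, x[1] - margin, x[2] + margin, x[3] + margin)
    return iou(grow(a), grow(b))

a, b = (0, 0, 2, 2), (3, 0, 5, 2)   # disjoint 2x2 boxes, 1 px apart
assert iou(a, b) == 0.0
assert expanded_iou(a, b, margin=1) > 0.0
```

In an iterative scheme, the margin would start at zero and be increased only for detections left unmatched, which is how an expansion step can rescue associations for very small satellite-video targets.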

22 pages, 2382 KiB  
Article
A Quantitative Study on Multipoint Video Distribution Systems (MVDS) Interference to GEO Satellites in Lebanon
by Ali Karaki, Hiba Abdalla, Mohammed Al-Husseini and Hamza Issa
Telecom 2025, 6(2), 36; https://doi.org/10.3390/telecom6020036 - 28 May 2025
Viewed by 454
Abstract
This paper investigates the potential for interference from multipoint video distribution system (MVDS) transmissions in Lebanon, specifically side-lobe radiation, to geostationary Earth orbit (GEO) satellites. Through simulation and analysis of antenna radiation patterns, the impact of varying MVDS power levels on the carrier-to-noise ratio (C/N) at the satellite receiver is quantified. The results demonstrate significant degradation in signal quality, with the C/N dropping to −2.29 dB at an MVDS power of 0 dBW for the current system. To mitigate this interference, a two-step mitigation strategy is proposed and evaluated. The study supports the feasibility of coexistence between MVDS and GEO satellite services in the Ku-band within the Lebanese context. Full article

15 pages, 126037 KiB  
Article
An Improved Dark Channel Prior Method for Video Defogging and Its FPGA Implementation
by Lin Wang, Zhongqiang Luo and Li Gao
Symmetry 2025, 17(6), 839; https://doi.org/10.3390/sym17060839 - 27 May 2025
Viewed by 507
Abstract
In fog, rain, snow, haze, and other complex environments, scenes captured by imaging equipment are prone to image blurring, contrast degradation, and other problems. The resulting decline in image quality fails to satisfy the requirements of application scenarios such as video surveillance, satellite reconnaissance, and target tracking. To address the shortcomings of the traditional dark channel prior algorithm in video defogging, this paper improves the guided filtering algorithm to refine the transmittance image and reduce the halo effect of the traditional algorithm. In addition, gamma correction is applied to the recovered defogged image to enhance image details in low-light environments. A parallel, symmetric pipeline design on the FPGA improves the system's overall stability, and the improved dark channel prior algorithm is realized through hardware–software co-design across ARM and the FPGA. Experiments show that the algorithm improves the Underwater Image Quality Measure (UIQM), Average Gradient (AG), and Information Entropy (IE) of the image, while the system stably processes video at a resolution of 1280 × 720 @ 60 fps. Board-level analysis of power consumption and resource usage shows that the FPGA consumes only 2.242 W, placing the hardware circuit design in the low-power category. Full article
(This article belongs to the Section Engineering and Materials)
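For context, the dark channel prior underlying the paper's pipeline is easy to sketch in pure NumPy (a naive minimum filter, not the paper's guided-filter refinement or FPGA pipeline): haze raises the dark channel, while haze-free regions usually have at least one near-zero color channel.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Dark channel: per-pixel min over RGB, then a local
    minimum filter over a patch (naive, edge-padded)."""
    mono = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mono, r, mode="edge")
    h, w = mono.shape
    out = np.full((h, w), np.inf)
    for dy in range(patch):
        for dx in range(patch):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def gamma_correct(img, gamma=0.6):
    """Gamma < 1 lifts dark regions of the defogged frame."""
    return np.clip(img, 0.0, 1.0) ** gamma

# A washed-out grey frame scores high; a frame with one
# near-zero colour channel scores near zero.
hazy = np.full((16, 16, 3), 0.8)
clear = np.dstack([np.full((16, 16), 0.9)] * 2 + [np.zeros((16, 16))])
assert dark_channel(hazy).max() == 0.8
assert dark_channel(clear).max() == 0.0
```

The transmission estimate then derives from the dark channel of the input normalized by the atmospheric light, which is the stage the paper refines with improved guided filtering.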

38 pages, 4091 KiB  
Article
Mitigating the Impact of Satellite Vibrations on the Acquisition of Satellite Laser Links Through Optimized Scan Path and Parameters
by Muhammad Khalid, Wu Ji, Deng Li and Li Kun
Photonics 2025, 12(5), 444; https://doi.org/10.3390/photonics12050444 - 4 May 2025
Viewed by 775
Abstract
In the past two decades, there has been a tremendous increase in demand for services requiring a high bandwidth, a low latency, and high data rates, such as broadband internet services, video streaming, cloud computing, IoT devices, and mobile data services (5G and beyond). Optical wireless communication (OWC) technology, which is also envisioned for next-generation satellite networks using laser links, offers a promising solution to meet these demands. Establishing a line-of-sight (LOS) link and initiating communication in laser links is a challenging task. This process is managed by the acquisition, pointing, and tracking (APT) system, which must deal with the narrow beam divergence and the presence of satellite platform vibrations. These factors increase acquisition time and decrease acquisition probability. This study presents a framework for evaluating the acquisition time of four different scanning methods: spiral, raster, square spiral, and hexagonal, using a probabilistic approach. A satellite platform vibration model is used, and an algorithm for estimating its power spectral density is applied. Maximum likelihood estimation is employed to estimate key parameters from satellite vibrations to optimize scan parameters, such as the overlap factor and beam divergence. The simulation results show that selecting the scan path, overlap factor, and beam divergence based on an accurate estimation of satellite vibrations can prevent multiple scans of the uncertainty region, improve target satellite detection, and increase acquisition probability, given that the satellite vibration amplitudes are within the constraints imposed by the scan parameters. This study contributes to improving the acquisition process, which can, in turn, enhance the pointing and tracking phases of the APT system in laser links. Full article
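As a rough illustration of how the scan parameters interact (my own simplified model, not the paper's framework): an Archimedean spiral whose ring-to-ring pitch shrinks as the overlap factor grows, so that vibration-induced misses on one ring are re-covered by its neighbors.

```python
import math

def spiral_scan(beam_div, overlap, n_points):
    """Archimedean spiral scan points. Pitch between successive rings
    is beam_div * (1 - overlap): more overlap -> denser coverage."""
    pitch = beam_div * (1.0 - overlap)
    pts = []
    for k in range(n_points):
        theta = math.sqrt(4.0 * math.pi * k)   # roughly uniform spacing
        r = pitch * theta / (2.0 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

path = spiral_scan(beam_div=0.5, overlap=0.4, n_points=200)
radii = [math.hypot(x, y) for x, y in path]
assert radii[0] == 0.0                                # starts at boresight
assert all(b >= a for a, b in zip(radii, radii[1:]))  # opens outward
```

Estimating the vibration amplitude (as the paper does via maximum likelihood on the vibration spectrum) would then set `overlap` large enough that the residual pointing jitter stays within one ring pitch.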

17 pages, 9440 KiB  
Article
RACFME: Object Tracking in Satellite Videos by Rotation Adaptive Correlation Filters with Motion Estimations
by Xiongzhi Wu, Haifeng Zhang, Chao Mei, Jiaxin Wu and Han Ai
Symmetry 2025, 17(4), 608; https://doi.org/10.3390/sym17040608 - 16 Apr 2025
Viewed by 387
Abstract
Video satellites provide high-temporal-resolution remote sensing images that enable continuous monitoring of the ground for applications such as target tracking and airport traffic detection. In this paper, we address the problems of object occlusion and the tracking of rotating objects in satellite videos by introducing a rotation-adaptive tracking algorithm for correlation filters with motion estimation (RACFME). Our algorithm proposes the following improvements over the KCF method: (a) A rotation-adaptive feature enhancement module (RA) is proposed to obtain the rotated image block by affine transformation combined with the target rotation direction prior, which overcomes the disadvantage of HOG features lacking rotation adaptability, improves tracking accuracy while ensuring real-time performance, and solves the problem of tracking failure due to insufficient valid positive samples when tracking rotating targets. (b) Based on the correlation between peak response and occlusion, an occlusion detection method for vehicles and ships in satellite video is proposed. (c) Motion estimations are achieved by combining Kalman filtering with motion trajectory averaging, which solves the problem of tracking failure in the case of object occlusion. The experimental results show that the proposed RACFME algorithm can track a moving target with a 95% success score, and the RA module and ME both play an effective role. Full article
(This article belongs to the Special Issue Advances in Image Processing with Symmetry/Asymmetry)

21 pages, 15502 KiB  
Article
Multi-Scale Spatiotemporal Feature Enhancement and Recursive Motion Compensation for Satellite Video Geographic Registration
by Yu Geng, Jingguo Lv, Shuwei Huang and Boyu Wang
J. Imaging 2025, 11(4), 112; https://doi.org/10.3390/jimaging11040112 - 8 Apr 2025
Viewed by 523
Abstract
Satellite video geographic alignment can be applied to target detection and tracking, true 3D scene construction, and image geometry measurement, and is a necessary preprocessing step for satellite video applications. In this paper, a multi-scale spatiotemporal feature enhancement and recursive motion compensation method for satellite video geographic alignment is proposed. Building on the SuperGlue matching algorithm, the method introduces multi-scale dilated attention (MSDA) to enhance feature extraction, adopts a joint multi-frame optimization strategy (MFMO), and designs a recursive motion compensation model (RMCM) to eliminate the cumulative effect of orbit error and improve the accuracy of inter-frame image point matching. A rational function model then establishes the geometric mapping between video and ground points to realize the georeferencing of satellite video. The experimental results show that the method achieves inter-frame matching accuracy at the 0.8-pixel level and a georeferencing error of 3 m, a significant improvement over the traditional single-frame method, providing a useful reference for subsequent research. Full article
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

17 pages, 3846 KiB  
Article
Video Satellite Staring Control of Ground Targets Based on Visual Velocity Estimation and Uncalibrated Cameras
by Caizhi Fan, Chao Song and Zikai Zhong
Remote Sens. 2025, 17(7), 1116; https://doi.org/10.3390/rs17071116 - 21 Mar 2025
Viewed by 344
Abstract
Compared to traditional remote sensing technology, video satellites have unique advantages such as real-time continuous imaging and the ability to independently complete staring observation. To achieve effective staring control, the satellite needs to perform attitude maneuvers to ensure that the target's projection stays within the camera's visual field and gradually reaches the desired position. The generation of image-based control instructions relies on calculating the projection coordinates and their rate of change (i.e., the visual velocity) on the camera's image plane. However, the visual velocity is usually difficult to obtain directly. Traditional methods that compute visual velocity by time differencing are limited by video frame rates and the computing power of onboard processors, and are greatly affected by measurement noise, resulting in decreased control accuracy and higher consumption of maneuvering energy. To address these shortcomings, this paper proposes a control method based on visual velocity estimation, which computes the target's visual velocity in real time through adaptive estimation; the stability of the closed-loop system is then rigorously demonstrated. Finally, simulation comparisons with the traditional differential method show that the proposed method improves attitude accuracy by about 74% and reduces energy consumption by about 77%. Full article
(This article belongs to the Special Issue Earth Observation Using Satellite Global Images of Remote Sensing)

19 pages, 10608 KiB  
Article
Urban Waterlogging Monitoring and Recognition in Low-Light Scenarios Using Surveillance Videos and Deep Learning
by Jian Zhao, Xing Wang, Cuiyan Zhang, Jing Hu, Jiaquan Wan, Lu Cheng, Shuaiyi Shi and Xinyu Zhu
Water 2025, 17(5), 707; https://doi.org/10.3390/w17050707 - 28 Feb 2025
Cited by 2 | Viewed by 1125
Abstract
With the intensification of global climate change, extreme precipitation events are occurring more frequently, making the monitoring and management of urban flooding a critical global issue. Urban surveillance camera sensor networks, characterized by their large-scale deployment, rapid data transmission, and low cost, have emerged as a key complement to traditional remote sensing techniques. These networks offer new opportunities for high-spatiotemporal-resolution urban flood monitoring, enabling real-time, localized observations that satellite and aerial systems may not capture. However, in low-light environments, such as during nighttime or heavy rainfall, the image features of flooded areas become more complex and variable, posing significant challenges for accurate flood detection and timely warnings. To address these challenges, this study develops an imaging model tailored to flooded areas under low-light conditions and proposes an invariant feature extraction model for flooded areas within surveillance videos. By using the extracted image features (i.e., brightness and invariant features of flooded areas) as inputs, a deep learning-based flood segmentation model is built on the U-Net architecture. A new low-light surveillance flood image dataset, named UWs, is constructed for training and testing the model. The experimental results demonstrate the efficacy of the proposed method, achieving an mRecall of 0.88, an mF1_score of 0.91, and an mIoU of 0.85. These results significantly outperform the comparison algorithms, including LRASPP, DeepLabv3+ with MobileNet and ResNet backbones, and the classic DeepLabv3+, with improvements of 4.9%, 3.0%, and 4.4% in mRecall, mF1_score, and mIoU, respectively, compared to Res-UNet. Additionally, the method maintains strong performance in real-world tests and is also effective for daytime flood monitoring, showcasing its robustness for all-weather applications. The findings of this study provide solid support for the development of an all-weather urban surveillance camera flood monitoring network, with significant practical value for enhancing urban emergency management and disaster reduction efforts. Full article
(This article belongs to the Section Urban Water Management)

18 pages, 4518 KiB  
Article
Running Parameter Analysis in 400 m Sprint Using Real-Time Kinematic Global Navigation Satellite Systems
by Keisuke Onodera, Naoto Miyamoto, Kiyoshi Hirose, Akiko Kondo, Wako Kajiwara, Hiroshi Nakano, Shunya Uda and Masaki Takeda
Sensors 2025, 25(4), 1073; https://doi.org/10.3390/s25041073 - 11 Feb 2025
Cited by 1 | Viewed by 1156
Abstract
Accurate measurement of running parameters, including the step length (SL), step frequency (SF), and velocity, is essential for optimizing sprint performance. Traditional methods, such as 2D video analysis and inertial measurement units (IMUs), face limitations in precision and practicality. This study introduces and evaluates two methods for estimating running parameters using real-time kinematic global navigation satellite systems (RTK GNSS) with 100 Hz sampling. Method 1 identifies mid-stance phases via vertical position minima, while Method 2 aligns with the initial contact (IC) events through vertical velocity minima. Two collegiate sprinters completed a 400 m sprint under controlled conditions, with RTK GNSS measurements validated against 3D video analysis and IMU data. Both methods estimated the SF, SL, and velocity, but Method 2 demonstrated superior accuracy, achieving a lower RMSE (SF: 0.205 Hz versus 0.291 Hz; SL: 0.143 m versus 0.190 m) and higher correlation with the reference data. Method 2 also exhibited improved performance in curved sections and detected stride asymmetries with higher consistency than Method 1. These findings highlight RTK GNSS, particularly the velocity minima approach, as a robust, drift-free, single-sensor solution for detailed per-step sprint analysis in outdoor conditions. This approach offers a practical alternative to IMU-based methods and enables training optimization and performance evaluation. Full article
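The core of Method 2, segmenting steps at local minima of the vertical velocity, can be sketched as follows (synthetic signal and thresholds of my own choosing, not the study's data):

```python
import math

def local_minima(signal, threshold):
    """Indices where the signal is a strict local minimum below threshold,
    used here as a proxy for initial-contact (IC) events."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1]
            and signal[i] < signal[i + 1]
            and signal[i] < threshold]

def step_frequency(contact_idx, fs):
    """Mean steps per second from consecutive contact sample indices."""
    if len(contact_idx) < 2:
        return 0.0
    span = (contact_idx[-1] - contact_idx[0]) / fs
    return (len(contact_idx) - 1) / span

# A 2 Hz vertical-velocity oscillation sampled at 100 Hz (matching the
# paper's GNSS rate): contacts fall every 0.5 s, i.e. every 50 samples.
fs = 100
v = [math.cos(2 * math.pi * 2.0 * i / fs) for i in range(300)]
contacts = local_minima(v, threshold=-0.5)
assert contacts == [25, 75, 125, 175, 225, 275]
assert abs(step_frequency(contacts, fs) - 2.0) < 1e-9
```

With contact times fixed, step length follows as the horizontal distance the antenna travels between consecutive contacts, which is where the RTK position stream comes in.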
