Search Results (554)

Search Parameters:
Keywords = pixel tracking

15 pages, 1551 KB  
Article
Photogating Regimes in Graphene: Memory-Bearing and Reset-Free Operation
by Afshan Khaliq, Hongsheng Xu, Akeel Qadir, Ayesha Salman, Sichao Du, Munir Ali and Shihua Huang
Nanomaterials 2025, 15(21), 1667; https://doi.org/10.3390/nano15211667 - 2 Nov 2025
Abstract
We demonstrate photogating in a graphene/Si–SiO2 stack, where vertical motion of photogenerated charge is converted into a corresponding change in graphene channel conductance in real time. Under pulsed illumination, holes accumulate at the Si/SiO2 interface, creating a surface photovoltage that shifts the flat-band condition and electrostatically suppresses graphene conductance. A dual-readout scheme—simultaneously tracking interfacial charging dynamics and the graphene channel—cleanly separates optical charge injection (cause) from electronic transduction (effect). This separation allows for the direct extraction of practical figures of merit without conventional transfer sweeps, including flat-band shift per pulse, retention time constants, and trap occupancy. Interface kinetics then define two operating regimes: a fast, resettable detector when traps are sparse or rapid, and a trap-assisted analog-memory state when slow traps retain charge between pulses. The mechanism is CMOS-compatible and needs no cryogenics or exotic materials. Together, these results outline a compact route to engineer integrating photodetectors, pixel-level memory for adaptive imaging, and neuromorphic optoelectronic elements that couple sensing with in situ computation. Full article
(This article belongs to the Special Issue 2D Materials for High-Performance Optoelectronics)
20 pages, 7699 KB  
Article
Large-Gradient Displacement Monitoring and Parameter Inversion of Mining Collapse with the Optical Flow Method of Synthetic Aperture Radar Images
by Chuanjiu Zhang and Jie Chen
Remote Sens. 2025, 17(21), 3533; https://doi.org/10.3390/rs17213533 - 25 Oct 2025
Viewed by 328
Abstract
Monitoring large-gradient surface displacement caused by underground mining remains a significant challenge for conventional Synthetic Aperture Radar (SAR)-based techniques. This study introduces optical flow methods to monitor large-gradient displacement in mining areas and conducts a comprehensive comparison with Small Baseline Subset Interferometric SAR (SBAS-InSAR) and Pixel Offset Tracking (POT) methods. Using 12 high-resolution TerraSAR-X (TSX) SAR images over the Daliuta mining area in Yulin, China, we evaluate the performance of each method in terms of sensitivity to displacement gradients, computational efficiency, and monitoring accuracy. Results indicate that SBAS-InSAR is only capable of detecting displacement at the decimeter level in the Daliuta mining area and is unable to monitor rapid, large-gradient displacement exceeding the meter scale. While POT can detect meter-scale displacements, it suffers from low efficiency and low precision. In contrast, the proposed optical flow method (OFM) achieves sub-pixel accuracy with root mean square errors of 0.17 m (compared to 0.26 m for POT) when validated against Global Navigation Satellite System (GNSS) data while improving computational efficiency by nearly 30 times compared to POT. Furthermore, based on the optical flow results, mining parameters and three-dimensional (3D) displacement fields were successfully inverted, revealing maximum vertical subsidence exceeding 4.4 m and horizontal displacement over 1.5 m. These findings demonstrate that the OFM is a reliable and efficient tool for large-gradient displacement monitoring in mining areas, offering valuable support for hazard assessment and mining management. Full article
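The accuracy figures quoted above (0.17 m vs. 0.26 m) are root-mean-square errors of estimated displacements against GNSS reference values. As a minimal sketch of that validation step (the sample values are illustrative, not the paper's data):

```python
import numpy as np

def rmse(estimated, reference):
    """Root-mean-square error between displacement estimates and reference values."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((estimated - reference) ** 2)))

# Hypothetical displacements in metres: method estimates vs. GNSS truth
error_m = rmse([4.35, 1.42, 2.10], [4.40, 1.50, 2.00])
```

The same function applied to each method's estimates against a common GNSS track yields the comparable error figures reported in such studies.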

23 pages, 4617 KB  
Article
IAASNet: Ill-Posed-Aware Aggregated Stereo Matching Network for Cross-Orbit Optical Satellite Images
by Jiaxuan Huang, Haoxuan Sun and Taoyang Wang
Remote Sens. 2025, 17(21), 3528; https://doi.org/10.3390/rs17213528 - 24 Oct 2025
Viewed by 244
Abstract
Stereo matching estimates disparity by finding correspondences between stereo image pairs. Under ill-posed conditions such as geometric differences, radiometric differences, and temporal changes, accurate estimation becomes difficult due to insufficient matching information. In remote sensing imagery, such ill-posed regions are more common because of complex imaging conditions. This problem is particularly pronounced in cross-track satellite stereo images, where existing methods often fail to effectively handle noise due to insufficient features or excessive reliance on prior assumptions. In this work, we propose an ill-posed-aware aggregated satellite stereo matching network, which integrates monocular depth estimation with an ill-posed-guided adaptive aware geometry fusion module to balance local and global features while reducing noise interference. In addition, we design an enhanced mask augmentation strategy during training to simulate occlusions and texture loss in complex scenarios, thereby improving robustness. Experimental results demonstrate that our method outperforms existing state-of-the-art approaches on the US3D dataset, achieving a 5.38% D1-error and 0.958 pixels endpoint error (EPE). In particular, our method shows significant advantages in ill-posed regions. Overall, the proposed network not only exhibits strong feature learning ability but also demonstrates robust generalization in real-world remote sensing applications. Full article
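The D1-error and endpoint error (EPE) reported above are standard disparity metrics. A sketch of both, assuming the common KITTI-style D1 definition (error exceeding both 3 px and 5% of the true disparity; the abstract does not spell out its exact thresholds):

```python
import numpy as np

def epe_and_d1(pred_disp, gt_disp, abs_thresh=3.0, rel_thresh=0.05):
    """EPE: mean absolute disparity error.
    D1: fraction of pixels whose error exceeds both the absolute
    and the relative threshold (KITTI-style convention, assumed here)."""
    pred_disp = np.asarray(pred_disp, dtype=float)
    gt_disp = np.asarray(gt_disp, dtype=float)
    err = np.abs(pred_disp - gt_disp)
    epe = float(err.mean())
    bad = (err > abs_thresh) & (err > rel_thresh * np.abs(gt_disp))
    return epe, float(bad.mean())
```

Applied to a predicted and ground-truth disparity map of equal shape, this returns the two scalars used for ranking methods on benchmarks such as US3D.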

16 pages, 2575 KB  
Article
Extending the ICESAT-2 ATLAS Lidar Capabilities to Other Planets Within Our Solar System
by John J. Degnan
Photonics 2025, 12(11), 1048; https://doi.org/10.3390/photonics12111048 - 23 Oct 2025
Viewed by 275
Abstract
The ATLAS lidar on NASA’s Earth-orbiting ICESat-2 satellite has operated continuously since its launch in September 2018, with no sign of degradation. Compared to previous international single-beam spaceborne lidars, which operated at a few tens of Hz, the single-photon-sensitive, six-beam ATLAS pushbroom lidar provides 60,000 surface measurements per second and has accumulated almost 3 trillion surface measurements during its six years of operation. It also features a 0.5 m² telescope aperture and a single 5 W, frequency-doubled Nd:YAG laser generating a 10 kHz train of 1.5-nanosecond pulses at a green wavelength of 532 nm. The current paper investigates how, with minor modifications to the ATLAS lidar, this capability might be extended to other planets within our solar system. Crucial to this capability is the need to minimize the solar background seen by the lidar while simultaneously providing, for long time intervals (multiple months), an uninterrupted, modestly powered, multimegabit-per-second interplanetary laser communications link to a terminal in Earth orbit. The proposed solution is a pair of Earth and planetary satellites in high, parallel, quasi-synchronized orbits perpendicular to their host planets’ orbital planes about the Sun. High orbits significantly reduce the time intervals over which the interplanetary communications link is blocked by their host planets. Initial establishment of the interplanetary communications link is simplified during two specific time intervals per orbit when the sunlit images of the two planets are not displaced from their actual positions (“zero point-ahead angle”). In this instance, sunlit planetary images and the orbiting satellite laser beacon can be displayed on the same pixelated detector array, thereby accelerating the coalignment of the two communication terminals.
Various tables in the text provide insight for each of the eight planets regarding the impact of solar distance on the worst-case Signal-to-Noise Ratio (SNR), the effect of satellite orbital height on the duration of the unblocked interplanetary communications link, and the resulting planetary surface continuity and resolution in both the along-track and cross-track directions. For planets beyond Saturn, the laser power and/or transmit/receive telescope apertures required to transmit multimegabit-per-second lidar data back to Earth are major challenges given current technology. Full article
(This article belongs to the Special Issue Advances in Solid-State Laser Technology and Applications)

25 pages, 8062 KB  
Article
Time-Series Surface Velocity and Backscattering Coefficients from Sentinel-1 SAR Images Document Glacier Seasonal Dynamics and Surges on the Puruogangri Ice Field in the Central Tibetan Plateau
by Qingxin Wen and Teng Wang
Remote Sens. 2025, 17(20), 3490; https://doi.org/10.3390/rs17203490 - 20 Oct 2025
Viewed by 298
Abstract
The Puruogangri Ice Field (PIF) in the central Tibetan Plateau, known as the world’s Third Pole, is the largest modern ice field in the Tibetan Plateau and a crucial indicator of climate change. Although it was long thought to be quiescent, recent studies identified possible surging behavior, yet comprehensive velocity fields remain largely unknown. Here we present the first comprehensive and high spatiotemporal resolution 3D displacement field of the PIF from 2017 to 2024 using synthetic aperture radar (SAR) imaging geodesy. Using time-series InSAR and time-series pixel offset tracking and integrating ascending and descending Sentinel-1 SAR images, we invert the time-series 3D displacement over eight years. Our results reveal significant seasonal variations and three surging glaciers, with peak displacements exceeding 110 m in 12 days. Combined with ERA5 reanalysis and SAR backscatter coefficient analysis, we demonstrate that these surges are hydrologically controlled, likely initiated by damaged subglacial drainage systems. This study enhances our understanding of glacier dynamics in the central Tibetan Plateau and highlights the potential of using SAR imaging geodesy to monitor glacial hazards in High Mountain Asia. Full article
(This article belongs to the Section Environmental Remote Sensing)

19 pages, 2488 KB  
Article
Unsupervised Segmentation of Bolus and Residue in Videofluoroscopy Swallowing Studies
by Farnaz Khodami, Mehdy Dousty, James L. Coyle and Ervin Sejdić
J. Imaging 2025, 11(10), 368; https://doi.org/10.3390/jimaging11100368 - 17 Oct 2025
Viewed by 343
Abstract
Bolus tracking is a critical component of swallowing analysis, as the speed, course, and integrity of bolus movement from the mouth to the stomach, along with the presence of residue, serve as key indicators of potential abnormalities. Existing machine learning approaches for videofluoroscopic swallowing study (VFSS) analysis heavily rely on annotated data and often struggle to detect residue, which is visually subtle and underrepresented. This study proposes an unsupervised architecture to segment both bolus and residue, marking the first successful machine learning-based residue segmentation in swallowing analysis with quantitative evaluation. We introduce an unsupervised convolutional autoencoder that segments bolus and residue without requiring pixel-level annotations. To address the locality bias inherent in convolutional architectures, we incorporate positional encoding into the input representation, enabling the model to capture global spatial context. The proposed model was validated on a diverse set of VFSS images annotated by certified raters. Our method achieves an intersection over union (IoU) of 61% for bolus segmentation—comparable to state-of-the-art supervised methods—and 52% for residue detection. Despite not using pixel-wise labels for training, our model significantly outperforms top-performing supervised baselines in residue detection, as confirmed by statistical testing. These findings suggest that learning from negative space provides a robust and generalizable pathway for detecting clinically significant but sparsely represented features like residue. Full article
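The intersection-over-union figures above (61% for bolus, 52% for residue) follow the standard definition: intersection of predicted and annotated masks divided by their union. A minimal sketch:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: score as perfect agreement by convention
    return float(np.logical_and(pred, truth).sum() / union)
```

The empty-union convention varies between papers; scoring it as 1.0 here is an assumption, not something the abstract specifies.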
(This article belongs to the Section Medical Imaging)

21 pages, 7112 KB  
Article
A Two-Plane Proton Radiography System Using ATLAS IBL Pixel-Detector Modules
by Hendrik Speiser, Claus Maximillian Bäcker, Johannes Esser, Alina Hild, Marco Iampieri, Ann-Kristin Lüvelsmeyer, Annsofie Tappe, Helen Thews, Kevin Kröninger and Jens Weingarten
Instruments 2025, 9(4), 23; https://doi.org/10.3390/instruments9040023 - 14 Oct 2025
Viewed by 248
Abstract
Accurate knowledge of a patient’s anatomy during every treatment fraction in proton therapy is an important prerequisite to ensure a correct dose deposition in the target volume. Adaptive proton therapy aims to detect those changes and adjust the treatment plan accordingly. One way to trigger a daily re-planning of the treatment is to take a proton radiograph from the beam’s-eye view before the treatment to check for possible changes in the water equivalent thickness (WET) along the path due to daily changes in the patient’s anatomy. In this paper, the Two-Plane Imaging System (TPIS) is presented, comprising two ATLAS IBL silicon pixel-detector modules developed for the tracking detector of the ATLAS experiment at CERN. The prototype of the TPIS is described in detail, and proof-of-principle WET images are presented, of two-step phantoms and more complex phantoms with bone-like inlays (WET 10 to 40 mm). This study shows the capability of the TPIS to measure WET images with high precision. In addition, the potential of the TPIS to accurately determine WET changes over time down to 1 mm between subsequently taken WET images of a changing phantom is shown. This demonstrates the possible application of the TPIS and ATLAS IBL pixel-detector module in adaptive proton therapy. Full article
(This article belongs to the Special Issue Medical Applications of Particle Physics, 2nd Edition)

22 pages, 3532 KB  
Article
Dual Weakly Supervised Anomaly Detection and Unsupervised Segmentation for Real-Time Railway Perimeter Intrusion Monitoring
by Donghua Wu, Yi Tian, Fangqing Gao, Xiukun Wei and Changfan Wang
Sensors 2025, 25(20), 6344; https://doi.org/10.3390/s25206344 - 14 Oct 2025
Viewed by 383
Abstract
The high operational velocities of high-speed trains constrain onboard track intrusion detection systems in real-time capture and analysis, owing to limited computational resources and motion-induced image blurring. This underscores the critical need for track perimeter intrusion monitoring systems. Consequently, an intelligent monitoring system employing trackside cameras is constructed, integrating weakly supervised video anomaly detection and unsupervised foreground segmentation, which offers a solution for monitoring foreign objects on high-speed train tracks. To address the challenges of complex dataset annotation and unidentified target detection, weakly supervised learning is proposed to detect foreign object intrusions from video. The pretraining of Xception3D and the integration of multiple attention mechanisms markedly enhance the feature extraction capabilities. Top-K sample selection together with the amplitude score/feature loss function effectively discriminates abnormal from normal samples, incorporating time-smoothing constraints to ensure detection consistency across consecutive frames. Once abnormal video frames are identified, a multiscale variational autoencoder is proposed for localizing foreign objects. A downsampling/upsampling module is optimized to increase feature extraction efficiency. A pixel-level background weight distribution loss function is engineered to jointly balance background authenticity and noise resistance. Ultimately, the experimental results indicate that the video anomaly detection model achieved an AUC of 0.99 on the track anomaly detection dataset and processes 2 s video segments in 0.41 s. The proposed foreground segmentation algorithm achieved an F1 score of 0.9030 on the track anomaly dataset and 0.8375 on CDnet2014, at 91 frames per second, confirming its efficacy. Full article
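The Top-K sample selection mentioned above is a common multiple-instance device in weakly supervised anomaly detection: a video is scored by the mean of its K highest segment-level anomaly scores, so a few strongly anomalous segments dominate the video-level decision. A minimal sketch (the authors' exact loss formulation is not reproduced here):

```python
import numpy as np

def topk_video_score(segment_scores, k=3):
    """Video-level anomaly score: mean of the k largest segment scores."""
    s = np.sort(np.asarray(segment_scores, dtype=float))[::-1]
    k = min(k, s.size)  # guard against videos with fewer than k segments
    return float(s[:k].mean())
```

During training, such a score for an anomalous video is pushed above the corresponding score of a normal video, without needing frame-level labels.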
(This article belongs to the Section Sensing and Imaging)

35 pages, 777 KB  
Review
Predictive Autonomy for UAV Remote Sensing: A Survey of Video Prediction
by Zhan Chen, Enze Zhu, Zile Guo, Peirong Zhang, Xiaoxuan Liu, Lei Wang and Yidan Zhang
Remote Sens. 2025, 17(20), 3423; https://doi.org/10.3390/rs17203423 - 13 Oct 2025
Viewed by 589
Abstract
The analysis of dynamic remote sensing scenes from unmanned aerial vehicles (UAVs) is shifting from reactive processing to proactive, predictive intelligence. Central to this evolution is video prediction—forecasting future imagery from past observations—which enables critical remote sensing applications like persistent environmental monitoring, occlusion-robust object tracking, and infrastructure anomaly detection under challenging aerial conditions. Yet, a systematic review of video prediction models tailored for the unique constraints of aerial remote sensing has been lacking. Existing taxonomies often obscure key design choices, especially for emerging operators like state-space models (SSMs). We address this gap by proposing a unified, multi-dimensional taxonomy with three orthogonal axes: (i) operator architecture; (ii) generative nature; and (iii) training/inference regime. Through this lens, we analyze recent methods, clarifying their trade-offs for deployment on UAV platforms that demand processing of high-resolution, long-horizon video streams under tight resource constraints. Our review assesses the utility of these models for key applications like proactive infrastructure inspection and wildlife tracking. We then identify open problems—from the scarcity of annotated aerial video data to evaluation beyond pixel-level metrics—and chart future directions. We highlight a convergence toward scalable dynamic world models for geospatial intelligence, which leverage physics-informed learning, multimodal fusion, and action-conditioning, powered by efficient operators like SSMs. Full article

46 pages, 2748 KB  
Review
Hardware, Algorithms, and Applications of the Neuromorphic Vision Sensor: A Review
by Claudio Cimarelli, Jose Andres Millan-Romera, Holger Voos and Jose Luis Sanchez-Lopez
Sensors 2025, 25(19), 6208; https://doi.org/10.3390/s25196208 - 7 Oct 2025
Viewed by 853
Abstract
Event-based (neuromorphic) cameras depart from frame-based sensing by reporting asynchronous per-pixel brightness changes. This produces sparse, low-latency data streams with extreme temporal resolution but demands new processing paradigms. In this survey, we systematically examine neuromorphic vision along three main dimensions. First, we highlight the technological evolution and distinctive hardware features of neuromorphic cameras from their inception to recent models. Second, we review image-processing algorithms developed explicitly for event-based data, covering works on feature detection, tracking, optical flow, depth and pose estimation, and object recognition. These techniques, drawn from classical computer vision and modern data-driven approaches, illustrate the breadth of applications enabled by event-based cameras. Third, we present practical application case studies demonstrating how event cameras have been successfully used across various scenarios. Distinct from prior reviews, our survey provides a broader overview by uniquely integrating hardware developments, algorithmic progressions, and real-world applications into a structured, cohesive framework. This explicitly addresses the needs of researchers entering the field or those requiring a balanced synthesis of foundational and recent advancements, without overly specializing in niche areas. Finally, we analyze the challenges limiting widespread adoption, identify research gaps compared to standard imaging techniques, and outline promising directions for future developments. Full article
(This article belongs to the Section Sensing and Imaging)

25 pages, 12510 KB  
Article
Computer Vision-Based Optical Odometry Sensors: A Comparative Study of Classical Tracking Methods for Non-Contact Surface Measurement
by Ignas Andrijauskas, Marius Šumanas, Andrius Dzedzickis, Wojciech Tanaś and Vytautas Bučinskas
Sensors 2025, 25(19), 6051; https://doi.org/10.3390/s25196051 - 1 Oct 2025
Viewed by 661
Abstract
This article presents a principled framework for selecting and tuning classical computer vision algorithms in the context of optical displacement sensing. By isolating key factors that affect algorithm behavior—such as feed window size and motion step size—the study seeks to move beyond intuition-based practices and provide rigorous, repeatable performance evaluations. Computer vision-based optical odometry sensors offer non-contact, high-precision measurement capabilities essential for modern metrology and robotics applications. This paper presents a systematic comparative analysis of three classical tracking algorithms—phase correlation, template matching, and optical flow—for 2D surface displacement measurement using synthetic image sequences with subpixel-accurate ground truth. A virtual camera system generates controlled test conditions using a multi-circle trajectory pattern, enabling systematic evaluation of tracking performance using 400 × 400 and 200 × 200 pixel feed windows. The systematic characterization enables informed algorithm selection based on specific application requirements rather than empirical trial-and-error approaches. Full article
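Of the three classical trackers compared above, phase correlation is the most compact to illustrate: the normalized cross-power spectrum of two frames turns a pure translation into a delta peak whose location is the shift. A whole-pixel NumPy sketch (the study's subpixel refinement, feed windows, and virtual-camera setup are omitted):

```python
import numpy as np

def phase_correlation_shift(ref, mov):
    """Whole-pixel translation of `mov` relative to `ref` via phase correlation."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    cross_power = F_mov * np.conj(F_ref)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peaks to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (5, -3), axis=(0, 1))  # known circular shift
shift = phase_correlation_shift(ref, mov)  # recovers (5, -3)
```

Template matching and optical flow trade this global, FFT-cheap estimate for local search and dense fields, respectively, which is the design space the study characterizes.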
(This article belongs to the Section Optical Sensors)

25 pages, 7878 KB  
Article
JOTGLNet: A Guided Learning Network with Joint Offset Tracking for Multiscale Deformation Monitoring
by Jun Ni, Siyuan Bao, Xichao Liu, Sen Du, Dapeng Tao and Yibing Zhan
Remote Sens. 2025, 17(19), 3340; https://doi.org/10.3390/rs17193340 - 30 Sep 2025
Viewed by 282
Abstract
Ground deformation monitoring in mining areas is essential for hazard prevention and environmental protection. Although interferometric synthetic aperture radar (InSAR) provides detailed phase information for accurate deformation measurement, its performance is often compromised in regions experiencing rapid subsidence and strong noise, where phase aliasing and coherence loss lead to significant inaccuracies. To overcome these limitations, this paper proposes JOTGLNet, a guided learning network with joint offset tracking, for multiscale deformation monitoring. This method integrates pixel offset tracking (OT), which robustly captures large-gradient displacements, with interferometric phase data that offers high sensitivity in coherent regions. A dual-path deep learning architecture was designed where the interferometric phase serves as the primary branch and OT features act as complementary information, enhancing the network’s ability to handle varying deformation rates and coherence conditions. Additionally, a novel shape perception loss combining morphological similarity measurement and error learning was introduced to improve geometric fidelity and reduce unbalanced errors across deformation regions. The model was trained on 4000 simulated samples reflecting diverse real-world scenarios and validated on 1100 test samples with a maximum deformation up to 12.6 m, achieving an average prediction error of less than 0.15 m—outperforming state-of-the-art methods whose errors exceeded 0.19 m. Additionally, experiments on five real monitoring datasets further confirmed the superiority and consistency of the proposed approach. Full article

27 pages, 7020 KB  
Article
RPC Correction Coefficient Extrapolation for KOMPSAT-3A Imagery in Inaccessible Regions
by Namhoon Kim
Remote Sens. 2025, 17(19), 3332; https://doi.org/10.3390/rs17193332 - 29 Sep 2025
Viewed by 385
Abstract
High-resolution pushbroom satellites routinely acquire strips tens of kilometers long whose vendor rational polynomial coefficients (RPCs) exhibit systematic, direction-dependent biases that accumulate downstream when ground control is sparse. This study presents a physically interpretable stripwise extrapolation framework that predicts along- and across-track RPC correction coefficients for inaccessible segments from an upstream calibration subset. Terrain-independent RPCs were regenerated and residual image-space errors were modeled with weighted least squares using elapsed time, off-nadir evolution, and morphometric descriptors of the target terrain. Gaussian kernel weights favor calibration scenes with a Jarque–Bera-indexed relief similar to the target. When applied to three KOMPSAT-3A panchromatic strips, the approach preserves native scene geometry while transporting calibrated coefficients downstream, reducing positional errors in two strips to <2.8 pixels (~2.0 m at 0.710 m Ground Sample Distance, GSD). The first strip, with stronger attitude drift, retains 4.589-pixel along-track errors, indicating the need for wider predictor coverage under aggressive maneuvers. The results clarify the directional error structure, with a near-constant across-track bias and low-frequency along-track drift, and show that a compact predictor set can stabilize extrapolation without full-block adjustment or dense tie networks. This provides a GCP-efficient alternative to full-block adjustment and enables accurate georeferencing in regions where ground control is unavailable. Full article
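The residual model described above is ordinary weighted least squares with kernel weights. A sketch under assumed names (the predictor matrix, bandwidth, and relief index here are illustrative, not the paper's actual parameterization):

```python
import numpy as np

def gaussian_weights(relief_calib, relief_target, bandwidth=1.0):
    """Kernel weights favouring calibration scenes whose relief index
    is similar to the target scene's."""
    d = (np.asarray(relief_calib, dtype=float) - relief_target) / bandwidth
    return np.exp(-0.5 * d ** 2)

def weighted_lsq(A, b, w):
    """Solve min_x ||W^(1/2) (A x - b)||^2 via the normal equations
    A^T W A x = A^T W b."""
    Aw = A * w[:, None]  # scale each row (observation) by its weight
    return np.linalg.solve(A.T @ Aw, Aw.T @ b)
```

With `A` holding predictors such as elapsed time and off-nadir angle per calibration scene and `b` the observed residual errors, the fitted coefficients can then be evaluated at downstream (uncontrolled) scenes.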

30 pages, 14129 KB  
Article
Evaluating Two Approaches for Mapping Solar Installations to Support Sustainable Land Monitoring: Semantic Segmentation on Orthophotos vs. Multitemporal Sentinel-2 Classification
by Adolfo Lozano-Tello, Andrés Caballero-Mancera, Jorge Luceño and Pedro J. Clemente
Sustainability 2025, 17(19), 8628; https://doi.org/10.3390/su17198628 - 25 Sep 2025
Viewed by 479
Abstract
This study evaluates two approaches for detecting solar photovoltaic (PV) installations across agricultural areas, emphasizing their role in supporting sustainable energy monitoring, land management, and planning. Accurate PV mapping is essential for tracking renewable energy deployment, guiding infrastructure development, assessing land-use impacts, and informing policy decisions aimed at reducing carbon emissions and fostering climate resilience. The first approach applies deep learning-based semantic segmentation to high-resolution RGB orthophotos, using the pretrained “Solar PV Segmentation” model, which achieves an F1-score of 95.27% and an IoU of 91.04%, providing highly reliable PV identification. The second approach employs multitemporal pixel-wise spectral classification using Sentinel-2 imagery, where the best-performing neural network achieved a precision of 99.22%, a recall of 96.69%, and an overall accuracy of 98.22%. Both approaches coincided in detecting 86.67% of the identified parcels, with an average surface difference of less than 6.5 hectares per parcel. The Sentinel-2 method leverages its multispectral bands and frequent revisit rate, enabling timely detection of new or evolving installations. The proposed methodology supports the sustainable management of land resources by enabling automated, scalable, and cost-effective monitoring of solar infrastructures using open-access satellite data. This contributes directly to the goals of climate action and sustainable land-use planning and provides a replicable framework for assessing human-induced changes in land cover at regional and national scales. Full article
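The precision, recall, and overall accuracy quoted above reduce to counts of a binary confusion matrix over pixels (or parcels). A minimal sketch:

```python
import numpy as np

def classification_metrics(pred, truth):
    """Precision, recall, and overall accuracy for binary labels."""
    pred = np.asarray(pred, dtype=bool).ravel()
    truth = np.asarray(truth, dtype=bool).ravel()
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / pred.size
    return float(precision), float(recall), float(accuracy)
```

Note that with a heavily imbalanced class (PV pixels are rare), accuracy alone is optimistic, which is why the study reports precision and recall alongside it.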

25 pages, 11479 KB  
Article
Improved Pixel Offset Tracking Method Based on Corner Point Variation in Large-Gradient Landslide Deformation Monitoring
by Dingyi Zhou, Zhifang Zhao and Fei Zhao
Remote Sens. 2025, 17(19), 3292; https://doi.org/10.3390/rs17193292 - 25 Sep 2025
Viewed by 410
Abstract
Aiming at the problems of feature matching difficulty and limited extensibility in existing pixel offset tracking methods for large-gradient landslides, this paper proposes an improved pixel offset tracking method based on corner point variation. Taking the Jinshajiang Baige landslide as the research object, the method’s effectiveness is verified using Sentinel data. Through a series of experiments, the results show that (1) using VV (Vertical-Vertical) and VH (Vertical-Horizontal) polarisation information combined with a mean value calculation improves the accuracy and credibility of delineating the landslide monitoring extent, compensates for the limitations of single-polarisation information, and captures the landslide extent more comprehensively, providing essential information for landslide monitoring. (2) The choice of scale factor has an essential influence on corner detection results; the best corner response is obtained when the scale factor R is 2, which provides an essential reference for practical application. (3) Comparing traditional normalized and adaptive-window cross-correlation methods with the proposed approach in calculating landslide offset distances, the proposed method shows superior matching accuracy and sliding-direction estimation. (4) Analysis of pixels P1, P2, and P3 confirms the method’s high accuracy and reliability in landslide displacement assessment, demonstrating its advantage in tracking pixel offsets in large-gradient scenarios. Therefore, the proposed method offers an effective solution for large-gradient landslide monitoring, overcoming the limitations of feature matching and limited applicability. It is expected to provide more reliable technical support for geological disaster management. Full article
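At its core, pixel offset tracking is a template search: a patch around a point (here, a corner point) in the master image is correlated against candidate positions in the slave image, and the argmax of the normalized cross-correlation gives the offset. A minimal NumPy sketch (the paper's corner detection, polarisation fusion, and adaptive windows are omitted):

```python
import numpy as np

def ncc(a, b):
    """Zero-normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

def track_offset(master, slave, y, x, half=8, search=10):
    """Offset (dy, dx) of the patch centred at (y, x) in `master`,
    found by exhaustive NCC search within +/- `search` pixels in `slave`."""
    tpl = master[y - half:y + half + 1, x - half:x + half + 1]
    best, best_off = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = slave[cy - half:cy + half + 1, cx - half:cx + half + 1]
            score = ncc(tpl, cand)
            if score > best:
                best, best_off = score, (dy, dx)
    return best_off
```

Large-gradient motion is exactly the regime where this correlation search must cope with decorrelated, deformed patches, which is what the corner-based variant above is designed to stabilize.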
