Search Results (9,652)

Search Parameters:
Keywords = imaging sensor

47 pages, 8613 KB  
Review
2D-to-3D Image Reconstruction in Agriculture: A Review of Methods, Challenges, and AI-Driven Opportunities
by Hemanth Reddy Sankaramaddi, Won Suk Lee, Kyoungchul Kim and Youngki Hong
Sensors 2026, 26(6), 1775; https://doi.org/10.3390/s26061775 - 11 Mar 2026
Abstract
Agriculture is rapidly becoming a data-driven field where automation relies on transforming 2D images into accurate 3D models. However, selecting the most effective method remains challenging due to the unconstrained nature of the environment. This review assesses the effectiveness of geometry-based, sensor-based, and learning-based reconstruction methodologies in agricultural settings. We analyze photogrammetric pipelines, active sensing, and neural rendering methods based on their geometric accuracy, data processing speed, and field performance against wind or occlusion. Our analysis indicates that while Light Detection and Ranging (LiDAR) is highly accurate, it is too expensive for widespread adoption. Conversely, geometry-based methods are inexpensive but struggle with complex biological structures. Learning-based methods, especially 3D Gaussian Splatting (3DGS), have revolutionized the field by enabling a balance between visual fidelity and real-time inference speed. We conclude that the best chance for scalability and accuracy lies in hybrid pipelines that integrate Vision Foundation Models (VFMs) with geometric priors. We believe that “hybrid intelligence” systems, such as edge-native 3D Gaussian Splatting combined with semantic priors, are the future of 3D reconstruction. These systems will enable the creation of real-time, spatiotemporal (4D) digital twins that drive automated decision-making in precision agriculture. Full article
(This article belongs to the Special Issue Feature Papers in Smart Agriculture 2025)
34 pages, 11009 KB  
Article
Full-Link Background Radiation Suppression and Detection Capability Optimization of Mid-Wave Infrared Hyperspectral Remote Sensing in Complex Scenarios
by Yun Wang, Bingqi Qiu, Huairong Kang, Xuanbin Liu, Mengyang Chai, Huijie Han and Yinnian Liu
Photonics 2026, 13(3), 271; https://doi.org/10.3390/photonics13030271 - 11 Mar 2026
Abstract
To address the technical bottlenecks of strong background radiation interference and weak target signals in mid-wave infrared (MWIR) hyperspectral mineral detection over complex terrain, this paper proposes a “full-link background radiation suppression” methodological framework. A coupled illumination-terrain-atmosphere-sensor radiative transfer model is constructed to systematically quantify how multidimensional parameters—such as observation geometry, surface temperature, elevation, aerosol optical depth, and water vapor content—influence the target background radiation contrast. The findings reveal that daytime observation, lower surface temperature, higher altitude, dry atmosphere, and moderate solar and observation zenith angles are key factors for maximizing the signal-to-noise ratio. Comprehensive optimization analysis demonstrates that observations during midday in autumn and winter achieve optimal performance, with the target background relative contrast potentially enhanced by up to 6.29 times compared to unfavorable conditions such as summer nights. This work elucidates the physical mechanisms governing MWIR hyperspectral detection efficacy in complex scenarios, provides direct parameter-optimization strategies for intelligent mission planning of spaceborne imaging systems, and holds significant value for advancing mineral remote sensing from “passive acquisition” to “cognitive detection”. Full article
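For readers who want a feel for the target–background contrast idea discussed above, the following is a minimal Python sketch, assuming a simple blackbody (Planck) model over a nominal 3–5 µm MWIR band and a contrast definition of |L_target − L_background| / L_background; both are assumptions for illustration, and the paper's coupled illumination–terrain–atmosphere–sensor radiative transfer model is not reproduced here. The example only shows why a colder background raises the relative contrast of a slightly warmer target.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W * sr^-1 * m^-3."""
    return (2 * H * C**2 / wavelength_m**5) / np.expm1(H * C / (wavelength_m * KB * temp_k))

def band_radiance(temp_k, band=(3.0e-6, 5.0e-6), n=200):
    """Rectangle-rule integral of Planck radiance over an assumed 3-5 um MWIR band."""
    lam = np.linspace(band[0], band[1], n)
    return planck_radiance(lam, temp_k).mean() * (band[1] - band[0])

def relative_contrast(target_temp_k, background_temp_k):
    """Assumed contrast metric: |L_target - L_background| / L_background."""
    lt, lb = band_radiance(target_temp_k), band_radiance(background_temp_k)
    return abs(lt - lb) / lb

# A target 5 K warmer than its surroundings stands out more against a cold
# (winter) background than against a hot (summer) background.
print(f"cold scene: {relative_contrast(265.0, 260.0):.3f}")
print(f"hot scene:  {relative_contrast(315.0, 310.0):.3f}")
```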
48 pages, 5054 KB  
Review
Advances, Challenges, and Recommendations for Non-Destructive Testing Technologies for Wind Turbine Blade Damage: A Review of the Literature from the Past Decade
by Guodong Qin, Yongchang Jin, Lizheng Qiao and Zhenyu Wu
Sensors 2026, 26(6), 1773; https://doi.org/10.3390/s26061773 - 11 Mar 2026
Abstract
As critical components of wind energy systems, the structural integrity of wind turbine blades is directly tied to the operational safety and economic performance of wind turbines. With blade designs trending toward larger and more flexible structures and operating environments becoming increasingly harsh, maintenance strategies must urgently shift from reactive approaches to predictive maintenance paradigms. From an engineering application perspective, this study conducts a systematic and critical review of non-destructive testing (NDT) and structural health monitoring (SHM) technologies for wind turbine blades. Drawing on the literature published over the past decade, we examine the field applicability, limitations, and engineering challenges of core NDT techniques—including vision-based methods, acoustic approaches, vibration analysis, ultrasound, and infrared thermography. Particular emphasis is placed on the integration of data-driven approaches with engineering practice, evaluating the role of machine learning in fault classification and anomaly diagnosis, as well as the contributions of deep learning to automated defect detection in image and signal data. Moreover, this paper critically discusses the growing use of robotic inspection platforms, such as unmanned aerial vehicles and climbing robots, as multi-sensor carriers enabling rapid and comprehensive blade assessment. By comparatively analyzing detection performance, cost, and automation levels across technologies, we identify key engineering barriers, including environmental noise robustness, signal attenuation within complex blade structures, and the persistent gap between laboratory methods and field deployment. Finally, we outline forward-looking research directions, encompassing multi-modal sensor fusion, edge computing for real-time diagnostics, and the development of standardized SHM systems aimed at supporting full lifecycle blade management. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
29 pages, 14346 KB  
Article
LRCFuse: Infrared and Visible Image Fusion Based on Low-Rank Representation and Convolutional Sparse Learning
by Jingjing Liu, Yujie Zhu, Yuhao Zhang, Aiying Guo, Mengjiao Li and Jianhua Zhang
Sensors 2026, 26(6), 1771; https://doi.org/10.3390/s26061771 - 11 Mar 2026
Abstract
With the development of cross-modal image fusion in multi-sensor systems, current fusion technologies have made significant progress in feature extraction, facilitating more effective image analysis. However, insufficient fusion information may degrade the correlation between the source and fused images, often resulting in the omission of critical features from the original modalities. Therefore, in order to preserve as much information as possible, especially the complete extraction of effective feature information from the source images, this paper proposes a new cross-modal image fusion method based on low-rank representation and convolutional sparse learning, named LRCFuse. Firstly, learned low-rank representation (LLRR) blocks are employed to perform dimensionality reduction on the source images while simultaneously extracting their low-rank and sparse feature components. Nevertheless, considering that the low-rank representation has insufficient modeling ability for different modal images, we introduce common feature preservation module (CFPM) blocks based on convolutional sparse coding. By leveraging the CFPM module, LRCFuse recovers common features from both source images to mitigate the loss caused by the imperfect assumptions of low-rank representation. Based on this, a multi-level optimization strategy incorporating pixel loss, shallow-level loss, mid-level loss, deep-level loss, and Sobel loss is proposed to hierarchically learn and refine diverse image features. Quantitative and qualitative evaluations conducted across various datasets reveal that LRCFuse can effectively detect infrared salient targets, preserve additional details in visible images, and achieve better fusion results for subsequent downstream tasks. Full article
(This article belongs to the Special Issue Machine Learning in Image/Video Processing and Sensing)
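As a rough illustration of the low-rank-plus-sparse idea that the LLRR blocks build on, here is a minimal sketch using a classical truncated-SVD split rather than the learned representation; the fusion rule (average the low-rank parts, keep the larger-magnitude sparse details) is an assumption for demonstration only and is not the LRCFuse architecture, its loss functions, or the CFPM module.

```python
import numpy as np

def lowrank_sparse_split(img, rank=8):
    """Classical stand-in for a learned low-rank representation:
    keep the top `rank` singular components as the low-rank (base) part
    and treat the residual as the sparse (detail/salient) part."""
    u, s, vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    low_rank = (u[:, :rank] * s[:rank]) @ vt[:rank, :]
    sparse = img - low_rank
    return low_rank, sparse

def naive_fuse(ir_img, vis_img, rank=8):
    """Toy fusion rule (an assumption, not LRCFuse itself): average the
    low-rank backgrounds and take the max-magnitude sparse details."""
    ir_lr, ir_sp = lowrank_sparse_split(ir_img, rank)
    vis_lr, vis_sp = lowrank_sparse_split(vis_img, rank)
    detail = np.where(np.abs(ir_sp) >= np.abs(vis_sp), ir_sp, vis_sp)
    return 0.5 * (ir_lr + vis_lr) + detail

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ir = rng.random((128, 128))
    vis = rng.random((128, 128))
    fused = naive_fuse(ir, vis)
    print(fused.shape, fused.dtype)
```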
15 pages, 1952 KB  
Article
Cost-Effective and Drift-Resistant Fiber-Optic Ultrasound Detection with Slope-Symmetric Fabry–Perot Sensor and AOM-Enabled Quadrature Demodulation
by Yufei Chu, Xiaoli Wang, Mohammed Alshammari, Zi Li and Ming Han
Photonics 2026, 13(3), 267; https://doi.org/10.3390/photonics13030267 - 11 Mar 2026
Abstract
A robust and cost-effective fiber-optic ultrasound sensor based on a slope-symmetric Fabry–Perot interferometer (FPI) is presented, employing dual-channel quadrature-biased heterodyne interrogation with an acousto-optic modulator (AOM). By introducing a 200 MHz frequency shift that yields an effective π/2 phase offset between the direct (unshifted) and frequency-shifted optical paths, the system ensures complementary sensitivity: when one channel operates at zero slope on the FPI transfer function (minimum sensitivity), the other resides at maximum slope, providing inherent immunity to laser wavelength drift and environmental perturbations. Experimental validation demonstrates reliable ultrasound detection across varying operating points. At quadrature extremes, one channel achieves peak amplitudes of ±2 V while the other is quiescent, whereas intermediate points enable simultaneous detection with amplitudes of ±1.5 V (AOM channel) and ±0.05–0.1 V (direct channel), accompanied by corresponding DC levels ranging from ~0.4 V to 1.6 V. The AOM channel utilizes simple envelope detection after 9.5–11.5 MHz bandpass filtering, maintaining low cost, though coherent mixing is suggested for enhanced weak-signal performance. The angle-symmetric FPI design, combined with gold-disk reflector adaptations and potential femtosecond laser micromachining, further reduces fabrication costs without sacrificing finesse or sensitivity. This quadrature-biased approach offers superior stability compared to single-channel systems, making it highly suitable for practical applications in photoacoustic imaging, nondestructive testing, and structural health monitoring. Full article
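A minimal numerical sketch of the complementary-sensitivity argument follows, assuming a two-beam (sinusoidal) approximation of the FPI transfer function and treating the AOM-shifted path simply as a channel biased π/2 away from the direct one; the heterodyne electronics, the 200 MHz shift, and the envelope detection are not modeled. It shows that whichever bias point laser drift selects, at least one channel retains a strong slope and therefore a usable ultrasound signal.

```python
import numpy as np

def fpi_transmission(phase):
    """Two-beam (sinusoidal) approximation of the FPI transfer function --
    an assumption for illustration; a real FP cavity follows an Airy function."""
    return 0.5 * (1.0 + np.cos(phase))

def dual_channel_response(bias_phase, ultrasound_phase):
    """Direct channel at `bias_phase`, AOM-shifted channel offset by pi/2.
    When one channel sits at a zero-slope point, the other is at maximum
    slope, so at least one channel always responds to the ultrasound."""
    direct = fpi_transmission(bias_phase + ultrasound_phase)
    shifted = fpi_transmission(bias_phase + np.pi / 2 + ultrasound_phase)
    return direct, shifted

t = np.linspace(0, 1e-5, 2000)                     # 10 us window
ultrasound = 0.05 * np.sin(2 * np.pi * 10e6 * t)   # small 10 MHz phase swing

for bias in (0.0, np.pi / 4, np.pi / 2):           # drift moves the bias point
    d, s = dual_channel_response(bias, ultrasound)
    print(f"bias={bias:.2f} rad  AC amp direct={np.ptp(d):.4f}  shifted={np.ptp(s):.4f}")
```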
26 pages, 8878 KB  
Article
A Spectrally Compatible Pseudo-Panchromatic Intensity Reconstruction for PCA-Based UAS RGB–Multispectral Image Fusion
by Dimitris Kaimaris
J. Imaging 2026, 12(3), 122; https://doi.org/10.3390/jimaging12030122 - 11 Mar 2026
Abstract
The paper presents a method for generating a pseudo-panchromatic (PPAN) orthophotomosaic that is spectrally compatible with the multispectral (MS) orthophotomosaic, and it targets the fusion of unmanned aircraft system (UAS) RGB–MS orthophotomosaics when no true panchromatic band is available. In typical UAS imaging systems, RGB and multispectral sensors operate independently and exhibit different spectral responses and spatial resolutions, making the construction of a spectrally compatible substitution intensity a critical challenge for component substitution fusion. The conventional RGB-derived PPAN preserves spatial detail but is constrained by RGB–MS spectral incompatibility, expressed as reduced corresponding-band similarity. The proposed hybrid intensity (PPANE) increases the mean corresponding-band correlation from 0.842 (PPANA) to 0.928 (PPANE) and reduces the across-site mean SAM from 5.782° to 4.264°, while maintaining spatial sharpness comparable to the RGB-derived intensity. It is proposed that the PPANE orthophotomosaic be produced as a hybrid intensity (single band) image. Specifically, a multispectral-visible-derived intensity is resampled onto the RGB grid and statistically integrated with RGB spatial detail, followed by mild high-frequency enhancement to produce the final PPANE orthophotomosaic. Principal Component Analysis (PCA) fusion is applied to seven archaeological sites in Northern Greece. Spectral quality is evaluated on the MS grid using band-wise (corresponding-band) correlation and the Spectral Angle Mapper (SAM), while the spatial sharpness of the fused NIR orthophotomosaic is assessed using Tenengrad and Laplacian variance. The PPANE orthophotomosaic consistently increases correlations relative to PPANA (especially in Red Edge/NIR) and reduces the mean site-mean SAM. PPANC yields the lowest SAM but also the lowest spatial sharpness/clarity, whereas PPANE maintains spatial sharpness/clarity comparable to PPANA, supporting a balance between spectral consistency and spatial detail, as also confirmed through comparative evaluation against established component substitution fusion methods. The approach is reproducible and avoids full histogram matching; instead, it relies on explicitly defined linear standardization steps (mean–std normalization) and controlled spatial sharpening, and performs consistently across different scenes. Full article
(This article belongs to the Section Color, Multi-spectral, and Hyperspectral Imaging)
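The following is a minimal sketch of the PCA component-substitution step that this kind of fusion relies on, assuming the MS bands have already been resampled to the panchromatic grid and using a plain mean–std match of the substitution intensity to the first principal component; the paper's PPANE construction (statistical integration of RGB spatial detail with an MS-visible intensity plus mild high-frequency enhancement) is not reproduced.

```python
import numpy as np

def pca_component_substitution(ms_bands, pan):
    """Generic PCA pansharpening sketch. Assumptions: `ms_bands` is (H, W, B)
    already resampled to the pan grid, and `pan` is an (H, W) intensity that
    plays the role of the pseudo-panchromatic (PPAN) band."""
    h, w, b = ms_bands.shape
    x = ms_bands.reshape(-1, b).astype(np.float64)
    mean = x.mean(axis=0)
    xc = x - mean
    # Principal components of the multispectral bands.
    cov = np.cov(xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    eigvec = eigvec[:, np.argsort(eigval)[::-1]]
    pcs = xc @ eigvec
    # Mean-std standardize the pan intensity to the first PC, then substitute it.
    p = pan.reshape(-1).astype(np.float64)
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ eigvec.T + mean
    return fused.reshape(h, w, b)

rng = np.random.default_rng(1)
ms = rng.random((64, 64, 5))    # e.g. blue/green/red/red-edge/NIR
pan = rng.random((64, 64))
print(pca_component_substitution(ms, pan).shape)
```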
22 pages, 2804 KB  
Article
A Comprehensive Evaluation Method for Greenhouse-Grown Lettuce Based on RGB Images and Hyperspectral Data
by Duoer Ma, Hong Ren, Qi Zeng, Yidi Liu, Lulu Ma, Qiang Zhang, Ze Zhang and Jiangli Wang
Agronomy 2026, 16(6), 600; https://doi.org/10.3390/agronomy16060600 - 11 Mar 2026
Abstract
Quality grading of greenhouse lettuce requires rapid external appearance screening and nondestructive internal quality assessment. However, existing detection methods struggle to simultaneously evaluate both external and internal quality while maintaining efficiency, resulting in a lack of scientific and comprehensive integrated evaluation standards for current crop grading. To address this issue, this study leveraged the technical strengths of different sensors to construct separate models: an RGB image-based monitoring model for external quality and a hyperspectral-based estimation model for internal quality. Using a combined objective–subjective weighting method, this approach scientifically integrated external and internal quality monitoring indicators to establish a comprehensive evaluation method for greenhouse lettuce quality. The results demonstrate that features such as canopy projection area, compactness, and color components can be extracted from RGB images. Combined with Ridge regression, this approach achieves high-accuracy estimation of lettuce fresh weight and leaf area (R2 ≥ 0.880). For intrinsic quality, by combining hyperspectral data with the CARS and SPA band selection algorithms, a Random Forest (RF)-based inversion model for chlorophyll, soluble sugar, protein, and vitamin C content was developed. The AHP-CRITIC method effectively resolved the weight imbalance caused by an excessive coefficient of variation in appearance indicators, thereby achieving the scientific integration of appearance and internal quality data. The grading outcomes of this integrated evaluation method were highly consistent with industry standards (kappa coefficient: 0.788). This approach establishes an effective link between the rapid monitoring of external and internal quality for comprehensive evaluation, providing a novel technical pathway and scientific basis for nondestructive post-harvest detection and automated grading of greenhouse vegetables. Full article
(This article belongs to the Section Precision and Digital Agriculture)
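To make the objective half of the weighting scheme concrete, here is a minimal sketch of CRITIC weights (contrast intensity × conflict), assuming benefit-type, already-normalized indicators; the AHP subjective weights and the way the paper combines the two halves are not shown, and the sample values below are hypothetical.

```python
import numpy as np

def critic_weights(indicator_matrix):
    """CRITIC objective weights for a (samples x indicators) matrix.
    Assumes all indicators are benefit-type and min-max normalized; the AHP
    (subjective) half of the paper's AHP-CRITIC scheme is not included."""
    x = np.asarray(indicator_matrix, dtype=np.float64)
    std = x.std(axis=0, ddof=1)                    # contrast intensity
    corr = np.corrcoef(x, rowvar=False)            # indicator correlations
    conflict = (1.0 - corr).sum(axis=0)            # conflict with other indicators
    info = std * conflict
    return info / info.sum()

# Toy example: 6 lettuce samples scored on 4 normalized quality indicators
# (e.g. projected area, compactness, soluble sugar, vitamin C -- hypothetical values).
scores = np.array([
    [0.8, 0.6, 0.7, 0.5],
    [0.4, 0.9, 0.6, 0.7],
    [0.9, 0.5, 0.8, 0.6],
    [0.3, 0.7, 0.5, 0.9],
    [0.6, 0.8, 0.9, 0.4],
    [0.7, 0.4, 0.6, 0.8],
])
print(critic_weights(scores).round(3))
```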
41 pages, 8829 KB  
Review
Mechanisms, Sensors, and Signals for Defect Formation and In Situ Monitoring in Metal Additive Manufacturing
by Sanae Tajalli Nobari, Fabian Hanning, Yongcui Mi and Joerg Volpp
Eng 2026, 7(3), 129; https://doi.org/10.3390/eng7030129 - 11 Mar 2026
Abstract
Metal additive manufacturing (AM) facilitates the production of geometrically complex components, yet its broader industrial use remains limited by the risk of defect formation and uncertainties in their detection, originating from the highly dynamic and high-temperature process environment. To make additive manufacturing more reliable and establish high-quality parts, it is important to understand how these defects form and how their characteristics appear during the process. This review explains the main causes of common defects, such as cracking, porosity, lack of fusion, and inclusions in metal AM processes, including Powder Bed Fusion and Directed Energy Deposition. It also connects main defect formation mechanisms to the optical, thermal, acoustic, and spectroscopic signals that can be measured during the process. Moreover, it is described how commonly used in situ monitoring systems work and how their signals correspond to melt pool dynamics, vapor plume, particle movement, and the solidification process for each kind of defect. An overview is provided of how data from these systems are analyzed, including the extraction of features from images, the evaluation of temperature fields, and the use of time and frequency domain techniques for various signals. By linking the physics of defect formation to measurable process signals, the interpretation of sensor data is enabled, and potential strategies for monitoring specific problems are outlined. Finally, recent developments are examined, including the integration of multiple sensors, advanced feature-representation approaches, and real-time data interpretation coupled with adaptive control. Together, these directions represent promising advances towards more intelligent and reliable monitoring systems for the future of metal AM. Full article
(This article belongs to the Section Materials Engineering)
23 pages, 7611 KB  
Article
Design and Optimization of a Twisted Photodiode Pixel Structure for All-Directional Phase-Detection Autofocus CMOS Image Sensors
by Daiki Shirahige, Koichi Fukuda, Hajime Ikeda, Yusuke Onuki, Ginjiro Toyoguchi, Kohei Okamoto, Shunichi Wakashima, Hiroshi Sekine, Shuhei Hayashi, Ryo Yoshida, Junji Iwata, Yasushi Matsuno, Katsuhito Sakurai, Hiroshi Yuzurihara and Takeshi Ichikawa
Sensors 2026, 26(6), 1758; https://doi.org/10.3390/s26061758 - 10 Mar 2026
Abstract
To achieve an all-directional and high-speed, high-accuracy autofocus (AF) function, we propose a CMOS image sensor with a Twisted Photodiode (PD) structure. The developed 3D-stacked back-side illuminated (BSI) sensor employs the Twisted PD, which enables equivalent angular response characteristics in both the horizontal and vertical directions for the two PDs integrated within a single pixel, thereby realizing AF detection for all pixels and all directions. This paper describes the Twisted PD structure that enables all-directional AF and presents an analysis of charge transfer behavior in this unique 3D configuration. In this paper, “all-directional” refers to robustness with respect to subject direction. Full article
26 pages, 3654 KB  
Project Report
Computer Vision-Based Monitoring and Data Integration in a Multi-Trophic Controlled-Environment Agriculture Demonstrator
by Frederik Werner, Till Glockow, Kai Meissner, Martin Krüger, Markus Reischl and Christof M. Niemeyer
Sustainability 2026, 18(6), 2700; https://doi.org/10.3390/su18062700 - 10 Mar 2026
Abstract
Controlled-environment agriculture (CEA) and circular production systems require coordinated monitoring of biological and physicochemical processes across trophic levels. This project report presents the implementation of a multi-trophic controlled-environment agriculture demonstrator that integrates computer-vision-based monitoring with established sensor infrastructure for aquaculture, poultry, plants, microalgae, duckweed, and insect modules. Stereo imaging and RGB-D systems are deployed for non-invasive quantification of fish biomass and plant growth, while continuous water-quality and environmental measurements (e.g., pH, dissolved oxygen, nitrate, ammonium, temperature, CO2) provide complementary process data. These data streams are synchronized within a shared database architecture to enable cross-module evaluation of nutrient dynamics, growth progression, and operational stability under real facility conditions. The implemented framework demonstrates how computer vision can extend conventional sensor-based monitoring by directly capturing biological performance indicators across aquatic, terrestrial, and microbial domains. While advanced predictive modeling and full digital twin simulation remain future development steps, the realized data-integration architecture establishes a structural foundation for the systematic evaluation of circular indoor food-production systems. The demonstrator illustrates how multimodal monitoring can support nutrient recirculation, transparency of biological variability, and data-driven assessment within controlled multi-trophic environments. Full article
(This article belongs to the Special Issue Food Science and Engineering for Sustainability—2nd Edition)
25 pages, 9221 KB  
Article
Research on Building Recognition in Ethnic Minority Villages Based on Multi-Feature Fusion
by Xiaoqiong Sun, Jiafang Yang, Wei Li, Ting Luo and Dongdong Xie
Buildings 2026, 16(6), 1099; https://doi.org/10.3390/buildings16061099 - 10 Mar 2026
Abstract
As a unique cultural heritage of Chinese ethnic minorities, Dong architecture provides rich historical and cultural information. Rapid and accurate extraction of ethnic building information from remote sensing images in complex terrain and high-density settlement environments is highly important for the protection of architectural heritage and the management of rural space. Taking Huanggang Dong Village in Liping County, Guizhou Province, China, as a case study, this paper develops a multifeature fusion machine learning framework for the automatic recognition of Dong ethnic architecture based on centimeter-level visible images captured by UAV. First, the vegetation index, HSI color features, and texture features based on the gray-level co-occurrence matrix are extracted from the UAV visible-light orthophoto. Through random forest feature importance ranking and a correlation test, six key features, namely the VDVI, HSI-S, HSI-I, mean, variance, and contrast, are selected to construct a multifeature space. This step constitutes the feature construction stage of the proposed methodology and provides the basis for subsequent classification. Second, classification models are constructed on the basis of a support vector machine (SVM) and random forest (RF). The effects of different feature combinations and different algorithms on classification accuracy are systematically compared, and the results are evaluated in terms of overall accuracy (OA), the kappa coefficient, user accuracy (UA), and producer accuracy (PA). This second part constitutes the classification phase of the methodology, which tests the feature space with different algorithms and evaluates model performance. The experimental results show that, with single-feature inputs, the SVM model dominated by texture features performs best, with an OA of 85.33% and a kappa of 0.799; with multifeature fusion, the RF algorithm integrates multisource features more effectively. Building recognition accuracy is particularly high with both the full feature space and the dimensionality-reduced feature space: using the full feature set, the overall accuracy reaches 89.00%, the kappa coefficient is 0.850, and the UA and PA reach 89.66% and 94.55%, respectively. Comparative analysis shows that the vegetation index–color–texture multifeature fusion and machine learning classification framework based on UAV visible-light images can achieve high-precision extraction of Dong architecture without relying on high-cost sensors. It effectively alleviates the confusion between water bodies and shadows and between dark roofs and vegetation, and separates traditional Dong architecture from roads, vegetation, and other elements. It provides a low-cost and feasible way for digital archiving, dynamic monitoring, and protection management of the traditional village architectural heritage of ethnic minorities. Full article
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)
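Two of the selected features are easy to reproduce: the sketch below computes VDVI from an RGB array and GLCM contrast for a grayscale patch using scikit-image, assuming 8-bit quantization and a single co-occurrence distance/angle; the HSI features, the random forest importance ranking, and the SVM/RF classifiers themselves are not included.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def vdvi(rgb):
    """Visible-band Difference Vegetation Index from a float RGB array (H, W, 3):
    VDVI = (2G - R - B) / (2G + R + B)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) / (2 * g + r + b + 1e-12)

def glcm_contrast(gray_uint8, distance=1, angle=0.0):
    """Gray-level co-occurrence matrix contrast for one patch
    (one of the texture features ranked by the random forest)."""
    glcm = graycomatrix(gray_uint8, [distance], [angle],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

rng = np.random.default_rng(2)
rgb = rng.random((64, 64, 3))
gray = (rgb.mean(axis=-1) * 255).astype(np.uint8)
print("mean VDVI:", float(vdvi(rgb).mean()))
print("GLCM contrast:", glcm_contrast(gray))
```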
22 pages, 2523 KB  
Article
An Infrared Star Identification Algorithm Based on Ordered Angular Distance Verification
by Xiaoyao Yan, Maosen Xiao and Fan Bu
Aerospace 2026, 13(3), 256; https://doi.org/10.3390/aerospace13030256 - 10 Mar 2026
Abstract
Short-wave infrared star sensors have become a key technology for all-time attitude determination within the atmosphere, in which the star identification algorithm plays a fundamental role. However, due to the limited number of detectable stars in infrared images, achieving robust and accurate identification remains challenging. To address this issue, this paper proposes a star identification algorithm based on ordered angular distance verification. The algorithm first extracts radial and adjacency features via full-field-of-view sorting to mitigate the impact of “edge loss”. It then employs a fast initial matching that combines hash table lookup with binary search, substantially reducing the number of candidate navigation stars requiring detailed matching. Subsequently, a local search matching procedure corrects index misalignment caused by false or missing stars, while angular distance invariance verification prevents false matches; the combination of these mechanisms significantly enhances the algorithm’s robustness. In simulations using 5000 star images, the proposed algorithm achieves an identification rate of 99.48%. It maintains a rate above 96% under position noise, magnitude noise, and false stars. The average processing time per star image is 10.57 ms, approximately 39% of that required by the conventional grid algorithm (27.01 ms). The simulation results demonstrate that the proposed algorithm achieves high identification accuracy and maintains strong robustness in complex noise environments. Full article
(This article belongs to the Special Issue Recent Advances in Vehicle Navigation and Positioning)
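The angular-distance-invariance verification at the core of the algorithm reduces to comparing pairwise star angles against catalog values; below is a minimal sketch assuming a pinhole camera model (no lens distortion) and a simple absolute-difference tolerance. The hash-table/binary-search initial matching and the local search that corrects index misalignment are not shown.

```python
import numpy as np

def star_unit_vectors(xy_pixels, focal_length_px, principal_point):
    """Convert image-plane star centroids to unit direction vectors in the
    sensor frame (pinhole model; distortion ignored for brevity)."""
    xy = np.asarray(xy_pixels, dtype=np.float64) - np.asarray(principal_point)
    v = np.column_stack([xy[:, 0], xy[:, 1], np.full(len(xy), focal_length_px)])
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def angular_distances(unit_vectors):
    """Pairwise angular distances (radians) between observed stars."""
    cosines = np.clip(unit_vectors @ unit_vectors.T, -1.0, 1.0)
    return np.arccos(cosines)

def verify_match(obs_angles, cat_angles, tol_rad=1e-4):
    """Angular-distance-invariance check: keep a candidate identification only
    if every observed pair angle agrees with the catalog pair angle."""
    return np.all(np.abs(obs_angles - cat_angles) < tol_rad)

centroids = [(512.3, 400.1), (800.7, 620.5), (300.2, 700.9)]
vecs = star_unit_vectors(centroids, focal_length_px=3000.0, principal_point=(640, 512))
obs = angular_distances(vecs)
print(verify_match(obs, obs))  # trivially True against itself
```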
21 pages, 5982 KB  
Article
Evaluating Geostationary Satellite-Based Approaches for NDVI Gap Filling in Polar-Orbiting Satellite Observations
by Han-Sol Ryu, Sung-Joo Yoon, Jinyeong Kim and Tae-Ho Kim
Sensors 2026, 26(5), 1731; https://doi.org/10.3390/s26051731 - 9 Mar 2026
Abstract
The Normalized Difference Vegetation Index (NDVI) derived from polar-orbiting satellites is widely used for vegetation monitoring; however, its temporal continuity is often limited by cloud contamination and fixed revisit cycles. To address this limitation, this study investigates the feasibility of using geostationary satellite observations to enhance the spatial completeness of Sentinel-2 NDVI at its standard revisit intervals through cloud gap-filling applications. Geostationary Ocean Color Imager II (GOCI-II) data (250 m) was used as input, while Sentinel-2 Multispectral Instrument (MSI) NDVI (10 m) served as the reference dataset. To enable cross-sensor integration, a data-driven transformation framework was developed to convert GOCI-II NDVI into MSI-like NDVI while preserving dominant spatial variation patterns rather than pursuing strict pixel-level super-resolution. The transformed NDVI was assessed through spatial comparisons and statistical metrics, including correlation coefficient, mean absolute error, root mean square error (RMSE), normalized RMSE, and structural similarity index measure. Results show that geostationary-derived NDVI captures broad spatial organization and field-scale variability observed in MSI NDVI. Building on this cross-scale consistency, cloud gap-filling experiments demonstrate that temporally adjacent transformed NDVI scenes maintain consistent variation patterns, supporting their complementary use for compensating cloud-induced gaps. Although reduced contrast and magnitude-dependent biases remain, primarily due to the large spatial resolution difference and sub-pixel heterogeneity, an intermediate-resolution (80 m) sensitivity analysis indicates improved stability when the resolution gap is reduced. Overall, these findings highlight the practical potential of integrating geostationary and polar-orbiting observations to improve NDVI spatial continuity in cloud-prone regions. Full article
(This article belongs to the Special Issue Remote Sensing Technology for Agricultural and Land Management)
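For orientation, a minimal sketch of the NDVI definition and a few of the agreement metrics named above (Pearson correlation, MAE, RMSE) follows; the GOCI-II-to-MSI-like transformation framework itself is not reproduced, and the arrays here are synthetic.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-12)

def agreement_metrics(reference, estimate):
    """Subset of the metrics used in the paper: Pearson r, MAE, RMSE.
    (Normalized RMSE and SSIM are omitted for brevity.)"""
    ref, est = reference.ravel(), estimate.ravel()
    r = np.corrcoef(ref, est)[0, 1]
    mae = np.abs(ref - est).mean()
    rmse = np.sqrt(((ref - est) ** 2).mean())
    return {"r": r, "mae": mae, "rmse": rmse}

rng = np.random.default_rng(3)
msi_ndvi = ndvi(rng.random((100, 100)) + 0.3, rng.random((100, 100)) * 0.4)
goci_like = np.clip(msi_ndvi + rng.normal(0, 0.05, msi_ndvi.shape), -1, 1)
print(agreement_metrics(msi_ndvi, goci_like))
```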
24 pages, 8735 KB  
Article
Evaluation of High Dynamic Range Imaging Methods for Luminance Measurements
by Lou Gevaux, Alejandro Ferrero, Alice Dupiau, Ángela Sáez, Markos Antonopoulos and Constantinos Bouroussis
J. Imaging 2026, 12(3), 114; https://doi.org/10.3390/jimaging12030114 - 9 Mar 2026
Viewed by 115
Abstract
Imaging luminance measurement is increasingly used in lighting applications, but the limited dynamic range of camera sensors requires using high dynamic range (HDR) imaging methods for evaluating scenes with large luminance contrasts. This work aims at investigating how parameters of HDR imaging techniques may impact luminance measurement accuracy, using a numerical evaluation. A numerical simulation framework based on a digital twin of an imaging system and synthetic high-contrast luminance scenes is used to introduce controlled systematic error sources and quantify their impact on HDR luminance accuracy. The results support the identification of HDR approaches most suitable for producing luminance measurements traceable to the International System of Units (SI). Full article
(This article belongs to the Section Computational Imaging and Computational Photography)
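As background, the sketch below shows one common HDR merging approach: a weighted average of exposure-time-normalized frames from a linear (radiometrically calibrated) sensor, with a hat weight that suppresses near-saturated and near-dark pixels. This is a generic illustration, not the specific HDR methods or the digital-twin evaluation framework compared in the paper.

```python
import numpy as np

def merge_hdr_linear(exposures, exposure_times, sat_level=0.95, dark_level=0.02):
    """Weighted-average HDR merge for a linear sensor: each frame is divided
    by its exposure time and frames are blended with a weight that
    down-weights near-saturated and near-dark pixels."""
    acc = np.zeros_like(exposures[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, exposure_times):
        img = img.astype(np.float64)
        w = np.clip(np.minimum(img - dark_level, sat_level - img), 0.0, None)
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-12)   # relative luminance map

rng = np.random.default_rng(4)
scene = rng.random((64, 64)) * 100.0                  # "true" relative luminance
times = [1 / 1000, 1 / 250, 1 / 60]
stack = [np.clip(scene * t + rng.normal(0, 0.002, scene.shape), 0, 1) for t in times]
hdr = merge_hdr_linear(stack, times)
print(float(np.corrcoef(scene.ravel(), hdr.ravel())[0, 1]))
```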
24 pages, 4915 KB  
Article
Semantic-Guided Matching of Heterogeneous UAV Imagery and Mobile LiDAR Data Using Deep Learning and Graph Neural Networks
by Tee-Ann Teo, Hao Yu and Pei-Cheng Chen
Drones 2026, 10(3), 185; https://doi.org/10.3390/drones10030185 - 8 Mar 2026
Viewed by 94
Abstract
The integration of heterogeneous geospatial data, specifically low-cost unmanned aerial vehicle (UAV) imagery and mobile light detection and ranging (LiDAR) system point clouds, presents a significant challenge due to the significant radiometric and structural discrepancies between the two modalities. This study proposes a novel air-to-ground semantic feature matching framework to achieve precise geometric registration between these data sources by effectively incorporating semantic-constraint deep learning-based matching. The methodology transformed the cross-sensor alignment challenge into a robust two-dimensional image matching problem. This was achieved by first using YOLOv11 for semantic segmentation of common road markings in both the UAV orthoimage and the converted LiDAR intensity image to generate highly consistent feature references. Subsequently, the SuperPoint detector and a graph neural network matcher, SuperGlue, were applied to these semantic images to establish reliable geomatics information correspondence points. Experimental results confirmed that this semantic-guided strategy consistently outperformed traditional feature-based matching (i.e., scale-invariant feature transform + fast library for approximate nearest neighbors), particularly by converting the noisy LiDAR intensity image into a stabilized semantic representation. The explicit application of semantic constraints further proved effective in eliminating false matches between geometrically similar but semantically distinct objects. The final object-specific analysis demonstrated that features with clear, complex geometric structures (e.g., pedestrian crossings and directional arrows) provide the most robust matching control. In summary, the proposed framework successfully leverages semantic context to overcome cross-sensor heterogeneity, offering an automated and precise solution for the geometric alignment of mobile LiDAR data. Full article
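Once the semantic images have been matched, registration reduces to a robust transform estimate from point correspondences; the sketch below assumes the SuperPoint/SuperGlue stage has already produced two (N, 2) point arrays and fits a homography with OpenCV's RANSAC, reporting the inlier ratio. The YOLOv11 segmentation and the learned matching itself are outside this sketch.

```python
import numpy as np
import cv2

def register_from_matches(uav_pts, lidar_pts, reproj_thresh=3.0):
    """Given corresponding 2D points from the semantic matching stage
    (assumed here as two (N, 2) arrays), robustly estimate the
    UAV-image -> LiDAR-intensity-image homography and report the
    RANSAC inlier ratio."""
    src = np.asarray(uav_pts, dtype=np.float32).reshape(-1, 1, 2)
    dst = np.asarray(lidar_pts, dtype=np.float32).reshape(-1, 1, 2)
    h, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    inlier_ratio = float(inlier_mask.mean()) if inlier_mask is not None else 0.0
    return h, inlier_ratio

# Synthetic check: points related by a known transform plus small noise.
rng = np.random.default_rng(5)
uav = rng.uniform(0, 1000, (50, 2))
true_h = np.array([[0.9, 0.05, 20.0], [-0.05, 0.9, -15.0], [0.0, 0.0, 1.0]])
ones = np.hstack([uav, np.ones((50, 1))])
lidar = (ones @ true_h.T)[:, :2] + rng.normal(0, 0.5, (50, 2))
h_est, ratio = register_from_matches(uav, lidar)
print(np.round(h_est, 3), ratio)
```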