Search Results (1,322)

Search Parameters:
Keywords = image error correction

27 pages, 91954 KB  
Article
A Robust DEM Registration Method via Physically Consistent Image Rendering
by Yunchou Li, Niangang Jiao, Feng Wang and Hongjian You
Appl. Sci. 2026, 16(3), 1238; https://doi.org/10.3390/app16031238 - 26 Jan 2026
Abstract
Digital elevation models (DEMs) play a critical role in geospatial analysis and surface modeling. However, due to differences in data collection payload, data processing methodology, and data reference baseline, DEMs acquired from various sources often exhibit systematic spatial offsets. This limitation substantially constrains their accuracy and reliability in multi-source joint analysis and fusion applications. Traditional registration methods such as the Least-Z Difference (LZD) method are sensitive to gross errors, while multimodal registration approaches overlook the importance of elevation information. To address these challenges, this paper proposes a DEM registration method based on physically consistent rendering and multimodal image matching. The approach converts DEMs into image data through irradiance-based models and parallax geometric models. Feature point pairs are extracted using template-based matching techniques and further refined through elevation consistency analysis. Reliable correspondences are selected by jointly considering elevation error distributions and geometric consistency constraints, enabling robust affine transformation estimation and elevation bias correction. The experimental results demonstrate that in typical terrains such as urban areas, glaciers, and plains, the proposed method outperforms classical DEM registration algorithms and state-of-the-art remote sensing image registration algorithms. The results indicate clear advantages in registration accuracy, robustness, and adaptability to diverse terrain conditions, highlighting the potential of the proposed framework as a universal DEM collaborative registration solution. Full article
(This article belongs to the Section Earth Sciences)
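The final estimation step, fitting an affine transform to filtered correspondences, can be sketched generically. The code below is a minimal iterative least-squares scheme with residual-based rejection, not the paper's exact elevation-consistency procedure; all function names, iteration counts, and thresholds are illustrative.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine fit, dst ~ A @ src + t, for (N, 2) point arrays."""
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 2] = 1.0   # rows constraining x-coordinates
    M[1::2, 3:5] = src; M[1::2, 5] = 1.0   # rows constraining y-coordinates
    p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
    return p.reshape(2, 3)                 # [A | t]

def robust_affine(src, dst, n_iter=5, k=3.0):
    """Refit repeatedly, dropping correspondences with large residuals; a generic
    stand-in for the paper's elevation/geometry consistency selection."""
    keep = np.ones(len(src), dtype=bool)
    P = fit_affine(src, dst)
    for _ in range(n_iter):
        resid = np.linalg.norm(src @ P[:, :2].T + P[:, 2] - dst, axis=1)
        keep = resid <= k * np.median(resid) + 1e-9
        P = fit_affine(src[keep], dst[keep])
    return P, keep
```

The recovered [A | t] would then be used to resample one DEM into the other's frame before applying the elevation-bias correction.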

29 pages, 6047 KB  
Article
Robust Multi-Resolution Satellite Image Registration Using Deep Feature Matching and Super Resolution Techniques
by Yungyo Im and Yangwon Lee
Appl. Sci. 2026, 16(2), 1113; https://doi.org/10.3390/app16021113 - 21 Jan 2026
Abstract
This study evaluates the effectiveness of integrating a Residual Shifting (ResShift)-based deep learning super-resolution (SR) technique with the Robust Dense Feature Matching (RoMa) algorithm for high-precision inter-satellite image registration. The key findings of this research are as follows: (1) Enhancement of Structural Details: Quadrupling image resolution via the ResShift SR model significantly improved the distinctness of edges and corners, leading to superior feature matching performance compared to original resolution data. (2) Superiority of Dense Matching: The RoMa model consistently delivered the strongest results, maintaining a minimum of 2300 correct matches (NCM) across all datasets, which substantially outperformed existing sparse matching models such as SuperPoint + LightGlue (SPLG) (minimum 177 NCM) and SuperPoint + SuperGlue (SPSG). (3) Seasonal Robustness: The proposed framework demonstrated exceptional stability, maintaining registration errors below 0.5 pixels even in challenging summer–winter image pairs affected by cloud cover and spectral variations. (4) Geospatial Reliability: Integration of SR-derived homography with RoMa achieved a significant reduction in geographic distance errors, confirming the robustness of the dense matching paradigm for multi-sensor and multi-temporal satellite data fusion. These findings validate that the synergy between diffusion-based SR and dense feature matching provides a robust technological foundation for autonomous, high-precision satellite image registration. Full article
(This article belongs to the Special Issue Applications of Deep and Machine Learning in Remote Sensing)
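The reported registration accuracy ultimately rests on estimating a homography from matched keypoints and measuring reprojection error in pixels. A minimal OpenCV sketch, using synthetic correspondences in place of dense-matcher output; the outlier fraction and RANSAC threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Synthetic correspondences standing in for dense-matcher output: inliers under
# a known homography, plus a few corrupted pairs to exercise RANSAC.
rng = np.random.default_rng(0)
pts_ref = rng.uniform(0, 1000, (300, 2)).astype(np.float32)
H_true = np.array([[1.0, 0.01, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])
proj = np.hstack([pts_ref, np.ones((300, 1), np.float32)]) @ H_true.T
pts_tgt = (proj[:, :2] / proj[:, 2:]).astype(np.float32)
pts_tgt[:20] += rng.uniform(-50, 50, (20, 2)).astype(np.float32)  # outliers

# Robust homography estimate; the reprojection threshold is in pixels.
H, mask = cv2.findHomography(pts_ref, pts_tgt, cv2.RANSAC, 1.0)
warped = cv2.perspectiveTransform(pts_ref.reshape(-1, 1, 2), H).reshape(-1, 2)
err = np.linalg.norm(warped - pts_tgt, axis=1)
print(f"inliers: {int(mask.sum())}/300, median error: {np.median(err):.3f} px")
```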

17 pages, 1423 KB  
Article
Residual Motion Correction in Low-Dose Myocardial CT Perfusion Using CNN-Based Deformable Registration
by Mahmud Hasan, Aaron So and Mahmoud R. El-Sakka
Electronics 2026, 15(2), 450; https://doi.org/10.3390/electronics15020450 - 20 Jan 2026
Abstract
Dynamic myocardial CT perfusion imaging enables functional assessment of coronary artery stenosis and myocardial microvascular disease. However, it is susceptible to residual motion artifacts arising from cardiac and respiratory activity. These artifacts introduce temporal misalignments, distorting Time-Enhancement Curves (TECs) and leading to inaccurate myocardial perfusion measurements. Traditional nonrigid registration methods can address such motion but are often computationally expensive and less effective when applied to low-dose images, which are prone to increased noise and structural degradation. In this work, we present a CNN-based motion-correction framework specifically trained for low-dose cardiac CT perfusion imaging. The model leverages spatiotemporal patterns to estimate and correct residual motion between time frames, aligning anatomical structures while preserving dynamic contrast behaviour. Unlike conventional methods, our approach avoids iterative optimization and manually defined similarity metrics, enabling faster, more robust corrections. Quantitative evaluation demonstrates significant improvements in temporal alignment, with reduced Target Registration Error (TRE) and increased correlation between voxel-wise TECs and reference curves. These enhancements enable more accurate myocardial perfusion measurements. Noise from low-dose scans still affects registration performance, which remains an open challenge. This work emphasizes the potential of learning-based methods to perform effective residual motion correction under challenging acquisition conditions, thereby improving the reliability of myocardial perfusion assessment. Full article
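The two evaluation quantities named in the abstract are straightforward to compute. A minimal sketch, assuming corresponding landmarks and time-enhancement curves have already been extracted:

```python
import numpy as np

def target_registration_error(landmarks_ref, landmarks_warped):
    """TRE: Euclidean distances between corresponding landmarks after the
    estimated deformation has been applied ((N, 3) arrays, e.g. in mm)."""
    d = np.linalg.norm(landmarks_ref - landmarks_warped, axis=1)
    return d.mean(), d.max()

def tec_correlation(tec_voxel, tec_reference):
    """Pearson correlation between a voxel's time-enhancement curve and the
    reference curve, the second metric quoted in the abstract."""
    return float(np.corrcoef(tec_voxel, tec_reference)[0, 1])
```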

20 pages, 5434 KB  
Article
A Wavenumber Domain Consistent Imaging Method Based on High-Order Fourier Series Fitting Compensation for Optical/SAR Co-Aperture System
by Ke Wang, Yinshen Wang, Chong Song, Bingnan Wang, Li Tang, Xuemei Wang and Maosheng Xiang
Remote Sens. 2026, 18(2), 315; https://doi.org/10.3390/rs18020315 - 16 Jan 2026
Abstract
Optical and SAR image registration and fusion are pivotal in the remote sensing field, as they leverage the complementary advantages of both modalities. However, achieving this with high accuracy and efficiency remains challenging. This challenge arises because traditional methods are confined to the image domain, applied after independent image formation. They attempt to correct geometric mismatches that are rooted in fundamental physical differences, an approach that inherently struggles to achieve both precision and speed. Therefore, this paper introduces a co-designed system and algorithm framework to overcome the fundamental challenges. At the system level, we pioneer an innovative airborne co-aperture system to ensure synchronous data acquisition. At the algorithmic level, we derive a theoretical model within the wavenumber domain imaging process, attributing optical/SAR pixel deviations to the deterministic phase errors introduced by its core Stolt interpolation operation. This model enables a signal-domain compensation technique, which employs high-order Fourier series fitting to correct these errors during the SAR image formation itself. This co-design yields a unified processing pipeline that achieves direct, sub-pixel co-registration, thereby establishing a foundational paradigm for real-time multi-source data processing. The experimental results on both multi-point and structural targets confirm that our method achieves sub-pixel registration accuracy across diverse scenarios, accompanied by a marked gain in computational efficiency over the time-domain approach. Full article
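The compensation idea, fitting a truncated Fourier series to a deterministic phase-error curve and multiplying the conjugate phase into the signal, can be sketched generically. The order, period, and synthetic error curve below are assumptions, not the paper's values.

```python
import numpy as np

def fourier_design(x, order, period):
    """Design matrix [1, cos(k*w*x), sin(k*w*x), ...] for a truncated series."""
    w = 2 * np.pi / period
    cols = [np.ones_like(x)]
    for k in range(1, order + 1):
        cols += [np.cos(k * w * x), np.sin(k * w * x)]
    return np.stack(cols, axis=1)

# Synthetic phase-error curve standing in for the deterministic error that
# Stolt interpolation introduces; fit it, then apply the conjugate phase.
x = np.linspace(0.0, 1.0, 200)
phase_err = 0.3 * np.sin(2 * np.pi * x) + 0.1 * np.cos(6 * np.pi * x)
A = fourier_design(x, order=5, period=1.0)
coef, *_ = np.linalg.lstsq(A, phase_err, rcond=None)
compensation = np.exp(-1j * (A @ coef))  # multiply into the wavenumber-domain data
print(f"max fit residual: {np.abs(A @ coef - phase_err).max():.2e} rad")
```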

20 pages, 16586 KB  
Article
A Deep Transfer Learning Framework for Speed-of-Sound Aberration Correction in Full-Ring Photoacoustic Tomography
by Jie Yin, Yingjie Feng, Qi Feng, Junjun He and Chao Tao
Sensors 2026, 26(2), 626; https://doi.org/10.3390/s26020626 - 16 Jan 2026
Abstract
Speed-of-sound (SoS) heterogeneities introduce pronounced artifacts in full-ring photoacoustic tomography (PAT), degrading imaging accuracy and constraining its practical use. We introduce a transfer learning-based deep neural framework that couples an ImageNet-pretrained ResNet-50 encoder with a tailored deconvolutional decoder to perform end-to-end artifact correction on photoacoustic tomography reconstructions. We propose a two-phase curriculum learning protocol, initial pretraining on simulations with uniform SoS mismatches, followed by fine-tuning on spatially heterogeneous SoS fields, to improve generalization to complex aberrations. Evaluated on numerical models, physical phantom experiments and in vivo experiments, the framework provides substantial gains over conventional back-projection and U-Net baselines in mean squared error, structural similarity index measure, and Pearson correlation coefficient, while achieving an average inference time of 17 ms per frame. These results indicate that the proposed approach can reduce the sensitivity of full-ring PAT to SoS inhomogeneity and improve full-view reconstruction quality. Full article
(This article belongs to the Section Sensing and Imaging)
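The encoder-decoder pairing described above can be outlined in PyTorch. Only the ImageNet-pretrained ResNet-50 encoder is taken from the text; the decoder layout below is a guess for illustration, since the abstract does not specify it.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

class ArtifactCorrector(nn.Module):
    """ImageNet-pretrained ResNet-50 encoder plus a transposed-conv decoder;
    the decoder channel plan here is an assumption, not the paper's design."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)
        # Keep everything up to the last residual stage (2048 x H/32 x W/32).
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        ups, ch = [], [2048, 512, 256, 128, 64, 32]
        for cin, cout in zip(ch[:-1], ch[1:]):
            ups += [nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
                    nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
        self.decoder = nn.Sequential(*ups, nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, x):
        # Replicate a single-channel PAT image to the 3 channels ResNet expects.
        return self.decoder(self.encoder(x.repeat(1, 3, 1, 1)))

model = ArtifactCorrector().eval()
with torch.no_grad():
    out = model(torch.randn(1, 1, 256, 256))  # -> (1, 1, 256, 256)
```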

40 pages, 16360 KB  
Review
Artificial Intelligence Meets Nail Diagnostics: Emerging Image-Based Sensing Platforms for Non-Invasive Disease Detection
by Tejrao Panjabrao Marode, Vikas K. Bhangdiya, Shon Nemane, Dhiraj Tulaskar, Vaishnavi M. Sarad, K. Sankar, Sonam Chopade, Ankita Avthankar, Manish Bhaiyya and Madhusudan B. Kulkarni
Bioengineering 2026, 13(1), 75; https://doi.org/10.3390/bioengineering13010075 - 8 Jan 2026
Abstract
Artificial intelligence (AI) and machine learning (ML) are transforming medical diagnostics, but the human nail, an easily accessible and information-rich biological substrate, remains underexploited in the digital health field. Nail pathologies serve as easily observed, non-invasive biomarkers of disease, including systemic conditions such as anemia, diabetes, psoriasis, melanoma, and fungal infections. This review presents the first broad synthesis of AI/ML-based image analysis of nail lesions for diagnostic purposes. Whereas dermatological reviews to date have been wider in scope, this review focuses specifically on nail-related diagnosis and screening. We present the relevant imaging modalities (smartphone imaging, dermoscopy, optical coherence tomography), image processing techniques (color correction, segmentation, cropping of regions of interest), and models ranging from classical methods to deep learning, with annotated descriptions of each. We also describe disease-specific AI applications and analyze real-world impediments to clinical use, including data scarcity, variation in skin type, annotation errors, and other barriers to clinical adoption. Emerging solutions are highlighted as well: explainable AI (XAI), federated learning, and smartphone-based diagnostic platforms. Bridging clinical dermatology, artificial intelligence, and mobile health, this review consolidates existing knowledge and charts a path toward scalable, equitable, and trustworthy nail-based diagnostic techniques. Our findings advocate for interdisciplinary innovation to bring AI-enabled nail analysis from lab prototypes to routine healthcare and global screening initiatives. Full article
(This article belongs to the Special Issue Bioengineering in a Generative AI World)

16 pages, 63609 KB  
Article
An Automated Framework for Estimating Building Height Changes Using Multi-Temporal Street View Imagery
by Jiqiu Deng, Qiqi Gu and Xiaoyan Chen
Appl. Sci. 2026, 16(1), 550; https://doi.org/10.3390/app16010550 - 5 Jan 2026
Abstract
Building height is an important indicator for describing the three-dimensional structure of cities. However, monitoring its changes is still difficult due to high labor costs, low efficiency, and the limited resolution and viewing angles of remote sensing images. This study proposes an automatic framework for estimating building height changes using multi-temporal street view images. First, buildings are detected by the YOLO-v5 model, and their contours are extracted through edge detection and hole filling. To reduce false detections, greenness and depth information are combined to filter out pseudo changes. Then, a neighboring region resampling strategy is used to select visually similar images for better alignment, which helps to reduce the influence of sampling errors. In addition, the framework applies cylindrical projection correction and introduces a triangulation-based method (HCAOT) for building height estimation. Experimental results show that the proposed framework achieves an accuracy of 85.11% in detecting real changes and 91.23% in identifying unchanged areas. For height estimation, the HCAOT method reaches an RMSE of 0.65 m and an NRMSE of 0.04, which performs better than several comparison methods. Overall, the proposed framework provides an efficient and reliable approach for dynamically updating 3D urban information and supporting spatial monitoring in smart cities. Full article
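For orientation, a generic single-view height estimate from elevation angles, together with the RMSE/NRMSE metrics reported above, might look as follows. This is illustrative geometry only, not the paper's HCAOT triangulation, and the range-based NRMSE normalization is an assumption.

```python
import numpy as np

def height_from_angles(dist_m, ang_top, ang_base):
    """Building height from the horizontal distance to the facade and the
    elevation angles (radians) of the roofline and the base."""
    return dist_m * (np.tan(ang_top) - np.tan(ang_base))

def rmse_nrmse(pred, truth):
    """RMSE and range-normalized RMSE for height estimates."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
    return rmse, rmse / float(truth.max() - truth.min())

# e.g. a facade 20 m away, roofline 35 deg up, base 7 deg below the horizon
print(f"{height_from_angles(20.0, np.radians(35.0), np.radians(-7.0)):.1f} m")
```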

18 pages, 5654 KB  
Article
Thermal Deformation Correction for the FY-4A LMI
by Yuansheng Zhang, Xiushu Qie, Dongjie Cao, Shanfeng Yuan, Dongfang Wang, Hongbo Zhang, Dongxia Liu, Zhuling Sun, Mingyuan Liu, Kexin Zhu, Rubin Jiang and Jing Yang
Remote Sens. 2026, 18(1), 163; https://doi.org/10.3390/rs18010163 - 4 Jan 2026
Abstract
Affected by solar radiation in space, the FY-4A Lightning Mapping Imager (LMI) detection array exhibits daily periodic thermal expansion and contraction, leading to deviations in lightning positioning accuracy. While LMI’s detection efficiency is higher at night, the dual edge matching algorithm, which relies on surface features for correction, performs poorly during nighttime (errors of around 3 pixels). Analysis shows that most of the lightning data corrected by this method exhibit significant deviations from the actual lightning locations in practical applications. Therefore, this paper proposes a new correction method based on high-precision ground-based lightning location data from the 2019 summer World Wide Lightning Location Network (WWLLN) and the Beijing Broadband Lightning Network (BLNET). Using these datasets as reference standards, the periodic deviation of LMI is determined, and a correction curve is derived using a weighted Gaussian fitting approach. This method further improves the nighttime lightning location accuracy of LMI on the basis of the current operational algorithm. The results demonstrate that the corrected LMI data show significantly reduced positioning errors, with accuracy within ±1 pixel, taking the Beijing area as an example. Full article
(This article belongs to the Special Issue Application of Satellite Data for Lightning Mapping)
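The weighted Gaussian fit of the daily deviation curve can be sketched with SciPy. The synthetic offsets and the weight-by-sample-count scheme below are assumptions standing in for the WWLLN/BLNET-derived data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma, c):
    return a * np.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) + c

# Hypothetical hourly LMI-minus-ground-network offsets (pixels); real values
# would come from matching LMI events against WWLLN/BLNET locations.
t = np.arange(24.0)
offset = gaussian(t, 2.0, 13.0, 3.0, -0.5) \
    + np.random.default_rng(1).normal(0.0, 0.1, 24)
counts = np.full(24, 50.0)  # matched-event count per hourly bin

popt, _ = curve_fit(gaussian, t, offset, p0=(1.0, 12.0, 3.0, 0.0),
                    sigma=1.0 / np.sqrt(counts))  # weight by sqrt(sample count)
correction = -gaussian(t, *popt)  # subtract the fitted daily deviation
print(f"fitted peak hour: {popt[1]:.1f}")
```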

45 pages, 1119 KB  
Review
Noise Sources and Strategies for Signal Quality Improvement in Biological Imaging: A Review Focused on Calcium and Cell Membrane Voltage Imaging
by Dmitrii M. Nikolaev, Ekaterina M. Metelkina, Andrey A. Shtyrov, Fanghua Li, Maxim S. Panov and Mikhail N. Ryazantsev
Biosensors 2026, 16(1), 31; https://doi.org/10.3390/bios16010031 - 1 Jan 2026
Abstract
This review addresses the challenges of obtaining high-quality quantitative data in the optical imaging of membrane voltage and calcium dynamics. The paper provides a comprehensive overview and systematization of recent studies that analyze factors limiting signal fidelity and propose strategies to enhance data quality. The primary sources of signal degradation in biological optical imaging, with an emphasis on membrane voltage and calcium imaging, are systematically explored across four major indicator classes: voltage-sensitive dyes (VSDs), genetically encoded voltage indicators (GEVIs), calcium-sensitive dyes (CSDs), and genetically encoded calcium indicators (GECIs). Common mechanisms that compromise data quality are classified into three main categories: fundamental photon shot noise, device-related errors, and sample-related measurement errors. For each class of limitation, its physical or biological origin and characteristic manifestations are described, which are followed by an analysis of available mitigation strategies, including hardware optimization, choice of sensors, sample preparation and experimental design, post-processing and computational correction methods. Full article
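As a worked example of the first category, photon shot noise sets a hard floor on detectability: a fractional change ΔF/F measured against Poisson noise has SNR of roughly (ΔF/F)·√N. A small illustration, where the 1% transient is an arbitrary example value:

```python
import numpy as np

# Shot-noise-limited detection: the photon budget per pixel and frame sets the
# noise floor before any device- or sample-related effects enter.
dff = 0.01  # illustrative 1% fluorescence transient
for n_photons in (1e2, 1e4, 1e6):
    print(f"N = {n_photons:9.0f} photons -> SNR of a 1% transient ~ "
          f"{dff * np.sqrt(n_photons):.2f}")
```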

31 pages, 6944 KB  
Article
Prompt-Based and Transformer-Based Models Evaluation for Semantic Segmentation of Crowdsourced Urban Imagery Under Projection and Geometric Symmetry Variations
by Sina Rezaei, Aida Yousefi and Hossein Arefi
Symmetry 2026, 18(1), 68; https://doi.org/10.3390/sym18010068 - 31 Dec 2025
Abstract
Semantic segmentation of crowdsourced street-level imagery plays a critical role in urban analytics by enabling pixel-wise understanding of urban scenes for applications such as walkability scoring, environmental comfort evaluation, and urban planning, where robustness to geometric transformations and projection-induced symmetry variations is essential. This study presents a comparative evaluation of two primary families of semantic segmentation models: transformer-based models (SegFormer and Mask2Former) and prompt-based models (CLIPSeg, LangSAM, and SAM+CLIP). The evaluation is conducted on images with varying geometric properties, including normal perspective, fisheye distortion, and panoramic format, representing different forms of projection symmetry and symmetry-breaking transformations, using data from Google Street View and Mapillary. Each model is evaluated on a unified benchmark with pixel-level annotations for key urban classes, including road, building, sky, vegetation, and additional elements grouped under the “Other” class. Segmentation performance is assessed through metric-based, statistical, and visual evaluations, with mean Intersection over Union (mIoU) and pixel accuracy serving as the primary metrics. Results show that LangSAM demonstrates strong robustness across different image formats, with mIoU scores of 64.48% on fisheye images, 85.78% on normal perspective images, and 96.07% on panoramic images, indicating strong semantic consistency under projection-induced symmetry variations. Among transformer-based models, SegFormer proves to be the most reliable, attaining the highest accuracy among all models on fisheye and normal perspective images, with mean IoU scores of 72.21%, 94.92%, and 75.13% on fisheye, normal, and panoramic imagery, respectively. LangSAM not only demonstrates robustness across different projection geometries but also delivers the lowest segmentation error, consistently identifying the correct class for corresponding objects. In contrast, CLIPSeg remains the weakest prompt-based model, with mIoU scores of 77.60% on normal images, 59.33% on panoramic images, and a substantial drop to 59.33% on fisheye imagery, reflecting sensitivity to projection-related symmetry distortions. Full article
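The two primary metrics are simple to state precisely. A minimal sketch for label maps; the convention of averaging only over classes present in the ground truth is an assumption, and implementations differ.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """mIoU over label maps: per-class intersection over union, averaged
    over the classes that occur in the ground truth."""
    ious = []
    for c in range(num_classes):
        gt_c, pred_c = (gt == c), (pred == c)
        if gt_c.any():
            union = np.logical_or(gt_c, pred_c).sum()
            ious.append(np.logical_and(gt_c, pred_c).sum() / union)
    return float(np.mean(ious))

def pixel_accuracy(pred, gt):
    """Fraction of pixels whose predicted label matches the ground truth."""
    return float((pred == gt).mean())
```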

17 pages, 18689 KB  
Article
Assessing the Impact of T-Mart Adjacency Effect Correction on Turbidity Retrieval from Landsat 8/9 and Sentinel-2 Imagery (Case Study: St. Lawrence River, Canada)
by Mohsen Ansari, Yulun Wu and Anders Knudby
Remote Sens. 2026, 18(1), 127; https://doi.org/10.3390/rs18010127 - 30 Dec 2025
Abstract
In inland waters, Atmospheric Correction (AC), including Adjacency Effect (AE) correction, is a major challenge for water quality retrieval using optical satellite data. This study evaluated three image pre-processing options for turbidity retrieval in the St. Lawrence River using Sentinel-2 (S2) and Landsat 8/9 (L8/9) imagery with the Light Gradient Boosting Machine (LightGBM) model: (1) No pre-processing, i.e., use of Top-of-Atmosphere (TOA) reflectance, (2) AC pre-processing, obtaining water-leaving reflectance (Rw) from AC for the Operational Land Imager lite (ACOLITE)’s Dark Spectrum Fitting (DSF) technique, and (3) AE pre-processing, correcting for the AE using T-Mart before obtaining Rw from DSF. Results demonstrated that AE pre-processing outperformed the other two options. For L8/9, AE pre-processing reduced the Root Mean Square Error (RMSE) and improved the median symmetric accuracy (ε) by 48.8% and 19.0%, respectively, compared with AC pre-processing, and by 48.5% and 50.7%, respectively, compared with No pre-processing. For S2, AE pre-processing performed better than AC pre-processing and also outperformed No pre-processing, reducing RMSE by 28.4% and ε by 50.8%. However, No pre-processing yielded the lowest absolute symmetric signed percentage bias (|β|) among all pre-processing options. Analysis indicated that AE pre-processing yielded superior performance within 0–300 m from shore than other options, where the AE influence is strongest. Turbidity maps generated using AE pre-processing were smoother and less noisy compared to the other pre-processing options, particularly in cloud-adjacent regions. Overall, our findings suggest that incorporating AE correction through T-Mart improves the performance of the LightGBM model for turbidity retrieval from both L8/9 and S2 imagery in the St. Lawrence River, compared to the alternative pre-processing options. Full article
(This article belongs to the Special Issue Recent Advances in Water Quality Monitoring)
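The accuracy metrics quoted above belong to the log-accuracy-ratio family. A minimal sketch of plausible definitions; the exact conventions used in the paper may differ.

```python
import numpy as np

def turbidity_metrics(pred, obs):
    """RMSE, median symmetric accuracy (epsilon, %), and symmetric signed
    percentage bias (beta, %), from the log accuracy ratio q = ln(pred/obs)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    q = np.log(pred / obs)
    rmse = float(np.sqrt(np.mean((pred - obs) ** 2)))
    eps = 100.0 * (np.exp(np.median(np.abs(q))) - 1.0)
    beta = 100.0 * np.sign(np.median(q)) * (np.exp(np.abs(np.median(q))) - 1.0)
    return rmse, eps, beta
```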

15 pages, 2794 KB  
Article
Improved Method for Quantitative Measurement of OH Radicals Based on Absorption Spectroscopy
by Xiu Yang, Jie Cui, Rui Ma, Lindan Yue, Yongzhuo Yin, Janhua Qi, Youning Xu, Benchuan Xu and Liang Zhu
Molecules 2026, 31(1), 118; https://doi.org/10.3390/molecules31010118 - 29 Dec 2025
Abstract
Because OH-PLIF quantitative measurements suffer from high temperature sensitivity and poor applicability of calibration constants, this paper combines absorption spectroscopy with dual-line temperature inversion to establish an explicitly temperature-corrected OH radical concentration inversion model. By simultaneously acquiring PLIF images and absorption spectrum data under varying hydrogen-oxygen mixture flow rates, the equivalent absorption path length is calculated and the temperature-dependent absorption cross-section σ(ν,T) is incorporated. This allows the integrated absorbance to respond dynamically to high-temperature flame environments. Results demonstrate that the established temperature-corrected model significantly reduces systematic errors caused by temperature variations, with the calibration constant C fluctuating less than ±5% across different operating conditions. Further optimization via the least-squares method yielded the optimal constant Copt = 0.01844. Its applicability was validated across various operating conditions, with average relative errors controlled within 4–6%. Compared to the uncorrected model, the overall error decreased from 9.1% to 5.2%. Full article
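A calibration constant in a model of the form n_OH = C·S_PLIF is a one-parameter least-squares slope. A sketch with entirely synthetic numbers, not the paper's data:

```python
import numpy as np

# Hypothetical paired measurements: absorption-derived OH concentration
# (Beer-Lambert: n = A_int / (sigma(T) * L)) vs. temperature-corrected PLIF
# signal. The constant C minimizing ||C*S - n||^2 is the slope through zero.
n_abs = np.array([1.2e16, 2.1e16, 3.3e16, 4.0e16])      # cm^-3, spectroscopy
s_plif = np.array([0.66e18, 1.12e18, 1.80e18, 2.15e18])  # PLIF counts

C = float(np.dot(s_plif, n_abs) / np.dot(s_plif, s_plif))
rel_err = np.abs(C * s_plif - n_abs) / n_abs
print(f"C = {C:.3e}, mean relative error = {rel_err.mean():.1%}")
```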

19 pages, 9564 KB  
Article
High-Fidelity Colorimetry Using Cross-Polarized Hyperspectral Imaging and Machine Learning Calibration
by Zhihao He, Li Luo, Xiangyang Yu, Yuchen Guo and Weibin Hong
Appl. Sci. 2026, 16(1), 314; https://doi.org/10.3390/app16010314 - 28 Dec 2025
Abstract
Accurate colorimetric quantification presents a significant challenge, as traditional imaging technologies fail to resolve metamerism and even hyperspectral imaging (HSI) is compromised by nonlinearities and specular reflections. This study introduces a high-fidelity colorimetric system using cross-polarized HSI to suppress specular reflections, integrated with a Support Vector Regression (SVR) model to correct the system’s nonlinear response. The system’s performance was rigorously validated, demonstrating exceptional stability and repeatability (average ΔE00 < 0.1). The SVR calibration significantly enhanced accuracy, reducing the mean color error from ΔE00 = 4.36 to 0.43. Furthermore, when coupled with a Random Forest classifier, the system achieved 99.0% accuracy in discriminating visually indistinguishable (metameric) samples. In application-specific validation, it successfully quantified cosmetic color shifts and achieved high-precision skin-tone matching with a color error as low as ΔE00 = 0.82. This study demonstrates that the proposed system, by synergistically combining cross-polarization and machine learning, constitutes a robust tool for high-precision colorimetry, addressing long-standing challenges and showing significant potential in fields like cosmetic science. Full article
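The SVR correction stage maps measured color values onto reference values. A minimal scikit-learn sketch with synthetic calibration patches; the kernel, hyperparameters, and the Euclidean CIELAB residual (the paper reports CIEDE2000, which is a more involved formula) are all assumptions.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic calibration patches: CIELAB values measured by the HSI system vs.
# reference values, with a mild nonlinearity for the SVR to learn.
rng = np.random.default_rng(0)
lab_measured = rng.uniform([20, -40, -40], [90, 40, 40], (120, 3))
lab_reference = lab_measured + 2.0 * np.tanh(lab_measured / 30.0) \
    + rng.normal(0.0, 0.1, lab_measured.shape)

model = MultiOutputRegressor(
    make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05)))
model.fit(lab_measured, lab_reference)

resid = np.linalg.norm(model.predict(lab_measured) - lab_reference, axis=1)
print(f"mean residual (Euclidean CIELAB distance): {resid.mean():.2f}")
```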

27 pages, 13958 KB  
Article
Digitizing Legacy Gravimetric Data Through GIS and Field Surveys: Toward an Updated Gravity Database for Kazakhstan
by Elmira Orynbassarova, Katima Zhanakulova, Hemayatullah Ahmadi, Khaini-Kamal Kassymkanova, Daulet Kairatov and Kanat Bulegenov
Geosciences 2026, 16(1), 16; https://doi.org/10.3390/geosciences16010016 - 24 Dec 2025
Abstract
This study presents the digitization and integration of Kazakhstan’s legacy gravimetric maps at a scale of 1:200,000 into a modern geospatial database using ArcGIS. The primary objective was to convert analog gravity data into a structured, queryable, and spatially analyzable digital format to support contemporary geoscientific applications, including geoid modeling and regional geophysical analysis. The project addresses critical gaps in national gravity coverage, particularly in underrepresented regions such as the Caspian Sea basin and the northeastern frontier, thereby enhancing the accessibility and utility of gravity data for multidisciplinary research. The methodology involved a systematic workflow: assessment and selection of gravimetric maps, raster image enhancement, georeferencing, and digitization of observation points and anomaly values. Elevation data and terrain corrections were incorporated where available, and metadata fields were populated with information on the methods and accuracy of elevation determination. Gravity anomalies were recalculated, including Bouguer anomalies (with densities of 2.67 g/cm3 and 2.30 g/cm3), normal gravity, and free-air anomalies. A unified ArcGIS geodatabase was developed, containing spatial and attribute data for all digitized surveys. The final deliverables include a 1:1,000,000-scale gravimetric map of free-air gravity anomalies for the entire territory of Kazakhstan, a comprehensive technical report, and supporting cartographic products. The project adhered to national and international geophysical mapping standards and utilized validated interpolation and error estimation techniques to ensure data quality. The validation process by the modern gravimetric surveys also confirmed the validity and reliability of the digitized historical data. This digitization effort significantly modernizes Kazakhstan’s gravimetric infrastructure, providing a robust foundation for geoid modeling, tectonic studies, and resource exploration. Full article
(This article belongs to the Section Geophysics)
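The recalculated anomalies follow standard formulas: free-air anomaly FAA = g_obs − γ(φ) + 0.3086·h and simple Bouguer anomaly BA = FAA − 0.04193·ρ·h (values in mGal, h in m, ρ in g/cm³). A sketch that omits the terrain correction the paper applies where available; the observed-gravity value is illustrative.

```python
import numpy as np

MGAL = 1e-5  # 1 mGal = 1e-5 m/s^2

def normal_gravity(lat_deg):
    """WGS84 Somigliana normal gravity on the ellipsoid, in mGal."""
    s2 = np.sin(np.radians(lat_deg)) ** 2
    g = 9.7803253359 * (1 + 0.00193185265241 * s2) \
        / np.sqrt(1 - 0.00669437999013 * s2)
    return g / MGAL

def anomalies(g_obs_mgal, lat_deg, h_m, rho=2.67):
    """Free-air and simple Bouguer anomalies (terrain correction omitted)."""
    faa = g_obs_mgal - normal_gravity(lat_deg) + 0.3086 * h_m  # free-air corr.
    ba = faa - 0.04193 * rho * h_m                             # Bouguer slab
    return faa, ba

# Comparing the two slab densities used in the paper at an illustrative station
faa, ba_267 = anomalies(981000.0, 48.0, 300.0, rho=2.67)
_, ba_230 = anomalies(981000.0, 48.0, 300.0, rho=2.30)
print(f"FAA = {faa:.1f} mGal, BA(2.67) = {ba_267:.1f}, BA(2.30) = {ba_230:.1f}")
```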

16 pages, 1956 KB  
Article
Post Hoc Error Correction for Missing Classes in Deep Neural Networks
by Andrey A. Lebedev, Victor B. Kazantsev and Sergey V. Stasenko
Technologies 2026, 14(1), 8; https://doi.org/10.3390/technologies14010008 - 22 Dec 2025
Abstract
This paper presents a novel post hoc error correction method that enables deep neural networks to recognize classes that were completely excluded during training. Unlike traditional approaches requiring full model retraining, our method uses hidden layer representations from any pre-trained classifier to detect and correct errors on missing categories. We demonstrate the approach on facial emotion recognition using the RAF-DB dataset, systematically excluding each of the seven emotion classes from training. The results show correction gains of up to 0.811 for excluded classes while maintaining 99% retention on known classes in the best setup. The method provides a computationally efficient alternative to retraining when new categories emerge after deployment. Full article
(This article belongs to the Special Issue Image Analysis and Processing)
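The abstract does not specify the corrector itself, so the sketch below shows only the general pattern: take hidden-layer features from a pre-trained classifier and reassign inputs whose features sit far from every known-class centroid. The layer split, distance rule, and threshold are all assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def penultimate_features(model: nn.Sequential, x: torch.Tensor) -> np.ndarray:
    """Hidden-layer representation: run every module except the final linear
    head. Splitting off exactly one layer is an assumption about the model."""
    with torch.no_grad():
        feats = nn.Sequential(*list(model.children())[:-1])(x)
    return feats.flatten(1).numpy()

def correct_predictions(feats, preds, centroids, threshold, missing_label):
    """Reassign inputs whose features are far from every known-class centroid
    (feats: (N, D); centroids: (K, D); the rule here is illustrative)."""
    d = np.linalg.norm(feats[:, None, :] - centroids[None, :, :], axis=2)
    out = preds.copy()
    out[d.min(axis=1) > threshold] = missing_label
    return out
```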
