Search Results (542)

Search Parameters:
Keywords = subpixel

17 pages, 7225 KiB  
Article
Placido Sub-Pixel Edge Detection Algorithm Based on Enhanced Mexican Hat Wavelet Transform and Improved Zernike Moments
by Yujie Wang, Jinyu Liang, Yating Xiao, Xinfeng Liu, Jiale Li, Guangyu Cui and Quan Zhang
J. Imaging 2025, 11(8), 267; https://doi.org/10.3390/jimaging11080267 - 11 Aug 2025
Abstract
To meet the high-precision localization requirements of corneal Placido ring edges in corneal topographic reconstruction, this paper proposes a sub-pixel edge detection algorithm based on a multi-scale, multi-position enhanced Mexican Hat Wavelet Transform and improved Zernike moments. First, the image undergoes preliminary processing using the multi-scale, multi-position enhanced Mexican Hat Wavelet Transform. Next, the extracted preliminary edge information is relocated using the Zernike moments of a 9 × 9 template. Finally, two improved adaptive edge threshold algorithms determine the actual sub-pixel edge points, realizing sub-pixel edge detection for corneal Placido ring images. Comparing edge extraction results on real human eye images against other existing algorithms, the average sub-pixel edge error of the other algorithms is 0.286 pixels, whereas the proposed algorithm achieves an average error of only 0.094 pixels. The proposed algorithm also demonstrates strong robustness to noise.
(This article belongs to the Section Medical Imaging)
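A common building block behind sub-pixel edge detectors of this kind is refining an integer-pixel edge candidate by fitting a parabola to neighboring gradient magnitudes. This is a minimal generic sketch, not the paper's Zernike-moment relocation, and the sample values are invented:

```python
def subpixel_peak(g_left, g_center, g_right):
    """Fit a parabola through three gradient-magnitude samples and
    return the sub-pixel offset of the vertex relative to the center."""
    denom = g_left - 2.0 * g_center + g_right
    if denom == 0:          # flat neighborhood: no refinement possible
        return 0.0
    return 0.5 * (g_left - g_right) / denom

# Gradient samples around an integer-pixel edge candidate:
offset = subpixel_peak(2.0, 5.0, 4.0)   # vertex lies right of center
```

When the center sample is the local maximum, the offset lands in (-0.5, 0.5), so the refined edge position is the integer coordinate plus the returned offset.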

23 pages, 8942 KiB  
Article
Optical and SAR Image Registration in Equatorial Cloudy Regions Guided by Automatically Point-Prompted Cloud Masks
by Yifan Liao, Shuo Li, Mingyang Gao, Shizhong Li, Wei Qin, Qiang Xiong, Cong Lin, Qi Chen and Pengjie Tao
Remote Sens. 2025, 17(15), 2630; https://doi.org/10.3390/rs17152630 - 29 Jul 2025
Abstract
The equator’s unique combination of high humidity and temperature renders optical satellite imagery highly susceptible to persistent cloud cover. In contrast, synthetic aperture radar (SAR) offers a robust alternative due to its ability to penetrate clouds with microwave imaging. This study addresses the challenges of cloud-induced data gaps and cross-sensor geometric biases by proposing an optical and SAR image-matching framework designed for cloud-prone equatorial regions. We use a prompt-driven visual segmentation model with automatic prompt-point generation to produce cloud masks that guide cross-modal feature matching and joint adjustment of optical and SAR data. This process yields a comprehensive digital orthophoto map (DOM) with high geometric consistency, retaining the fine spatial detail of optical data and the all-weather reliability of SAR. We validate our approach across four equatorial regions using five satellite platforms with varying spatial resolutions and revisit intervals. Even in areas with more than 50 percent cloud cover, our method maintains sub-pixel registration accuracy at manually measured check points and delivers comprehensive DOM products, establishing a reliable foundation for downstream environmental monitoring and ecosystem analysis.
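One way a cloud mask can guide cross-modal matching, sketched here under the assumption that a binary cloud-free mask is available for both patches, is to score candidate correspondences with a correlation restricted to unmasked pixels. This is a simplified stand-in for the paper's full matching pipeline:

```python
import numpy as np

def masked_zncc(a, b, valid):
    """Zero-normalized cross-correlation restricted to pixels where
    `valid` is True (e.g. a cloud-free mask covering both patches)."""
    a = a[valid].astype(float)
    b = b[valid].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
mask = np.ones((32, 32), bool)
mask[:10] = False                      # pretend the top rows are cloudy
score_same = masked_zncc(patch, patch, mask)
score_diff = masked_zncc(patch, rng.random((32, 32)), mask)
```

Matching scores are then computed only from reliable ground pixels, so cloudy regions cannot produce spurious correspondences.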

24 pages, 8483 KiB  
Article
A Weakly Supervised Network for Coarse-to-Fine Change Detection in Hyperspectral Images
by Yadong Zhao and Zhao Chen
Remote Sens. 2025, 17(15), 2624; https://doi.org/10.3390/rs17152624 - 28 Jul 2025
Abstract
Hyperspectral image change detection (HSI-CD) provides substantial value in environmental monitoring, urban planning, and other fields. In recent years, deep-learning-based HSI-CD methods have made remarkable progress due to their powerful nonlinear feature learning capabilities, yet they face several challenges: the mixed-pixel phenomenon affecting pixel-level detection accuracy; heterogeneous spatial scales of change targets, where coarse-grained features fail to preserve fine-grained details; and dependence on high-quality labels. To address these challenges, this paper introduces WSCDNet, a weakly supervised HSI-CD network employing coarse-to-fine feature learning, with key innovations including: (1) a dual-branch detection framework that integrates binary and multiclass change detection at the sub-pixel level and enhances collaborative optimization through a cross-feature coupling module; (2) a multi-granularity aggregation and difference-feature enhancement module for detecting easily confused regions, which effectively improves detection accuracy; and (3) a weakly supervised learning strategy that reduces sensitivity to noisy pseudo-labels through decision-level consistency measurement and sample-filtering mechanisms. Experimental results demonstrate that WSCDNet effectively enhances the accuracy and robustness of HSI-CD tasks, exhibiting superior performance under complex scenarios and weakly supervised conditions.
(This article belongs to the Section Remote Sensing Image Processing)
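The decision-level consistency idea can be illustrated with a small sketch: pseudo-labels are kept only where a binary branch and a multiclass branch agree on whether a pixel changed and the prediction confidence is high. Names and thresholds here are hypothetical, not WSCDNet's actual interface:

```python
import numpy as np

def filter_pseudo_labels(binary_pred, multi_pred, conf, conf_thresh=0.9):
    """Keep only pixels where the binary and multiclass branches agree
    (multiclass class 0 = 'no change') and confidence is high."""
    multi_changed = multi_pred > 0
    consistent = (binary_pred.astype(bool) == multi_changed)
    return consistent & (conf >= conf_thresh)

binary_pred = np.array([1, 1, 0, 0, 1])
multi_pred  = np.array([2, 0, 0, 1, 3])   # 0 means 'no change'
conf        = np.array([0.95, 0.99, 0.92, 0.5, 0.97])
keep = filter_pseudo_labels(binary_pred, multi_pred, conf)
```

Only the surviving pixels would contribute to the training loss, which is how decision-level filtering suppresses noisy pseudo-labels.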

15 pages, 4409 KiB  
Article
Performance of Dual-Layer Flat-Panel Detectors
by Dong Sik Kim and Dayeon Lee
Diagnostics 2025, 15(15), 1889; https://doi.org/10.3390/diagnostics15151889 - 28 Jul 2025
Abstract
Background/Objectives: In digital radiography imaging, dual-layer flat-panel detectors (DFDs), in which two flat-panel detector layers are stacked with a minimal distance between the layers and appropriate alignment, are commonly used for material decomposition in dual-energy applications with a single x-ray exposure. DFDs also enable more efficient use of incident photons, yielding x-ray images with improved noise power spectrum (NPS) and detective quantum efficiency (DQE) performance in single-energy applications. Purpose: Although the development of DFD systems for material decomposition applications is actively underway, there is a lack of research on whether single-energy applications of DFDs can achieve better performance than the single-layer case. In this paper, we experimentally observe and discuss DFD performance in terms of the modulation transfer function (MTF), NPS, and DQE. Methods: Using DFD prototypes, we experimentally measure the MTF, NPS, and DQE of the convex combination of the images acquired from the upper and lower detector layers. To optimize DFD performance, a two-step image registration is performed: subpixel registration based on the maximum amplitude response of a transform derived from the Fourier shift theorem, followed by an affine transformation using cubic interpolation. The DFD performance is analyzed through extensive experiments for various scintillator thicknesses, x-ray beam conditions, and incident doses. Results: Under RQA 9 beam conditions at a 2.7 μGy dose, the DFD with upper and lower scintillator thicknesses of 0.5 mm achieved a zero-frequency DQE of 75%, compared to 56% for a single-layer detector. This implies that the DFD can provide the same signal-to-noise ratio as a single-layer detector using 75% of its incident dose.
Conclusions: In single-energy radiography imaging, the DFD can provide better NPS and DQE performance than a single-layer detector, especially at relatively high x-ray energies, which enables low-dose imaging.
(This article belongs to the Section Medical Imaging and Theranostics)
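The benefit of convexly combining the two layer images can be seen from a small noise-variance calculation. Assuming equal signal gain and independent noise in the two layers (a simplification of the measured NPS), the variance-minimizing weight on the upper layer is σ₂²/(σ₁²+σ₂²):

```python
def optimal_weight(var1, var2):
    """Weight on the upper-layer image that minimizes the noise variance
    of the convex combination w*I1 + (1-w)*I2, assuming equal signal
    gain in both layers and statistically independent noise."""
    return var2 / (var1 + var2)

def combined_variance(w, var1, var2):
    """Noise variance of w*I1 + (1-w)*I2 for independent layer noise."""
    return w * w * var1 + (1 - w) * (1 - w) * var2

w = optimal_weight(1.0, 3.0)          # noisier lower layer gets less weight
v_opt = combined_variance(w, 1.0, 3.0)
```

With layer variances 1.0 and 3.0 the optimal combination has variance 0.75, below either layer alone, which is the mechanism behind the improved zero-frequency DQE.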

21 pages, 4388 KiB  
Article
An Omni-Dimensional Dynamic Convolutional Network for Single-Image Super-Resolution Tasks
by Xi Chen, Ziang Wu, Weiping Zhang, Tingting Bi and Chunwei Tian
Mathematics 2025, 13(15), 2388; https://doi.org/10.3390/math13152388 - 25 Jul 2025
Abstract
The goal of single-image super-resolution (SISR) is to generate high-definition images from low-quality inputs, with practical uses spanning healthcare diagnostics, aerial imaging, and surveillance systems. Although CNNs have considerably improved image reconstruction quality, existing methods still face limitations, including inadequate restoration of high-frequency details, high computational complexity, and insufficient adaptability to complex scenes. To address these challenges, we propose an Omni-dimensional Dynamic Convolutional Network (ODConvNet) tailored for SISR tasks. Specifically, ODConvNet comprises four key components: a Feature Extraction Block (FEB) that captures low-level spatial features; an Omni-dimensional Dynamic Convolution Block (DCB), which utilizes a multidimensional attention mechanism to dynamically reweight convolution kernels across spatial, channel, and kernel dimensions, enhancing feature expressiveness and context modeling; a Deep Feature Extraction Block (DFEB) that stacks multiple convolutional layers with residual connections to progressively extract and fuse high-level features; and a Reconstruction Block (RB) that employs subpixel convolution to upscale features and refine the final high-resolution (HR) output. This mechanism significantly enhances feature extraction and effectively captures rich contextual information. Additionally, we employ an improved residual network structure combined with a refined Charbonnier loss function to alleviate gradient vanishing and exploding and to enhance the robustness of model training. Extensive experiments on widely used benchmark datasets, including DIV2K, Set5, Set14, B100, and Urban100, demonstrate that, compared with existing deep-learning-based SR methods, ODConvNet improves Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and the visual quality of SR images.
Ablation studies further validate the effectiveness and contribution of each component in our network. The proposed ODConvNet offers an effective, flexible, and efficient solution for the SISR task and provides promising directions for future research.
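The subpixel-convolution step in the Reconstruction Block corresponds to the standard pixel-shuffle rearrangement: r² feature channels are interleaved into an r-times-larger spatial grid. A minimal NumPy version, using the common (C·r², H, W) layout rather than ODConvNet's actual code:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r), the
    subpixel-convolution upscaling step used in SR reconstruction."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)          # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

feat = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)
up = pixel_shuffle(feat, 2)                 # (1, 4, 4) upscaled map
```

Each output 2×2 cell draws one value from each of the four input channels, so upscaling is learned in channel space and realized by a cheap reshuffle.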

17 pages, 2032 KiB  
Article
Measurement Techniques for Highly Dynamic and Weak Space Targets Using Event Cameras
by Haonan Liu, Ting Sun, Ye Tian, Siyao Wu, Fei Xing, Haijun Wang, Xi Wang, Zongyu Zhang, Kang Yang and Guoteng Ren
Sensors 2025, 25(14), 4366; https://doi.org/10.3390/s25144366 - 12 Jul 2025
Abstract
Star sensors, as the most precise attitude measurement devices currently available, play a crucial role in spacecraft attitude estimation. However, traditional frame-based cameras tend to suffer from target blur and loss under high-dynamic maneuvers, which severely limits the applicability of conventional star sensors in complex space environments. In contrast, event cameras, drawing inspiration from biological vision, can capture brightness changes at ultrahigh speeds and output a series of asynchronous events, demonstrating enormous potential for space detection applications. Building on this, this paper proposes an event data extraction method for weak, high-dynamic space targets to enhance the performance of event cameras in detecting space targets under high-dynamic maneuvers. In the target denoising phase, we fully exploit the characteristics of space targets’ motion trajectories and optimize a classical spatiotemporal correlation filter, significantly improving the signal-to-noise ratio for weak targets. During the target extraction stage, we introduce the DBSCAN clustering algorithm to achieve subpixel-level extraction of target centroids. Moreover, to address target trajectory distortion and data discontinuity in certain ultrahigh-dynamic scenarios, we construct a camera motion model based on real-time motion data from an inertial measurement unit (IMU) and use it to compensate for and correct the target’s trajectory. Finally, a ground-based simulation system is established to validate the applicability and performance of the proposed method in real-world scenarios.
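The denoise-then-extract idea can be sketched in a few lines: events lacking spatiotemporal neighbors are discarded, and the centroid of the surviving cluster gives a sub-pixel target position. The radius, time window, and support threshold below are invented placeholders; the paper's optimized filter and DBSCAN step are considerably more elaborate:

```python
import numpy as np

def denoise_and_centroid(events, r=1.5, dt=1000, min_support=2):
    """events: (N, 3) array of (x, y, t). Keep events supported by at
    least `min_support` neighbors within radius r and time window dt
    (a simple spatiotemporal-correlation filter), then return the
    sub-pixel centroid of the surviving events."""
    xy, t = events[:, :2].astype(float), events[:, 2].astype(float)
    keep = np.zeros(len(events), bool)
    for i in range(len(events)):
        near = (np.hypot(*(xy - xy[i]).T) <= r) & (np.abs(t - t[i]) <= dt)
        keep[i] = near.sum() - 1 >= min_support   # exclude the event itself
    return xy[keep].mean(axis=0)

target = np.array([[10, 10, 0], [11, 10, 5], [10, 11, 8], [11, 11, 3]])
noise = np.array([[50, 3, 2], [2, 40, 7]])        # isolated noise events
centroid = denoise_and_centroid(np.vstack([target, noise]))
```

Averaging many single-pixel events is what pushes the centroid estimate below one pixel of precision.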

25 pages, 8564 KiB  
Article
A Vision-Based Single-Sensor Approach for Identification and Localization of Unloading Hoppers
by Wuzhen Wang, Tianyu Ji, Qi Xu, Chunyi Su and Guangming Zhang
Sensors 2025, 25(14), 4330; https://doi.org/10.3390/s25144330 - 10 Jul 2025
Abstract
To promote the automation and intelligence of rail freight, accurate identification and localization of bulk cargo unloading hoppers has become a key technical challenge. Driven by the deep integration of Industry 4.0 and artificial intelligence, the bulk cargo unloading process is undergoing a significant transformation from manual operation to intelligent control. In response, this paper proposes a vision-based 3D localization system for unloading hoppers, which adopts a single-visual-sensor architecture and integrates three core modules: object detection, corner extraction, and 3D localization. First, a lightweight hybrid attention mechanism is incorporated into the YOLOv5 network to enable edge deployment and enhance the detection accuracy of unloading hoppers in complex industrial scenarios. Second, an image processing approach combining a depth consistency constraint (DCC) with geometric structure constraints is designed to achieve sub-pixel-level extraction of key corner points. Finally, real-time 3D localization is realized by integrating corner-based initialization with an RGB-D SLAM tracking mechanism. Experimental results demonstrate that the proposed system achieves an average localization accuracy of 97.07% under challenging working conditions. The system effectively meets the requirements of automation, intelligence, and high precision in railway bulk cargo unloading, and exhibits strong engineering practicality and application potential.
(This article belongs to the Section Industrial Sensors)
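The final 3D localization step of such a system typically reduces to back-projecting a detected (possibly sub-pixel) corner through the pinhole camera model using the aligned depth. A minimal sketch with hypothetical RGB-D intrinsics, not the paper's calibration values:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project an image point (u, v) with depth (same unit as the
    output) into camera-frame 3D coordinates via the pinhole model."""
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.array([x, y, depth])

# Hypothetical intrinsics (fx, fy in pixels) and a depth of 2.0 m:
p = backproject(420.5, 260.25, 2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```

Sub-pixel corner coordinates matter here because the lateral error scales linearly with both the pixel error and the depth.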

15 pages, 1770 KiB  
Article
PSHNet: Hybrid Supervision and Feature Enhancement for Accurate Infrared Small-Target Detection
by Weicong Chen, Chenghong Zhang and Yuan Liu
Appl. Sci. 2025, 15(14), 7629; https://doi.org/10.3390/app15147629 - 8 Jul 2025
Abstract
Detecting small targets in infrared imagery remains highly challenging due to sub-pixel target sizes, low signal-to-noise ratios, and complex background clutter. This paper proposes PSHNet, a hybrid deep-learning framework that combines dense spatial heatmap supervision with geometry-aware regression for accurate infrared small-target detection. The network generates position-scale heatmaps to guide coarse localization, which is further refined through sub-pixel offset and size regression. A Complete IoU (CIoU) loss is introduced as a geometric regularization term to improve alignment between predicted and ground-truth bounding boxes. To better preserve the fine spatial details essential for identifying small thermal signatures, an Enhanced Low-level Feature Module (ELFM) is incorporated using multi-scale dilated convolutions and channel attention. Experiments on the NUDT-SIRST and IRSTD-1k datasets demonstrate that PSHNet outperforms existing methods in IoU, detection probability, and false alarm rate, achieving consistent IoU improvements and robust performance under low-SNR conditions.
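The CIoU term used for geometric regularization combines three parts: overlap (IoU), normalized center distance, and an aspect-ratio consistency penalty. A self-contained sketch of the standard formulation (a training loss would be 1 − CIoU, typically batched over tensors):

```python
import math

def ciou(box_a, box_b):
    """Complete IoU between two (x1, y1, x2, y2) boxes: IoU minus
    center-distance and aspect-ratio penalty terms."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # squared center distance over squared enclosing-box diagonal
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw * cw + ch * ch
    v = (4 / math.pi ** 2) * (math.atan((ax2 - ax1) / (ay2 - ay1))
                              - math.atan((bx2 - bx1) / (by2 - by1))) ** 2
    alpha = v / (1 - iou + v) if v > 0 else 0.0
    return iou - rho2 / c2 - alpha * v

same = ciou((0, 0, 4, 4), (0, 0, 4, 4))       # identical boxes -> 1.0
shifted = ciou((0, 0, 4, 4), (2, 0, 6, 4))
```

Unlike plain IoU, the distance term keeps gradients informative even when boxes barely overlap, which matters for tiny infrared targets.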

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes.
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)

19 pages, 2465 KiB  
Article
The Design and Implementation of a Dynamic Measurement System for a Large Gear Rotation Angle Based on an Extended Visual Field
by Po Du, Zhenyun Duan, Jing Zhang, Wenhui Zhao, Engang Lai and Guozhen Jiang
Sensors 2025, 25(12), 3576; https://doi.org/10.3390/s25123576 - 6 Jun 2025
Cited by 1
Abstract
High-precision measurement of large gear rotation angles is a critical technology in gear meshing-based measurement systems. To address the challenge of high-precision rotation angle measurement for large gears, this paper proposes a binocular vision method. The methodology consists of the following steps: First, sub-pixel edges of the calibration circles on a 2D dot-matrix calibration board are extracted using edge detection algorithms to obtain the pixel coordinates of the circle centers. Second, high-precision calibration of the measurement reference plate is achieved through a 2D four-parameter coordinate transformation algorithm. Third, binocular cameras capture images of the measurement reference plates attached to the large gear before and after rotation. The coordinates of the camera’s field-of-view center in the measurement reference plate coordinate system are calculated via image processing and rotation angle algorithms, thereby determining the rotation angle of the gear. Finally, a binocular vision rotation angle measurement system was developed, and experiments were conducted on a 600 mm diameter gear to validate the feasibility of the proposed method. The results demonstrate a measurement accuracy of 7 arcseconds and a repeatability of 3 arcseconds within the 0–30° rotation range, indicating high accuracy and stability. The proposed method and system effectively meet the requirements for high-precision rotation angle measurement of large gears.
(This article belongs to the Section Physical Sensors)
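The core geometric computation, recovering a rotation angle from reference-plate features imaged before and after rotation, can be illustrated by the classical least-squares (Procrustes-style) angle between matched 2D point sets. This is a generic sketch, not the paper's full binocular pipeline:

```python
import math
import numpy as np

def rotation_angle(pts_before, pts_after):
    """Least-squares rotation angle (radians) between two matched 2D
    point sets, after removing their centroids so translation drops out."""
    a = pts_before - pts_before.mean(axis=0)
    b = pts_after - pts_after.mean(axis=0)
    num = np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0])   # cross terms
    den = np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1])   # dot terms
    return math.atan2(num, den)

theta = math.radians(12.0)
R = np.array([[math.cos(theta), -math.sin(theta)],
              [math.sin(theta),  math.cos(theta)]])
pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
est = rotation_angle(pts, pts @ R.T + np.array([3.0, -2.0]))
```

Because the angle is averaged over all matched centers, sub-pixel center extraction translates directly into arcsecond-level angular precision.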

22 pages, 18370 KiB  
Article
Digital Domain TDI-CMOS Imaging Based on Minimum Search Domain Alignment
by Han Liu, Shuping Tao, Qinping Feng and Zongxuan Li
Sensors 2025, 25(11), 3490; https://doi.org/10.3390/s25113490 - 31 May 2025
Abstract
In this study, we propose a digital domain TDI-CMOS dynamic imaging method based on minimum search domain alignment, which consists of five steps: image-motion vector computation, image jitter estimation, feature pair matching, global displacement estimation, and TDI accumulation. To solve the challenge of matching feature point pairs in dark and low-contrast images, our method first optimizes the size and position of the search box using an image motion compensation mathematical model and a satellite platform jitter model. The feature point pairs that best match the extracted feature points of the reference frame are then identified within the search box of the target frame. Next, a kernel density estimation algorithm calculates the displacement probability density of each feature point pair to fit the actual displacement between two frames. Finally, we align and superimpose all the frames in the digital domain to generate a delayed-integration image. Experimental results show that this method greatly improves the alignment speed and accuracy of dark and low-contrast images during dynamic imaging. It effectively mitigates the effects of image motion and jitter from the space camera: the fitted global image-motion error is kept below 0.01 pixels, and after compensation the MTF of the image-motion and jitter link improves to 0.68, thus improving TDI imaging quality.
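The kernel-density step can be sketched quickly: each feature pair votes with a Gaussian kernel, and the density peak, rather than the mean, gives the global displacement, so mismatched pairs have little influence. The bandwidth and grid step below are arbitrary illustration values:

```python
import numpy as np

def kde_mode(displacements, bandwidth=0.1, grid_step=0.001):
    """Fit a Gaussian kernel density to per-feature-pair displacements
    and return the density peak as the global displacement estimate."""
    d = np.asarray(displacements, float)
    grid = np.arange(d.min() - 1, d.max() + 1, grid_step)
    dens = np.exp(-0.5 * ((grid[:, None] - d[None, :]) / bandwidth) ** 2).sum(axis=1)
    return float(grid[np.argmax(dens)])

# Most pairs agree on a ~0.40-pixel shift; two are mismatches:
shifts = [0.41, 0.39, 0.40, 0.42, 0.38, 0.40, 3.1, -2.7]
est = kde_mode(shifts)
```

A plain mean of the same list would be pulled far off by the two outliers; the density mode stays on the consensus shift.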

21 pages, 4299 KiB  
Article
Classification of Microbial Activity and Inhibition Zones Using Neural Network Analysis of Laser Speckle Images
by Ilya Balmages, Dmitrijs Bļizņuks, Inese Polaka, Alexey Lihachev and Ilze Lihacova
Sensors 2025, 25(11), 3462; https://doi.org/10.3390/s25113462 - 30 May 2025
Cited by 1
Abstract
This study addresses the challenge of rapidly and accurately distinguishing zones of microbial activity from antibiotic inhibition zones in Petri dishes. We propose a laser speckle imaging technique enhanced with subpixel correlation analysis to monitor dynamic changes in the inhibition zone surrounding an antibiotic disc. This method provides faster results than the standard disk diffusion assay recommended by EUCAST. To enable automated analysis, we used machine learning algorithms to classify areas of bacterial or fungal activity versus inhibited growth. Classification is performed over short time windows (e.g., 1 h), supporting near-real-time assessment. To further improve accuracy, we introduce a correction method based on the known spatial dynamics of inhibition zone formation. The novelty of the study lies in combining a laser speckle subpixel correlation algorithm with machine learning classification and dedicated pre- and post-processing. This approach enables early automated assessment of antimicrobial effects, with potential applications in rapid drug susceptibility testing and microbiological research.
(This article belongs to the Section Intelligent Sensors)

12 pages, 1728 KiB  
Article
Subtraction Method for Subpixel Stitching: Synthetic Aperture Holographic Imaging
by Zhangyue Wei, John J. Healy and Min Wan
Photonics 2025, 12(6), 551; https://doi.org/10.3390/photonics12060551 - 29 May 2025
Abstract
Image stitching is a crucial technique across imaging and photography, allowing the creation of high-resolution images by combining multiple smaller images with overlapping regions. Here, we propose a simple and fast method, the Subtraction Method (SM), for pixel- and subpixel-level image stitching using holographic data. The feasibility of the SM is verified at the pixel and subpixel levels using two 2D images captured by a 4-f imaging system. The application of the SM to holographic data is also explored: the effective aperture of an in-line holographic imaging system is enlarged by stitching four in-line holograms, demonstrating enhanced resolution and image quality after reconstruction. The stitching accuracy and computational time of the SM are compared with those of the correlation method. With optimization, the SM achieves high accuracy and requires less computational time than contemporary image stitching algorithms.
(This article belongs to the Special Issue Advances in Holography and Its Applications)
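The subtraction idea at the heart of the SM can be sketched at the integer-pixel level: the aligning shift is the one that minimizes the mean absolute difference of the overlap. This toy version ignores the subpixel stage and holographic reconstruction entirely:

```python
import numpy as np

def subtraction_shift(a, b, max_shift=5):
    """Find the integer (dy, dx) aligning b to a by minimizing the mean
    absolute difference over the overlap -- a brute-force sketch of a
    subtraction-based stitching criterion."""
    best, best_err = (0, 0), np.inf
    h, w = a.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            ov_a = a[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            ov_b = b[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            err = np.abs(ov_a - ov_b).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(1)
img = rng.random((40, 40))
a, b = img[3:33, 2:32], img[0:30, 0:30]     # b is a shifted crop of img
shift = subtraction_shift(a, b)
```

Compared with cross-correlation, the subtraction criterion needs only differences and absolute values per candidate shift, which is where the speed advantage comes from.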

32 pages, 612 KiB  
Article
Improved Splitting-Integrating Methods for Image Geometric Transformations: Error Analysis and Applications
by Hung-Tsai Huang, Zi-Cai Li, Yimin Wei and Ching Yee Suen
Mathematics 2025, 13(11), 1773; https://doi.org/10.3390/math13111773 - 26 May 2025
Abstract
Geometric image transformations are fundamental to image processing, computer vision and graphics, with critical applications to pattern recognition and facial identification. The splitting-integrating method (SIM) is well suited to the inverse transformation T⁻¹ of digital images and patterns, but it encounters difficulties in the nonlinear solutions required for the forward transformation T. We propose improved techniques that entirely bypass nonlinear solutions for T, simplify numerical algorithms and reduce computational costs. Another significant advantage is greater flexibility for general and complicated transformations T. In this paper, we apply the improved techniques to the harmonic, Poisson and blending models, which transform the original shapes of images and patterns into arbitrary target shapes. These models are, essentially, Dirichlet boundary value problems of elliptic equations. We choose the simple finite difference method (FDM) to seek their approximate transformations, and focus on analyzing the errors of image greyness. Under the improved techniques, we derive the greyness errors of images under T. We obtain the optimal convergence rates O(H²) + O(H/N²) for piecewise bilinear interpolations (μ = 1) and smooth images, where H (≪ 1) denotes the mesh resolution of an optical scanner, and N is the division number of a pixel split into N² sub-pixels. Beyond smooth images, we address the practical challenges posed by discontinuous images, deriving the error bounds O(H^β) + O(H^β/N²), β ∈ (0, 1), for μ = 1. For piecewise continuous images with interior and exterior greyness jumps, we obtain O(H) + O(H/N²). Compared with the error analysis in our previous study, where the image greyness is assumed to be smooth enough, this error analysis is significant for geometric image transformations.
Hence, the improved algorithms, supported by rigorous error analysis of image greyness, may find wide application in pattern recognition, facial identification and artificial intelligence (AI).
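The "split a pixel into N² sub-pixels and integrate" step can be made concrete with a toy midpoint-rule computation of pixel greyness; the convergence in N mirrors the O(·/N²) terms in the stated rates. The greyness surface here is an arbitrary smooth example:

```python
def pixel_greyness(f, x0, y0, h, n):
    """Approximate the mean greyness of the pixel [x0, x0+h) x [y0, y0+h)
    by splitting it into n*n sub-pixels and sampling f at their centers,
    mirroring the split-then-integrate step of the SIM."""
    step = h / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f(x0 + (i + 0.5) * step, y0 + (j + 0.5) * step)
    return total / (n * n)

f = lambda x, y: x * x + y          # a smooth greyness surface
coarse = pixel_greyness(f, 0.0, 0.0, 1.0, 2)
fine = pixel_greyness(f, 0.0, 0.0, 1.0, 64)
# exact mean over the unit pixel: 1/3 + 1/2 = 5/6
```

Refining the split from n = 2 to n = 64 shrinks the quadrature error roughly as 1/n², the behavior the O(H/N²) terms capture for smooth greyness.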

23 pages, 5897 KiB  
Article
A Vision-Based Procedure with Subpixel Resolution for Motion Estimation
by Samira Azizi, Kaveh Karami and Stefano Mariani
Sensors 2025, 25(10), 3101; https://doi.org/10.3390/s25103101 - 14 May 2025
Cited by 2
Abstract
Vision-based motion estimation for structural systems has attracted significant interest in recent years. As the design of robust algorithms to accurately estimate motion still represents a challenge, a multi-step framework is proposed to deal with both large and small motion amplitudes. The solution combines a stochastic search method for coarse-level measurements with a deterministic method for fine-level measurements. A population-based block matching approach, featuring adaptive search-limit selection for robust estimation and a subsampled block strategy, is implemented to reduce the computational burden of integer-pixel motion estimation. A Reduced-Error Gradient-based method is then adopted to achieve subpixel-resolution accuracy. This hybrid Smart Block Matching with Reduced-Error Gradient (SBM-REG) approach therefore provides a powerful solution for motion estimation. By employing Complexity Pursuit, a blind source separation method for output-only modal analysis, structural mode shapes and vibration frequencies are finally extracted from the video data. The method’s efficiency and accuracy are assessed against synthetic shifted patterns, a cantilever beam test, and a six-story laboratory structure.
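The fine-level, gradient-based stage rests on the classical relation for small shifts: moved(i) ≈ ref(i) − d·ref′(i), solved for d by least squares. A 1D sketch with an arbitrary test signal, not the authors' exact Reduced-Error Gradient estimator:

```python
import numpy as np

def gradient_subpixel_shift(ref, moved):
    """One-step gradient estimate of a small rightward shift d (in
    samples), from the linearization moved(i) ~ ref(i) - d * ref'(i)."""
    g = np.gradient(ref)
    return float(np.sum((ref - moved) * g) / np.sum(g * g))

x = np.linspace(0, 2 * np.pi, 200)
ref = np.sin(x)
moved = np.sin(x - 0.05 * (x[1] - x[0]))   # shifted right by 0.05 samples
est = gradient_subpixel_shift(ref, moved)
```

The linearization is only valid for sub-pixel displacements, which is why a coarse block-matching stage removes the integer part of the motion first.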
