Article

Cascaded Polynomial and MLP Regression for High-Precision Geometric Calibration of Ultraviolet Single-Photon Imaging System

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Photonics 2026, 13(4), 330; https://doi.org/10.3390/photonics13040330
Submission received: 5 February 2026 / Revised: 9 March 2026 / Accepted: 19 March 2026 / Published: 28 March 2026
(This article belongs to the Section Lasers, Light Sources and Sensors)

Abstract

To meet the requirements of quantitative elemental analysis in the ultraviolet (UV) spectrum, a UV single-photon imaging system was developed, integrating a digital micromirror device (DMD) and a single photon-counting imaging detector, enabling high sensitivity, high resolution, and a wide dynamic range. However, intrinsic geometric distortion poses a significant challenge to accurate spectral calibration. A hybrid correction framework is proposed, cascading polynomial coarse correction with multilayer perceptron (MLP) fine regression, improving calibration accuracy. The method utilizes a full-field dot-array mask projected by the DMD to acquire distortion-reference image pairs. The polynomial model rapidly captures the dominant high-order distortion, while a lightweight MLP performs non-parametric fine regression of residual displacements, achieving a mean error of 0.84 pixels. This approach reduces the root mean square (RMS) error to 1.01 pixels, outperforming traditional direct linear transformation (5.35 pixels) and pure polynomial models (1.33 pixels), while the nonlinearity index decreases from 0.35° to 0.05°. In addition, the method demonstrates stable performance across multi-scale checkerboard patterns ranging from 128 to 280 pixels, with RMS errors remaining around the 1-pixel level. These results validate the high-precision distortion suppression and robust cross-scale performance of the proposed framework. By leveraging DMD-generated patterns for self-calibration, this method eliminates the need for external targets, offering a scalable solution for high-end spectrometer calibration.

1. Introduction

Atomic fluorescence spectrometry (AFS) and inductively coupled plasma optical emission spectrometry (ICP-OES) are fundamental analytical techniques widely employed for elemental quantification in environmental monitoring, food safety, geological exploration, clinical biomonitoring, and related fields [1,2]. AFS enables femtogram-level detection of trace toxicological elements via high-brightness atomic fluorescence that is generated in the ultraviolet (UV) region [3]. In contrast, ICP-OES enables simultaneous multi-line detection by exploiting broad-spectrum atomic emission that spans 165–900 nm and covers the UV, visible, and near-infrared spectral ranges [4]. In contemporary instrumentation, UV imaging detection systems are predominantly based on charge-coupled device (CCD) and complementary metal–oxide–semiconductor (CMOS) technologies. However, substantial dark current noise inherently limits their capacity for weak UV signal acquisition [5,6]. Although intensified CCDs (ICCDs) can achieve high-performance detection of weak radiation spectra, they require deep cryogenic cooling to suppress circuit dark noise and to operate at high readout frame rates. The associated costs and complexity therefore restrict their widespread adoption within the aforementioned fields [7]. Conversely, the single photon-counting imaging detector (SPC imaging detector) can achieve high-sensitivity imaging in the UV band without the need for deep cooling, offering advantages such as low power consumption, compactness, and system simplicity [8]. The wedge-and-strip anode (WSA) SPC imaging detector, which was developed by a team at the Changchun Institute of Optics, Fine Mechanics and Physics (CIOMP), Chinese Academy of Sciences, has been applied in the EUV camera aboard the Chang’e-7 lunar probe as well as in the wide-angle auroral imager on the Feng-Yun-3D meteorological satellite [9,10,11].
Despite these merits, further enhancement of the dynamic range and spatial resolution remains a priority. To this end, the digital micromirror device (DMD) can be adapted to operate in the UV band through UV-enhancement processing, thereby providing a new technical pathway that enables advanced UV imaging detection [12]. As a reflective spatial light modulator, the DMD offers several intrinsic advantages, including a small micromirror unit size, a high switching speed, and the absence of macroscopic moving parts [13,14]. By coupling a UV-enhanced DMD with a space-qualified SPC imaging detector, a UV single-photon imaging system has been developed. This system enables highly sensitive SPC imaging detection of UV spectral signals with low dark count rates, small effective pixel sizes, and high spatial light modulation speeds, without imposing additional cryogenic cooling requirements. However, the lack of one-to-one spatial correspondence between DMD pixels and the detector’s sensitive elements, together with inherent nonlinear distortion introduced by the SPC imaging detector, results in pronounced geometric distortion in the acquired spectral images. In spectral regions with dense line distributions, even pixel-level drift can severely degrade quantitative accuracy. Currently, a systematic calibration and distortion correction model tailored to this specific UV imaging chain remains unavailable.
Geometric distortion in digital imaging systems significantly compromises image fidelity by introducing complex pixel displacements and structural deformations [15,16]. These distortions typically arise from multiple sources during image acquisition, including optical lenses, image detectors, and other optical path components [17]. While existing research predominantly addresses lens-induced distortion, conventional correction techniques generally require multi-view image sets to achieve high-precision geometric calibration. However, the logistical constraints of acquiring multi-view data across diverse application scenarios severely limit the practical utility of these approaches [18,19]. Furthermore, the performance of feature-based distortion correction methods, such as those relying on checkerboard corner detection, is inherently constrained by feature localization accuracy. Consequently, these approaches often fail to robustly handle scenes characterized by complex, non-standard distortions [20,21]. Recently, deep learning architectures have demonstrated substantial progress in distortion correction. For example, the RDTR framework leverages a unified geometric model combined with model-aware pre-training on images of arbitrary resolution to effectively correct radial distortion [22]. Similarly, methods such as DR-GAN adopt end-to-end generative frameworks integrated with self-supervised strategies, which directly learn structural mappings through low-to-high perceptual losses, thereby enabling single-stage, real-time radial distortion correction without explicit calibration parameters [23,24]. Nevertheless, the efficacy of these learning-based methods in complex distortion scenarios remains limited, as it depends strongly on the consistency between predefined distortion priors and the actual distortion characteristics. 
Constrained by these predefined priors, current data-driven approaches remain difficult to adapt to the unique, idiosyncratic distortions of practical UV imaging systems.
The intrinsic nonlinear distortion of the array-based single-photon counter is typically corrected using polynomial mapping models. This strategy avoids explicit modeling of the physical distortion mechanisms and directly fits a multivariate polynomial to represent the spatial transformation from the detector output coordinate system to an ideal reference coordinate system. Owing to its compact formulation, computational robustness, and ease of calibration, this strategy has become a widely adopted approach for geometric distortion correction to date [25,26].
At the system level, Feng Wei et al. investigated pixel alignment between a DMD and a CCD-based imaging system at the pixel scale [27,28]. In their approach, pixel-level alignment between DMD micromirrors and CCD pixels is first achieved via Moiré fringe feedback. Subsequently, the direct linear transformation (DLT) algorithm is employed to compute a homography matrix, thereby establishing a global coordinate mapping from the DMD mask to the CCD in a single step. This procedure enables pixel-wise light intensity modulation and geometric distortion correction. However, the DLT method is applicable only to scenarios involving a small field of view and pure rotation, and it is unable to correct the severe distortions and non-planar imaging conditions encountered in UV optical systems.
This paper focuses on a UV single-photon imaging system that integrates a DMD and a SPC imaging detector. The overall system architecture is first described. Distortion correction is then investigated, addressing both the intrinsic distortion of the SPC imaging detector and the mapping mismatch between the DMD and the SPC imaging detector. To this end, a cascaded method combining polynomial fitting with multilayer perceptron (MLP) regression is introduced. A high-precision spatial mapping model is constructed and experimentally validated, supporting pixel-level image reconstruction and distortion compensation. Consequently, the quantitative accuracy and stability of the UV imaging system are substantially improved, providing a reliable pixel-level basis for subsequent elemental quantification. This advance could further improve the performance of high-end UV spectrometers.

2. Systems and Methods

This section systematically describes the constituent modules of the UV single-photon imaging system and analyzes the underlying optical principles. To address the limitations of the current UV imaging detector, particularly the limited dynamic range, a UV single-photon imaging system is developed to meet the specified requirements. This system integrates the advantages of the SPC imaging detector (namely high sensitivity and low noise in the UV band) with the features of the DMD, including small pixel size and a high spatial light modulation rate. As illustrated in Figure 1, the system primarily comprises three components: a SPC imaging detector, a DMD, and a coupling optics module. The coupling optics module includes a convex mirror, a concave mirror, and an optical chamber. The DMD is positioned at the front end of the optical path of the UV single-photon imaging system, functioning as a high-speed binary optical shutter. When the DMD micromirrors are in the “on” state, incident light from the object passes through the coupling optical system and forms an image on the SPC imaging detector. Conversely, in the “off” state, the light path is blocked, preventing light from reaching the SPC imaging detector. This mechanism enables programmable two-dimensional gating, supports real-time modulation with user-defined coded masks, and facilitates UV imaging.
Figure 2 illustrates the hardware prototype of the UV single-photon imaging system used in the experiment. The DMD used is a DLP9500 (manufactured by Texas Instruments in Dallas, TX, USA), with a resolution of 1920 × 1080 pixels and a micromirror pitch of 10.8 µm. The host computer performs real-time programming of the DMD driver circuit through the dedicated control software, enabling microsecond-scale micromirror state refresh. A self-developed WSA SPC imaging detector is used, with an equivalent pixel size of approximately 60 µm. Powered by a high-voltage supply (HV Supply), the detector generates output charge pulses, which are first pre-amplified and pulse-shaped by the front-end electronics (FEE). The pulses are subsequently sampled by a high-speed analog-to-digital acquisition circuit (HS-ADAC) and transmitted to image control software for event-level reconstruction. Additionally, the DMD operates via time-division flipping of its micromirrors, which enables time-division image acquisition at the detector and thereby enhances resolution.
The incident spectral signal passes through the optical system to the SPC imaging detector, where photon events are detected by partitioning the charge across three electrodes. The detector’s inherent distortion is the primary source of system-level distortion. Although the DMD enables pixel-level spatial modulation of incoming photons, misalignment between the DMD and the detector can introduce additional geometric distortion. These distortions are secondary to the intrinsic distortion of the SPC imaging detector, which predominantly affects overall image quality. Accurate pixel-level modulation requires a precise correspondence between DMD micromirrors and detector pixels. This alignment is essential for controlling the photon flux incident on each pixel. The DMD projects the modulated signal onto the detector, where events are localized by three-electrode charge partitioning. When the electron cloud is excessively large or small, edge regions can exhibit S-shaped and sinusoidal modulation distortions. These distortions are corrected using a series of dot-array masks loaded sequentially onto the DMD. After propagation through the optical system, the corresponding dot-array images are recorded by the detector, enabling acquisition of end-to-end distortion samples for accurate calibration and correction.
To address the errors described above, a hybrid correction framework, termed “polynomial fitting combined with MLP regression,” is proposed and evaluated. This framework consists of four main steps, as shown in Figure 3: (i) acquisition of the distorted image, (ii) feature point matching, (iii) initial correction using polynomial fitting, and (iv) fine pixel-level correction with an MLP network.
First, a regularly spaced dot-array mask is generated over the DMD imaging plane, with the pattern centered on a reference pixel. The mask is loaded onto the DMD, and the micromirrors are flipped accordingly. Next, the UV light source is activated. The modulated beam propagates through the optical path and forms a distorted dot-array image on the SPC imaging detector, thereby completing the acquisition of the distorted image. Then, based on the gray-scale distribution and centroid deviation of the distorted image, an appropriate feature point matching method is selected. The feature point strategy is optimized according to the observed gray-scale distribution and centroid deviation pattern of the dots in the distorted image. Preliminary correction is then performed using polynomial fitting, followed by fine pixel-level correction with an MLP network.
During the correction process, the image is input as a two-dimensional matrix with grayscale values. The main task is to establish a high-precision mapping between distorted and reference coordinates, which is used to update the feature points. The remaining pixels are then interpolated under this mapping to generate the final corrected image. The following sections describe feature-point extraction and the construction of the polynomial and MLP models.

2.1. Feature Point Extraction

A dot-array mask is generated on the DMD imaging plane, centered on a reference pixel, with a 29-pixel step between adjacent dots. After reflection by the DMD and propagation through the coupling optical system, the light forms a distorted dot-array image on the SPC imaging detector. Before extracting feature points (dot centroids) from the distorted image, preprocessing is applied to the dot-array image. The distorted image is represented by Equation (1):
$$ I(x, y) \in \left[ 0, I_{\max} \right], \quad (x, y) \in \Omega \tag{1} $$
where Ω denotes the image domain, and I_max represents the global maximum grayscale value. To reduce background noise interference, threshold segmentation is applied using a threshold T, empirically set to 20% of the maximum grayscale value. Pixels with grayscale values greater than or equal to the threshold T are identified as valid feature points, while all others are considered background and excluded from subsequent calculations. The resulting binary image is represented by Equation (2):
$$ B(x, y) = \begin{cases} 1, & I(x, y) \ge T \\ 0, & I(x, y) < T \end{cases} \tag{2} $$
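As an illustrative sketch (not part of the original work), the thresholding of Equations (1) and (2) can be written in Python with NumPy; the 20% threshold fraction follows the text, while the function name and the toy image are our own:

```python
import numpy as np

def binarize(img, frac=0.20):
    """Threshold segmentation per Eqs. (1)-(2): T = frac * global maximum
    grayscale value; pixels >= T become foreground (1), the rest 0."""
    T = frac * img.max()
    return (img >= T).astype(np.uint8)

# Toy example: a 5x5 dark frame with one bright dot (T = 0.2 * 100 = 20)
img = np.zeros((5, 5))
img[2, 2] = 100.0
B = binarize(img)  # only the bright pixel survives the threshold
```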
Connected component analysis is performed on the binary image B(x, y), yielding N independent regions denoted C_i, as shown in Equation (3):
$$ C = \left\{ C_i \mid i = 1, \ldots, N \right\}, \quad C_i \subset \Omega \tag{3} $$
The regions are filtered by area, and only components with more than 4 pixels are retained as valid regions. For each valid region C i , the zero-order and first-order moments are calculated to obtain its centroid coordinates, as shown in Equation (4):
$$ \bar{x}_i = \frac{\sum_{(x, y) \in C_i} x}{\sum_{(x, y) \in C_i} 1}, \quad \bar{y}_i = \frac{\sum_{(x, y) \in C_i} y}{\sum_{(x, y) \in C_i} 1} \tag{4} $$
The centroids of the dots in the distorted image are calculated using the described algorithm and adopted as the feature point coordinates.
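The labeling of Equation (3) and the moment-based centroids of Equation (4) can be sketched as follows. This is an illustrative pure-Python/NumPy implementation: the 4-connectivity choice and the function names are our assumptions, while the area filter (more than 4 pixels) follows the text:

```python
import numpy as np
from collections import deque

def connected_components(B):
    """4-connected component labeling of a binary image (Eq. (3))."""
    labels = np.zeros(B.shape, dtype=int)
    comps, next_label = [], 1
    H, W = B.shape
    for sy in range(H):
        for sx in range(W):
            if B[sy, sx] and not labels[sy, sx]:
                queue, pixels = deque([(sy, sx)]), []
                labels[sy, sx] = next_label
                while queue:  # breadth-first flood fill of one region
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and B[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = next_label
                            queue.append((ny, nx))
                comps.append(pixels)
                next_label += 1
    return comps

def centroids(comps, min_area=5):
    """Area filtering (> 4 pixels) and centroids from zero- and
    first-order moments (Eq. (4)); returns (x_bar, y_bar) pairs."""
    out = []
    for pixels in comps:
        if len(pixels) >= min_area:
            ys, xs = zip(*pixels)
            out.append((sum(xs) / len(pixels), sum(ys) / len(pixels)))
    return out
```

In practice a library routine such as `scipy.ndimage.label` would serve the same purpose; the explicit flood fill is shown only to make Equations (3) and (4) concrete.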
Furthermore, based on the imaging principle, distortion is minimal near the image center and is therefore treated as negligible. Nine control points in the central region are used to generate a reference image by inverse projection under an ideal geometric relationship. The dot-array mask is translated across the DMD plane in 1-pixel increments to acquire images sequentially. In total, 30 pairs of distorted and reference images are acquired, and feature-point matching is performed for each pair.

2.2. Polynomial Model

The system images control points with known spatial positions. The mapping relationship between the distorted image coordinates and reference coordinates is described by a set of higher-order polynomial equations, as shown in Equation (5):
$$ u = a(x, y) = \sum_{i=0}^{N} \sum_{j=0}^{N} P_{ij} \, x^i y^j, \quad v = b(x, y) = \sum_{i=0}^{N} \sum_{j=0}^{N} Q_{ij} \, x^i y^j \tag{5} $$
where (x, y) and (u, v) represent the pixel coordinates in the distorted and reference images, respectively; P_ij and Q_ij denote the correction coefficients; and N is the order of the polynomial. More severe distortion generally requires a higher polynomial order N, increasing the number of coefficients. For an Nth-order polynomial, the required number of correction coefficients is (N + 1)^2 per coordinate. Based on practical correction performance, the polynomial order was set to 4. Five sets of matched feature-point coordinates were selected, and the coefficients P_ij and Q_ij were estimated by least squares. The remaining 25 distorted images were then mapped into the reference coordinate system using the estimated polynomial coefficients. Bilinear interpolation was used for grayscale resampling, yielding geometrically coarse-corrected images. These images provide the initial mapping for the subsequent refinement stage.
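A minimal sketch of this polynomial stage, assuming NumPy with least squares via `numpy.linalg.lstsq`; the fourth-order, (N+1)^2-coefficient form follows Equation (5), while the function names and the normalized toy grid are illustrative:

```python
import numpy as np

def poly_design(x, y, N=4):
    """Design matrix of monomials x^i * y^j, i, j = 0..N -> (N+1)^2 columns (Eq. (5))."""
    return np.stack([x**i * y**j for i in range(N + 1) for j in range(N + 1)], axis=1)

def fit_poly_mapping(xy_dist, uv_ref, N=4):
    """Least-squares estimate of coefficient vectors P (for u) and Q (for v)."""
    A = poly_design(xy_dist[:, 0], xy_dist[:, 1], N)
    P, *_ = np.linalg.lstsq(A, uv_ref[:, 0], rcond=None)
    Q, *_ = np.linalg.lstsq(A, uv_ref[:, 1], rcond=None)
    return P, Q

def apply_poly_mapping(xy, P, Q, N=4):
    """Map distorted coordinates into the reference frame with fitted coefficients."""
    A = poly_design(xy[:, 0], xy[:, 1], N)
    return np.stack([A @ P, A @ Q], axis=1)
```

At least (N+1)^2 = 25 point pairs are needed for N = 4, which is why the paper's 1875 matched pairs comfortably overdetermine the fit. Normalizing coordinates to [0, 1] before fitting keeps the monomial design matrix well conditioned.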

2.3. MLP Model

The preliminarily corrected images obtained through polynomial fitting are subjected to a second round of feature-point extraction. Correspondences between feature points in the corrected images and those in the reference images are used to train an MLP model. The MLP is adopted to capture the nonlinear residual distortions that polynomial fitting models poorly: polynomial models effectively capture coarse distortions and serve as an initial calibration step, whereas the MLP refines the distortions that remain after the polynomial stage. As shown in Figure 4, the network consists of four cascaded fully connected blocks. Each block, except the output block, includes a linear layer followed by a nonlinear activation function, forming a deep feedforward architecture. Network weights are learned iteratively until the loss converges. The learned mapping fits the training samples and generalizes spatially: given an input coordinate in the imaging plane, the trained model predicts the corresponding coordinate in the reference space, enabling high-precision mapping across the field of view.
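For illustration only, a NumPy forward pass of such a coordinate-regression MLP is sketched below. The layer widths follow the 2-64-256-64-64-2 configuration reported in Section 3.2; the ReLU activation and He-style initialization are our assumptions, since the text specifies only "a nonlinear activation function":

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer widths from Section 3.2; ReLU and He initialization are assumptions.
WIDTHS = [2, 64, 256, 64, 64, 2]

def init_mlp(widths=WIDTHS):
    """He-style random weights, zero biases, one (W, b) pair per linear layer."""
    return [(rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_in, fan_out)),
             np.zeros(fan_out))
            for fan_in, fan_out in zip(widths[:-1], widths[1:])]

def mlp_forward(params, xy):
    """Map normalized distorted coordinates (M, 2) to reference coordinates (M, 2).
    ReLU follows every layer except the linear output layer."""
    h = xy
    for k, (W, b) in enumerate(params):
        h = h @ W + b
        if k < len(params) - 1:
            h = np.maximum(h, 0.0)  # ReLU
    return h
```

In a real training setup the weights would be optimized (the paper uses Adam with a cosine-annealed learning rate) against the Euclidean-distance loss between predicted and ground-truth reference points.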

3. Experiment and Processing Results

3.1. Geometric Evaluation Metrics

To quantitatively evaluate the image correction method, the root mean square (RMS) geometric error, mean error (ME), and nonlinearity index (σ_nonl) are used as the primary geometric metrics. These metrics evaluate the correction performance from three perspectives: overall positioning error, local feature-point regression accuracy, and distortion degree. As shown in Figure 5, the RMS reflects the average Euclidean distance between the reference and corrected points. The ME, also referred to as mapping accuracy, is defined as the mean Euclidean error between the corrected and ideal coordinates of the extracted feature points. The σ_nonl is calculated by fitting a line to feature points in the same row (or column) and determining the standard deviation of the angular deviations of the fitted line’s slope from the ideal line (0° for horizontal or 90° for vertical). A smaller σ_nonl indicates that the geometric shape is closer to the ideal line, implying reduced residual distortion. These metrics are defined in Equations (6)–(8),
$$ RMS = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left[ (x_i - u_i)^2 + (y_i - v_i)^2 \right] } \tag{6} $$
$$ ME = \frac{1}{N} \sum_{i=1}^{N} \sqrt{ (x_i - u_i)^2 + (y_i - v_i)^2 } \tag{7} $$
$$ \sigma_{nonl} = \sqrt{ \frac{1}{K} \sum_{j=1}^{K} \left( \theta_j - \theta_{ref} \right)^2 }, \quad \theta_{ref} = \begin{cases} 0^{\circ}, & \text{row} \\ 90^{\circ}, & \text{column} \end{cases} \tag{8} $$
where (u_i, v_i) and (x_i, y_i) denote the coordinates of the ith feature point in the reference and corrected images, respectively; N is the number of feature points; θ_j represents the slope angle of the jth fitted line; and K is the number of selected rows (or columns). A lower RMS indicates a reduction in overall positioning error, while a lower ME (pixel mapping accuracy) reflects improved correction precision at the feature-point level. In addition, a smaller σ_nonl signifies weaker local nonlinear distortion. Collectively, these complementary metrics quantify overall performance in rigid alignment and geometric fidelity.
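The three metrics of Equations (6)–(8) can be computed directly; a minimal NumPy sketch (the function names are ours) is:

```python
import numpy as np

def rms_error(ref, cor):
    """Eq. (6): square root of the mean squared Euclidean displacement."""
    d2 = np.sum((cor - ref) ** 2, axis=1)
    return np.sqrt(d2.mean())

def mean_error(ref, cor):
    """Eq. (7): mean Euclidean displacement (mapping accuracy, ME)."""
    return np.sqrt(np.sum((cor - ref) ** 2, axis=1)).mean()

def nonlinearity(angles_deg, theta_ref=0.0):
    """Eq. (8): RMS deviation of fitted row/column line angles (degrees)
    from the ideal angle (0 deg for rows, 90 deg for columns)."""
    a = np.asarray(angles_deg, dtype=float)
    return np.sqrt(np.mean((a - theta_ref) ** 2))
```

For two points displaced by (3, 4) and (0, 0) pixels, the per-point distances are 5 and 0, giving ME = 2.5 pixels and RMS = sqrt(12.5) ≈ 3.54 pixels, which illustrates why RMS weights outliers more heavily than ME.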

3.2. Data Acquisition and Correction

Before the calibration procedure, acquiring multiple original images to evaluate the detector’s imaging performance is essential. The entire DMD imaging plane was patterned with a 15 × 25 dot array (375 dots in total), in which each dot comprised a 3 × 3 pixel block and the array was arranged on a regularly spaced grid with fixed step sizes. The corresponding micromirrors were set to “on”, with all others set to “off”, thereby forming a standard dot-array mask on the DMD imaging plane. UV light was modulated by the DMD and imaged onto the SPC detector through the optical system. The resulting distorted dot-array image is shown in Figure 6a and exhibits substantial stretching and drift across the detection area. The dot-array mask was then scanned across the DMD imaging plane pixel by pixel, and 30 pairs of distorted and reference images were acquired by repeating this procedure.
To reduce local nonlinear distortion and improve accuracy, an initial correction was performed using a fourth-order polynomial (N = 4). In total, 1875 feature-point pairs from five of the 30 distorted–reference image sets were selected as input, far exceeding the minimum of 25 points required for a fourth-order polynomial. A least-squares fit was applied to these pairs to estimate the correction coefficient matrix. The remaining 25 distorted images were then corrected using the estimated coefficients. A representative corrected result is shown in Figure 6b. Polynomial correction substantially reduced diagonal distortion and partially mitigated S-shaped edge distortion. For comparison, the system-level DLT algorithm proposed by Feng Wei et al. was applied. As shown in Figure 6c, its performance was less satisfactory.
To achieve higher-precision correction, an MLP model was introduced. Feature-point coordinates from the distorted and reference images were normalized and used as inputs, which facilitates optimization and accelerates model convergence. For the 25 polynomial-corrected distorted images, feature-point data were extracted to form 25 input sets. These data were split into 19 sets for training, 3 for validation and 3 for testing. During training, the Euclidean distance between the predicted and ground-truth points was used as the loss function (Equation (6)).
A fully connected encoder–decoder network was used, with both input and output dimensions set to 2 (two-dimensional coordinates). The network architecture, determined through grid search, consisted of 6 layers with a 2-64-256-64-64-2 configuration. The Adam optimizer was used with an initial learning rate of 0.01, weight decay of 0.0005, and a batch size of 32. A CosineAnnealingLR scheduler reduced the learning rate to 0.0001 after 1500 epochs. To prevent overfitting, early stopping based on the validation loss and regularization in the hidden layers were applied. The model with the minimum validation loss was selected. Its output underwent inverse normalization to obtain the final mapping coordinates. The corrected result, shown in Figure 6d, demonstrates further improvement in edge distortion. Additionally, to provide a clearer visual comparison, residual error heatmaps generated by different correction methods are presented in Figure 7. As shown, the proposed method exhibits noticeably better performance than the other approaches.

3.3. Quantitative Evaluation and Validation

To quantitatively evaluate the performance of each correction model, Table 1 and Figure 8 summarize the geometric error metrics obtained from 20 sets of dot-array images processed by the three methods. As shown in Table 1, the proposed polynomial–MLP fusion strategy achieved an average ME of 0.84 pixels across the 20 test sets. Relative to the classical polynomial model and the DLT method, this corresponds to performance improvements of 22% and 80%, respectively. In Figure 8, the left and right y-axes correspond to the RMS (Equation (6)) and σ_nonl (Equation (8)), respectively. The DLT method yields an RMS of 5.35 pixels and a σ_nonl of 0.35°. The classical polynomial model reduces these metrics to 1.33 pixels and 0.05°, respectively. Furthermore, the polynomial–MLP fusion strategy proposed in this work reduces the RMS to 1.01 pixels and σ_nonl to 0.05°. Compared to the classical polynomial model, the proposed method achieves an additional reduction of 24% in RMS and 44% in σ_nonl. When benchmarked against the DLT method, it achieved reductions of 81% and 86% in the two metrics, respectively, indicating markedly better performance than the traditional approaches. These results highlight advantages in suppressing pixel-level distortion and improving geometric fidelity.
To evaluate the generalizability of the proposed method, checkerboard calibration targets of four different scales (128 × 128, 192 × 192, 256 × 256, and 280 × 280 pixels) were generated across the entire DMD imaging plane. The top-left corner was defined as the origin, and incomplete grid cells were zero-padded. Figure 9, Figure 10, Figure 11 and Figure 12 show typical results for calibration targets ranging from 128 × 128 to 280 × 280 pixels: (a) the original distorted image; (b) and (c) corrected outputs using the polynomial model and DLT method; (d) the corrected image obtained using the proposed “polynomial + MLP” method. Visual inspection indicates that the proposed method provides more accurate edge alignment and grid-point registration than either individual algorithm. The method also showed robust performance across pattern scales, supporting its versatility and effectiveness.
Figure 13 quantifies the RMS error across different checkerboard sizes. The polynomial–MLP method maintained an RMS on the order of 1 pixel for checkerboards ranging from 192 × 192 to 280 × 280. As the grid size decreased and fewer corner points were available, RMS increased slightly. This trend is consistent with the results from the dot-array images. Due to the sparsity of corner points in smaller checkerboards, which increases uncertainty in the σ_nonl calculation, only the RMS metric is reported in this section. Accordingly, the nonlinearity index is not discussed further.

4. Conclusions

This study investigated nonlinear distortion in a UV single-photon imaging system comprising a DMD and an SPC detector, arising from three-electrode charge partitioning in the detector and mapping mismatch between the DMD and the SPC detector. To address this issue, a two-stage cascaded framework—termed polynomial coarse correction followed by MLP fine regression—was developed and experimentally validated. The framework first applied a fourth-order polynomial to capture the dominant higher-order distortion components. A lightweight six-layer MLP then performed non-parametric fitting of the residual displacement. Consequently, ME was 0.84 pixels, indicating high feature-point-level mapping accuracy. The RMS error was reduced from 5.35 pixels (traditional DLT) and 1.33 pixels (polynomial model) to 1.01 pixels. Similarly, the nonlinearity index σ_nonl was reduced from 0.35° to 0.05°, corresponding to additional reductions of 24% (RMS) and 44% (σ_nonl) relative to the classical polynomial model. Across four checkerboard targets ranging from 128 to 280 pixels, RMS error remained on the order of one pixel. The entire calibration process required only 30 sets of sparse dot arrays, eliminating the need for multi-view images and external physical calibration targets. This approach provides a high-precision and scalable geometric reference for the UV single-photon imaging system. This advance may facilitate high-precision calibration in UV analytical instruments such as AFS and ICP-OES.

Author Contributions

Conceptualization, W.Y.; methodology, W.Y.; software, S.Y.; validation, W.Y. and T.M.; formal analysis, W.Y. and Z.H.; resources, L.H. and C.T.; data curation, W.Y.; writing—original draft preparation, W.Y.; writing—review and editing, W.Y. and L.H.; supervision, B.C.; funding acquisition, C.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the University and Research Institution Pilot Scale-Up Selection Project of Changchun (Grant No. 25ZSLX31) and in part by the National Key R&D Plan of China (Grant No. 2022YFF0708500).

Data Availability Statement

The data presented in this study are not publicly available due to privacy restrictions; they can be requested from the corresponding author (yanwanhong@ciomp.ac.cn) upon reasonable request.

Conflicts of Interest

A Chinese patent application (CN121235962A) has been filed based on the technology reported in this manuscript. The author Wanhong Yan is an inventor, and Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences is the applicant/assignee. This constitutes a potential conflict of interest.

Figure 1. Schematic diagram of the optical setup.
Figure 2. Photograph of the hardware prototype of the system.
Figure 3. Block diagram of the proposed correction framework.
Figure 4. Block diagram of the MLP model.
Figure 5. Definition of the nonlinearity index (σ_nonl).
Figure 6. Correction results of the dot-array images. (a) Original image. (b) Image corrected using the polynomial model. (c) Image corrected using the DLT method. (d) Image corrected using the proposed (Polynomial + MLP) method.
Figure 7. Comparison of residual error heatmaps. (a) Original image. (b) Polynomial model. (c) DLT method. (d) Proposed (Polynomial + MLP) method.
Figure 8. Performance metrics of dot-array image correction using different methods.
Figure 9. Correction results for the 128 × 128 pixels checkerboard target. (a) Original distorted image. (b) Image corrected using the polynomial model. (c) Image corrected using the DLT method. (d) Image corrected using the proposed (Polynomial + MLP) method.
Figure 10. Correction results for the 192 × 192 pixels checkerboard target. (a) Original distorted image. (b) Image corrected using the polynomial model. (c) Image corrected using the DLT method. (d) Image corrected using the proposed (Polynomial + MLP) method.
Figure 11. Correction results for the 256 × 256 pixels checkerboard target. (a) Original distorted image. (b) Image corrected using the polynomial model. (c) Image corrected using the DLT method. (d) Image corrected using the proposed (Polynomial + MLP) method.
Figure 12. Correction results for the 280 × 280 pixels checkerboard target. (a) Original distorted image. (b) Image corrected using the polynomial model. (c) Image corrected using the DLT method. (d) Image corrected using the proposed (Polynomial + MLP) method.
Figure 13. Comparison of RMS errors across different checkerboard sizes.
Table 1. Mean error of dot-array image correction using different methods.

Image Index | ME_DLT (pixel) | ME_Poly (pixel) | ME_Poly + MLP (pixel)
1           | 4.16           | 1.07            | 0.82
2           | 4.13           | 1.07            | 0.83
3           | 4.16           | 1.06            | 0.85
4           | 4.11           | 1.08            | 0.84
5           | 4.07           | 1.06            | 0.82
6           | 4.29           | 1.07            | 0.85
7           | 4.14           | 1.05            | 0.81
8           | 4.13           | 1.03            | 0.81
9           | 4.26           | 1.04            | 0.81
10          | 4.15           | 1.05            | 0.81
11          | 4.27           | 1.08            | 0.84
12          | 4.18           | 1.20            | 0.87
13          | 4.16           | 1.24            | 0.88
14          | 4.21           | 1.15            | 0.85
15          | 4.28           | 1.05            | 0.85
16          | 4.22           | 1.11            | 0.86
17          | 4.15           | 1.10            | 0.87
18          | 4.10           | 1.05            | 0.85
19          | 4.18           | 1.06            | 0.86
20          | 4.12           | 1.07            | 0.86
Average     | 4.17           | 1.08            | 0.84

Share and Cite

MDPI and ACS Style

Yan, W.; He, L.; Tao, C.; Ma, T.; Han, Z.; Yu, S.; Chen, B. Cascaded Polynomial and MLP Regression for High-Precision Geometric Calibration of Ultraviolet Single-Photon Imaging System. Photonics 2026, 13, 330. https://doi.org/10.3390/photonics13040330

