Communication

Optical Aberration Calibration and Correction of Photographic System Based on Wavefront Coding

State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, Hangzhou 310027, China
*
Author to whom correspondence should be addressed.
Sensors 2021, 21(12), 4011; https://doi.org/10.3390/s21124011
Submission received: 19 April 2021 / Revised: 26 May 2021 / Accepted: 27 May 2021 / Published: 10 June 2021
(This article belongs to the Section Physical Sensors)

Abstract

The image deconvolution technique can recover latent sharp images from blurred images affected by aberrations. Accurately obtaining the point spread function (PSF) of the imaging system is a prerequisite for robust deconvolution. In this paper, a computational imaging method based on wavefront coding is proposed to reconstruct the wavefront aberration of a photographic system. First, a group of images affected by local aberration is obtained by applying wavefront coding at the optical system’s spectral plane. The PSF is then recovered accurately by pupil function synthesis, and finally the aberration-affected images are restored by image deconvolution. After aberration correction, the image’s coefficient of variation and mean relative deviation are improved by 60% and 30%, respectively, and the image can reach the resolution limit of the sensor, as verified with a resolution test chart. The method’s robust anti-noise capability is also confirmed through simulation experiments. By shifting the complexity of optical design to a post-processing algorithm, this method offers an economical and efficient strategy for obtaining high-resolution, high-quality images with a simple large-field lens.

1. Introduction

The performance of optical systems depends considerably on their design, as aberration is a key obstacle preventing an optical system from reaching the ideal diffraction-limited resolution. To obtain high-quality images, designers of optical imaging systems must correct and balance aberrations by combining multiple lenses made of different glass materials. Even when the final design meets the requirements, this approach makes optical systems cumbersome and expensive.
Fortunately, the aberration correction problem can be reshaped into a computational problem to be solved after the acquisition of the image data. The acquired aberrated image is digitally post-processed using an image deconvolution algorithm to reconstruct an aberration-corrected high-quality image [1]. This lowers the expense and complexity of the optics while ensuring the resolution of the optical system [2]. The blur kernel used in the deconvolution calculation—i.e., the optical system’s point spread function (PSF)—is a key factor in determining the image reconstruction’s quality. If the PSF is not obtained accurately, the reconstructed images are most likely to have severe artifacts and ringing effects [3,4], affecting the quality of the image reconstruction.
In recent years, a few methods have been proposed to correct optical aberrations and improve image quality by acquiring PSFs. The blind deconvolution algorithm is a widely used method for PSF acquisition [5,6,7,8] that uses prior knowledge to estimate a clear image and the PSF directly from the blurred image by minimizing a cost function. However, the PSF of an optical system is generally spatially variant and must be processed in patches; the image information in each small patch is very limited, and this lack of information makes the PSF estimate inaccurate and the restoration unreliable. The most intuitive PSF acquisition method is direct measurement [9], which obtains the optical system’s impulse response to a point light source, i.e., the PSF. However, the intensity of the point source is weak, so the measurement is subject to sensor noise and the signal-to-noise ratio is low. The fitted-parameter method [10,11] matches measured PSFs with simulated PSFs to calibrate the lens prescription and then computes fitted PSFs by simulation. However, for optical systems with low mounting accuracy, mounting errors can seriously degrade the PSF estimation.
Another way to mitigate optical aberrations operates by adding masks to the optical system to encode the wavefront, including amplitude masks [12,13] and phase masks [14,15,16]. However, amplitude masks can block a portion of the incident light, and very fine printing patterns can cause diffraction artifacts [12]. Meanwhile, phase masks reduce the effective imaging resolution of the imaging system [15].
Another class of methods acquires the optical system’s PSF by reconstructing the wavefront. A simple approach employs the Shack–Hartmann wavefront sensor [17], which consists of a lenslet array and an array detector; the phase of the wavefront is linked to the local focal-spot shifts in the corresponding regions. Despite its simple principle, it requires significant modifications to the setup, and the lenslet density limits its spatial resolution and sensitivity, so the phase approximation is coarse. Another family of methods reconstructs the wavefront directly from intensity measurements using a phase retrieval procedure: phase diversity is first introduced into multiple optical field intensity patterns recorded by the camera, and an iterative algorithm then reconstructs the wavefront from the recorded patterns. One simple way to introduce phase diversity is defocusing (i.e., axial scanning), and a variety of phase retrieval methods using defocus diversity have been reported, including iterative algorithms [18], transport-of-intensity equation (TIE)-based methods [19,20], and other non-iterative methods [21]. However, since the optical resolution can easily exceed the pixel resolution of the detector, these methods are susceptible to pixelation artifacts. Other approaches use wavefront coding to introduce phase diversity [22,23,24]. These methods reconstruct the wavefront of a Fourier ptychographic microscope (FPM) system by exploiting the redundancy in the acquired FPM dataset to eliminate the effect of system aberrations on the reconstruction results. However, since the spatial light modulator’s (SLM) angle-dependent amplitude response varies little only at low incidence angles [25], wavefront reconstruction with large field-of-view (FOV) setups is limited by this effect. To date, few studies have reported aberration correction of a photographic system based on wavefront coding.
Here, we present an optical aberration calibration and correction method for photographic systems based on wavefront coding. To minimize the aberrations introduced during measurement by optical elements other than the given lens, we designed an experimental setup resembling that of the direct measurement method, as shown in Figure 1. The setup allowed us to accurately calibrate the PSF of a given photographic lens at an arbitrary FOV. However, in contrast to the direct measurement method, which obtains the PSF from point-source images taken with the given lens, our method acquires the PSF from multiple images obtained by wavefront coding. It is therefore less demanding in terms of light source brightness, target shape, and sensor, and more resistant to sensor noise, yielding more robust PSF calibration results.

2. Materials and Methods

Similar to the direct measurement method’s experimental setup, as shown in Figure 1a, the prototype system consists of a collimator, an SLM, the crude optical system that requires aberration correction, and the sensor. The object is first imaged to infinity by the collimator. Due to the collimator’s long focal length, the incident beam angle is small and the light can be approximated as paraxial light that introduces no additional aberration. The wavefront is then modulated by the SLM and recorded by the image sensor after passing through the crude optical system.
For an optical system with a large FOV, the aberration is spatially varied. We divide the full FOV into multiple small fields of view. Within a smaller field of view angle, it can be assumed that the spatially varying aberrations are invariant [26,27,28]. In the following discussion, we constrain our analysis to a particular FOV.
We consider an unknown sample $s(x, y)$ located in the field-of-view region $t_0$. A point source at $(x_0, y_0)$ with complex amplitude $C$ on the sample plane can be described by
$$E_1(x_1, y_1) = C\,\delta(x_1 - x_0,\ y_1 - y_0)$$
The wavefront then propagates by Fresnel diffraction to the front surface of $L_1$:
$$E_2(x_2, y_2) = \frac{C\exp\!\left(\frac{j\pi}{\lambda f_0}(x_2^2 + y_2^2)\right)}{j\lambda f_0} \iint_{-\infty}^{+\infty} \delta(x_1 - x_0,\ y_1 - y_0)\exp\!\left(-\frac{j2\pi}{\lambda f_0}(x_1 x_2 + y_1 y_2)\right)\exp\!\left(\frac{j\pi}{\lambda f_0}(x_1^2 + y_1^2)\right)\,dx_1\,dy_1 = \frac{C\exp\!\left(\frac{j\pi}{\lambda f_0}(x_2^2 + y_2^2)\right)}{j\lambda f_0}\exp\!\left(-\frac{j2\pi}{\lambda f_0}(x_0 x_2 + y_0 y_2)\right)\exp\!\left(\frac{j\pi}{\lambda f_0}(x_0^2 + y_0^2)\right)$$
The idealized thin lens $L_1$, with focal length $f_0$, introduces a phase delay of $\exp\!\left(-\frac{j\pi}{\lambda f_0}(x_2^2 + y_2^2)\right)$, so the distribution of the light field after passing through $L_1$ is
$$E_2'(x_2, y_2) = E_2(x_2, y_2)\exp\!\left(-\frac{j\pi}{\lambda f_0}(x_2^2 + y_2^2)\right) = \frac{C}{j\lambda f_0}\exp\!\left(-\frac{j2\pi}{\lambda f_0}(x_0 x_2 + y_0 y_2)\right)\exp\!\left(\frac{j\pi}{\lambda f_0}(x_0^2 + y_0^2)\right)$$
The field then propagates a distance $d_0$ to reach the spectral plane, which introduces a frequency-dependent phase factor $\exp\!\left(jkd_0\sqrt{1 - \cos^2\alpha - \cos^2\beta}\right)$, so the field can be expressed as
$$E_3(u, v) = \mathcal{F}^{-1}\!\left\{\mathcal{F}\{E_2'(x_2, y_2)\}\exp\!\left(jkd_0\sqrt{1 - \cos^2\alpha - \cos^2\beta}\right)\right\}$$
Subsequently, a mask $M(u, v)$ is applied to the field, while any discrepancies between the real imaging system and the ideal one are absorbed into the pupil function $P(u, v)$. The field can then be written as
$$E_3'(u, v) = E_3(u, v)\,P(u, v)\,M(u, v)$$
The field distribution at the sensor plane is
$$E_4(\xi, \eta) = \frac{\exp\!\left[\frac{jk}{2f_1}\left(1 - \frac{d_1}{f_1}\right)(\xi^2 + \eta^2)\right]}{j\lambda f_1}\,\mathcal{F}\{E_3'(u, v)\}(\xi, \eta) = A\,\mathcal{F}\{P(u, v)M(u, v)\}(\xi, \eta) \circledast \delta\!\left(\xi + \frac{x_0}{\lambda f_0},\ \eta + \frac{y_0}{\lambda f_0}\right) = A\,\mathcal{F}\{P(u, v)M(u, v)\}\!\left(\xi + \frac{x_0}{\lambda f_0},\ \eta + \frac{y_0}{\lambda f_0}\right)$$
where
$$A = \frac{C\exp\!\left(\frac{j\pi}{\lambda f_0}(x_0^2 + y_0^2)\right)}{j\lambda f_0}\cdot\frac{\exp\!\left[\frac{jk}{2f_1}\left(1 - \frac{d_1}{f_1}\right)(\xi^2 + \eta^2)\right]}{j\lambda f_1}\cdot\exp\!\left(jkd_0\sqrt{1 - \lambda^2(\xi^2 + \eta^2)}\right)$$
Here, $E_4(\xi, \eta)$ is the complex field incident on the sensor after the point source at $(x_0, y_0)$ has passed through the optical system; it is the amplitude point spread function of the optical system. Since our imaging system is incoherent, the phase relations between points in the sample plane are not relevant, and the complex phase fluctuations in $A$ related to $d_0$ and $d_1$ have no effect on the captured images. The imaging system’s intensity PSF can therefore be defined as
$$h\!\left(\xi + \frac{x_0}{\lambda f_0},\ \eta + \frac{y_0}{\lambda f_0}\right) = |E_4(\xi, \eta)|^2 = |A|^2\left|\mathcal{F}\{P(u, v)M(u, v)\}\!\left(\xi + \frac{x_0}{\lambda f_0},\ \eta + \frac{y_0}{\lambda f_0}\right)\right|^2$$
Thus, after neglecting the constants and dropping coordinate scaling,
$$h(\xi, \eta) = \left|\mathcal{F}\{P(u, v)M(u, v)\}(\xi, \eta)\right|^2$$
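Numerically, this relation maps onto two FFTs. Below is a minimal NumPy sketch; the circular aperture and the quadratic phase are illustrative assumptions, not the measured pupil of the paper's system:

```python
import numpy as np

def psf_from_pupil(P, M):
    """Intensity PSF h = |F{P*M}|^2, up to constants and coordinate scaling;
    fftshift centers the PSF on the array."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(P * M)))
    return np.abs(field) ** 2

# Hypothetical example: circular pupil with a quadratic (defocus-like) phase.
N = 256
u, v = np.meshgrid(np.linspace(-1, 1, N), np.linspace(-1, 1, N))
rho2 = u ** 2 + v ** 2
M = (rho2 <= 1.0).astype(float)       # full circular aperture mask
P = np.exp(1j * 2.0 * np.pi * rho2)   # assumed aberration phase (illustrative)

h = psf_from_pupil(P, M)
h /= h.sum()                          # normalize to unit energy
```

Replacing `M` with a shifted sub-aperture mask $M_n$ yields the per-mask PSFs used in the following sections.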
Within a small field-of-view range $t_0$, the aberration can be considered spatially invariant. For a particular aperture mask $M(u, v)$, $h(\xi, \eta)$ is the intensity PSF of the optical system. The image $i_{t_0}(\xi, \eta)$ of an unknown sample $S(x, y)$ captured at the sensor plane for an aperture mask $M(u, v)$ can be expressed as
$$i_{t_0}(\xi, \eta) = h_{t_0}(\xi, \eta) \circledast |S(\xi, \eta)|^2 = h_{t_0}(\xi, \eta) \circledast l(\xi, \eta)$$
where $l(\xi, \eta)$ is the intensity of $S(\xi, \eta)$. This equation shows that the captured image of the sample is blurred by the PSF derived from the sub-region of the pupil.
By scanning the Fourier spectrum of a sample, we can use the captured intensity images to synthesize the sample’s Fourier spectrum by the phase retrieval algorithm. In our work, the PSFs are the intensity images captured by the sensor, and the pupil function is the Fourier spectrum to be synthesized. Thus, we can use the phase retrieval algorithm to synthesize the pupil function using a series of acquired PSF intensities by scanning the optical system’s spectral plane. The aberrations in the image are then removed by deconvolution.
As shown in Figure 2, we calibrate and correct the optical system’s aberrations in three steps: (1) we obtain $n$ sets of intensity images $i_n(s, t)$ by moving the sub-aperture $M_n(u, v)$ sequentially through the $n$ sub-aperture positions in the spectral plane, as shown in Figure 1b, and then perform local aberration recovery to obtain the $n$ PSF intensities $h_n(s, t)$ determined by the $n$ masks; (2) we reconstruct the pupil function $P(u, v)$ from the obtained $n$ PSF intensities $h_n(s, t)$ using a phase retrieval algorithm; and (3) we use the reconstructed pupil function $P(u, v)$ to deconvolve the aberrated image $i(s, t)$ acquired by the crude optical system, obtaining the aberration-free image $l(s, t)$.
The following three subsections present these steps in detail.

2.1. Local Aberration Recovery

To obtain the point spread functions $h_n(s, t)$ needed to reconstruct the pupil function $P(u, v)$, we first acquired $n$ sets of intensity images $i_n(s, t)$ by applying a sub-aperture $M_n(u, v)$ at $n$ locations in the spectral plane. We then applied an image-pair-based blur kernel estimation algorithm [29] to determine the local aberration, in which one image of the pair is assumed to be blur-free while the other is aberration-blurred. In general, the central region of an imaging lens can be regarded as free of aberrations and capable of reaching diffraction-limited resolution. The image $i_1(s, t)$ obtained with the central aperture can therefore be used as a reference to determine the pupil function of the entire aperture, and the differences between this reference image and the images $i_n(s, t)$ taken with the sub-aperture at other locations are attributed to the residual aberration [26]. We estimate the local aberration PSF using an iterative Tikhonov deconvolution in the Fourier domain [30].
The update of $h_n(s, t)$ in the Fourier domain is given by
$$H_n^{k+1}(u, v) = H_n^k(u, v) + \beta\,\frac{|I_1(u, v)|}{\max\left(|I_1(u, v)|\right)}\cdot\frac{I_1^*(u, v)\left(I_n(u, v) - I_1(u, v)H_n^k(u, v)\right)}{|I_1(u, v)|^2 + \alpha}$$
where $H_n(u, v)$ and $I_n(u, v)$ are the Fourier spectra of $h_n(s, t)$ and $i_n(s, t)$, respectively, $\beta$ is a scaling constant that adjusts the iteration step, and $\alpha$ is a small constant that ensures numerical stability during the iteration.
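A NumPy sketch of this update loop is shown below. This is a minimal implementation under the stated assumptions: the reference image `i1` is taken as sharp, and the values of `beta`, `alpha`, and the iteration count are illustrative:

```python
import numpy as np

def estimate_local_psf(i1, i_n, beta=1.0, alpha=1e-3, iters=50):
    """Estimate the local blur kernel h_n relating the reference image i1
    (central sub-aperture, assumed sharp) to the blurred image i_n, using
    the Fourier-domain iterative Tikhonov update."""
    I1 = np.fft.fft2(i1)
    In = np.fft.fft2(i_n)
    Hn = np.zeros_like(I1)                 # initial kernel spectrum estimate
    A1 = np.abs(I1)
    denom = A1.max() * (A1 ** 2 + alpha)   # stabilized denominator
    for _ in range(iters):
        residual = In - I1 * Hn            # data mismatch in the Fourier domain
        Hn = Hn + beta * A1 * np.conj(I1) * residual / denom
    return np.real(np.fft.ifft2(Hn))       # back to a spatial kernel
```

Each iteration shrinks the residual $I_n - I_1 H_n$ frequency by frequency; frequencies where $|I_1|$ is small are updated cautiously, which is what keeps the estimate stable under noise.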
The algorithm flow is depicted in Figure 3.
With this algorithm, we can obtain the local PSF and the intensity information of the pupil function captured by the sensor plane under the modulation of n sub-apertures. We show the synthesis of the complete pupil function by the phase retrieval algorithm in the next section.

2.2. Pupil Function Reconstruction

By scanning the Fourier spectrum of the sample, the phase retrieval algorithm can synthesize the sample’s Fourier spectrum from a collection of intensity images captured by the sensor. In this paper, the pupil function is the Fourier spectrum we want to recover, and the n PSFs obtained by the local aberration recovery algorithm in the previous part are the intensity images used for reconstruction. To reconstruct the optical pupil function, we used an alternating minimization-based phase retrieval algorithm [31], and the algorithm flow is given in Figure 4.
We aimed to solve the following problem:
$$P^*(u, v) = \underset{P(u, v)}{\arg\min}\ \sum_n \left\|\hat{p}_n(s, t) - \mathcal{F}\{M_n(u, v)P(u, v)\}\right\|^2 \quad \text{s.t.}\quad |\hat{p}_n(s, t)|^2 = h_n(s, t)$$
At each iteration $k$, we performed the following three steps to update the estimate of the pupil function:
(1) $h_1(s, t)$ is the PSF intensity captured by the sensor with the central aperture. We set the Fourier transform of its amplitude $\sqrt{h_1(s, t)}$ as the initial estimate of $P(u, v)$, and subsequently used the current estimate of the pupil function to calculate the complex-valued field $\hat{p}_n^k(s, t)$ at the sensor:
$$P^1(u, v) = \mathcal{F}\{\sqrt{h_1(s, t)}\}(u, v)$$
$$\hat{p}_n^k(s, t) = \mathcal{F}\{M_n(u, v)P^k(u, v)\}$$
(2) We replaced the amplitude of $\hat{p}_n^k(s, t)$ with the amplitude of the corresponding measured PSF $h_n(s, t)$:
$$\hat{p}_n^k(s, t) \leftarrow \sqrt{\frac{h_n(s, t)}{|\hat{p}_n^k(s, t)|^2}}\,\hat{p}_n^k(s, t)$$
(3) We updated the estimate of $P^k(u, v)$ by solving the following regularized least-squares problem:
$$P^{k+1}(u, v) = \underset{P(u, v)}{\arg\min}\ \sum_n \left\|\hat{p}_n^k(s, t) - \mathcal{F}\{M_n(u, v)P(u, v)\}\right\|_2^2 + \tau\left\|P(u, v)\right\|_2^2$$
where $\tau > 0$ is a regularization parameter that ensures numerical stability during the reconstruction [32]. This regularized least-squares problem has a closed-form solution that can be computed efficiently with the fast Fourier transform.
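The three steps admit a compact NumPy sketch under our reading of the algorithm: with unitary FFTs the regularized least-squares step solves elementwise, the initialization uses the square root of the central-aperture PSF, and `tau` and the iteration count are illustrative values:

```python
import numpy as np

F  = lambda x: np.fft.fft2(x, norm="ortho")    # unitary transforms: Parseval holds,
Fi = lambda x: np.fft.ifft2(x, norm="ortho")   # so the LS step solves elementwise

def reconstruct_pupil(h, masks, tau=1e-3, iters=100):
    """Alternating-minimization phase retrieval (steps 1-3 above).
    h:     (n, N, N) measured PSF intensities h_n(s, t)
    masks: (n, N, N) sub-aperture masks M_n(u, v)
    Returns the estimated pupil function P(u, v)."""
    P = F(np.sqrt(h[0]))                            # initialize from central aperture
    denom = (np.abs(masks) ** 2).sum(axis=0) + tau  # closed-form LS denominator
    for _ in range(iters):
        p = F(masks * P[None])                      # (1) complex fields at the sensor
        p = np.sqrt(h) * np.exp(1j * np.angle(p))   # (2) enforce measured amplitudes
        P = (np.conj(masks) * Fi(p)).sum(axis=0) / denom  # (3) regularized LS update
    return P
```

The amplitude-replacement step keeps each field's phase while forcing its magnitude to match $\sqrt{h_n}$, and the least-squares step fuses all sub-aperture estimates into one pupil, with overlapping masks resolving the phase ambiguity between them.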

2.3. Image Deconvolution

Using the algorithm described in the previous section, we obtained the pupil function $P(u, v)$ of the crude optical system, which in turn gives the PSF of the optical system, i.e., $h(s, t) = |\mathcal{F}\{P(u, v)\}(s, t)|^2$. We could therefore recover the latent image $l(s, t)$ from the aberrated image $i(s, t)$.
In an optical system, the latent image $l(s, t)$ is blurred by the PSF $h(s, t)$. The intensity image obtained at the sensor can be written as $i(s, t) = h(s, t) \circledast l(s, t) + n(s, t)$, where $n(s, t)$ is additive noise and $\circledast$ is the convolution operator. To suppress the effect of noise on the deconvolution result, we used a regularized deconvolution method [33] to obtain the latent image $l(s, t)$.
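As a stand-in for the regularized method of [33] (whose exact prior we do not reproduce here), a Fourier-domain Tikhonov/Wiener-style inversion illustrates the idea; `eps` is an assumed regularization weight:

```python
import numpy as np

def deconvolve(i, h, eps=1e-2):
    """Recover the latent image l from i = h (*) l + n by regularized
    Fourier-domain inversion. eps damps frequencies where |H| is small,
    suppressing noise amplification. Assumes the PSF origin is at h[0, 0]
    (apply np.fft.ifftshift first if the PSF is centered)."""
    H = np.fft.fft2(h, s=i.shape)
    I = np.fft.fft2(i)
    L = np.conj(H) * I / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(L))
```

Without the `eps` term this reduces to direct inverse filtering $L = I/H$, which amplifies noise wherever the PSF transfers little energy; the regularizer trades a small bias for that stability.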

3. Experiments and Results

To certify the effectiveness of the proposed method, we conducted both simulated and real data experiments.

3.1. Simulated Data Experiments

In this paper, we used the imaging simulation function of the CODEV software to investigate the effectiveness of the proposed method for reconstructing the PSF. Firstly, we built the optical system shown in Figure 5 in CODEV, including a long-focus cemented doublet lens as a collimator ($D_0 = 50.8\ \mathrm{mm}$, $f_0 = 500\ \mathrm{mm}$, GCL-010611), an SLM, and the crude optical system. The focal length of the crude optical system was $f_1 = 75\ \mathrm{mm}$, and the pupil diameter was $D_1 = 25\ \mathrm{mm}$.
Since the given optical system was not well corrected, it had severe off-axis aberrations. The crude optical system was rotated by 10° to reconstruct the PSF at its 10° field of view. A series of sub-apertures was loaded on the SLM plane as shown in Figure 1b; the diameter of the sub-apertures was $d = 5\ \mathrm{mm}$, and the shift between adjacent positions was $\delta = 3.8\ \mathrm{mm}$. At each sub-aperture position, sub-aperture images were obtained using the software’s image simulation function, and Gaussian noise of 30 dB, 40 dB, and 50 dB was then added, respectively. The reconstructed PSFs obtained using the proposed method are shown in the blue inset boxes in Figure 6(c4–c6).
We used the 664 px × 664 px resolution chart in Figure 6a as the original image. We then generated the blurred image at the 10° FOV of the crude optical system using the imaging simulation function of CODEV and added 30 dB, 40 dB, and 50 dB of Gaussian noise to the image, respectively, as shown in Figure 6(c1–c3). The original PSF of the crude optical system is shown in the blue inset box. Finally, we conducted aberration correction using the reconstructed PSF. The image comparison before and after aberration correction is shown in Figure 6c. It can be seen that our method was very effective in reconstructing the PSF and in removing aberrations from the blurred images at variable noise levels. The aberration-corrected image had sharper edges, significantly better definition, and no ringing effect. As indicated by the line profile in Figure 6b, the aberration-corrected lines were still well resolved even in the presence of severe noise.
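The dB levels translate to noise variance in the usual way ($\mathrm{SNR_{dB}} = 10\log_{10}(P_{\mathrm{signal}}/P_{\mathrm{noise}})$). A small helper of the kind used for such simulations (our own sketch, not the paper's code):

```python
import numpy as np

def add_gaussian_noise(img, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR in dB
    (e.g. the 30/40/50 dB levels used in the simulation)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(np.asarray(img, float) ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return img + rng.normal(0.0, np.sqrt(noise_power), np.shape(img))
```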
The availability of ground-truth data in the simulated experiments allowed us to further quantitatively evaluate image quality. The full-reference indexes—the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR) [34,35]—were used to assess the similarity between the original image and the images before and after aberration correction. Higher PSNR and SSIM values indicate that an image is more similar to the original and that the recovery is more effective.
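PSNR follows directly from the mean squared error; SSIM is more involved, and library implementations such as scikit-image's `structural_similarity` are commonly used for it. A minimal PSNR sketch:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test image;
    higher values mean the test image is closer to the reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```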
The quantitative evaluation results of the simulated experiments are shown in Table 1. Although the aberration could be effectively corrected when the image signal-to-noise ratio was 30 dB, the evaluation index was not greatly improved due to severe noise. The evaluation indexes of the remaining corrected images were greatly improved, which was consistent with the visual evaluation effect.
In summary, the proposed method was able to reconstruct the PSF of the optical system robustly. The aberration-corrected images obtained by the proposed method removed the aberration well without the ringing effect. The results also verified that our method has good noise immunity and can accurately reconstruct the PSF of an optical system in the presence of severe noise.

3.2. Real Data Experiments

Our experimental setup was similar to the simulated optical system, as shown in Figure 7, where we used a collimator with focal length $f_0 = 500\ \mathrm{mm}$ and aperture $D = 50.8\ \mathrm{mm}$. The crude optical system consisted of an inexpensive industrial lens ($f_1 = 50\ \mathrm{mm}$, F/3.3) and a narrow-band filter ($\lambda = 0.54\ \mathrm{\mu m}$). The industrial lens was focused at optical infinity and had significant off-axis aberrations because it was not properly corrected. We established a shifting sub-aperture mask at the spectral plane using a physical iris with an adjustable aperture mounted on a two-dimensional translation stage.
In our experiments, the range of the scanned spectral plane was matched to the pupil radius $R$ of the optical system. The sub-aperture positions are determined by the sub-aperture radius $r$ and the overlap $\delta$ between adjacent sub-apertures, i.e., $2R = 2r + (n-1)(2r - \delta)$, where $n$ is the number of sub-apertures in each column. Firstly, the sub-aperture radius should keep the image data oversampling rate $S = \frac{\lambda f_1}{2r \cdot r_{CCD}}$ greater than 2. Furthermore, a smaller sub-aperture radius yields a more accurate wavefront reconstruction, but it also multiplies the number of sub-apertures, so the smallest number of sub-apertures that stays above the sampling limit is desired. Thus, a sub-aperture radius of $r = 1.8\ \mathrm{mm}$ was a balanced choice in our experiments. Our method was not greatly affected by the overlap rate between adjacent sub-apertures; when the overlap rate $\eta = \frac{\delta}{2r} \times 100\%$ between sub-apertures was zero, the results of our method became very similar to those of a Shack–Hartmann sensor. Therefore, we chose the minimum overlap rate $\eta = 16.25\%$ required for the sub-apertures to completely cover the spectral plane.
In our experiments, we moved the sub-aperture in sequence laterally in the x and y directions ($n = 6$ positions per direction) through an $L \times L = 15.3\ \mathrm{mm} \times 15.3\ \mathrm{mm}$ square region of the spectral plane. In each step, we shifted the sub-aperture by 2.34 mm until it had visited all $n^2 = 36$ distinct sub-aperture locations. At each sub-aperture position, we took multiple snapshots and combined them via high-dynamic-range (HDR) processing [36] to ensure that each photographed image was properly exposed.
In the image plane, we placed an image sensor (Sony IMX253, Genie Nano CL-M4040 from Teledyne DALSA, Waterloo, ON, Canada) with an effective resolution of 4112 × 3008 pixels (14 mm × 10 mm) and a pixel size of $r_{CCD} = 3.45\ \mathrm{\mu m}$. The image data oversampling rate was $S = \frac{\lambda f_1}{r_{CCD}\,d} > 2$, where $d = 2r$ is the sub-aperture diameter, which satisfied the Nyquist sampling requirement [37].
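Plugging in the values quoted above (assuming $d = 2r$ with $r = 1.8$ mm) confirms the sampling margin:

```python
# Oversampling check with the experimental values quoted in the text.
lam   = 0.54e-6     # wavelength (m)
f1    = 50e-3       # focal length of the industrial lens (m)
d     = 2 * 1.8e-3  # sub-aperture diameter 2r (m)
r_ccd = 3.45e-6     # sensor pixel size (m)

S = lam * f1 / (d * r_ccd)  # oversampling rate S = lambda * f1 / (d * r_CCD)
print(round(S, 2))          # ~2.17, i.e., above the Nyquist threshold of 2
```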
To quantitatively verify the reliability of the results obtained after aberration correction, we first conducted experiments using a WT1005-62 resolution test target as a sample. In our experiments, we first imaged the resolution test target through a collimator to optical infinity. Subsequently, we rotated the crude optical system to collect data from multiple fields of view.
Following the scheme described in Section 2, a comparison of images before and after aberration correction is presented in Figure 8. Figure 8a shows the distribution of the measured data sets over the whole field of view. Figure 8(b1~e1) and Figure 8(b2~e2) show the images of four different fields of view before and after aberration correction, respectively. A local part of the resolution target is enlarged in the inset box to show the resolution improvement. It can be seen that the images before correction suffered serious off-axis aberrations. After deconvolution, the aberrations were well corrected across all fields of view, and no artifacts were produced. At the same time, it can be noticed that Group 15 (line width $3.56\ \mathrm{\mu m}$) was well resolved after aberration correction, corresponding to the resolution limit of the sensor.
To further quantify the resolution improvement, the line profiles of line sections before and after aberration correction are compared in Figure 8f. The red and blue curves are the line profiles before and after correction, respectively. The contrast between the test target lines before correction was very low. After compensation, the peaks of the test target were more evenly spaced, and the improved contrast between peaks and troughs verified the effectiveness of our aberration correction method.
Two non-reference indexes—the coefficient of variation (CV) and the mean relative deviation (MRD) [38]—were used in the real experiments for quantitative evaluation, and the results are presented in Table 2. The CV and MRD values improved on average by 60% and 30%, respectively, indicating improved image sharpness and confirming the feasibility of the proposed method.
To further validate our method’s ability to correct aberrations when imaging complex scenes, Figure 9 and Figure 10 show real-world images captured by our inexpensive industrial lens. Figure 9 and Figure 10(b1~d1) show enlarged views of the contents of the corresponding color boxes in Figure 9 and Figure 10a, respectively, and Figure 9 and Figure 10(b2~d2) show the same patches after aberration correction with our method. After correction, image sharpness improved significantly, as shown by the line profiles in Figure 9 and Figure 10(b5~e5). For comparison, we also restored the image patches with a blind-estimated PSF [39], as shown in Figure 9 and Figure 10(b3~d3), using the same deconvolution method and parameters as in our proposed method. Deconvolution with the blind-estimated PSF produced significant artifacts and thus could not reasonably handle images captured with the inexpensive industrial lens. Compared with the blind method, our proposed method not only removed the aberrations robustly but also produced almost no artifacts, and the image quality was more stable.

4. Discussion

In this paper, we propose a method for measuring the PSF based on wavefront coding. By performing wavefront coding, local aberration recovery, and pupil function reconstruction, we can accurately reconstruct the crude photographic system’s pupil function. Both qualitative and quantitative assessments in simulated and real experiments confirmed that the proposed method can precisely reconstruct the given photographic system’s PSF across all fields of view and under different noise conditions, showing that our method has good universality. Finally, high-quality, aberration-free images are obtained by deconvolution.
Although our method reconstructs the PSF robustly, it still suffers from long measurement times. In the future, we will therefore focus on reducing the data acquisition time, for example, by using a faster SLM and better scanning strategies.

Author Contributions

C.Y. and Y.S. conceived and designed the experiments; C.Y. performed the experiments, analyzed the data, and wrote the paper. Both authors have read and agreed to the published version of the manuscript.

Funding

National Natural Science Foundation of China (NSFC) (No. 61927802).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data and code are available on request from the corresponding author with appropriate justification.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schuler, C.J.; Hirsch, M.; Harmeling, S.; Scholkopf, B. Non-Stationary Correction of Optical Aberrations. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 659–666.
  2. Heide, F.; Rouf, M.; Hullin, M.B.; Labitzke, B.; Heidrich, W.; Kolb, A. High-Quality Computational Imaging through Simple Lenses. ACM Trans. Graph. 2013, 32, 1–14.
  3. Ahi, K.; Shahbazmohamadi, S.; Asadizanjani, N. Quality Control and Authentication of Packaged Integrated Circuits Using Enhanced-Spatial-Resolution Terahertz Time-Domain Spectroscopy and Imaging. Opt. Lasers Eng. 2018, 104, 274–284.
  4. Ahi, K. A Method and System for Enhancing the Resolution of Terahertz Imaging. Measurement 2019, 138, 614–619.
  5. Fienup, J.R.; Miller, J.J. Aberration Correction by Maximizing Generalized Sharpness Metrics. J. Opt. Soc. Am. A 2003, 20, 609–620.
  6. Thiebaut, E.; Conan, J.-M. Strict a Priori Constraints for Maximum-Likelihood Blind Deconvolution. J. Opt. Soc. Am. A 1995, 12, 485–492.
  7. Yue, T.; Suo, J.; Wang, J.; Cao, X.; Dai, Q. Blind Optical Aberration Correction by Exploring Geometric and Visual Priors. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1684–1692.
  8. Gong, X.; Lai, B.; Xiang, Z. A L0 Sparse Analysis Prior for Blind Poissonian Image Deconvolution. Opt. Express 2014, 22, 3860.
  9. Leger, D.; Duffaut, J.; Robinet, F. MTF Measurement Using Spotlight. In Proceedings of the IGARSS ’94—1994 IEEE International Geoscience and Remote Sensing Symposium, Pasadena, CA, USA, 8–12 August 1994.
  10. Zheng, Y.; Huang, W.; Pan, Y.; Xu, M. Optimal PSF Estimation for Simple Optical System Using a Wide-Band Sensor Based on PSF Measurement. Sensors 2018, 18, 3552.
  11. Shih, Y.; Guenter, B.; Joshi, N. Image Enhancement Using Calibrated Lens Simulations. In Computer Vision—ECCV 2012; Lecture Notes in Computer Science; Springer: Berlin, Germany, 2012; Volume 7575, pp. 42–56.
  12. Pandharkar, R.; Kirmani, A.; Raskar, R. Lens Aberration Correction Using Locally Optimal Mask Based Low Cost Light Field Cameras. Imaging Syst. 2010, 3.
  13. Vettenburg, T.; Harvey, A.R. Correction of Optical Phase Aberrations Using Binary-Amplitude Modulation. J. Opt. Soc. Am. A 2011, 28, 429–433.
  14. Patwary, N.; Shabani, H.; Doblas, A.; Saavedra, G.; Preza, C. Experimental Validation of a Customized Phase Mask Designed to Enable Efficient Computational Optical Sectioning Microscopy through Wavefront Encoding. Appl. Opt. 2017, 56, D14–D23.
  15. Doblas, A.; Preza, C.; Dutta, A.; Saavedra, G. Tradeoff between Insensitivity to Depth-Induced Spherical Aberration and Resolution of 3D Fluorescence Imaging Due to the Use of Wavefront Encoding with a Radially Symmetric Phase Mask. In Proceedings of the Three-Dimensional and Multidimensional Microscopy: Image Acquisition and Processing XXV, San Francisco, CA, USA, 29–31 January 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10499, p. 104990F.
  16. González-Amador, E.; Padilla-Vivanco, A.; Toxqui-Quitl, C.; Olvera-Angeles, M.; Arines, J.; Acosta, E. Wavefront Coding with Jacobi–Fourier Phase Masks. In Proceedings of the Current Developments in Lens Design and Optical Engineering XX, San Diego, CA, USA, 12 August 2019; p. 1110405.
  17. Beverage, J.L.; Shack, R.V.; Descour, M.R. Measurement of the Three-Dimensional Microscope Point Spread Function Using a Shack–Hartmann Wavefront Sensor. J. Microsc. 2002, 205, 61–75.
  18. Allen, L.J.; Oxley, M.P. Phase Retrieval from Series of Images Obtained by Defocus Variation. Opt. Commun. 2001, 199, 65–75.
  19. Waller, L.; Tian, L.; Barbastathis, G. Transport of Intensity Phase-Amplitude Imaging with Higher Order Intensity Derivatives. Opt. Express 2010, 18, 12552–12561.
  20. Gureyev, T.; Nugent, K. Rapid Quantitative Phase Imaging Using the Transport of Intensity Equation. Opt. Commun. 1997, 133, 339–346.
  21. Zhang, Y.; Pedrini, G.; Osten, W.; Tiziani, H.J. Reconstruction of Inline Digital Holograms from Two Intensity Measurements. Opt. Lett. 2004, 29, 1787–1789.
  22. Ou, X.; Zheng, G.; Yang, C. Embedded Pupil Function Recovery for Fourier Ptychographic Microscopy. Opt. Express 2014, 22, 4960–4972. [Google Scholar] [CrossRef]
  23. Chung, J.; Martinez, G.W.; Lencioni, K.C.; Sadda, S.R.; Yang, C. Computational Aberration Compensation by Coded-Aperture-Based Correction of Aberration Obtained from Optical Fourier Coding and Blur Estimation. Optica 2019, 6, 647–661. [Google Scholar] [CrossRef]
  24. Shen, C.; Chan, A.C.; Chung, J.; Williams, D.E.; Hajimiri, A.; Yang, C. Computational Aberration Correction of VIS-NIR Multi-spectral Imaging Microscopy Based on Fourier Ptychography. Opt. Express 2019, 27, 24923. [Google Scholar] [CrossRef] [Green Version]
  25. Lizana, A.; Martín, N.; Estapé, M.; Fernández, E.; Moreno, I.; Márquez, A.; Iemmi, C.; Campos, J.; Yzuel, M.J. Influence of the Incident Angle in the Performance of Liquid Crystal on Silicon Displays. Opt. Express 2009, 17, 8491. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Zheng, G.; Ou, X.; Horstmeyer, R.; Yang, C. Characterization of Spatially Varying Aberrations for Wide Field-of-View Microscopy. Opt. Express 2013, 21, 15131–15143. [Google Scholar] [CrossRef]
  27. Trussell, H.; Hunt, B. Image Restoration of Space Variant Blurs by Sectioned Methods. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, Australia, 19–24 April 2005; Volume 3, pp. 196–198. [Google Scholar]
  28. Costello, T.P.; Mikhael, W.B. Efficient Restoration of Space-Variant Blurs from Physical Optics by Sectioning with Modified Wiener Filtering. Digit. Signal Process. 2003, 13, 1–22. [Google Scholar] [CrossRef]
  29. Yuan, L.; Sun, J.; Quan, L.; Shum, H.Y. Image Deblurring with Blurred/Noisy Image Pairs. In ACM SIGGRAPH 2007 Papers; Association for Computing Machinery: New York, NY, USA, 2007; p. 1-es. [Google Scholar]
  30. Neumaier, A. Solving Ill-Conditioned and Singular Linear Systems: A Tutorial on Regularization. SIAM Rev. 1998, 40, 636–666. [Google Scholar] [CrossRef]
  31. Gerchberg, R.W. A Practical Algorithm for the Determination of Phase from Image and Diffraction Pictures. Optik 1972, 35, 237–246. [Google Scholar]
  32. Lei, T.; Waller, L. 3D Intensity and Phase Imaging from Light Field Measurements in an LED Array Microscope. Optica 2015, 2, 104–111. [Google Scholar]
  33. Krishnan, D.; Fergus, R. Fast Image Deconvolution using Hyper-Laplacian Priors. In Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009, Vancouver, BC, Canada, 7–10 December 2009. [Google Scholar]
  34. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  35. Wang, Z.; Bovik, A.C. Mean Squared Error: Love it or Leave it? A New Look at Signal Fidelity Measures. IEEE Signal Process. Mag. 2009, 26, 98–117. [Google Scholar] [CrossRef]
  36. Debevec, P.E.; Malik, J. Recovering High Dynamic Range Radiance Maps from Photographs. In ACM SIGGRAPH 2008 Classes; Association for Computing Machinery: New York, NY, USA, 1997; Volume 97, pp. 1–10. [Google Scholar]
  37. Shannon, C.E. Communication In The Presence Of Noise. Proc. IEEE 1998, 86, 447–457. [Google Scholar] [CrossRef]
  38. Shen, H.; Zhang, L. A MAP-Based Algorithm for Destriping and Inpainting of Remotely Sensed Images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1492–1502. [Google Scholar] [CrossRef]
  39. Krishnan, D.; Tay, T.; Fergus, R. Blind Deconvolution Using a Normalized Sparsity Measure. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 233–240. [Google Scholar]
Figure 1. (a) Schematic of the experimental setup. (b) The sub-aperture M(u, v) loaded on the SLM. Green circles indicate the coverage of each sub-aperture, red dots indicate the center position of each sub-aperture, and yellow arrows indicate each movement of the sub-aperture.
Figure 2. The framework of the proposed method.
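The correction stage of the framework in Figure 2, deconvolving each image patch with its recovered PSF, can be illustrated with a frequency-domain Wiener filter. This is a minimal sketch assuming a spatially invariant PSF within each patch; the function names are ours, and the paper's actual deconvolution may use a different regularizer.

```python
import numpy as np

def _psf_to_otf(psf, shape):
    """Embed a small PSF kernel in a full-size array, centered at the
    origin, and return its FFT (the optical transfer function)."""
    kernel = np.zeros(shape, dtype=np.float64)
    kh, kw = psf.shape
    kernel[:kh, :kw] = psf / psf.sum()
    kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(kernel)

def blur_with_psf(img, psf):
    """Circular convolution of an image with a PSF via the FFT."""
    H = _psf_to_otf(psf, img.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener deconvolution; the scalar noise-to-signal ratio `nsr`
    regularizes the division near zeros of the OTF."""
    H = _psf_to_otf(psf, blurred.shape)
    G = np.fft.fft2(blurred.astype(np.float64))
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

With an accurately measured PSF and low noise, a small `nsr` recovers the latent image almost exactly; in practice `nsr` is raised to match the sensor noise level.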
Figure 3. The algorithm flow chart of the local aberration recovery process.
Figure 4. The algorithm flow chart of the pupil function reconstruction process.
Figure 5. Simulated experimental setup built in CODE V.
Figure 6. Simulation results of aberration correction: (a) original image, reprinted with permission from Synopsys, Inc. (b) Line profiles of the periodic lines in (c). (c) Comparison of the results before and after correction for images with different noise levels; the inset boxes show the measured PSFs used.
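The noise levels in Figure 6c can be reproduced by adding white Gaussian noise scaled to a target SNR. A minimal sketch; the helper name is ours, not from the paper:

```python
import numpy as np

def add_awgn(img, snr_db, rng=None):
    """Add white Gaussian noise so the noisy image has the requested
    SNR in dB relative to the mean signal power."""
    rng = np.random.default_rng(0) if rng is None else rng
    img = img.astype(np.float64)
    signal_power = np.mean(img ** 2)
    noise_power = signal_power / 10.0 ** (snr_db / 10.0)
    noise = rng.normal(0.0, np.sqrt(noise_power), img.shape)
    return img + noise
```

Calling `add_awgn(img, 30.0)`, `add_awgn(img, 40.0)`, and `add_awgn(img, 50.0)` produces the three noise conditions assessed in Table 1.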
Figure 7. Experimental Setup.
Figure 8. Spatially varying aberration calibration and correction results on a WT1005-62 resolution test target. (a) Full-FOV image. The pupil function and PSF of each small region denoted by (b1–e1) varied spatially, as shown in (b3–e3). The deconvolution results (b2–e2) show that the spatially varying aberrations were adequately corrected after processing. The inset frame enlarges part of the sample to show the enhanced resolution. (f) Line profiles through the line pairs in (b1–e1) and (b2–e2) to highlight the aberration correction performance.
Figure 9. Results for the factory image. (a) Full-FOV image captured by our inexpensive industrial lens. The pupil function and PSF of each small region denoted by (b1–d1) varied spatially, as shown in (b4–d4). The deconvolution results (b2–d2) show that the spatially varying aberrations were adequately corrected after processing. (b3–d3) The restored results using blind-estimated PSFs. (b5–d5) Line profiles at the underlined places in (b1–d1) and (b2–d2) to show the improvement in image sharpness before and after correction using the proposed method. (e) The three images on the left are enlargements of the inset frames in (b1–d1), captured as blurred images in different FOVs. The three images in the middle are enlargements of the inset frames in (b2–d2), restored using the proposed method. The three images on the right are enlargements of the inset frames in (b3–d3), corrected using the blind-estimation method.
Figure 10. Results for the building image. (a) Full-FOV image captured by our inexpensive industrial lens. The pupil function and PSF of each small region denoted by (b1–d1) varied spatially, as shown in (b4–d4). The deconvolution results (b2–d2) show that the spatially varying aberrations were adequately corrected after processing. (b3–d3) The restored results using blind-estimated PSFs. (b5–d5) Line profiles at the underlined places in (b1–d1) and (b2–d2) to show the improvement in image sharpness before and after correction using the proposed method. (e) The three images on the left are enlargements of the inset frames in (b1–d1), captured as blurred images in different FOVs. The three images in the middle are enlargements of the inset frames in (b2–d2), restored using the proposed method. The three images on the right are enlargements of the inset frames in (b3–d3), corrected using the blind-estimation method.
Table 1. Quantitative assessment of simulated data experiments.

Index   Images              SNR = 30 dB   SNR = 40 dB   SNR = 50 dB
PSNR    Before correction   15.5182       15.5333       15.5348
        After correction    17.6292       25.8131       29.6136
SSIM    Before correction   0.5558        0.6056        0.6110
        After correction    0.5621        0.7932        0.9164
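The PSNR values in Table 1 follow the standard definition, 10·log10(peak²/MSE). A minimal NumPy sketch follows; SSIM is more involved and is typically computed with `skimage.metrics.structural_similarity`:

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
shifted = ref + 0.1                     # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(ref, shifted), 3))     # 20.0
```

The roughly 10–14 dB gains after correction at 40–50 dB SNR in Table 1 correspond to a reduction of the mean squared error by one to two orders of magnitude.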
Table 2. Quantitative assessment of the real data experiments.

Index   Images              b        c        d        e
CV      Before correction   1.5536   1.3910   1.3207   1.3313
        After correction    2.2419   2.2611   2.3022   2.1944
MRD     Before correction   1.2810   1.2101   1.1657   1.1679
        After correction    1.5678   1.5711   1.5651   1.5513
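This excerpt does not spell out how CV and MRD are defined; common no-reference formulations, consistent with larger values indicating higher contrast after correction, are CV = σ/μ and MRD = mean(|I − μ|)/μ. A sketch under that assumption (the function names are ours):

```python
import numpy as np

def coefficient_of_variation(img):
    """CV = standard deviation / mean; sharper, higher-contrast
    images yield larger values."""
    img = np.asarray(img, dtype=np.float64)
    return img.std() / img.mean()

def mean_relative_deviation(img):
    """MRD = mean absolute deviation from the mean, normalized by the mean."""
    img = np.asarray(img, dtype=np.float64)
    return np.mean(np.abs(img - img.mean())) / img.mean()

blurred = np.array([[0.4, 0.6], [0.6, 0.4]])
sharp = np.array([[0.1, 0.9], [0.9, 0.1]])
print(coefficient_of_variation(blurred) < coefficient_of_variation(sharp))  # True
```

Both metrics are reference-free, which is why they suit the real-data experiments of Table 2, where no ground-truth sharp image exists.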
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Yao, C.; Shen, Y. Optical Aberration Calibration and Correction of Photographic System Based on Wavefront Coding. Sensors 2021, 21, 4011. https://doi.org/10.3390/s21124011
