PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems

Abstract: Ultra-wide field of view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. Describing UWFOV imaging systems, such as microscopy optics, security camera systems and other special space-variant imaging systems, is a difficult task that can be accomplished by estimating the Point Spread Function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using a Zernike polynomial wavefront description. The PSF estimation algorithm obtains the field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system, using an iterative approach with an initial estimate of the fitting parameters to ensure robust convergence. The method is promising as an alternative to the standard approach based on Shack–Hartmann interferometry, since the aberration coefficients are estimated directly in the image plane. The approach is tested on simulated and laboratory-acquired image data, which generally show good agreement, and the resulting data are compared with the results of other modeling methods. The proposed PSF estimation method models the optical system to an accuracy of around 5%.


Introduction
Modern photonic imaging systems, used in astronomy and in other areas of science, are required to perform with high imaging quality, a wide aperture (i.e., low F-numbers), and high spatial resolution. These systems are known as UWFOV systems. The wide-angle optical elements used in UWFOV systems introduce a significant shift variance [1], which complicates the modeling task. Many recent research works have focused on finding a model of UWFOV systems that is suitable for reducing the uncertainties in reconstructing image data from wide-field microscopic systems [2–4], security cameras [5–7], and all-sky cameras [8–15]. Any imaging system can be described by its spatial impulse response, commonly referred to as the Point Spread Function (PSF). The PSF is used to represent the aberrations of UWFOV devices, and can be applied either to the entire image or within regions uniquely defined by the isoplanatic angle. However, it can be complicated to obtain a PSF model of the system, and it can be difficult to achieve the desired accuracy using conventional methods: not from the measurement point of view, but because it is difficult to describe the space variance (SV) of the PSF over the Field of View (FOV). From the measurement point of view, it is necessary to obtain the PSF for all discrete points of the entire FOV; the PSF of the system is then described by such a field of individual PSFs. Born [16] calls this type of PSF partially space-invariant, because it consists only of discrete points (PSFs) in parts of the FOV. A space-variant PSF, in contrast, is described by a model at all points of the acquired image. Of course, this places high demands on the precision of the model: issues associated with sampling, pixel size and the pixel sensitivity profile must be taken into account in the design. It is very difficult to obtain the parameters of the model, especially in image areas with an aberrated impulse response and for objects with a small Full Width at Half Maximum (FWHM) in relation to the pixel size. Such objects, captured, e.g., in microscopy, by all-sky cameras, and/or with special optical elements, can hardly be processed using current techniques, which are typically oriented to multimedia content.
A widely used approach for obtaining the PSF is to model the wavefront aberrations using Zernike polynomials. These polynomials were introduced by Zernike in [17]. This work was later used by Noll [18], who introduced an indexing scheme that is widely used in the design of optics. Zernike polynomials are usually applied to rotationally symmetric optical surfaces, and they form a basis in optics together with the field-dependent wave aberration of Hopkins [19]. Recently, Sasián [20] reformulated the work of Hopkins; the new formulation is more practical for optical design. A more recent contribution by Ye et al. [21] uses Bi-Zernike polynomials, which are an alternative to the method mentioned in Gray's article [22].
Zernike polynomials are used in modern interferometers for describing wave aberrations, and researchers have begun to consider using these polynomials for an overall description of lens optics. When the first all-sky camera projects appeared, the question arose of how to model the aberrations of fisheye lenses. Weddell [23] proposed a partially space-invariant model of the PSF based on Zernike polynomials. Another all-sky project, Pi of the Sky [1], focusing on gamma-ray bursts, also attempted to build a PSF model. Their approach was quite different from Weddell's [23]: they first calculate the dependency of the polynomial coefficients on the angle to the optical axis for several points, and then interpolate these coefficients over the entire FOV. The WILLIAM project [24] (WIde-field aLL-sky Image Analyzing Monitoring system) faces the issue of aberrations at an extremely wide FOV (see also [25,26]). Gray [22] proposed a method for describing the spatial variance of Zernike polynomials. This approach is derived from the description provided by Hopkins [19], and laid the foundations for truly space-variant Zernike polynomials. Of course, there is also the problem of the space-variant PSF; in fact, we cannot be limited to rotationally symmetric imaging systems only. Hasan [27] and Thompson [28] came up with a more general description of the pupil function for elliptical or rotationally non-symmetric Zernike polynomials. This description can be used for calculating the wave aberration in optical materials such as calomel for acousto-optical elements [29]. The first optical approach to wave aberration estimation was described by Páta et al. [30].
The search for the best description of UWFOV systems is not limited to astronomical imaging. Microscopy is another area where the UWFOV type of lens is used. Measurements of the special spherical aberration using a shearing interferometer were described in [31]. A promising new approach was formulated in [32]; it is based on aberration measurements of photolithographic lenses with the use of hybrid diffractive photomasks. The aperture exit wavefront deformation is modeled in [33] for wide field-of-view fluorescence image deconvolution with aberration estimation from Fourier ptychography.
UWFOV cameras are also used in surveillance systems, where an image affected by aberrations can have a negative effect in criminal investigations [6]. Ito et al. [5] face this issue and propose an approach that estimates a matrix of coefficients of the wavefront aberrations. Many authors have reported on investigations of space-variant imaging systems, where an estimate of the PSF field dependency is critical for restoring the degraded image. Qi and Cheng proposed a linear space-variant model [34] and focused on the restoration algorithm for imaging systems. Heide et al. [35] proposed techniques for removing aberration artifacts using blind deconvolution for an imaging system with a simple lens.
In our work, we propose a modeling method for PSF estimation for space-variant systems. Since we would like to use this model for general optical systems, the method is based on modeling the PSF of the system without knowledge of the wavefront. Thus, the method can be an alternative to the Shack-Hartmann interferometer [4,36], or to other direct wavefront measurement methods, since we can estimate the wavefront aberrations by fitting the PSF in the image plane. The following section begins with a description of wavefront aberrations.

Wavefront Aberration Functions
An ideal diffraction-limited imaging system transforms a spherical input wave with an inclination equal to the plane wave direction. As described by Hopkins [19], real imaging systems convert the input plane wave into a deformed wavefront. The difference between the ideal and the aberrated wavefront in the exit aperture can be expressed as

W(x, y) = W_ab(x, y) − W_sp(x, y),    (1)

where the W(x, y) are wavefronts, i.e., surfaces of points characterized by the same phase. Figure 1 illustrates the shape of the ideal spherical wavefront W_sp(x, y) and the aberrated wavefront W_ab(x, y).
It is useful to introduce the coordinate system illustrated in Figure 2. The object plane is described by the (ξ, η) coordinates, the exit pupil uses the (x, y) notation, and the image plane uses the (u, v) notation. We then introduce the normalized coordinates (x̂, ŷ) at the exit pupil (see Figure 2b,c) and the polar coordinates (ρ, θ). The normalized image plane coordinates are (û, v̂) and (H, ϕ), respectively.

The aberrated wavefront can be represented by a set of orthonormal basis functions known as Zernike polynomials. Zernike polynomials, which are described in [17] and in [37–40], are a set of functions orthogonal over the unit circle, usually described in polar coordinates as Z_n^m(ρ, θ), where ρ is the radius and θ is the angle with respect to the x-axis in the exit aperture (see Figure 2b). They represent functions of optical distortions that classify each aberration using a set of polynomials. The set of Zernike polynomials is defined in [17], with other adoptions in [37–40], and it can be written as

Z_n^m(ρ, θ) = N_n^m R_n^|m|(ρ) cos(mθ)   for m ≥ 0,
Z_n^m(ρ, θ) = N_n^|m| R_n^|m|(ρ) sin(|m|θ)   for m < 0,    (2)

with n describing the power of the radial polynomial and m describing the angular frequency.
Here,

N_n^m = sqrt( 2(n + 1) / (1 + δ_m0) )    (3)

is the normalization factor, with the Kronecker delta δ_m0 = 1 for m = 0 and δ_m0 = 0 for m ≠ 0, and

R_n^|m|(ρ) = Σ_{s=0}^{(n−|m|)/2} [ (−1)^s (n − s)! / ( s! ((n + |m|)/2 − s)! ((n − |m|)/2 − s)! ) ] ρ^(n−2s)    (4)

is the radial part of the Zernike polynomial.
Any wavefront phase distortion over a circular aperture of unit radius can be expanded as a sum of the Zernike polynomials,

W(ρ, θ) = Σ_{n=0}^{k} Σ_{m=−n}^{n} A_n^m Z_n^m(ρ, θ),    (5)

which can be rewritten using Equation (2) as

W(ρ, θ) = Σ_{n=0}^{k} [ A_n^0 N_n^0 R_n^0(ρ) + Σ_{m>0} N_n^m R_n^m(ρ) ( A_n^m cos(mθ) + A_n^{−m} sin(mθ) ) ],    (6)

where m takes the values −n, −n + 2, . . ., n, k is the order of the polynomial expansion, and A_n^m = A_n^m(H, ϕ) is the coefficient of the Z_n^m mode in the expansion, i.e., it is equal to the root mean square (RMS) phase difference for that mode. The wavefront aberration function across the field of view of the optical system can then be described as

W(H, ϕ; ρ, θ) = Σ_{n=0}^{k} Σ_{m=−n}^{n} A_n^m(H, ϕ) Z_n^m(ρ, θ),    (7)

where the wavefront is described with normalized polar coordinates. A better understanding can be provided by Table A1 in Appendix A and by Figure 3, which describe the indexing of the Zernike polynomials and the kind of aberration associated with each index. Table A1 also includes the field dependency of these polynomials.
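As a numerical illustration of the expansion above, the following Python sketch (illustrative code, not the authors' implementation; the function names are ours) evaluates a Zernike polynomial from its radial part and normalization factor, and sums a coefficient-weighted expansion:

```python
import math

def zernike(n, m, rho, theta):
    """Evaluate the Zernike polynomial Z_n^m at polar pupil coordinates (rho, theta)."""
    am = abs(m)
    # Radial part R_n^{|m|}(rho), Equation (4)
    R = 0.0
    for s in range((n - am) // 2 + 1):
        R += ((-1) ** s * math.factorial(n - s)
              / (math.factorial(s)
                 * math.factorial((n + am) // 2 - s)
                 * math.factorial((n - am) // 2 - s))) * rho ** (n - 2 * s)
    # Normalization factor with the Kronecker delta for m = 0, Equation (3)
    N = math.sqrt(2.0 * (n + 1) / (1.0 + (1.0 if m == 0 else 0.0)))
    # Angular part: cosine for m >= 0, sine for m < 0
    if m >= 0:
        return N * R * math.cos(m * theta)
    return N * R * math.sin(am * theta)

def wavefront(coeffs, rho, theta):
    """Sum A_n^m * Z_n^m over the expansion; coeffs maps (n, m) -> A_n^m."""
    return sum(A * zernike(n, m, rho, theta) for (n, m), A in coeffs.items())
```

For example, defocus Z_2^0 evaluates to sqrt(3)·(2ρ² − 1), so `zernike(2, 0, 1.0, 0.0)` is approximately 1.732.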

Field Dependency of the Wavefront
A description of the wavefront aberration of a circular, rotationally symmetric optical system can be adopted from [22], originally from [19]:

W(H, ϕ; ρ, θ) = Σ_p Σ_n Σ_m W_klm H^k ρ^l cos^m(θ − ϕ),    (8)

where k = 2p + m and l = 2n + m. The symbol W_klm is used for the expansion coefficients; the coordinates are defined in Figure 2c. The field dependency of A_n^m(H, ϕ) can be solved by comparing the wavefront aberrations in Equations (7) and (8): the terms cos^m(θ − ϕ) of the expanded Equation (8) are rewritten into terms containing the set of trigonometric functions cos(mθ), sin(mθ), m ∈ N, and the coefficients in this form are compared with the expanded Zernike polynomials (as in Table A1). The resulting coefficients A_n^m(H, ϕ) describing the space variation of the Zernike polynomials are presented in [22].

The coefficients W_klm are then used for describing the aberration of the space-variant optical system. The order of the aberration terms is defined by Hopkins in [19] as

order = (sum of the powers of H and ρ) − 1.    (10)

Table A1 includes the field dependency of these polynomials.

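A minimal sketch of the Hopkins expansion and of the order rule above, assuming the W_klm coefficients are supplied as a dictionary keyed by (k, l, m) (the mapping to the field-dependent Zernike coefficients A_n^m(H, ϕ) tabulated in [22] is beyond this snippet):

```python
import math

def hopkins_wavefront(W, H, phi, rho, theta):
    """Evaluate W(H, phi; rho, theta) = sum of W_klm * H^k * rho^l * cos^m(theta - phi).

    W maps (k, l, m) -> expansion coefficient W_klm.
    """
    return sum(w * H ** k * rho ** l * math.cos(theta - phi) ** m
               for (k, l, m), w in W.items())

def aberration_order(k, l):
    """Hopkins order of a W_klm term: (sum of the powers of H and rho) - 1."""
    return k + l - 1
```

For example, `aberration_order(0, 4)` gives 3 for W_040, i.e., third-order spherical aberration.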

The Space-Variant Point Spread Function
Wide-field optics is usually affected by various aberrations, such as distortion, which make the PSF of the system space-variant, i.e., the PSF changes across the field of view. It is, therefore, necessary to use a complicated model of the system's PSF. Let us begin with a linear space-invariant optical imaging system, whose PSF can be expressed as

h(u, v) = | F{ p(x, y) exp[ j (2π/λ) W(x, y) ] } |²,    (11)

where p(x, y) defines the shape, the size, and the transmission of the exit pupil, and W(x, y) is the phase deviation of the wavefront from a reference sphere. The generalized exit pupil function is described in [16] as

P(x, y) = p(x, y) exp[ j (2π/λ) W(x, y) ].    (12)

An imaging system is space-invariant if the image of a point source object changes only in position, not in functional form [41]. However, wide-field optical systems give images where the point source object changes both in position and in functional form; wide-field systems and their impulse responses therefore lead to space-variant optical systems. If we consider a space-variant system, we cannot use convolution to express the relation between object and image. When computing the SVPSF, we have to use the diffraction integral [6,42]. The SVPSF can then be expressed as

h(u, v; ξ, η) = | (1/(λ² z₁ z₂)) ∫∫ P(x, y; ξ, η) exp{ −j (2π/(λ z₂)) [ (u − Mξ) x + (v − Mη) y ] } dx dy |²,    (13)

with the defined magnification of the system

M = −z₂ / z₁,    (14)

where z₁ is the distance from the object plane to the principal plane and z₂ is the distance from the principal plane to the image plane.
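Equation (11) lends itself to a direct numerical sketch: sample the generalized pupil on a grid and take the squared magnitude of its Fourier transform. The following Python is an illustrative sketch with simplified scaling and units (not the paper's implementation):

```python
import numpy as np

def psf_from_pupil(W, n=256, wavelength=1.0):
    """Compute a (locally space-invariant) intensity PSF as |FFT of the generalized pupil|^2.

    W: wavefront deformation sampled on an n x n grid over the unit pupil,
       in the same length units as `wavelength`.
    """
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    aperture = (x ** 2 + y ** 2) <= 1.0                 # circular exit pupil p(x, y)
    pupil = aperture * np.exp(1j * 2 * np.pi / wavelength * W)  # generalized pupil P(x, y)
    h = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return h / h.sum()                                   # normalized intensity PSF
```

With W = 0 (an aberration-free system), the result is an Airy-like pattern peaked at the center of the grid.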

Proposed Method
The method proposed in this paper is based on modeling the PSF of the system and comparing it with real image data, without requiring measurements of the wavefront aberrations. The fitting of the model takes place in the image plane.
There are three conditions for acquiring the set of calibration images. The first criterion is that the FWHM of the image of the light source should be sufficient for PSF estimation; the experimental results show that an FWHM size greater than 5 px is enough for the algorithm. The FWHM of the diffractive image of the source should be smaller than 1 px; otherwise the diffractive function has to be added to the model of the system. The image of the 200 µm pinhole has an FWHM size of 0.6 µm, which is less than the pixel size of our camera. The influence of the source shape has to be taken into account. The second criterion is the Signal-to-Noise Ratio (SNR), which should be greater than 20 dB; Section 6 contains a comparison of results obtained with different SNR. The third criterion is that the number of test images depends on the space variance of the system, i.e., a heavily distorted optical system will require more test images to satisfy this criterion. The distance between two different PSFs has to be chosen so that the difference between the neighboring PSFs remains observable (the condition of Equation (15)), where f_i and f_{i+1} are the images of the point light source in the image plane, and M and N are the sizes of the f_i and f_{i+1} images. In our example, a grid of 24 positions of the PSFs in one quadrant of the optical system is sufficient to estimate the model. The wavefront is modeled using Zernike polynomials and known optical parameters. In addition to the input image data, we need to know camera sensor parameters, such as the resolution and the size of the sensor, and optical parameters, such as the focal length (crop factor, if applicable), the F-number and the diameter of the exit pupil. The obtained model of the PSF of the optical system is based on the assessment of differential metrics.

We can describe the modeling of real UWFOV systems as a procedure with three main parts: the optical part, the image sensor, and the influence of the sensor noise. The space-variant impulse response h(u, v, ξ, η) can include the influence of the image sensor (e.g., pixel shape and sensitivity profile, noise or quantization error). The sensor has square pixels with uniform sensitivity. The PSF of the imaging system can then be described as

h(u, v, ξ, η) = h_sen(u, v) * h_opt(u, v, ξ, η),    (16)

where h_sen(u, v) is the PSF of the sensor, h_opt(u, v, ξ, η) is the PSF of the optical part, and the symbol * denotes convolution. h_opt(u, v, ξ, η) can be calculated from the system parameters and the wavefront deformations using the Fourier transform, as described in Equation (13). The wavefront deformation is modeled using Zernike polynomials for the target position in the image plane (see Equation (7)). Ultra-wide field images typically have an angle-dependent PSF; high orders of Zernike polynomials are therefore used in the approximation. The wavefront approximation is used up to the 8th order, plus the 9th-order spherical aberration of the expansion function. The field dependence of the coefficients is formulated in [22]. In our work, the set of field-dependent coefficients was expanded up to the 8th order plus the 9th-order spherical aberration. The table of the W_klm coefficients used is attached in Appendix A as Table A1.
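Under the stated assumption of square pixels with uniform sensitivity, the convolution with h_sen reduces to integrating an oversampled optical PSF over pixel-sized blocks. A small sketch (illustrative function name and oversampling factor):

```python
import numpy as np

def sensor_psf(h_opt, oversample=4):
    """Apply h_sen to an oversampled optical PSF: integrate over square pixels with
    a uniform sensitivity profile, i.e., h = h_sen * h_opt sampled at the pixel pitch.

    h_opt is assumed to be sampled `oversample` times finer than the pixel pitch.
    """
    m, n = h_opt.shape[0] // oversample, h_opt.shape[1] // oversample
    h = h_opt[:m * oversample, :n * oversample]
    # Block-summing is the convolution with a unit-sensitivity square pixel,
    # evaluated at the pixel centers; total flux is conserved.
    return h.reshape(m, oversample, n, oversample).sum(axis=(1, 3))
```

A delta-like optical PSF therefore maps onto the single pixel covering it, with its energy preserved.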
Let us assume an imaging system with an unknown aberration model. With this system we obtain a grid of K test images {f_i}, i = 1, . . ., K, of a point light source covering the entire FOV. Then let f_d be the d-th realization of the model at the corresponding position in the image as the original object. Sub-matrix f_i is the image of the point light source in the image plane, while matrix f_d is the sub-image model computed by our method. The size of the sub-arrays must be sufficient to cover the whole neighborhood of the point light object at the positions {(u_0i, v_0i)}, i = 1, . . ., K. As mentioned above, the symbols û, v̂ are used for image plane coordinates in the normalized optimization space, and u, v are image plane coordinates. Note that f_i and f_d can be located at any point over the entire field of view. The step between the positions of the point light source has to be chosen to cover the observable difference of the acquired point, and these positions play an important role in the convergence efficiency. In our example, we divided the field of view uniformly every 10 degrees, horizontally and vertically, and therefore obtain a matrix of PSFs sufficient for a description of the model. It turned out that a finer division of the FOV is not necessary and does not improve the accuracy of the model, while the chosen division satisfies the condition in Equation (15).
The algorithm uses two evaluation methods. The first is based on calculating the root mean square error (RMSE) over the difference matrix in Equation (21), and on optimizing the W_klm parameters and the Δû, Δv̂ positions to decrease the residuals. The second method subtracts the model matrix from the original to obtain the maximum difference (MAXDIFF); the residuals of this method indicate the deviation from the original matrix. The first method, with the RMSE calculation, provides a better shape description; however, it can end up in local extremes. This can be mitigated by using the MAXDIFF calculation method, which minimizes the local extremes because it focuses on minimizing the maximum difference between the f_i and f_d object matrices, although the output can be a more general shape of the PSF.
Let us now introduce the operators RMSE(f_i, f_d) and MAXDIFF(f_i, f_d), which are used as descriptors of the differences between the original PSF f_i and the model PSF f_d:

RMSE(f_i, f_d) = sqrt( (1/(M·N)) Σ_u Σ_v [ f_i(u, v) − f_d(u, v) ]² ),

MAXDIFF(f_i, f_d) = max_{u,v} | f_i(u, v) − f_d(u, v) |,

where M and N are the sizes of the f_i and f_d sub-arrays. The optimization of the W_klm coefficients uses the Nelder-Mead algorithm, which is described in detail in [43]. Let R(f_i, f_d) be the optimizing operator that minimizes the chosen metric over the variables W_klm, Δû, Δv̂ of the cost function. For multiparameter optimization tasks, the challenge is to find the global minimum of the cost function. Considering this issue, we find an appropriate set of starting parameters for the fitting algorithm by a smart selection of on-axis points (PSFs), from which we obtain appropriate W_klm, Δû, Δv̂ values; we then increase the precision of the model by selecting off-axis points and refining these values. The process of point selection is illustrated in Figure 4, and the steps of the algorithm are as follows. First, select a point placed on the optical axis (this point is considered free of SV aberrations). The W_klm coefficients are optimized by minimizing the RMSE or MAXDIFF metric, which yields W_klm(1), the first realization of the fit, and Δû, Δv̂, the displacement of the object point in the image plane. The next calibration point is placed on the û-axis, next to the first point; all W_klm coefficients from the first point's fit are used as the starting conditions in the next step of the fit.
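The two metrics and the optimizing operator can be sketched as follows (illustrative Python; `fit_psf` stands in for the operator R(f_i, f_d), and `model` is any function mapping the parameter vector to a model PSF f_d):

```python
import numpy as np
from scipy.optimize import minimize

def rmse(fi, fd):
    """Root mean square error over the difference of the two PSF sub-arrays."""
    return float(np.sqrt(np.mean((fi - fd) ** 2)))

def maxdiff(fi, fd):
    """Maximum absolute difference between the measured and the model PSF."""
    return float(np.max(np.abs(fi - fd)))

def fit_psf(fi, model, x0, metric=rmse):
    """Minimize metric(fi, model(p)) over the parameter vector
    p = [W_klm..., du, dv] with the Nelder-Mead simplex method."""
    res = minimize(lambda p: metric(fi, model(p)), x0, method="Nelder-Mead")
    return res.x, res.fun
```

The choice of `metric` switches between the shape-oriented RMSE fit and the extreme-suppressing MAXDIFF fit described above.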
In the next step, we will fit all the points along the û-axis by increasing distance H.
The previous result is used as the start condition for the next point.
Then, we can continue along the v̂-axis by increasing distance H. This procedure gives the first view of the model.
Here, W_klm(d) denotes the d-th realization of the fit.
After fitting all the on-axis points, we start to fit all the off-axis points; the example in this paper uses 24 points.
After fitting all the points, we need to evaluate the output W_klm coefficients, which describe the field dependency of our model.
We verified experimentally that the median applied to the set of estimated W_klm coefficients provides better results for the output model than other statistical methods. Thus, we find the median of every W_klm coefficient over all L fit realizations of the used points. This step eliminates extreme values of the W_klm coefficients, which can occur at some positions of the PSF due to convergence issues caused by the sampling of the image, or to overfitting effects caused by high-order polynomials. Extreme values indicate that the algorithm found a local minimum of the cost function rather than the global minimum; the values of the W_klm coefficients are then significantly different from the coefficients obtained at the previous position. These variations are given by the goodness of fit.
The output set of W klm coefficients then consists of values verified over the field.
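The median step can be sketched as follows (illustrative Python; each realization is assumed to be stored as a dictionary of fitted W_klm values keyed by (k, l, m)):

```python
import numpy as np

def robust_coefficients(realizations):
    """Combine L per-position fits into one field model: take the median of every
    W_klm coefficient over all realizations, suppressing outliers caused by
    convergence to local minima or by overfitting at individual PSF positions.

    realizations: list of dicts mapping (k, l, m) -> fitted W_klm value.
    """
    keys = realizations[0].keys()
    return {key: float(np.median([r[key] for r in realizations])) for key in keys}
```

A single divergent fit (e.g., a coefficient of 50 among values near 1) is then ignored by the output model.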




As illustrated in Figure 4, the described procedure repeats for every order (as defined in Equation (10)) of the W_klm coefficients. The estimates of the higher-order coefficients (6th and 8th) build on the lower-order coefficients that have already been estimated. If we want to describe our optical system with coefficients up to the 8th order, we first need to obtain the coefficients of the 4th order. It is then necessary to repeat the procedure from Equation (22) to Equation (25), assuming the 4th-order W_klm coefficients as the starting condition; following the whole procedure, we estimate the 6th-order coefficients. Repeating the procedure again with the 6th-order coefficients as the starting condition, we can finally calculate the 35 coefficients of the 8th order. The result is a set of coefficients (their number depends on the order used) related to the optical system, with the field dependency described in Section 3.
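The staged, order-by-order procedure can be summarized in a short sketch (illustrative Python; `fit_point` is a hypothetical helper wrapping the per-position Nelder-Mead fit, and the per-order consolidation takes the middle of the sorted values, i.e., a median-like step as described above):

```python
def fit_by_orders(points, fit_point, orders=(4, 6, 8)):
    """Staged estimation: fit all calibration points with 4th-order coefficients,
    reuse the result as the starting condition for the 6th order, and the
    6th-order result as the starting condition for the 8th order.

    points: calibration PSFs ordered on-axis first, then off-axis (Figure 4).
    fit_point(psf, start, order) -> refined coefficient dict (hypothetical helper).
    """
    start = {}
    for order in orders:
        fits = []
        for psf in points:
            start = fit_point(psf, start, order)   # previous result seeds the next fit
            fits.append(start)
        # consolidate this order over the field before moving to the next order
        start = {key: sorted(f[key] for f in fits)[len(fits) // 2] for key in start}
    return start
```

The returned dictionary holds the final, field-verified coefficient set of the highest order.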

Results
This section is divided into two subsections. Our first goal was to verify the method for estimating the W_klm coefficients; we therefore simulated an artificial imaging system and tested the convergence of the introduced algorithm in finding the proper W_klm parameters. The second subsection involves modeling real imaging systems, with results for different orders of the Zernike polynomials. The input conditions mentioned in Section 5, such as SNR > 20 dB (peak to noise) for all considered PSFs, the number of calibration images, and the FWHM, have to be taken into account before acquiring the calibration images and applying the PSF estimation algorithm.

Numerical Stability Verification
This section deals with verifying the stability of the optimization convergence. The test pattern used for verifying the functionality of the algorithm was an image obtained by generating random W_klm coefficients (i.e., random optical aberrations) and placing PSFs at locations covering the entire field of view (see Figure 5), assuming a rotationally symmetric imaging system. To verify the algorithm, we test the PSFs at the locations marked with red circles. Table 1 summarizes the parameters of the simulated system. Table 2 and Figures 6 and 7 show successive verification of the proposed algorithm by the values of the MAXDIFF and RMSE operators, a fitted curve illustrating the trend of the results, and standard deviation error bars. We can see that the difference between the original and the model is within the order of thousandths; the difference is given only by quantization noise and the sampling of the test pattern. The sampling of the original PSF appears to be a serious issue, and low resolution causes problems with the convergence of the optimization of the W_klm parameters. The resolution of the image may affect the accurate positioning of the PSF: the finer the sampling of the pattern, the more precisely the maximum of the PSF can be located by calculating its center of mass, and precise positioning û, v̂ can then be used in calculating the PSF of the system.
Differences between the original PSF and the model are within the order of thousandths. Thus, the proposed algorithm can find the exact W_klm coefficients used for generating the test pattern illustrated in Figure 5. The graphs mentioned above provide information about the accuracy of the model against the positioning of the PSF in the image. However, it is also useful to examine the accuracy of the model when the original image includes noise. For this reason, we performed a test in which one PSF, in the center of the image, was affected by additional Gaussian noise, with the SNR in the image under test ranging from 28 dB to 9 dB (see Figure 8). We can see that the algorithm works quite well from 28 dB to 20 dB, with MAXDIFF less than 3%. However, when the SNR drops below 20 dB, the error increases rapidly. This is due to the fact that our model focuses on optical aberrations, not on modeling the noise. Figure 9 illustrates a direct comparison between the original PSF, the obtained model, and the difference between them, in relation to the RMSE and MAXDIFF operators. This verification process confirms that the algorithm is able to find the model W_klm parameters closest to the W_klm coefficients used in generating the pattern illustrated in Figure 5.
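The noise test can be reproduced in outline as follows (illustrative Python; we assume the peak-to-noise SNR is defined on amplitude, i.e., SNR_dB = 20·log10(peak/σ), which the text does not state explicitly):

```python
import numpy as np

def add_noise_with_snr(psf, snr_db, rng=None):
    """Add Gaussian noise to a PSF image so that the peak-signal-to-noise ratio
    equals snr_db (peak amplitude over noise standard deviation, in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = psf.max() / 10 ** (snr_db / 20.0)
    return psf + rng.normal(0.0, sigma, psf.shape), sigma
```

At 20 dB, for example, a unit-peak PSF receives noise with a standard deviation of 0.1.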

Experimental Laboratory Results
The second part of the results describes the performance of our method when dealing with real data. The experimental images were obtained with the setup illustrated in Figure 10. A small white LED was used as a light source, together with a pinhole 200 µm in diameter. The image data were acquired with a G2-8300 astronomical camera [44], with a 3358 × 2536 pixel CCD sensor, corresponding to a size of 18.1 × 13.7 mm, which gives a pixel size of 5.39 µm. The camera was equipped with a Sigma 10 mm EX DC HSM fisheye lens [45], which is the diagonal type of fisheye lens, and was mounted on a 2D rotational stage. This configuration, together with an observation distance of 5 m, gives an FWHM size of a PSF of around 6 pixels. Different positions of the light source, covering the entire FOV, were achieved by rotating the camera in both axes. The advantage of this approach is that the same distance is kept between the light source and the camera, assuming that a nodal mounting has been used.
Table 3 summarizes the parameters of the system, which are the same as for the simulated system in Section 6.1. However, it is necessary to take into account that the optical aberrations are different. The graphs in Figures 11–13 and Table 4 show the results of the fit with different orders of the W_klm coefficients, a fitted curve illustrating the trend of the results, and standard deviation error bars. Figure 14 shows a comparison of the results with other PSF modeling methods.
Figure 14 shows a comparison of the results with other PSF modeling methods. It can be seen that the best result is obtained with coefficients up to the 4th order. With higher orders of the coefficients, we obtain slightly worse results, even for the PSFs placed on the optical axis. With increasing image distance, the situation seems to deteriorate and the difference increases. One reason for this is the optimization convergence, which is worse for the more involved W_klm coefficients of the 6th and 8th orders. Another explanation is provided by Figure 15 and Table 5: the absolute difference is slightly worse for higher orders of W_klm coefficients, but the shape of the model, i.e., the precision of the obtained shape of the PSF, is better. The image of the differences in Figure 15 shows that the maximum of the difference is concentrated in a small number of pixels in the central part of the PSF; the model therefore performs well in terms of the shape description. We can conclude that the higher-order (6th and 8th) W_klm coefficients provide better model parameters in terms of the shape description and better localization of aberrations, but sometimes at the cost of a slightly worse absolute difference. Table 4 shows another interesting phenomenon: the results obtained by 8th order fitting are slightly better than those obtained by 6th order fitting. We observed that the 6th order model overfits the aberrations, adding aberrations which are actually not present; the 8th order model, however, compensates for these overfitting effects. Finally, we have to admit that it is not always necessary to use high-order Zernike polynomials, and our results illustrate that a simple solution including only the basic aberrations (up to the 4th order) provides an accuracy of several percent.
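The comparisons above rely on the MAXDIFF and RMSE operators. A minimal sketch of both follows; the exact normalization of MAXDIFF is our assumption (maximum absolute pixel difference as a percentage of the reference peak), since the paper's formula is not reproduced here.

```python
import numpy as np

def maxdiff(psf_ref: np.ndarray, psf_model: np.ndarray) -> float:
    """Maximum absolute difference in percent of the reference peak.

    This is our reading of the MAXDIFF operator used in the text;
    the paper's exact normalization may differ.
    """
    return float(np.max(np.abs(psf_ref - psf_model)) / np.max(psf_ref) * 100.0)

def rmse(psf_ref: np.ndarray, psf_model: np.ndarray) -> float:
    """Root-mean-square error between two PSF stamps of equal shape."""
    return float(np.sqrt(np.mean((psf_ref - psf_model) ** 2)))
```

Under this reading, `maxdiff(ref, model)` returns percentage values comparable to those quoted in the results.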
When we compare our method for estimating the field-dependent PSF with the other approaches mentioned in the introduction, we obtain similar results. Results from all implemented approaches are illustrated in Figure 14. An approach based on the work of Piotrowski [1], labeled the Interpolated model, provides MAXDIFF results from 7.8% to 9.9% over the FOV. Another approach, based on Weddel's method [23] and labeled the Sectorized model, divides the image into smaller invariant parts and provides results similar to the Interpolated model. However, the Sectorized model may fail when the PSF model in one sector does not fit all PSFs inside the sector precisely. This can be seen in Figure 14, where the results for the Sectorized model (marked with a black +) at H = 0.83, 0.88, and 0.92 vary from 8% to 17%. The last method included here fits the PSFs with a Gaussian function [46], which was chosen because it can be used as a model of a diffraction-limited optical system. This approach provides results from 14% to 22%; however, we had not expected very good results from this method, and the Gaussian model was used only as a complement to the space-variant methods. We also implemented an approach that uses the Moffat function [46]. However, the results provided by fitting the Moffat function range from 20% to 40%. We therefore concluded that the Moffat model is inappropriate, and we did not include these results in Figure 14.
Our method includes detailed information about the shape of the PSF of the imaging system. Information about the shape of the PSF can be important for PSF-fitting photometry in astronomy [47], for deconvolution algorithms in microscopy [48,49], and for other applications. Our model of the UWFOV system was successfully used by Fliegel [50] for a comparison of deconvolution methods. In this context, a direct estimate of field-dependent Zernike polynomials brings a new approach to the description of space-variant imaging systems. The method was also used for modeling the WILLIAM all-sky camera [51], where the modeling method faced the issue of the presence of the Bayer mask in the system.
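The Gaussian and Moffat reference fits can be sketched as follows. The profile formulas are standard; the fitting pipeline (nonlinear least squares on a synthetic stamp) is illustrative only and is not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares

yy, xx = np.mgrid[0:25, 0:25]

def gaussian2d(p, x, y):
    """Circular 2D Gaussian: amplitude, center, width."""
    amp, x0, y0, sigma = p
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))

def moffat2d(p, x, y):
    """Standard Moffat profile: amp * (1 + r^2/alpha^2)^(-beta)."""
    amp, x0, y0, alpha, beta = p
    r2 = (x - x0) ** 2 + (y - y0) ** 2
    return amp * (1 + r2 / alpha ** 2) ** (-beta)

# Synthetic "measured" PSF stamp: a Gaussian spot with additive noise.
rng = np.random.default_rng(0)
truth = gaussian2d([1.0, 12.3, 11.8, 2.6], xx, yy)
stamp = truth + rng.normal(0, 0.01, truth.shape)

fit_g = least_squares(lambda p: (gaussian2d(p, xx, yy) - stamp).ravel(),
                      x0=[stamp.max(), 12, 12, 2])
fit_m = least_squares(lambda p: (moffat2d(p, xx, yy) - stamp).ravel(),
                      x0=[stamp.max(), 12, 12, 2, 2])
```

Both fits are space-invariant by construction, which is why they serve only as baselines against the space-variant models discussed above.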



Conclusions
The proposed method for estimating the PSF of an optical system is novel in its direct estimate of the optical aberrations of the system. In our work, we have used findings reported in previous papers devoted to a complex description of optical systems and their aberrations. It is a difficult task to describe a system of this type, and UWFOV systems further complicate the situation, since they are heavily aberrated. This paper has summarized a complex mathematical approach and provides an algorithm for modeling the PSF of space-variant optical systems. The proposed algorithm has been verified with simulated data and has been applied to real image data, showing a model error of around 5%. A comparison of images with different SNR provides MAXDIFF results of around 3% for images with SNR greater than 20 dB. The algorithm has been compared with other space-variant modeling methods and has been shown to be competitive, since our results are better than or equal to those provided by the other modeling methods, which achieve an accuracy of around 8.5%. Our results demonstrate that the approach described here is also suitable for UWFOV systems. The results compare models of imaging systems of the 4th, 6th, and 8th orders of Zernike polynomials and show some benefits of using different orders. The contribution of this paper to the description of aberrations is a method for obtaining the aberration coefficients of an unknown optical system and for using them in a model of the space-variant PSF. The algorithm was used to model the WILLIAM system [51], and the model provided by the approach described here was used in a comparison [50] of deconvolution methods. Further research should address pixel profile sensitivity and sampling methods aimed at improving the way in which the model describes the sensor PSF.

Figure 1 .
Figure 1.The difference between an ideal spherical wavefront and an aberrated wavefront.
the normalization factor with the Kronecker delta function δ_m0, where δ_m0 = 1 for m = 0 and δ_m0 = 0 for m ≠ 0, and
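In the usual textbook convention, the normalization factor referred to above is N_n^m = sqrt(2(n + 1)/(1 + δ_m0)). The sketch below shows it alongside the standard Zernike radial polynomial; both follow common definitions and are illustrative, since the paper's exact indexing scheme is not reproduced here.

```python
import math

def kronecker_delta_m0(m: int) -> int:
    """delta_{m0}: 1 for the rotationally symmetric (m = 0) terms, else 0."""
    return 1 if m == 0 else 0

def zernike_norm(n: int, m: int) -> float:
    """N_n^m = sqrt(2(n+1) / (1 + delta_{m0})), the standard convention."""
    return math.sqrt(2.0 * (n + 1) / (1.0 + kronecker_delta_m0(m)))

def zernike_radial(n: int, m: int, rho: float) -> float:
    """Radial polynomial R_n^m(rho) via the standard finite sum."""
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        total += ((-1) ** s * math.factorial(n - s)
                  / (math.factorial(s)
                     * math.factorial((n + m) // 2 - s)
                     * math.factorial((n - m) // 2 - s))) * rho ** (n - 2 * s)
    return total
```

For example, `zernike_radial(2, 0, rho)` evaluates the defocus term 2ρ² − 1, which equals 1 at the pupil edge (ρ = 1).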


Figure 2 .
Figure 2. Graphical representation of the adopted coordinate systems. Subfigure (a) illustrates all planes, subfigure (b) illustrates the normalized exit pupil, and subfigure (c) illustrates the notation of the normalized image plane.


Figure 3 .
Figure 3. The set of Zernike polynomials used here. Table A1 includes the field dependency of these polynomials.


Using the median of the W_klm coefficients provides better results for the output model than other statistical methods. Thus, we need to find the median of every W_klm coefficient over all fit realizations (the number of realizations is L) of the used points. This step eliminates extreme values of the W_klm coefficients, which can occur at some positions of the PSF due to convergence issues caused by the sampling of the image, or due to overfitting effects caused by high-order polynomials. Extreme values indicate that the algorithm has found some local minimum of the cost function rather than the global minimum; the values of the W_klm coefficients then differ significantly from the coefficients obtained at the previous position. These variations are given by the goodness of fit. The output set of W_klm coefficients then consists of values verified over the field.
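The median aggregation step can be sketched as follows. The array layout (one row per fit realization) is our assumption for illustration.

```python
import numpy as np

def aggregate_wklm(fits: np.ndarray) -> np.ndarray:
    """fits: shape (L, n_coeffs), one row per fit realization.

    Returns the per-coefficient median -- the field-verified W_klm set.
    The median suppresses outlier realizations that converged to a
    local minimum of the cost function.
    """
    return np.median(fits, axis=0)

# Three realizations of four coefficients; the last fit is an outlier
# (a local-minimum solution). Medians: 0.11, -0.30, 0.05, 0.02.
fits = np.array([
    [0.10, -0.31, 0.05, 0.02],
    [0.11, -0.30, 0.04, 0.02],
    [0.95,  0.40, 0.80, 0.60],
])
w_klm = aggregate_wklm(fits)
```

A mean would be pulled toward the outlier row; the median discards it entirely as long as most realizations converge to the global minimum.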

Figure 4 .
Figure 4. Diagram of the proposed algorithm. Note that the W_klm coefficients from the 4th order estimate are used as a start condition for the 6th order estimate, and the 6th order W_klm coefficients are used as a start condition for the estimate of the 8th order W_klm coefficients.
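The warm-start scheme of Figure 4 can be sketched as a staged loop. Here `fit_order` is a hypothetical stand-in for the paper's per-order optimizer, and the per-order coefficient counts are illustrative placeholders, not values from the paper.

```python
from typing import Callable, Dict, List, Sequence

def staged_fit(fit_order: Callable[[int, List[float]], List[float]],
               orders: Sequence[int] = (4, 6, 8),
               n_coeffs: Dict[int, int] = None) -> List[float]:
    """Run the optimizer order by order, seeding each stage with the
    previous solution zero-padded to cover the newly added terms."""
    if n_coeffs is None:
        n_coeffs = {4: 5, 6: 9, 8: 14}  # illustrative term counts only
    w: List[float] = []
    for order in orders:
        start = w + [0.0] * (n_coeffs[order] - len(w))
        w = fit_order(order, start)
    return w
```

Seeding the higher-order fits this way keeps the low-order aberrations near their already-converged values, which is what makes the 6th and 8th order optimizations tractable despite their larger parameter space.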

Figure 5 .
Figure 5. Field of simulated PSFs with randomly generated W_klm coefficients. We consider the simulated imaging system to be rotationally symmetric. The points in the red circle were used for verifying the algorithm.

Figure 6 .
Figure 6. The verification results contain all points used in the verification. The positions of all points are illustrated in Figure 5. MAXDIFF according to the normalized image distance.


Figure 7 .
Figure 7. The verification results contain all points used in the verification. The positions of all points are illustrated in Figure 5. RMSE according to the normalized image distance.


Figure 8 .
Figure 8. The dependence of SNR on MAXDIFF. When SNR is lower than 20 dB, the error of the model increases rapidly.
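The 20 dB threshold can be made concrete with one common SNR definition; the paper's exact definition is not reproduced here, so this is an assumption.

```python
import numpy as np

def snr_db(psf: np.ndarray, noise_sigma: float) -> float:
    """Peak signal-to-noise ratio in dB -- one common convention,
    shown only to make the 20 dB threshold concrete."""
    return 20.0 * np.log10(float(psf.max()) / noise_sigma)
```

Under this convention, a PSF stamp with unit peak and noise σ = 0.1 sits exactly at the 20 dB boundary below which the model error grows rapidly.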

Figure 9
Figure 9 illustrates a direct comparison between the original PSF, the obtained model, and the difference between them in relation to the RMSE and MAXDIFF operators. This verification process was performed to verify whether the algorithm is able to find the model W_klm parameters.


Figure 9 .
Figure 9. The result of fitting. A comparison between the original PSF and the PSF model of the system. The graph on the right shows the intensity of the difference between the original PSF and the model; it indicates a very good result for the goodness of fit.


Figure 10 .
Figure 10. The experimental setup, consisting of a camera stage and a light source with a small aperture.


Figure 11 .
Figure 11. Experimental results for the MAXDIFF difference up to the 4th order. The graph contains results from Table 4 and ten other points according to the normalized image distance H. The points which are not mentioned in Table 4 are selected similarly, as illustrated in Figure 5. The angle ϕ of these points differs from zero. These points are placed at positions covering the entire FOV.


Figure 12 .
Figure 12. Experimental results for the MAXDIFF difference up to the 6th order. The graph contains results from Table 4 and ten other points according to the normalized image distance H. The points which are not mentioned in Table 4 are selected similarly, as illustrated in Figure 5. The angle ϕ of these points differs from zero. These points are placed at positions covering the entire FOV.

Figure 13 .
Figure 13. Experimental results for the MAXDIFF difference up to the 8th order. The graph contains results from Table 4 and ten other points according to the normalized image distance H. The points which are not mentioned in Table 4 are selected similarly, as illustrated in Figure 5. The angle ϕ of these points differs from zero. These points are placed at positions covering the entire FOV.


Figure 14 .
Figure 14. A comparison of different modeling methods for a single optical system. All models were fitted to the same set of PSFs.


Figure 15 .
Figure 15. Results of the fitting. The PSF model is shown with different orders of polynomials. The object was placed at 20° with respect to the optical axis (H = 0.33). In Table 4, this PSF is marked with the blue column. The first row relates to the 4th order results, the second row to the 6th order results, and the third row to the 8th order results.


Table 1 .
The sensor size, the resolution and the optical parameters of the simulated artificial imaging system.

Table 2 .
Selected on-axis point results of the verification of the algorithm, where the normalized distance H is from 0 to 0.83 and ϕ is equal to zero. The image distance H is normalized according to the sensor and is related to an FOV of 110°; all optical parameters are summarized in Table 1.



Table 3 .
The sensor size, the resolution and the optical parameters of the experimental imaging system used for acquiring the image dataset.

Table 4 .
Selected on-axis point results for the MAXDIFF difference between the original and the estimated model, where H is from 0 to 0.83 and ϕ is equal to zero. The MAXDIFF differences of the polynomial orders are given as percentages. Highlighted columns contain results used later in the comparison.

Table 5 .
A direct comparison of results estimated by 4th, 6th and 8th order polynomials. This table is related to Figure 15. The results are calculated for one position of the light source at H = 0.33. The total flux difference was calculated for enumerating the overall intensity difference between the original PSF and the model.
