A Smooth Non-Iterative Local Polynomial (SNILP) Model of Image Vignetting

Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable phenomenon; when it cannot be fully prevented, computational methods must be used to correct it in the acquired image. In the most frequently used approach to vignetting correction, i.e., flat-field correction, the choice of an appropriate vignetting model plays a crucial role. In this article, a new model of vignetting, the Smooth Non-Iterative Local Polynomial (SNILP) model, is proposed. The SNILP model was compared with models known from the literature, e.g., the polynomial 2D and radial polynomial models, in a series of numerical tests and in a real-data experiment. The obtained results show that the SNILP model usually gives better vignetting correction results than the other tested models. For images larger than the UXGA format (1600×1200), the proposed model is also faster than the other tested models. Moreover, among the tested models, the SNILP model requires the least hardware resources for its application. This makes the SNILP model well suited for use in devices with limited computing power.


Problem of Vignetting
Image vignetting is the phenomenon of a reduction in the brightness of an image from its optical center toward its edges. Vignetting is often an unintended and undesired effect, and its characteristics depend on the optical properties of the lens-camera system. Considering the causes of vignetting, there are four main types of this phenomenon [1,2], listed here in the order of their occurrence from the imaged scene to the image sensor: mechanical vignetting, optical vignetting, natural vignetting, and pixel vignetting. Mechanical vignetting refers to the occlusion of the light path by elements of the lens-camera system, such as a filter mounted on a lens. This type of vignetting usually causes a 100% reduction in image brightness, which means a complete loss of information; therefore, it will not be taken into consideration in this article. The next type of vignetting is optical vignetting, which refers to the light fall-off caused by the blockage of off-axis incident light inside the lens body. The amount of blocked light depends on the physical dimensions of the lens, particularly the lens focal length, the diameters of the lens elements, and the lens aperture setting. Natural vignetting refers to the light fall-off related to the geometry of the image-forming system. Pixel vignetting refers to the light fall-off related to the angle-dependent sensitivity of the image sensor pixels.
The effect of vignetting is undesirable in many applications, especially when images must be stitched to create panoramic images [3,4] or mosaic images [5,6], or when radiometric or quantitative analysis of images is required [7,8], e.g., in areas such as microscopy [6,9,10], micro-CT imaging [11], and remote sensing [12,13]. The best way to reduce vignetting is to remove its causes, e.g., by using a lens with appropriate characteristics or by a careful choice of exposure parameters. However, such solutions are not always possible, nor do they always bring the desired result. In such cases, the only solution is to use a software-based vignetting correction method.

Vignetting Correction
Probably the most frequently used approach to vignetting correction is based on the idea of "flat-field correction." In this approach, the vignetting is estimated based on a reference image of vignetting I_V, which presents a uniformly illuminated flat surface with a uniform color. The only source of brightness differences across I_V is the vignetting of the used lens-camera system. The way an image I_V is formed can be described by the following formula:

I_V(i, j) = V(i, j) · I_flat(i, j), (1)

where V is the real vignetting of the analyzed lens-camera system, I_flat is an ideal image of a scene with the reference flat surface, and (i, j) are, respectively, the horizontal and vertical pixel coordinates. The vignetting estimator V̂ is established during an approximation process using the assumed vignetting model VM and the image I_V, i.e.,

V̂ = VM(I_V), (2)

where V̂ ∈ (0, 1]. To obtain an image Î with corrected vignetting from an acquired image I, the following calculation must be performed:

Î(i, j) = I(i, j) / V̂(i, j). (3)

For the best correction results, the images I_V and I should be acquired using the same lens-camera system, the same system parameters (e.g., focal length for zoom lenses), and the same exposure parameters (e.g., the aperture stop number of the lens). In practice, for real-time vignetting correction, (3) is realized using a lookup table.
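A minimal sketch of this flat-field pipeline (in Python/NumPy rather than the MATLAB used later in the article; the model VM is stood in for by a trivial identity fit, and all names and toy data are illustrative):

```python
import numpy as np

def estimate_vignetting(I_V, model_fit):
    """Estimate vignetting from a flat-field reference image I_V.

    model_fit stands for an arbitrary vignetting model VM; the result is
    scaled so that max(V_hat) = 1, i.e., V_hat lies in the range (0, 1].
    """
    V_hat = model_fit(I_V)
    return V_hat / V_hat.max()

def correct_vignetting(I, V_hat):
    """Flat-field correction: I_hat(i, j) = I(i, j) / V_hat(i, j)."""
    return I / V_hat

# Toy example: a smooth fall-off applied to a constant flat-field scene.
M, N = 60, 80
jj, ii = np.meshgrid(np.arange(N), np.arange(M))
V_true = 1.0 - 0.4 * (((ii - M // 2) / M) ** 2 + ((jj - N // 2) / N) ** 2)
I_V = 200.0 * V_true                      # reference image of the flat surface
V_hat = estimate_vignetting(I_V, lambda img: img.astype(float))
I_corr = correct_vignetting(I_V, V_hat)   # correcting I_V itself flattens it
```

In practice the correction would be applied to any image I acquired with the same lens-camera settings, typically via a lookup table.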
Flat-field correction is, of course, not the only approach to vignetting correction. Other approaches are based on, e.g., physically-based models of vignetting [14,15] and the analysis of a sequence of images [1][2][3]16] or a single image [17][18][19][20] of a natural scene to estimate the vignetting. However, the usage of these approaches is connected with many limitations. In the case of methods which use the physically-based approach, detailed information about the parameters of the lens-camera system is needed, which is often unavailable. Additionally, the obtained vignetting model is usually limited to only one type of vignetting. In the case of both aforementioned image-based approaches, the vignetting estimation is usually obtained by finding an optimal value of an objective function for an assumed radial vignetting model; this assumption limits the number of lens-camera systems for which these approaches can be used. Moreover, the effectiveness of image-based methods depends strongly on the precision of the localization of corresponding pixels in the image sequence, which requires the use of an additional correction of lens distortion. It should also be mentioned that the effectiveness of these methods strongly depends on scene uniformity.
In comparison to the aforementioned approaches, the flat-field-based approach is more universal; it can be used for virtually any lens-camera system. Adjusting the flat-field method to the analyzed system is a matter of using a vignetting model that is appropriate for the actual vignetting occurring in the analyzed system, i.e., a model whose assumptions are consistent with the features of the vignetting of the analyzed lens-camera system.
Over the years, different vignetting models based on the idea of parametric approximation have been presented in the literature, e.g., the polynomial model [1,21,22], the exponential polynomial model [21], the hyperbolic cosine model [23], the Gaussian function [24], and the radial polynomial model [25]. The last three models belong to a popular approach which assumes that the actual vignetting V of a lens-camera system has cylindrical symmetry. This means that vignetting can be modeled using a radial function, i.e., the lens-camera vignetting V(i, j) is a function of r, where r is the distance between a pixel p(i, j) and the optical center C (with coordinates (i_C, j_C)) of the lens-camera system. The radial vignetting assumption simplifies the search for the vignetting estimator: the approximation of a 2D function is replaced by the approximation of a 1D function. On the other hand, finding the optical center is not a trivial task [26]; its realization involves additional calculations and is a source of potential errors. Moreover, the usage of a radial function limits the applicability of this approach.
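Under the radial assumption described above, a radial polynomial fit might be sketched as follows (Python/NumPy; the optical center is assumed to be already known here, and all names and toy data are illustrative):

```python
import numpy as np

def radial_polynomial_fit(I_V, center, degree=4):
    """Radial polynomial vignetting model (sketch).

    Assumes cylindrical symmetry: V(i, j) is a function only of the
    distance r between pixel (i, j) and the optical center C, so the 2D
    approximation problem reduces to a single 1D polynomial fit in r.
    """
    M, N = I_V.shape
    ic, jc = center                      # assumed (already found) center
    jj, ii = np.meshgrid(np.arange(N), np.arange(M))
    r = np.hypot(ii - ic, jj - jc).ravel()
    coeffs = np.polyfit(r, I_V.ravel().astype(float), degree)
    V_hat = np.polyval(coeffs, r).reshape(M, N)
    return V_hat / V_hat.max()

# Toy radial vignetting whose center is in the middle of the image.
M, N = 40, 50
jj, ii = np.meshgrid(np.arange(N), np.arange(M))
r2 = (ii - M // 2) ** 2 + (jj - N // 2) ** 2
I_V = 180.0 * (1.0 - 0.3 * r2 / r2.max())
V_hat = radial_polynomial_fit(I_V, center=(M // 2, N // 2), degree=4)
```

Note that the quality of the result depends entirely on how well the assumed center matches the true one, which is exactly the weakness discussed above.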
Choosing a proper vignetting model is not the only problem related to flat-field correction. Another is the preparation of the reference scene, in particular the uniform illumination of the reference surface. It should be emphasized that the quality of the obtained estimator V̂ strongly depends on the luminance uniformity of the reference scene.

Objective of the Work
The assumption of radial vignetting can be used for most lens-camera systems. However, there is a large group of systems for which this assumption is not applicable, e.g., industrial lenses designed with vignetting reduction in mind, perspective-control and focal-plane-control lenses (shift and tilt lenses), and anamorphic lenses. Therefore, there is a need for a universal vignetting model which can be used for different lens-camera systems.
Good examples of such universal solutions are the vignetting models proposed by Sawchuk in 1977 in Reference [21]: the polynomial 2D (P2D) and exponential polynomial 2D vignetting models. In both models, the vignetting estimator is found by solving a multiple regression problem. Another universal solution was proposed in 2004 by Yu [23]; this method is based on the idea of wavelet denoising and decimation. A common feature of the aforementioned methods is their relatively high computational complexity. Due to this feature, these methods are not very suitable for implementation in devices with relatively low computing power or memory resources, such as portable devices, smart cameras, vision sensors, embedded vision systems, and unmanned vehicles. Hence, there is a need to develop a universal vignetting model that combines relatively low computational and memory complexity with high accuracy of vignetting correction.
A possible solution to this problem uses the idea of decomposing the 2D approximation problem into many interrelated 1D approximation problems. This approach was presented in References [27][28][29]. The results obtained so far in this area are briefly described and summarized in Section 2. The aim of the conducted research was to improve the vignetting model based on this idea. The new vignetting model (SNILP) that meets this requirement is presented in Section 3. Section 4 presents the results of an experiment performed to compare the quality of the vignetting correction results obtained from the new model with the results obtained from selected vignetting models described in the literature. In Section 5, comparisons of other features of the compared models are presented. Lastly, Section 6 contains a summary of the research.

Local Parabolic Model
The problem of vignetting estimation is, in fact, the problem of seeking an approximation function of 2D data. As presented in Reference [27], in the form of the Local Parabolic model, the 2D multiple approximation problem for an image of size M × N pixels can be decomposed into M + N simple linear regression problems: each horizontal and each vertical line of I_V is approximated independently by a parabola, and the vignetting estimate at point (i, j) is calculated as the mean of the two line approximations, i.e.,

V̂(i, j) = ( V̂^x_j(i) + V̂^y_i(j) ) / 2,

where V̂^x_j is the parabolic approximation of the j-th horizontal line and V̂^y_i is the parabolic approximation of the i-th vertical line of I_V.

Local Polynomial (LP) Model
The more general Local Polynomial (LP) model of vignetting was proposed in Reference [28]. In this model, the degree s of the polynomial used as the 1D regression function is not limited to 2 but can be freely chosen; hence, V̂^x_j and V̂^y_i are calculated as the least-squares degree-s polynomial approximations of, respectively, the j-th horizontal and the i-th vertical line of the input image, where s is the assumed degree of the regression polynomials. The LP model gives more exact approximation results of vignetting than the Local Parabolic model. However, a closer analysis of the LP model shows that the obtained surface of the vignetting estimator is non-smooth and ragged.
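The LP decomposition can be sketched as follows (Python/NumPy stand-ins for the 1D regressions; function names and toy data are illustrative):

```python
import numpy as np

def fit_lines(I, degree, axis):
    """Fit every line of I along the given axis with a degree-s polynomial."""
    I = np.asarray(I, dtype=float)
    if axis == 1:                       # horizontal lines
        x = np.arange(I.shape[1])
        return np.stack([np.polyval(np.polyfit(x, row, degree), x) for row in I])
    y = np.arange(I.shape[0])           # vertical lines
    return np.stack([np.polyval(np.polyfit(y, col, degree), y) for col in I.T],
                    axis=1)

def lp_model(I_V, degree):
    """Local Polynomial (LP) model: the 2D estimate is the mean of the
    independent row-wise and column-wise 1D polynomial approximations."""
    return 0.5 * (fit_lines(I_V, degree, axis=1) + fit_lines(I_V, degree, axis=0))

# On a separable surface built from quadratics, LP with s = 2 is exact.
M, N = 30, 40
f = 1.0 - 0.5 * ((np.arange(M) - M // 2) / M) ** 2
g = 1.0 - 0.5 * ((np.arange(N) - N // 2) / N) ** 2
I_V = 150.0 * np.outer(f, g)
V_lp = lp_model(I_V, degree=2)
```

On noisy real data, each line fit picks up its own noise realization, which is the source of the ragged surface mentioned above.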

Smooth Local Polynomial (SLP) Model
As a solution to this problem, the Smooth Local Polynomial (SLP) model was proposed in Reference [29]. The core idea of this method is the iterative repetition of the LP model estimation, i.e.,

V̂_k = LP(V̂_{k−1}),

where V̂_k is the result of the k-th iteration of the SLP and, for k = 0,

V̂_0 = I_V,

which is the input image for the SLP model. It follows directly from the definition of the SLP model that, for k = 1, the results obtained from this model are the same as the results obtained from the LP model.
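The SLP iteration can be sketched as follows, with one LP step (the mean of the row-wise and column-wise degree-s fits) as the building block (Python/NumPy; names and toy data are illustrative):

```python
import numpy as np

def lp(I, s):
    """One LP step: mean of the row-wise and column-wise degree-s fits."""
    I = np.asarray(I, dtype=float)
    x, y = np.arange(I.shape[1]), np.arange(I.shape[0])
    rows = np.stack([np.polyval(np.polyfit(x, r, s), x) for r in I])
    cols = np.stack([np.polyval(np.polyfit(y, c, s), y) for c in I.T], axis=1)
    return 0.5 * (rows + cols)

def slp(I_V, s, k):
    """Smooth Local Polynomial (SLP): iterate the LP estimation k times,
    starting from V_0 = I_V; each pass smooths the previous estimate."""
    V = np.asarray(I_V, dtype=float)
    for _ in range(k):
        V = lp(V, s)
    return V

rng = np.random.default_rng(0)
I_V = 100.0 + rng.normal(0.0, 1.0, (20, 25))
V_1 = slp(I_V, 2, 1)    # for k = 1 the SLP result equals the LP result
V_5 = slp(I_V, 2, 5)    # further iterations smooth the surface
```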

Essential Properties of the LP and SLP Models
The following belong to the essential properties of each vignetting model: the accuracy of reproducing the actual vignetting V on the basis of the obtained image or images I_V, the features of the obtained model (such as, e.g., surface smoothness), and the ease of use of the model. Since it is difficult to reliably verify the correctness of the obtained model when determining the vignetting estimate for real lens-camera systems, it was decided to perform numerical simulations.
The same test conditions were used in all simulations presented in this section, as well as in Section 3.2. The simulation conditions are described in detail in Appendix A. For the comparison of vignetting models, lenses with three focal lengths were simulated for the 135 film format, i.e., f = 24 mm, 50 mm, and 300 mm. This selection covers the most important lens focal lengths (converted to the appropriate image formats used in a given application) used in, e.g., photography, film-making, monitoring, machine vision, and remote sensing.
The use of vignetting models based on the idea of image decomposition requires determining the degree of the polynomial that will be used to approximate individual lines of the image. In the presented simulations, the following polynomial degrees were used for the aforementioned focal lengths, respectively: 10, 6, and 2. These degrees were selected because they gave the best vignetting estimation results (in terms of Root Mean Square Error) for the tested focal lengths. To obtain statistically reliable results, the simulations were repeated 100 times for each lens.
The most important property of a vignetting model is probably its accuracy. Thanks to the simulation, the exact shape of the vignetting V is known. Hence, for the evaluation of model accuracy, commonly used difference measures, such as the Mean Absolute Error (MAE) or the Root-Mean-Square Error (RMSE), can be used. Figure 1 presents the values of those measures obtained for the simulated data.
Based on the knowledge about the causes of vignetting, it can be concluded that the vignetting function should have a smooth surface. As previously mentioned, surfaces obtained from the LP model are non-smooth; thus, they differ from the vignetting of real lens-camera systems. Figure 2 presents a comparison of the smoothness of exemplary results obtained from the LP model and from different numbers of iterations of the SLP model. The presented results confirm the significant improvement in smoothness obtained by using the SLP model compared to the LP model.

The smoothness of the surface of the vignetting estimator V̂ can be evaluated not only visually but also using objective measures, such as the Mean Local Standard Deviation (MLSD) and the Mean Local Slope Corrected Deviation (MLSCD). The usage of the MLSD measure for smoothness evaluation is based on the idea of the Local Standard Deviation (LSD), which has been used for decades by the image processing community, in its original [30,31] or slightly modified form, for the evaluation of local changes in images. LSD is calculated in a moving window w_ij of size w_x × w_y, where the coordinates (i, j) define the position of the window center in the analyzed image I of resolution M × N, i.e.,

w_ij = I(i − r_x : i + r_x, j − r_y : j + r_y),

where 2r_x + 1 = w_x, 2r_y + 1 = w_y, and the notation a : b denotes the range of values from a to b with increment +1. In this work, MLSD is calculated as the mean of the LSD values over all admissible positions of the window center, i.e.,

MLSD(I) = (1/|W|) Σ_{(i,j)∈W} σ(w_ij), (12)

where W is the set of admissible positions of the window center and σ(w_ij) is the standard deviation of the pixel values inside w_ij, calculated in relation to their average value w̄_ij. The idea of MLSCD is based on the idea of MLSD with one important modification: in the case of MLSCD, the deviation of the pixel values inside the moving window w_ij is calculated not in relation to the average value w̄_ij of the pixels inside w_ij, as in LSD, but in relation to a local 2D polynomial approximation PolyApp^s_2D(w_ij) of degree s of the image surface covered by w_ij.
This change results in a much lower sensitivity of the MLSCD measure to different slopes of the analyzed surfaces; thanks to this, the proposed measure is better suited to measuring surface smoothness than the MLSD measure. This limitation of MLSD results from the fact that the values of LSD are strongly influenced by the local slope of the analyzed surface; therefore, in the MLSD measure, the fluctuations of surface smoothness are dominated by the changes of pixel values caused by the surface slope. The MLSCD measure is calculated as follows:

MLSCD(I) = (1/|W|) Σ_{(i,j)∈W} σ_s(w_ij), (15)

where W is, as before, the set of admissible positions of the window center and

σ_s(w_ij) = sqrt( (1/(w_x w_y)) Σ_{(a,b)∈w_ij} [ I(a, b) − PolyApp^s_2D(w_ij)(a, b) ]² ). (17)

It is worth noting that, using s = 0 in (17), the MLSCD measure comes down to calculating the value of MLSD. In this work, the window sizes w_x = w_y = 7 were used for both the MLSD and MLSCD measures; in the case of MLSCD, s = 2 was used. To reduce the influence of the border effect associated with the use of a moving window on the smoothness evaluation results, the range of coordinates of the moving window centers used for the calculation of MLSD and MLSCD (Equations (12) and (15)) is smaller than the actual size of the analyzed images.

Figure 3 presents the comparison of the MLSD and MLSCD values obtained for the LP and SLP models (for k = 1, . . . , 30, where, as mentioned earlier, for k = 1, it holds that LP(I_V) = SLP(I_V)) for the simulated vignetting of the three focal lengths f of the simulated lenses (see Appendix A for the experiment details). The presented results show that the SLP model outperforms the LP model at the price of a k times larger computational complexity. At this point, a practical question arises: Is it possible to obtain estimation results of quality similar to those obtained from the SLP model for large k, at a computational cost close to that of the LP model?
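The two smoothness measures can be sketched as follows (Python/NumPy; border handling is simplified to skipping incomplete windows, and the RMS deviation around the local fit stands in for the exact normalization, so this is an illustration of the idea rather than a reference implementation):

```python
import numpy as np

def mlsd(I, w=7):
    """Mean Local Standard Deviation: the mean of the standard deviations
    computed inside a w x w moving window (border positions skipped)."""
    I = np.asarray(I, dtype=float)
    r, (M, N) = w // 2, I.shape
    return float(np.mean([I[i - r:i + r + 1, j - r:j + r + 1].std()
                          for i in range(r, M - r) for j in range(r, N - r)]))

def mlscd(I, w=7, s=2):
    """Mean Local Slope Corrected Deviation: as MLSD, but the deviation in
    each window is measured around a local 2D polynomial fit of degree s,
    which removes the contribution of the local slope; for s = 0 the fit
    is the window mean and the measure reduces to MLSD."""
    I = np.asarray(I, dtype=float)
    r, (M, N) = w // 2, I.shape
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # Design matrix with all monomials x^p * y^q of total degree <= s.
    A = np.column_stack([(x ** p * y ** q).ravel().astype(float)
                         for p in range(s + 1) for q in range(s + 1 - p)])
    H = A @ np.linalg.pinv(A)            # hat matrix of the local fit
    vals = []
    for i in range(r, M - r):
        for j in range(r, N - r):
            win = I[i - r:i + r + 1, j - r:j + r + 1].ravel()
            vals.append(np.sqrt(np.mean((win - H @ win) ** 2)))
    return float(np.mean(vals))

# A tilted plane is perfectly smooth: its slope inflates MLSD,
# while MLSCD stays (numerically) at zero.
plane = np.add.outer(np.arange(30.0), 2.0 * np.arange(30.0))
```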

Model Description
To simplify further considerations, the results of 1D polynomial regression performed along the horizontal and the vertical lines of an image I, i.e., the approximated values of I at point (i, j) calculated from these regressions, are from now on denoted, respectively, as PolyApp^s_x(I, (i, j)) and PolyApp^s_y(I, (i, j)), where s is the degree of the 1D regression polynomial and, later, of the 1D approximation function.
In contrast to the SLP model, in which the approximations of the input image I_V for horizontal and vertical lines, i.e., PolyApp^s_x and PolyApp^s_y of I_V, are calculated independently, in the SNILP model they are calculated in sequence: first the approximation in the horizontal direction is calculated, and then, based on this result, the approximation in the vertical direction is calculated. This procedure can be written formally as

V̂(i, j) = PolyApp^s_y( PolyApp^s_x(I_V), (i, j) ).

All operations needed for the calculation of the SNILP model are linear. Therefore, the sequence of the directions in which the regression is performed does not matter; hence,

PolyApp^s_y( PolyApp^s_x(I_V), (i, j) ) = PolyApp^s_x( PolyApp^s_y(I_V), (i, j) ). (21)

Since, in practice, the acquisition parameters of the image I_V are selected so that no saturation occurs in it, the condition max(V̂) < 1 is fulfilled. Hence, it is necessary to normalize the range of V̂ to the range (0, 1], which is performed as follows:

V̂(i, j) ← V̂(i, j) / max_{(i,j)} V̂(i, j).

From now on, V̂ denotes the vignetting estimate in the normalized range.
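A minimal sketch of the SNILP model, including the final normalization (Python/NumPy in place of the MATLAB polyfit/polyval functions used in the experiment; the toy data are illustrative):

```python
import numpy as np

def snilp(I_V, s):
    """Smooth Non-Iterative Local Polynomial (SNILP) model, sketched.

    Degree-s 1D polynomial approximations are applied in sequence: first
    along every horizontal line of I_V, then along every vertical line of
    the intermediate result. The outcome is normalized so that
    max(V_hat) = 1, i.e., V_hat lies in the range (0, 1].
    """
    V = np.asarray(I_V, dtype=float)
    x = np.arange(V.shape[1])
    V = np.stack([np.polyval(np.polyfit(x, row, s), x) for row in V])
    y = np.arange(V.shape[0])
    V = np.stack([np.polyval(np.polyfit(y, col, s), y) for col in V.T], axis=1)
    return V / V.max()

# Toy flat-field image: a smooth fall-off plus a little acquisition noise.
rng = np.random.default_rng(0)
M, N = 30, 40
jj, ii = np.meshgrid(np.arange(N), np.arange(M))
I_V = 200.0 * (1.0 - 0.3 * (((ii - M // 2) / M) ** 2 + ((jj - N // 2) / N) ** 2))
I_V = I_V + rng.normal(0.0, 0.5, I_V.shape)
V_hat = snilp(I_V, s=2)
```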

Selected Properties of the SNILP Model
Analyzing the properties of the SNILP model, it is worth starting by noting that, in practice, relationship (21) is satisfied only up to a certain error. The occurrence of this numerical error is related to the fact that a change in the order of the directions in which the 1D approximations are calculated (e.g., first along the X axis, and then along the Y axis) may cause some differences in the estimation results. However, on the basis of the tests carried out, the results of which are presented in Table 1 and Figure 4, it can be concluded that, for each repetition, the maximal error arising from a change in the order of the approximation directions was not greater than 2 × 10^−12; such differences are irrelevant, even in the case of 16-bit analog-to-digital converters in a camera image processor. The conditions of the numerical test were identical to those used in Section 2.4.
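The order-independence relation, and the size of the numerical error with which it holds in floating-point arithmetic, can be checked with a short sketch (Python/NumPy; toy data, names illustrative):

```python
import numpy as np

def fit_rows(I, s):
    """Degree-s polynomial approximation of every horizontal line of I."""
    x = np.arange(I.shape[1])
    return np.stack([np.polyval(np.polyfit(x, r, s), x) for r in I])

def fit_cols(I, s):
    """Degree-s polynomial approximation of every vertical line of I."""
    y = np.arange(I.shape[0])
    return np.stack([np.polyval(np.polyfit(y, c, s), y) for c in I.T], axis=1)

rng = np.random.default_rng(1)
I_V = 100.0 + rng.normal(0.0, 1.0, (25, 35))

V_xy = fit_cols(fit_rows(I_V, 2), 2)   # horizontal direction first
V_yx = fit_rows(fit_cols(I_V, 2), 2)   # vertical direction first
err = np.max(np.abs(V_xy - V_yx))      # purely numerical difference
```

Because both orderings compute the same linear projection of the data, the difference is floating-point noise only.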
Idempotence is the property of some mathematical operations or algorithms whereby applying them multiple times to the same data with the same parameters does not change the obtained result. In the case of the SNILP model, it means that

SNILP^k(I_V) = SNILP(I_V),

where k is the number of repeated applications of the SNILP model. In practice, however, because of the presence of numerical errors in the polynomial approximation procedures,

SNILP^k(I_V)(i, j) = SNILP(I_V)(i, j) + ε(i, j),

where ε(i, j) is the error value for point (i, j) related to numerical errors. Figure 5 presents the mean errors ε̄(i, j) calculated for different numbers k of repeated applications of the SNILP model; the values ε̄(i, j) were calculated on the basis of 100 repetitions of the experiment. One additional application (k = 2) of the SNILP model changes, on average, the result for each point (i, j) by no more than 10^−13; such changes have no practical meaning. The influence of numerical errors is, of course, more visible for larger k, but, even for k = 25, the values of ε̄(i, j) are not greater than 10^−11. The maximum difference between RMSE values caused by repeated application of the SNILP model does not exceed 5 × 10^−13. These results show that the application of the SNILP model is an idempotent operation.

Figure 1 presents a comparison of MAE and RMSE values for the SLP (box-plots) and SNILP (orange lines; the solid line is the median) models. The presented plots show that the differences between the results obtained from the SLP and SNILP models gradually decrease with the execution of subsequent iterations of the SLP model. Based on these results, the following supposition can be formulated:

Comparison of the SLP and SNILP Models
SNILP(I_V) = lim_{k→∞} SLP_k(I_V),

i.e., the vignetting estimates obtained from the SNILP model are, for the same degree s of approximation polynomials, asymptotes for the series of estimates obtained from the successive iterations of the SLP model. This supposition is confirmed by the results (Figure 3) of the measurement of the smoothness, using the MLSD and MLSCD measures, of the vignetting estimate surfaces obtained from both models. Comparing the SLP model with the SNILP model, it should be recalled that the result of the SNILP model is obtained in one step, whereas a qualitatively similar result from the SLP model requires k iterations. In the SLP model, for an image of size M × N pixels, one iteration step includes N approximations in the horizontal direction, M approximations in the vertical direction, and MN summations and MN divisions for the calculation of the mean of these approximations. In the case of the SNILP model, the whole calculation requires only N approximations in the horizontal direction and M approximations in the vertical direction. Hence, assuming a similar quality of the results obtained from the SLP and SNILP models and the same polynomial degree s used in both models, the following relationship between the computational complexities of these models holds:

C_SLP ≥ k · C_SNILP, (26)

where C_SLP and C_SNILP denote the computational costs of the SLP and SNILP models, respectively.

Assumptions and Conditions of the Experiment
The aim of the conducted experiment was a comparison, based on real data, of selected vignetting models known from the literature with the SNILP model. The quality of a vignetting model can be evaluated by the analysis of the result image

Î_flat(i, j) = I_V(i, j) / V̂(i, j).

Since the I_V image contains noise, and the estimation result V̂ may differ significantly from the vignetting of a given camera, it is possible that the Î_flat image will contain pixels with values beyond the range [0, 255], i.e., the range of pixel values for the cameras used in the experiment. Therefore, the correction result, i.e., the image Î_flat, before its evaluation, is subjected to the truncation of pixel values according to the formula

Î_flat(i, j) ← min( max( Î_flat(i, j), 0 ), 255 ).

The data analyzed in the experiment were acquired using three different webcams, namely Logitech C920, Xiaomi IMILAB CMSXJ22A, and A4Tech PK-910H (Figure 6), which are denoted in the article as CAM-A, CAM-B, and CAM-C, respectively. The manufacturers of webcams do not provide detailed technical data, such as, e.g., lens focal length or lens speed. However, the webcams used in the experiment were carefully selected to differ in their parameters, particularly the field of view and geometric distortions; a comparison of the main technical data of the webcams is presented in Table 2. Such a selection of cameras allows the experiment to be carried out for possibly different examples of real vignetting.

As the flat-field surface needed for acquiring the vignetting image I_V, a uniformly backlighted milky poly(methyl methacrylate) (PMMA, "plexiglass") panel of size 50 cm × 100 cm and thickness 2.5 mm was used. As the light source, an Asus PA248QV graphic monitor displaying a white screen with brightness set to maximum, i.e., 300 cd/m², was used. The plexiglass panel was carefully positioned parallel to the monitor screen surface.
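The truncation of the corrected image to the valid pixel range can be sketched as follows (Python/NumPy; `np.clip` implements the min/max truncation directly, and the sample values are illustrative):

```python
import numpy as np

def truncate_pixels(I_flat, lo=0.0, hi=255.0):
    """Truncate corrected-image pixel values to the range [0, 255]
    before evaluation, as described above."""
    return np.clip(I_flat, lo, hi)

# Values pushed out of range by noise or an imperfect estimate V_hat.
I_flat = np.array([[-3.2, 12.0], [250.1, 260.7]])
I_trunc = truncate_pixels(I_flat)
```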
In order to position the camera parallel to the monitor screen, each camera was positioned in such a way that the geometric distortions of the obtained reproduction of the test image were symmetric. The test image was displayed on a monitor serving as the panel illumination (Asus PA248QV), and the acquired images were observed in real time on the second monitor. For the time of acquiring the images, the aforementioned plexiglass panel was inserted between the monitor and the used camera.
In order to reduce the noise present in the captured images, the image I_V used in the experiment was obtained as the average of 100 originally captured images I_Ve. Since the aim of the experiment is the evaluation of the ability of the compared vignetting models to fit the real vignetting, there is no need to calculate the vignetting estimate separately for each color channel, i.e., R, G, and B, of the I_V image. Therefore, in the experiment, a single combined image was used as the input image for each camera. The exposure parameters for each camera were set automatically for the first image I_V1, and, for the rest of the images, i.e., I_V2, . . . , I_V100, the same parameters were used. The I_Ve images were acquired in a darkroom so that additional light did not interfere with this process. Additionally, in the case of CAM-A and CAM-B, all signaling diodes with which the cameras are equipped were covered. Unfortunately, due to its design (the signaling diode has the form of a ring around the lens), such an operation was impossible for CAM-C. However, the failure to perform this operation had no significant impact on the acquired I_Ve images due to the relatively high brightness of the imaged panel surface compared to the CAM-C signaling diodes and the quite large distance between the camera and the plexiglass panel.
It is important to notice that, because of the aim of the experiment, eventual small errors in, e.g., the positioning of the monitor, the plexiglass panel, or the cameras, as well as small non-uniformities of the screen illumination, do not influence the qualitative evaluation of the experiment results. Of course, such errors affect the quantitative results of the estimation; however, they do not change the judgment of the ability of the tested vignetting models to find the best approximation V̂ based on the input image I_V, and it is exactly this feature of the vignetting models that is evaluated in the experiment.
In the experiment, four vignetting models were compared, i.e., polynomial 2D (P2D), radial polynomial (RP), smooth local polynomial (SLP), and smooth non-iterative local polynomial (SNILP). The entire experiment (i.e., from image acquisition, through all calculations, up to data presentation) was performed using the MATLAB R2021a software package with the appropriate toolboxes, i.e., the Image Acquisition, Image Processing, and Curve Fitting Toolboxes.
For the implementation of the P2D model, the MATLAB fit function from the Curve Fitting Toolbox, which calculates, among others, 2D polynomial approximations, can be used. However, this function can only be used for the approximation of polynomials with a maximum degree of s = 5; therefore, in the experiment, a custom function which directly uses the ordinary least squares equation for 2D polynomial approximation was written and used. It is important to notice that the custom function has, in general, no limit on the degree s of the approximation polynomial. Moreover, the custom function returns the approximation results faster than the fit function, and the differences between the approximation results obtained from the custom function and the fit function for the data from the experiment are no greater than 10^−10; therefore, from a practical point of view, they are negligible.
The implementation of the RP model uses the MATLAB polyfit and polyval functions, which calculate the parameters of the approximation polynomial and the values of the approximation function using the obtained parameters. A custom function which carries out these tasks was implemented but was no better than the aforementioned MATLAB functions. Additionally, in the case of the RP model, there is a need for finding the optical center C of the lens-camera system based on the I_V image, from which the distance r is calculated (see Section 1.2). In the experiment, C was found by searching for the coordinates of the maximum of a 2D polynomial approximation of degree s = 2 of the I_V image. This approach was compared with others, e.g., a 2D polynomial approximation of degree s = 4 and the use of a smoothing function with a moving window; however, none of the tested approaches gave better results, in the sense of the values of STD(Î_flat) and IQR(Î_flat), than the 2D polynomial approximation with s = 2.
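The optical-center search described above (the maximum of a degree-2 2D polynomial approximation of I_V) can be sketched as follows (Python/NumPy; the closed-form stationary point of the fitted quadratic stands in for an explicit search over the image, and all names and toy data are illustrative):

```python
import numpy as np

def find_optical_center(I_V):
    """Find the optical center C as the maximum of a 2D polynomial
    approximation of degree s = 2 fitted to the whole I_V image."""
    I = np.asarray(I_V, dtype=float)
    M, N = I.shape
    jj, ii = np.meshgrid(np.arange(N), np.arange(M))
    u, v = ii.ravel().astype(float), jj.ravel().astype(float)
    # Degree-2 surface: a + b*u + c*v + d*u^2 + e*u*v + f*v^2.
    A = np.column_stack([np.ones_like(u), u, v, u**2, u*v, v**2])
    a, b, c, d, e, f = np.linalg.lstsq(A, I.ravel(), rcond=None)[0]
    # Stationary point of the quadratic: [2d, e; e, 2f] [u; v] = [-b; -c].
    H = np.array([[2.0 * d, e], [e, 2.0 * f]])
    return np.linalg.solve(H, np.array([-b, -c]))   # (row, column) of C

# Toy vignetting surface with a known off-center maximum.
M, N = 40, 60
jj, ii = np.meshgrid(np.arange(N), np.arange(M))
I_V = 200.0 - 0.05 * (ii - 17.0) ** 2 - 0.03 * (jj - 34.0) ** 2
ic, jc = find_optical_center(I_V)
```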
In the case of the SLP and SNILP models, both were implemented using the polyfit and polyval functions for the 1D approximations and the calculation of the approximated values. The results for the SLP model were obtained for k = 25 iterations.
All models were compared for different degrees s of the approximation polynomials, i.e., for s ∈ {2, . . . , 10}. This range fully covers the values of the approximation polynomial degree used in practice; in the literature, it is recommended to use s from 2, for tasks that do not require precise correction, up to 6 in more demanding applications.

Results of the Experiment
The values of the evaluation measures obtained for the tested models and cameras are presented in Figure 7 and in Tables 3 and 4. Additionally, in Figure 8, a comparison of the normalized I_V images and the obtained estimates V̂ for s = 10 is presented. The normalized images norm(I_V) are calculated on the basis of smooth(I_V), the result of smoothing the I_V image with a 2D Gaussian filter with standard deviation σ = 0.5 and size 10 × 10; the norm(I_V) images are used only for visualization (these images are not used during the calculation of V̂). Figure 9 shows the results of the corrections of the I_V images, i.e., the Î_flat images, obtained for s = 10.
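The Gaussian smoothing used to prepare the norm(I_V) visualizations (σ = 0.5, size 10 × 10) can be sketched as follows (Python/NumPy; the edge padding and direct 2D convolution are implementation choices of this sketch, not details taken from the article):

```python
import numpy as np

def gaussian_kernel2d(size=10, sigma=0.5):
    """2D Gaussian kernel of the given size and standard deviation."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-ax ** 2 / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def smooth(I, size=10, sigma=0.5):
    """Smooth I with a 2D Gaussian filter (edge padding at the borders)."""
    I = np.asarray(I, dtype=float)
    k = gaussian_kernel2d(size, sigma)
    pb, pa = (size - 1) // 2, size // 2
    P = np.pad(I, ((pb, pa), (pb, pa)), mode='edge')
    M, N = I.shape
    out = np.zeros_like(I)
    for u in range(size):                 # direct 2D convolution
        for v in range(size):
            out += k[u, v] * P[u:u + M, v:v + N]
    return out

smoothed = smooth(np.full((15, 15), 7.0))   # a constant image is unchanged
```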

Discussion of the Results
The experiment on real data confirms the results of the numerical tests (Section 3.3), i.e., for a large number k of iterations of the SLP model, the results obtained from the SLP and SNILP models are the same. However, it is important to remember that the computation time needed to obtain the same results is much longer for the SLP model than for the SNILP model.
Comparing the results from all tested models, it can easily be seen that, apart from the results for the IQR measure obtained from CAM-B for s ∈ {2, 3}, the order of the obtained results (regardless of the used measure) is the same: the best results are given by the SLP and SNILP models, slightly worse results by the P2D model, and much worse results by the RP model. It is worth noting that, e.g., for the STD measure and s ≥ 4, the results from the SNILP and SLP models are usually at least two times better than the results obtained from the RP model, which is preferred by many authors.
The STD and IQR measures provide aggregated information about model quality. Additional knowledge about the models is provided by the images presented in Figure 9. Comparing the results for the SLP and SNILP models with the results obtained for the P2D and RP models, it can be seen that the spatial uniformity of the dispersion of values in the Î_flat images is different. For the individual Î_flat images obtained from the SLP and SNILP models, the dispersion of values is spatially uniform, which indicates, assuming that the mean value of the camera noise η_Cam = 0, a good fit of the V̂ estimates to the input images I_V. Following this line of thinking, and assuming that the I_V images were acquired correctly, this means that the V̂ estimates obtained from these models accurately approximate the real vignetting V of the individual cameras. In contrast, in the Î_flat images obtained from the P2D and RP models, there exist regions in which the spatial uniformity is not preserved. This is especially true for the results obtained for CAM-B (Figure 9b,e) and CAM-C (Figure 9c,f). Based on the above assumptions, the occurrence of such areas in the Î_flat image proves that the obtained vignetting estimate V̂ is inaccurately adjusted to the real vignetting V of the tested camera.

Comparison of Other Features of the Tested Vignetting Models
The quality of the vignetting correction results is not the only criterion that should be taken into consideration while choosing a model of vignetting for a specific application. Other criteria include, e.g., the computational complexity (determining the speed of calculation), the required memory size (determining the size of images for which the selected model can be used in the given conditions, e.g., in a smart camera with limited hardware resources), and, last but not least, the minimum amount of data required to save the determined vignetting estimate.
In Table 5, the results of the measurement of the computation times of the compared vignetting models are presented. Additionally, in Figure 10, the obtained results are presented in the form of a log-log graph. The presented results were obtained as the median of 10 measurements for each tested image format and vignetting model. For all vignetting models, the degree s = 5 of their approximation polynomials was chosen. In the case of the RP model, the image optical center was found using the same method as in the experiment (Section 4.1). For the SLP model, the number of iterations k = 10 was chosen. The test vignetting images were obtained using the method presented in Appendix A. The measurements were conducted on a computer equipped with an AMD Ryzen 7 3800X processor, 16 GB RAM, and an M.2 PCIe NVMe SSD disk. The results show that the SNILP model is already the fastest model for images with a resolution equal to or greater than 1600 × 1200. The difference in computation times between the SNILP model and the next-fastest model, i.e., RP, grows with increasing image resolution. It is worth noticing that the results obtained for the SLP and SNILP models confirm relation (26), i.e., that the calculation for the SNILP model is at least k times faster than in the case of using the SLP model.
Table 5. Computation times of the tested models for selected image formats and the degree of approximation polynomial s = 5.

The lower bound of a model's required memory size can be estimated from an analysis of the minimal memory needed for the calculation of the vignetting estimate. In the case of the tested models, the Ordinary Least Squares method is used for this purpose; hence, based on its matrix formulation, i.e., β̂ = (XᵀX)⁻¹Xᵀy, where β̂ is the vector of estimated polynomial parameters, X is the matrix of independent variables, and y is the vector of observations, it can be seen that, in practice, the size of the matrix X determines the amount of needed memory. Let I denote the size of the analyzed image of resolution M × N, where M ≤ N, and let s be the order of the used approximation polynomial. The comparison of the sizes of X is presented in Table 6; the table also contains additional remarks about the memory size needed by each vignetting model. Based on this information, it can be concluded that the SNILP model has the lowest memory requirements among the analyzed models. Table 6. Comparison of the models' required memory sizes.
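The difference in the size of X can be illustrated with a rough element count. The formulas below are a sketch under stated assumptions (a full 2D polynomial builds one row of X per pixel with (s + 1)(s + 2)/2 monomials, while a single 1D fit of degree s along the longer image side needs only max(M, N) rows), not the exact expressions from Table 6:

```python
def x_size_p2d(M, N, s):
    """Elements of X for a 2D polynomial of degree s fitted to all
    M*N pixels: one row per pixel, one column per monomial x^a y^b
    with a + b <= s (there are (s+1)(s+2)/2 such monomials)."""
    return M * N * (s + 1) * (s + 2) // 2

def x_size_1d(M, N, s):
    """Elements of X for a single 1D polynomial fit of degree s along
    the longer image dimension: max(M, N) rows, s + 1 columns."""
    return max(M, N) * (s + 1)
```

For a 1600 × 1200 image and s = 5, the 2D design matrix under these assumptions holds about 40 million elements, while a 1D fit needs fewer than ten thousand, which is why row/column-wise models are so memory-friendly.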

Remarks from Table 6:
RP model: the stage of searching for the optical center of the I_V image can require more memory than the stage of approximating the radial function V. For the approach to optical-center searching used in the experiment above, this statement is true when the condition on s_c is satisfied, where s_c is the order of the 2D polynomial used for the image optical-center search.
SLP model: there is a stage of calculating the vignetting estimate V by averaging (4) the previously calculated values V_x and V_y, which requires additional space of size 2I to keep these variables. Because of the small value of s that is commonly used, the averaging stage usually requires more memory than the 1D approximations needed for the calculation of V_x and V_y.
The determined vignetting estimate can be saved, e.g., in the form of a matrix V of size I or as a set of parameters. The exact number of parameters needed for each model is presented in Table 7. This comparison shows that the SLP model requires the largest number of parameters. The SNILP model requires N(s + 1) fewer parameters but, when compared with the P2D model, and especially with the RP model, the number of parameters required to save the vignetting estimate V is still large. On the other hand, in order to use a vignetting estimate V saved in the form of a set of parameters, it is necessary to carry out appropriate calculations, which require appropriate hardware resources (including memory larger than the I needed to save V in the form of a matrix) and time. Moreover, in applications where real-time vignetting correction is needed, e.g., on-line monitoring or quality control, storing V in the form of a matrix is the best solution. In such cases, the advantage of using a smaller number of parameters, as in the case of the P2D and RP models, is no longer relevant.
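With V stored as a matrix, the real-time correction then reduces to one element-wise division per frame. A minimal Python/NumPy sketch (the clipping to the 8-bit range is an assumption about the target pixel format):

```python
import numpy as np

def correct_vignetting(frame, V):
    """Flat-field correction with a precomputed vignetting matrix V
    (values in (0, 1]); no polynomial evaluation is needed per frame."""
    corrected = frame.astype(np.float32) / V   # element-wise division
    return np.clip(corrected, 0, 255).astype(np.uint8)
```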

Conclusions
Both the numerical and the experimental results prove that the proposed SNILP vignetting model outperforms the SLP model in both the quality of vignetting estimation and the computation speed. The comparison of the results obtained from the SNILP model with those obtained from the P2D and RP models shows that the proposed model provides better vignetting correction, in the sense of the used measures STD(I_flat) and IQR(I_flat), than both models known from the literature.
Summarizing the results presented in this article, it can be stated that the SNILP vignetting model is a universal vignetting model (similar to the P2D and SLP models). However, compared to these models, the SNILP model is characterized by greater accuracy of the obtained result V. It should also be noted that, compared to the widely used RP model, the SNILP model gives better results for s ≥ 2 and much better results for s ≥ 4, and it can be used for the vignetting correction of virtually any lens-camera system (whereas the usage of the RP model is limited due to its use of a radial function). Moreover, for larger images, the results from the SNILP model are obtained in the lowest computation time, and more limited hardware resources suffice for its computation. This last feature is especially important for vision systems with limited computational capabilities, such as smart cameras and sensors, embedded vision systems, or vision systems used in unmanned vehicles.
So far, the vignetting models presented in the literature can be divided into two groups: universal models (e.g., the P2D model), whose usage is not limited to a certain type of vignetting but which require relatively high computing power, and dedicated models (e.g., the RP model), which have relatively low computing-power requirements but limited applicability. Hence, there has been no model that combines universal application with low computational complexity. Based on the results presented in this article, it can be concluded that the proposed SNILP model combines these requirements and, in this sense, is a novel contribution to solving the vignetting correction issue.

Data Availability Statement:
The data presented in this study are available on reasonable request from the corresponding author.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript:

Appendix A. Data Preparation for Numerical Tests
Because of the aims of the numerical tests, there is no need for very accurate models of vignetting V; therefore, as the vignetting function V, the relatively simple cos⁴ model of off-axis illumination [15] was adopted. The original formula was modified for easier calculation of V for different resolutions of the analyzed images. Hence, the vignetting function V used in the tests is defined as follows:

V(i, j) = cos⁴(arctan(r D / (2 f))), (A2)

where V(i, j) ∈ (0, 1], f is the focal length of the lens in millimeters, D is the length of the diagonal of the simulated sensor in millimeters, and r is the normalized distance from the center (i_c, j_c) of an image to the image pixel with coordinates (i, j) (where i ∈ {1, …, M} and j ∈ {1, …, N}), calculated as follows:

r(i, j) = √((i − i_c)² + (j − j_c)²) / √(i_c² + j_c²). (A3)

The obtained vignetting V is a radial function.
To simulate non-radial vignetting, it is enough to slightly modify (A3), i.e.,

r(i, j) = √(α (i − i_c)² + (j − j_c)²) / √(i_c² + j_c²), (A4)

where, for α = 1, V(i, j) is a radial function, and, for α ≠ 1, V(i, j) is a non-radial function; e.g., for α > 1, the vignetting in the vertical (y) direction is stronger than in the horizontal (x) direction. To simulate a lens with an optical center shifted in relation to the geometrical center of the image, it is enough to replace in (A3) or (A4) the coordinates (i_c, j_c) of the geometrical image center with the coordinates (i_oc, j_oc) of the simulated optical center of the lens. For the tests whose results are presented in Sections 2 and 3, the values of D and f were chosen according to the 135 (35 mm) film standard; hence, D = 43.267 mm. The tests were carried out for three values of the lens focal length f, i.e., 24 mm, 50 mm, and 300 mm. Such a choice of focal lengths covers (after conversion taking into account the sensor size used in the application) the most frequently used focal-length range. The results presented in the article were obtained for images of resolution 1280 × 1024, α = 1 (radial vignetting), and coordinates of the optical center consistent with the coordinates of the image geometric center.
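A possible implementation of the simulated vignetting described above, written as a Python sketch; the normalization of r by the image half-diagonal and the placement of the anisotropy factor α on the vertical term are assumptions of this sketch:

```python
import numpy as np

def vignetting_cos4(M, N, f, D, alpha=1.0):
    """cos^4 off-axis illumination model on an M x N pixel grid.

    f     : lens focal length [mm]
    D     : simulated sensor diagonal [mm]
    alpha : anisotropy factor; alpha == 1 gives a radial function,
            alpha > 1 strengthens the vertical fall-off (assumption)
    """
    ic, jc = (M + 1) / 2, (N + 1) / 2                  # geometric image center
    i, j = np.mgrid[1:M + 1, 1:N + 1]
    # normalized distance from the center (assumption: half-diagonal norm)
    r = np.sqrt(alpha * (i - ic) ** 2 + (j - jc) ** 2) / np.hypot(M / 2, N / 2)
    theta = np.arctan(r * D / (2 * f))                 # off-axis angle
    return np.cos(theta) ** 4                          # V(i, j) in (0, 1]
```

For example, vignetting_cos4(1024, 1280, f=24, D=43.267) produces the strongest corner fall-off of the three tested focal lengths, since the off-axis angle grows as f decreases.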
A real vignetting image I_V contains noise; therefore, in the simulation, the vignetting function V is contaminated with a Gaussian mixture noise as follows:

I_V(i, j) = κ · 255 · (V(i, j) · (1 + N_{0, σ_multi}) + N_{0, σ_add}), (A5)

where N_{μ, σ} represents noise with a normal distribution with mean μ and standard deviation σ. The parameters σ_multi and σ_add are the standard deviations of, respectively, the multiplicative and additive components of the mixture noise; the mean values of both noise components are equal to 0, and κ is a scaling factor that simulates the exposure correction used during the acquisition of real I_V images to prevent saturation in the captured images. The test images were created for σ_multi = 10%, σ_add = 5%, and κ = 0.8. The last stage of the creation of an I_V image is the truncation of its values to the range [0, 255].
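The noise contamination described above can be sketched as follows; the scaling by 255 and the interpretation of σ_multi and σ_add as fractions of the full 8-bit scale are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)                 # fixed seed for repeatability

def simulate_I_V(V, sigma_multi=0.10, sigma_add=0.05, kappa=0.8):
    """Contaminate a vignetting function V (values in (0, 1]) with a
    multiplicative/additive Gaussian noise mixture, apply the exposure
    scaling kappa, and truncate to the 8-bit range [0, 255]."""
    noise_m = rng.normal(0.0, sigma_multi, V.shape)    # multiplicative part
    noise_a = rng.normal(0.0, sigma_add, V.shape)      # additive part
    I_V = kappa * 255.0 * (V * (1.0 + noise_m) + noise_a)
    return np.clip(I_V, 0.0, 255.0)
```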