Image Vignetting Correction Using a Deformable Radial Polynomial Model

Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable effect; therefore, when it is impossible to fully prevent its occurrence, it is necessary to use computational methods for its correction. In probably the most frequently used approach to vignetting correction, that is, the flat-field correction, the use of an appropriate vignetting model plays a pivotal role. The radial polynomial (RP) model is commonly used, but for its proper use, the actual vignetting of the analyzed lens-camera system has to be a radial function. However, this condition is not fulfilled by many systems. There exist more universal models of vignetting; however, these models are much more sophisticated than the RP model. In this article, we propose a new model of vignetting named the Deformable Radial Polynomial (DRP) model, which joins the simplicity of the RP model with the universality of more sophisticated models. The DRP model uses a simple distance transformation and a minimization method to match the radial vignetting model to the non-radial vignetting of the analyzed lens-camera system. The real-data experiment confirms that the DRP model, in general, gives better results (up to 35% or 50%, depending on the measure used) than the RP model.


Introduction
Image vignetting, also called image shading, is a phenomenon in which the image brightness is reduced from the optical center of this image toward its edges. The characteristic of vignetting depends on the parameters, mainly geometrical and optical, of the lens-camera system used and the parameters chosen during image acquisition, such as aperture size (f-number) and lens focal length (in the case of the usage of a varifocal or a zoom lens). Vignetting is usually unintentional and undesired. The appearance of this effect is particularly undesirable when there is a need for radiometric or quantitative image analysis, which is very common in different areas, e.g., astronomy [1,2]; microscopy [3][4][5][6]; and remote sensing applications using terrestrial [7,8], airborne [9][10][11][12][13] and spaceborne sensors [14,15], to name just a few of them. This phenomenon is also undesirable in the case of the use of computational imaging algorithms, such as the creation of high dynamic range (HDR) images [16,17], the stitching of static images to create panoramic [18][19][20] or mosaic images [3][4][5][6][21], as well as a panoramic real-time view [22]. Vignetting also affects the results of image analysis, including the results obtained using neural networks [23,24].
The best way to reduce vignetting is to remove its causes. This aim can be reached by, e.g., using a lens with appropriate characteristics or setting appropriate exposure parameters. However, in practice, such actions are not always possible or may not produce the desired results. In such cases, hardware solutions can be supported by the usage of computational vignetting correction methods. Based on its causes, there exist four main types of vignetting [25,26], that is:
• Mechanical vignetting-light rays are blocked on their way from a scene through a lens to a camera sensor; the result of this is a complete loss of information in certain areas of the image and a lack of data needed for computational vignetting correction methods.
• Optical vignetting-related to the optical characteristic of the used lens; its characteristic can be changed by changing the lens aperture size [27].
• Natural vignetting-refers to the loss of image brightness caused by a change of the viewing angle Θ for individual image pixels; it is modeled by the cos⁴(Θ) law [28].
• Pixel vignetting-related to the geometrical size and optical design (in the case of the use of microlenses) of the image sensor of the camera [29].
The computational vignetting correction methods can be used to correct the last three of them; that is, they cannot be used to correct mechanical vignetting. What is more, there is no easy way to correct each vignetting type independently; however, in practice there is no need to distinguish the causes of vignetting, because all the mentioned types of vignetting (apart from mechanical vignetting, of course) are jointly corrected in a single procedure. It is also worth noting that a vignetting correction whose parameters are very often calculated using reference images acquired from a close distance can be directly used to correct images obtained for distant objects-such a procedure is typical, e.g., for remote sensing applications. This advantage is due to the fact that the actual vignetting characteristics for fixed parameters of the lens-camera system used do not depend on the distance between the imaging system and an object, but on the viewing angle at which this object is visible from this system. Over the years, many different approaches to the vignetting correction problem have been presented in the literature. The vignetting correction methods can be roughly divided into two groups, that is, the group of methods that do not use a reference image or images, and the group of methods that use such data. The methods which belong to the first group use, e.g.:
• Physically-based models of vignetting [28,30]-the usage of these methods requires detailed knowledge about the parameters of the used lens-camera system, which is often unavailable; however, this approach is very useful during the design process of the optical system.
• A single image [31][32][33][34] or a sequence of images [18,19,25,26] of a natural scene or scenes to estimate the vignetting-in the case of these methods, the vignetting estimate is obtained as a result of the minimization of an objective function under the assumption that vignetting is a radial function, which limits the number of lens-camera systems for which these methods can be used. The effectiveness of these methods also depends strongly on many other factors, such as the precision of localization of corresponding pixels and the uniformity of the analyzed scene, which limits their applicability. However, and this is a significant advantage, these methods can be used for already acquired images when the acquisition of new reference images is not possible; such situations are common in the case of, e.g., historical images.
To the second group belong, among others, probably the most frequently used methods, that is, those which are based on the idea of a "flat-field correction" approach.
In the "flat-field" approach, the vignetting is estimated based on a reference image of vignetting I_V, which presents a uniformly illuminated flat surface with a uniform color. This approach assumes that the main source of brightness differences in I_V is the vignetting of the lens-camera system that was used; however, because the I_V image is acquired by a real lens-camera system, this image also contains noise. The formation of the I_V image can be described by the following formula

I_V(i, j) = V(i, j) · I_flat(i, j) + ε(i, j),    (1)

where V is the real vignetting of the analyzed lens-camera system and V(i, j) ∈ (0, 1] (where 1 means no vignetting at pixel (i, j) and 0 means complete blocking of light), I_flat is an ideal image of a scene with a reference flat surface, ε(i, j) is an image noise, and (i, j) are, respectively, the horizontal and vertical pixel coordinates. The vignetting estimate V̂ is established during the approximation process, denoted as approx(·), using the assumed vignetting model VM and the image I_V as follows

V̂ = approx(VM, I_V),    (2)

where V̂(i, j) ∈ (0, 1]. The corrected version of an acquired image I, that is, an image Î, is obtained using the following formula

Î(i, j) = I(i, j) / V̂(i, j).    (3)

For the best correction results, the images I_V and I should be acquired using the same lens-camera system, the same system parameters (e.g., focal length for zoom lenses), and the same exposure parameters (e.g., value of aperture f-number). It should also be mentioned that, for the most accurate vignetting correction results, this procedure should be preceded by a dark frame correction, and the camera used for the acquisition of the I_V and I images should have a linear characteristic, or both images I_V and I should be linearized, that is, the relationship between the R, G, B values of the (i, j) pixel and the luminous flux of light incoming to the corresponding pixels of the camera sensor should be proportional. Adjusting the flat-field method to the analyzed system is a matter of using the vignetting model that is appropriate for the actual vignetting occurring in this system.
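The flat-field procedure above can be sketched in a few lines. Below is a minimal Python illustration (the article's experiments use MATLAB), assuming the simplest possible model in which the normalized reference image itself serves as the vignetting estimate; the toy pixel values are invented for the example.

```python
# Minimal flat-field correction sketch (hypothetical data, no parametric
# vignetting model): the normalized reference image serves as V_hat, and
# the acquired image is divided by it, as in Equations (1)-(3).

def flat_field_correct(I, I_V):
    """Correct image I using the vignetting reference image I_V."""
    peak = max(max(row) for row in I_V)                # brightest reference pixel
    V_hat = [[p / peak for p in row] for row in I_V]   # V_hat in (0, 1]
    return [[I[i][j] / V_hat[i][j] for j in range(len(I[0]))]
            for i in range(len(I))]

# A 3x3 toy example: the reference falls off toward the edges ...
I_V = [[50, 80, 50],
       [80, 100, 80],
       [50, 80, 50]]
# ... and the acquired image shows the same fall-off over a flat scene.
I = [[100, 160, 100],
     [160, 200, 160],
     [100, 160, 100]]

I_corr = flat_field_correct(I, I_V)
# After correction, every pixel of the flat scene has the same value (200).
```

In practice, of course, I_V is noisy, which is why a smooth parametric model is fitted instead of dividing by the raw reference.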
In the literature, different parametric vignetting models have been presented, for example, the polynomial 2D (P2D) model [25,35,36], the exponential 2D polynomial model [35], the smooth non-iterative local polynomial (SNILP) model [37], the radial polynomial (RP) model [38], the hyperbolic cosine model [39], and the Gaussian function [40]. The last three mentioned vignetting models belong to a widely used approach, which assumes that the actual vignetting V of the lens-camera system has a cylindrical symmetry. This means that the vignetting V(i, j) can be modeled using a radial function V(r), that is, a function of r,

r(i, j) = √((i − x_C)² + (j − y_C)²),    (4)

where r is the distance between the pixel with coordinates (i, j) and the optical center C of the lens-camera system, with coordinates (x_C, y_C). The use of the assumption of radial vignetting simplifies the process of searching for the vignetting estimate because the approximation of the 2D function is replaced by a much simpler approximation of the 1D function. The consequence of using a simpler approximation function (vignetting model) is a reduction in the number of parameters needed to determine the vignetting estimate; for example, the P2D model needs (s² + 3s + 2)/2 parameters, whereas the RP model needs only s + 3 parameters, where s is the degree of the approximation function used. The usage of radial vignetting is convenient and therefore very popular; however, because not every lens-camera system satisfies this assumption, this approach to vignetting correction is not universal and should be used carefully. The radial vignetting assumption is not satisfied by a large group of imaging systems, such as industrial lenses designed with reducing vignetting in mind, perspective-control and focal plane-control lenses (shift and tilt lenses), anamorphic lenses, etc.
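The parameter counts quoted above are easy to verify; a short Python check (illustrative only, not part of the article's code):

```python
# Parameter counts from the text: a 2D polynomial of degree s has
# (s^2 + 3s + 2) / 2 coefficients, while the RP model needs s + 1
# polynomial coefficients plus the center (x_C, y_C), i.e., s + 3 values.

def p2d_params(s):
    return (s * s + 3 * s + 2) // 2

def rp_params(s):
    return s + 3

for s in range(2, 7):
    print(s, p2d_params(s), rp_params(s))
# For s = 6, the P2D model needs 28 parameters, while the RP model needs 9.
```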
Analyzing the information presented above, it can be concluded that the literature lacks model proposals that combine the simplicity of use and the small number of parameters characteristic of the RP model with the universality of, for example, the P2D model or the SNILP model. In this article, we present a novel model of vignetting, that is, the Deformable Radial Polynomial (DRP) model, which is our attempt to fill this gap. The idea of the new model is based on the observation that the vignetting in many lens-camera systems is not ideally radial, but is rather a radial vignetting that is "squeezed" in one direction. The contribution of the DRP method is the use of a simple function for distance calculation, which is a slight modification of (4), to transform the non-radial vignetting observed in the image space into radial vignetting in the distance space. This step allows, at the price of adding only one parameter describing the non-radiality of the vignetting, for the use of a radial vignetting model, e.g., a polynomial 1D model as in the case of the DRP model, for modeling the actual non-radial vignetting of a real lens-camera system. To verify this idea, the vignetting correction results obtained from the DRP model were compared with the results obtained using the RP and P2D models, which are well-known from the literature, as well as the state-of-the-art SNILP model. In the comparison, data from five webcams with different vignetting characteristics, including different degrees of non-radiality of their vignetting, were used. The results obtained show that the DRP model, in general, gives better results than the RP model. In some cases, these results are up to 35% or 50% better, depending on the measure used, than the results obtained from the RP model.
The rest of the article is organized as follows. The DRP model of vignetting is described in Section 2. Section 3 presents the conditions of the real-data model comparison, as well as its results. The discussion of the results is given in Section 4. The last section, Section 5, contains the conclusions and suggestions for future research.

The Deformable Radial Polynomial Model of Vignetting
As mentioned above, the concept of estimating vignetting using the RP model is very simple: it is assumed that the vignetting has cylindrical symmetry. Therefore, when analyzing the vignetting problem in the image plane, the vignetting value has rotational symmetry with respect to the optical image center C. This means that the vignetting model also has rotational symmetry, which leads to the use of a radial function as the vignetting model. In such a case, the vignetting value of all pixels that are at the same distance r from C has the same value v(r), and hence the initial 2D problem of finding the vignetting estimate can be solved as a much simpler 1D approximation problem. Thus, when the 1D polynomial function of degree s, denoted as PolyReg_s(·), is chosen as the 1D approximation function, the radial polynomial (RP) model is obtained, which can be formally written as follows

V̂ = PolyReg_s(v(r)).    (5)
To solve this problem, that is, to find the parameters of the approximation polynomial, assuming that v(r) does not contain outliers, the ordinary least squares (OLS) method can be used. When analyzing the images of vignetting I_V (Figure 1), it can be seen that vignetting often is not a truly radial function, but rather a "squeezed" radial function. From this observation, the founding idea of the DRP model has been derived: to use the distance function, which calculates the distance r between the pixel (i, j) and the optical center C with coordinates (x_C, y_C), to transform the non-radial vignetting observed in the image space into radial vignetting in the distance space r. This simple transformation allows the use of a radial vignetting function, which in the case of the DRP model is a 1D polynomial function, for modeling the actual non-radial vignetting of a lens-camera system.
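The 1D OLS fit at the heart of the RP model can be sketched as follows. This self-contained Python illustration fits a polynomial to a synthetic quadratic radial profile; the normal-equations solver and the profile shape are illustrative choices, not the article's implementation (which uses MATLAB's polyfit).

```python
# Sketch of the RP idea: sample a (synthetic) radial profile v(r),
# fit a 1D polynomial by ordinary least squares, and evaluate it.

def polyfit_ols(xs, ys, s):
    """Least-squares polynomial of degree s via the normal equations."""
    n = s + 1
    # Normal equations A c = b with A[k][l] = sum x^(k+l), b[k] = sum y x^k.
    A = [[sum(x ** (k + l) for x in xs) for l in range(n)] for k in range(n)]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    c = [0.0] * n
    for r in range(n - 1, -1, -1):
        c[r] = (b[r] - sum(A[r][k] * c[k] for k in range(r + 1, n))) / A[r][r]
    return c  # coefficients, lowest order first

def polyval(c, x):
    return sum(ck * x ** k for k, ck in enumerate(c))

# Synthetic radial profile v(r) = 1 - 0.3 r^2 (a simple vignetting shape).
rs = [i / 20.0 for i in range(21)]
vs = [1.0 - 0.3 * r * r for r in rs]
coef = polyfit_ols(rs, vs, 2)
# The fit recovers the generating polynomial: coef is close to [1.0, 0.0, -0.3].
```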
The method for a simple realization of the non-radial to radial vignetting transformation is taken from the method of modeling non-radial vignetting presented in [37]. According to the solution presented there, to accomplish such a transformation it is sufficient to slightly modify Equation (4) by a parameter η, as follows

r_η(i, j) = √((i − x_C)² + η(j − y_C)²).    (6)

The η can be treated as a measure of the non-radiality of the vignetting, where, for ideal radial vignetting, η = 1, and for non-radial vignetting, η ≠ 1; e.g., for η > 1, the vignetting in the vertical (y) direction is stronger than in the horizontal (x) direction.
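The effect of the η-modified distance can be seen on a synthetic example: points lying on one elliptical iso-brightness contour of a "squeezed" vignetting map all receive the same transformed radius, so a 1D radial model applies again. A small Python sketch with assumed center and η values:

```python
import math

# Sketch of the eta-modified distance: pixels on one elliptical
# iso-brightness contour of a "squeezed" vignetting map all get the same
# transformed radius r_eta. Center, eta, and contour are illustration values.

def r_eta(i, j, x_c, y_c, eta):
    return math.sqrt((i - x_c) ** 2 + eta * (j - y_c) ** 2)

x_c, y_c, eta = 0.0, 0.0, 2.0
# Points on the ellipse i^2 + 2 j^2 = 1 (an iso-contour when eta = 2).
points = [(1.0, 0.0), (0.0, math.sqrt(0.5)), (math.sqrt(0.5), 0.5)]

radii = [r_eta(i, j, x_c, y_c, eta) for (i, j) in points]
# With the plain Euclidean distance these points lie at different radii;
# with eta = 2 they all map to the same r_eta.
```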
Figure 1. Comparison of the acquired images I_V; the images are presented in order of the increasing non-radiality degree of vignetting, described by the η coefficient ((e) CAM-C, η = 1.2952).
The value of the coefficient η must be estimated. This can be done by solving the following minimization problem

η* = argmin_η Σ_(i,j) [ I_V(i, j) − PolyReg_s(v(r_η(i, j))) ]²,    (7)

for the given coordinates (x_C, y_C) of the optical center C of the image; as mentioned earlier, the parameters p of PolyReg_s(r_η(i, j)) can be effectively found using the OLS method. When the optimal value η* is determined, V̂ is calculated using the parameters p obtained from PolyReg_s(v(r_η(i, j))) for η*. Because the acquisition of I_V should be carried out in such a way that there is no saturation phenomenon in this image, max(I_V) < 1, assuming that the pixel values are represented in the normalized range [0, 1]. This leads to a situation where max(V̂) < 1; hence, it is necessary to normalize the range of V̂ to the range (0, 1], which is performed as follows

V̂ := V̂ / max(V̂),    (8)

where := is the assignment operator. The obtained V̂ is the final result of the vignetting estimate using the DRP model.
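The η search described above can be sketched under simplifying assumptions. In this Python illustration the radial profile is taken to be quadratic in r, so each candidate η needs only a two-parameter OLS line fit on u = r_η², and η is selected by a grid scan rather than fminunc; both are illustrative choices, not the article's implementation.

```python
# Grid-scan sketch of the eta search: for each candidate eta, fit
# v = a + b * u with u = r_eta^2 by closed-form OLS, and keep the eta
# with the smallest sum of squared residuals. All data are synthetic.

def sse_for_eta(pixels, values, x_c, y_c, eta):
    u = [(i - x_c) ** 2 + eta * (j - y_c) ** 2 for (i, j) in pixels]
    mu = sum(u) / len(u)
    mv = sum(values) / len(values)
    b = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, values)) / \
        sum((ui - mu) ** 2 for ui in u)
    a = mv - b * mu
    return sum((a + b * ui - vi) ** 2 for ui, vi in zip(u, values))

# Synthetic non-radial vignetting with true eta = 1.3.
x_c, y_c, true_eta = 8.0, 6.0, 1.3
pixels = [(i, j) for i in range(17) for j in range(13)]
values = [1.0 - 0.002 * ((i - x_c) ** 2 + true_eta * (j - y_c) ** 2)
          for (i, j) in pixels]

grid = [1.0 + 0.05 * k for k in range(11)]          # eta in [1.0, 1.5]
best = min(grid, key=lambda e: sse_for_eta(pixels, values, x_c, y_c, e))
# The scan recovers the generating value: best is 1.3 (zero residual there).
```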
In the algorithm described above, the coordinates of C must be known or at least estimated before the DRP model can be used. However, these coordinates can also be estimated by solving the following multivariate minimization problem

(η*, x_C*, y_C*) = argmin_(η, x_C, y_C) Σ_(i,j) [ I_V(i, j) − PolyReg_s(v(r_(η,x_C,y_C)(i, j))) ]².    (9)

In this case, the final value of V̂ is calculated analogously to the previous variant of the DRP model, that is, using the parameters p of PolyReg_s(v(r_(η,x_C,y_C)(i, j))) estimated for the obtained η*, x_C*, y_C*, and then normalized according to (8).
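The joint minimization over (η, x_C, y_C) described above can be illustrated in the same spirit. This Python sketch replaces fminunc with a coarse grid scan and assumes a quadratic profile; these are illustrative choices only, not the authors' method.

```python
# Grid-scan sketch of the joint search over (x_C, y_C, eta): each candidate
# triple gets a closed-form OLS line fit of v on u = r_eta^2, and the triple
# with the smallest residual is kept. All data are synthetic.

def sse(pixels, values, x_c, y_c, eta):
    u = [(i - x_c) ** 2 + eta * (j - y_c) ** 2 for (i, j) in pixels]
    mu, mv = sum(u) / len(u), sum(values) / len(values)
    b = sum((ui - mu) * (vi - mv) for ui, vi in zip(u, values)) / \
        sum((ui - mu) ** 2 for ui in u)
    a = mv - b * mu
    return sum((a + b * ui - vi) ** 2 for ui, vi in zip(u, values))

true = (7.0, 5.0, 1.2)                      # (x_C, y_C, eta) to recover
pixels = [(i, j) for i in range(15) for j in range(11)]
values = [1.0 - 0.003 * ((i - true[0]) ** 2 + true[2] * (j - true[1]) ** 2)
          for (i, j) in pixels]

candidates = [(x, y, e)
              for x in [6.0, 6.5, 7.0, 7.5, 8.0]
              for y in [4.0, 4.5, 5.0, 5.5, 6.0]
              for e in [1.0, 1.1, 1.2, 1.3]]
best = min(candidates, key=lambda c: sse(pixels, values, *c))
# The scan recovers the generating triple (7.0, 5.0, 1.2).
```

A general-purpose optimizer (as in the article) replaces the grid with an unconstrained search started from the image center and η = 1.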

Assumptions of the Comparison and Methods of Evaluation
The purpose of the conducted experiment was a comparison, based on real data, of the DRP model with selected vignetting models known from the literature, that is, the RP model, the P2D model, and the state-of-the-art SNILP model. The quality of the vignetting models can be evaluated by analyzing the image correction results obtained using the vignetting estimates V̂ calculated with the compared vignetting models. Ideal correction results should be flat, that is, all pixels in the corrected image Î_flat should have the same value. Of course, in the case of real data, this ideal result cannot be achieved due to the presence of noise in the input image I_V. In such cases, the pixels in the Î_flat image should have similar values with a minimal possible dispersion. Good dispersion measures are the standard deviation (STD) and the interquartile range (IQR); therefore, these measures were used for a quantitative evaluation of the results of the vignetting correction. Hence, a lower value of STD(Î_flat) or IQR(Î_flat) means a better ability of the analyzed vignetting model to adapt to the real vignetting.
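Both dispersion measures can be computed with standard tools; a Python illustration on a toy pixel list (the data values are arbitrary):

```python
import statistics

# The two evaluation measures, computed on a toy "corrected image"
# flattened to a pixel list. A perfectly flat image would give STD = IQR = 0.
pixels = [98, 99, 100, 100, 100, 100, 101, 102]

std = statistics.pstdev(pixels)                   # population standard deviation
q1, q2, q3 = statistics.quantiles(pixels, n=4)    # quartiles
iqr = q3 - q1
# Lower STD and IQR mean a flatter corrected image, i.e., a better fit
# of the vignetting model to the camera's real vignetting.
```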
Since the I_V image contains noise and the estimation result V̂ may differ significantly from the real vignetting of a given camera, it is possible that the Î_flat image contains pixels with values beyond the range [0, 255], that is, the range of pixel values for the cameras used in the experiment. Therefore, the result of the correction, that is, the image Î_flat, before its evaluation, is subjected to the truncation of the pixel values according to the following formula

Î_flat(i, j) := min(max(Î_flat(i, j), 0), 255).    (10)
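The truncation step described above is a plain clipping operation; a short Python sketch:

```python
# Clip pixel values to the cameras' [0, 255] range before evaluation.
def clip255(img):
    return [[min(max(p, 0), 255) for p in row] for row in img]

out = clip255([[-3, 10], [260, 255]])
# → [[0, 10], [255, 255]]
```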

Laboratory Setup and Data Acquisition Process
The data analyzed in the experiment were acquired using five webcams, namely the Microsoft LifeCam Studio, Logitech C920, Hama C-600 Pro, and two Xiaomi IMILAB CMSXJ22A webcams (Figure 2), which are denoted in the article as CAM-A to CAM-E, respectively. In the case of webcams, their manufacturers usually do not provide detailed technical data, such as lens focal length, lens speed, etc. Despite this, a comparison of the main technical data of the webcams used is presented in Table 1.
The webcams used in the comparison were selected to differ in their vignetting characteristics. Such a selection of cameras allows for performing a comparison for different examples of real vignetting with different vignetting strengths (also called vignetting magnitudes) and different degrees of non-radiality of the vignetting function, which is essential for this comparison. The vignetting strength can be measured using the STD and IQR measures for the images I_V acquired using the individual cameras-the values of these measures are presented in the second columns of Tables 3 and 4, respectively.
The tested cameras are arranged in order from the camera with the lowest vignetting strength (CAM-A) to the camera with the highest vignetting strength (CAM-E). In Table 5, the estimated values of the parameter η are presented. For simplicity of results analysis, the η value estimated during the calculation of the best DRP-2 model for an individual camera is used in the article as the coefficient of vignetting non-radiality of this camera, denoted as η. If the cameras were to be ranked from the camera with the most radial vignetting to the camera with the most non-radial vignetting, that is, in order of ascending η values, the order would be as follows: CAM-E, CAM-D, CAM-A, CAM-B, CAM-C. As a flat-field surface, which is needed to acquire the vignetting image I_V, a uniformly back-lighted milky poly(methyl methacrylate) (PMMA, "plexiglass") panel of size 50 cm × 100 cm and thickness 3 mm was used. As a light source, an NEC EA304WMI-BK graphic monitor was used, displaying a white screen with the brightness set to its maximum, i.e., 350 lx. The plexiglass panel was carefully placed parallel to the surface of the monitor screen; the distance between the monitor and the panel was constant and equal to 15 cm. To position a camera parallel to the monitor screen, each camera was positioned in such a way that the geometric distortions of the reproduction of the test image obtained from this camera were symmetric. The test image was displayed on the monitor that served as the panel illumination (NEC EA304WMI-BK), and the acquired images were observed in real time on a second display (Figure 3a). At the time of acquiring the series of images I_V, the aforementioned plexiglass panel was inserted between the monitor and the camera used (Figure 3b). To reduce the noise present in the captured images, the I_V image used in the experiment was obtained as the average of 100 originally captured images I_Ve.
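The benefit of averaging the 100 captured frames can be shown numerically: for independent noise, averaging N frames reduces the noise standard deviation by roughly √N. A Python sketch with synthetic Gaussian noise (the noise level, frame count, and pixel count are assumptions for illustration):

```python
import random
import statistics

# Sketch of the noise-reduction step: average N noisy frames of a flat
# scene and compare the noise level before and after. Deterministic seed.
random.seed(42)

TRUE_VALUE, N_FRAMES, N_PIXELS = 128.0, 100, 500
frames = [[TRUE_VALUE + random.gauss(0, 2.0) for _ in range(N_PIXELS)]
          for _ in range(N_FRAMES)]

averaged = [sum(f[p] for f in frames) / N_FRAMES for p in range(N_PIXELS)]

noise_single = statistics.pstdev(frames[0])
noise_avg = statistics.pstdev(averaged)
# noise_avg is roughly ten times smaller than noise_single for N = 100.
```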
Since the experiment evaluates the ability of the compared vignetting models to fit the real vignetting, there is no need to calculate the vignetting estimate for each color channel, that is, R, G, and B, of the I_V image. Therefore, in the experiment, a single input image was used for each camera. The exposure parameters of each camera were set automatically for the first image I_V1, and for the rest of the images, that is, I_V2, . . . , I_V100, the same parameters were used. The I_Ve images were acquired in a dark laboratory room so that no additional lights interfered with this process. Additionally, all signaling diodes with which the cameras are equipped, as well as all contrasting inscriptions on the camera housings, were covered with black opaque tape. In the case of the cameras equipped with autofocus, that is, CAM-A to CAM-C, the focus was set to infinity. It is important to note that, due to the objective of the comparison, possible small errors in, e.g., the positioning of the monitor, the plexiglass panel, or the cameras, as well as a small non-uniformity of the screen illumination, do not influence the qualitative evaluation of the experiment results. Of course, such errors can affect the quantitative results of the estimation; however, these errors do not change the judgment of the ability of the tested vignetting models to find the best approximation V̂ based on the input image I_V, and precisely this property of the vignetting models is evaluated in the comparison.

Compared Vignetting Models and Their Implementations
The entire experiment, that is, from the image acquisition, through all calculations, to data presentation, was carried out using the MATLAB R2021b software package with the Image Acquisition Toolbox and the Optimization Toolbox. In the experiment, four vignetting models have been compared: the novel deformable radial polynomial (DRP) model; the radial polynomial (RP) [38] and polynomial 2D (P2D) [25] models, known from the literature; and the model which, in terms of the quality of the vignetting correction, can be treated as a state-of-the-art solution, that is, the smooth non-iterative local polynomial (SNILP) model [37].
The RP and DRP models have been tested in two variants, that is, with the calculation of the coordinates (x_C, y_C) of the optical center of the image C performed before the vignetting estimation procedure (these variants are denoted, respectively, RP-1 and DRP-1) and with the estimation of (x_C, y_C) integrated into the vignetting estimation process (these variants are hereafter denoted as RP-2 and DRP-2). The implementation of all tested variants of the RP and DRP models uses the MATLAB functions polyfit and polyval, respectively, to estimate the parameters of the approximation polynomial and to calculate its values. In the implementation of the RP-2, DRP-1, and DRP-2 models, the MATLAB function fminunc, which is a MATLAB procedure for solving unconstrained nonlinear optimization problems, is used to find the optimal values of (x_C, y_C) in the case of the RP-2 and DRP-2 models, and the values of η in the case of both variants of the DRP model. The coordinates of the optical center needed for the RP-1 and DRP-1 models were determined by searching for the coordinates of the maximum value of the 2D polynomial approximation of degree s = 2 of the I_V image. In the case of the RP-2 model, for estimating (x_C, y_C), the optimization procedure (9) with η fixed at the constant value 1 was used. As the initial values for the fminunc function, the following values were chosen: η₀ = 1, x_C0 = M/2 and y_C0 = N/2, where M × N is the resolution of the analyzed image, which was the same for all cameras used, that is, 1920 × 1080. For the other parameters of the fminunc function, their default values were used. The other two models, that is, the P2D and SNILP models, have been implemented according to the information given in [37].
All models were compared for different degrees s of the approximation polynomials, that is, for s ∈ {2, . . . , 10}. The tested range of s covers the degrees of the approximation polynomial proposed in the literature, from s = 2, for tasks that do not require exact correction, up to s = 6 for the most demanding applications. In Table 2, the comparison of the type and number of parameters required to specify the vignetting estimates determined using individual models is presented. Figure 4 presents the vignetting estimates V̂ obtained from the RP-1, DRP-2, and P2D models. These results are presented using pseudocolor and isolines, which allows for an easier comparison of the obtained vignetting estimates. In addition, in Appendix A, in Figures A1-A10, the acquired vignetting images I_V, vignetting estimates V̂, and corrected images Î_flat are presented in the form of 3D charts.

The numerical results of the experiment-that is, the values of the STD and IQR measures calculated for the images Î_flat, which are the results of the vignetting correction-are presented in Tables 3 and 4. In Figure 5, a comparison of the best results of the vignetting correction obtained for each model is presented. In this comparison, the STD and IQR values mentioned above are related to the values of the respective measures of the vignetting strength of the input images I_V; these values are presented in the second columns of Tables 3 and 4, respectively. Table 5 presents the estimated values of the coefficient η obtained for the DRP-1 and DRP-2 models.

Evaluation of the Models in Terms of the Accuracy of the Obtained Vignetting Estimates
The purpose of the performed experiment was a comparison of the vignetting correction results obtained from the DRP model with the results obtained from the models known from the literature. Due to the large differences in the vignetting characteristics of the individual cameras used in the experiment, both in terms of the vignetting strength and the degree of non-radiality, the obtained correction results have been analyzed from the perspective of the influence of these factors on the correction results.
Analyzing the influence of the vignetting strength on the results of the vignetting correction, measured both with the STD (Table 3) and IQR (Table 4) measures, it can be seen that both tested variants of the novel DRP model, i.e., DRP-1 and DRP-2, practically always fall between the results obtained from the RP model and those obtained from the P2D and SNILP models. Slight exceptions to this general rule apply when low degree polynomials are used (s ≤ 3) or when the correction was made for the image from the camera with near-ideal radial vignetting, that is, CAM-E with η = 1.0068, where ideal radial vignetting has η = 1. The same relationship between the results obtained from individual models also applies to the relative evaluation of the vignetting correction ( Figure 5).
Among the cameras used in the experiment, only the CAM-E camera has an almost ideally radial vignetting (η = 1.0068), while for the other cameras, the image brightness in the vertical direction decreases faster than in the horizontal direction (the radial vignetting is "squeezed" in the vertical direction). The strength of this effect for the other cameras varies from about 6% (CAM-D, η = 1.0605), through about 9% (CAM-A, η = 1.0901) and about 15% (CAM-B, η = 1.1559), to almost 30% (CAM-C, η = 1.2952). Evaluating the vignetting correction results obtained for CAM-E, the results obtained from all models are comparable. However, it can be noticed that for small values of s ≤ 5, the RP and DRP models achieve better results than both more universal models, i.e., P2D and SNILP. This advantage decreases with increasing s, and finally, for s ≥ 8, the P2D and SNILP models give better correction results than the RP and DRP models. It is worth noting that, even in such a favorable situation for the RP model (almost ideally radial vignetting), for most values of s, no variant of the RP model gives better results than the corresponding variant of the DRP model. These results indicate that it is worth considering using the DRP model instead of the RP model, even if the vignetting is almost ideally radial.
Interestingly, in the case of the CAM-E camera and the STD measure, the variants RP-1 and DRP-1, i.e., those that use the previously calculated coordinates (x_C, y_C) of the center C, give better results than the RP-2 and DRP-2 variants, in which these coordinates are sought within one optimization problem. A similar situation also occurs in the case of the CAM-A and CAM-D cameras, but only for the IQR measure. Such an effect seems counter-intuitive, but it should be noted that the applied optimization procedure does not guarantee finding a global minimum, and no studies have been conducted on the influence of the selection of the parameters of the optimization method on its results.
For each camera with noticeable non-radial vignetting, that is, from CAM-A to CAM-D, the results obtained for any variant of the DRP model and for any value of s are always better than the results obtained for any variant of the RP model. With the increase in the degree of non-radiality (increasing η), this difference grows. In the case of CAM-C (η = 1.2952), the use of the DRP model instead of the RP model, when comparing the corresponding variants of both models, gives about a 35% improvement in the correction quality when the STD measure is used, and over a 50% improvement when the IQR measure is used.
The influence of the degree of non-radiality on the results of the vignetting correction is particularly well visible in the graphs in Figure 5. For the camera with almost radial vignetting (CAM-E), each tested model corrects the vignetting to a similar extent. However, with the increase in the non-radiality of the camera vignetting, the predominance of the models that allow for non-radial vignetting over the RP model becomes more noticeable. Of course, the DRP model is inferior to the P2D and SNILP models in terms of the obtained quality, but these models are much more complex. The difference in model complexity can be seen by comparing the number of parameters necessary to store the vignetting estimate obtained from each of these models (Table 2).

Computation Time
Another analyzed issue is the computation time, which is required to determine the vignetting estimate using individual models. In the vast majority of cases, even relatively long calculation times of the vignetting estimate are not a serious problem. The reason for this is that there is rarely a need to derive a vignetting estimate. Such situations occur occasionally, for example, when the new lens is used or periodically (e.g., every month) in order to ensure the highest quality of vignetting correction of the acquired images. Moreover, even if vignetting needs to be estimated for many parameters of the analyzed lens-camera system (e.g., for many aperture size values and many focal lengths of the varifocal lens), the calculations can be carried out in the background without involving the human operator.
Figure 6 presents the median computation times for all tested models and for polynomial degrees s ∈ {2, . . . , 10}. In the test, the acquired images I_V with a resolution of 1920 × 1080 were used, and 25 repetitions (5 repetitions for each of the I_V images acquired with the tested cameras) were performed for each model-s combination. The measurements were carried out on a computer equipped with an AMD Ryzen 7 3800X processor, 16 GB of RAM, and an M.2 PCIe NVMe SSD, using MATLAB 2021b. The results show that the DRP-2 model requires the longest computation time, followed by the RP-2 and DRP-1 models. This is a consequence of the minimization procedure, which is much slower than the OLS method applied either many times to individual lines of the image, as in the SNILP model, or once to a larger amount of data, as in the P2D model.
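The timing protocol described above (repeated fits, median reported) can be sketched in a few lines. The `fit_model` callable and its signature are illustrative assumptions, not the article's code, which was written in MATLAB.

```python
import time
import statistics

def median_runtime(fit_model, image, repetitions=25):
    """Median wall-clock time of repeatedly fitting a vignetting model.

    `fit_model` is any callable taking the acquired image I_V; the name
    and call protocol are hypothetical.  The median is reported, as in
    the article's benchmark, to suppress outlier runs.
    """
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        fit_model(image)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Usage with a trivial stand-in model:
t = median_runtime(lambda img: sum(img), [0.1, 0.2, 0.3], repetitions=5)
print(t >= 0.0)  # True
```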
It is also worth noting that the calculation time grows as the degree of the polynomial increases. This is because increasing the value of s also increases the size of the data matrix, and thus the number of mathematical operations that must be performed.
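The growth of the data matrix with s is easy to see for a radial polynomial fitted by OLS: each additional degree adds one column to the design matrix. The sketch below assumes a plain power basis [1, r, r², ..., r^s]; the article's exact basis may differ.

```python
import numpy as np

def radial_design_matrix(r, s):
    """Design matrix [1, r, r^2, ..., r^s] for a degree-s radial polynomial.

    `r` holds pixel distances from the optical center.  Each extra
    polynomial degree adds one column, so the OLS problem (and hence
    the fitting time) grows with s.
    """
    r = np.asarray(r, dtype=float).ravel()
    return np.vander(r, N=s + 1, increasing=True)

# 1000 pixels, degree s = 10 -> an (n_pixels, s + 1) matrix.
A = radial_design_matrix(np.linspace(0.0, 1.0, 1000), 10)
print(A.shape)  # (1000, 11)
```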

Conclusions and Future Research
The intention of the authors during the development of the DRP model was to combine the simplicity of the RP model with the universality of more complex models such as P2D and SNILP. The tests, carried out for cameras whose acquired images differ both in the amount of vignetting and in the degree to which the radial vignetting assumption is fulfilled, fully confirm that this goal has been achieved. The DRP model uses only one more parameter than the RP model, but according to the obtained results, it can give up to 35% better correction results in terms of the STD measure and over 50% better correction results in terms of the IQR measure.
It should be noted that the results obtained from the DRP model are better than those obtained from the RP model for all tested cameras, including the camera with almost perfectly radial vignetting (CAM-E, η = 1.0068), i.e., under the conditions for which the RP model was designed. This observation leads to the essential conclusion that, from the perspective of correction quality, the DRP model can be successfully used as the default vignetting model instead of the RP model: in the case of radial vignetting, its results are not worse than those of the RP model, and in the case of non-radial vignetting, they are better or much better. The expected improvement in correction quality grows with the degree of non-radiality of the analyzed vignetting.
The use of a polynomial function as a radial vignetting model is one of only a few approaches to radial vignetting correction proposed in the literature (e.g., [39,40]). An interesting direction of research would therefore be to check how the distance transformation (6) used in the DRP model would increase the flexibility of non-polynomial radial vignetting models. An important part of these studies should be a comparison of minimization methods in terms of their usefulness in estimating the model parameters, the coordinates of the optical center C, and the value of the η* coefficient.
In the conducted research, relatively simple lens-camera systems in the form of webcams were used. An important observation resulting from their use is the prevalence of non-radial vignetting among the tested imaging systems. Only one of the five cameras has vignetting that can be considered radial (CAM-E), and only two others exhibit less than a 10% difference between their vignetting in the vertical and horizontal directions (CAM-C and CAM-D). The question thus arises as to how widespread non-radial vignetting is among more complex camera systems and how the developed DRP model will perform in such systems. The analysis of these issues will be the subject of further research.
An important contribution of the DRP model, apart from making the RP model more flexible at the price of adding only one easy-to-determine parameter η, which represents the degree of non-radiality of the actual vignetting, is that it opens the possibility of applying methods developed for images with radial vignetting to images with non-radial vignetting. Good examples of such methods are those based on the idea of single-image vignetting correction [31][32][33][34]. This is possible because the DRP model uses the transformation (6) of the distance between a pixel and the optical image center C to map the non-radial vignetting that occurs in most lens-camera systems onto an ideal radial vignetting model. As part of future research, it is planned to implement this idea in practice.
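The core idea of such a distance transformation can be sketched as an anisotropic distance in which one axis is scaled by the non-radiality coefficient. This is a hypothetical form chosen for illustration only; the article's actual Equation (6) may differ.

```python
import math

def deformed_distance(x, y, cx, cy, eta):
    """Hypothetical anisotropic pixel-to-center distance.

    Scaling one axis by eta maps elliptic (non-radial) vignetting
    isolines onto circles, after which a purely radial model can be
    fitted.  This is an illustrative assumption, not the article's
    actual Equation (6).
    """
    return math.hypot(x - cx, eta * (y - cy))

# With eta = 1 the transformation reduces to the ordinary Euclidean
# distance, i.e. the plain RP model is recovered as a special case.
print(deformed_distance(3.0, 4.0, 0.0, 0.0, 1.0))  # 5.0
```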

Data Availability Statement:
The data presented in this study are available on reasonable request from the corresponding author.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this article:

Appendix A

In Figures A1, A3, A5, A7 and A9, the normalized I_V images obtained from each camera, together with the estimates obtained from the compared models, are presented. The normalized images norm(I_V) are computed from smooth(I_V)(i, j), the result of smoothing the I_V image with a 2D Gaussian filter with standard deviation σ = 0.5 and size 10 × 10; the norm(I_V) images are used only for visualization (they are not used during the calculation of V). Figures A2, A4, A6, A8 and A10 show the best correction results of the I_V images, that is, the I_flat images. The best results are usually obtained for s = 10; however, in the case of CAM-D and both variants of the RP and DRP models, the best results were obtained for s = 9. The correction results in these figures are presented as normalized I_flat images, norm(I_flat). This solution makes it easier to compare the correction results for different cameras.