
Rectification of Images Distorted by Microlens Array Errors in Plenoptic Cameras

Suning Li, Yanlong Zhu, Chuanxin Zhang, Yuan Yuan and Heping Tan

1 School of Energy Science and Engineering, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001, China
2 Key Laboratory of Aerospace Thermophysics, Ministry of Industry and Information Technology, Harbin Institute of Technology, 92 West Dazhi Street, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(7), 2019; https://doi.org/10.3390/s18072019
Submission received: 27 May 2018 / Revised: 21 June 2018 / Accepted: 22 June 2018 / Published: 23 June 2018
(This article belongs to the Section Physical Sensors)

Abstract

A plenoptic camera is a sensor that records the 4D light-field distribution of target scenes. The surface errors of a microlens array (MLA) can cause degradation and distortion of the raw image captured by a plenoptic camera, resulting in the confusion or loss of light-field information. To address this issue, we propose a method for the local rectification of distorted images using white light-field images. The method consists of microlens center calibration, geometric rectification, and grayscale rectification. Its applicable range for errors of different sizes, and its rectification accuracy for three basic surface errors, in terms of both overall and local accuracy, are analyzed through simulated imaging experiments. The rectified images show a significant improvement in quality, demonstrating that the method provides precise light-field data for the reconstruction of real objects.

1. Introduction

Plenoptic cameras based on microlens arrays (MLAs), which apply light-field imaging technology, can acquire and display multi-angle light radiation intensity distributions from spatial targets in a single exposure [1,2]. In contrast to conventional camera sensors that only sample spatial information, they can simultaneously record the 4D light-field including 2D spatial information and 2D angular information. The retention of angular information of light radiation provides the necessary light-field data for multi-view imaging, digital refocusing, and 3D reconstruction of a target scene [2,3,4]. In a plenoptic camera, a MLA is added between the main lens and the image sensor to capture the raw image recording the complete light-field, which is composed of a series of sub-images formed by the respective microlenses arranged in a specific sequence. The positions of the sub-images correspond to the spatial light-field information, and the positions of the pixels beneath the sub-images correspond to the angular light-field information. Owing to this correlation, the MLA intrinsic and extrinsic parameters determine image quality and subsequent light-field decoding and reconstruction results, so it is important that the MLA itself has a high surface precision and maintains strict azimuth alignment.
In recent years, numerous reconstruction algorithms for complex scenes have been developed using light-field image data [5,6,7]. Plenoptic cameras are gradually being applied in engineering areas such as high-temperature flame measurement [8,9], target recognition [10], particle image velocimetry [11,12], and real-time monitoring [13,14]. In order to effectively utilize the light-field data in the image processing and reconstruction process, the raw image should accurately record the light-field information of the measured target. However, due to manufacturing and assembly defects in the MLAs [15,16,17], different distortions appear in the actual images acquired using them, including blurring and aliasing caused by orientation misalignments between the MLA and the image sensor [18], as well as changes in brightness, resolution and spot position owing to microlens surface errors [19,20]. The degradation and distortion of the raw image leaves a large amount of data confused or missing, leading to a significant reduction in the reconstruction accuracy and efficiency of the target flow field [21,22,23]. Therefore, it is necessary to rectify the distortion of light-field images caused by MLA errors. Unlike 2D image correction methods [24,25,26], the rectification of light-field images requires special attention to the distortion of the microlens sub-images. Jin et al. [27] presented a correction method for the raw image based on the commercial plenoptic camera Lytro. By estimating the rotation error angle of the MLA relative to the image sensor plane, the light-field image was integrally rotated to align with the pixel axis, eliminating the image distortion. Dansereau et al. [28] established a lens distortion model of the plenoptic camera and concluded that the light-field image was primarily affected by directionally dependent radial distortion. They further proposed a method for rectifying the ray projection errors of the decoded image. Cho et al. [29] described a method to calibrate the microlens images in a hexagonal arrangement, which can locate the microlens centers with sub-pixel precision. The experimental results showed that the microlens centers were non-uniform, and the authors inferred that this could be attributed to slight shape differences between the microlenses caused by manufacturing defects. Li et al. [30] analyzed the system imaging characteristics under MLA assembly error conditions, and provided proper quality evaluation indexes and correction functions for the error in the light-field image.
It is known from the aforementioned studies that current rectification methods generally idealize the MLA surface parameters and consider only the image distortion from the azimuth deviation between the integral MLA and the sensor plane, while neglecting microlens surface errors. In fact, neither the MLA surface errors nor the image distortions they cause are uniform across the array. On the one hand, owing to the various machining techniques and materials and the complicated structures, there are local differences in the MLA surface errors that result in different error forms and magnitudes for each microlens [15,31]. On the other hand, the same error on different microlenses deteriorates the corresponding sub-images differently; the trend of change in light-field image quality is related to the error type, size and spatial direction [20]. Therefore, integral methods cannot effectively rectify the local degradation caused by microlens surface errors, and moreover, such approaches suffer from large computational complexity, insignificant correction effects, or over-correction.
In order to reverse the effects of MLA surface errors and restore the light-field data of the space target, in this paper we propose a new method for rectifying images distorted by MLA errors. The method, which directly utilizes the raw white image, extracts the coordinate and grayscale information of the feature points and ideal reference points for each sub-image. With this information, the geometric rectification and grayscale rectification matrices of the error sub-images are determined, so that the geometric and grayscale deviations of the light-field image can be locally rectified. Further, based on a ray-tracing simulation imaging system [32], we develop a MLA surface error model to analyze the rectification accuracy and the applicable error range under different error conditions. The proposed method is verified by comparing real object images before and after rectification.

2. Models

2.1. Light-Field Imaging Model

In this paper, on the basis of the Plenoptic Camera 1.0 designed by Ng et al. [2], we establish a physical model of the plenoptic camera imaging system, which is used for light-field imaging simulation in the calibration and rectification process. As depicted in Figure 1, the imaging model mainly includes a main lens, a MLA and an image sensor, where the MLA is placed at the imaging plane of the main lens and the image sensor is coupled at one focal length behind the MLA. The MLA divides the main lens pupil into several sub-apertures. Each microlens samples the rays arriving at a certain point from multiple directions through every sub-aperture, and finally projects these rays onto the image sensor to form a sub-image, as shown in Figure 1. To obtain the raw white image required to rectify distorted light-field images, a white plate is placed at a distance of d = 2.5 m from the main lens, with its center on the system optical axis and a corresponding image distance of l = 109.6 mm. The plate size is 40 × 40 cm, its surface is diffusely reflective, and its reflectivity is 1.0. The other parameters of the model may be found in a previous report [20].

2.2. MLA Surface Error Model

Figure 2 shows the design model of the MLA used in this paper. The entire MLA consists of NW × NH square-aperture microlenses in a matrix arrangement. The side length and center pitch of the microlenses are both p. The two sides of each microlens are spherical surfaces with radius of curvature r, the microlens thickness is t, and the focal length is f. The main parameters of the MLA model are listed in Table 1. Each unit of the MLA is labelled $U_{m,n}$, indicating that the unit is located in the m-th row and n-th column of the MLA, where m = 1, 2, …, NH and n = 1, 2, …, NW. We establish the MLA coordinate system o-xyz with the MLA center as the origin o. The x-axis passes through the optical axis of the main lens, and the y- and z-axes are parallel to the square grids of the microlenses, as shown in Figure 2. Setting the point $(0, y_0, z_0)$ at the upper-left corner of the MLA as the datum mark, the center coordinates of the microlens $U_{m,n}$ are $(0,\ y_0 - (m - 1/2)p,\ z_0 + (n - 1/2)p)$, and its surface can be described by a standard spherical surface equation:
$$[x \mp (r - t/2)]^2 + [y - (y_0 - (m - 1/2)p)]^2 + [z - (z_0 + (n - 1/2)p)]^2 = r^2 \qquad (1)$$
where the coordinates $y_0$ and $z_0$ are given by $y_0 = N_H\,p/2$ and $z_0 = -N_W\,p/2$, respectively. Equation (1) with the negative sign for the term $(r - t/2)$ describes the incident-light surface of the microlens, whereas the equation with the positive sign describes the transmitted-light surface.
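As a concrete illustration of this geometry, the following NumPy sketch (ours, not the authors' code) computes the microlens centers and evaluates the surfaces of Equation (1) with the Table 1 parameters:

```python
import numpy as np

# MLA design parameters from Table 1
N_W = N_H = 102      # microlenses per row/column
p = 100e-6           # pitch / side length [m]
r = 469e-6           # radius of curvature [m]
t = 10e-6            # thickness at the vertex [m]

y0 = N_H * p / 2     # datum mark (upper-left corner of the MLA)
z0 = -N_W * p / 2

def microlens_center(m, n):
    """Center (y, z) of unit U_{m,n}, with m and n starting at 1."""
    return y0 - (m - 0.5) * p, z0 + (n - 0.5) * p

def surface_x(y, z, m, n, incident=True):
    """x-coordinate of the surface of U_{m,n} above the point (y, z),
    solved from Eq. (1). The incident surface has its sphere center at
    x = +(r - t/2) (negative sign in Eq. (1)); the transmitted surface
    has it at x = -(r - t/2) (positive sign)."""
    yc, zc = microlens_center(m, n)
    dx = np.sqrt(r**2 - (y - yc)**2 - (z - zc)**2)
    return (r - t/2) - dx if incident else -(r - t/2) + dx
```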
However, due to manufacturing deviations in the production process, the MLA surface parameters do not perfectly coincide with the design values, resulting in different local surface errors, such as coordinate distortion [15], curvature variation [16], centroid shift [33,34,35], or irregular deformation [36]. According to the arrangement and geometric features of the MLA used in this paper, we develop three basic error models: the pitch error, the radius-of-curvature error, and the decenter error. The actual surface error can be characterized as a combination of these basic errors.
Under the condition that the other parameters of the MLA are constant, if the pitch between adjacent microlenses changes, it is called the pitch error, Δp; if the radius of curvature of a microlens deviates from the standard value, it is called the radius-of-curvature error, Δr; and if a certain offset occurs between the spherical centers on the two sides of a microlens, it is called the decenter error, δ. Table 2 gives the mathematical description of the error models, where the negative sign of the term $(r - t/2)$ indicates the incident-light surface equation of the error microlens $U_{m,n}$ and the positive sign indicates the transmitted-light surface equation; α and β are the angles of the pitch error Δp and the decenter error δ with respect to the horizontal direction, respectively. As seen in Table 2, the pitch error and the decenter error change the center position of the microlens, which causes the microlens optical axis to shift or tilt. The radius-of-curvature error only affects the spherical radius of the microlens, and consequently alters its focal length.

3. Rectification Method

In previous studies, we analyzed the local imaging characteristics and degradation mechanisms of MLA surface errors. The results showed that the errors caused major distortions such as position shift, boundary diffusion, and brightness variation in some sub-images of the light-field image [19,20]. Based on this, we propose a method for rectifying the distorted light-field image using raw white images. First, the microlens centers are calibrated to determine the center of each sub-image and divide the light-field image x into a set of sub-image regions R. Next, the geometric distortion (e.g., position shift and boundary diffusion) is removed by the geometric rectification matrix P; the parameters of P are estimated from the coordinates of the feature points extracted in each sub-image. Finally, the grayscale distortion (e.g., brightness variation) is reduced with the grayscale rectification matrix G, whose grayscale factors are calculated sequentially from the gray averages of the geometric-rectified sub-images. Figure 3 illustrates the rectification procedure of the proposed method.

3.1. Microlens Center Calibration

As a result of the limited manufacturing and assembly accuracy of the camera and the lens aberration, the sub-images corresponding to the microlenses are shifted to different degrees. For accurate extraction of feature point information and the subsequent geometric and grayscale rectification of each sub-image, it is a prerequisite that the microlens centers are calibrated to determine the relationship between the pixels and the microlenses.
To calibrate all microlens images, a uniform white plate (shown in Figure 1) is adopted to capture a raw white light-field image, which consists of individual microlens sub-images, as shown in Figure 4a. Because of lens vignetting, the gray-value maximum in each white sub-image theoretically approximates the microlens center. In order to avoid the effect of inhomogeneous diffuse reflection and image noise on the center calibration, we sum the pixel gray values of the raw image by rows and columns:
$$S_{row}(x) = \sum_{y=1}^{N} I(x, y), \quad x = 1, 2, \ldots, M \qquad (2)$$

$$S_{col}(y) = \sum_{x=1}^{M} I(x, y), \quad y = 1, 2, \ldots, N \qquad (3)$$
where $I(x, y)$ denotes the gray value of the pixel in the x-th row and y-th column of the image, $S_{row}(x)$ and $S_{col}(y)$ denote the sums of the gray values of the pixels in the x-th row and the y-th column, respectively, and the raw image resolution is M (height) × N (width).
The thresholds $Th_{row} = 1.5 \times \min_{1 \le x \le M} S_{row}(x)$ and $Th_{col} = 1.5 \times \min_{1 \le y \le N} S_{col}(y)$ are set to screen the above results. A row whose $S_{row}(x)$ is below the threshold $Th_{row}$ is selected as a horizontal boundary of the sub-images, and a column with $S_{col}(y)$ below the threshold $Th_{col}$ is selected as a vertical boundary. Through the four adjacent boundaries, the raw image is preliminarily divided into a number of microlens regions $R: R_{1,1}, R_{1,2}, \ldots, R_{m,n}, \ldots$, where the subscripts m,n denote the region in the m-th row and n-th column of the division, as shown in Figure 4b.
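A minimal NumPy sketch of this preliminary division (our illustration; the function and variable names are ours) might look as follows:

```python
import numpy as np

def preliminary_division(I):
    """Divide a raw white image I (M x N grayscale array) into microlens
    regions. Rows/columns whose gray-value sums (Eqs. (2)-(3)) fall below
    1.5x the minimum sum are treated as sub-image boundaries."""
    S_row = I.sum(axis=1)                        # Eq. (2)
    S_col = I.sum(axis=0)                        # Eq. (3)
    th_row = 1.5 * S_row.min()
    th_col = 1.5 * S_col.min()
    row_bounds = np.flatnonzero(S_row < th_row)  # horizontal boundaries
    col_bounds = np.flatnonzero(S_col < th_col)  # vertical boundaries
    return row_bounds, col_bounds
```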
After acquiring all the microlens regions, the sub-image center can be located based on the division result. First, we calculate the sum of the pixel gray values in each row and each column of the region R m , n :
$$S_{row}^{m,n}(x) = \sum_{y=c}^{d} I(x, y), \quad x = a, \ldots, b \qquad (4)$$

$$S_{col}^{m,n}(y) = \sum_{x=a}^{b} I(x, y), \quad y = c, \ldots, d \qquad (5)$$
where a and b denote the horizontal boundaries of the region $R_{m,n}$, and c and d denote the vertical boundaries. Then, the pixel whose row and column have the largest sums of gray values in the region $R_{m,n}$ is taken as the sub-image center $c_{m,n}$, that is:
$$c_{m,n}(x_c, y_c) = \left( \arg\max_{a \le x \le b} \{ S_{row}^{m,n}(x) \},\ \arg\max_{c \le y \le d} \{ S_{col}^{m,n}(y) \} \right) \qquad (6)$$
Figure 4c shows the center calibration results of the microlens sub-images. Finally, according to the center coordinates and the number of pixels l × l covered by each sub-image, the raw image is divided into the sub-image regions $R: R_{1,1}, R_{1,2}, \ldots, R_{m,n}, \ldots, R_{N_H,N_W}$, which correspond to the NW × NH microlenses, respectively, as shown in Figure 4d.
The calibration method proposed in this paper takes into account the sub-image shift caused by the MLA errors. On the basis of the preliminary division of the microlens region, the center points are further obtained by summing the rows and columns of pixel gray values in each region, so that accurate center location and region division for all sub-images can be achieved. As shown in Figure 4c,d, when there are local offsets in the raw white light-field image, the proposed method can still calibrate the microlens center and divide the corresponding sub-image regions, which provides the basis for the geometric and grayscale rectification.
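Under the same assumptions as the sketch above, the per-region center search of Equations (4)–(6) can be written as:

```python
import numpy as np

def locate_center(I, a, b, c, d):
    """Locate the sub-image center in region R_{m,n}, bounded by rows a..b
    and columns c..d (inclusive), following Eqs. (4)-(6)."""
    region = I[a:b + 1, c:d + 1]
    S_row_mn = region.sum(axis=1)        # Eq. (4): row sums within the region
    S_col_mn = region.sum(axis=0)        # Eq. (5): column sums within the region
    x_c = a + int(np.argmax(S_row_mn))   # Eq. (6): row with the largest sum
    y_c = c + int(np.argmax(S_col_mn))   #          column with the largest sum
    return x_c, y_c
```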

3.2. Geometric Rectification

From the imaging principle of the plenoptic camera and the simulation results for MLA errors [19,20], it is known that the geometric distortion of the light-field image typically manifests as sub-image translation, rotation and scaling, or a superposition of these forms. Therefore, we use a geometric error matrix and feature points to establish the correspondence between each error sub-image and its ideal sub-image. Consider a feature point $P_0(x_{m,n}, y_{m,n})$ in the error sub-image region $R_{m,n}$, whose coordinates in the ideal sub-image are $(u_{m,n}, v_{m,n})$. For a 2D plane, the coordinates of $P_0$ in the two sub-images satisfy:
$$\begin{bmatrix} x_{m,n} \\ y_{m,n} \end{bmatrix} = \mathbf{R}_{m,n} \begin{bmatrix} u_{m,n} \\ v_{m,n} \end{bmatrix} + \mathbf{T}_{m,n} \qquad (7)$$
where $\mathbf{R}_{m,n} = \begin{bmatrix} r_{m,n}^{11} & r_{m,n}^{12} \\ r_{m,n}^{21} & r_{m,n}^{22} \end{bmatrix}$ represents the rotation and scaling of the sub-image region $R_{m,n}$, and $\mathbf{T}_{m,n} = [t_{m,n}^{13}, t_{m,n}^{23}]^{T}$ represents its translation. Equation (7) can be written in homogeneous coordinates as:
$$\begin{bmatrix} x_{m,n} \\ y_{m,n} \\ 1 \end{bmatrix} = \begin{bmatrix} r_{m,n}^{11} & r_{m,n}^{12} & t_{m,n}^{13} \\ r_{m,n}^{21} & r_{m,n}^{22} & t_{m,n}^{23} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_{m,n} \\ v_{m,n} \\ 1 \end{bmatrix} = \mathbf{P}_{m,n} \begin{bmatrix} u_{m,n} \\ v_{m,n} \\ 1 \end{bmatrix} \qquad (8)$$
where $\mathbf{P}_{m,n}$ is the geometric error matrix of the sub-image region $R_{m,n}$.
According to Equation (8), $\mathbf{P}_{m,n}$ can in theory be estimated from only three feature points. To improve calculation accuracy without compromising data processing speed, we adopt five feature points to solve the geometric error matrix of each sub-image. As illustrated in Figure 5, the feature point at the center is the calibrated center point of the sub-image from Section 3.1, and the feature points at the edges are the four edge points in the central row and column of the sub-image.
For the extraction of edge feature points, we convolve the Sobel templates [37,38] with the pixels (x, y) in the central row and column (i.e., the $x_c$-th row and $y_c$-th column of the sub-image with center $c_{m,n}$) of the sub-image region $R_{m,n}$:
$$G_X(x, y) = \big| I(x-1, y+1) + 2I(x, y+1) + I(x+1, y+1) - [I(x-1, y-1) + 2I(x, y-1) + I(x+1, y-1)] \big| \qquad (9)$$

$$G_Y(x, y) = \big| I(x+1, y-1) + 2I(x+1, y) + I(x+1, y+1) - [I(x-1, y-1) + 2I(x-1, y) + I(x-1, y+1)] \big| \qquad (10)$$
where $G_X(x, y)$ and $G_Y(x, y)$ are the horizontal and vertical convolution results, respectively. The gradient of the pixel point (x, y) is given by:
$$\nabla[I(x, y)] = \sqrt{G_X(x, y)^2 + G_Y(x, y)^2} \qquad (11)$$
The pixel points with the maximum gradient value in the upper, lower, left, and right directions are extracted as the edge feature points, labelled $p_{m,n}^N$, $p_{m,n}^S$, $p_{m,n}^W$, and $p_{m,n}^E$, with coordinates $p_{m,n}^N(x_1, y_c)$, $p_{m,n}^S(x_2, y_c)$, $p_{m,n}^W(x_c, y_1)$, and $p_{m,n}^E(x_c, y_2)$.
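A sketch of this feature-point extraction, continuing the examples above (the search half-width `half` is our parameter, and `I` is assumed to be a float array so the Sobel differences do not overflow unsigned integers):

```python
import numpy as np

def edge_feature_points(I, x_c, y_c, half):
    """Extract the four edge feature points of the sub-image centered at
    (x_c, y_c), searching within +/- half pixels along its central row and
    column; gradients follow Eqs. (9)-(11)."""
    def grad(x, y):
        g_x = abs(I[x-1, y+1] + 2*I[x, y+1] + I[x+1, y+1]
                  - (I[x-1, y-1] + 2*I[x, y-1] + I[x+1, y-1]))   # Eq. (9)
        g_y = abs(I[x+1, y-1] + 2*I[x+1, y] + I[x+1, y+1]
                  - (I[x-1, y-1] + 2*I[x-1, y] + I[x-1, y+1]))   # Eq. (10)
        return np.hypot(g_x, g_y)                                # Eq. (11)

    up    = max(range(x_c - half, x_c),         key=lambda x: grad(x, y_c))
    down  = max(range(x_c + 1, x_c + half + 1), key=lambda x: grad(x, y_c))
    left  = max(range(y_c - half, y_c),         key=lambda y: grad(x_c, y))
    right = max(range(y_c + 1, y_c + half + 1), key=lambda y: grad(x_c, y))
    # p^N, p^S lie on the central column; p^W, p^E on the central row
    return (up, y_c), (down, y_c), (x_c, left), (x_c, right)
```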
Once $\mathbf{P}_{m,n}$ is estimated, for an arbitrary pixel $P(x_{m,n}, y_{m,n})$ in the error sub-image region $R_{m,n}$, the corresponding coordinates $(u_{m,n}, v_{m,n})$ in the ideal sub-image can be obtained via a coordinate transformation. Thus, the correspondence between the error coordinates and the ideal coordinates of each pixel in the sub-image can be deduced with the inverse matrix $\mathbf{P}'_{m,n} = (\mathbf{P}_{m,n})^{-1}$, thereby rectifying the sub-image geometric distortion, that is:
$$\begin{bmatrix} u_{m,n} \\ v_{m,n} \\ 1 \end{bmatrix} = \mathbf{P}'_{m,n} \begin{bmatrix} x_{m,n} \\ y_{m,n} \\ 1 \end{bmatrix} = \begin{bmatrix} r_{m,n}^{11} & r_{m,n}^{12} & t_{m,n}^{13} \\ r_{m,n}^{21} & r_{m,n}^{22} & t_{m,n}^{23} \\ 0 & 0 & 1 \end{bmatrix}^{-1} \begin{bmatrix} x_{m,n} \\ y_{m,n} \\ 1 \end{bmatrix} \qquad (12)$$
where $\mathbf{P}'_{m,n}$ is the geometric rectification matrix of the sub-image region $R_{m,n}$.
The coordinates of the feature points in each error sub-image and its ideal sub-image are extracted from the raw white light-field images. Using the extracted coordinates, the error matrix parameters and the corresponding rectification matrix of each sub-image are determined from Equations (8) and (12), and finally the geometric rectification matrix P of the light-field image can be composed as follows:
$$\mathbf{P} = \begin{bmatrix} \mathbf{P}'_{1,1} & \mathbf{P}'_{1,2} & \cdots & \mathbf{P}'_{1,n} \\ \mathbf{P}'_{2,1} & \mathbf{P}'_{2,2} & \cdots & \mathbf{P}'_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{P}'_{m,1} & \mathbf{P}'_{m,2} & \cdots & \mathbf{P}'_{m,n} \end{bmatrix} \qquad (13)$$
For a light-field image x to be rectified, the coordinates of all the pixels are transformed with the geometric rectification matrix P according to Equation (14), and we obtain the geometric-rectified image $x'$:
$$x' = \mathbf{P}\,x \qquad (14)$$
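Continuing the sketch, the error matrix of Equation (8) can be estimated from the five matched feature points by least squares, and the rectification of Equation (12) applied by inverse mapping (nearest-neighbour resampling and sub-image-local coordinates are our simplifications):

```python
import numpy as np

def estimate_error_matrix(err_pts, ideal_pts):
    """Estimate the 3x3 geometric error matrix P_{m,n} of Eq. (8) from
    matched feature points (least squares over the five pairs)."""
    err = np.asarray(err_pts, dtype=float)      # (x, y) in the error sub-image
    ideal = np.asarray(ideal_pts, dtype=float)  # (u, v) in the ideal sub-image
    A = np.hstack([ideal, np.ones((len(ideal), 1))])   # rows (u, v, 1)
    M, *_ = np.linalg.lstsq(A, err, rcond=None)        # solves A @ M = err
    return np.vstack([M.T, [0.0, 0.0, 1.0]])           # affine rows + [0 0 1]

def rectify_geometry(err_region, P):
    """Resample the error sub-image onto the ideal grid: by Eq. (8) the
    ideal pixel (u, v) maps to (x, y) = P @ (u, v, 1) in the error image,
    which is the inverse mapping of Eq. (12)."""
    out = np.zeros_like(err_region)
    h, w = err_region.shape
    for u in range(h):
        for v in range(w):
            x, y, _ = P @ np.array([u, v, 1.0])
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < h and 0 <= yi < w:
                out[u, v] = err_region[xi, yi]
    return out
```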

3.3. Grayscale Rectification

The final step of the proposed method is to rectify the grayscale distortion of the light-field image with a grayscale rectification matrix G. Since some surface errors of the MLA alter the luminance and contrast of the sub-images, the brightness of the light-field image is non-uniform, which affects the 3D reconstruction results. Therefore, grayscale rectification is applied to recover the brightness uniformity of the light-field image after the geometric distortion has been rectified.
For a geometric-rectified sub-image region $R_{m,n}$, the grayscale factor $g_{m,n}$ is defined as:
$$g_{m,n} = \frac{\mu_{S_{m,n}}}{\mu_{R_{m,n}}} \qquad (15)$$
where $\mu_{R_{m,n}}$ and $\mu_{S_{m,n}}$ are the gray averages of the sub-image region $R_{m,n}$ and the corresponding ideal sub-image, respectively; they can be calculated from:
$$\mu_{R_{m,n}} = \frac{1}{l^2} \sum_{x=1}^{l} \sum_{y=1}^{l} I_{R_{m,n}}(x, y) \qquad (16)$$

$$\mu_{S_{m,n}} = \frac{1}{l^2} \sum_{x=1}^{l} \sum_{y=1}^{l} I_{S_{m,n}}(x, y) \qquad (17)$$
where $I_{R_{m,n}}(x, y)$ and $I_{S_{m,n}}(x, y)$ denote the gray values of the pixel (x, y) in the two images, respectively, and $l^2$ is the number of pixels covered by the sub-image.
The grayscale factor of each sub-image in the raw image is calculated sequentially from Equation (15), yielding the grayscale rectification matrix of the light-field image:
$$\mathbf{G} = \begin{bmatrix} g_{1,1} & g_{1,2} & \cdots & g_{1,n} \\ g_{2,1} & g_{2,2} & \cdots & g_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ g_{m,1} & g_{m,2} & \cdots & g_{m,n} \end{bmatrix} \qquad (18)$$
For the geometric-rectified light-field image $x'$, the brightness deviation is corrected by multiplying the gray values of all pixels in each sub-image by the corresponding grayscale factor, yielding the grayscale-rectified image $x''$, that is:
$$x'' = \mathbf{G}\,x' \qquad (19)$$
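In the same sketch style, the grayscale step amounts to the following (clipping to the 8-bit range is our assumption about the sensor output):

```python
import numpy as np

def grayscale_factor(rect_region, ideal_region):
    """Grayscale factor g_{m,n} of Eq. (15): ratio of the ideal sub-image
    gray average to that of the geometric-rectified sub-image (Eqs. (16)-(17))."""
    return ideal_region.mean() / rect_region.mean()

def rectify_grayscale(rect_region, g):
    """Scale every pixel of the geometric-rectified sub-image by g (Eq. (19))."""
    return np.clip(rect_region.astype(float) * g, 0, 255).astype(np.uint8)
```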
The proposed method, comprising center calibration, geometric rectification and grayscale rectification, uses only the raw white light-field image (error image) captured by a plenoptic camera, together with a white image simulated from the plenoptic imaging system with standard parameters under the same conditions, which serves as the ideal image for jointly solving the geometric rectification matrix P and the grayscale rectification matrix G of the light-field image. Once the parameters of the two matrices are determined, they can be applied to the targeted geometric and grayscale rectification of images of any scene captured by the same plenoptic camera.

4. Results and Analysis

In this section, we investigate the rectification accuracy and the applicable error range of the proposed method for the three basic errors of the MLA, and verify the rectification method on light-field images of real objects. Figure 6 shows the flowchart of the simulation experiments. First, the ideal MLA model and surface error models of different types and magnitudes are added to the light-field imaging model established in Section 2.1 to obtain the ideal white image and the error white image. Second, the complete parameters of the geometric and grayscale rectification matrices are obtained by the method proposed in Section 3, and the rectified white image is used to quantitatively analyze the rectification effect. Third, real objects are imaged by the light-field imaging model with MLA errors, and the light-field images before and after rectification, together with the corresponding sub-aperture images and refocused images, are evaluated to verify the effectiveness and reliability of the proposed method.
The real objects adopted in this paper are a set of square plates at different depths, as shown in Figure 7. Each plate is 10 × 10 cm and consists of black and white squares with a side length of 2.5 cm. The plates are placed 2.5, 3.5, 4.5 and 5.5 m from the main lens plane. We used a Monte Carlo algorithm for the simulation, run on a 2.40 GHz Intel® Xeon® E5-2680v4 server with 128.0 GB of RAM. To guarantee an adequate number of rays while maintaining computational efficiency, parallel computation with 10 threads was performed. The total number of rays in the simulation was $3 \times 10^{10}$.

4.1. Pitch Error

When a pitch error exists in the MLA of a plenoptic camera, the center positions of the microlenses change, which shifts their optical axes and results in discontinuity or aliasing between the sub-images formed on the image sensor after rays pass through the MLA. To analyze the rectification accuracy of the proposed method for distorted light-field images caused by the pitch error, we add a vertical pitch error (α = 90°) in the range 0.2–1.4 μm, at intervals of 0.2 μm, to the MLA model, in accordance with typical machining levels of optical freeform surfaces, and obtain distorted white light-field images from the simulation experiments. Next, using the method proposed in Section 3, geometric rectification and grayscale rectification are sequentially performed on the distorted images. We measure the peak signal-to-noise ratio (PSNR) of the distorted, geometric-rectified, and grayscale-rectified images, and the results are presented in Figure 8a. The pitch error can cause severe degradation of the light-field image, and the PSNR value of the image decreases as the error increases. The PSNR value of the geometric-rectified image ranges from 26.57 dB to 28.11 dB, and that of the grayscale-rectified image ranges from 26.64 dB to 28.27 dB. This indicates that the method can effectively recover image quality from the pitch error distortion.
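The PSNR values here follow the standard definition for 8-bit images; a minimal version in the sketch style used above:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image (standard definition; an 8-bit peak value is assumed)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```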
To further study the rectification effect on sub-images at different positions in the light-field image, taking the pitch error Δp = 1.4 μm at α = 90° as an example, the PSNR results of the sub-images on the diagonal of the distorted image before and after rectification are plotted against sub-image position. As shown in Figure 8b, for the unrectified light-field image, the PSNR value of a sub-image decreases rapidly with its distance from the image center. For the grayscale-rectified light-field image, the overall quality of the sub-images is significantly improved; the differences in sub-image quality at different positions are small, and the PSNR values fluctuate within 24.08–29.63 dB, with an average of 27.34 dB.
The fluctuation of sub-image quality after rectification can be attributed to the limited pixel size of the image sensor. Because an integer pixel unit cannot exactly represent the center point of a sub-image, the geometric distortion of a sub-image can be fully rectified only when its center coincides with the center of a pixel unit; in other cases, the rectification deviates to different degrees. As a result, the PSNR values of the rectified sub-images fluctuate within a certain range. From the above analysis, it is believed that the proposed method can effectively rectify the distorted light-field images in the pitch error range Δp ≤ 1.4 μm. The overall rectification accuracy is greater than 26.64 dB, and the local rectification accuracy is greater than 24.08 dB.
In order to evaluate the proposed method on real object images, we set a pitch error of Δp = 1.4 μm at α = 0° and α = 90° in the simulation imaging model for the objects shown in Figure 7, and performed digital refocusing on the raw light-field images before and after rectification. The resulting images, refocused at different positions, are shown in Figure 9b,c. For comparison, Figure 9a also shows the refocusing results of the ideal image under standard conditions. It can be seen from Figure 9b that the refocused images computed from the distorted raw image have blurring, aliasing, and stretching deformations to varying degrees, and the distortion becomes more obvious with increasing refocusing depth, so that accurate refocused images cannot be obtained at the longer refocusing positions. In contrast, the refocused images computed from the rectified raw image (Figure 9c) show only slight distortion, and the object information at different positions is clearly reconstructed. In addition, we introduce the mean square error (MSE) and the structural similarity index (SSIM) [39] to quantitatively characterize the refocused image quality before and after rectification.
The quality evaluation results for each refocused image are marked in its lower left corner. With rectification processing, the grayscale and structural-similarity distortions of the refocused images are greatly reduced, and the results closely approximate the standard refocused images. This indicates that the proposed method can compensate for the pitch error of the MLA and transform the raw light-field image to fulfil the object refocusing requirements.
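For reference, the two quality indexes can be computed as follows (a sketch assuming scikit-image's implementation of SSIM [39]; the paper does not specify which implementation was used):

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate(ref, img):
    """MSE and SSIM [39] between a standard image and a rectified image
    (both 8-bit grayscale arrays of the same shape)."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    ssim = structural_similarity(ref, img, data_range=255)
    return mse, ssim
```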

4.2. Radius-of-Curvature Error

A radius-of-curvature error changes the focal length of a microlens in the MLA, so that the distance from the erroneous microlens to the sensor no longer satisfies the conjugate distance, reducing the resolution of the raw image captured by the plenoptic camera. To analyze the rectification effect on light-field images with radius-of-curvature error, we define the relative error of the radius of curvature as εr = Δr/r, and set the relative error range as −13% ≤ εr ≤ 13%. Figure 10a shows the PSNR results of the distorted raw white light-field image and the corresponding geometric- and grayscale-rectified images under different error conditions. From the plot without rectification, the larger the magnitude of the radius-of-curvature error, the smaller the PSNR value of the raw image, and negative errors have a more significant effect on the image than positive errors. In general, however, the image degradation caused by this error is small; the minimum PSNR within the error range is 19.49 dB. From the plot with rectification, the PSNR value of the raw image is improved, but it still decreases as the magnitude of the error increases. When the relative error εr = −10%, the PSNR value of the grayscale-rectified image is still over 25 dB. The degradation degree of the image is low, and the distortion is invisible to the human eye (see the partially enlarged standard image and rectified image in Figure 11), which indicates that this rectification method is applicable for distorted images with a relative error $|\varepsilon_r| \le 10\%$.
In order to explore the local rectification accuracy for the radius-of-curvature error, we calculate the PSNR values of the sub-images on the diagonal of the distorted image with the relative error εr = −10%, and the corresponding grayscale-rectified image. As can be seen from the results shown in Figure 10b, the sub-image quality of the raw distorted image varies periodically with the position of the sub-image. After rectification, each sub-image has higher quality, and the average PSNR can reach 25.65 dB. The overall quality of the rectified image still has periodic variations in the range 24.54–26.71 dB, but the period remains unchanged. Moreover, the PSNR values of the sub-images near the center of the image are lower than those of the sub-images at the edge of the image, which becomes more significant after rectification.
Since this error does not change the center position of a microlens sub-image, the periodic variation can be attributed to the slight deviation between each sub-image and its ideal position due to the main lens aberration. The position deviation of a sub-image increases with the distance from its microlens to the MLA center. The sub-images near the center have small deviations, so the geometric rectification effect there is not obvious; for the sub-images near the edge, the deviations are larger and can be well rectified by the proposed method. Consequently, the quality of the sub-images near the edge is higher after rectification. From the above analysis, it is confirmed that the proposed method can achieve ideal rectification of distorted light-field images in the radius-of-curvature relative error range −10% ≤ εr ≤ 10%. The overall rectification accuracy is greater than 24.59 dB, and the local rectification accuracy is greater than 24.54 dB.
Figure 11a,b shows the simulated light-field imaging results for the real objects in Figure 7 under standard conditions and with a relative error εr = −10% in the MLA. It can be seen from the partially enlarged images that the boundaries of the spots in the distorted image diffuse outwards, crosstalk occurs between adjacent spots, and the definition of the edges of the object pattern (highlighted with a yellow outline) decreases. Using our method to rectify the image in Figure 11b, we obtain the rectified light-field image shown in Figure 11c. The edge diffusion and crosstalk of the spots are reduced, and the blurring of the pattern edges is also alleviated. Although some distortion remains at the spot edges, the definition and contrast of the entire image are significantly improved compared with the unrectified image: the MSE value of the image is reduced by 76.00%, and the SSIM value is increased by 2.11%. This result shows that the proposed method can be applied to the rectification of real object images under radius-of-curvature error conditions.

4.3. Decenter Error

Because of manufacturing defects and other uncertainties, a MLA with a double-sided structure may have decenter errors after processing [28,29]. The decenter error causes the optical axes of the microlenses to tilt, resulting in movement of the sub-images formed on the image sensor. To demonstrate the rectification ability of the proposed method for this error, we set decenter errors ranging from 2 μm to 10 μm, with an angle β of 90° to the horizontal direction, in the MLA model for imaging simulation, and sequentially calculated the PSNR values of the raw white images before and after rectification. From the results shown in Figure 12a, the influence of the decenter error on the light-field image depends on the magnitude of the error: as the error increases, the degradation of the raw image is gradually aggravated and the PSNR value decreases. After rectification, the images recover their quality, with high PSNR values for all the decenter errors tested. The PSNR values of the rectified images are mostly concentrated between 26 dB and 28 dB, and are essentially the same with or without grayscale rectification. This indicates that, for image distortion caused by the decenter error, the quality of the light-field image can be recovered by the geometric rectification method alone, without grayscale rectification.
We select the raw white image with a decenter error of δ = 10 μm at β = 90° to investigate the accuracy of the geometric rectification method for sub-images at different positions in the decenter error image. Figure 12b shows the PSNR results of the sub-images on the diagonal of the distorted and grayscale-rectified images. The PSNR values of the unrectified sub-images show a periodic fluctuation in the range 15.25–15.95 dB. After geometric rectification, the PSNR values of most sub-images reach more than 32.73 dB, a quality approximately equal to that of the corresponding standard sub-images. However, for a few sub-images, the rectification results are not ideal and the PSNR values remain below 16 dB, seriously affecting the rectification accuracy of the entire image. We attribute this mainly to the microlens vignetting that accompanies the optical-axis tilt caused by the decenter error: for the sub-images corresponding to microlenses at certain positions, the center points determined by the center calibration method may deviate (by not more than one pixel), so the subsequent extraction of edge feature points is also inaccurate, and these distorted sub-images cannot be recovered by the geometric rectification.
As a verification, we modified the center coordinates of the four sub-images with low PSNR values in Figure 12b and re-performed the rectification processing. From the results shown in Figure 12c, the quality of these four sub-images is significantly improved by the geometric rectification after the center modification, and the average PSNR of the entire set of sub-images is 33.60 dB. Based on the above analysis, we confirm that the proposed geometric rectification method can reliably rectify distorted light-field images in the decenter error range δ ≤ 10 μm. The overall rectification accuracy is greater than 25.88 dB, and the local rectification accuracy with center modification is greater than 32.82 dB.
Further, we add a decenter error of 10 μm to the MLA model in both the horizontal and vertical directions for the imaging simulation of the real objects. Figure 13a–c shows the sub-aperture images of the object under standard conditions, with the decenter error, and after geometric rectification, respectively. As shown in Figure 13a,b, the sub-aperture image with the error is shifted along the horizontal and vertical directions relative to the boundary of the standard sub-aperture image (shown by the yellow line), causing image degradation in grayscale and structural similarity, as well as dislocation aliasing in the subsequent refocused images. After rectification of the light-field image, the sub-aperture image has no offset. The MSE and SSIM values of the sub-aperture image are 1.01 and 0.9996, respectively, which makes it very similar to the standard sub-aperture image. This verifies the effectiveness of the rectification method for real object imaging with the decenter error.

4.4. Combined Error

The surface errors of a MLA after processing are complicated, and a single microlens may exhibit two or three different errors simultaneously. Based on the analysis of the single errors, we further test the performance of the proposed method for combined errors. Considering a complex scene with multiple objects, the real objects used in this section are a set of 3D geometric entities with occlusion, as shown in Figure 14. The cone, cylinder, and sphere are placed at 4.5, 5.0, and 6.0 m from the main lens plane. The MLA surface error model is set to a multi-error combination condition, in which the pitch error is Δp = 1.4 μm at α = 90°, the radius-of-curvature relative error is εr = −10%, and the decenter error is δ = 10 μm at β = 0°, for the imaging experiments.
Figure 15 shows the light-field images and the corresponding sub-aperture images and refocused images under standard conditions, with the combined error, and after grayscale rectification, respectively. As can be seen from Figure 15b, the distorted images appear as a superposition of the distortion characteristics of each single error, such as aliasing and stretching deformation caused by the pitch error, crosstalk due to the radius-of-curvature error, and image shift from the decenter error. The degradation degree of the overall image is very high. The rectified images, as shown in Figure 15c, have no obvious distortions or deformations, showing clear images of the real 3D objects.

5. Discussion

In this paper, we have proposed a local rectification method, based on a raw white image, for light-field images distorted by MLA surface errors. The method includes calibrating the sub-image center of each microlens and sequentially rectifying the geometric distortion and grayscale deviation of each sub-image. Based on the basic error models of the MLA and the plenoptic camera imaging model established in this paper, we analyzed the accuracy and applicable error range of the proposed method for rectifying the pitch error, radius-of-curvature error, and decenter error. We also evaluated the light-field image quality of real objects before and after rectification under different error conditions, including single and combined errors, using the MSE and SSIM indexes, verifying the effectiveness and reliability of the proposed rectification method.
The analysis results indicate that the pitch error can cause serious degradation of the light-field image, and the refocused images of the real objects display blurring, aliasing and stretching deformation. In the pitch error range Δp ≤ 1.4 μm, the proposed method can effectively rectify the distorted image, with an overall rectification accuracy greater than 26.64 dB and a local rectification accuracy greater than 24.08 dB; the rectified image can accurately reconstruct the object information at different positions. The radius-of-curvature error causes periodic fluctuations in light-field image quality, and in the real object image the adjacent spots exhibit crosstalk and the definition of the pattern edges decreases. In the radius-of-curvature error range −10% ≤ εr ≤ 10%, the overall rectification accuracy is greater than 24.59 dB, and the local rectification accuracy is greater than 24.54 dB; the MSE value of the rectified object image is reduced by 76.00%, and the SSIM value is increased by 2.11%. For the decenter error, the degradation and distortion of the light-field image increase with the error magnitude, and the sub-aperture image of the object is shifted along the error direction. In the decenter error range δ ≤ 10 μm, the light-field image quality can be restored by the geometric rectification method alone, yielding an overall rectification accuracy greater than 25.88 dB and a local rectification accuracy, with center modification, greater than 32.82 dB.

6. Conclusions

In conclusion, the rectification method proposed in this paper achieves a good compensation effect for the image distortion caused by different surface errors of the MLA, effectively reducing the loss and deviation of light-field information. The rectified light-field image has high quality and can meet the requirements of subsequent digital processing and target reconstruction. However, the accuracy of the rectification method is largely determined by the calibration of the sub-image centers. We located the center point of each microlens to a whole pixel, but owing to the finite pixel size of the sensor, a pixel cannot represent the actual center point precisely, which may lead to positioning deviations of the center points and inaccurate rectification results, as shown in Figure 8b and Figure 12b. Therefore, future work will consider locating the center points with sub-pixel precision to improve the accuracy of the method.

Author Contributions

Conceptualization, S.L. and Y.Y.; Methodology, S.L. and Y.Y.; Software, S.L. and C.Z.; Validation, S.L. and C.Z.; Formal Analysis, S.L.; Investigation, Data Curation, S.L. and Y.Z.; Writing-Original Draft Preparation, S.L.; Writing-Review & Editing, S.L. and Y.Y.; Visualization, S.L., Y.Z. and C.Z.; Supervision, Y.Y. and H.T.; Project Administration, Y.Y. and H.T.; Funding Acquisition, Y.Y. and H.T.

Funding

This research was funded by the National Natural Science Foundation of China (Nos. 51776051, 51327803).

Acknowledgments

The authors are especially grateful to the editors and referees who gave important comments that helped us improve this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Levoy, M. Light fields and computational imaging. Computer 2006, 39, 46–55.
2. Ng, R.; Levoy, M.; Brédif, M.; Duval, G.; Horowitz, M.; Hanrahan, P. Light field photography with a hand-held plenoptic camera. Comput. Sci. Tech. Rep. 2005, 2, 1–11.
3. Levoy, M.; Ng, R.; Adams, A.; Footer, M.; Horowitz, M. Light field microscopy. ACM Trans. Graph. 2006, 25, 924–934.
4. Georgiev, T.; Lumsdaine, A. Focused plenoptic camera and rendering. J. Electron. Imaging 2010, 19, 021106.
5. Antensteiner, D.; Štolc, S.; Pock, T. A review of depth and normal fusion algorithms. Sensors 2018, 18, 431.
6. Rodríguez, M.; Magdaleno, E.; Pérez, F.; García, C. Automated software acceleration in programmable logic for an efficient NFFT algorithm implementation: A case study. Sensors 2017, 17, 694.
7. Pérez, J.; Magdaleno, E.; Pérez, F.; Rodríguez, M.; Hernández, D.; Corrales, J. Super-resolution in plenoptic cameras using FPGAs. Sensors 2014, 14, 8669–8685.
8. Sun, J.; Xu, C.; Zhang, B.; Hossain, M.M.; Wang, S.; Qi, H.; Tan, H. Three-dimensional temperature field measurement of flame using a single light field camera. Opt. Express 2016, 24, 1118–1132.
9. Yuan, Y.; Liu, B.; Li, S.; Tan, H. Light-field-camera imaging simulation of participatory media using Monte Carlo method. Int. J. Heat Mass Transf. 2016, 102, 518–527.
10. Kim, S.; Ban, Y.; Lee, S. Face liveness detection using a light field camera. Sensors 2014, 14, 22471–22499.
11. Fahringer, T.W.; Lynch, K.P.; Thurow, B.S. Volumetric particle image velocimetry with a single plenoptic camera. Meas. Sci. Technol. 2015, 26, 115201.
12. Chen, H.; Sick, V. Three-dimensional three-component air flow visualization in a steady-state engine flow bench using a plenoptic camera. SAE Int. J. Engines 2017, 10, 625–635.
13. Skinner, K.A.; Johnson-Roberson, M. Towards real-time underwater 3D reconstruction with plenoptic cameras. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 2014–2021.
14. Dong, F.; Ieng, S.H.; Savatier, X.; Etienne-Cummings, R.; Benosman, R. Plenoptic cameras in real-time robotics. Int. J. Robot. Res. 2013, 32, 206–217.
15. Liu, X.; Zhang, X.; Fang, F.; Zeng, Z.; Gao, H.; Hu, X. Influence of machining errors on form errors of microlens arrays in ultra-precision turning. Int. J. Mach. Tools Manuf. 2015, 96, 80–93.
16. Cao, A.; Pang, H.; Wang, J.; Zhang, M.; Chen, J.; Shi, L.; Deng, Q.; Hu, S. The effects of profile errors of microlens surfaces on laser beam homogenization. Micromachines 2017, 8, 50.
17. Thomason, C.M.; Fahringer, T.F.; Thurow, B.S. Calibration of a microlens array for a plenoptic camera. In Proceedings of the 52nd AIAA Aerospace Sciences Meeting, National Harbor, MD, USA, 13–17 January 2014; p. 0396.
18. Li, S.; Yuan, Y.; Zhang, H.; Liu, B.; Tan, H. Microlens assembly error analysis for light field camera based on Monte Carlo method. Opt. Commun. 2016, 372, 22–36.
19. Li, S.; Yuan, Y.; Liu, B.; Wang, F.; Tan, H. Influence of microlens array manufacturing errors on light-field imaging. Opt. Commun. 2018, 410, 40–52.
20. Li, S.; Yuan, Y.; Liu, B.; Wang, F.; Tan, H. Local error and its identification for microlens array in plenoptic camera. Opt. Lasers Eng. 2018, 108, 41–53.
21. Shi, S.; Wang, J.; Ding, J.; Zhao, Z.; New, T.H. Parametric study on light field volumetric particle image velocimetry. Flow Meas. Instrum. 2016, 49, 70–88.
22. Fahringer, T.; Thurow, B. The effect of grid resolution on the accuracy of tomographic reconstruction using a plenoptic camera. In Proceedings of the 51st AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, Dallas, TX, USA, 7–10 January 2013; p. 0039.
23. Kong, X.; Chen, Q.; Wang, J.; Gu, G.; Wang, P.; Qian, W.; Ren, K.; Miao, X. Inclinometer assembly error calibration and horizontal image correction in photoelectric measurement systems. Sensors 2018, 18, 248.
24. Lourenço, M.; Barreto, J.P.; Francisco, V. sRD-SIFT: Keypoint detection and matching in images with radial distortion. IEEE Trans. Robot. 2012, 28, 752–760.
25. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Affine covariant features for fisheye distortion local modeling. IEEE Trans. Image Process. 2017, 26, 696–710.
26. Cruz-Mota, J.; Bogdanova, I.; Paquier, B.; Bierlaire, M.; Thiran, J.P. Scale invariant feature transform on the sphere: Theory and applications. Int. J. Comput. Vis. 2012, 98, 217–241.
27. Jin, J.; Cao, Y.; Cai, W.; Zheng, W.; Zhou, P. An effective rectification method for lenselet-based plenoptic cameras. In Proceedings of the SPIE/COS Photonics Asia on Optoelectronic Imaging and Multimedia Technology IV, Beijing, China, 12–14 October 2016; p. 100200F.
28. Dansereau, D.G.; Pizarro, O.; Williams, S.B. Decoding, calibration and rectification for lenselet-based plenoptic cameras. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, USA, 23–28 June 2013; pp. 1027–1034.
29. Cho, D.; Lee, M.; Kim, S.; Tai, Y.W. Modeling the calibration pipeline of the Lytro camera for high quality light-field image reconstruction. In Proceedings of the 2013 IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1–8 December 2013; pp. 3280–3287.
30. Li, T.; Li, S.; Li, S.; Yuan, Y.; Tan, H. Correction model for microlens array assembly error in light field camera. Opt. Express 2016, 24, 24524–24543.
31. Mukaida, M.; Yan, J. Ductile machining of single-crystal silicon for microlens arrays by ultraprecision diamond turning using a slow tool servo. Int. J. Mach. Tools Manuf. 2017, 115, 2–14.
32. Liu, B.; Yuan, Y.; Li, S.; Shuai, Y.; Tan, H. Simulation of light-field camera imaging based on ray splitting Monte Carlo method. Opt. Commun. 2015, 355, 15–26.
33. Shih, Y.M.; Kao, C.C.; Ke, K.C.; Yang, S.Y. Imprinting of double-sided microstructures with rapid induction heating and gas-assisted pressuring. J. Micromech. Microeng. 2017, 27, 095012.
34. Zhao, Z.; Hui, M.; Liu, M.; Dong, L.; Liu, X.; Zhao, Y. Centroid shift analysis of microlens array detector in interference imaging system. Opt. Commun. 2015, 354, 132–139.
35. Huang, C.Y.; Hsiao, W.T.; Huang, K.C.; Chang, K.S.; Chou, H.Y.; Chou, C.P. Fabrication of a double-sided micro-lens array by a glass molding technique. J. Micromech. Microeng. 2011, 21, 085020.
36. Xie, D.; Chang, X.; Shu, X.; Wang, Y.; Ding, H.; Liu, Y. Rapid fabrication of thermoplastic polymer refractive microlens array using contactless hot embossing technology. Opt. Express 2015, 23, 5154–5166.
37. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Generalized Sobel filters for gradient estimation of distorted images. In Proceedings of the 2015 IEEE International Conference on Image Processing (ICIP), Quebec City, QC, Canada, 27–30 September 2015; pp. 3250–3254.
38. Furnari, A.; Farinella, G.M.; Bruna, A.R.; Battiato, S. Distortion adaptive Sobel filters for the gradient estimation of wide angle images. J. Vis. Commun. Image Represent. 2017, 46, 165–175.
39. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. Physical model of plenoptic camera imaging system.
Figure 2. MLA design model and its coordinate system.
Figure 3. Schematic of the rectification procedure for a distorted light-field image.
Figure 4. Step-by-step results of microlens center calibration. (a) Raw white light-field image captured by plenoptic camera; (b) Preliminary division of microlens region; (c) Estimated center point; (d) Divided sub-image region.
Figure 5. Extracted sub-image feature points for solving geometric error matrix.
Figure 6. Flowchart of our simulation experiment.
Figure 7. Layout of real objects in light-field imaging.
Figure 8. PSNR results for pitch error in light-field image. (a) Overall PSNR of raw white light-field images under different pitch error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a pitch error Δp = 1.4 μm at α = 90°.
Figure 9. Comparison of refocused images of real objects (a) from the standard light-field image; (b) from the distorted image with a pitch error Δp = 1.4 μm at α = 0° and α = 90°; and (c) from the image rectified by the proposed method. The MSE and SSIM quality evaluation results for each refocused image are marked in its lower left corner.
Figure 10. PSNR results for radius-of-curvature error in light-field image. (a) Overall PSNR of raw white light-field images under different radius-of-curvature error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a relative error εr = −10%.
Figure 11. Comparison of partially enlarged light-field images of real objects (a) under standard conditions; (b) with a relative error εr = −10%; and (c) after geometric rectification. The MSE and SSIM quality evaluation results for each image are marked in its lower right corner.
Figure 12. PSNR results for decenter error in light-field image. (a) Overall PSNR of raw white light-field images under different decenter error conditions; (b) Local PSNR of each sub-image on the diagonal of the raw image with a decenter error δ = 10 μm and β = 90°; (c) Local PSNR results after center modification.
Figure 13. Comparison of sub-aperture images of real objects (a) from the standard light-field image; (b) from the distorted image with a decenter error δ = 10 μm at β = 0° and β = 90°; and (c) from the image rectified by the proposed method. The MSE and SSIM quality evaluation results for each sub-aperture image are marked in its lower right corner.
Figure 14. Real 3D objects in performance test for combined errors.
Figure 15. Comparison of light-field images, sub-aperture images and refocused images of real 3D objects (a) under standard conditions; (b) with a combined error; and (c) after grayscale rectification. The MSE and SSIM quality evaluation results for each sub-aperture image are marked in its lower right corner.
Table 1. Main parameters of MLA model.

Parameter | Value
Number of microlenses NW × NH | 102 × 102
Pitch (side length) p | 100 μm
Radius of curvature r | 469 μm
Thickness at the vertex t | 10 μm
Focal length f | 420 μm
Refractive index n (λ = 632.8 nm) | 1.56
Table 2. Mathematical model of MLA surface errors. The negative sign of the term $(r - t/2)$ gives the incident-light surface, the positive sign the transmitted-light surface.

Pitch error Δp:
$$[x \mp (r - t/2)]^2 + [y - (y_0 - (m - 1/2)p + \Delta p \sin\alpha)]^2 + [z - (z_0 + (n - 1/2)p + \Delta p \cos\alpha)]^2 = r^2$$

Radius-of-curvature error Δr:
$$[x \mp (r - t/2)]^2 + [y - (y_0 - (m - 1/2)p)]^2 + [z - (z_0 + (n - 1/2)p)]^2 = (r + \Delta r)^2$$

Decenter error δ (incident-light surface unshifted; transmitted-light surface offset by δ):
$$[x - (r - t/2)]^2 + [y - (y_0 - (m - 1/2)p)]^2 + [z - (z_0 + (n - 1/2)p)]^2 = r^2$$
$$[x + (r - t/2)]^2 + [y - (y_0 - (m - 1/2)p + \delta \sin\beta)]^2 + [z - (z_0 + (n - 1/2)p + \delta \cos\beta)]^2 = r^2$$

