Article

Vignetting Compensation Method for CMOS Camera Based on LED Spatial Array

Shuo Huang, Xifeng Zheng, Xinyue Mao, Yufeng Chen and Yu Chen
1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
3 Changchun Cedar Electronics Technology Co., Ltd., Changchun 130103, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(10), 1936; https://doi.org/10.3390/electronics13101936
Submission received: 22 April 2024 / Revised: 10 May 2024 / Accepted: 14 May 2024 / Published: 15 May 2024

Abstract

To solve the problem of pixel light intensity distortion caused by vignetting in optical devices such as CMOS or CCD cameras, existing studies mainly focus on small spatial light fields and point light sources and adopt integrating spheres and function models for vignetting correction; these approaches are not suitable for large LED optical composite display devices. Against this background, this paper proposes a camera vignetting compensation method based on an LED spatial array. A two-dimensional translation device driven by a high-precision guide rail was independently developed; spatial array technology is used to obtain the brightness distribution of the corrected display screen and quantify the camera's vignetting distortion characteristics, and systematic mathematical operations with an iterative compensation strategy are applied. Industry-standard tests show that the brightness uniformity of the display improved by 5.06%. These results have been applied in mass production and industrialization.

1. Introduction

In the pursuit of achieving high-quality image display on LED screens, CCD/CMOS cameras have become a pivotal instrument for luminance and spatial distribution calibration due to their resemblance to the human visual system and their swift, efficient image acquisition capabilities. Nevertheless, in practical applications, an inherent characteristic of camera imaging known as vignetting negatively impacts the consistency calibration process. Vignetting refers to the progressive decrease in image brightness as the distance from the camera’s optical axis increases [1], a phenomenon that is particularly pronounced during LED screen brightness calibration. When employing matrix cameras to gather data for uniformity correction, the natural attenuation of brightness from the central to the peripheral areas can lead to hardware correction coefficients that underestimate the center region and overestimate the edge regions [2]. Upon application of these correction parameters, the actual display performance exhibits a non-uniform state with a darker center and brighter edges. While this variation may be less noticeable within an individual cabinet due to a seemingly smooth transition in brightness [3], it becomes significantly apparent across multiple cabinets when assembled into a large-scale screen, thereby undermining the overall continuity and uniformity of the displayed image. Consequently, to attain the desired level of display quality, it is imperative to accurately compensate for the camera’s vignetting effect either prior to or during the LED screen brightness calibration, ensuring consistent and even brightness across the entire display surface.
Common camera vignetting compensation strategies mainly include three approaches: the lookup table method, the integrating sphere calibration method, and the function model approximation method.
The principle of the lookup table method is to illuminate the camera with a standard, stable point light source and obtain a compensation factor for each pixel. Although this method is simple and easy to operate, as camera resolution continues to increase, the measurement time grows longer and the compensation factors for the camera's photosensitive pixels require more storage space. Moreover, when the camera acquisition conditions change, different compensation factors must be compared, which greatly limits the method in engineering applications.
The integrating sphere is a hollow sphere with a white inner coating; every point on the inner wall produces highly uniform diffuse reflection, ensuring that the light emitted from the sphere's opening is uniform white light. By aiming the camera lens at the light outlet of the integrating sphere and capturing an image, the compensation coefficient of each camera pixel is calculated, and vignetting correction is applied on that basis. However, the size of the light outlet restricts calibration to small light fields, leaving the method powerless for large ones. It is possible to calibrate a single LED display module this way [4], but building a custom integrating sphere suitable for an LED display of hundreds of square meters poses practical engineering challenges. This method also has the disadvantage of occupying too much storage space.
The function model approximation method uses various function models to simulate and approximate the camera vignetting surface: for example, Mark Brady's polynomial model [5], Sawchuk's exponential polynomial model [6], Wonpil Yu's hyperbolic cosine model [7], F. J. W.-M. Leong's Gaussian function model [8], and the radial polynomial model advocated by Ramsay [9]. Although this approach alleviates the storage problem to some extent, it inevitably increases the difficulty of estimating the vignetting function and fails to fully resolve the other problems above.
In 2010, Zhang Xin proposed a correction method for camera vignetting on LED display screens [10]. However, this method can only eliminate some high-frequency differences, and the discreteness of the LED light-emitting chips themselves is not addressed. In 2016, Tian Zhihui used a luminance meter to measure brightness values at different positions on the screen and analyzed its vignetting through low-order surface fitting [11]. However, the high cost of the luminance meter and the slow measurement speed limit this scheme in practical engineering. In 2019, Wang Sichao and his team proposed compensating for camera vignetting by combining a pan-tilt head with the camera to collect brightness data [12,13]. However, because it is difficult to keep the pan-tilt's rotation axis aligned with the camera's imaging center, the resulting vignetting surface has large errors. The adaptive acquisition error correction strategy based on shift difference proposed by Mao Xinyue in 2022 [14], despite its innovations, introduced repeatability errors into the time-sharing acquisition process.
In view of the above problems, this study proposes a camera vignetting compensation algorithm based on an LED spatial array and designs a camera vignetting correction device. The device accurately captures the spatial distribution of LED brightness and characterizes the camera's vignetting curve, achieving accurate calibration of the camera's vignetting. This method not only eliminates the discreteness of the LED light-emitting dies but also offers fast measurement and a small storage footprint. In practice, the technology has been successfully applied to LED display calibration with significant optimization effect: it ensures that multiple display units can be seamlessly spliced into a whole screen that meets the requirements of different applications, effectively improving the display consistency and uniformity of the whole screen.

2. Methods

To solve the camera vignetting problem described above, a spatial array compensation algorithm is proposed to correct the camera vignetting effect. To further study and quantify the surface distortion caused by camera vignetting, a precision two-dimensional translation stage was designed. The device is driven by a high-precision guide rail that ensures a positioning accuracy of 0.020-0.040 mm. The specific configuration is as follows: the camera is fixed at the top of the darkroom and moves accurately along the guide rail in the X-axis direction; the LED display cabinet is fixed at the bottom and moves along the guide rail in the Y-axis direction. The maximum travel of the guide rail is 100 cm. When the device door is closed, an ideal darkroom environment is formed for accurate measurement and analysis, as shown in Figure 1.
Basic idea: use the purpose-built equipment to accurately obtain the camera's vignetting distribution surface and take it as the target surface to be compensated. This surface is then multiplied by the original correction coefficient matrix, obtained directly from the camera data without considering vignetting, to generate a new correction coefficient matrix that accounts for both the discreteness of the luminous chips and the camera vignetting. Finally, the optimized correction coefficient matrix is applied to the LED screen, eliminating the influence of camera vignetting and chip dispersion on the brightness uniformity of the screen. Figure 2 shows the flow diagram of the camera surface compensation algorithm.

2.1. Luminance Consistency Correction

The first step is to correct the brightness consistency of the LED box to eliminate the uneven brightness caused by the discreteness of the light-emitting chip.
The camera to be compensated collects the LED cabinet pixel by pixel, and the original brightness of each LED light point is obtained by locating its position. For a given camera with the aperture held fixed, the actual brightness of an LED is proportional to the gray value in the digital image, so under appropriate exposure conditions the image gray value is used to represent brightness. The original gray value of an LED lamp is $Gray(i,j)$. To eliminate the influence of environmental noise, the unlit LED cabinet is captured in the same environment at the same exposure to obtain the background noise $N(i,j)$. The true original gray value $D(i,j)$ is then:
$$ D(i,j) = Gray(i,j) - N(i,j) \tag{1} $$
Taking 80% of the mean value of the matrix as the target brightness $T$:
$$ T = \frac{\sum_{i=1}^{P}\sum_{j=1}^{Q} D(i,j)}{P \times Q} \times 80\% \tag{2} $$
where $i = 1 \sim P$ and $j = 1 \sim Q$; $P$ is the number of pixel rows of the LED display and $Q$ is the number of pixel columns.
The initial correction coefficient $C(i,j)$ is calculated by the following formula:
$$ C(i,j) = \frac{T}{D(i,j)} \tag{3} $$
The initial correction coefficient $C(i,j)$ is loaded through the control system.
To compare the display uniformity of the LED display before and after correction, the display in its initially corrected state is captured again under the same acquisition conditions to obtain the corrected gray value $Gray'(i,j)$ (no second capture is required in actual operation; the initial brightness correction is complete once the correction coefficients are loaded).
The corrected true gray value $D'(i,j)$ is then:
$$ D'(i,j) = Gray'(i,j) - N'(i,j) \tag{4} $$
where $N'(i,j)$ is the background noise of this acquisition. Figure 3 compares the corrected true gray value $D'(i,j)$ with the original true gray value $D(i,j)$; the uniformity is clearly much improved.
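To make the procedure concrete, the following is a minimal NumPy sketch of the luminance consistency correction above (the function and array names are illustrative, not from the original work; the per-pixel gray values are assumed to have already been extracted from the camera images into P × Q arrays):

```python
import numpy as np

def luminance_correction(gray, noise, target_ratio=0.80):
    """Per-pixel luminance consistency coefficients, Eqs. (1)-(3).

    gray  : P x Q array of raw gray values of the lit LEDs, Gray(i, j)
    noise : P x Q array of background gray values, N(i, j)
    """
    # Eq. (1): subtract the background noise to get the true gray value D(i, j)
    D = gray.astype(np.float64) - noise

    # Eq. (2): target brightness T is 80% of the mean of the matrix
    T = target_ratio * D.mean()

    # Eq. (3): initial correction coefficient C(i, j) = T / D(i, j)
    return T / D
```

Setting the target at 80% of the mean, presumably, keeps most coefficients at or below unity, so dim pixels need not be driven far beyond their native brightness.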

2.2. Camera Vignetting Surface Correction

At this point, the LED display is in its initially corrected state, which eliminates the dispersion of the light-emitting chips but introduces the vignetting distribution surface error of the camera to be compensated. In this paper, the brightness distribution of the LEDs is used to represent the vignetting surface of the camera to be compensated; the equipment designed to obtain this brightness distribution is shown in Figure 1. Camera vignetting correction mainly comprises the following five steps; the basic process is shown in Figure 4:
The LED display has P rows and Q columns of pixels, which are divided into an M × N grid of arrays, each containing (P/M) × (Q/N) pixels. The acquisition method is illustrated in Figure 5. All pixels in the first array are lit, the device's guide rails are moved so that the light spot lies at the center of camera A's field of view for shooting, and then the next array is lit in sequence. Camera A always remains normal to the light spot and collects the spatial arrays of the red, green, and blue primary colors of the LED cabinet after the initial brightness correction. (Camera A is the camera mounted on the device in Figure 1. Unlike the camera to be compensated, its vignetting is negligible because the light spot is always at the center of its field of view and along its normal direction.)
By integrating the gray values within each spatial array, an M × N two-dimensional matrix Sur1 is obtained, representing the brightness distribution surface. As shown in Figure 6, the brightness distribution exhibits complex spatial surface distortion: the red surface is low in the middle and high at the sides, and green and blue show surface distortions to different degrees. Sur1 is calculated as follows:
$$ Sur_1 = \sum_{x=1}^{X}\sum_{y=1}^{Y} Arr_1(x,y) \tag{5} $$
$Arr_1(x,y)$ is the gray value of the photosensitive pixel in row $x$ and column $y$ of the image acquired for each spatial array; each spatial array image comprises $X \times Y$ photosensitive pixels.
Sur1 is denoised and filtered and then interpolated by cubic spline into a matrix $G_1$ of size P × Q. The surface compensation matrix $V_1(i,j)$ is then:
$$ V_1(i,j) = \frac{G_{\min}}{G_1(i,j)} \tag{6} $$
where $G_{\min}$ is the minimum value of $G_1(i,j)$, with $i = 1 \sim P$ and $j = 1 \sim Q$. The camera vignetting correction coefficient $F_1(i,j)$ is obtained by multiplying the surface compensation matrix $V_1(i,j)$ by the luminance consistency correction coefficient $C(i,j)$:
$$ F_1(i,j) = V_1(i,j) \times C(i,j) \tag{7} $$
The camera vignetting correction coefficient $F_1(i,j)$ is loaded.
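A sketch of this surface-to-coefficient step, assuming SciPy for filtering and spline interpolation (the 3 × 3 median filter is an assumption; the paper only states that Sur1 is denoised and filtered):

```python
import numpy as np
from scipy.ndimage import median_filter
from scipy.interpolate import RectBivariateSpline

def surface_compensation(sur, C):
    """Expand the M x N brightness surface into the P x Q compensation matrix.

    sur : M x N matrix of integrated gray values, Eq. (5)
    C   : P x Q luminance consistency coefficients from Section 2.1
    """
    M, N = sur.shape
    P, Q = C.shape
    # Denoise and filter the measured surface (3 x 3 median filter assumed)
    sur_f = median_filter(sur.astype(np.float64), size=3)

    # Cubic spline interpolation of the M x N surface into the P x Q matrix G1
    spline = RectBivariateSpline(np.arange(M), np.arange(N), sur_f, kx=3, ky=3)
    G1 = spline(np.linspace(0, M - 1, P), np.linspace(0, N - 1, Q))

    # Eq. (6): surface compensation matrix V1(i, j) = Gmin / G1(i, j)
    V1 = G1.min() / G1

    # Eq. (7): vignetting correction coefficient F1 = V1 * C (element-wise)
    return V1, V1 * C
```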
The brightness distribution surface Sur2 after camera vignetting compensation is then collected by the device to verify its flatness. When the row gray difference $P_{2\_row}$ and the column gray difference $P_{2\_col}$ are both less than 1%, camera vignetting correction is complete.
$$ Sur_{2\_row}(m) = \frac{\sum_{n=1}^{N} Sur_2(m,n)}{N}, \qquad Sur_{2\_col}(n) = \frac{\sum_{m=1}^{M} Sur_2(m,n)}{M} \tag{8} $$
$$ P_{2\_row} = \frac{\max(Sur_{2\_row}(m)) - \min(Sur_{2\_row}(m))}{\min(Sur_{2\_row}(m))} \times 100\%, \qquad P_{2\_col} = \frac{\max(Sur_{2\_col}(n)) - \min(Sur_{2\_col}(n))}{\min(Sur_{2\_col}(n))} \times 100\% \tag{9} $$
where $m = 1 \sim M$ and $n = 1 \sim N$.
If these conditions are not met, steps 2–5 are repeated on the current LED display, and the camera vignetting is corrected several more times until the flatness of the collected two-dimensional matrix Sur_Num meets the requirement; the number of acquisitions Num is usually between 3 and 5. The surface compensation matrix $W_{Num}(i,j)$ and camera vignetting correction coefficient $F_{Num}(i,j)$ are obtained as:
$$ W_{Num}(i,j) = V_1(i,j) \times V_2(i,j) \times \cdots \times V_{Num}(i,j) \tag{10} $$
$$ F_{Num}(i,j) = W_{Num}(i,j) \times C(i,j) \tag{11} $$
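A sketch of the iteration, reusing surface_compensation from above; capture_surface stands in for the device acquiring the brightness surface with the current coefficients loaded (a hypothetical callback, not part of the original system):

```python
import numpy as np

def row_col_flatness(sur):
    """Row/column gray differences of Eqs. (8) and (9), as fractions."""
    row_means = sur.mean(axis=1)        # Sur_row(m)
    col_means = sur.mean(axis=0)        # Sur_col(n)
    p_row = (row_means.max() - row_means.min()) / row_means.min()
    p_col = (col_means.max() - col_means.min()) / col_means.min()
    return p_row, p_col

def iterative_vignetting_correction(capture_surface, C, tol=0.01, max_iter=5):
    """Repeat steps 2-5 until both gray differences fall below 1%."""
    W = np.ones_like(C)                 # accumulated surface compensation
    F = C.copy()
    for num in range(1, max_iter + 1):
        sur = capture_surface(F)        # acquire Sur_num with F loaded
        if all(p < tol for p in row_col_flatness(sur)):
            break                       # flatness requirement met
        V, _ = surface_compensation(sur, C)
        W *= V                          # Eq. (10): W_Num = V1 x V2 x ... x V_Num
        F = W * C                       # Eq. (11): F_Num = W_Num x C
    return F
```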
In the experiment, an LED display cabinet with a pixel pitch of 1.19 mm was selected; it has 288 × 256 pixels and was divided into 18 rows and 16 columns of arrays, each of 16 × 16 pixels. Figure 6 shows the brightness distribution for Num = 3. Table 1 compares the row and column gray differences of the three primary colors before and after compensation, calculated by Equation (9).
It can be seen that after compensating for camera vignetting, the brightness of the display screen meets the flatness requirements and achieves the purpose of camera vignetting correction.

2.3. Edge Brightness Correction

After the camera vignetting correction in the previous step, the overall brightness uniformity of the LED display is improved, but the edges of the cabinets appear as bright lines after splicing. This is because when a single cabinet is captured, its edge is not supplemented by light from neighboring cabinets, so the collected edge brightness is low; the correction coefficients computed in the initial brightness correction are therefore too large, producing bright lines between adjacent cabinets after splicing. A further edge brightness correction is performed for this phenomenon.
Using the same device, 36 rows/columns along the upper, lower, left, and right edges of the cabinet corrected in the previous step are lit in sequence, from the edge inward. To obtain accurate LED pixel brightness, the pixels of each row/column are lit at intervals. The center of camera A is aligned with the center of each row/column, a picture is taken, and the camera moves with the lit LED pixels. The left-edge brightness acquisition is shown in Figure 7 (the right edge is analogous), and the upper-edge acquisition in Figure 8 (the lower edge is analogous). A total of 144 pictures were collected.
The brightness of each row or column is represented by its gray mean $A_m$ ($m = 1 \sim 36$). From the blue line in Figure 9 it can be seen that the edge brightness is clearly too high, particularly $A_1 \sim A_S$ ($S \le 18$; $S = 8$ for this batch of LED cabinets), so edge brightness adjustment is necessary. In this paper, the data $A_{S+1} \sim A_{36}$ are fitted with a second-order polynomial:
$$ A_m = p_1 m^2 + p_2 m + p_3, \quad m = (S+1) \sim 36 \tag{12} $$
The coefficients $p_1$, $p_2$, $p_3$ are obtained by fitting, and the fitted edge brightness is denoted $I_m$:
$$ I_m = \begin{cases} p_1 m^2 + p_2 m + p_3, & m = 1 \sim S \\ A_m, & m = (S+1) \sim 36 \end{cases} \tag{13} $$
Then the trimming ratio $B_m$ is calculated:
$$ B_m = I_m / A_m \tag{14} $$
Multiplying the trimming ratio by the corresponding entries of the vignetting correction coefficient gives the final correction coefficient $Com(i,j)$:
$$ Com(i,j) = F_{Num}(i,j) \times B_m \tag{15} $$
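A sketch of the edge correction for one edge, with NumPy's quadratic fit standing in for the second-order polynomial fitting (the function name and defaults are illustrative):

```python
import numpy as np

def edge_trimming_ratios(A, S=8):
    """Edge brightness trimming ratios, Eqs. (12)-(14).

    A : length-36 array of row/column gray means A_1..A_36, edge inward
    S : number of edge rows/columns that read too bright (S = 8 in this batch)
    """
    A = np.asarray(A, dtype=np.float64)
    m = np.arange(1, 37)
    # Eq. (12): fit a second-order polynomial to the unaffected data A_{S+1}..A_36
    p = np.polyfit(m[S:], A[S:], deg=2)   # returns [p1, p2, p3]

    # Eq. (13): replace the first S values with the fitted brightness I_m
    I = A.copy()
    I[:S] = np.polyval(p, m[:S])

    # Eq. (14): trimming ratio B_m = I_m / A_m
    return I / A
```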
After loading $Com(i,j)$, the 36 edge lines are captured again under the same experimental conditions to verify the feasibility of the method. Test data for the left edge of the blue primary color are shown in Figure 9: the blue line is the measured brightness, the orange line the fitted brightness, and the gray line the brightness after edge correction. The data show that the edge brightness is clearly improved: when multiple display screens are assembled, the bright lines at the seams disappear and the overall display is uniform.
In theory, the camera vignetting compensation matrix $Com(i,j)$ is independent of the pixel resolution of an LED cabinet of the same physical size and depends only on the camera, so it can be applied to any resolution under the same camera conditions. If the physical size of the cabinet changes, its position in the camera image changes and re-compensation is required.

3. Results

The camera used in this experiment is the acA3800-10gm CMOS camera manufactured by Basler (Ahrensburg, Germany). The lens focal length is 12 mm, the sensor resolution is 2748 × 3840, and the lens is 100 cm from the surface of the LED display. An LED display cabinet with a pixel pitch of 1.19 mm and 288 × 256 pixels was selected, measuring 342.9 mm high by 304.8 mm wide. Eight such cabinets were assembled into a large display of 2 rows by 4 columns.
The International Committee for Display Metrology recommends brightness uniformity as a measure of how consistently the brightness varies across the screen surface [15]. The specific method is to randomly select nine display modules across the full screen and display one primary color full-screen at the highest gray level and highest brightness. The brightness values are measured with a color analyzer, and the maximum over the three primary colors gives the brightness uniformity $L_{MJ}$ of the display module:
$$ L_{MJ} = \max(L_{JR}, L_{JG}, L_{JB}) \tag{16} $$
where $L_{JR}$, $L_{JG}$, and $L_{JB}$ are the brightness uniformities of the red, green, and blue primary colors, respectively, calculated as follows:
$$ L_J = \left(1 - \frac{\max\left|L_i - \bar{L}\right|}{\bar{L}}\right) \times 100\% \tag{17} $$
$$ \bar{L} = \frac{1}{9}\sum_{i=1}^{9} L_i \tag{18} $$
where $L_i$ is the brightness of display module $i$ in candela per square meter (cd/m²) and $\bar{L}$ is the arithmetic mean brightness of the nine display modules.
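A sketch of this uniformity metric for one primary color (the nine luminance readings are assumed to be available as an array):

```python
import numpy as np

def brightness_uniformity(L):
    """Brightness uniformity L_J of Eq. (17), in percent.

    L : nine luminance readings (cd/m^2), one per sampled display module
    """
    L = np.asarray(L, dtype=np.float64)
    L_bar = L.mean()                    # Eq. (18): arithmetic mean
    return (1.0 - np.abs(L - L_bar).max() / L_bar) * 100.0

# Eq. (16): the module's uniformity is the maximum over the three primaries
# L_MJ = max(brightness_uniformity(L_red), brightness_uniformity(L_green),
#            brightness_uniformity(L_blue))
```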
Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 show the brightness data of red, green, and blue primary colors before and after vignetting compensation respectively.
After camera vignetting compensation, brightness uniformity improved by up to 5.06%. The compensation method has been applied to actual LED display correction in pipelined operation; the display effect is remarkable, and the uniformity is greatly improved.
The actual display effects in Figure 10a,c,e,g show red, green, blue, and white after the initial brightness correction but without camera vignetting compensation; after single-cabinet correction there is a very obvious brightness discontinuity. Figure 10b,d,f,h show red, green, blue, and white after camera vignetting compensation, indicating that the spatial array compensation algorithm can significantly improve brightness uniformity and the display quality of the complete screen.
To demonstrate the advantages of the method presented in this paper, it was compared with the three most closely related studies (all by members of the project team), as shown in Table 8.
In the comparative analysis, Dr. Mao Xinyue introduced an adaptive acquisition error correction strategy grounded in shift difference [14], targeting camera vignetting rectification; through meticulous computation of the luminance data, it achieved a notable display uniformity enhancement of 3.61%. Nonetheless, it suffers from the repetitive acquisition errors inherent in time-sharing data collection. Wang Sichao put forth a spatially adaptive least-squares camera vignetting compensation framework [12,13]; analysis of pre- and post-compensation luminosity data reveals a 4.69% improvement in display consistency. Its practical implementation, however, requires meticulous alignment of the pan-tilt's rotational axis with the camera's imaging center, posing a stability concern for the correction efficacy. Tian Zhihui used low-order surface fitting [11] with a luminance meter to measure display uniformity, achieving an improvement of 5.47%. Although this value is slightly higher than the 5.06% improvement in this paper, the method has obvious limitations: the high cost of the luminance meter and the long measurement time restrict its wide application in engineering, and once the relative position between the camera and the LED cabinet changes, the accuracy of the low-order surface fit drops significantly, attenuating the uniformity improvement; its economy, timeliness, and robustness still need further optimization to meet actual engineering needs. By contrast, provided the camera configuration is constant and its position relative to the LED cabinet does not change, the compensation matrix calculated here can be applied to displays of various resolutions; once those conditions change, the compensation matrix is simply recalculated following the procedure described in this paper to preserve the compensation effect.
In summary, the method constructed in this study not only has outstanding performance in improving display uniformity but also has wide applicability and practical effectiveness, providing strong support for pipelined calibration operations of LED displays.

4. Conclusions

A camera vignetting compensation method based on an LED spatial array is proposed, which effectively solves the display non-uniformity caused by inaccurate parameter acquisition under camera vignetting distortion; the method is verified by experiments. It is easy to operate, eliminates the discreteness of the LED light-emitting dies, and quickly removes the brightness differences caused by camera vignetting, improving the overall uniformity of the display screen by 5.06%.

Author Contributions

Methodology, X.Z.; Formal analysis, X.M.; Resources, Y.C. (Yufeng Chen); Data curation, Y.C. (Yu Chen); Writing—original draft, S.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Major Science and Technology Special Projects of Jilin Province, grant number 20210301001GX.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

Authors Xifeng Zheng, Xinyue Mao and Yu Chen were employed by the company Changchun Cedar Electronics Technology Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Kordecki, A.; Palus, H.; Bal, A. Fast vignetting reduction method for digital still camera. In Proceedings of the 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 24–27 August 2015; pp. 1145–1150. [Google Scholar]
  2. Bal, A.; Palus, H. A Smooth Non-Iterative Local Polynomial (SNILP) Model of Image Vignetting. Sensors 2021, 21, 7086. [Google Scholar] [CrossRef] [PubMed]
  3. Zhang, Y.; Mao, J.; Wang, Y.; Caiping, L.; Zhang, H.; Tan, H.; Wu, H. An Efficient Optical Mura Compensation System for Large Liquid-Crystal Display Panels. IEEE Trans. Instrum. Meas. 2022, 71, 5023213. [Google Scholar] [CrossRef]
  4. Zhou, W.; He, J.; Peng, X. A high through-put image colorimeter for ultra-high-resolution micro-led panel inspection. In Proceedings of the 2021 International Conference on Optical Instruments and Technology: Optical Systems, Optoelectronic Instruments, Novel Display, and Imaging Technology, Online, 8–10 April 2022; SPIE: Bellingham, WA, USA, 2022; p. 122770S. [Google Scholar]
  5. Brady, M.; Legge, G.E. Camera calibration for natural image studies and vision research. J. Opt. Soc. Am. A 2008, 26, 30–42. [Google Scholar] [CrossRef] [PubMed]
  6. Sawchuk, A.A. Real-Time Correction of Intensity Nonlinearities in Imaging Systems. IEEE Trans. Comput. 1977, C-26, 34–39. [Google Scholar] [CrossRef]
  7. Yu, W. Practical anti-vignetting methods for digital cameras. IEEE Trans. Consum. Electron. 2004, 50, 975–983. [Google Scholar]
  8. Leong, F.J.W.M.; Brady, M.; McGee, J.O. Correction of uneven illumination (vignetting) in digital microscopy images. J. Clin. Pathol. 2003, 56, 619–621. [Google Scholar] [CrossRef]
  9. Ramsay, T.S. Edge-based vignetting factor estimation and correction comparative analysis. Imaging Sci. J. 2017, 65, 299–307. [Google Scholar] [CrossRef]
  10. Zhang, X.; Wang, R.; Chen, Y.; Wang, Y. Correction of vignetting captured by LED display camera. Opt. Precis. Eng. 2010, 18, 2332–2338. [Google Scholar]
  11. Tian, Z.; Miao, J.; Mao, X.; Cheng, H. Vignetting correction for camera acquisition of LED display. J. Lumin. 2016, 37, 1008–1013. [Google Scholar]
  12. Wang, S. Camera Vignetting Compensation Based on Spatial Adaptive Least Square Method. Master’s Thesis, University of Chinese Academy of Sciences, Beijing, China, 2020. [Google Scholar]
  13. Wang, S.; Zheng, X.; Mao, X.; Cheng, H.; Chen, Y. Vignetting compensation during camera acquisition of LED display. Chin. J. Liq. Cryst. Disp. 2019, 34, 778–786. [Google Scholar] [CrossRef]
  14. Mao, X. Research on Pixel-Level Accurate Acquisition and Correction Technology of Ultra-High Density LED Display. Ph.D. Thesis, University of Chinese Academy of Sciences, Beijing, China, 2021. [Google Scholar]
  15. ICDM Display Metrology Standard. Available online: https://www.icdm-sid.org/ (accessed on 22 April 2024).
Figure 1. Device schematic diagram.
Figure 2. Camera vignetting compensation flow chart.
Figure 3. Comparison of true gray values of red, green, and blue before and after correction.
Figure 4. The specific process of camera vignetting surface correction.
Figure 5. Schematic diagram of the LED spatial array acquisition method.
Figure 6. Comparison of red, green, and blue camera vignetting before and after correction.
Figure 7. Left edge brightness acquisition diagram.
Figure 8. Upper edge brightness acquisition diagram.
Figure 9. Blue primary color left-edge brightness test data (blue line: measured brightness; orange line: fitted brightness; gray line: brightness after edge correction).
Figure 10. Actual LED display renderings before and after camera vignetting compensation: (a,c,e,g) red, green, blue, and white without camera vignetting compensation; (b,d,f,h) red, green, blue, and white after camera vignetting compensation.
Table 1. Comparison of row and column gray differences of the three primary colors before and after compensation (pixel pitch: 1.19 mm).

                          Status                 Red       Green     Blue
Row gray difference       Before compensation    13.48%    8.04%     12.03%
                          After compensation     0.51%     0.20%     0.50%
Column gray difference    Before compensation    12.13%    3.64%     4.25%
                          After compensation     0.48%     0.29%     0.49%
Table 2. Red luminance of LED cabinet before vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         202.44    200.74    198.24    200.58    200.15    194.61    201.39    201.16
2         201.55    191.29    192.55    194.96    197.82    189.22    197.99    190.98
3         217.06    210.32    208.25    208.76    213.50    204.69    214.37    213.10
4         210.40    205.03    206.91    204.17    205.97    201.66    209.78    206.44
5         206.43    199.45    201.00    199.92    204.32    194.75    203.58    199.73
6         223.53    218.70    223.73    217.91    226.69    218.42    227.23    218.08
$L_{JR}$ = 89.60%
Table 3. Red luminance of LED cabinet after vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         203.94    204.99    199.00    203.57    205.26    205.29    206.28    209.27
2         208.47    207.21    201.48    203.53    205.92    205.38    204.90    208.86
3         209.54    209.67    202.03    205.78    205.96    205.80    204.37    212.93
4         211.28    211.32    207.40    210.20    209.80    209.56    212.18    214.10
5         213.92    212.09    207.37    210.69    210.52    211.06    210.60    214.97
6         216.74    217.50    212.68    218.24    215.29    217.05    216.72    220.60
$L_{JR}$ = 94.65%
Table 4. Green luminance of LED cabinet before vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         588.38    583.22    582.23    577.69    583.96    565.74    579.17    570.86
2         599.41    566.55    579.55    570.07    575.68    554.40    569.22    555.32
3         600.80    577.02    594.17    583.40    582.03    559.86    580.22    567.62
4         601.13    579.55    591.03    580.54    580.84    561.65    582.57    559.83
5         594.66    569.53    583.04    568.96    570.49    551.44    581.23    557.93
6         599.35    583.84    604.96    586.04    594.55    570.63    592.20    565.05
$L_{JG}$ = 95.36%
Table 5. Green luminance of LED cabinet after vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         564.44    574.42    562.51    572.40    571.33    576.65    568.87    578.13
2         578.41    572.44    566.18    571.89    577.07    574.42    571.01    572.85
3         569.76    577.23    566.55    583.01    567.79    576.46    568.31    578.32
4         569.74    574.47    567.87    573.72    573.78    572.34    573.01    570.37
5         576.91    578.64    568.30    571.75    573.49    573.60    577.32    574.26
6         567.98    585.24    575.46    586.31    576.59    587.23    576.33    581.82
$L_{JG}$ = 97.68%
Table 6. Blue luminance of LED cabinet before vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         106.36    104.33    106.22    103.30    104.54    101.51    106.59    100.70
2         106.41    101.06    105.70    101.20    103.31     97.68    103.44     97.95
3         106.05    102.91    106.37    104.41    103.03     99.84    103.40    100.39
4         106.41    104.29    107.50    103.63    104.69    101.61    105.51     99.70
5         104.94    100.38    105.32    101.51    102.16     97.68    103.51     98.26
6         105.01    101.54    105.20    102.61    104.02    100.30    103.00     99.04
$L_{JB}$ = 94.82%
Table 7. Blue luminance of LED cabinet after vignetting compensation (cd/m²).

Position       1         2         3         4         5         6         7         8
1         100.55    103.42    100.38    102.87    102.12    102.55    101.78    103.10
2         103.44    103.43    102.76    104.32    103.15    103.25    102.56    102.85
3         101.37    104.19    101.57    105.21    101.10    102.87    101.00    103.55
4         101.60    103.81    101.97    102.83    101.72    101.82    102.02    101.50
5         102.64    103.24    101.94    103.87    101.68    102.49    103.52    102.93
6         100.42    103.29    101.20    103.74    101.95    103.60    101.40    101.85
$L_{JB}$ = 97.36%
Table 8. Comparison of the method in this paper with those of other scholars.

Method            Display Uniformity Improved    Characteristic
Mao X [14]        3.61%                          Introduces acquisition repeatability error
Wang S [12,13]    4.69%                          Difficult to keep the pan-tilt rotation axis consistent with the camera imaging center
Tian Z [11]       5.47%                          Luminance meter is costly and time-consuming; fitting accuracy decreases when the relative position changes
This paper        5.06%                          Display uniformity significantly improved; universal across display resolutions and relative position changes; suitable for pipelined working environments
