Robust Parameter Design of Derivative Optimization Methods for Image Acquisition Using a Color Mixer

A tuning method was proposed for automatic lighting (auto-lighting) algorithms derived from the steepest descent and conjugate gradient methods. The auto-lighting algorithms maximize the image quality in industrial machine vision by adjusting multiple-color light-emitting diodes (LEDs), usually called color mixers. Image quality depends on finding the driving condition that achieves maximum sharpness. In most inspection systems, a single-color light source is used, and an equal step search (ESS) is employed to determine the maximum image quality. However, in the case of multiple color LEDs, the number of iterations becomes large and time-consuming. Hence, the steepest descent (STD) and conjugate gradient (CJG) methods were applied to reduce the search time for achieving maximum image quality. The relationship between lighting and image quality is multi-dimensional, non-linear, and difficult to describe using mathematical equations, so the Taguchi method is one of the few practical ways to determine the parameters of the auto-lighting algorithms. The algorithm parameters were determined using orthogonal arrays, and the candidate parameters were selected to increase the sharpness and decrease the number of iterations of the algorithm, on which the search time depends. The contribution of the parameters was investigated using ANOVA. After retests using the selected parameters, the image quality was almost the same as that with the best-case parameters, with a smaller number of iterations.


Introduction
The quality of images acquired from an industrial machine vision system determines the performance of the inspection process during manufacturing [1]. Image-based inspection using machine vision is currently widespread, and image quality is critical in automatic optical inspection [2]. Image quality is affected by focusing, which is usually automated, and by illumination, which is still a manual process. Illumination in machine vision has many factors, such as intensity, peak wavelength, bandwidth, light shape, irradiation angle, distance, uniformity, diffusion, and reflection. The active control factors in machine vision are intensity and peak wavelength; the other factors are usually invariable. Although image quality is sensitive to intensity and peak wavelength, the optimal combination of these factors may vary with the material of the target object [3]. Because the active control factors are currently changed manually, it is considerably labor intensive to adjust the illumination conditions of inspection machines during initial setup or a product change. However, the light intensity of a light-emitting diode (LED) can be easily adjusted by varying the electric current. A few articles have been written about auto-lighting by controlling the intensity of a single-color light source [4][5][6]. Single-color lighting based on fuzzy control logic has been applied to a robot manipulator [7]. The light intensity of a single-color source is mostly determined using an equal step search (ESS), which varies the intensity from minimum to maximum in small intervals.
Color mixers synthesize various colors of light from multiple LEDs. The LEDs are arranged in an optical direction on a back plane, and an optical mixer is attached to the light output [8]. The color is varied using a combination of light intensities, which can be adjusted using electric currents. Optical collimators are the most popular devices for combining the light from multiple LEDs [9][10][11]. These studies aim to achieve exact color generation, uniformity in a target plane, and thermal stability; they do not focus on image quality. Optimal illumination can increase the color contrast in machine vision [12]; hence, spectral approaches have been applied in bio-medical imaging [13,14].
When color mixers are applied to machine vision, the best color and intensity must be found manually. When an automatic search is applied using the ESS, the search time is long because of the vast number of light combinations. Thus, we have been studying fast optimization between color and image quality in industrial machine vision [15][16][17]. Because the above-mentioned studies were based on non-differential optimum methods, they were stably convergent but required multiple calls of a cost function per iteration, leading to a longer processing time. Derivative optimum search methods are well known, simple, and easy to implement [18]. The derivative optimum methods are less stable and more oscillatory, but usually faster [18,19]. In this study, an arbitrary number N of color sources and the image quality were considered for the steepest descent (STD) and conjugate gradient (CJG) methods. The optimum methods are composed of functions, variables, and coefficients that are difficult to determine for the inspection process. Algorithm parameters also affect the performance of image processing methods [20], and they can be determined using optimum methods. Thus, a tuning step is necessary to select the values of the coefficients when applying the methods to inspection machines. The relationship between the LED inputs and the image quality is complex, difficult to describe, and is effectively a black-box function. The coefficients are sensitive to the convergence, the number of iterations, and the oscillation, but the function is unknown. The Taguchi method is one of the most popular methods for determining optimal process parameters with a minimum number of experiments when the system is unknown, complex, and non-linear. The contribution of the process parameters can be investigated using ANOVA, and many applications have been proposed in machining processes [21,22]. In this study, the Taguchi method for robust parameter design was applied to tune the auto-lighting algorithm for achieving the fastest search time and the best image quality in the case of a mixed-color source.

Index for Image Quality
The conventional inspection system for color lighting comprises a mixed light source, an industrial camera, a framegrabber, a controller, and a light control board. Figure 1 shows a conceptual diagram of the color mixer and the machine vision system. The color mixer generates a mixed light and emits it toward a target object, and the camera acquires a digital image, which is a type of response to the mixed light. The digital image is analyzed to obtain the image properties (e.g., image quality) and to determine the intensity levels of the LEDs in the color mixer. The intensity levels are converted into voltage levels using a digital-to-analog converter (DAC) board. The electric current to drive the LEDs is generated by a current driver according to the voltage level. The color mixer and the machine vision thus form a feedback loop.
The image quality must be evaluated in order to use the optimum methods. Various image indices have been proposed in the literature; these are evaluated using pixel operations [23,24]. For instance, the brightness Ī is the average grey level of an m × n pixel image:

Ī = (1/(mn)) Σx Σy I(x, y) (1)

where I(x, y) is the grey level of the pixel at (x, y) and m × n is the size of the image. Image quality is one of the image indices and is usually estimated using sharpness, which indicates the deviation and difference of grey levels among pixels. There are dozens of definitions of sharpness; the standard deviation is widely used as the sharpness in machine vision [25]. Thus, the sharpness σ can be written as follows:

σ = sqrt( (1/(mn)) Σx Σy (I(x, y) − Ī)² ) (2)
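As a concrete illustration, the brightness and sharpness indices defined above can be computed in a few lines of pure Python. The toy image below is an illustrative 2-D list of grey levels, not data from the paper's apparatus:

```python
from statistics import mean, pstdev

def brightness(img):
    """Average grey level over an m x n image (Eq. (1))."""
    return mean(p for row in img for p in row)

def sharpness(img):
    """Population standard deviation of grey levels, the sharpness sigma (Eq. (2))."""
    return pstdev(p for row in img for p in row)

# A flat image has zero sharpness; adding an edge raises it.
flat = [[128] * 4 for _ in range(4)]
edgy = [[0, 0, 255, 255] for _ in range(4)]
print(brightness(flat))   # 128
print(sharpness(flat))    # 0.0
print(sharpness(edgy))    # 127.5
```

The population standard deviation (`pstdev`) rather than the sample one matches the 1/(mn) normalization in the definition above.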
Industrial machine vision usually operates in a dark room, so the image acquired by the camera depends entirely on the lighting system. The color mixer employed in this study uses multiple color LEDs with individual electric inputs. Because the inputs are all adjusted using voltage levels, the color mixer has a voltage input vector for N LEDs as follows:

V = (v1, v2, …, vN) (3)

As presented in the Introduction, the relationship between the LED inputs and the image quality involves electric, spectral, and optical responses. This procedure cannot easily be described using a mathematical model, and the relationship from (1) to (3) is a black-box function, which can be denoted as an arbitrary function f:

σ = f(V)

The best sharpness can be obtained by adjusting V; however, the optimal V is unknown. The maximum sharpness can be found using optimum methods, but a negative sharpness ρ must be defined because the optimum methods are designed to find a minimum. Hence, the negative sharpness is the cost function:

ρ(V) = −σ = −f(V)

The optimum methods have a general form of problem definition using the cost function as follows [17]:

minimize ρ(V) with respect to V

Derivative Optimum Methods
The steepest descent and conjugate gradient methods are representative derivative optimum methods; both involve the differential operation of the cost function, written as the gradient ∇ρ(V) = (∂ρ/∂v1, …, ∂ρ/∂vN).
The STD iterates its update equation until it finds a local minimum; the symbol k denotes the current iteration. The STD updates the current inputs Vk by adding the negative gradient to the current inputs:

Vk+1 = Vk − α ∇ρ(Vk)

In the STD, α is originally determined from ∂ρ(α)/∂α = 0 [18]; however, this is difficult to obtain using an experimental apparatus. In this study, α is therefore assumed to be a constant.
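A minimal sketch of the constant-step steepest descent loop follows. Because the true cost is a hardware black box, the gradient is estimated by forward differences; the quadratic `rho` below is a synthetic stand-in for the negative-sharpness cost, and all parameter values are illustrative, not the paper's tuned ones:

```python
def finite_diff_grad(f, v, h=0.01):
    # Forward-difference gradient estimate; on the real system each call to f
    # drives the LEDs at the given voltages and measures the sharpness.
    f0 = f(v)
    return [(f(v[:i] + [v[i] + h] + v[i + 1:]) - f0) / h for i in range(len(v))]

def steepest_descent(f, v0, alpha=0.1, eps1=1e-4, max_iter=200):
    # V_{k+1} = V_k - alpha * grad rho(V_k), with alpha held constant as in the text.
    v = list(v0)
    for k in range(max_iter):
        g = finite_diff_grad(f, v)
        if max(abs(gi) for gi in g) < eps1:   # terminal condition on the gradient
            return v, k
        v = [vi - alpha * gi for vi, gi in zip(v, g)]
    return v, max_iter

# Synthetic stand-in cost: minimum (best sharpness) at V = (0.5, 0.2, 0.8).
rho = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, (0.5, 0.2, 0.8)))
v_opt, iters = steepest_descent(rho, [0.0, 0.0, 0.0])
```

Each gradient estimate costs N + 1 cost-function calls (N + 1 image acquisitions), which is why the constant-α variant is preferred over a line search on real hardware.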
The CJG updates the current inputs in the same manner; the difference lies in the calculation of the update index ξ.
ξk usually has an unpredictably large or small value, which causes divergence or oscillation near the optimum. Consequently, boundary conditions are applied before updating the current inputs, where η is the convergence coefficient for a limited range and τ is the threshold. The updating of the inputs and the acquisition of the sharpness are iterated until the gradient becomes smaller than the terminal condition ε1, which indicates that the auto-lighting has found the maximum sharpness and the best image quality.
where ε1 is an infinitesimal value for the terminal condition. Because the cost function is acquired using hardware and the terminal condition involves differential values, the measured values are discrete and sensitive to noise; hence, an additional terminal condition, ε2, is applied as follows:

|ρk − ρk−1| < ε2 (13)
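The CJG loop and the two terminal conditions can be sketched in the same style. The exact ξ formula and the η/τ clamping rule cannot be recovered from the text alone, so this sketch uses the Fletcher-Reeves ratio with ξ clamped to [0, τ]; it is a plausible reading under stated assumptions, not the authors' implementation:

```python
def finite_diff_grad(f, v, h=0.01):
    # Forward-difference gradient estimate of the black-box cost.
    f0 = f(v)
    return [(f(v[:i] + [v[i] + h] + v[i + 1:]) - f0) / h for i in range(len(v))]

def conjugate_gradient(f, v0, alpha=0.1, tau=0.5, eps1=1e-4, eps2=1e-8, max_iter=200):
    v = list(v0)
    g = finite_diff_grad(f, v)
    d = [-gi for gi in g]
    rho_prev = f(v)
    for k in range(max_iter):
        v = [vi + alpha * di for vi, di in zip(v, d)]
        g_new = finite_diff_grad(f, v)
        rho_cur = f(v)
        # Terminal condition 1: small gradient. Terminal condition 2: the cost
        # change |rho_k - rho_{k-1}| falls below eps2 (Eq. (13)), guarding
        # against noisy, discrete hardware measurements.
        if max(abs(gi) for gi in g_new) < eps1 or abs(rho_cur - rho_prev) < eps2:
            return v, k
        # Fletcher-Reeves ratio (an assumed choice), clamped to [0, tau] to
        # avoid the divergence or oscillation caused by unpredictable xi values.
        xi = sum(a * a for a in g_new) / max(sum(b * b for b in g), 1e-12)
        xi = min(max(xi, 0.0), tau)
        d = [-gn + xi * di for gn, di in zip(g_new, d)]
        g, rho_prev = g_new, rho_cur
    return v, max_iter

# Same synthetic stand-in cost as for the STD sketch.
rho = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, (0.5, 0.2, 0.8)))
v_opt, iters = conjugate_gradient(rho, [0.0, 0.0, 0.0])
```

The clamp on ξ is what keeps the search direction from blowing up near the optimum, at the cost of degrading toward plain steepest descent when ξ would have been large.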

System for Experiment
The sharpness and derivative methods were applied to a test system constructed in our previous study [6]. The test system was composed of a 4 M pixel camera (SVS-4021, SVS-VISTEK, Seefeld, Germany), a coaxial lens (COAX, Edmund Optics, Barrington, NJ, USA), a framegrabber (SOL6M, Matrox, Dorval, QC, Canada), a multi-channel DAC board (NI-6722, NI, Austin, TX, USA), and an RGB mixing light source. Commercial integrated circuits (ICs) of EPROMs were used as sample targets A (EP910JC35, ALTERA, San Jose, CA, USA) and B (Z86E3012KSES, ZILOG, Milpitas, CA, USA), as shown in Figure 2. The camera and the ICs were fixed on the Z and XYR axes, respectively. The coaxial lens was attached to the camera and faced the ICs. Optical fiber from the RGB source was connected to the coaxial lens and illuminated the ICs. Images of the ICs were acquired and transferred to a PC through a CAMERALINK port on the framegrabber. The operating software was constructed using a development tool (Visual Studio 2008, Microsoft, Redmond, WA, USA) and a vision library (MIL 8.0, Matrox, Dorval, QC, Canada). The location of the ICs in an image was adjusted using the XYR axes after focusing was performed using the Z axis. The inputs of the RGB source were connected to the DAC board, and the light color and intensity were adjusted through the board. The STD and CJG for the optimum light condition were implemented in the software.

Taguchi Method
The Taguchi method is commonly used to tune algorithm parameters and optical designs in machine vision [26][27][28]. A neural network is a massive and complex numerical model, and derivative optimum methods are frequently applied to train its parameters [29,30]. The Taguchi method is also useful for finding the learning parameters of a neural network and increasing learning efficiency in a machine vision system [31]. Considering the non-linear, multi-dimensional, black-box function in this study, we expected that the Taguchi method would be useful in tuning the auto-lighting algorithm. The performance of the algorithm was evaluated mainly using the minimum number of iterations and the maximum sharpness. Hence, the "smaller the better" concept was applied to the number of iterations and the "larger the better" concept was applied to the sharpness when calculating the signal-to-noise (SN) ratio. These SN ratios can be obtained using the following equations [32,33]:

SN (smaller the better) = −10 log10( (1/w) Σj uj² )

SN (larger the better) = −10 log10( (1/w) Σj (1/uj²) )

where uj is the performance index (e.g., sharpness or the number of iterations) and w is the number of experiments.
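The two SN ratio formulas are the standard Taguchi forms and translate directly into code. The response values below are made up for illustration, not the paper's measurements:

```python
from math import log10

def sn_smaller_the_better(u):
    # SN = -10 * log10( (1/w) * sum(u_j^2) ); applied to the iteration count.
    w = len(u)
    return -10 * log10(sum(x * x for x in u) / w)

def sn_larger_the_better(u):
    # SN = -10 * log10( (1/w) * sum(1 / u_j^2) ); applied to the sharpness.
    w = len(u)
    return -10 * log10(sum(1 / (x * x) for x in u) / w)

iterations = [37, 19]           # smaller is better: fewer LED adjustments
sharpnesses = [390.07, 357.97]  # larger is better: crisper image
print(sn_smaller_the_better(iterations))
print(sn_larger_the_better(sharpnesses))
```

Both ratios are expressed in decibels, so maximizing the SN ratio is the selection criterion in either case.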

Experiment Design
The selected parameters were the initial voltages of the red, green, and blue (RGB) LEDs, V = (vR0, vG0, vB0), the convergence constant η, and the threshold τ. Because the maximum sharpness usually occurs in low-voltage regions under a single-light condition, the range of each initial voltage was less than half of the full voltage. The ranges of η and τ were between 0.0 and 1.0. These five factors were chosen as the control factors. Because each range was divided into five intervals, the number of levels was set to five. Therefore, the L25(5^5) model was organized using five control factors and five levels, as shown in Table 1. The number of experimental combinations is 25, which is quite small considering the multiple color sources and the algorithm parameters. Two sample targets were used for the experiments, as described above.
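An L25(5^5) array can be constructed from mutually orthogonal Latin squares over the integers mod 5. The sketch below builds such an array and maps its level indices to hypothetical parameter values (the actual level values are those of Table 1 and are not reproduced here). The orthogonality property, every pair of columns covering all 25 level pairs exactly once, is what keeps the design at 25 runs instead of 5^5 = 3125:

```python
from itertools import product

def l25():
    # Columns (i, j, i+j, i+2j, i+3j) mod 5 form an L25(5^5) orthogonal array:
    # any two columns contain each of the 25 ordered level pairs exactly once.
    return [(i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5)
            for i, j in product(range(5), range(5))]

rows = l25()

# Map level indices 0..4 to hypothetical control-factor values: initial RGB
# voltages below half scale, then eta and tau in (0, 1].
levels = {
    "vR0": [0.0, 0.3, 0.6, 0.9, 1.2],
    "vG0": [0.0, 0.3, 0.6, 0.9, 1.2],
    "vB0": [0.0, 0.3, 0.6, 0.9, 1.2],
    "eta": [0.2, 0.4, 0.6, 0.8, 1.0],
    "tau": [0.2, 0.4, 0.6, 0.8, 1.0],
}
names = list(levels)
experiments = [{n: levels[n][lv] for n, lv in zip(names, row)} for row in rows]
```

Each entry of `experiments` is then one run of the auto-lighting algorithm, whose sharpness and iteration count feed the SN ratio calculations.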

Results
The maximum sharpness found using the ESS was σmax = 392.76 at V = (0, 0, 1.2) for Pattern A, and σmax = 358.87 at V = (1.0, 0, 0) for Pattern B. The total number of step combinations for RGB was 50³ = 125,000. The L25(5^5) orthogonal arrays for the steepest descent and conjugate gradient methods were constructed as shown in Tables 2 and 3. σmax, kmax, VR, VG, and VB were the optimal statuses found by the steepest descent method using the selected parameters. Some combinations showed almost the same sharpness as the exact solution, some reached the maximum after several steps, and some failed to converge. These facts show that parameter selection for a derivative optimum method is important for stability. The SN ratios were calculated using MINITAB for the mathematical operations of the Taguchi analysis. Figures 3-6 present the results of the Taguchi analysis and show the trends of the control factors. The variation in the sharpness was very small, whereas the variation in the number of iterations was larger, which implies that the parameters were more sensitive to the iteration count. However, the trends of the sharpness and the number of iterations were inverse. Sharpness is more important than the number of iterations because the inspection in a manufacturing process must be accurate. Hence, we chose the initial voltage based on the sharpness, and τ and η based on the number of iterations. Retest combinations of the STD were determined from the figures: A3B2C5D2E5 and A5B1C1D1E1 for Patterns A and B, respectively. However, when the terminal condition is tightly given, a result similar to the ESS can be obtained with 74 iterations. The retest results using A5B3C1D5E4 were σmax = 357.09, V = (1.02, 0.02, 0.00), and 37 iterations. The contributions of the parameters in the STD were evaluated using ANOVA, as shown in Tables 4 and 5. The ANOVA results were obtained using the general linear model in MINITAB. η was the most significant factor for Pattern A, but the initial point was significant for Pattern B.
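The "trends of the control factors" in Figures 3-6 amount to a main-effects analysis: for each factor, the SN ratios are averaged over the orthogonal-array rows at each level, and the level with the best mean SN is chosen. A sketch with synthetic SN values (MINITAB performs the equivalent computation on the real data):

```python
from itertools import product

# Rebuild the L25(5^5) index array (mod-5 linear-combination columns).
rows = [(i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5)
        for i, j in product(range(5), range(5))]

# Synthetic SN response per experiment, one value per row (illustrative only).
sn = [float(sum(r)) for r in rows]

def main_effects(rows, sn, factor):
    """Mean SN ratio at each of the five levels of the given factor column."""
    out = []
    for level in range(5):
        vals = [s for r, s in zip(rows, sn) if r[factor] == level]
        out.append(sum(vals) / len(vals))
    return out

# For each factor, pick the level whose mean SN is highest.
best_level = max(range(5), key=lambda lv: main_effects(rows, sn, 0)[lv])
```

Applying this per factor to the sharpness SN and the iteration SN separately is what produces the inverse trends noted above, and hence the mixed choice of levels for the retest combinations.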
Tables 6 and 7 show the contributions of the parameters in the CJG. η was the most significant factor for both the sharpness and the number of iterations, and the initial point was also significant for the sharpness. Hence, the convergence constant η is the most important factor, and the initial point is the second most important, in finding the optimum color lighting; τ was a minor factor in the experiments. The sharpness values in the retests were almost the same as those observed with the best-case parameters. However, the number of iterations was relatively small compared to the average number of iterations, and even smaller than the number required by the ESS. One result had almost the same sharpness as the exact solution at a different voltage. The retest results show that the Taguchi method provides useful parameters with a small number of experiments. Although the maximum sharpness determined by the proposed methods was slightly lower than that determined by the ESS, the number of iterations was much smaller. Therefore, the proposed auto-lighting algorithm can reduce the number of iterations while the image quality remains almost the same. Furthermore, the Taguchi method can reduce laborious tasks and the setup time for the inspection process in manufacturing.

Conclusions
A tuning method was proposed for the auto-lighting algorithm using the Taguchi method. The algorithm maximizes the image quality by adjusting multiple light sources in the shortest time, thus providing a function called auto-lighting. The image quality is defined as the sharpness, i.e., the standard deviation of the grey levels of the pixels in an inspected image. The best image quality was found by minimizing the negative sharpness using two derivative optimum methods, the steepest descent (STD) and conjugate gradient (CJG) methods, which were modified for the auto-lighting algorithm.
The Taguchi method was applied to determine the algorithm parameters, namely the initial voltages, the convergence constant, and the threshold. The L25(5^5) orthogonal array was constructed considering five control factors and five levels over the parameter ranges. The SN ratio of the sharpness was calculated using "the larger the better", and that of the number of iterations was calculated using "the smaller the better". The desired combinations were determined after the Taguchi analysis using the orthogonal array. A retest was conducted using the desired combinations, and the results showed that the Taguchi method provides useful parameter values, with performance almost equal to that of the best-case parameters. The Taguchi method will be useful in reducing the tasks and time required to set up the inspection process in manufacturing.

Figure 1. System diagram for color mixing and automatic lighting.

Figure 2. Target patterns acquired by maximum sharpness: (a) Pattern A; (b) Pattern B.
A3B3C3D5E5 and A5B3C1D5E4 were selected for Patterns A and B, respectively, in the case of the CJG. The retest results using A3B2C5D2E5 were σmax = 390.07, V = (0.00, 0.00, 1.09), and 19 iterations. The combination A5B1C1D1E1 gave σmax = 357.97, V = (1.02, 0.00, 0.00), and 37 iterations. The retest results using A3B3C3D5E5 were σmax = 383.73, V = (0.31, 0.30, 0.30), and 16 iterations. This value is 2% lower than the ESS result, and the coordinate is far from the ESS coordinate, indicating a different local minimum compared with the ESS results.

Figures 7 and 8 show the convergence of the maximum sharpness using the STD and CJG methods. In the figures, VR, VG, and VB are mapped virtually in Cartesian coordinates. The starting point is shown in blue, and the color varies during the iterations.

Table 1. Control factors and levels for derivative optimum methods.

Table 2. Orthogonal array of steepest descent method for Patterns A and B.

Table 3. Orthogonal array of conjugate gradient method for Patterns A and B.

Table 4. ANOVA of Pattern A for contribution of steepest descent method.

Table 5. ANOVA of Pattern B for contribution of steepest descent method.

Table 6. ANOVA of Pattern A for contribution of conjugate gradient method.

Table 7. ANOVA of Pattern B for contribution of conjugate gradient method.