Image Intelligent Detection Based on the Gabor Wavelet and the Neural Network

This paper first analyzes the one-dimensional Gabor function and expands it to a two-dimensional one. The two-dimensional Gabor function generates the two-dimensional Gabor wavelet through scale stretching and rotation. The two-dimensional Gabor wavelet transform is then employed to extract image feature information. Based on the back propagation (BP) neural network model, an image intelligent detection model combining the Gabor wavelet and the neural network is built. Human face image detection is adopted as an example. Results suggest that, although the images of the AT&T face database contain complex textures and illumination variations, the detection accuracy rate of the proposed method reaches above 0.93. In addition, extensive simulations based on the Yale and extended Yale B datasets further verify the effectiveness of the proposed method.


Introduction
Wavelet theory has become increasingly popular and has developed quickly since its appearance in the 1980s [1]. Scholars generally regard the wavelet transform as a breakthrough beyond the Fourier transform. In addition, the Gabor filter shows strong robustness towards the luminance and contrast of images and towards human facial expressions, and it reflects the local features most helpful for human face recognition [2,3], so the Gabor wavelet has been widely applied to the extraction of human face features. Currently, scholars have deepened their study of neural network theories. The artificial neural network [4,5] is an intelligent system that simulates the information processing of the human brain nerve system, a new type of structural computing system built on a preliminary understanding of the organization and activity mechanism of the human brain. Since it can simulate the human brain nerve system and endow machines with the perception, learning, and deduction capabilities of the human brain, it has been widely applied to pattern recognition in various fields.
However, how to combine the neural network with nonlinear theories, such as wavelet theory, fuzzy sets, and chaos theory, is a new research direction [6,7]. The neural network offers a collection of favorable characteristics, including fault tolerance, self-adaptation, self-learning, generalization capability, and robustness, while the wavelet transform has temporal-frequency localization and zooming characteristics. The Gabor wavelet transform can therefore be employed to reduce the number of input nodes in the neural network and increase the convergence speed on the one hand, and to sufficiently and efficiently express human face characteristics and improve the recognition capability of the neural network on the other. How to combine the advantages of the two and apply them to the human face recognition technique has thus been an issue of great concern to experts and scholars. The work in [8] proposed a method for detecting facial regions by combining a Gabor filter and a convolutional neural network and obtained a detection rate of 87.5%. Kaushal et al. [9] used a feature vector based on Gabor filters as the input of a feed-forward neural network (FFNN). A similar work presented in [10], implemented in a Java environment, aimed at object localization and classification.
Therefore, the research focus of this paper is image feature extraction based on the Gabor wavelet transform. Combining this with the intelligent recognition of the back propagation (BP) neural network, this paper puts forward an image intelligent detection model based on the Gabor wavelet transform and the neural network. Human face image detection is taken as an example. First, the model is tested on the human face database "Yale" [11], which contains illumination variation and complex texture. Then, using the human face database "AT&T" [12], the detection accuracy rate of the model is given to evaluate the detection performance. Finally, based on the human face database "extended Yale B" [13], the effectiveness of the method is proved by a performance comparison between our proposal and several state-of-the-art methods.

Gabor Wavelet Theory and Feature Transformation
In order to introduce the Gabor wavelet and apply it to image feature extraction, this paper first presents the analysis and derivation of the one-dimensional Gabor wavelet, and then introduces the two-dimensional Gabor wavelet on that basis.
Among them, the one-dimensional Gabor wavelet [14] is constituted by a trigonometric function multiplied by a Gaussian function, as shown in Equation (1):

$$g_{\omega,t_0}(t) = e^{-\frac{(t-t_0)^2}{2\sigma^2}}\, e^{-j\omega t} \qquad (1)$$

Integrating the product of Equation (1) and the signal yields the one-dimensional Gabor wavelet transform:

$$G_x(\omega, t_0) = \int_{-\infty}^{+\infty} x(t)\, e^{-\frac{(t-t_0)^2}{2\sigma^2}}\, e^{-j\omega t}\, dt \qquad (2)$$

The left side of Equation (2) stands for the frequency information of the signal, $x(t)$, when the frequency is $\omega$ and the time is $t_0$. Substituting Equation (1) into Equation (2) and applying Euler's formula expands it into Equation (3):

$$G_x(\omega, t_0) = \int_{-\infty}^{+\infty} x(t)\, e^{-\frac{(t-t_0)^2}{2\sigma^2}} \left[\cos(\omega t) - j\sin(\omega t)\right] dt \qquad (3)$$

In complex form, the real part of Equation (3) is the cosine (even) component and the imaginary part is the sine (odd) component.
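For illustration only (this is our sketch, not code from the paper), the integral in Equation (2) can be approximated for a sampled signal by discretizing it; the Gaussian width `sigma`, the test signal, and all names below are assumptions:

```python
import numpy as np

def gabor_transform_1d(x, t, omega, t0, sigma=1.0):
    """Discrete approximation of Eq. (2): inner product of the signal
    with a Gaussian-windowed complex exponential centered at t0."""
    window = np.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))
    kernel = window * np.exp(-1j * omega * t)
    dt = t[1] - t[0]
    return np.sum(x * kernel) * dt  # numerical integration

# Usage: a pure tone at 5 rad/s responds most strongly near omega = 5.
t = np.linspace(-10, 10, 2001)
x = np.cos(5.0 * t)
resp_on = abs(gabor_transform_1d(x, t, omega=5.0, t0=0.0))
resp_off = abs(gabor_transform_1d(x, t, omega=12.0, t0=0.0))
assert resp_on > resp_off
```

The magnitude of the transform thus localizes the frequency content of the signal around a chosen time, which is the property the two-dimensional extension below inherits.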
The two-dimensional Gabor wavelet can be generated by expanding the one-dimensional Gabor function into a two-dimensional one, followed by scale stretching and rotation [15]. The two-dimensional Gabor wavelet can acquire image information at any scale and any orientation. From the one-dimensional Gabor wavelet function, it can be seen that the two-dimensional Gabor wavelet function is unique and can be adopted as the primary function for image extraction and analysis. In other words, descriptive completeness of images in both the space and the frequency domain can be achieved. The wavelet transform reflects a relatively intuitive concept: when the textures are relatively fine, the sampling scope in the space domain is relatively small, while the sampling scope in the corresponding frequency domain is relatively large. Conversely, when the textures are relatively coarse, the sampling scope in the space domain is relatively large and the sampling scope in the frequency domain is relatively small. Therefore, the two-dimensional Gabor wavelet can capture features including the selectivity of spatial position, orientation, and spatial frequency, as well as the quadrature phase relationship.
The two-dimensional Gabor wavelet kernel [16] is defined in Equation (4):

$$G_{u,v}(z) = \frac{\|k_{u,v}\|^2}{\sigma^2}\, e^{-\frac{\|k_{u,v}\|^2 \|z\|^2}{2\sigma^2}} \left[ e^{\,i\, k_{u,v} \cdot z} - e^{-\frac{\sigma^2}{2}} \right] \qquad (4)$$

where u and v stand for orientation and scale, respectively; z stands for the coordinate point of the given position; the factor $\frac{\|k_{u,v}\|^2}{\sigma^2}$ compensates for the weakening of the energy spectrum; $e^{-\frac{\|k_{u,v}\|^2 \|z\|^2}{2\sigma^2}}$ stands for the Gaussian envelope function; $e^{\,i\, k_{u,v} \cdot z}$ stands for the vibration function, the real part of which is the cosine function and the imaginary part of which is the sine function; $e^{-\frac{\sigma^2}{2}}$ stands for the DC component; σ stands for the size of the two-dimensional Gabor wavelet, namely the radius of the Gaussian function; and $k_{u,v}$ stands for the central frequency of the filter, describing the response of the Gabor filter at different orientations and scales. Therefore, as $k_{u,v}$ varies, a group of Gabor filters can be obtained. The real and imaginary parts of Gabor filters based on five frequencies (0.2, 0.22, 0.24, 0.26, and 0.28) and eight orientations (0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315°) are shown in Figure 1 below.

The wavelet transform has the following advantages [17] when applied to image processing: (1) wavelet decomposition can cover the whole frequency domain; (2) by choosing a proper filter, the wavelet filter can largely reduce or even remove the correlation between the different extracted features; (3) the wavelet transform has a "zooming" characteristic and can adopt a wide analysis window in the low-frequency section and a narrow analysis window in the high-frequency section.
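A filter bank like the one in Figure 1 can be generated directly from Equation (4); the sketch below is our illustration, and the kernel size and the value σ = π are assumptions, not values from the paper:

```python
import numpy as np

def gabor_kernel(freq, theta, sigma=np.pi, size=31):
    """2D Gabor kernel of Eq. (4): Gaussian envelope times a complex
    plane wave, with the DC component exp(-sigma^2/2) subtracted."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Wave vector k_{u,v}: magnitude = centre frequency, angle = orientation.
    kx, ky = freq * np.cos(theta), freq * np.sin(theta)
    ksq = freq ** 2
    envelope = (ksq / sigma ** 2) * np.exp(-ksq * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2.0)
    return envelope * carrier

# Bank of 5 frequencies x 8 orientations, as in Figure 1.
freqs = [0.2, 0.22, 0.24, 0.26, 0.28]
thetas = [n * np.pi / 4 for n in range(8)]
bank = [gabor_kernel(f, th) for f in freqs for th in thetas]
assert len(bank) == 40 and bank[0].shape == (31, 31)
```

The real part of each kernel gives the even (cosine) template and the imaginary part the odd (sine) template, matching panels (a) and (b) of Figure 1.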
Therefore, in the image feature extraction process, Gabor feature extraction amounts to convolving the input image with the Gabor wavelet described in Equation (4). Assume that the input image grey scale is $I(x, y)$; the convolution between I and the Gabor kernel, $G_{u,v}$, is shown in Equation (5) below:

$$O_{u,v}(x, y) = I(x, y) * G_{u,v}(x, y) \qquad (5)$$

where * stands for the convolution operator, and $O_{u,v}(x, y)$ stands for the convolution image at orientation u and scale v.

The neural network is a highly nonlinear system. For different functions and research purposes, there are different neural network models. The BP neural network is a feedforward network that adopts the error back propagation algorithm as its learning algorithm [18]. It is mainly constituted of the input layer, the output layer, and the hidden layer. The nerve cells between layers adopt a fully interconnected style and build connections through the corresponding network weight coefficients, w; there are no connections between nerve cells within a layer. The basic idea of the BP algorithm is that the learning process is made up of two phases, namely signal forward-propagation and error backward-propagation. Figure 2 shows the specific structure.
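Equation (5) is an ordinary two-dimensional convolution applied once per kernel. A minimal sketch follows (our illustration; the use of scipy's FFT convolution, the inline kernel, and the dummy image are all assumptions):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma=np.pi, size=31):
    """Complex 2D Gabor kernel in the form of Eq. (4)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kx, ky = freq * np.cos(theta), freq * np.sin(theta)
    env = (freq ** 2 / sigma ** 2) * np.exp(-freq ** 2 * (x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * (np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2))

def gabor_features(image, freqs, thetas):
    """Eq. (5): O_{u,v} = I * G_{u,v} for every scale/orientation pair.
    The magnitude of each response map serves as the feature."""
    return np.stack([np.abs(fftconvolve(image, gabor_kernel(f, th), mode="same"))
                     for f in freqs for th in thetas])

img = np.random.rand(30, 25)            # a 25 (wide) x 30 (tall) grayscale sample
feats = gabor_features(img, [0.2, 0.24, 0.28], [0, np.pi / 4])
assert feats.shape == (6, 30, 25)
```

Stacking the magnitude maps yields one feature volume per image, which can then be flattened and fed to the input layer of the network described next.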

Back Propagation Neural Network Model Algorithm Steps
When the signal enters the BP neural network through signal forward-propagation, the input samples enter through the input layer and are transmitted to the output layer after processing in the hidden layer. If the practical output of the output layer fails to coincide with the expected output, the network moves into the error backward-propagation phase. The essence of signal forward-propagation together with error backward-propagation is an iterative network process, during which the weight values keep adjusting. The process continues until the network output error is reduced below the set error value or until the pre-set number of iterations is reached. Thus, it can be seen that the input-output relationship of the BP neural network is a highly nonlinear system featuring multiple inputs and multiple outputs, which is applicable to prediction and recognition process systems.
According to the weight values between the input nodes and the hidden nodes, and the weight values between the hidden nodes and the output nodes, the iterative relationships between the nodes of the various layers are shown below.

(1) Signal forward-propagation process:

The input, $net_i$, of the i-th node in the hidden layer:

$$net_i = \sum_{j=1}^{M} w_{ij} x_j - \theta_i \qquad (6)$$

where $x_j$ stands for the input of the j-th node in the input layer (j = 1, . . ., M); $w_{ij}$ stands for the weight value from the i-th node in the hidden layer to the j-th node in the input layer; $\theta_i$ stands for the threshold of the i-th node in the hidden layer; $\varphi(x)$ stands for the excitation function of the hidden layer; $w_{ki}$ stands for the weight value from the k-th node in the output layer to the i-th node in the hidden layer (i = 1, . . ., q); $a_k$ stands for the threshold of the k-th node in the output layer (k = 1, . . ., L); $\psi(x)$ stands for the excitation function of the output layer; and $o_k$ stands for the output of the k-th node in the output layer.
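The forward pass through these layers can be sketched as follows (our illustration; the sigmoid choice for the excitation functions φ and ψ is an assumption, since the paper does not fix them):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_ih, theta, W_ho, a):
    """Hidden net input, hidden output, output net input, and network
    output for one input vector x of length M."""
    net_hidden = W_ih @ x - theta        # net_i = sum_j w_ij x_j - theta_i
    y = sigmoid(net_hidden)              # y_i = phi(net_i)
    net_out = W_ho @ y - a               # net_k = sum_i w_ki y_i - a_k
    return sigmoid(net_out)              # o_k = psi(net_k)

rng = np.random.default_rng(0)
M, q, L = 100, 20, 1                     # input, hidden, output sizes
o = forward(rng.standard_normal(M),
            rng.standard_normal((q, M)), rng.standard_normal(q),
            rng.standard_normal((L, q)), rng.standard_normal(L))
assert o.shape == (L,) and np.all((o > 0) & (o < 1))
```

With a sigmoid output, each $o_k$ lies in (0, 1), which is convenient for the face/non-face decision made later with a single output node.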

The output, $y_i$, of the i-th node in the hidden layer:

$$y_i = \varphi(net_i) \qquad (7)$$

The input, $net_k$, of the k-th node in the output layer:

$$net_k = \sum_{i=1}^{q} w_{ki} y_i - a_k \qquad (8)$$

The output, $o_k$, of the k-th node in the output layer:

$$o_k = \psi(net_k) \qquad (9)$$

(2) Error backward-propagation process:

Error backward-propagation starts from the output layer and calculates the output error of the nerve cells in each layer step by step. Then, the weight values and thresholds of the various layers are adjusted by error gradient descent so that the final modified network output approximates the expected value. The quadratic error criterion function of every sample, $E_P$, is shown in Equation (10):

$$E_P = \frac{1}{2} \sum_{k=1}^{L} (T_k - o_k)^2 \qquad (10)$$

where $T_k$ stands for the expected output of the k-th output node. All in all, the major idea of the BP neural network is to modify the thresholds and weight values so that the error function descends along the gradient direction. The input layer obtains the practical output by processing the input information through the hidden layer. If the practical output is not in conformity with the sample output, the error is sent back layer by layer, and the weight values of every layer are modified according to the learning rules regulated by the algorithm. Through repetition of this step, convergence or homeostasis can be achieved. In other words, the step continues until the total error between the practical output and the target output reaches the required minimum.
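One gradient-descent update on $E_P$ for a one-hidden-layer sigmoid network can be sketched compactly (our illustration; the learning rate, the data, and the omission of thresholds are simplifying assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, W1, W2, lr=0.5):
    """One gradient-descent step on E_P = 1/2 * sum_k (T_k - o_k)^2
    for a one-hidden-layer sigmoid network (thresholds omitted)."""
    y = sigmoid(W1 @ x)                  # hidden output
    o = sigmoid(W2 @ y)                  # network output
    delta_o = (o - target) * o * (1 - o)           # output-layer error term
    delta_h = (W2.T @ delta_o) * y * (1 - y)       # back-propagated error
    W2 -= lr * np.outer(delta_o, y)                # descend the gradient
    W1 -= lr * np.outer(delta_h, x)
    return 0.5 * np.sum((target - o) ** 2)         # E_P

rng = np.random.default_rng(1)
x, t = rng.random(8), np.array([1.0])
W1 = rng.standard_normal((4, 8)) * 0.5
W2 = rng.standard_normal((1, 4)) * 0.5
errors = [train_step(x, t, W1, W2) for _ in range(200)]
assert errors[-1] < errors[0]            # the error descends along the gradient
```

The two delta terms correspond to the layer-by-layer error propagation described above: the output error is computed first, then folded back through the hidden layer via the transposed weights.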
The schematic diagram of the established BP neural network model structure is shown in Figure 3 below.
The key parameters of the network training settings and the network convergence results are shown in Table 1 below. The number of neurons in the first layer of the network is 100. The number of neurons in the output layer, also called the last layer, depends on the needs of the practical application. Since we only need to distinguish between faces and non-faces, one neuron is sufficient, so the number of nodes in the output layer is 1. In this network, there is no hidden layer between the input layer and the output layer, which means the second layer is the output layer. Through experience, we have found that the training function "trainscg", which is suitable for nonlinear studies, has a clear advantage in terms of memory consumption.

A sample of the dynamic error changes during the network training process is shown in Figure 4 below. The mean squared error (MSE) is used as the performance function, namely the network target error. In this network, we set 1 × 10⁻⁵ as the target error, which means that training stops when the MSE value reaches 1 × 10⁻⁵. The maximum number of epochs (also called training times) is set to 400, after which the training phase stops. Experimentally, we realized that the larger this number, the faster the training will be.
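The stopping rules above (target MSE of 1 × 10⁻⁵, at most 400 epochs) amount to a simple loop; a hedged sketch follows, with the actual update step abstracted behind a callback (the names and the dummy step are our assumptions):

```python
def train(update_step, max_epochs=400, goal=1e-5):
    """Run training until the MSE performance goal or the epoch cap is
    reached, mirroring the stopping rules described above."""
    for epoch in range(1, max_epochs + 1):
        mse = update_step()
        if mse <= goal:
            return epoch, mse            # converged on the target error
    return max_epochs, mse               # stopped by the epoch limit

# Usage with a dummy step whose error halves on each call.
state = {"mse": 1.0}
def dummy_step():
    state["mse"] *= 0.5
    return state["mse"]

epochs_run, final_mse = train(dummy_step)
assert final_mse <= 1e-5 and epochs_run < 400
```

Whichever criterion fires first ends training, so the epoch cap acts as a safeguard when the target error is never reached.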
The BP neural network human face detection steps based on the Gabor feature extraction are as follows: (1) Convolve the image to be recognized with the standard template image to improve resistance against image luminosity variation. The standard template image is shown in Figure 5 below:

Results and Discussion
Performance evaluation is one of the most important aspects of face recognition applications. To verify the effectiveness and stability of the proposed method, experiments were conducted on several public face databases, including Yale, AT&T, and extended Yale B, whose images contain different poses, different expressions, and various illumination conditions. Finally, the proposed method is compared with several other state-of-the-art methods.

Experiments and Analysis on the Yale Face Database
To verify the performance, we experiment on the Yale face database, consisting of 165 grayscale images (137 × 147) with variations such as facial expressions, luminance changes, and configuration; it includes 15 individuals with 11 images per individual. A preview image of the database is shown as Figure 6.

In the experiments, all images are converted, cropped, and down-sampled to 25 × 30 pixels in grayscale. The training set consists of a subset of the images of each subject from the database, and the remaining images form the testing set. The images with facial expressions, luminance changes, and configuration changes each underwent the recognition test. From the experimental results, it can be seen that the neural network based on Gabor feature extraction can accurately recognize the human face. The test results are shown in Figure 7 below.
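The preprocessing described above (grayscale conversion, cropping, and down-sampling to 25 × 30) can be sketched with Pillow (our illustration; the library choice and the crop box are assumptions, not details from the paper):

```python
from PIL import Image

def preprocess(path, target=(25, 30), crop_box=None):
    """Convert to grayscale, optionally crop, and down-sample to the
    25 x 30 size used in the experiments above."""
    img = Image.open(path).convert("L")      # 8-bit grayscale
    if crop_box is not None:                 # (left, upper, right, lower)
        img = img.crop(crop_box)
    return img.resize(target, Image.BILINEAR)

# Usage (hypothetical file name):
# small = preprocess("yale_s1_01.gif", crop_box=(6, 16, 86, 112))
# small.size == (25, 30)
```

Down-sampling to a fixed size yields a constant-length feature vector per image, matching the fixed number of input nodes in the network.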

Experiments and Analysis on the AT&T Database
Nowadays, the performance of a face recognition system is evaluated by various metrics, among which the recognition rate is the most commonly used. In order to comprehensively analyze the recognition accuracy rate of the proposed method, it was tested on the publicly available AT&T database. This database contains 40 different persons, and every person has 10 different human face images (92 × 112) with 256 grey levels per pixel. For some subjects, the images were taken at different times with illumination variation, different facial expressions, and different facial details. A preview image of the database is shown as Figure 8.

As in the previous experiment, we converted, cropped, and down-sampled all images to 25 × 30. Then, we randomly selected partial images from each person's images as the training set and the others as the testing set. The recognition accuracy results obtained when we randomly selected four images of each person and performed five experiments are shown in Table 2 below. The results suggest that the accuracy rate of human face recognition reaches above 0.93, which is a relatively high recognition accuracy rate.

In addition to the recognition accuracy rate, several other important and critical metrics are available for performance evaluation. The following metrics are also introduced: the false accept rate (FAR), the false reject rate (FRR), and receiver operating characteristics (ROCs). The FAR indicates the percentage of invalid individuals that are incorrectly accepted; the FRR measures the percentage of valid inputs that are incorrectly rejected; ROC graphs have in recent years increasingly been used in machine learning and data-processing research for organizing and visualizing the performance of a system [19]. An ROC graph is a graphical representation for visualizing the trade-off between the FAR and the FRR. In the ROC graph, points toward the top left combine a low FAR with a high acceptance rate of valid inputs; thus, they represent smart classifiers.

Experiments and Analysis on the Extended Yale B Database

Furthermore, experiments were carried out using face images from the extended Yale B face database to analyze the performance of face recognition using these metrics, and a comparative analysis of the experimental results against existing methods is provided in this section. The dataset contains 16,128 grayscale images in GIF format of 28 human subjects under nine poses and 64 illumination conditions. A preview image of the database of faces in the extended Yale B face database is shown as Figure 9. For this database, we simply use the cropped images and resize them to 32 × 32 pixels. In this experiment, we selected the facial images of all 28 individuals in the database; each individual has 64 images with different poses and different illuminations. Moreover, 2, 4, 8, 16, and 32 images were randomly chosen from each group as the training set, while the remaining images were selected as the testing set. Comparison results of the recognition rates of the proposed method, Local Gabor (LG, the method proposed in [20]), and the local Gabor binary pattern (LGBP, the method proposed in [21]) are shown in Table 3, and the ROC graph is shown as Figure 10. From Table 3, it can be seen that, as the number of training samples increases, the recognition rates of all methods also increase. In addition, when the training sample number is 32, the recognition rate of the proposed method outperforms LG and LGBP by margins of 6.87% and 3.91%, respectively. Therefore, in environments with different poses and different illuminations, the proposed method is better than the LG and LGBP face recognition methods.
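The FAR and FRR defined above can be computed directly from match scores by sweeping a decision threshold, which is how an ROC-style trade-off curve is traced; the sketch below uses synthetic scores (our illustration; the score distributions are assumptions):

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor scores accepted (score >= threshold).
    FRR: fraction of genuine scores rejected (score < threshold)."""
    far = np.mean(np.asarray(impostor_scores) >= threshold)
    frr = np.mean(np.asarray(genuine_scores) < threshold)
    return far, frr

# Sweep thresholds to trace the FAR/FRR trade-off curve.
rng = np.random.default_rng(2)
genuine = rng.normal(0.8, 0.1, 500)      # matches tend to score high
impostor = rng.normal(0.3, 0.1, 500)     # non-matches tend to score low
curve = [far_frr(genuine, impostor, t) for t in np.linspace(0.0, 1.0, 101)]
fars = [p[0] for p in curve]
assert all(fars[i] >= fars[i + 1] for i in range(len(fars) - 1))
```

Raising the threshold trades a lower FAR for a higher FRR, which is exactly the characterization change the ROC graphs in Figure 10 visualize.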

Conclusions
This paper first analyzes the Gabor wavelet theory, which has relatively strong resistance against image luminance and texture changes, and its transform features; puts forward the idea of extracting image feature information based on the Gabor wavelet transform; and then builds an image intelligent detection model based on the Gabor wavelet and the neural network model. Human face detection experiments based on three datasets are conducted to analyze the validity of the model algorithm. When the Gabor wavelet transform and the neural network are combined to detect human faces, the AT&T human face database is adopted to test the accuracy of the model algorithm, and the accuracy rate is found to be above 0.93, despite the complex texture and luminance changes. Based on the Yale and the extended Yale B face databases, and through comparison with other state-of-the-art methods, the results illustrate that the proposed method has improved face recognition performance.
Face recognition technology, which attracts an increasing number of scientific researchers, has developed vigorously and has been applied in many fields. However, there are still gaps between actual applications and the ideal situation. In future work, we will test our proposed method on real-world databases to further validate its effectiveness, such as the face detection data set and benchmark (FDDB) database and the labeled faces in the wild (LFW) database. At present, the technology has reached a bottleneck, and its research has very limited room for improvement. As a result, other techniques based on face recognition, such as age estimation and gender estimation, have become more challenging and more market-oriented topics. This provides a good direction for our future research.

Figure 1. Gabor filter template based on five frequencies and eight orientations: (a) the template of the real part; and (b) the template of the imaginary part.


Figure 4. A sample of dynamic error variations during the network training process.


Figure 6. A preview image of the database of faces in the Yale face database.

Figure 7. Qualitative results of our method on the Yale face database with illumination variation, complex texture, and facial expressions: (a) the recognition results of the face images labeled s1 in the Yale database; and (b) the recognition results of the face images labeled s7 in the Yale database.


Figure 8. A preview image of the database of faces in the AT&T database.


Figure 9. A preview image of the database of faces in the extended Yale B face database.


Figure 10. The receiver operating characteristic (ROC) curves of the various face recognition methods.


Table 1. Key parameters of the network training settings and the network convergence results.


Table 2. Neural network human face recognition accuracy rate based on the Gabor feature extraction.


Table 3. Recognition rates of the methods on the extended Yale B database with different numbers of training samples.
