An Enhanced LBPH Approach to Ambient-Light-Affected Face Recognition Data in Sensor Network

Abstract: Although combining a high-resolution camera with a wireless sensing network is effective for interpreting different signals for image presentation in face recognition, its accuracy is still severely restricted. Removing the unfavorable impact of ambient light remains one of the most difficult challenges in facial recognition. It is therefore important to find an algorithm that can capture the major features of an object when the ambient light changes. In this study, face recognition is used as an example of image recognition to analyze the differences between the Local Binary Patterns Histograms (LBPH) and OpenFace deep-learning neural network algorithms and to compare the accuracy and error rates of face recognition under different environmental lighting. According to the prediction results of 13 images based on grouping statistics, the face recognition accuracy of LBPH is higher than that of OpenFace in scenes with changing ambient lighting. When the azimuth angle of the light source exceeds ±25° at an elevation angle of 0°, the face recognition accuracy is low; when the azimuth angle is between +25° and −25° at an elevation angle of 0°, the accuracy is higher. The experimental results show that, with respect to the uncertainty of the illumination angle of the lighting source, the LBPH algorithm has a higher face recognition accuracy.


Introduction
Face recognition has become a major interest in Automatic Optical Inspection (AOI) image processing and computer vision because of its non-invasiveness and easy access. Generally, AOI classifies flaws into different types, traces the cause of the defects generated by the production machine, and adjusts the machine's parameters to reduce the incidence of defects [1]. Extracting face features with good discrimination and robustness and constructing efficient and reliable classifiers has always been the focus of face recognition research [2]. In recent years, face recognition applications have sprung up abundantly in Taiwan and around the world. Examples include M-Police face recognition in the police department, smartphone Face ID unlocking [3], entry and exit systems of unmanned shops, library book checkout systems, airport entry and exit systems, etc. [4,5].
Most commercially available automatic systems currently use fixed closed-circuit television (CCTV) cameras [6], which enable the deployment of efficient identification and tracking algorithms, as shown in Figure 1. Following the acquisition of meaningful data by a CCTV camera, it is critical to ensure that the control room receives the same authentic data in order to take action or send an alarm signal to the various departments. Since there are several cameras, the system generates a substantial amount of redundant and valueless image data, which causes problems in finding informative and valuable data among the acquired data, as well as continual bandwidth loss [7]. The significance of the viewpoint also needs to be considered for a CCTV system: the quality of the generated image is largely determined by the angle of light, which defines the viability of the visual task and simplifies its performance. In short, visibility is essential for the sensor to recognize a feature [8].
By eliminating a total of tens of thousands of meters of power lines, the promise of wireless sensing networks has recently contributed to the development of image object identification and tracking [9]. The image network structure demonstrates the potential for object identification in the monitoring sector [10,11]. As a result of highly developed technology, real-time image sensing platforms such as wireless visual sensor networks are created by integrating high-resolution cameras with wireless sensing networks. These platforms enable us to interpret various signals for image display [12].
Although face recognition technology has made great breakthroughs in recent years, it is still affected by different environmental lighting, resulting in a significant decline in accuracy and even system failure. Thus, overcoming the adverse effect of ambient light is still one of the core problems of face recognition [13]. Popular methods of facial identification include traditional machine learning, e.g., Eigenfaces [14], Fisherfaces [15], and the local binary patterns histogram (LBPH) algorithm [16]. Another important family of models in facial identification is neural networks, among which the deep neural network (DNN) [17] and convolutional neural network (CNN) [18] are widely adopted. In this study, building on previous face recognition research, we provide a method for investigating image visibility scenarios with varying degrees of ambient lighting.
Several reports investigate the effect of different lighting conditions on face recognition algorithms [19], but they do not describe in detail how different lighting angles affect recognition accuracy. The LBPH algorithm is robust at extracting important object features even under changing environmental lighting. For example, J. Howse [20] used Haar cascade classifiers together with methods such as LBPH, Fisherfaces, or Eigenfaces for object recognition, applied the detectors and classifiers to face recognition, and suggested the possibility of transfer to other recognition domains. V. B. T. Shoba et al. [21] proposed a face recognition method combining LBPH with a CNN, which not only reduced the computational cost but also improved the face recognition accuracy to 98.6% on the Yale data set.
I. Mondal et al. [22] proposed a CNN-based electronic voting system to ensure that each voter casts only one vote; through the algorithm, a voter's face is recognized and verified. L. Zhuang et al. [23] proposed a new deep-learning-based method to counter the adverse effects of environmental light changes in the process of face recognition.

The most common algorithms for face detection include the Haar Cascade [24] and the histogram of oriented gradients (HOG) classification methods [25]. By using Haar feature extractors with a multistage weak classification process (cascading), one can build a high-accuracy face classifier. The HOG algorithm divides an image into small cells and calculates a histogram of gradient directions in each cell. With the fusion of grouped cells, the most frequent gradient direction in a block is kept. The resulting HOG descriptors are used to train a classifier, such as a support vector machine (SVM), to detect faces.
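As a rough illustration of the per-cell histogram step in HOG, the sketch below computes orientation histograms with numpy. The function name `hog_cells`, the 8-pixel cell size, and the 9-bin count are illustrative assumptions, not the exact configuration of any cited work, and a full HOG pipeline would additionally normalize over blocks of cells.

```python
import numpy as np

def hog_cells(gray, cell=8, bins=9):
    """Per-cell histograms of unsigned gradient orientation, magnitude-weighted."""
    gray = gray.astype(np.float64)
    gx = np.zeros_like(gray)
    gy = np.zeros_like(gray)
    # Central-difference gradients (borders left at zero for simplicity)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation in [0, 180)

    h, w = gray.shape
    ch, cw = h // cell, w // cell
    hists = np.zeros((ch, cw, bins))
    for i in range(ch):
        for j in range(cw):
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a / (180 / bins)).astype(int) % bins
            for b in range(bins):
                hists[i, j, b] = m[idx == b].sum()
    return hists
```

For a 32 × 32 image this yields a 4 × 4 grid of 9-bin histograms; concatenating and block-normalizing such histograms produces the descriptor fed to an SVM.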
OpenFace is a DNN model for face recognition based on Google's FaceNet paper [26] that rivals the performance and accuracy of proprietary models. Another benefit of OpenFace is that, since the development of the model is mainly focused on real-time face recognition on mobile devices, the model can be trained to high accuracy with very little data [27]. Because of these advantages, OpenFace has been used in a facial behavior analysis toolkit and achieves outstanding results [28].
In this study, we trained and tested the LBPH and OpenFace face recognition models on Google's Colaboratory and compared the accuracy of the two face recognition algorithms under changing degrees of environmental lighting. Our findings are applicable to automated vision systems, which employ advanced algorithms to detect and track numerous pictures from multiple cameras. The analytical results can be utilized to recognize an object in a variety of lighting conditions.

Data Set

The Extended Yale Face Database B data set [29] is a gray-scale face image data set. Two data sets are provided: 1. Figure 2 shows the original images with azimuth angles from +130° to −130°; each image is 640 × 480 pixels, and there are 28 people and 16,128 images in total. Each person has 9 poses and 64 images of environmental lighting changes, for a total of 576 images per person. The 9 postures include the following: posture 0 is the frontal posture; postures 1, 2, 3, 4, and 5 are about 12° from the camera's optical axis (measured from posture 0); and postures 6, 7, and 8 are about 24° [30]. 2. Figure 3 shows face images with azimuth angles from +130° to −130°, all manually aligned, cropped, and resized to 168 × 192 pixels. There are 38 people and 2432 images in total. There is only 1 posture per person, with 64 images of environmental lighting changes, i.e., 64 images per person. In this study, we used these face images for face recognition [31].

Google Colaboratory
Google's Colaboratory (Colab) provides an interactive environment that allows people to write and execute Python programs and Linux commands in a browser and to use TPUs and GPUs at no cost. Colab is therefore suitable for training small neural network models. In addition, Colab is not a static web page but a "Colab notebook": one can edit the code; add comments, pictures, HTML, LaTeX, and other formats; execute the code in sections; and save the current output results on Colab.

Python Libraries
The face recognition algorithms and packages used in this study are all written in the Python 3 programming language. The main libraries used are shown in Table 1. The opencv and imutils packages were mainly used for image processing. The opencv functions included building the LBPH face recognition model, reading images and Caffe models, executing TensorFlow and other deep learning models, image pre-processing, capturing images from cameras, and displaying images. The imutils package was used in conjunction with opencv to count image frames and to resize images while maintaining their aspect ratio. The keras library was used for data augmentation: it randomly performed operations such as horizontal translation, vertical translation, zooming, and horizontal flipping on each image to generate many similar images, increasing the number of images for training. The numpy library was used for one-dimensional array and matrix (image) operations. The sklearn library was used to split the training and test sets, perform label encoding, evaluate model performance, and implement OpenFace's classifier, an SVM. The pickle library was used to serialize objects as binary files for later program runs: the training and test data, the trained SVM classifier, and the label encoder (LabelEncoder). The sqlite3 library was used to store image paths, prediction results, and statistical results of training and testing. The matplotlib library was used to visualize the statistical results.
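As a hedged sketch of the augmentation step performed here with keras, the following numpy-only stand-in applies a random horizontal flip and a random translation to a gray-scale image. The function names and the shift range are illustrative assumptions, and keras's ImageDataGenerator fills translated borders rather than wrapping pixels around as `np.roll` does.

```python
import numpy as np

def augment(image, rng, max_shift=4):
    """Return one randomly flipped/shifted variant of a gray-scale image."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1]                     # horizontal flip
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(out, (dy, dx), axis=(0, 1))  # translation (wrap-around)
    return out

def expand(image, n=20, seed=0):
    """Expand one image into n augmented variants, as done per image here."""
    rng = np.random.default_rng(seed)
    return [augment(image, rng) for _ in range(n)]
```

Calling `expand(face)` yields the 20 variants per image described above.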

Data Transfer
Wi-Fi networks are currently the most popular type of signal exchange for local area networks and internet access [32]. This fully developed wireless technology makes it possible to take advantage of Wi-Fi-based sensor networks, as wireless transmission has its greatest influence in outdoor areas [33]. Additionally, 100 transmitters and receivers can be supported by each Wi-Fi communication unit [34]. In the context of the behavior of Wi-Fi-based WSNs, a heuristic strategy known as ADVISES (AutomateD Verification of wSn with Event calculuS) provides a mechanism to understand when sequences of events occur, to drive design decisions, to add or drop nodes, and to investigate the expected minimum channel quality required for a WSN to work [35].
Sending high-resolution multi-spectral pictures between a camera and an application can be particularly difficult, especially if transfer durations must be shorter than or equal to image frame timings to avoid congestion in image data transmission [36]. Figure 4 describes a method of delivering pictures to clients over various transceiver connections that accommodates diverse transceiver technologies such as Ethernet, eSATA/SAS, and PCI Express. Standard data are transmitted over the transceiver lines from the source, without any modifications in format or image processing.
After the image was received and transferred to a personal computer (PC) in the control room, the suggested approaches, LBPH and OpenFace, were utilized to validate the images. In the next step, the image data set was evaluated, and the algorithms recognized the image's features before transmitting it to automatic detection applications.
To guarantee the most precise image data transfer from the sensing field into crucial applications, the WSN's Quality of Service (QoS) must be maintained for as long as possible by paying attention to coverage, topology, scheduling mechanism, deployment strategy, security, density, packet transfer distance, memory, data aggregation, battery, etc. [37], all of which are summarized in Table 2. This strategy has a substantial long-term impact on network effectiveness. Existing topology control strategies were classified into two categories in this study: network connectivity and network coverage. Existing protocols and techniques were surveyed for each area, with a focus on barrier coverage, blanket coverage, sweep coverage, power control, and power management.

Work | Metric | Result
Coverage problem in wireless sensor networks: a survey [40] | Coverage | To obtain significant results, the integration of both coverage and connectivity was required.
Maximum target coverage problem in mobile wireless sensor network [41] | | The Maximum Target Coverage with Limited Mobile (MTCLM) COLOUR algorithm performed well when the target density was low.
Deployment strategies for wireless sensor networks [42] | Deployment | The deployment affected the efficiency and the effectiveness of sensor networks.
Service-oriented node scheduling scheme for wireless sensor networks using Markov random field model [43] | Scheduling | A new MRF-based multi-service node scheduling (MMNS) method efficiently extended the network lifetime.
A reinforcement learning-based sleep scheduling (RLSSC) algorithm for desired area coverage in solar-powered wireless sensor networks [44] | | RLSSC could successfully modify the working mode of nodes in a group by recognizing the environment and balancing energy consumption across nodes to extend the network's life while keeping the intended coverage range.
Energy-aware and density-based clustering and relaying protocol (EA-DB-CRP) for gathering data in wireless sensor networks [45] | Density | The proposed EA-DB-CRP had a significant impact on network lifetime and energy utilization compared with other relevant studies.
Security for WSN based on elliptic curve cryptography [46] | Security | The implementation of a 160-bit ECC processor on a Xilinx Spartan-3AN FPGA met the security requirements of sensor networks designed for fast 32-bit numerical computations.
An adaptive enhanced technique for locked target detection and data transmission over Internet of Healthcare Things [47] | | Color and gray-scale images with varied text sizes, combined with encryption algorithms (AES and RSA), gave superior outcomes in a hybrid security paradigm for protecting diagnostic text data.
Secure data aggregation in wireless sensor networks [48] | Data aggregation | The study presented a thorough examination of the notion of secure data aggregation in wireless sensor networks, focusing on the relationship between data aggregation and its security needs.

LBPH: Face Recognition Algorithm
LBPH is a method that extracts the texture features of an image through the local binary pattern (LBP) operator and summarizes them through a series of histograms. Finally, after computing the distance between faces using the Euclidean distance, LBPH outputs the classification result.
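To make the pipeline concrete, the sketch below implements a basic LBP code map, the per-cell histogram descriptor, and a Euclidean nearest-neighbor decision in numpy. It is a simplified stand-in for OpenCV's LBPH recognizer: the 8 × 8 grid, 256-bin histograms, and function names are illustrative assumptions rather than the study's exact configuration.

```python
import numpy as np

def lbp_image(gray):
    """8-bit LBP code per interior pixel: compare each pixel with its
    8 neighbors (clockwise from top-left) and pack the results into bits."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def lbph_descriptor(gray, grid=(8, 8)):
    """Split the LBP map into grid cells, histogram each cell (256 bins),
    and concatenate the normalized histograms into one feature vector."""
    codes = lbp_image(gray)
    gy, gx = grid
    feats = []
    for row in np.array_split(codes, gy, axis=0):
        for cell in np.array_split(row, gx, axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            s = hist.sum()
            feats.append(hist / s if s else hist)
    return np.concatenate(feats)

def predict_nearest(train_feats, train_labels, query_feat):
    """Classify by the Euclidean distance to the nearest training descriptor."""
    d = np.linalg.norm(train_feats - query_feat, axis=1)
    i = int(np.argmin(d))
    return train_labels[i], float(d[i])
```

The reported "prediction distance" then corresponds to the distance of the nearest match, with smaller values meaning higher confidence.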

OpenFace: Face Recognition Algorithm
The model used in this research was the OpenFace model nn4.small2.v1; its network structure is shown in Figure 5. The accuracy of this model, benchmarked on the Labeled Faces in the Wild (LFW) data set [50], was 0.9292 ± 0.0134 [49]. The Area Under Curve (AUC) of the nn4.small2.v1 model was 0.973, which, being close to 1, shows the high accuracy of the model's predictions.
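The 128-dimensional embeddings that OpenFace produces can be compared directly by Euclidean distance; two images of the same person should yield nearby vectors. A minimal sketch follows, in which the 0.99 decision threshold is an illustrative assumption (FaceNet-style pipelines tune it on a validation set), not a value from this study.

```python
import numpy as np

def embedding_distance(a, b):
    """Euclidean distance between two 128-D face embeddings."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def same_person(a, b, threshold=0.99):
    """Verification decision: close embeddings are taken as the same identity."""
    return embedding_distance(a, b) < threshold
```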

Data Pre-Processing

First, we downloaded the pre-cut face images of 38 people in The Extended Yale Face Database B data set from the website and converted the images from PGM to JPEG format for easier image management. Among the downloaded images, those of 7 people were partially damaged and removed, and the images of the remaining 31 people were used in the research.

Next, we selected 1 of the 31 people as the reference and chose 62 of that person's 64 images (including duplicate images) as the reference set. We then divided these images into 7 groups according to the elevation angles (refer to Table 3), created 7 folders with the group names, and put the images into the corresponding folders. We repeated exactly the same process for the remaining 30 people as for the reference person, as shown in Figure 6. We applied data augmentation to each image in each group by randomly panning, zooming, and flipping the images horizontally. Each image was expanded to 20 images, which were stored in a folder with the same name as the image itself. Finally, the 62 original images of each person were removed from the 7 group folders. Refer to Figure 7 for the complete process.
To briefly explain the image naming rules, the image name yaleB01_P00A+000E+00 in Table 2 is used as an example: yaleB01 represents a person's name, and the three capital letters P, A, and E represent Posture, Azimuth, and Elevation, respectively. In this study, only one posture existed for every person, and thus Posture is recorded as P00.
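Following these naming rules, a small helper can decode a file stem into its person, posture, azimuth, and elevation fields. The regular expression and the function name below are ours, introduced for illustration, not taken from the study's code.

```python
import re

# Decodes stems such as "yaleB01_P00A+000E+00":
# person, P = Posture, A = Azimuth (signed degrees), E = Elevation (signed degrees).
NAME_RE = re.compile(r"^(?P<person>yaleB\d+)_P(?P<pose>\d+)"
                     r"A(?P<azimuth>[+-]\d+)E(?P<elevation>[+-]\d+)$")

def parse_name(stem):
    m = NAME_RE.match(stem)
    if m is None:
        raise ValueError(f"unrecognized image name: {stem}")
    return {
        "person": m.group("person"),
        "pose": int(m.group("pose")),
        "azimuth": int(m.group("azimuth")),
        "elevation": int(m.group("elevation")),
    }
```

For example, `parse_name("yaleB01_P00A+000E+00")` recovers person yaleB01 with azimuth 0° and elevation 0°, which is how images can be routed to their elevation-angle group folders.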

Data Set Split

As shown in Table 4, all the images after data augmentation were divided into two parts: 80% of the images formed the training set, and 20% formed the test set. However, the division differed from the traditional splitting method, in which 80% of everyone's images are pooled to form the training set and 20% are pooled to form the test set. Multiplying Formulas (1) and (2) by 31 (people) gives the numbers of images in the training set and the test set, as shown in Table 3. The reason for splitting at this stage was that the prediction accuracy of each image needed to be counted later. If the traditional data splitting method had been used, the data split would have been uneven, and the subsequent statistics would have lost accuracy.
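The per-bucket split described above can be sketched as follows. The function `split_per_group` and its `(person, group) -> paths` bucket layout are illustrative assumptions about how the per-person, per-group 80/20 division might be coded, not the study's actual implementation.

```python
import numpy as np

def split_per_group(paths_by_group, train_frac=0.8, seed=0):
    """Split each (person, group) bucket 80/20 separately, instead of pooling
    all images first, so every lighting group is evenly represented in both
    the training and the test set."""
    rng = np.random.default_rng(seed)
    train, test = [], []
    for key, paths in sorted(paths_by_group.items()):
        paths = list(paths)
        rng.shuffle(paths)
        cut = int(round(train_frac * len(paths)))
        train.extend(paths[:cut])
        test.extend(paths[cut:])
    return train, test
```

Because every bucket contributes exactly 80%/20%, the later per-group accuracy statistics stay balanced, which is the stated motivation for avoiding the pooled split.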

LBPH Model Training
First, we divided the image list of the data set (all image paths) into a training set and a test set and stored them in a Python list. Next, we extracted the person's name from the path of each image as a label and saved it in a Python list. Finally, we serialized the image path and label of each test image into a binary file and saved it to the hard disk.

We read the images of the training set, converted them to gray-scale images, and added them to a Python list. Through the label encoder, we encoded all labels into corresponding numbers, with one number per person. Next, all gray-scale images and encoded labels were fed into the LBPH model for training. After the model was trained, we saved the LBPH model as a YAML file and saved the label encoder as a binary file through serialization.
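The label-encoding and serialization steps can be sketched with a minimal stand-in for sklearn's LabelEncoder; the class below is our own illustrative version (the study used sklearn's implementation), while the pickle usage mirrors the serialization described above.

```python
import pickle

class SimpleLabelEncoder:
    """Minimal stand-in for sklearn's LabelEncoder: maps each distinct
    person name to an integer so the recognizer sees numeric classes."""
    def fit(self, names):
        self.classes_ = sorted(set(names))
        self._index = {n: i for i, n in enumerate(self.classes_)}
        return self

    def transform(self, names):
        return [self._index[n] for n in names]

    def inverse_transform(self, codes):
        return [self.classes_[c] for c in codes]

# Encode the training labels, then serialize the encoder so the
# prediction stage can map predicted numbers back to names.
encoder = SimpleLabelEncoder().fit(["yaleB01", "yaleB02", "yaleB01"])
blob = pickle.dumps(encoder)
```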

OpenFace Model Training
First, we divided the image list of the data set (all image paths) into a training set and a test set, extracted the names of people from each image path as labels, and stored them in the Python list. We then stored the image paths and labels of the training set and the test set in the SQLite database.
We read the images of the training set and set the image width to 600 pixels, with the height adjusted automatically to maintain the aspect ratio. We used OpenCV's blobFromImage function to perform channel swapping, feature normalization, and image resizing to 96 × 96 pixels. We sent each image to the OpenFace pre-trained neural network nn4.small2.v1 model for inference, which output a 128-dimensional feature vector. We used NumPy's flatten function to flatten the 128-dimensional vector into a one-dimensional array and added the array to the Python list. The images of the test set were processed in the same way as the training set, except that the flattening step was skipped, and they were finally stored in the Python list.
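The preprocessing that blobFromImage performs here can be sketched in pure NumPy; this is an illustrative approximation (scaling factor and channel order are assumptions), not OpenCV's exact implementation:

```python
import numpy as np

# Sketch of blobFromImage-style preprocessing: normalise pixel values,
# swap BGR to RGB, and reorder to a 1x3xHxW blob (resizing omitted;
# the input is assumed to already be 96x96).

def to_blob(image_bgr, scale=1.0 / 255):
    """image_bgr: HxWx3 uint8 array; returns a 1x3xHxW float32 blob."""
    rgb = image_bgr[:, :, ::-1].astype(np.float32) * scale  # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))                      # HWC -> CHW
    return chw[np.newaxis, ...]                             # add batch axis

img = np.zeros((96, 96, 3), dtype=np.uint8)
print(to_blob(img).shape)  # (1, 3, 96, 96)
```

The resulting blob is what a network such as nn4.small2.v1 consumes to produce the 128-dimensional embedding.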
While the labels were encoded in the same way as in LBPH, the 128-dimensional feature vectors of the training set and the encoded labels were sent into the SVM classifier for training. After training, the SVM classifier, the label encoder, the 128-dimensional feature vectors of the test set, and the names of the test set were individually stored as binary files through serialization.

LBPH Prediction Image
We loaded the trained LBPH model and label encoder from the hard disk and retrieved the image paths and labels of all test sets from the binary file. From the image path of each test set, we extracted the person's name, group name, and file name. We read all the images, converted them into gray-scale images, and saved them into the Python list. In the label part, the name of the person was label-encoded and converted into the corresponding number. We sent all gray-scale images into the LBPH model for prediction. After the prediction was completed, we saved the group name, file name, test label, prediction label, and prediction distance into the Python list. We then used a Python dictionary to store the list, with the name of the person as the key and the data as the value, and we saved the dictionary as a binary file through serialization. Through deserialization, the binary dictionary file was read, and the person's name, group name, file name, test label, predicted label, and predicted distance in the list were extracted from the dictionary by using the name of the person as the key. We then compared the test labels with the predicted labels one by one and recorded the identification results. At the end, the serial number, group name, file name, test label, predicted label, predicted distance, and identification result were stored in the SQLite database together.
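The final storage step can be sketched with Python's built-in sqlite3 module; the table and column names below are hypothetical, and the distance value is made up:

```python
import sqlite3

# Minimal sketch of recording one per-image prediction in SQLite,
# including the comparison of test label against predicted label.

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lbph_results (
    id INTEGER PRIMARY KEY,
    group_name TEXT, file_name TEXT,
    test_label INTEGER, predicted_label INTEGER,
    distance REAL, correct INTEGER)""")

record = ("A+120-120E+00", "person01_A+000E+00.jpg", 3, 3, 42.7)
correct = int(record[2] == record[3])  # identification result
conn.execute(
    "INSERT INTO lbph_results (group_name, file_name, test_label,"
    " predicted_label, distance, correct) VALUES (?, ?, ?, ?, ?, ?)",
    record + (correct,))
conn.commit()
print(conn.execute("SELECT correct FROM lbph_results").fetchone()[0])  # 1
```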

OpenFace Prediction Image
We read the trained SVM classifier, label encoder, test set images, and labels from memory through deserialization. We encoded each label through the label encoder and converted it into a corresponding number. We used the SVM classifier to predict each test set image and stored the prediction results and probabilities. We pulled the image paths of all test sets from the SQLite database and extracted the file name and group name from them. We compared the label of the test set with the predicted result and recorded the identification result. Finally, the serial number, person name, group name, file name, coded test set label, predicted label, predicted probability, and identification result were stored in the SQLite database.

Statistics and Visualization
We obtained the prediction results from the SQLite databases of LBPH and OpenFace. We counted the number of correct and incorrect predictions for each image of each group and saved the number, person name, group name, file name, status, and quantity into the SQLite database.
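The per-image counting step can be expressed as a single GROUP BY query; the table layout and file names here are hypothetical:

```python
import sqlite3

# Sketch of counting correct and incorrect predictions per image:
# correct is stored as 1/0, so SUM gives the correct count and
# COUNT(*) - SUM gives the incorrect count.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE results (file_name TEXT, correct INTEGER)")
conn.executemany("INSERT INTO results VALUES (?, ?)",
                 [("img_+000.jpg", 1), ("img_+000.jpg", 1),
                  ("img_+120.jpg", 0), ("img_+120.jpg", 1)])
rows = conn.execute(
    "SELECT file_name, SUM(correct), COUNT(*) - SUM(correct) "
    "FROM results GROUP BY file_name ORDER BY file_name").fetchall()
print(rows)  # [('img_+000.jpg', 2, 0), ('img_+120.jpg', 1, 1)]
```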
For the visualization part, we selected the A+120-120E+00 grouping, shown in Figure 8, for more detailed statistics. This group consisted of 13 photos, where A+120-120 meant the azimuth ranged from +120° to −120° and E+00 meant the elevation was 0°. The reason for choosing this group was that its azimuth angles were widely distributed, and E+00 made the illumination even in the longitudinal plane. With A+000 as the center, the lighting effect was symmetrical on the face, making it easier to observe at which azimuth the accuracy rises or falls sharply.

Experimental Results
Figure 9 is a stacking diagram of LBPH's A+120-120E+00 grouping error rate comparison under different ambient light levels. Figure 10 illustrates the stacking method of Figure 9. The bottom of the stacked image is the azimuth +120°, the center white block is the azimuth +000°, and the top is the azimuth −120°. The recognition error rate is lower closer to the center white block and higher otherwise. Therefore, the recognition error rates of the uppermost and the lowermost blocks are the highest, and the recognition error rate decreases as the azimuth angle approaches +000°. Figure 11 is a stacking diagram of OpenFace's A+120-120E+00 grouping error rate comparison under different ambient light levels. The recognition error rates of the uppermost and the lowermost blocks are the highest, and they decrease as the azimuth angle approaches +000°. Overall, OpenFace's recognition error rate was about 20% to 49% higher than LBPH's in the A+120-120E+00 grouped images.

Accuracy rate = ∑(i = 1 to 31) ((number of correctly identified sheets ÷ 4) × 100%) ÷ 31 (3)

Figure 12 shows the average ambient lighting accuracy rates of 31 people grouped by A+120-120E+00. Formula (3) is the calculation formula for the average recognition accuracy rate of each azimuth angle in Figure 12.
First, we calculated the ratio of the number of correct predictions to the total number of predictions and then converted the ratio into a percentage.Finally, we added up all the percentages and divided by 31 people to get the average.
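The computation behind Formula (3) can be sketched as follows; the per-person correct counts below are made-up values, and there are 4 test images per person at each azimuth:

```python
# Sketch of Formula (3): per-person accuracy is (correct / 4) * 100%,
# and the group value is the average of those percentages over 31 people.

def average_accuracy(correct_counts, images_per_person=4):
    percentages = [(c / images_per_person) * 100 for c in correct_counts]
    return sum(percentages) / len(percentages)

correct_counts = [4] * 25 + [3] * 6   # hypothetical results for 31 people
print(round(average_accuracy(correct_counts), 2))  # 95.16
```

Formula (4) for the error rate follows the same shape, with the number of misidentified sheets in place of the correct ones.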
For the picture with A+000° as the center line, the recognition accuracy rate was almost symmetrical. There was not much difference between the light source on the left or the right. The closer the azimuth angle was to +000°, the higher the recognition accuracy rate; the farther away from +000°, the lower the recognition accuracy rate. Overall, the recognition accuracy rate of LBPH under changes in environmental lighting was far better than that of OpenFace. Therefore, LBPH is more suitable for applications in environments with light changes. From Figure 12, whether LBPH or OpenFace was used, the recognition accuracy rate was highest at the azimuth angle of −010°. LBPH was 34.68% more accurate than OpenFace at the azimuth angle of −010° and 38.71% more accurate at the azimuth angle of +120°.

Error rate = ∑(i = 1 to 31) ((number of sheets with identification errors ÷ 4) × 100%) ÷ 31 (4)

Figure 13 shows the average ambient light error rate of 31 people grouped by A+120-120E+00, and Formula (4) is the calculation formula for the average identification error rate of each azimuth angle shown in Figure 13. First, we calculated the ratio of the number of prediction errors to the total number of predictions and converted it into a percentage. Finally, we added up all percentages and divided by 31 people to get the average.

With A+000° as the center line, the recognition error rate was almost symmetrical. Overall, the recognition error rate of LBPH was much lower than that of OpenFace. When the azimuth angles were in the range of +25° to −25°, the recognition error rate was relatively low. When the azimuth angles were in the ranges of +50° to +120° and −50° to −120°, the recognition error rate was relatively high. For LBPH, when the azimuth angle moved from +50° to +25°, the identification error rate was reduced by 26.62%; when the azimuth angle moved from −50° to −25°, the identification error rate was reduced by 17.74%. For OpenFace, when the azimuth angle moved from +50° to +25°, the recognition error rate dropped by 13.71%; when the azimuth angle moved from −50° to −25°, the recognition error rate dropped by 20.17%. From the azimuth angles of +50° to +25° and −50° to −25°, we found that the recognition error rate was significantly reduced. This means that the change of the azimuth angle reduces the shadow on the face and makes the contours of the facial features clearer, reducing the error rate of face recognition.

Discussion
The benefits and drawbacks of both LBPH and OpenFace were published in previous studies [27]. In general, OpenFace's accuracy remains consistently higher than LBPH's as more samples are added, although increasing the number of samples lowers the overall accuracy. OpenFace's SVM has a faster training time than LBPH, no matter how many samples are included. In prediction time per image, OpenFace is slightly slower than LBPH, but its prediction time does not grow further as the number of samples increases. With sample sizes larger than 50, the prediction time per image of LBPH exceeds that of OpenFace executed on a GPU.

The circumstances under which images are captured have a great influence on the accuracy of these two methods [51]. Image noise affects LBPH more than OpenFace. Across various camera resolutions, OpenFace is more robust than LBPH. However, under different lighting conditions, LBPH performs better than OpenFace.

In terms of the minimum amount of training data required to achieve a good result, OpenFace requires fewer than 20 images per person to achieve 95% accuracy, while LBPH requires far more to achieve similar results. A main advantage of OpenFace is that it is designed for real-time identification and can easily be run on mobile devices. One can use very little data to train an OpenFace model and achieve high accuracy.

LBPH is superior to OpenFace in that it works better in unevenly lit environments. LBPH is suitable for systems with inconsistent illumination, such as payment systems. LBPH is less affected by different lighting because it computes each pixel's binary code from its neighborhood, which reduces the interference of local illumination. Throughout the procedure, LBPH can reflect the local characteristics of each pixel.

The previous studies did not analyze in detail the impact of various lighting conditions on face recognition, nor did they conduct thorough research on the two methods at different illumination angles. We confirmed that, in terms of different lighting angles, LBPH is more consistently accurate than OpenFace. In particular, when the lighting angle is within ±25° in the horizontal plane, LBPH achieves an accuracy close to or above 90%.

Conclusions and Future Work
From the results, LBPH is more suitable than OpenFace for recognition applications with ambient lighting changes. When the azimuth angle of the light source is greater than +25° or less than −25° at an elevation angle of +000°, the shadows on the face increase and the recognition accuracy is lower; otherwise, the result is the reverse. According to the results of face recognition with changes in ambient light, LBPH has superior classification and recognition performance compared with the OpenFace recognition model.
Therefore, for image recognition (such as face recognition) that requires more detailed output of line texture features but is affected by ambient light, LBPH achieves higher recognition accuracy in its object recognition or classification results than OpenFace. Furthermore, the previously built image collection can be employed in the subsequent data transfer process.
In addition, most facial images are captured in natural settings. As a result, the contents of a picture can be very complicated, and the lighting conditions can be very diverse. The other major problems of camera images are distortion, noise, and the different resolutions of lens systems. Applying the appropriate methods will improve the system and make it more robust and efficient so that it can be implemented in realistic settings.
Since a trained LBPH algorithm is very robust to changes in ambient light, it can be deployed on a Raspberry Pi or other edge computing processors to be implemented in

Figure 4. The process of image recognition from multiple cameras into PC.


Figure 7. Data set processing flow chart.


Figure 9. Comparison of the error rate of LBPH with different ambient light levels.


Figure 10. Illustration of stacked graphs of different illumination error rates for a single user.


Figure 11. Comparison of error rate of OpenFace with different ambient light levels.


Table 1. List of Python libraries.

Table 2. Comparison of related work by metrics.


Table 4. Number of images in training set and test set after data enhancement.

In this paper, the new splitting method was to read the 20 images after each data enhancement in each group of personal images and split each group into a training set and a test set. Thus, in each enhancement image group, there were 16 training set images and 4 test set images. The number of sheets in the training set and test set for each personal image was calculated according to Formulas (1) and (2).