Article

Raspberry Pi-Based Face Recognition Door Lock System

by Seifeldin Sherif Fathy Ali Elnozahy, Senthill C. Pari * and Lee Chu Liang
Faculty of Engineering, Multimedia University, Persiaran Multimedia, Cyberjaya 63100, Malaysia
* Author to whom correspondence should be addressed.
Submission received: 3 February 2025 / Revised: 8 April 2025 / Accepted: 17 April 2025 / Published: 20 May 2025

Abstract:
Access control systems protect homes and businesses in a continually evolving security industry. This paper designs and implements a Raspberry Pi-based facial recognition door lock system using artificial intelligence and computer vision for reliability, efficiency, and usability. With the Raspberry Pi as its central processor, the system uses facial recognition for authentication. Essential components include a camera module for real-time image capture, a relay module for solenoid lock control, and OpenCV for image processing. The system uses the DeepFace library to detect user emotions and adaptive learning to improve recognition accuracy for approved users. The device also adapts to poor lighting and varying distances, and it sends real-time remote monitoring messages. Key achievements include adaptive facial recognition, which ensures that the system improves as it is used, and the seamless integration of real-time notifications and emotion detection. Face recognition performed well in many settings, and the modular architecture facilitated hardware–software integration and scalability for various applications. In conclusion, this study created an intelligent facial recognition door lock system using Raspberry Pi hardware and open-source software libraries. The system addresses the limits of traditional access control and is practical, scalable, and inexpensive, demonstrating biometric technology’s potential in modern security systems.

1. Introduction

Traditional door-locking mechanisms face challenges such as lost keys, duplication risks, and unauthorized access. Facial recognition offers a secure and user-friendly alternative by leveraging unique biological features [1]. With advancements in artificial intelligence (AI) and computer vision, facial recognition is now widely adopted in banking, healthcare, and public security, making it a crucial component of smart access control systems.
Facial recognition systems integrate computer vision and AI to enable real-time detection and authentication. Computer vision extracts facial features, while AI continuously improves accuracy through adaptive learning. These technologies create a robust and scalable security solution that is capable of adapting to environmental changes, enhancing its overall reliability.
This paper presents a Raspberry Pi-based face recognition door lock system (Raspberry Pi Ltd., Cambridge, UK), offering affordability, scalability, and advanced technological capabilities. The system employs a camera module for real-time face detection, a solenoid lock for secure access, and software optimizations for improved performance. Adaptive learning allows the system to update facial data dynamically, accommodating variations in appearance due to aging, hairstyles, or accessories without requiring manual retraining. Preprocessing techniques, including lighting normalization and feature extraction, ensure consistent performance across different environmental conditions, making the system effective even in low-light settings.
An additional feature, emotion detection, enhances user interaction by recognizing emotional states such as happiness, anger, or surprise, extending the system’s applications beyond security to workplaces, educational institutions, and public spaces. Lighting condition adaptation ensures reliable operation under varying illumination conditions, reducing the impact of shadows or glare on recognition accuracy. Scaling accuracy further ensures that users can be identified at different distances and angles, increasing flexibility and usability.
The system also integrates real-time notifications, allowing the Raspberry Pi to communicate with connected devices without requiring an app. Recognized users receive voice feedback indicating access, while unauthorized detections trigger alerts, ensuring enhanced security and user awareness. This feature extends the system’s usability to scenarios requiring real-time monitoring, such as office environments and restricted facilities. Multiple authorized face recognition further allows for seamless multi-user support without requiring separate hardware configurations.
By combining adaptive learning, real-time processing, and smart security features, this study delivers a practical, scalable, and efficient solution to modern access control challenges. The proposed system addresses key limitations of existing solutions by enhancing recognition accuracy, improving usability, and enabling real-time adaptability to changing conditions.
This paper presents a Raspberry Pi-based facial recognition door lock system that balances affordability, scalability, and advanced technology. By leveraging the Raspberry Pi’s processing power, the system integrates real-time facial recognition with adaptive learning, refining its accuracy over time. Robust preprocessing techniques mitigate environmental factors like poor lighting and variable distances, ensuring consistent performance. Beyond its technical capabilities, this system offers operational advantages such as real-time notifications for access attempts, keeping users informed and in control. Multi-user management without retraining simplifies access control, making it ideal for shared environments like offices, schools, and multi-tenant buildings. This paper delivers a comprehensive, efficient, and reliable access management solution by addressing limitations of traditional locks and biometric systems. To achieve this, the system integrates a Raspberry Pi, camera module, relay module, and solenoid lock for real-time operation. The software development phase introduces adaptive face recognition, emotion detection, and preprocessing for lighting adaptation, ensuring reliable performance in diverse conditions while maintaining high accuracy and efficiency.
The key contributions of this work are summarized below.
Low cost and standalone design:
  • Uses a Raspberry Pi for processing without requiring a dedicated server or cloud services.
  • Reduces implementation costs while maintaining reliability.
Real-time face recognition:
  • Implements an efficient face detection and recognition algorithm to authenticate users.
  • Ensures quick and accurate access control.
Emotion detection enhancement:
  • Integrates an emotion detection module to analyze users’ expressions.
  • Can be extended for additional security features or user interaction improvements.
Secure and automated door lock system:
  • Recognized faces trigger an automated unlocking mechanism.
  • Unrecognized faces keep the door locked, enhancing security.
Scalability and customization:
  • Allows adding multiple authorized users by updating the stored face encodings.
  • Can be adapted for various access control applications beyond door locks.

2. Related Works

This review assesses research on facial recognition-based access control systems, particularly those utilizing Raspberry Pi for cost-effective implementation. Various approaches integrating machine learning, edge computing, and preprocessing techniques have been explored to enhance recognition accuracy and system adaptability.
Nasreen Dakhil Hasan and Adnan M. Abdulazeez [2] conducted a comprehensive review of deep learning-based facial recognition techniques. Their study highlights the advancements in convolutional neural networks (CNNs) and autoencoders for improving accuracy across various conditions, including lighting changes, occlusions, and facial expressions. The review also discusses emerging technologies such as 3D facial reconstruction and multimodal biometrics, emphasizing ethical concerns like privacy and bias in AI-driven facial recognition. While their work provides valuable insights into the broader landscape of deep learning applications in face recognition, our study focuses on optimizing real-time performance for Raspberry Pi while maintaining efficiency and adaptability.
M. Alshar’e et al. [3] proposed a deep learning-based home security system using MobileNet and AlexNet for biometric verification. While their CNN-based hierarchical feature extraction improves recognition under controlled conditions, the computational complexity limits its real-time application on Raspberry Pi. Privacy concerns regarding biometric data storage were also highlighted, advocating for on-device processing and encryption. Our study optimizes recognition efficiency for Raspberry Pi while incorporating adaptive learning and preprocessing enhancements.
A. Jha et al. [4] developed a low-cost access control system using the Haar cascade classifier on Raspberry Pi. While their approach is efficient for resource-constrained devices, its limitations include poor performance under low lighting, rigid facial angles, and a static face dataset requiring manual updates. Our study addresses these challenges through advanced preprocessing, dynamic face database updates, and real-time notifications for improved scalability and adaptability.
A. D. Singh et al. [5] implemented a Haar-cascade-based facial recognition system for residential access control, emphasizing affordability and simplicity. However, accuracy issues under varying lighting conditions and the lack of dynamic database updates restricted its effectiveness. We enhance this framework by integrating adaptive learning, robust preprocessing, and real-time notifications for improved usability and security.
D. G. Padhan et al. [6] utilized the HOG algorithm with IoT integration for facial recognition-based home security. While computationally efficient, this system struggles in low-light conditions and requires manual database updates, limiting its scalability. Our work improves recognition accuracy through preprocessing techniques such as illumination normalization and gamma correction, while adaptive learning automates database updates.
N. F. Nkem et al. [7] explored PCA-based facial recognition for low-power security applications, emphasizing dimensionality reduction. However, PCA’s sensitivity to lighting and facial orientation and its static database restrict its real-world applicability. Our system overcomes these limitations by integrating adaptive learning and preprocessing techniques, ensuring higher accuracy and robustness.
In summary, prior studies have laid the foundation for Raspberry Pi-based facial recognition systems but often lacked adaptability, real-time learning, and robust preprocessing for varying environmental conditions. Our work builds on these findings by addressing computational constraints, enhancing scalability, and integrating real-time security features to create a more efficient and practical access control system.

3. Design Methodology

The Raspberry Pi-based facial recognition door lock solution required careful hardware design, software development, and rigorous testing to address modern access control issues. This section discusses hardware integration, adaptive facial recognition software development, and system testing to verify the system’s performance. The implementation prioritizes safety, reliability, scalability, and usability, and it necessitated meticulous integration of hardware and software, consideration of the Raspberry Pi’s processor limitations, and the handling of environmental variables.
The HOG algorithm segments an image into small cells, calculates a Histogram of Oriented Gradients for each cell, and then normalizes local contrast over overlapping blocks. It is used with machine learning methods such as Support Vector Machines (SVMs) for object detection.
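The per-cell histogram stage described above can be sketched in a few lines of NumPy. This is a minimal, dependency-free illustration of the idea, not the paper's implementation (which relies on the face_recognition library's built-in HOG detector); the cell size and bin count follow the common 8-pixel, 9-bin convention.

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Sketch of the HOG descriptor's first stage: per-cell histograms
    of unsigned gradient orientations (0-180 degrees), magnitude-weighted."""
    img = image.astype(np.float64)
    # Gradients via central differences.
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    h, w = img.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180.0 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            ys, xs = cy * cell_size, cx * cell_size
            mag = magnitude[ys:ys + cell_size, xs:xs + cell_size]
            ori = orientation[ys:ys + cell_size, xs:xs + cell_size]
            bins = np.minimum((ori // bin_width).astype(int), n_bins - 1)
            for b in range(n_bins):
                hist[cy, cx, b] = mag[bins == b].sum()
    # Block normalization over overlapping 2x2 cells would follow here,
    # and the flattened descriptor feeds a linear SVM for detection.
    return hist
```

A 16 × 16 image thus yields a 2 × 2 grid of 9-bin histograms; the normalized, flattened grid is what the SVM classifies.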
Adaptive learning tailors a system’s behavior to individual users over time, using new observations to refine stored data. In this work, adaptive learning continuously updates each authorized user’s facial encodings as they interact with the system, so recognition accuracy improves with use instead of relying on a static, manually retrained dataset.
Figure 1 illustrates the system’s operational workflow, from camera initialization to final authentication, providing a structured approach to secure access control.
Figure 1 illustrates the sequential steps in the face recognition door lock system; each block represents a key process. On power-up, the system initializes and prepares for operation, with the camera at the door ready to capture. When the Raspberry Pi detects activity, it switches to camera mode and captures a frame. The captured frame is preprocessed, and the detected face is compared against the stored encodings in the system’s library. If the face matches a registered user, the system proceeds to unlock; otherwise, the door remains locked and access is denied.
The system’s main processor, the Raspberry Pi, is connected to a camera module for real-time video, a relay module, and a solenoid lock for physical access control. Each hardware component was selected for compatibility, cost, and low-power operation. Precise GPIO connections during assembly ensured reliable communication between the Raspberry Pi and its peripherals. OpenCV and DeepFace, Python 3.12.6 libraries, handle facial recognition and emotion detection. Adaptive learning dynamically updates the authorized face database, preprocessing adjusts to environmental conditions, and real-time notifications inform users of access events. The software ensures smooth, responsive interaction between hardware and software.
System stability and scalability require extensive testing and validation. The system was evaluated under various conditions, including variations in lighting, distances, and facial orientations, to assess its performance. Metrics such as detection accuracy, processing time, and emotion detection effectiveness were analyzed to ensure robustness. As highlighted in [8], implementing an embedded facial recognition system on Raspberry Pi requires balancing computational efficiency, image processing techniques, and algorithm selection to achieve optimal performance under real-world conditions. Testing insights led to iterative system modifications, further enhancing its functionality and reliability.

3.1. Hardware Implementation

The hardware implementation of the Raspberry Pi-based facial recognition door lock system ensures smooth operation and reliability. This section details the setup, including components, roles, and connections. The Raspberry Pi Model 3B serves as the central processor, handling facial recognition, emotion detection, and device connectivity. It interfaces with a camera module for real-time video input and a relay module controlling the solenoid lock for secure access.
Key components such as DC barrel jacks, microSD cards, jumper wires, and power supplies ensure stable connections. The relay module enables safe control of the high-voltage solenoid lock circuit, while the 5 V and 12 V power supplies support system reliability. The Raspberry Pi stores the operating system and scripts on a microSD card. Figure 2 presents a circuit schematic illustrating the interactions between the hardware components, demonstrating the system’s connectivity and functionality.
This diagram visually explains the flow of power and control signals within the system, offering clarity on the role and connectivity of each component. It simplifies the understanding of how the Raspberry Pi interacts with the solenoid lock through the relay module while maintaining separate power supplies for different components.

3.2. Hardware Assembly

This section illustrates the process of assembling the hardware components of the Raspberry Pi-based face recognition door lock system. The assembly integrates the Raspberry Pi, camera module, relay module, solenoid lock, and other necessary components into a functional setup. The objective is to ensure secure connections, operational reliability, and a well-organized layout for easy maintenance. The assembly proceeds as follows:
  • The preloaded microSD card containing the operating system and software is inserted into the Raspberry Pi Model 3B’s slot.
  • The Raspberry Pi Camera Module is connected to the CSI (Camera Serial Interface) port, with the metal contacts on the ribbon cable facing the CSI port for proper connectivity.
  • The Raspberry Pi is secured on a stable platform or within a protective case to prevent damage during assembly.
  • The relay module is positioned close to the Raspberry Pi for tidy and secure wiring.
  • The positive terminal of the 12 V solenoid lock is connected to the relay’s NO (Normally Open) terminal.
  • Basic GPIO scripts are run on the Raspberry Pi to test the functionality of the relay and solenoid lock, ensuring that the relay triggers correctly and the solenoid lock responds as intended.
The hardware implementation forms the backbone of this work, integrating multiple components into a cohesive and functional setup. Each component, from the Raspberry Pi Model 3B to the solenoid lock and relay module, is vital in ensuring secure and reliable access control. The carefully designed circuit connections and the logical assembly process provide a robust foundation for the system, while the power supply mechanisms and adaptable configurations further enhance its efficiency and scalability. This hardware platform sets the stage for seamless integration with the software and testing phases.
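The GPIO relay test mentioned above might look like the following sketch. The pin number (BCM 17), the hold duration, and the active-low relay behaviour are assumptions for illustration, not wiring details from the paper; `gpio` is any object exposing an RPi.GPIO-style `output(pin, level)` call so the logic can be exercised off-device.

```python
import time

def unlock_pulse(gpio, pin=17, hold_seconds=3):
    """Energize the relay (assumed active-low, as on many modules) to open
    the solenoid lock, hold it open, then re-lock the door."""
    gpio.output(pin, 0)   # relay ON  -> solenoid retracts, door unlocked
    time.sleep(hold_seconds)
    gpio.output(pin, 1)   # relay OFF -> door locked again

# On the Pi itself this would be driven by RPi.GPIO, e.g.:
#   import RPi.GPIO as GPIO
#   GPIO.setmode(GPIO.BCM)
#   GPIO.setup(17, GPIO.OUT, initial=GPIO.HIGH)
#   unlock_pulse(GPIO, pin=17)
```

Passing the GPIO object in as a parameter makes the pulse logic testable with a mock before it ever touches the real relay.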

3.3. Software Implementation

The software implementation of the Raspberry Pi-based face recognition door lock system involves an intricate combination of algorithms and processes designed for efficiency, accuracy, and adaptability. The development began with a simulation phase, where the feasibility of the face recognition algorithm was tested using a laptop and a pre-trained CNN model. This simulation provided valuable insights into the algorithm’s performance, forming the foundation for the subsequent hardware integration.
Building on this groundwork, the system was implemented on the Raspberry Pi to achieve real-time face detection, recognition, and dynamic adaptability. The system detects faces, recognizes authorized individuals, adapts to changing conditions, and notifies remote devices in real time. The following subsections outline the implementation of key features, including the facial recognition algorithm, the notification system, emotion detection, and the dynamic addition of unknown faces to the authorized list.
The software components were developed with Python, leveraging libraries such as face_recognition, cv2, DeepFace, and others. Advanced preprocessing techniques, such as lighting normalization and gamma correction, ensure reliable performance under varying environmental conditions. These components collectively create a robust and efficient system that is capable of handling diverse use cases, such as recognizing faces at different distances, adapting to dynamic lighting conditions, and processing real-time notifications on remote devices. The project utilizes several essential Python libraries for face recognition, GPIO control, and notification handling.
A preloaded dataset of photos of authorized individuals is stored on the Raspberry Pi. This dataset is processed during initialization to generate 128-dimensional encodings, which are stored in the encodings file. These encodings are later used for face matching during runtime.

3.4. Simulation and Preliminary Testing

A simulation was first conducted to test the facial recognition algorithm’s viability and real-time processing, including simulated door lock/unlock behaviour driven by the recognition results. The setup used the Haar cascade classifier for webcam face detection and a pre-trained CNN model to classify detected faces as authorized (e.g., “SEIF”) or unauthorized (“NOT SEIF”), with frames processed in real time from the laptop webcam. OpenCV and TensorFlow served as the core video-processing and model-execution libraries.
The algorithm comprised three steps. First, the Haar cascade classifier detected faces in the video stream, which were then cropped and preprocessed for recognition. Second, the pre-trained CNN classified each face as SEIF (authorized) or NOT SEIF; face images were resized to 224 × 224 pixels to meet the model’s input requirements, then normalized and expanded with a batch dimension. Third, if SEIF was recognized, the system displayed “Door is unlocking…” to simulate unlocking the door, while unknown faces produced “Door is locking…”.
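The preprocessing step of the simulation can be sketched as below. A nearest-neighbour resize stands in for `cv2.resize` so the sketch stays dependency-free; the [0, 1] scaling is an assumption about the CNN's expected input range.

```python
import numpy as np

def preprocess_face(face_bgr):
    """Resize a cropped face to 224x224, scale pixels to [0, 1], and add a
    batch dimension, matching a typical Keras CNN input of (1, 224, 224, 3)."""
    h, w = face_bgr.shape[:2]
    ys = np.arange(224) * h // 224   # nearest-neighbour row indices
    xs = np.arange(224) * w // 224   # nearest-neighbour column indices
    resized = face_bgr[ys][:, xs]
    normalized = resized.astype(np.float32) / 255.0
    return np.expand_dims(normalized, axis=0)
```

The returned tensor is what `model.predict` would receive in the simulation loop.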
During the simulation, the system successfully detected and recognized faces in real time, as illustrated in Figure 3.

3.5. Facial Recognition Algorithm

Face recognition systems deployed on edge devices, such as the Raspberry Pi, must balance computational efficiency with recognition accuracy. Traditional deep learning models, such as CNN-based approaches, offer high accuracy but are computationally expensive, making them impractical for real-time processing on low-power devices. In contrast, lightweight algorithms such as the Histogram of Oriented Gradients (HOG) have demonstrated their effectiveness in edge computing scenarios, enabling efficient face detection without requiring GPU acceleration [9].
The Histogram of Oriented Gradients (HOG) algorithm has been widely used for efficient face detection [10]. This method provides real-time performance while maintaining accuracy, making it suitable for resource-constrained environments. HOG-based detection extracts key facial features using gradient orientation patterns, ensuring robustness in different environmental conditions. Additionally, its computational simplicity allows it to outperform more complex deep learning-based approaches in terms of processing speed on embedded hardware.

3.6. System Setup for Facial Recognition

Before delving into the algorithm’s details, it is essential to understand how the system is prepared to execute facial recognition effectively:
  • Virtual environment: The facial recognition script is executed within a Python virtual environment on the Raspberry Pi. This ensures that all dependencies, such as the OpenCV, DeepFace, and face_recognition libraries, are correctly managed and isolated from the base system. During the system setup, the script initializes the camera module and loads the necessary libraries. The Raspberry Pi terminal logs provide a detailed summary of these initialization steps.
  • Dataset of authorized faces: As described in Section 3.3, the preloaded photos of authorized individuals are processed during initialization into 128-dimensional encodings stored in the encodings file and used for face matching at runtime.
  • RealVNC Viewer: RealVNC Viewer was utilized during development and testing to provide remote access to the Raspberry Pi. This tool allowed for effective debugging, script execution monitoring, and system setup adjustments.
According to Raspberry Pi Face Recognition by Adrian Rosebrock [11], while both methods utilize deep learning techniques for face recognition on the Raspberry Pi, this paper’s implementation distinguishes itself by incorporating emotion detection and adaptive learning capabilities, offering a more comprehensive analysis of facial features and expressions.

3.7. Process Overview

The face recognition process begins with face detection, where the HOG model identifies key facial landmarks such as the eyes, nose, and mouth, ensuring real-time performance on the Raspberry Pi 3B. Once a face is detected, it undergoes face encoding, converting it into a unique 128-dimensional vector that acts as a biometric fingerprint. These encodings are pre-stored in the system’s encodings_file, containing authorized individuals’ facial data. During face matching, the system compares the newly detected encoding with stored ones using Euclidean distance, recognizing a face if the distance falls below a 0.4 threshold, and granting access accordingly.
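The matching step above can be sketched as follows; the 0.4 Euclidean-distance threshold comes from the text, while the function and variable names are illustrative.

```python
import numpy as np

def match_face(known_encodings, known_names, probe, threshold=0.4):
    """Compare a probe 128-D encoding against the stored encodings.
    The nearest stored encoding wins if its Euclidean distance is
    below the threshold; otherwise the face is reported as Unknown."""
    if len(known_encodings) == 0:
        return "Unknown"
    dists = np.linalg.norm(np.asarray(known_encodings) - probe, axis=1)
    best = int(np.argmin(dists))
    return known_names[best] if dists[best] < threshold else "Unknown"
```

In the deployed system the probe encoding would come from `face_recognition.face_encodings()` on the detected face region.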
As the system processes video frames, it detects and recognizes faces in real time. Figure 4 demonstrates the real-time face recognition process, displaying the detected face along with the identified emotion and access status.
To provide a structured overview of the facial recognition process, Table 1 summarizes the key steps involved in the algorithm. This table outlines the sequential operations, starting from capturing the frame to displaying the recognition results and triggering corresponding actions.

3.8. Lighting Normalization and Gamma Correction

Lighting inconsistencies, such as low-light conditions or bright backgrounds, can negatively impact face detection and recognition accuracy. To address this, the system implements two enhancements. Contrast-Limited Adaptive Histogram Equalization (CLAHE) enhances image contrast by redistributing intensity values across the image, ensuring uniform brightness; this is particularly useful in dim or uneven lighting environments. Gamma correction adjusts the brightness dynamically, enhancing darker regions of the frame to reveal facial features. The gamma correction formula is as follows:
Image_corrected = 255 × (Image_original / 255)^γ
where
  • Image_original is the original pixel intensity;
  • γ is the gamma adjustment factor (e.g., γ = 1.2);
  • Image_corrected is the brightness-adjusted intensity.
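The gamma-correction formula translates directly into a lookup-table implementation, sketched here in NumPy. (In the deployed system it would run alongside OpenCV's CLAHE, created via `cv2.createCLAHE`, inside the lighting-adaptation step; that part is omitted here to keep the sketch dependency-free.)

```python
import numpy as np

def gamma_correct(image, gamma=1.2):
    """Apply corrected = 255 * (original / 255) ** gamma per pixel.
    A 256-entry lookup table avoids recomputing the power for every pixel.
    Note that gamma < 1 brightens dark regions, while gamma > 1 darkens
    midtones; the endpoints 0 and 255 are always preserved."""
    table = 255.0 * (np.arange(256) / 255.0) ** gamma
    return table[image.astype(np.uint8)].astype(np.uint8)
```

The lookup-table form is the standard trick for applying any per-pixel intensity curve cheaply on a low-power device.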
Lighting conditions significantly impact face detection accuracy. Figure 5 illustrates how the implemented lighting normalization technique enhances visibility and ensures consistent facial recognition even in low-light environments.

3.9. Integration into the System

The algorithm is integrated into the system via a Python script that processes each video frame captured by the Raspberry Pi Camera Module V2. It dynamically adjusts the brightness using the apply_lighting_adaptation function, which incorporates CLAHE and gamma correction. Detected faces are resized to reduce the computational load, and their locations are scaled back for precise visualization. To ensure real-time performance, the system processes resized frames at 25% of their original size during detection. Detected face locations are scaled back to their original dimensions for precise matching. Additionally, the system employs dynamic frame skipping, adjusting the number of frames processed based on CPU usage to avoid overloading the Raspberry Pi.
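The dynamic frame-skipping policy described above might look like the sketch below. The CPU thresholds and skip counts are assumptions for illustration; the paper does not specify them.

```python
def frames_to_skip(cpu_percent, base_skip=1, max_skip=6):
    """Decide how many camera frames to skip between full recognition
    passes: the busier the CPU, the sparser the recognition, so the
    Raspberry Pi is never overloaded. Thresholds are assumed values."""
    if cpu_percent < 50:
        return base_skip          # light load: process almost every frame
    if cpu_percent < 75:
        return base_skip + 2      # moderate load: skip a few frames
    return max_skip               # heavy load: recognize sparsely

# In the live loop, cpu_percent could come from psutil.cpu_percent();
# detection itself runs on frames resized to 25% of their original size,
# and face boxes are scaled back up by 4x for display.
```

Keeping the policy a pure function of the CPU reading makes it trivial to tune and test separately from the camera loop.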

3.10. Adaptive Learning and Unknown Face Addition

The Raspberry Pi-based facial recognition system’s capacity to dynamically adapt to new inputs is a major improvement. Unlike static face recognition systems, this system uses adaptive learning to update face encodings over time [12]. This feature improves recognition accuracy by learning from repeated interactions and evolving with the user. Traditional facial recognition systems employ manually updated databases that must be retrained for new users. This work uses adaptive learning to dynamically update facial encodings and recognize new users without retraining them. Incremental learning improves facial recognition accuracy by continuously revising facial encodings based on new observations [13]. This section details these features’ design, implementation, and significance, showing how adaptive learning improves usability, scalability, and reliability.
Adaptive learning: recognition improvement over time. Every interaction improves the system’s knowledge of authorized users through adaptive learning. This feature overcomes common facial recognition issues arising from modest differences in appearance due to the following:
  • Lighting: illumination changes can alter apparent facial features.
  • Aging: subtle changes in facial contours over time.
  • Dynamic factors: accessories such as glasses or caps.
The system updates its stored encoding whenever it detects a known face. During each identification cycle, the detected face’s encoding is compared against the stored ones; on a match, the stored encoding is updated with the new observation. This continual learning process keeps the stored data aligned with the user’s current appearance, capturing small facial changes over time, reducing false negatives, and improving identification accuracy. Adaptive learning thus maintains recognition accuracy without manual updates: adaptability accommodates natural changes in user appearance, while efficiency eliminates the need for retraining.

3.11. Unknown Face Detection and Dynamic Addition

The system identifies faces that do not match stored encodings as “Unknown”. This triggers a prompt to the user, allowing them to dynamically add new individuals to the list of authorized faces. This feature ensures scalability and eliminates the need for preloading datasets for every user. When an unrecognized face is detected, the system displays a real-time prompt on the Raspberry Pi terminal, requesting the user to decide whether to authorize the face. The prompt includes the following options:
  • Press ‘a’: To authorize the detected face and add it to the dataset of authorized individuals.
  • Press ‘q’: To ignore the detected face and quit the operation.
This functionality is implemented in the Python script, where the system identifies faces as “Unknown” when no match is found in the encodings_file. Upon receiving the user input to authorize the face, the system saves the corresponding 128-dimensional face encoding along with the name provided by the user. This interactive feature facilitates adaptive learning.
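The decision logic behind the 'a'/'q' prompt can be isolated as a small pure function, sketched below; the function and parameter names are illustrative rather than taken from the paper's script.

```python
def handle_unknown_face(key, encoding, name, encodings, names):
    """Handle the operator's response to an 'Unknown' face prompt:
    'a' authorizes the face, appending its 128-D encoding and name to
    the in-memory lists backing encodings_file; 'q' ignores it.
    Returns True when the face was added, False when it was ignored."""
    if key == 'a':
        encodings.append(encoding)
        names.append(name)
        return True
    if key == 'q':
        return False
    raise ValueError(f"unrecognized key: {key!r}")
```

In the live script, `key` would come from `cv2.waitKey()` (or terminal input) and the updated lists would then be persisted to disk.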

3.12. Storing New Face Encodings

Once a face is authorized, the system dynamically updates its dataset. The 128-dimensional encoding of the newly detected face, along with the name entered by the user, is stored in the encodings_file. This ensures that the face will be recognized in subsequent interactions without requiring a complete system restart or retraining process. This dynamic addition process is efficient and secure, maintaining the robustness of the face recognition system while adapting to new users.
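Persisting the updated dataset might look like the following sketch, which assumes encodings_file is a pickled dict with parallel "encodings" and "names" lists (a common convention for face_recognition projects, not a format stated in the paper).

```python
import pickle

def save_encodings(path, encodings, names):
    """Write the face database to disk so newly authorized faces
    survive a restart without any retraining."""
    with open(path, "wb") as f:
        pickle.dump({"encodings": encodings, "names": names}, f)

def load_encodings(path):
    """Read the face database back into parallel lists at startup."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data["encodings"], data["names"]
```

Saving immediately after each authorization keeps the on-disk file consistent with the in-memory lists.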

3.13. Recognizing Newly Added Faces

After adding a new face, the system immediately integrates the updated encoding into the recognition pipeline. Subsequent frames demonstrate the system’s ability to correctly identify the newly authorized individual by displaying their name and emotional state on the video stream. This step confirms the success of the adaptive learning feature and showcases the system’s ability to evolve dynamically.
Once the unknown face is authorized and added to the system, it is successfully recognized in subsequent detections, as demonstrated in Figure 6.

3.14. Integration into the Recognition Workflow

The recognition workflow naturally incorporates the adaptive learning and dynamic addition features. Detect a Face: the system identifies a face within the frame and calculates its encoding. Match Against Stored Encodings: if the encoding matches a known individual, the stored encoding for that person is revised; if no match is found, the face is classified as “Unknown”. Prompt the User: the “Unknown” classification prompts the user to authorize the face and assign it a name. Store New Encodings: the new encoding and its name are saved permanently for future identification. Dynamically incorporating additional individuals guarantees the system’s adaptability to changing user needs, and prompting the user before authorizing unidentified individuals keeps access rights fully under the owner’s control, while adaptive learning keeps the system accurate as users’ appearances evolve. Together, adaptive learning and the dynamic addition of unfamiliar faces make the facial recognition door lock system far more flexible, scalable, and easy to use: there is no need for manual retraining or dataset updates, and the system continues to perform well in real-world situations, providing a strong and user-friendly access control solution.

3.15. Emotion Detection

Emotion detection is a critical feature of the system, enhancing its capabilities beyond basic facial recognition. By analyzing facial expressions in real time, the system determines the dominant emotion displayed by a recognized or unknown individual. This functionality improves both security monitoring and user experience customization.
In terms of security, the system can flag unusual emotional states such as fear, anger, or anxiety in unrecognized individuals. The system also logs emotion data, which can be analyzed to identify potential security risks over time.
Beyond security, emotion detection enhances the overall user experience by enabling personalized responses. In a smart-home setting, future improvements could enable the system to adjust environmental factors such as lighting or music based on the detected mood of the resident. If stress is detected, calming music or dim lighting could be activated to create a more comfortable atmosphere. Emotion detection also has applications in accessibility, as it can assist individuals with disabilities by providing adaptive responses based on their emotional cues. In workplace environments, detecting emotional distress in employees entering restricted areas may help prevent security incidents or unauthorized access.
For enhanced facial recognition and emotion detection, the system integrates DeepFace, a deep learning-based facial analysis library that is capable of performing real-time facial verification and emotion classification [14]. DeepFace employs a convolutional neural network (CNN) model pre-trained on large datasets, ensuring high accuracy in recognizing user expressions such as happiness, sadness, and anger. Recent studies highlight the effectiveness of CNN-based emotion recognition models, demonstrating their ability to extract subtle facial features that contribute to accurate classification [15]. The DeepFace library provides pre-trained models that classify facial expressions into predefined emotional categories: happiness, sadness, anger, surprise, fear, disgust, and neutrality. The library was selected due to its high accuracy and efficient integration with Python, making it suitable for real-time processing on the Raspberry Pi. The system captures frames from the Raspberry Pi Camera Module V2. Once a face is detected, the bounding box of the detected face is used to isolate the facial region.
Adding an emotion recognition module lets a door lock system identify both a person’s identity and their emotional state, improving security, personalization, and adaptability. By combining AI and machine learning models with sensor data, the system can authenticate users and respond intelligently to their emotions, boosting safety and convenience. However, emotional data must be handled ethically and without compromising privacy.
The extracted face region is passed to DeepFace’s emotion recognition module. This module outputs the probabilities for each emotion and identifies the dominant emotion. The detected emotion is displayed on the live video feed, as shown in Figure 7, and sent to the connected notification system. For example, if a user is detected as “fear”, the system sends a notification stating “Detected emotion: Fear”.
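The post-processing step after DeepFace returns its per-emotion probabilities can be sketched as follows. The probability values below are illustrative only; in the running system they would come from DeepFace.analyze(frame, actions=["emotion"]):

```python
def dominant_emotion(probs):
    """Pick the highest-probability emotion from a DeepFace-style dict."""
    return max(probs, key=probs.get)

def overlay_label(name, probs):
    """Build the text drawn on the live video feed, e.g. 'SEIF - Happy'."""
    return f"{name} - {dominant_emotion(probs).capitalize()}"

# Illustrative probabilities in the shape DeepFace's emotion module
# reports; real values come from DeepFace.analyze(..., actions=["emotion"]).
probs = {"angry": 0.02, "disgust": 0.01, "fear": 0.03, "happy": 0.81,
         "sad": 0.05, "surprise": 0.04, "neutral": 0.04}

print(overlay_label("SEIF", probs))                            # SEIF - Happy
print(f"Detected emotion: {dominant_emotion(probs).capitalize()}")
```

The same dominant-emotion string feeds both the on-screen overlay and the notification message described below.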

3.16. Enhancements for Real-Time Performance

To ensure that the emotion detection module could operate efficiently on the Raspberry Pi, several optimizations were implemented. Emotion detection is computationally intensive. To balance performance and responsiveness, the system skips a configurable number of frames during processing, based on CPU usage. For example, during high CPU loads, emotion detection is performed every 10th frame instead of every frame. The bounding box containing the facial region is resized to a fixed resolution before passing it to DeepFace. This minimizes computation while preserving recognition accuracy. Detected emotions are displayed alongside the recognized name in the live video feed. For example, “SEIF—Happy” is overlaid on the bounding box of a recognized individual, as shown in Figure 8.
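The frame-skipping logic can be sketched as below. The CPU thresholds and intermediate stride are assumptions for illustration (the paper specifies only the every-10th-frame behavior under high load), and in the running system the load figure would come from a monitor such as psutil.cpu_percent():

```python
def frames_to_skip(cpu_percent):
    """Map CPU load to an emotion-detection stride (assumed thresholds)."""
    if cpu_percent >= 80:
        return 10   # heavy load: analyze every 10th frame
    if cpu_percent >= 50:
        return 5    # moderate load (illustrative middle tier)
    return 1        # light load: analyze every frame

def should_analyze(frame_index, cpu_percent):
    """Decide whether this frame goes through emotion detection."""
    return frame_index % frames_to_skip(cpu_percent) == 0

# In the running system cpu_percent would come from psutil.cpu_percent().
analyzed = [i for i in range(30) if should_analyze(i, cpu_percent=85)]
print(analyzed)  # -> [0, 10, 20]
```

The bounding-box resize mentioned above would be a separate step (e.g., an OpenCV resize of the face crop to a fixed resolution) before the crop is handed to DeepFace.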

3.17. Notification System

The notification system is essential to the facial recognition door lock system, providing real-time feedback on access events. The system sends textual and auditory notifications for recognized faces, unknown faces, and emotions observed during recognition, using a socket-based client–server communication mechanism that requires no extra hardware or mobile apps. Real-time notifications using IoT protocols improve face recognition systems; this approach logs access events and provides real-time alerts [16]. The updates offer swift feedback on facial recognition, covering successful recognition, identification of unknown faces, and detected emotions. For improved usability, Text-to-Speech (TTS) on the server (a laptop) delivers notifications in both text and audio form, and the design needs no smartphone app or cumbersome configuration, keeping it user-friendly. The Raspberry Pi client sends messages to the laptop server over a TCP socket; the server logs these alerts in the terminal and speaks them via TTS, keeping users informed of system activity whether they are nearby or remote.
Notifications are triggered when the system detects an authorized face, an unknown face, or an emotion, and each message includes the recognized person’s name and detected emotion. On the client side, the Raspberry Pi connects to the server via TCP and transmits the notification; on the server side, the notification is logged in the terminal and read aloud using TTS. The user thus receives real-time updates on system activity and can act on them (e.g., authorize an unfamiliar face) or simply stay informed. This real-time feedback improves the usability and efficiency of the face recognition door lock: the socket-based architecture assures low-latency communication, while TTS provides intuitive auditory notifications. The approach meets this paper’s simplicity and accessibility goals by requiring no extra programs or hardware, granting access to authorized users, alerting about unknown faces, and reporting observed emotions.
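A minimal sketch of this socket-based client–server mechanism is shown below, run over loopback for demonstration. The port is chosen by the OS, and the TTS step is stubbed out as a comment (an engine such as pyttsx3 could be called there); in deployment the client would connect to the laptop server’s LAN address instead:

```python
import socket
import threading

HOST = "127.0.0.1"   # loopback for this sketch; the real client uses
                     # the laptop server's LAN address
received = []        # server-side log of notifications
port_holder = []     # lets the client learn the OS-assigned port
ready = threading.Event()

def server():
    """Minimal notification server: log one message (TTS step stubbed)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, 0))                       # OS picks a free port
    port_holder.append(srv.getsockname()[1])
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    with conn:
        msg = conn.recv(1024).decode("utf-8")
        received.append(msg)
        # A TTS engine such as pyttsx3 could speak `msg` here.
    srv.close()

def notify(message):
    """Client side: the Raspberry Pi sends one notification per event."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, port_holder[0]))
        cli.sendall(message.encode("utf-8"))

t = threading.Thread(target=server)
t.start()
ready.wait()
notify("Access granted for SEIF!")
t.join()
print(received)
```

One short-lived TCP connection per event keeps the protocol simple; a long-lived connection would be an equally valid design for higher event rates.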

4. Data Presentation and Discussion of Findings

The results obtained from testing the Raspberry Pi-based face recognition door lock system are presented here, followed by an in-depth discussion of these findings. The system was evaluated under various conditions, including different lighting environments, varying distances, and diverse emotional expressions. Numerical results and visual evidence are provided to validate the system’s performance and highlight its strengths and limitations. The findings are also compared to those of existing studies, demonstrating the advancements achieved in this paper.

4.1. Data Presentation

This section provides a detailed presentation of the results obtained during the system’s operation. The data are organized into tables and figures to illustrate performance metrics such as detection times, recognition accuracy, and notification delays. Each feature of the system, including emotion detection, adaptive learning, and real-time notifications, is analyzed in detail. This structured approach highlights the system’s capabilities and areas for improvement.
The face recognition system is designed to support multiple registered users. The system processes each face independently, ensuring accurate identification and authentication for different individuals. Key aspects of multiple-user handling include the following:
Face storage and recognition:
  • Each registered user’s face encoding is stored in a dataset.
  • When a face is detected, the system compares it against all stored encodings.
Simultaneous face detection:
  • The system can recognize multiple faces in a single frame but processes them sequentially.
  • If multiple registered users are detected, the system grants access to the first recognized face.
Priority-based access control:
  • The system can be modified to assign priority levels to users (e.g., admin users vs. regular users).
  • If conflicting commands arise, predefined priority rules determine the system’s response.
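The priority rule can be sketched as follows; the role names, priority values, and tie-breaking choice are illustrative assumptions, not the paper’s exact policy:

```python
# Assumed priority map: higher number = higher priority (illustrative).
PRIORITIES = {"admin": 2, "regular": 1}
USER_ROLES = {"SEIF": "admin", "Doda": "regular", "MO": "regular"}

def select_for_access(recognized_names):
    """Among users recognized in one frame, pick whom the lock answers to.

    Falls back to first-recognized order on priority ties, matching the
    sequential processing described above."""
    if not recognized_names:
        return None
    return max(
        recognized_names,
        key=lambda n: (PRIORITIES[USER_ROLES.get(n, "regular")],
                       -recognized_names.index(n)),
    )

print(select_for_access(["Doda", "SEIF"]))  # admin outranks regular
print(select_for_access(["MO", "Doda"]))    # tie -> first recognized wins
```

Keeping the policy in a plain lookup table makes it easy to change the conflict rules without touching the recognition code.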

4.1.1. Face Recognition in Varying Conditions

We tested the facial recognition technology with various lighting conditions and distances, as shown in Figure 9 and Figure 10. The system was precise and reliable.
The system demonstrated high reliability under various conditions. In optimal lighting, it recognized the user “SEIF” with 99.5% accuracy and a detection time of 498 ms. Under low-light conditions, the CLAHE algorithm improved the visibility, maintaining 92.8% accuracy, although the detection time increased to 678 ms due to processing.
At a moderate distance, the system achieved 96.2% accuracy, with a detection time of 567 ms, while at close range, facial landmark visibility enabled a 99.5% accuracy rate in 498 ms. Even at a difficult, narrow angle, the system recognized the user with 92.4% accuracy in 610 ms, proving its adaptability.
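The lighting-normalization idea can be illustrated with the gamma-correction step alone. This sketch is dependency-free; the full pipeline would first apply CLAHE (e.g., via OpenCV’s cv2.createCLAHE) before the gamma lookup, and the gamma value here is illustrative:

```python
import numpy as np

def gamma_correct(gray, gamma=0.6):
    """Brighten an 8-bit grayscale image via a gamma lookup table.

    A gamma below 1 lifts dark midtones, which is what helps face
    detection in low-light frames."""
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[gray]

# In the real pipeline CLAHE would run first, e.g. with
# cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)); only the gamma
# step is shown so the sketch stays dependency-free.
dark = np.full((4, 4), 40, dtype=np.uint8)   # synthetic under-exposed patch
bright = gamma_correct(dark, gamma=0.6)
print(int(dark[0, 0]), int(bright[0, 0]))
```

The lookup-table form is what keeps this step cheap enough for per-frame use on the Raspberry Pi.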
Table 2 summarizes the recognition accuracy and detection times across various lighting and distance scenarios, demonstrating the system’s robustness.
The table highlights the system’s high recognition accuracy in bright light (98.5%) and at some distances (99.5%), demonstrating optimal performance in favorable conditions. In challenging environments, such as low light or dim light, the recognition accuracy decreased but remained above 90%, showing resilience. The average detection time ranged from 521.45 ms to 610.56 ms, with longer times observed under difficult conditions such as low light and greater distances.

4.1.2. Emotion Detection Results

Emotion detection was integrated into the system using the DeepFace library. Figure 11 captures the system identifying an authorized user (“Doda”) with the emotion “sad”. The system accurately labeled the user and detected the dominant emotion, showcasing the effectiveness of emotion detection under varying expressions.
A breakdown of the emotions detected during testing is presented in Table 3, providing insights into the system’s emotion recognition accuracy.
The system’s ability to categorize emotions accurately is evident from Table 3. Neutral emotions were the most frequently detected, which was expected given the controlled testing environment. However, the system also successfully recognized and categorized less common emotions, such as sadness, happiness, anger, and surprise. This ability to differentiate emotional states makes this system suitable for applications requiring emotional intelligence, such as personalized user experiences or monitoring emotional well-being.

4.1.3. Notification System Performance

The notification system’s delay in responding to events was also evaluated. Table 4 outlines the average delays for different notification events.
The notification system exhibited low latency, with delays averaging around 200 ms across all event types. This ensures real-time feedback to users, enabling timely responses to access events and emotion detections. The slight variation in delay times can be attributed to the complexity of processing different types of notifications.
The notification system plays a critical role in providing real-time updates. When an authorized user is recognized, the notification “Access granted for SEIF!” is displayed on the laptop server with a delay of approximately 195 ms, ensuring real-time feedback.
When different authorized users were detected and granted access simultaneously, the system generated multiple notifications: the log output showed entries for different users, including “MO” and “SEIF”, with real-time access-granted messages, demonstrating that the system handles multiple face detections dynamically and efficiently.
The system also notifies the user about the detected emotion, such as “neutral”, adding a layer of interactivity by providing insight into the user’s emotional state during access.
Finally, the terminal output on the Raspberry Pi displays the connection between the Raspberry Pi and the laptop server: the log messages indicate that the system successfully establishes a connection, updates face encodings for improved recognition, and grants access when a recognized face is detected, highlighting the seamless client–server communication for real-time authentication and updates.

4.1.4. Multiple Authorized Faces and Unknown Face Handling

The system was tested for its ability to detect and recognize multiple authorized users while also handling unknown faces dynamically. Figure 12 showcases the system detecting an unknown face and displaying the notification “Unknown face detected! Press ‘a’ to authorize, ‘q’ to quit”.
Figure 13 shows the system recognizing “Fede” after being successfully added to the authorized list. The system updates its recognition database instantly, allowing the newly authorized user to gain access without requiring full model retraining.
Figure 14 displays the system recognizing two authorized users, “SEIF” and “Doda”, simultaneously. The system correctly identified both individuals and their respective emotions. This feature highlights the system’s capability of detecting multiple authorized users in a single frame, ensuring group access without additional processing delays.

4.1.5. Adaptive Learning Capability

The system features an adaptive learning capability that enables users to dynamically update face encodings, thereby enhancing its recognition accuracy over time. The system’s terminal output indicates the updates to the face encodings for improved recognition. The system refines its recognition capability by updating the face encodings whenever an authorized user is detected multiple times. This feature enhances its accuracy and minimizes misidentifications in future detections.
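The paper does not specify the exact encoding-update rule, but one plausible sketch is an exponential moving average that blends each fresh sighting into the stored encoding; the blending factor below is an assumption:

```python
import numpy as np

def refresh_encoding(stored, new, alpha=0.1):
    """Blend a fresh sighting into the stored encoding (assumed EMA scheme).

    A small alpha keeps the stored template stable while letting it
    drift gradually as a user's appearance changes over time."""
    return (1.0 - alpha) * stored + alpha * np.asarray(new)

stored = np.zeros(128)      # synthetic stored template
sighting = np.ones(128)     # synthetic "new appearance" encoding
for _ in range(5):          # five consecutive detections of the user
    stored = refresh_encoding(stored, sighting, alpha=0.1)

print(round(float(stored[0]), 4))  # -> 0.4095, drifting toward the new look
```

After five sightings the template has moved 1 − 0.9⁵ ≈ 41% of the way toward the new appearance, which is the gradual adaptation behavior described above.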

4.1.6. Overall System Performance

The overall system performance was evaluated based on key metrics such as face detection time, emotion detection time, and notification delays. Table 5 summarizes the numerical results collected during the system’s operation.
The data demonstrate the system’s robust performance during testing. A total of 180 frames were processed, with 142 faces detected. Among these, 2 were recognized as authorized, while 23 faces were identified as unknown. The average face detection time of 564.43 ms highlights the system’s responsiveness, which aligns with real-time requirements. Emotion detection took an average of 327.63 ms, showing efficiency in integrating additional processing tasks. The average movement between frames (28.84 pixels) underscores the stability of detection under varying conditions.
Based on Table 5, we compared the average face detection time (564.43 ms) with the average emotion detection time (327.63 ms); the difference between these two processing tasks emphasizes the relative efficiency of emotion detection compared with face recognition.

4.1.7. Comparison with Existing Studies

Face recognition systems deployed on Raspberry Pi often face trade-offs between speed and accuracy. This study achieved an average face detection time of 564.43 ms, which is lower than in previous studies using deep learning-based models [17].
To benchmark the system, its numerical results were compared with similar studies [17,18]. Table 6 highlights the differences in detection time and accuracy between this paper and the two referenced studies.
The comparison shows that this paper achieved superior accuracy (96.7%) compared to the referenced studies. The real-time face recognition system using Raspberry Pi 4 achieved an accuracy of 94% [18]. The LighterFace model had an accuracy of 90.6%, lower than both this paper and the Raspberry Pi 4 study, but reported improvements in speed compared to YOLOv5. For the face detection speed, as mentioned, our paper achieved 564.43 ms, while the LighterFace model achieved a speed of 1543 ms [18].

4.1.8. System Termination and Resource Management

Efficiently managing system resources is critical for embedded applications. The system is designed to properly close all running threads and clean up resources upon termination to prevent memory leaks or hardware issues.
The facial recognition system’s shutdown terminal output confirms this behavior: before quitting, the system stops threads, closes the notification connection, and frees resources. This minimizes CPU and memory utilization and ensures a smooth shutdown. A proper shutdown procedure enhances stability and reliability, making the system suitable for continuous operation in real-world conditions.

4.2. Discussion of Results

In this section, we interpret the results of the Raspberry Pi-based face recognition door lock system, critically examine the study, compare its findings with the literature, and address limitations and future research. The discussion centers on facial recognition accuracy, processing speed, adaptability, and real-time notifications.

4.2.1. Interpretation of Results

The system achieved 96.7% accuracy, surpassing [17] (94%) and [18] (90.6%). It performed well under different lighting, distances, and angles, with CLAHE improving low-light recognition. Face detection averaged 564.43 ms, supporting real-time applications, while emotion detection at 327.63 ms enables added functionality without delay. Unlike prior models limited to single-user recognition, the system can identify multiple authorized users simultaneously, making it suitable for offices and shared spaces. The real-time notification system enhances security by alerting administrators to unauthorized access attempts.

4.2.2. Justification of Approach

The Raspberry Pi 3B was chosen for its low power consumption and real-time processing. OpenCV and deep learning optimize its efficiency and accuracy. Adaptive learning allows for dynamic user authorization without retraining, reducing the administrative overhead. Emotion detection adds value for security monitoring and automation.
The system has limitations, including processing constraints under high-traffic conditions, potential latency with large databases, and reliance on stable network connectivity for real-time notifications. Future improvements could include hardware upgrades like the Raspberry Pi 4 or Google’s Coral Edge TPU (Google, Mountain View, CA, USA), infrared-based recognition for better lighting adaptation, and optimized database indexing or cloud storage for scalability.

4.2.3. Impact on Policy and Practice

Face recognition raises privacy and security concerns, including potential misuse. Studies suggest on-device processing and differential privacy techniques to mitigate risks [19]. The system reduces reliance on key-based access, preventing credential loss or theft. Dynamic user management improves accessibility in workplaces with frequent personnel changes. Future deployments should balance on-device storage with cloud-based recognition for scalability. Integration with IoT-based security frameworks would allow for remote monitoring and control, enhancing smart buildings’ security. Organizations must ensure compliance with data protection laws and encryption to secure biometric data.

5. Conclusions

This study developed a Raspberry Pi-based face recognition door lock system integrating adaptive learning, real-time emotion detection, and IoT-based notifications. The system achieved high facial recognition accuracy using the HOG algorithm, with lighting normalization and gamma correction enhancing its performance under varying conditions. Existing models in the literature suffer from reliability problems in face recognition; the proposed model combines the HOG and CNN methods to reduce complexity and was developed at lower cost and higher efficiency than existing alternatives. DeepFace enables real-time emotion analysis, while real-time notifications improve security by alerting authorized personnel of access attempts. Adaptive learning allows for the seamless addition of new faces without retraining, ensuring scalability and efficiency.
Despite its effectiveness, the system has limitations, including computational constraints of the Raspberry Pi 3B and sensitivity to extreme lighting. Future work should explore hardware upgrades, cloud-based processing, and advanced deep learning models to enhance its accuracy and responsiveness. This research contributes to biometric security, providing a cost-effective and adaptable solution for secure access control.

Author Contributions

Conceptualization, S.S.F.A.E. and S.C.P.; methodology, S.S.F.A.E.; software, S.S.F.A.E.; validation, S.S.F.A.E., S.C.P., and L.C.L.; formal analysis, S.C.P.; investigation, S.C.P.; resources, L.C.L.; data curation, S.S.F.A.E.; writing—original draft preparation, S.S.F.A.E.; writing—review and editing, S.S.F.A.E.; visualization, S.C.P.; supervision, S.C.P.; project administration, L.C.L.; funding acquisition, S.C.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Informed Consent Statement

Informed consent for publication was obtained from all identifiable human participants.

Data Availability Statement

No data are associated with this article.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this manuscript. In addition, ethical issues, including plagiarism, informed consent, misconduct, data fabrication and/or falsification, double publication and/or submission, and redundancies, have been completely observed by the authors.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial intelligence
CLAHE: Contrast-Limited Adaptive Histogram Equalization
CNN: Convolutional neural network
CPU: Central Processing Unit
CSI: Camera Serial Interface
DC: Direct Current
GPIO: General-Purpose Input/Output
GPU: Graphics Processing Unit
HDR: High Dynamic Range
HOG: Histogram of Oriented Gradients
NO: Normally Open
PCA: Principal Component Analysis
RFID: Radio-Frequency Identification
TCP: Transmission Control Protocol
TPU: Tensor Processing Unit
TTS: Text-to-Speech
URL: Uniform Resource Locator

References

  1. Gururaj, H.L.; Soundarya, B.C.; Priya, S.; Shreyas, J.; Flammini, F. A Comprehensive Review of Face Recognition Techniques, Trends, and Challenges. IEEE Access 2024, 12, 107903–107926. [Google Scholar] [CrossRef]
  2. Hasan, N.D.; Abdulazeez, A.M. Face Recognition Based on Deep Learning: A Comprehensive Review. Indones. J. Comput. Sci. 2024, 13, 3779–3797. [Google Scholar]
  3. Alshar’e, M.; Nasar, M.R.A.; Kumar, R.; Sharma, M.; Vir, D.; Tripathi, V. A Face Recognition Method In Machine Learning For Enhancing Security In Smart Home. In Proceedings of the 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE), Greater Noida, India, 28–29 April 2022; pp. 1081–1086. [Google Scholar] [CrossRef]
  4. Jha, A.; Bulbule, R.; Nagrale, N.; Belambe, T. Raspberry Pi-Powered Door Lock With Facial Recognition. In Proceedings of the 2024 IEEE International Students’ Conference on Electrical, Electronics and Computer Science (SCEECS), Bhopal, India, 24–25 February 2024; pp. 1–5. [Google Scholar] [CrossRef]
  5. Singh, A.D.; Jangra, B.S.; Singh, R. Face Recognition Door Lock System Using Raspberry Pi. Int. J. Res. Appl. Sci. Eng. Technol. (IJRASET) 2022, 10, 1733–1735. [Google Scholar] [CrossRef]
  6. Padhan, D.G.; Divya, M.; Varma, S.N.; Manasa, S.; C, V.; Pakkiraiah, B. Home Security System Based On Facial Recognition. In Proceedings of the Department of Electrical and Electronics Engineering, Bhubaneswar, India, 9–12 August 2023. [Google Scholar]
  7. Nkem, N.F. Face Recognition Door Lock System Using Raspberry Pi. Glob. Sci. J. (GSJ) 2022, 10, 1390–1394. [Google Scholar]
  8. Pecolt, S.; Błaz, A.; Królikowski, T.; Maciejewski, I.; Gierula, K.; Glowinski, S. Personal Identification Using Embedded Raspberry Pi-Based Face Recognition Systems. Appl. Sci. 2025, 15, 887. [Google Scholar] [CrossRef]
  9. George, A.; Ecabert, C.; Shahreza, H.O.; Kotwal, K.; Marcel, S. EdgeFace: Efficient Face Recognition Model for Edge Devices. IEEE Trans. Biom. Behav. Identity Sci. 2024, 6, 158–168. [Google Scholar] [CrossRef]
  10. Chaiyarab, L.; Mopung, C.; Charoenpong, T. Authentication System By Using Hog Face Recognition Technique And Web-Based For Medical Dispenser Machine. In Proceedings of the 4th IEEE International Conference on Knowledge Innovation and Invention (ICKEI), Taichung, Taiwan, 23–25 July 2021; pp. 97–100. [Google Scholar] [CrossRef]
  11. Rosebrock, A. Raspberry Pi Face Recognition. Available online: https://pyimagesearch.com/2018/06/25/raspberry-pi-face-recognition/ (accessed on 25 June 2018).
  12. Rana, M.S.; Fattah, S.A.; Uddin, S.; Rashid, R.U.; Noman, R.M.; Quasem, F.B. Real-Time Deep Learning Based Face Recognition System Using Raspberry Pi. In Proceedings of the 26th International Conference on Computer and Information Technology (ICCIT), Cox’s Bazar, Bangladesh, 13–15 December 2023; pp. 1–6. [Google Scholar] [CrossRef]
  13. Madhavan, S.; Kumar, N. Incremental Methods in Face Recognition: A Survey. Artif. Intell. Rev. 2021, 54, 253–303. [Google Scholar] [CrossRef]
  14. Selvaganesan, J.; Sudharani, B.; Shekhar, S.N.C.; Vaishnavi, K.; Priyadarsini, K.; Raju, K.S.; Rao, T.S. Enhancing Face Recognition Performance: A Comprehensive Evaluation of Deep Learning Models and a Novel Ensemble Approach with Hyperparameter Tuning. Soft Comput. 2024, 28, 12399–12424. [Google Scholar] [CrossRef]
  15. Li, S.; Deng, W. Deep Facial Expression Recognition: A Survey. IEEE Trans. Affect. Comput. 2022, 13, 1195–1215. [Google Scholar] [CrossRef]
  16. Prasad, H.H. IoT-Based Door Access Control Using Face Recognition. Int. Res. J. Eng. Technol. (IRJET) 2019, 6, 1222–1225. [Google Scholar]
  17. Eddine, B.A.; Zohra, C.F. Real-Time Face Recognition System Using Raspberry Pi 4. In Proceedings of the 3rd International Conference on Advanced Electrical Engineering (ICAEE), Sidi-Bel-Abbes, Algeria, 5–7 November 2024; pp. 1–6. [Google Scholar] [CrossRef]
  18. Shi, Y.; Zhang, H.; Guo, W.; Zhou, M.; Li, S.; Li, J.; Ding, Y. LighterFace Model for Community Face Detection and Recognition. Information 2024, 15, 215. [Google Scholar] [CrossRef]
  19. Chamikara, M.A.P.; Bertok, P.; Khalil, I.; Liu, D.; Camtepe, S. Privacy Preserving Face Recognition Utilizing Differential Privacy. Comput. Secur. 2020, 97, 101951. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the system.
Figure 2. Circuit diagram illustrating the hardware connections for the face recognition door lock system.
Figure 3. Face detection and recognition during simulation (https://id.wikipedia.org/wiki/Jude_Bellingham, accessed on 1 February 2025 (for the right picture)).
Figure 4. Face detection and recognition in real time.
Figure 5. Effect of lighting normalization on face detection.
Figure 6. Successful recognition of newly authorized face (https://id.wikipedia.org/wiki/Mohamed_Salah, accessed on 1 February 2025).
Figure 7. Real-time emotion detection display.
Figure 8. Detected emotion displayed in real-time video feed.
Figure 9. Face recognition in different lighting environments.
Figure 10. Face recognition at different distances and angles.
Figure 11. Detection of a “Sad” emotion.
Figure 12. Detection of an unknown face (https://id.wikipedia.org/wiki/Federico_Valverde, accessed on 1 February 2025).
Figure 13. Successful recognition of a newly authorized user.
Figure 14. Detection of multiple authorized users.
Table 1. Overview of the facial recognition algorithm.
Step | Description
Capture Frame | Captures a video frame using the Raspberry Pi camera.
Enhance Lighting | Applies CLAHE and gamma correction to normalize brightness.
Detect Faces | Identifies facial regions using the HOG-based detector.
Encode Faces | Generates a 128-dimensional encoding for detected faces.
Compare with Encodings | Matches face encodings with the stored dataset using Euclidean distance.
Output Result | Displays recognition results and triggers actions (e.g., notifications).
Table 2. Performance under varying lighting and distance.
Test Condition | Recognition Accuracy (%) | Average Detection Time (ms)
Bright Light | 98.5% | 521.45
Low Light | 92.8% | 601.23
Dim Light | 90.3% | 678.31
Some Distance | 99.5% | 498.20
Medium Distance | 96.2% | 567.43
Narrow Angle | 92.4% | 610.56
Table 3. Detected emotions (counts and percentages).
Emotion | Count | Percentage
Neutral | 13 | 9.15%
Sad | 7 | 4.93%
Angry | 2 | 1.41%
Surprise | 1 | 0.70%
Happy | 3 | 2.11%
Fear | 1 | 0.70%
Table 4. Notification event delays.
Notification Event | Average Delay (ms)
Authorized Face Detected | 195
Unknown Face Detected | 210
Emotion Detected | 198
Table 5. Summary of numerical results.
Metric | Value
Total Frames Processed | 180
Total Faces Detected | 142
Unique Authorized Faces | 2
Unique Unknown Faces | 23
Average Face Detection Time | 564.43 ms
Average Emotion Detection Time | 327.63 ms
Average Movement Between Frames | 28.84 pixels
Table 6. Comparison with existing studies.
Study | Accuracy (%) (Average)
This Paper | 96.7%
Study [17] | 94%
Study [18] | 90.6%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Elnozahy, S.S.F.A.; Pari, S.C.; Liang, L.C. Raspberry Pi-Based Face Recognition Door Lock System. IoT 2025, 6, 31. https://doi.org/10.3390/iot6020031