Search Results (443)

Search Parameters:
Keywords = camera security

22 pages, 554 KiB  
Systematic Review
Smart Homes: A Meta-Study on Sense of Security and Home Automation
by Carlos M. Torres-Hernandez, Mariano Garduño-Aparicio and Juvenal Rodriguez-Resendiz
Technologies 2025, 13(8), 320; https://doi.org/10.3390/technologies13080320 - 30 Jul 2025
Viewed by 141
Abstract
This review examines advancements in smart home security through the integration of home automation technologies. Various security systems, including surveillance cameras, smart locks, and motion sensors, are analyzed, highlighting their effectiveness in enhancing home security. These systems enable users to monitor and control their homes in real-time, providing an additional layer of security. The document also examines how these security systems can enhance the quality of life for users by providing greater convenience and control over their domestic environment. The ability to receive instant alerts and access video recordings from anywhere allows users to respond quickly to unexpected situations, thereby increasing their sense of security and well-being. Additionally, the challenges and future trends in this field are addressed, emphasizing the importance of designing solutions that are intuitive and easy to use. As technology continues to evolve, it is crucial for developers and manufacturers to focus on creating products that seamlessly integrate into users’ daily lives, facilitating their adoption and use. This comprehensive state-of-the-art review, based on the Scopus database, provides a detailed overview of the current status and future potential of smart home security systems. It highlights how ongoing innovation in this field can lead to the development of more advanced and efficient solutions that not only protect homes but also enhance the overall user experience.
(This article belongs to the Special Issue Smart Systems (SmaSys2024))

37 pages, 55522 KiB  
Article
EPCNet: Implementing an ‘Artificial Fovea’ for More Efficient Monitoring Using the Sensor Fusion of an Event-Based and a Frame-Based Camera
by Orla Sealy Phelan, Dara Molloy, Roshan George, Edward Jones, Martin Glavin and Brian Deegan
Sensors 2025, 25(15), 4540; https://doi.org/10.3390/s25154540 - 22 Jul 2025
Viewed by 220
Abstract
Efficient object detection is crucial to real-time monitoring applications such as autonomous driving or security systems. Modern RGB cameras can produce high-resolution images for accurate object detection. However, increased resolution results in increased network latency and power consumption. To minimise this latency, Convolutional Neural Networks (CNNs) often have a resolution limitation, requiring images to be down-sampled before inference, causing significant information loss. Event-based cameras are neuromorphic vision sensors with high temporal resolution, low power consumption, and high dynamic range, making them preferable to regular RGB cameras in many situations. This project proposes the fusion of an event-based camera with an RGB camera to mitigate the trade-off between temporal resolution and accuracy, while minimising power consumption. The cameras are calibrated to create a multi-modal stereo vision system where pixel coordinates can be projected between the event and RGB camera image planes. This calibration is used to project bounding boxes detected by clustering of events into the RGB image plane, thereby cropping each RGB frame instead of down-sampling to meet the requirements of the CNN. Using the Common Objects in Context (COCO) dataset evaluator, the average precision (AP) for the bicycle class in RGB scenes improved from 21.08 to 57.38. Additionally, AP increased across all classes from 37.93 to 46.89. To reduce system latency, a novel object detection approach is proposed where the event camera acts as a region proposal network, and a classification algorithm is run on the proposed regions. This achieved a 78% improvement over baseline.
(This article belongs to the Section Sensing and Imaging)
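
The 'artificial fovea' idea above, in which bounding boxes found by clustering events are projected into the RGB image plane and used to crop full-resolution patches rather than down-sampling the whole frame, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the projection is reduced to a single homography assumed to come from a prior stereo calibration, and all names are hypothetical.

```python
import numpy as np

def project_bbox(bbox_event, H):
    """Map an event-camera bounding box (x1, y1, x2, y2) into the RGB
    image plane with a 3x3 homography H from a prior stereo calibration."""
    x1, y1, x2, y2 = bbox_event
    corners = np.array([[x1, y1, 1], [x2, y1, 1], [x1, y2, 1], [x2, y2, 1]], dtype=float).T
    mapped = H @ corners
    mapped = mapped[:2] / mapped[2]            # dehomogenise
    return (mapped[0].min(), mapped[1].min(), mapped[0].max(), mapped[1].max())

def crop_roi(rgb_frame, bbox_rgb, pad=16):
    """Crop the RGB frame around the projected box (with padding) so the
    detector sees a full-resolution patch instead of a down-sampled frame."""
    h, w = rgb_frame.shape[:2]
    x1, y1, x2, y2 = bbox_rgb
    x1 = max(int(x1) - pad, 0); y1 = max(int(y1) - pad, 0)
    x2 = min(int(x2) + pad, w); y2 = min(int(y2) + pad, h)
    return rgb_frame[y1:y2, x1:x2]

# Toy usage: a 1080p frame, an event-plane box, and an identity "calibration".
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
roi = crop_roi(frame, project_bbox((300, 200, 420, 360), np.eye(3)))
print(roi.shape)   # full-resolution patch passed to the CNN
```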

30 pages, 10173 KiB  
Article
Integrated Robust Optimization for Lightweight Transformer Models in Low-Resource Scenarios
by Hui Huang, Hengyu Zhang, Yusen Wang, Haibin Liu, Xiaojie Chen, Yiling Chen and Yuan Liang
Symmetry 2025, 17(7), 1162; https://doi.org/10.3390/sym17071162 - 21 Jul 2025
Viewed by 334
Abstract
With the rapid proliferation of artificial intelligence (AI) applications, an increasing number of edge devices—such as smartphones, cameras, and embedded controllers—are being tasked with performing AI-based inference. Due to constraints in storage capacity, computational power, and network connectivity, these devices are often categorized as operating in resource-constrained environments. In such scenarios, deploying powerful Transformer-based models like ChatGPT and Vision Transformers is highly impractical because of their large parameter sizes and intensive computational requirements. While lightweight Transformer models, such as MobileViT, offer a promising solution to meet storage and computational limitations, their robustness remains insufficient. This poses a significant security risk for AI applications, particularly in critical edge environments. To address this challenge, our research focuses on enhancing the robustness of lightweight Transformer models under resource-constrained conditions. First, we propose a comprehensive robustness evaluation framework tailored for lightweight Transformer inference. This framework assesses model robustness across three key dimensions: noise robustness, distributional robustness, and adversarial robustness. It further investigates how model size and hardware limitations affect robustness, thereby providing valuable insights for robustness-aware model design. Second, we introduce a novel adversarial robustness enhancement strategy that integrates lightweight modeling techniques. This approach leverages methods such as gradient clipping and layer-wise unfreezing, as well as decision boundary optimization techniques like TRADES and SMART. Together, these strategies effectively address challenges related to training instability and decision boundary smoothness, significantly improving model robustness. Finally, we deploy the robust lightweight Transformer models in real-world resource-constrained environments and empirically validate their inference robustness. The results confirm the effectiveness of our proposed methods in enhancing the robustness and reliability of lightweight Transformers for edge AI applications.
(This article belongs to the Section Mathematics)
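
One of the decision-boundary optimization techniques named in the abstract, TRADES, trades natural accuracy against a KL term that keeps predictions on adversarially perturbed inputs close to the clean predictions. The sketch below is a generic PyTorch illustration of that loss (the inner PGD step that would produce x_adv is replaced by a random stand-in), together with the gradient clipping the abstract mentions; it is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def trades_loss(model, x, x_adv, y, beta=6.0):
    """TRADES objective: cross-entropy on clean inputs plus a KL term that
    pulls predictions on adversarial inputs towards the clean predictions."""
    logits_nat = model(x)
    logits_adv = model(x_adv)                      # x_adv would come from an inner PGD loop
    natural = F.cross_entropy(logits_nat, y)
    robust = F.kl_div(F.log_softmax(logits_adv, dim=1),
                      F.softmax(logits_nat, dim=1),
                      reduction="batchmean")
    return natural + beta * robust

# Toy usage with a stand-in model and a random perturbation in place of PGD.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.randn(8, 3, 32, 32)
x_adv = x + 8 / 255 * torch.randn_like(x).sign()
y = torch.randint(0, 10, (8,))
loss = trades_loss(model, x, x_adv, y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping, as in the abstract
```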

25 pages, 624 KiB  
Article
Development of a Specialized Telemedicine Protocol for Cognitive Disorders: The TeleCogNition Project in Greece
by Efthalia Angelopoulou, Ioannis Stamelos, Evangelia Smaragdaki, Kalliopi Vourou, Evangelia Stanitsa, Dionysia Kontaxopoulou, Christos Koros, John Papatriantafyllou, Vasiliki Zilidou, Evangelia Romanopoulou, Efstratia-Maria Georgopoulou, Paraskevi Sakka, Haralampos Karanikas, Leonidas Stefanis, Panagiotis Bamidis and Sokratis Papageorgiou
Geriatrics 2025, 10(4), 94; https://doi.org/10.3390/geriatrics10040094 - 16 Jul 2025
Viewed by 1007
Abstract
Background/Objectives: Access to specialized care for patients with cognitive impairment in remote areas is often limited. Despite the increasing adoption of telemedicine, standardized guidelines have not yet been specified. This study aimed to develop a comprehensive protocol for the specialized neurological, neuropsychological, and neuropsychiatric assessment of patients with cognitive disorders in remote areas through telemedicine. Methods: We analyzed data from (i) a comprehensive literature review of the existing recommendations, reliability studies, and telemedicine models for cognitive disorders, (ii) insights from a three-year experience of a specialized telemedicine outpatient clinic for cognitive movement disorders in Greece, and (iii) suggestions from dementia specialists experienced in telemedicine (neurologists, neuropsychologists, psychiatrists) who took part in three focus groups. Finally, a critical synthesis of the findings was performed. Results: The final protocol included: technical and organizational requirements (e.g., a high-resolution screen and a camera with zoom, room dimensions adequate for gait assessment, a noise-canceling microphone); medical history; neurological, neuropsychiatric, and neuropsychological assessment adapted to videoconferencing; ethical–legal aspects (e.g., data security, privacy, informed consent); clinician–patient interaction (e.g., empathy, eye contact); diagnostic work-up; linkage to other services (e.g., tele-psychoeducation, caregiver support); and instructions for treatment and follow-up. Conclusions: This protocol is expected to serve as an example of good clinical practice and a source for official telemedicine guidelines for cognitive disorders. Ultimate outcomes include potentially enhanced access to specialized care, minimized financial and logistical costs, and the provision of a standardized, effective model for remote diagnosis, treatment, and follow-up. This model could be applied not only in Greece, but also in other countries with similar healthcare systems and populations living in remote, difficult-to-access areas.

10 pages, 4530 KiB  
Article
A Switchable-Mode Full-Color Imaging System with Wide Field of View for All Time Periods
by Shubin Liu, Linwei Guo, Kai Hu and Chunbo Zou
Photonics 2025, 12(7), 689; https://doi.org/10.3390/photonics12070689 - 8 Jul 2025
Viewed by 255
Abstract
Continuous, single-mode imaging systems fail to deliver true-color high-resolution imagery around the clock under extreme lighting. High-fidelity color and signal-to-noise ratio imaging across the full day–night cycle remains a critical challenge for surveillance, navigation, and environmental monitoring. We present a competitive dual-mode imaging platform that integrates a 155 mm f/6 telephoto daytime camera with a 52 mm f/1.5 large-aperture low-light full-color night-vision camera into a single, co-registered 26 cm housing. By employing a sixth-order aspheric surface to reduce the element count and weight, our system achieves near-diffraction-limited MTF (>0.5 at 90.9 lp/mm) in daylight and sub-pixel RMS blur < 7 μm at 38.5 lp/mm under low-light conditions. Field validation at 0.0009 lux confirms high-SNR, full-color capture from bright noon to the darkest nights, enabling seamless switching between long-range, high-resolution surveillance and sensitive, low-light color imaging. This compact, robust design promises to elevate applications in security monitoring, autonomous navigation, wildlife observation, and disaster response by providing uninterrupted, color-faithful vision in all lighting regimes.
(This article belongs to the Special Issue Research on Optical Materials and Components for 3D Displays)
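
As a back-of-the-envelope check on the 'near-diffraction-limited' claim, the incoherent diffraction MTF of an ideal f/6 aperture can be compared with the reported >0.5 at 90.9 lp/mm. The 550 nm wavelength is an assumption for mid-visible light, and the snippet is textbook optics rather than the authors' analysis.

```python
import math

def diffraction_mtf(freq_lp_mm, f_number, wavelength_mm=550e-6):
    """Incoherent diffraction-limited MTF of an ideal circular aperture."""
    cutoff = 1.0 / (wavelength_mm * f_number)      # cutoff spatial frequency, cycles/mm
    x = freq_lp_mm / cutoff
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

# f/6 telephoto channel at the reported 90.9 lp/mm:
print(round(diffraction_mtf(90.9, 6.0), 2))   # ~0.62, so a measured MTF > 0.5 is close to the limit
```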

24 pages, 589 KiB  
Article
FaceCloseup: Enhancing Mobile Facial Authentication with Perspective Distortion-Based Liveness Detection
by Yingjiu Li, Yan Li and Zilong Wang
Computers 2025, 14(7), 254; https://doi.org/10.3390/computers14070254 - 27 Jun 2025
Viewed by 612
Abstract
Facial authentication has gained widespread adoption as a biometric authentication method, offering a convenient alternative to traditional password-based systems, particularly on mobile devices equipped with front-facing cameras. While this technology enhances usability and security by eliminating password management, it remains highly susceptible to spoofing attacks. Adversaries can exploit facial recognition systems using pre-recorded photos, videos, or even sophisticated 3D models of victims’ faces to bypass authentication mechanisms. The increasing availability of personal images on social media further amplifies this risk, making robust anti-spoofing mechanisms essential for secure facial authentication. To address these challenges, we introduce FaceCloseup, a novel liveness detection technique that strengthens facial authentication by leveraging perspective distortion inherent in close-up shots of real, 3D faces. Instead of relying on additional sensors or user-interactive gestures, FaceCloseup passively analyzes facial distortions in video frames captured by a mobile device’s camera, improving security without compromising user experience. FaceCloseup effectively distinguishes live faces from spoofed attacks by identifying perspective-based distortions across different facial regions. The system achieves a 99.48% accuracy in detecting common spoofing methods—including photo, video, and 3D model-based attacks—and demonstrates 98.44% accuracy in differentiating between individual users. By operating entirely on-device, FaceCloseup eliminates the need for cloud-based processing, reducing privacy concerns and potential latency in authentication. Its reliance on natural device movement ensures a seamless authentication experience while maintaining robust security.
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
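
The liveness cue FaceCloseup exploits can be illustrated with a pinhole-projection toy example: on a genuine 3D face the nose sits closer to the camera than the eyes, so the ratio of projected nose-to-eye and eye-to-eye distances shifts as the phone moves in for a close-up, while a flat photo keeps the ratio constant. The geometry, landmark coordinates, and distances below are invented for illustration; this is not the paper's detection pipeline.

```python
import numpy as np

def project(points_3d, cam_z, f=800.0):
    """Pinhole projection of 3D points (x, y, z in cm) for a camera at distance cam_z."""
    z = points_3d[:, 2] + cam_z
    return f * points_3d[:, :2] / z[:, None]

# Toy landmarks (cm): left eye, right eye, nose tip; the nose protrudes 2.5 cm on a real face.
real_face = np.array([[-3.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, -3.0, -2.5]])
flat_photo = real_face.copy(); flat_photo[:, 2] = 0.0            # a printed photo has no depth

def ratio(landmarks_2d):
    eye_dist = np.linalg.norm(landmarks_2d[0] - landmarks_2d[1])
    nose_dist = np.linalg.norm((landmarks_2d[0] + landmarks_2d[1]) / 2 - landmarks_2d[2])
    return nose_dist / eye_dist

for face, label in [(real_face, "real 3D face"), (flat_photo, "flat photo")]:
    far, close = ratio(project(face, 50.0)), ratio(project(face, 15.0))
    print(f"{label}: ratio change {abs(close - far):.3f}")   # the real face changes, the photo does not
```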

21 pages, 1370 KiB  
Article
Joint Data Hiding and Partial Encryption of Compressive Sensed Streams
by Cristina-Elena Popa, Constantin-Cristian Damian and Daniela Coltuc
Information 2025, 16(7), 513; https://doi.org/10.3390/info16070513 - 20 Jun 2025
Viewed by 240
Abstract
This paper proposes a method to secure Compressive Sensing (CS) streams. It involves protecting part of the measurements with a secret key and inserting code into the remaining measurements. The secret key is generated via a cryptographically secure pseudorandom number generator (CSPRNG) and XORed with the measurements to be inserted. For insertion, we use a reversible data hiding (RDH) scheme, which is a prediction error expansion algorithm modified to match the statistics of CS measurements. The reconstruction from the embedded stream results in a visibly distorted image. The image distortion is controlled by the number of embedded levels. In our tests, embedding on 10 levels results in ≈18 dB distortion for images of 256×256 pixels reconstructed with the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). A particularity of the presented method is on-the-fly insertion, which makes it appropriate for the sequential acquisition of measurements with a single-pixel camera. On-the-fly insertion avoids the buffering of CS measurements for the subsequent standard encryption and generation of a thumbnail image.
(This article belongs to the Section Information Theory and Methodology)
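
The measurement-protection step described above, XORing part of the CS measurement stream with a keystream from a cryptographically secure generator, can be sketched as follows. The 16-bit quantisation and the use of SHAKE-256 as the keystream source are assumptions made for illustration, and the reversible prediction-error-expansion embedding is not shown.

```python
import hashlib
import secrets
import numpy as np

def keystream(key: bytes, nonce: bytes, n_bytes: int) -> bytes:
    """Derive a deterministic keystream from key+nonce with SHAKE-256 (an XOF);
    this stands in for the paper's CSPRNG for illustration purposes."""
    return hashlib.shake_256(key + nonce).digest(n_bytes)

def protect_measurements(y: np.ndarray, key: bytes, nonce: bytes) -> np.ndarray:
    """XOR a block of quantised CS measurements (uint16 here, by assumption)
    with the keystream; applying the same function again undoes it."""
    ks = np.frombuffer(keystream(key, nonce, 2 * y.size), dtype=np.uint16)
    return y ^ ks

key, nonce = secrets.token_bytes(32), secrets.token_bytes(16)
y = np.random.default_rng(0).integers(0, 2**16, size=64, dtype=np.uint16)  # toy measurements
protected = protect_measurements(y, key, nonce)
assert np.array_equal(protect_measurements(protected, key, nonce), y)      # XOR is its own inverse
```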

21 pages, 2333 KiB  
Article
Human-Centric Depth Estimation: A Hybrid Approach with Minimal Data
by Yuhyun Kim, Heejin Ahn, Taeseop Kim, Byungtae Ahn and Dong-Geol Choi
Electronics 2025, 14(11), 2283; https://doi.org/10.3390/electronics14112283 - 4 Jun 2025
Viewed by 670
Abstract
This study presents a novel system for accurate camera-to-person distance estimation in CCTV environments. To address the limitations of existing approaches—which often require extensive training data and lack object-level precision—we propose a hybrid framework that integrates SAM’s zero-shot segmentation with monocular depth estimation. Our method isolates human subjects from complex backgrounds and incorporates Kernel Density Estimation (KDE), log-space learning, and linear residual blocks to improve prediction accuracy. This approach is designed to resolve the non-linear mapping between visual features and metric distances. Evaluations on a custom dataset demonstrate a mean absolute error (MAE) of 0.65 m on 1612 test images, using only 30 training samples. Notably, the use of SAM for fine-grained segmentation significantly outperforms conventional bounding box methods, reducing the MAE from 0.82 m to 0.65 m. The proposed system offers immediate applicability to security surveillance and disaster response scenarios, with its minimal data requirements enhancing its practical deployability.
(This article belongs to the Collection Computer Vision and Pattern Recognition Techniques)
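
Two of the ingredients named above, log-space learning of metric distance and linear residual blocks, can be sketched generically in PyTorch. The feature dimension, layer sizes, and inputs are placeholders rather than the authors' configuration; the SAM segmentation and monocular depth stages that would produce the per-person features are omitted.

```python
import torch
import torch.nn as nn

class LinearResidualBlock(nn.Module):
    """Fully connected block with a skip connection: y = x + MLP(x)."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.body(x)

class LogDepthRegressor(nn.Module):
    """Predicts log-distance from per-person features (e.g. pooled depth-map
    statistics inside a segmentation mask); training in log space compresses
    the long-tailed range of camera-to-person distances."""
    def __init__(self, in_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 LinearResidualBlock(hidden), nn.Linear(hidden, 1))

    def forward(self, feats):
        return self.net(feats).squeeze(-1)          # predicted log(distance in metres)

model = LogDepthRegressor()
feats = torch.randn(4, 32)                          # toy per-person feature vectors
target_m = torch.tensor([3.2, 7.5, 12.0, 4.8])      # ground-truth distances in metres
loss = nn.functional.l1_loss(model(feats), target_m.log())   # MAE in log space
loss.backward()
print(model(feats).exp())                           # back to metres at inference time
```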

17 pages, 661 KiB  
Systematic Review
Security Challenges for Users of Extensible Smart Home Hubs: A Systematic Literature Review
by Tobias Rødahl Thingnes and Per Håkon Meland
Future Internet 2025, 17(6), 238; https://doi.org/10.3390/fi17060238 - 28 May 2025
Viewed by 371
Abstract
Smart home devices and home automation systems, which control features such as lights, blinds, heaters, door locks, cameras, and speakers, have become increasingly popular and can be found in homes worldwide. Central to these systems are smart home hubs, which serve as the primary control units, allowing users to manage connected devices from anywhere in the world. While this feature is convenient, it also makes smart home hubs attractive targets for cyberattacks. Unfortunately, the average user lacks substantial cybersecurity knowledge, making the security of these systems crucial. This is particularly important as smart home systems are expected to safeguard users’ privacy and security within their homes. This paper synthesizes eight prevalent cybersecurity challenges associated with smart home hubs through a systematic literature review. The review process involved identifying relevant keywords, searching, and screening 713 papers in multiple rounds to arrive at a final selection of 16 papers, which were then summarized and synthesized. This process included research from Scopus published between January 2019 and November 2024 and excluded papers on prototypes or individual features. The study is limited by scarce academic sources on open-source smart home hubs, strict selection criteria, rapid technological changes, and some subjectivity in study inclusion. The security of extensible smart home hubs is a complex and evolving issue. This review provides a foundation for understanding the key challenges and potential solutions, which is useful for future research and development to secure this increasingly important part of our everyday homes.
(This article belongs to the Special Issue Human-Centered Cybersecurity)

13 pages, 4670 KiB  
Article
Methodology for Enablement of Human Digital Twins for Quality Assurance in the Aerospace Manufacturing Domain
by Christopher Lee Colaw, Garrett Madison, Bill Tseng, Grayson Michael Griser, Gage Truelson, Adam Gallo and Yildirim Hurmuzlu
Sensors 2025, 25(11), 3362; https://doi.org/10.3390/s25113362 - 27 May 2025
Viewed by 596
Abstract
This paper will examine a methodology to enable the usage of Human Digital Twins (HDTs) for Quality Assurance in the aerospace manufacturing domain. Commonplace hardware and infrastructure, including cloud-based facility security cameras, cloud-based commercial virtual environments, a virtual reality (VR) headset, and artificial intelligence (AI) detection algorithms, have been connected via application programming interfaces (API) to enable a 24-h surveillance and feedback capability for a representative aerospace manufacturing cell. Human operators who perform defined manufacturing assembly operations in real life in the cell can utilize this methodology to digitize their performance and provide objective evidence of conformity and safety messaging for their human-centric manufacturing operation in real time. The digitization of real human-centric performance using this methodology creates the foundation for an HDT. This paper will present the application of HDTs in a manner that can easily be scaled across manufacturing operations while utilizing technologies that are already commonly inserted into existing manufacturing operations, which facilitates the exploration of HDT concepts without the need for expensive capital purchases and emerging technologies.
(This article belongs to the Section Internet of Things)

28 pages, 4438 KiB  
Article
A Cybersecurity Risk Assessment for Enhanced Security in Virtual Reality
by Rebecca Acheampong, Dorin-Mircea Popovici, Titus C. Balan, Alexandre Rekeraho and Ionut-Alexandru Oprea
Information 2025, 16(6), 430; https://doi.org/10.3390/info16060430 - 23 May 2025
Viewed by 1008
Abstract
Our society is becoming increasingly dependent on technology, with immersive virtual worlds such as Extended Reality (XR) transforming how we connect and interact. XR technologies enhance communication and operational efficiency. They have been adopted in sectors such as manufacturing, education, and healthcare. However, the immersive and interconnected nature of XR introduces security risks that span from technical and human to psychological vulnerabilities. In this study, we examined security threats in XR environments through a scenario-driven risk assessment, using a hybrid approach combining Common Vulnerability Scoring System (CVSS) metrics and a custom likelihood model to quantify risks. This methodology provides a comprehensive risk evaluation method, identifying critical vulnerabilities such as Remote Code Execution (RCE), social engineering, excessive permission exploitation, unauthorized access, and data exfiltration. The findings reveal that human vulnerabilities, including users’ susceptibility to deception and excessive trust in familiar interfaces and system prompts, significantly increase attack success rates. Additionally, developer mode, once enabled, remains continuously active, and the lack of authentication requirements for installing applications from unknown sources, coupled with poor permission management on the part of the users, creates security gaps that attackers can exploit. Furthermore, permission management in XR devices is often broad and persistent and lacks real-time notifications, allowing malicious applications to exploit microphone, camera, and location access without the users’ knowledge. By leveraging CVSS scores and a structured likelihood-based risk assessment, we quantified the severity of these threats, with RCE, social engineering, and insecure app installation emerging as the greatest risks. This study highlights the necessity of implementing granular permission controls, formalized developer mode restrictions, and structured user education programs to mitigate XR-specific threats.
(This article belongs to the Special Issue Extended Reality and Cybersecurity)
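
At its simplest, the hybrid scoring described above, a CVSS severity score combined with a custom likelihood estimate, reduces to a severity-times-likelihood product mapped onto risk bands. The weights, likelihood values, and thresholds below are invented for illustration and are not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class XrThreat:
    name: str
    cvss_base: float        # CVSS base score, 0.0-10.0
    likelihood: float       # custom likelihood estimate, 0.0-1.0

def risk_score(t: XrThreat) -> float:
    """Normalised severity times likelihood."""
    return (t.cvss_base / 10.0) * t.likelihood

def risk_band(score: float) -> str:
    # Band thresholds are illustrative only.
    return "critical" if score >= 0.7 else "high" if score >= 0.5 else \
           "medium" if score >= 0.3 else "low"

threats = [
    XrThreat("Remote Code Execution via sideloaded app", 9.8, 0.8),
    XrThreat("Social engineering through trusted-looking prompts", 7.1, 0.9),
    XrThreat("Excessive microphone/camera permission abuse", 6.5, 0.6),
]
for t in sorted(threats, key=risk_score, reverse=True):
    print(f"{t.name}: {risk_score(t):.2f} ({risk_band(risk_score(t))})")
```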

25 pages, 9886 KiB  
Article
DeepGun: Deep Feature-Driven One-Class Classifier for Firearm Detection Using Visual Gun Features and Human Body Pose Estimation
by Harbinder Singh, Oscar Deniz, Jesus Ruiz-Santaquiteria, Juan D. Muñoz and Gloria Bueno
Appl. Sci. 2025, 15(11), 5830; https://doi.org/10.3390/app15115830 - 22 May 2025
Viewed by 684
Abstract
The increasing frequency of mass shootings at public events and public buildings underscores the limitations of traditional surveillance systems, which rely on human operators monitoring multiple screens. Delayed response times often hinder security teams from intervening before an attack unfolds. Since firearms are rarely seen in public spaces and constitute anomalous observations, firearm detection can be considered as an anomaly detection (AD) problem, for which one-class classifiers (OCCs) are well-suited. To address this challenge, we propose a holistic firearm detection approach that integrates OCCs with visual hand-held gun features and human pose estimation (HPE). In the first stage, a variational autoencoder (VAE) learns latent representations of firearm-related instances, ensuring that the latent space is dedicated exclusively to the target class. Hand patches of variable sizes are extracted from each frame using body landmarks, dynamically adjusting based on the subject’s distance from the camera. In the second stage, a unified feature vector is generated by integrating VAE-extracted latent features with landmark-based arm positioning features. Finally, an isolation forest (IFC)-based OCC model evaluates this unified feature representation to estimate the probability that a test sample belongs to the firearm-related distribution. By utilizing skeletal representations of human actions, our approach overcomes the limitations of appearance-based gun features extracted from camera images, which are often affected by background variations. Experimental results on diverse firearm datasets validate the effectiveness of our anomaly detection approach, achieving an F1-score of 86.6%, accuracy of 85.2%, precision of 95.3%, recall of 74.0%, and average precision (AP) of 83.5%. These results demonstrate the superiority of our method over traditional approaches that rely solely on visual features.
(This article belongs to the Section Computing and Artificial Intelligence)
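
The final stage described above, an isolation-forest one-class classifier over the concatenation of VAE latent features and arm-positioning features, can be sketched with scikit-learn. The feature dimensions and random training data are placeholders; the VAE and pose-estimation stages that would produce them are not shown.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Placeholder features: 64-D VAE latents plus 8-D arm-positioning descriptors,
# fitted only on firearm-handling examples (the single target class).
latent_train = rng.normal(size=(500, 64))
pose_train = rng.normal(size=(500, 8))
X_train = np.hstack([latent_train, pose_train])

occ = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
occ.fit(X_train)

# At test time, decision_function scores how well a unified feature vector
# matches the firearm-related distribution; predict maps that to +1/-1.
X_test = np.hstack([rng.normal(size=(5, 64)), rng.normal(size=(5, 8))])
print(occ.decision_function(X_test))
print(occ.predict(X_test))   # +1 = looks like the target class, -1 = anomaly
```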

22 pages, 3864 KiB  
Article
Raspberry Pi-Based Face Recognition Door Lock System
by Seifeldin Sherif Fathy Ali Elnozahy, Senthill C. Pari and Lee Chu Liang
IoT 2025, 6(2), 31; https://doi.org/10.3390/iot6020031 - 20 May 2025
Viewed by 1784
Abstract
Access control systems protect homes and businesses in the continually evolving security industry. This paper designs and implements a Raspberry Pi-based facial recognition door lock system using artificial intelligence and computer vision for reliability, efficiency, and usability. With the Raspberry Pi as its CPU, the system uses facial recognition for authentication. A camera module for real-time image capturing, a relay module for solenoid lock control, and OpenCV for image processing are essential. The system uses the DeepFace library to detect user emotions and adaptive learning to improve recognition accuracy for approved users. The device also adapts to poor lighting and distances, and it sends real-time remote monitoring messages. Key achievements include adaptive facial recognition, which allows the system to improve as it is used, and the seamless integration of real-time notifications and emotion detection. Face recognition worked well in many settings. Modular architecture facilitated hardware–software integration and scalability for various applications. In conclusion, this study created an intelligent facial recognition door lock system using Raspberry Pi hardware and open-source software libraries. The system addresses traditional access control limits and is practical, scalable, and inexpensive, demonstrating biometric technology’s potential in modern security systems.
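
The control loop implied by the abstract, capture a frame, verify the face, and drive the lock relay, can be sketched as below. The GPIO pin, enrolled-image path, and unlock timing are assumptions for illustration; DeepFace's verify call and OpenCV capture are used as in their public APIs, but this is not the authors' implementation.

```python
import time
import cv2
from deepface import DeepFace
from gpiozero import OutputDevice

RELAY_PIN = 17                         # assumed wiring; adjust to the actual relay pin
ENROLLED = "enrolled_user.jpg"         # reference photo of an approved user (assumed path)

relay = OutputDevice(RELAY_PIN)        # energises the solenoid lock when .on() is called
camera = cv2.VideoCapture(0)           # Raspberry Pi camera

def try_unlock(hold_seconds: float = 5.0) -> bool:
    ok, frame = camera.read()
    if not ok:
        return False
    try:
        # Compare the live frame against the enrolled photo.
        result = DeepFace.verify(img1_path=frame, img2_path=ENROLLED, enforce_detection=True)
    except ValueError:                 # raised when no face is detected in the frame
        return False
    if result["verified"]:
        relay.on()                     # release the solenoid lock
        time.sleep(hold_seconds)
        relay.off()
        return True
    return False

if __name__ == "__main__":
    print("unlocked" if try_unlock() else "access denied")
```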

20 pages, 69905 KiB  
Article
Lossless Thumbnail Preservation Encryption Based on Reversible Information Hiding
by Junlin Ouyang, Tingjian Shi and Ruijie Wang
Electronics 2025, 14(10), 2060; https://doi.org/10.3390/electronics14102060 - 19 May 2025
Viewed by 360
Abstract
With the proliferation of multi-camera smartphones, the number of images being generated has surged, and cloud storage services have become the primary tool for storing and sharing photos. However, this also poses privacy and security risks. Traditional image encryption techniques, while protecting privacy, also lead to loss of image availability in the cloud. To balance security and availability, TPE (Thumbnail Preserving Encryption) is proposed. However, the decryption effect of the existing schemes is generally unsatisfactory, and many existing schemes are unable to achieve perfect restoration in practical applications. Meanwhile, the few fully reversible schemes are limited by their underlying algorithms, which makes them difficult to extend to a wider range of applications. To solve this contradiction, this paper proposes a TPE scheme based on reversible information hiding. Specifically, the scheme preserves the DC coefficients of the image during the encryption process and encrypts the AC coefficients to enhance the security of the image, thus obtaining the intermediate encrypted image. Then, the intermediate encrypted image is pre-decrypted, and the subtle error between the original image and the intermediate encrypted image is used as the compensation information. In order to achieve lossless decryption, we introduce the reversible information hiding technique to embed the compensation information into the intermediate image, and we finally obtain the encrypted image. The approach is also applicable to other high-quality TPE schemes and suggests directions for their optimization. The experimental results show that this scheme not only achieves lossless decryption but also outperforms other TPE schemes in terms of visual effect, while the file extension size is kept at a low level. The research in this paper provides new ideas for balancing image privacy protection and usability, which has important theoretical and practical significance.
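
The encryption split described above, keeping each block's DC coefficient so the thumbnail survives while scrambling the AC coefficients, can be illustrated on an 8×8 DCT block. The key-driven permutation stands in for the paper's AC-coefficient encryption, and the reversible-data-hiding step that embeds the compensation information is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encrypt_block(block: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Keep the DC coefficient (block brightness) and permute the 63 AC
    coefficients with a key-derived permutation: a thumbnail built from DC
    values is unchanged, while block detail becomes unreadable."""
    coeffs = dctn(block, norm="ortho")
    flat = coeffs.flatten()
    perm = rng.permutation(63) + 1          # indices 1..63 are the AC coefficients
    flat[1:] = flat[perm]
    return idctn(flat.reshape(8, 8), norm="ortho")

key_rng = np.random.default_rng(seed=42)    # the seed stands in for a secret key
block = np.random.default_rng(0).uniform(0, 255, (8, 8))
enc = encrypt_block(block, key_rng)
print(abs(block.mean() - enc.mean()) < 1e-9)  # DC (mean brightness) preserved, so the thumbnail is intact
```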

30 pages, 12255 KiB  
Article
Unmanned Aerial Vehicle-Based Hyperspectral Imaging for Potato Virus Y Detection: Machine Learning Insights
by Siddat B. Nesar, Paul W. Nugent, Nina K. Zidack and Bradley M. Whitaker
Remote Sens. 2025, 17(10), 1735; https://doi.org/10.3390/rs17101735 - 15 May 2025
Viewed by 1140
Abstract
The potato is the third most important crop in the world, and more than 375 million metric tonnes of potatoes are produced globally on an annual basis. Potato Virus Y (PVY) poses a significant threat to the production of seed potatoes, resulting in economic losses and risks to food security. Current detection methods for PVY typically rely on serological assays for leaves and PCR for tubers; however, these processes are labor-intensive, time-consuming, and not scalable. In this proof-of-concept study, we propose the use of unmanned aerial vehicles (UAVs) integrated with hyperspectral cameras, including a downwelling irradiance sensor, to detect the PVY in commercial growers’ fields. We used a 400–1000 nm visible and near-infrared (Vis-NIR) hyperspectral camera and trained several standard machine learning and deep learning models with optimized hyperparameters on a curated dataset. The performance of the models is promising, with the convolutional neural network (CNN) achieving a recall of 0.831, reliably identifying the PVY-infected plants. Notably, UAV-based imaging maintained performance levels comparable to ground-based methods, supporting its practical viability. The hyperspectral camera captures a wide range of spectral bands, many of which are redundant in identifying the PVY. Our analysis identified five key spectral regions that are informative in identifying the PVY. Two of them are in the visible spectrum, two are in the near-infrared spectrum, and one is in the red-edge spectrum. This research demonstrates that early-season PVY detection is feasible using UAV hyperspectral imaging, offering the potential to minimize economic and yield losses. It also highlights the most relevant spectral regions that carry the distinctive signatures of PVY and provides guidance for developing cost-effective multispectral sensors tailored to this task.
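
A minimal stand-in for the spectral CNN classifier mentioned above, a small 1D network over the Vis-NIR bands that outputs an infection probability, is sketched below. The band count, layer sizes, and random data are placeholders; the authors' architecture, preprocessing, and dataset are not reproduced.

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """Tiny 1D CNN over a reflectance spectrum (assumed ~270 Vis-NIR bands
    covering 400-1000 nm) that outputs P(PVY-infected)."""
    def __init__(self, n_bands: int = 270):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, spectra):                    # spectra: (batch, n_bands)
        z = self.features(spectra.unsqueeze(1))    # add the channel dimension
        return torch.sigmoid(self.head(z.squeeze(-1))).squeeze(-1)

model = SpectralCNN()
spectra = torch.rand(8, 270)                       # toy reflectance spectra in [0, 1]
labels = torch.randint(0, 2, (8,)).float()         # 1 = PVY-infected, 0 = healthy
loss = nn.functional.binary_cross_entropy(model(spectra), labels)
loss.backward()
print(model(spectra).detach())                     # recall-oriented thresholding would follow
```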
