Search Results (140)

Search Parameters:
Keywords = surveillance camera security

23 pages, 2765 KB  
Article
A Novel Classification Model for Suspicious Human Activities in Diverse Environments Using Fused Feature Block and Machine Vision Techniques
by Bushra Mughal, Fernando B. Duarte, Tiago Cunha Reis and Carlos Jorge Dos Santos Limão Sebastiã
Digital 2026, 6(2), 30; https://doi.org/10.3390/digital6020030 - 13 Apr 2026
Viewed by 401
Abstract
Automated detection of suspicious human activities in complex and crowded environments remains a critical challenge in modern surveillance systems due to high false-positive rates, poor contrast, and limited generalization across diverse scenes. We propose a GM_CNN3D model for the classification of suspicious activity based on a Deep Fused Feature Block (DFFB) framework that integrates handcrafted spatial descriptors (PCA-HOG and Motion-HOG) with deep spatiotemporal features extracted from a 3D convolutional neural network (3D-CNN). Motion regions are first localized using a Gaussian Mixture Model (GMM), after which handcrafted and deep features are concatenated in a dimensionality-normalized fusion stage, followed by a fully connected layer and softmax classification. The system is evaluated on five diverse and publicly available datasets: Violent Crowd, Hockey Fight, Kaggle Fight, Movies Fight, and Custom Annotated YouTube Clips, achieving up to 99.12% accuracy, a 98.7% F1-score, and a ROC-AUC of 0.992, outperforming state-of-the-art CNN, LSTM, and SlowFast models. All datasets include real-world scenarios with varying lighting, crowd density, and camera viewpoints, with annotations created manually where unavailable. The proposed method demonstrates robust cross-scene performance, enabling automated alerting and reduced false positives in real-time security operations.
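
As a rough illustration of the dimensionality-normalized fusion stage the abstract describes, the sketch below L2-normalizes a handcrafted descriptor and a deep feature vector independently before concatenating them. The function names and the choice of per-descriptor L2 normalization are assumptions for illustration, not the paper's DFFB implementation.

```python
import math

def l2_normalize(v):
    """Scale a feature vector to unit L2 norm (zero vectors pass through)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else list(v)

def fuse_features(handcrafted, deep):
    """Dimensionality-normalized fusion: normalize each descriptor
    independently so neither dominates the scale, then concatenate."""
    return l2_normalize(handcrafted) + l2_normalize(deep)

# Hypothetical 2-D handcrafted descriptor fused with a 3-D deep feature
fused = fuse_features([3.0, 4.0], [1.0, 0.0, 0.0])
```

Normalizing each block before concatenation is a common way to keep one descriptor's magnitude from dominating the fused vector fed to the fully connected layer.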

25 pages, 3673 KB  
Systematic Review
Recent Advances in Multi-Camera Computer Vision for Industry 4.0 and Smart Cities: A Systematic Review
by Carlos Julio Fierro-Silva, Carolina Del-Valle-Soto, Samih M. Mostafa and José Varela-Aldás
Algorithms 2026, 19(4), 249; https://doi.org/10.3390/a19040249 - 25 Mar 2026
Viewed by 837
Abstract
The rapid deployment of surveillance cameras in urban, industrial, and domestic environments has intensified the need for intelligent systems capable of analyzing video streams beyond the limitations of single-camera setups. Unlike traditional single-camera approaches, multi-camera systems expand spatial coverage, reduce blind spots, and enable consistent tracking of people and objects across non-overlapping views, thereby improving robustness against occlusions and viewpoint changes. This article presents a comprehensive review of multi-camera vision systems published between 2020 and 2025, covering application domains including public security and biometrics, intelligent transportation, smart cities and IoT, healthcare monitoring, precision agriculture, industry and robotics, pan–tilt–zoom (PTZ) camera networks, and emerging areas such as retail and forensic analysis. The review synthesizes predominant technical approaches, including deep-learning-based detection, multi-target multi-camera tracking (MTMCT), re-identification (Re-ID), spatiotemporal fusion, and edge computing architectures. Persistent challenges are identified, particularly in inter-camera data association, scalability, computational efficiency, privacy preservation, and dataset availability. Emerging trends such as distributed edge AI, cooperative camera networks, and active perception are discussed to outline future research directions toward scalable, privacy-aware, and intelligent multi-camera infrastructures.

21 pages, 1469 KB  
Article
Development of Surveillance Robots Based on Face Recognition Using High-Order Statistical Features and Evidence Theory
by Slim Ben Chaabane, Rafika Harrabi, Anas Bushnag and Hassene Seddik
J. Imaging 2026, 12(3), 107; https://doi.org/10.3390/jimaging12030107 - 28 Feb 2026
Viewed by 648
Abstract
The recent advancements in technologies such as artificial intelligence (AI), computer vision (CV), and the Internet of Things (IoT) have significantly extended various fields, particularly surveillance systems. These innovations enable real-time facial recognition processing, enhancing security and ensuring safety. In this context, mobile robots are commonly employed in surveillance systems to handle risky tasks that are beyond human capability. In this paper, we present a prototype of a cost-effective mobile surveillance robot built on the Raspberry Pi 4, designed for integration into various industrial environments. This smart robot detects intruders using IoT and face recognition technology. The proposed system is equipped with a passive infrared (PIR) sensor and a camera for capturing live-streaming video and photos, which are sent to the control room through IoT technology. Additionally, the system uses face recognition algorithms to differentiate between company staff and potential intruders. The face recognition method combines high-order statistical features and evidence theory to improve accuracy and robustness. High-order statistical features are used to capture complex patterns in facial images, enhancing discrimination between individuals. Evidence theory is employed to integrate multiple information sources, allowing for better decision-making under uncertainty. This approach effectively addresses challenges such as variations in lighting, facial expressions, and occlusions, resulting in a more reliable and accurate face recognition system. When the system detects an unfamiliar individual, it sends alert notifications and emails with the captured picture to the control room using IoT. A web interface has also been set up to control the robot remotely over a Wi-Fi connection. The proposed face recognition method is evaluated, and a comparative analysis with existing techniques is conducted. Experimental results with 400 test images of 40 individuals demonstrate the effectiveness of combining various attribute images in improving human face recognition performance, with the algorithm identifying human faces with an accuracy of 98.63%.
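
The evidence-theory step refers to Dempster–Shafer combination of mass functions from multiple information sources. A generic sketch of Dempster's rule of combination follows; it is not the authors' exact formulation, and the identity labels are hypothetical.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets; mass falling on the empty intersection is
    conflict and is renormalized away."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

alice = frozenset({"alice"})
either = frozenset({"alice", "bob"})       # ignorance: could be either person
m1 = {alice: 0.7, either: 0.3}             # evidence from one feature source
m2 = {alice: 0.6, either: 0.4}             # evidence from a second source
fused = dempster_combine(m1, m2)
```

Combining two weakly confident sources concentrates belief on their agreement, which is the appeal of evidence theory for fusing feature sets under uncertainty.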

12 pages, 561 KB  
Data Descriptor
Perceptions of Security, Victimization, and Coexistence: A Database from Cali, Colombia
by Jhon James Mora, Enrique Javier Burbano-Valencia, Angie Mondragón-Mayo and José Santiago Arroyo Mina
Data 2026, 11(2), 41; https://doi.org/10.3390/data11020041 - 14 Feb 2026
Cited by 1 | Viewed by 778
Abstract
This article addresses a key evidence gap in urban safety policy in Colombia: the absence of publicly accessible microdata that jointly measure victimization, perception of security, and probability of sanctions among socioeconomically vulnerable residents. It aims to provide a clean, linkable dataset that enables analysis of variations in these issues across demographic and territorial groups in Cali (recently classified as the 29th most dangerous city worldwide, with 1028 and 1065 homicides in 2024 and 2025, respectively). It reports face-to-face survey data collected from 22 July to 16 August 2024, at Sistema de Identificación de Potenciales Beneficiarios de Programas Sociales (SISBEN) service points. The final dataset includes 2139 adults (aged 18–95 years) and combines (i) primary responses on perceived safety (e.g., public space safety and surveillance cameras), perceived likelihood of sanction, victimization, and self-protection measures with (ii) selected sociodemographic and household characteristics drawn from SISBEN IV records. Individual-level linkage was implemented using respondent identification at interviews, yielding an integrated anonymized file suitable for replication and secondary analysis. The dataset enables distributive analyses of insecurity (e.g., by sex, age, and ethnicity—including Afro-descendant populations) within a policy-relevant target group and supports evaluation and targeting of local interventions by providing individual-level indicators.

5 pages, 214 KB  
Proceeding Paper
Methodology for Rapid Security Testing of IP Cameras
by Lidia Prudente-Tixteco, Gabriel Sanchez-Perez, Jesus Olivares-Mercado and Aldo Hernandez-Suarez
Eng. Proc. 2026, 123(1), 33; https://doi.org/10.3390/engproc2026123033 - 11 Feb 2026
Viewed by 698
Abstract
There are many types of IP surveillance cameras that connect to organizational or home data networks. However, these devices have inherent vulnerabilities, and users often ignore procedures to protect their networks and devices, creating security risks for the networks, users, and information to which they are connected. IP camera vulnerabilities can be exploited by attackers and unauthorized persons to damage an infrastructure. Security tests require specific knowledge, equipment, and specialized tools; moreover, their execution involves multiple steps and devices that demand time for execution and processing. A methodology for rapid security testing of IP cameras could help identify vulnerabilities and security gaps so that cybersecurity controls can be selected to mitigate the risk of their use. This article presents a proof of concept for a methodology for rapid security tests on IP cameras based on NIST SP 800-115, guiding analysts through security testing to obtain results that allow them to take actions to mitigate risks.
(This article belongs to the Proceedings of First Summer School on Artificial Intelligence in Cybersecurity)

28 pages, 2553 KB  
Review
Comparative Study of Supervised Deep Learning Architectures for Background Subtraction and Motion Segmentation on CDnet2014
by Oussama Boufares, Wajdi Saadaoui and Mohamed Boussif
Signals 2026, 7(1), 14; https://doi.org/10.3390/signals7010014 - 2 Feb 2026
Viewed by 791
Abstract
Foreground segmentation and background subtraction are critical components in many computer vision applications, such as intelligent video surveillance, urban security systems, and obstacle detection for autonomous vehicles. Although extensively studied over the past decades, these tasks remain challenging, particularly due to rapid illumination changes, dynamic backgrounds, cast shadows, and camera movements. The emergence of supervised deep learning-based methods has significantly enhanced performance, surpassing traditional approaches on the benchmark dataset CDnet2014. In this context, this paper provides a comprehensive review of recent supervised deep learning techniques applied to background subtraction, along with an in-depth comparative analysis of state-of-the-art approaches available on the official CDnet2014 results platform. Specifically, we examine several key architecture families, including convolutional neural networks (CNN and FCN), encoder–decoder models such as FgSegNet and Motion U-Net, adversarial frameworks (GAN), Transformer-based architectures, and hybrid methods combining intermittent semantic segmentation with rapid detection algorithms such as RT-SBS-v2. Beyond summarizing existing works, this review contributes a structured cross-family comparison under a unified benchmark, a focused analysis of performance behavior across challenging CDnet2014 scenarios, and a critical discussion of the trade-offs between segmentation accuracy, robustness, and computational efficiency for practical deployment.
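
For context, the traditional baselines these supervised methods are measured against can be as simple as an exponential running-average background model with per-pixel thresholding. The sketch below is a generic illustration of that classical idea, not a method from the review, and treats a frame as a flat list of grayscale values.

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: slowly blend each
    new frame into the background estimate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, threshold=25):
    """Flag pixels that deviate from the background estimate by more
    than `threshold` as foreground."""
    return [abs(f - b) > threshold for b, f in zip(bg, frame)]

bg = [100.0, 100.0, 100.0]          # learned static background
frame = [100.0, 180.0, 102.0]       # middle pixel: a moving object
mask = foreground_mask(bg, frame)
bg = update_background(bg, frame)
```

Such models break down exactly in the cases the review highlights (illumination changes, dynamic backgrounds, shadows), which is what motivated the supervised deep architectures.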

36 pages, 4183 KB  
Article
Distinguishing a Drone from Birds Based on Trajectory Movement and Deep Learning
by Andrii Nesteruk, Valerii Nikitin, Yosyp Albrekht, Łukasz Ścisło, Damian Grela and Paweł Król
Sensors 2026, 26(3), 755; https://doi.org/10.3390/s26030755 - 23 Jan 2026
Viewed by 934
Abstract
Unmanned aerial vehicles (UAVs) increasingly share low-altitude airspace with birds, making early discrimination between drones and biological targets critical for safety and security. This work addresses long-range scenarios where objects occupy only a few pixels and appearance-based recognition becomes unreliable. We develop a model-driven simulation pipeline that generates synthetic data with a controlled camera model, atmospheric background, and realistic motion for three aerial target types: multicopter, fixed-wing UAV, and bird. From these sequences, each track is encoded as a time series of image-plane coordinates and apparent size, and a bidirectional long short-term memory (LSTM) network is trained to classify trajectories as drone-like or bird-like. The model learns characteristic differences in smoothness, turning behavior, and velocity fluctuations, achieving reliable separation between drone and bird motion patterns on synthetic test data. Motion-trajectory cues alone can support early discrimination of drones from birds when visual details are scarce, providing a complementary signal to conventional image-based detection. The proposed synthetic-data and sequence-classification pipeline forms a reproducible testbed that can be extended with real trajectories from radar or video tracking systems and used to prototype and benchmark trajectory-based recognizers for integrated surveillance solutions. The proposed method is designed to generalize naturally to real surveillance systems, as it relies on trajectory-level motion patterns rather than appearance-based features that are sensitive to sensor quality, illumination, or weather conditions.
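
The trajectory cues mentioned (smoothness, turning behavior, velocity fluctuations) can be summarized from an image-plane track as below. This is an illustrative hand-crafted simplification: the paper instead feeds the raw coordinate and size sequences to a bidirectional LSTM, which learns such cues implicitly.

```python
import math

def motion_features(track):
    """Summarize a track [(x, y), ...] by mean speed, speed variance,
    and mean absolute turning angle between consecutive displacements."""
    steps = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(track, track[1:])]
    speeds = [math.hypot(dx, dy) for dx, dy in steps]
    mean_speed = sum(speeds) / len(speeds)
    speed_var = sum((s - mean_speed) ** 2 for s in speeds) / len(speeds)
    turns = [abs(math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0]))
             for v1, v2 in zip(steps, steps[1:])]
    mean_turn = sum(turns) / len(turns) if turns else 0.0
    return mean_speed, speed_var, mean_turn

straight = [(i, 0) for i in range(6)]   # drone-like: constant heading
ms, sv, mt = motion_features(straight)
```

A steady multicopter transit yields near-zero turning and low speed variance, while erratic bird flight produces large values of both, which is the separation the classifier exploits.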
(This article belongs to the Section Industrial Sensors)

15 pages, 1607 KB  
Article
Using Steganography and Artificial Neural Network for Data Forensic Validation and Counter Image Deepfakes
by Matimu Caswell Nkuna, Ebenezer Esenogho and Ahmed Ali
Computers 2026, 15(1), 61; https://doi.org/10.3390/computers15010061 - 15 Jan 2026
Viewed by 772
Abstract
The convergence of Internet of Things (IoT) and Artificial Intelligence (AI) advances has intensified challenges related to data authenticity and security. These advancements necessitate a multi-layered security approach to ensure the security, reliability, and integrity of critical infrastructure and intelligent surveillance systems. This paper proposes a two-layered security approach that combines discrete cosine transform least-significant-bit-2 (DCT-LSB-2) steganography with artificial neural networks (ANNs) for data forensic validation and deepfake mitigation. The proposed model encodes validation codes within the LSBs of cover images captured by an IoT camera on the sender side, leveraging the DCT approach to enhance resilience against steganalysis. On the receiver side, a reverse DCT-LSB-2 process decodes the embedded validation code, which is subjected to authenticity verification by a pre-trained ANN model. The ANN validates the integrity of the decoded code and ensures that only device-originated, untampered images are accepted. The proposed framework achieved an average SSIM of 0.9927 across the entire investigated embedding capacity, ranging from 0 to 1.988 bpp. DCT-LSB-2 showed a stable Peak Signal-to-Noise Ratio (average 42.44 dB) under various evaluated payloads ranging from 0 to 100 kB. The result is a resilient and robust multi-layered data forensic validation system.
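
The LSB-2 embedding idea can be sketched in the spatial domain as follows. This is a simplification: the paper embeds in DCT coefficients rather than raw pixel values, and the function names are hypothetical.

```python
def embed_2lsb(pixels, bits):
    """Embed a bit string, two bits per byte, in the two least-significant
    bits of each 8-bit pixel value."""
    assert len(bits) <= 2 * len(pixels)
    out = list(pixels)
    for i in range(0, len(bits), 2):
        pair = int(bits[i:i + 2].ljust(2, "0"), 2)   # pad a trailing odd bit
        out[i // 2] = (out[i // 2] & 0b11111100) | pair
    return out

def extract_2lsb(pixels, nbits):
    """Recover `nbits` bits from the two LSBs of each pixel."""
    s = "".join(format(p & 0b11, "02b") for p in pixels)
    return s[:nbits]

stego = embed_2lsb([120, 121, 122, 123], "1011")   # hypothetical cover pixels
recovered = extract_2lsb(stego, 4)
```

Changing only the two low bits bounds the per-pixel error at 3, which is why LSB-style schemes preserve high SSIM and PSNR at the payloads the abstract reports.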
(This article belongs to the Special Issue Multimedia Data and Network Security)

21 pages, 65310 KB  
Article
The Effect of Electromagnetic Pulse Attacks on USB Camera Performance
by Gang Wei, Lei Shu, Wei Lin, Xing Yang, Ru Han, Kailiang Li and Kai Huang
J. Sens. Actuator Netw. 2026, 15(1), 4; https://doi.org/10.3390/jsan15010004 - 29 Dec 2025
Viewed by 1994
Abstract
Cameras are core devices for modern surveillance and data collection, widely used in fields including security, transportation, and healthcare. However, their widespread deployment has proportionally escalated the associated security risks. This paper first examines the current state of research on attack methods targeting camera systems, providing a comprehensive review of various attack techniques and their security implications. We then focus on a specific attack against universal serial bus (USB) cameras: electromagnetic pulse (EMP) attacks, which use EMP to prevent the host system from detecting the cameras. We simulated EMP attacks using a solar insecticidal lamp (which generates EMP by releasing high-voltage pulses) and a commercially available EMP generator. Camera performance under various conditions was evaluated by adjusting the number of filtering magnetic rings on the USB cable and the distance between the camera and the interference source. The results demonstrate that some USB cameras are vulnerable to EMP attacks. Although EMP attacks do not invariably cause image distortion or permanent damage, their covert nature can cause failures to be misattributed, with implications for fault detection, data security, and system maintenance. Based on these findings, it is recommended to determine the optimal number of shielding rings for cameras, or their safe distance from EMP sources, through the experimental approach outlined in this study, thereby enhancing the security and resilience of USB-camera-enabled systems in specific scenarios.

11 pages, 258 KB  
Article
Perspectives on Rising Societal Crime on Workplace Productivity in a Small Island Developing State
by Adeoye Adenekan, Marsha Ivey and Srikanta Banerjee
Int. J. Environ. Res. Public Health 2025, 22(12), 1858; https://doi.org/10.3390/ijerph22121858 - 12 Dec 2025
Viewed by 442
Abstract
Objectives: The crime rate in Trinidad and Tobago has increased over the last few years. It is important to understand the impact of rising societal crime on university workplace productivity in order to make meaningful recommendations to mitigate the negative effects of crime. Methods: We conducted semi-structured interviews online via Zoom and face-to-face with both academic and non-academic staff from a university located in Trinidad and Tobago in April 2025. We employed purposive sampling, and topics explored included participants’ views on crime, the effect of crime on workplace productivity, the effect of crime on workplace concentration, the effect of crime on participants’ mental health, concerns about safety at the workplace, and desired changes or suggestions to ensure improved safety at the workplace. Data were manually analyzed, and we employed thematic analysis to understand the participants’ data. Results: Analysis included data from 10 participants. Participants represented both academic and non-academic staff, had varied ethnic backgrounds and ages, and came from both Mount Hope and the main campus. Seven of the participants believed that their work productivity had been negatively affected by the crime situation. All the participants agreed that the crime situation was out of control; two of the participants claimed to have been victims of crime. Five of the participants believed they had experienced depressive symptoms, while six participants claimed to have experienced poor concentration on the job. Five participants expressed genuine concerns that something terrible could happen to them within their workplace premises. In order to improve security at the workplace, seven of the participants suggested the employment of more security personnel, while six participants highlighted the need for more surveillance and closed-circuit television (CCTV) cameras. Participants identified four major categories or themes: views on crime and its effects on individuals; effects of crime on workplace productivity; effects of crime on mental well-being; and suggestions and opportunities to improve security at the workplace. Conclusions: From this study, it can be inferred that the majority of the participants were negatively affected by the climate of crime in the country. A comprehensive risk assessment would identify potential risks and vulnerabilities faced by staff, while enhanced surveillance measures and the promotion of the Employee Assistance Program (EAP) can support those impacted. Staff should also be trained to respond effectively to potential threats.
(This article belongs to the Section Behavioral and Mental Health)
9 pages, 1449 KB  
Proceeding Paper
Modeling and Control of a Pan–Tilt Servo System for Face Tracking Using Deep Learning and PID
by Mihnea Dimitrie Doloiu, Ioan-Alexandru Spulber, Ilie Indreica, Gigel Măceșanu, Bogdan Sibisan and Tiberiu-Teodor Cociaș
Eng. Proc. 2025, 113(1), 75; https://doi.org/10.3390/engproc2025113075 - 19 Nov 2025
Viewed by 1381
Abstract
This paper presents a comprehensive modeling and control strategy for a pan–tilt (PT) servo system designed for real-time object tracking (specifically face detection) using deep learning and PID control. The system integrates a YOLO-based neural network to detect and localize the target within an image, mapping its coordinates from 3D space onto the 2D image plane through a mathematically defined geometric camera model. A complete mathematical representation of the pan–tilt mechanism is developed, accounting for all relevant forces and system components. Based on this model, a PID controller is designed, and its parameters are identified and implemented using the Ziegler–Nichols tuning method. Experimental results demonstrate that the system effectively tracks objects in real time, exhibiting minimal latency and precise motor responses. These findings suggest that the proposed approach is well-suited for practical applications, including security surveillance, assistive technologies, and interactive robotics.
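
The control loop described, a discrete PID on the pixel error between the detected face and the image center, with gains from the Ziegler–Nichols table, can be sketched as below. The gain formulas are the textbook classic Ziegler–Nichols values; the ultimate gain, period, and sample time shown are hypothetical, not the paper's identified parameters.

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler–Nichols PID gains from the ultimate gain `ku`
    and the oscillation period `tu`: Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    kp = 0.6 * ku
    return kp, 1.2 * ku / tu, 0.075 * ku * tu   # Kp, Ki = Kp/Ti, Kd = Kp*Td

class PID:
    """Discrete PID on the pixel error; the output would drive the
    pan (or tilt) servo command."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        deriv = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

kp, ki, kd = ziegler_nichols_pid(ku=2.0, tu=0.5)   # assumed plant constants
ctrl = PID(kp, ki, kd, dt=0.02)                    # 50 Hz camera loop
u = ctrl.step(40.0)                                # face 40 px right of center
```

One PID instance per axis suffices because pan and tilt errors are largely decoupled in the image plane.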
(This article belongs to the Proceedings of The Sustainable Mobility and Transportation Symposium 2025)

41 pages, 2952 KB  
Systematic Review
Advancements and Challenges in Deep Learning-Based Person Re-Identification: A Review
by Liang Zhao, Yuyan Han and Zhihao Chen
Electronics 2025, 14(22), 4398; https://doi.org/10.3390/electronics14224398 - 12 Nov 2025
Cited by 1 | Viewed by 2872
Abstract
Person Re-Identification (Re-ID), a critical component of intelligent surveillance and security systems, seeks to match individuals across disjoint camera networks under complex real-world conditions. While deep learning has revolutionized Re-ID through enhanced feature representation and domain adaptation, a holistic synthesis of its advancements, unresolved challenges, and ethical implications remains imperative. This survey offers a structured and critical examination of Re-ID in the deep learning era, organized into three pillars: technological innovations, persistent barriers, and future frontiers. We systematically analyze breakthroughs in deep architectures (e.g., transformer-based models, hybrid global-local networks), optimization paradigms (contrastive, adversarial, and self-supervised learning), and robustness strategies for occlusion, pose variation, and cross-domain generalization. Critically, we identify underexplored limitations such as annotation bias, scalability-accuracy trade-offs, and privacy-utility conflicts in real-world deployment. Beyond technical analysis, we propose emerging directions, including causal reasoning for interpretable Re-ID, federated learning for decentralized data governance, open-world lifelong adaptation frameworks, and human-AI collaboration to reduce annotation costs. By integrating technical rigor with societal responsibility, this review aims to bridge the gap between algorithmic advancements and ethical deployment, fostering transparent, sustainable, and human-centric Re-ID systems.
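
The retrieval step at the core of the surveyed Re-ID pipelines, ranking stored gallery embeddings by similarity to a query embedding, can be sketched as below. This is a minimal illustration with hypothetical identity labels and tiny two-dimensional embeddings standing in for real deep feature vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two nonzero embedding vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def rank_gallery(query, gallery):
    """Rank (identity, embedding) pairs by similarity to the query,
    the cross-camera matching step of a Re-ID pipeline."""
    return sorted(gallery, key=lambda item: cosine(query, item[1]),
                  reverse=True)

gallery = [("id_7", [0.0, 1.0]), ("id_3", [1.0, 0.1])]   # hypothetical IDs
best_id = rank_gallery([1.0, 0.0], gallery)[0][0]
```

Everything the survey discusses, from architectures to self-supervised objectives, ultimately serves to make these embeddings discriminative enough that this ranking is correct across viewpoints.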

26 pages, 9360 KB  
Article
Multi-Agent Hierarchical Reinforcement Learning for PTZ Camera Control and Visual Enhancement
by Zhonglin Yang, Huanyu Liu, Hao Fang, Junbao Li and Yutong Jiang
Electronics 2025, 14(19), 3825; https://doi.org/10.3390/electronics14193825 - 26 Sep 2025
Cited by 2 | Viewed by 1582
Abstract
Border surveillance, as a critical component of national security, places increasingly stringent demands on the target perception capabilities of video monitoring systems, especially in wide-area and complex environments. To address the limitations of existing systems in low-confidence target detection and multi-camera collaboration, this paper proposes a novel visual enhancement method for cooperative control of multiple PTZ (Pan–Tilt–Zoom) cameras based on hierarchical reinforcement learning. The proposed approach establishes a hierarchical framework composed of a Global Planner Agent (GPA) and multiple Local Executor Agents (LEAs). The GPA is responsible for global target assignment, while the LEAs perform fine-grained visual enhancement operations based on the assigned targets. To effectively model the spatial relationships among multiple targets and the perceptual topology of the cameras, a graph-based joint state space is constructed. Furthermore, a graph neural network is employed to extract high-level features, enabling efficient information sharing and collaborative decision-making among cameras. Experimental results in simulation environments demonstrate the superiority of the proposed method in terms of target coverage and visual enhancement performance. Hardware experiments further validate the feasibility and robustness of the approach in real-world scenarios. This study provides an effective solution for multi-camera cooperative surveillance in complex environments.
(This article belongs to the Section Artificial Intelligence)

17 pages, 2566 KB  
Article
Secure and Decentralized Hybrid Multi-Face Recognition for IoT Applications
by Erëza Abdullahu, Holger Wache and Marco Piangerelli
Sensors 2025, 25(18), 5880; https://doi.org/10.3390/s25185880 - 19 Sep 2025
Cited by 4 | Viewed by 2146
Abstract
The proliferation of smart environments and Internet of Things (IoT) applications has intensified the demand for efficient, privacy-preserving multi-face recognition systems. Conventional centralized systems suffer from latency, scalability, and security vulnerabilities. This paper presents a practical hybrid multi-face recognition framework designed for decentralized IoT deployments. Our approach leverages a pre-trained Convolutional Neural Network (VGG16) for robust feature extraction and a Support Vector Machine (SVM) for lightweight classification, enabling real-time recognition on resource-constrained devices such as IoT cameras and Raspberry Pi boards. The purpose of this work is to demonstrate the feasibility and effectiveness of a lightweight hybrid system for decentralized multi-face recognition, specifically tailored to the constraints and requirements of IoT applications. The system is validated on a custom dataset of 20 subjects collected under varied lighting conditions and facial expressions, achieving an average accuracy exceeding 95% while simultaneously recognizing multiple faces. Experimental results demonstrate the system’s potential for real-world applications in surveillance, access control, and smart home environments. The proposed architecture minimizes computational load, reduces dependency on centralized servers, and enhances privacy, offering a promising step toward scalable edge AI solutions.
(This article belongs to the Special Issue Secure and Decentralised IoT Systems)

26 pages, 2329 KB  
Article
Federated Learning for Surveillance Systems: A Literature Review and AHP Expert-Based Evaluation
by Yongjoo Shin, Hansung Kim, Jaeyeong Jeong and Dongkyoo Shin
Electronics 2025, 14(17), 3500; https://doi.org/10.3390/electronics14173500 - 1 Sep 2025
Cited by 1 | Viewed by 2824
Abstract
This study explores the application of federated learning (FL) in security camera surveillance systems to overcome the structural limitations inherent in traditional centralized artificial intelligence (AI) training approaches, while simultaneously enhancing operational efficiency and data security. Conventional centralized AI models require the transmission of raw surveillance data from individual security camera units to a central server for model training, which poses significant challenges, including network congestion, a heightened risk of personal data leakage, and inadequate adaptation to localized environmental characteristics. These limitations are particularly critical in high-security environments such as military bases and government facilities, where reliability and real-time processing are paramount. In contrast, FL enables decentralized training by retaining data on local devices and sharing only model parameters with a central aggregator, thereby improving privacy preservation, reducing communication overhead, and facilitating adaptive, context-aware learning. This paper does not present a new federated learning algorithm or original experiment. Instead, it synthesizes existing research findings and applies the Analytic Hierarchy Process (AHP) to evaluate and prioritize critical factors for deploying FL in surveillance systems. By combining literature-based evidence with structured expert judgment, this study provides practical guidelines for real-world application. This paper identifies four key performance metrics—detection accuracy, false alarm rate, response time, and network load—and conducts a comparative analysis of FL and centralized AI-based approaches in the recent literature. In addition, the AHP is employed to evaluate expert survey data, quantitatively prioritizing eight critical factors for effective FL implementation. The results highlight detection accuracy and data security as the most significant concerns, indicating that FL presents a promising solution for future smart surveillance infrastructures. This research contributes to the advancement of AI-powered surveillance systems that are both high-performing and resilient under stringent privacy and operational constraints.
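
The FL training scheme described, in which cameras share only model parameters with a central aggregator, is typically instantiated with federated averaging. Below is a minimal sketch of the server-side aggregation step (the canonical FedAvg update, shown for illustration; it is not an algorithm from the reviewed papers, and the client counts are hypothetical).

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client parameter vectors weighted by
    each client's local dataset size; raw footage never leaves the camera."""
    total = sum(client_sizes)
    ndim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(ndim)]

# Two cameras with 300 and 100 local clips, each holding a tiny 2-parameter model
global_w = fed_avg([[1.0, 2.0], [5.0, 6.0]], [300, 100])
# → [2.0, 3.0]
```

Only these averaged parameters traverse the network, which is the mechanism behind the privacy and bandwidth benefits the abstract attributes to FL.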