Search Results (195)

Search Parameters:
Keywords = surveillance robots

12 pages, 408 KiB  
Article
Process Limit of Detection for Salmonella Typhi, Vibrio cholerae, Rotavirus, and SARS-CoV-2 in Surface Water and Wastewater
by Pengbo Liu, Orlando Sablon, Anh Nguyen, Audrey Long and Christine Moe
Water 2025, 17(14), 2077; https://doi.org/10.3390/w17142077 - 11 Jul 2025
Viewed by 189
Abstract
Wastewater-based epidemiology (WBE) has historically proven to be a powerful surveillance tool, particularly during the SARS-CoV-2 pandemic. Effective WBE depends on the sensitive detection of pathogens in wastewater. However, determining the process limit of detection (PLOD) of WBE through a comprehensive evaluation that accounts for pathogen concentration, nucleic acid extraction, and molecular analysis has rarely been documented. We prepared dilution series with known concentrations of S. Typhi, V. cholerae, rotavirus, and SARS-CoV-2 in surface water and wastewater. Pathogen concentration was performed using Nanotrap particles with the KingFisher™ Apex robotic platform, followed by nucleic acid extraction. Quantitative real-time PCR (qPCR) and digital PCR (dPCR) were used to detect the extracted nucleic acids of the pathogens. The PLODs and recovery efficiencies for each of the four pathogens in surface water and wastewater were determined. Overall, the observed PLODs for S. Typhi, V. cholerae, and rotavirus in surface water and wastewater were approximately 3 log₁₀ loads (2.1–2.8 × 10³/10 mL) using either qPCR or dPCR as the detection method. For SARS-CoV-2, the PLOD in surface water was 2.9 × 10⁴/10 mL with both RT-qPCR and dPCR, one log₁₀ higher than the PLODs of the other three pathogens. In wastewater, the PLOD for SARS-CoV-2 was 2.9 × 10⁴/10 mL using RT-qPCR and 2.9 × 10³/10 mL using dPCR. The mean recovery rates of S. Typhi, V. cholerae, rotavirus, and SARS-CoV-2 for dPCR in both surface water and wastewater were below 10.4%, except for S. Typhi and V. cholerae in wastewater, which showed significantly higher recoveries, from 26.5% at 4.6 × 10⁵/10 mL for S. Typhi to 58.8% at 4.8 × 10⁵/10 mL for V. cholerae. Our study demonstrated that combining qPCR or dPCR analysis with automated Nanotrap particle concentration and nucleic acid extraction using the KingFisher™ platform enables the sensitive detection of S. Typhi, V. cholerae, rotavirus, and SARS-CoV-2 in surface water and wastewater. Full article
(This article belongs to the Section Water and One Health)

25 pages, 1155 KiB  
Article
A Framework for Bluetooth-Based Real-Time Audio Data Acquisition in Mobile Robotics
by Sandeep Gupta, Udit Mamodiya, A. K. M. Zakir Hossain and Ahmed J. A. Al-Gburi
Signals 2025, 6(3), 31; https://doi.org/10.3390/signals6030031 - 2 Jul 2025
Viewed by 319
Abstract
This paper presents a novel framework addressing the fundamental challenge of concurrent real-time audio acquisition and motor control in resource-constrained mobile robotics. The ESP32-based system integrates a digital MEMS microphone with rover mobility through a unified Bluetooth protocol. Key innovations include (1) a dual-thread architecture enabling non-blocking concurrent operation, (2) an adaptive eight-bit compression algorithm optimizing bandwidth while preserving audio quality, and (3) a mathematical model for real-time resource allocation. A comprehensive empirical evaluation demonstrates consistent control latency below 150 ms with 90–95% audio packet delivery rates across varied environments. The framework enables mobile acoustic sensing applications while maintaining responsive motor control, validated through comprehensive testing in 40–85 dB acoustic environments at distances up to 10 m. A performance analysis demonstrates the feasibility of high-fidelity mobile acoustic sensing on embedded platforms, opening new possibilities for environmental monitoring, surveillance, and autonomous acoustic exploration systems. Full article
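The adaptive eight-bit compression step is the most code-shaped idea in this entry. Below is a minimal Python sketch of one way such a scheme could work, assuming each 16-bit PCM frame is rescaled by its peak amplitude before truncation to eight bits; the frame size, scaling rule, and function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def compress_frame(pcm16: np.ndarray) -> tuple[np.ndarray, float]:
    """Adaptively quantize a frame of 16-bit PCM samples to 8 bits.

    The scale factor is chosen per frame from the peak amplitude, so quiet
    frames keep more resolution than a fixed divide-by-256 scheme would.
    (Illustrative sketch; not the paper's exact algorithm.)
    """
    peak = float(np.max(np.abs(pcm16.astype(np.int32)))) or 1.0
    scale = peak / 127.0
    q = np.clip(np.round(pcm16 / scale), -128, 127).astype(np.int8)
    return q, scale  # the scale must be transmitted alongside the frame

def decompress_frame(q: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate 16-bit PCM frame from 8-bit samples."""
    return (q.astype(np.float32) * scale).astype(np.int16)

# Example: a 256-sample sine frame survives the round trip with small error.
t = np.arange(256)
frame = (3000 * np.sin(2 * np.pi * 440 * t / 16000)).astype(np.int16)
q, s = compress_frame(frame)
rec = decompress_frame(q, s)
print("max abs error:", int(np.max(np.abs(rec.astype(int) - frame.astype(int)))))
```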

16 pages, 1093 KiB  
Article
A Lightweight Framework for Audio-Visual Segmentation with an Audio-Guided Space–Time Memory Network
by Yunpeng Zuo and Yunwei Zhang
Appl. Sci. 2025, 15(12), 6585; https://doi.org/10.3390/app15126585 - 11 Jun 2025
Viewed by 433
Abstract
As a multimodal fusion task, audio-visual segmentation (AVS) aims to locate sounding objects at the pixel level within a given image. This capability holds significant importance and practical value in applications such as intelligent surveillance, multimedia content analysis, and human–robot interaction. However, existing AVS models typically feature complex architectures, require a large number of parameters, and are challenging to deploy on embedded platforms. Furthermore, these models often lack integration with object tracking mechanisms and fail to address the issue of the mis-segmentation of unvoiced objects caused by environmental noise in real-world scenarios. To address these challenges, this research proposes a lightweight audio-visual segmentation framework incorporating an audio-guided space–time memory network (AG-STMNet). First, a mask generator with a scoring mechanism was developed to identify sounding objects from generated masks. This component integrates Fastsam, a lightweight, pre-trained, object-aware segmentation model, with WAV2CLIP, a parameter-efficient audio-visual alignment model. Subsequently, AG-STMNet, an audio-guided video object segmentation network, was introduced to track sounding objects using video object segmentation techniques while mitigating environmental noise. Finally, the mask generator and AG-STMNet were combined to form the complete framework. The experimental results demonstrate that the framework achieves a mean Intersection over Union (mIoU) score of 41.5, indicating its potential as a viable lightweight solution for practical applications. Full article
(This article belongs to the Special Issue Artificial Intelligence and Its Application in Robotics)
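As a rough illustration of the mask-scoring idea (ranking class-agnostic masks by how well the region they cover matches the audio), here is a hedged Python sketch that scores masks by cosine similarity between an audio embedding and per-mask image embeddings in a shared space. The embedding function is a placeholder standing in for models such as WAV2CLIP, not their real APIs.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def score_masks(image: np.ndarray,
                masks: list[np.ndarray],
                audio_embedding: np.ndarray,
                embed_image_region) -> list[float]:
    """Score each candidate mask by audio-visual similarity.

    `embed_image_region(image, mask)` is a placeholder for a model that maps
    the masked region into the same embedding space as the audio (for example,
    a CLIP-style image encoder paired with an audio encoder). Higher scores
    mean the region is more likely to be the sounding object.
    """
    scores = []
    for mask in masks:
        region_emb = embed_image_region(image, mask)
        scores.append(cosine(region_emb, audio_embedding))
    return scores

# Toy usage with random embeddings standing in for real encoders.
rng = np.random.default_rng(0)
fake_embed = lambda image, mask: rng.normal(size=512)
img = np.zeros((240, 320, 3))
masks = [np.zeros((240, 320), bool) for _ in range(3)]
audio_emb = rng.normal(size=512)
print(score_masks(img, masks, audio_emb, fake_embed))
```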

26 pages, 11410 KiB  
Article
High-Speed Multiple Object Tracking Based on Fusion of Intelligent and Real-Time Image Processing
by Yuki Kawawaki and Yuji Yamakawa
Sensors 2025, 25(11), 3400; https://doi.org/10.3390/s25113400 - 28 May 2025
Viewed by 796
Abstract
Multiple object tracking (MOT) is a critical and active research topic in computer vision, serving as a fundamental technique across various application domains such as human–robot interaction, autonomous driving, and surveillance. MOT typically consists of two key components: detection, which produces bounding boxes around objects, and association, which links current detections to existing tracks. Two main approaches have been proposed: one-shot and two-shot methods. While previous works have improved MOT systems in terms of both speed and accuracy, most works have focused primarily on enhancing association performance, often overlooking the impact of accelerating detection. Thus, we propose a high-speed MOT system that balances real-time performance, tracking accuracy, and robustness across diverse environments. Our system comprises two main components: (1) a hybrid tracking framework that integrates low-frequency deep learning-based detection with classical high-speed tracking, and (2) a detection label-based tracker management strategy. We evaluated our system in six scenarios using a high-speed camera and compared its performance against seven state-of-the-art (SOTA) two-shot MOT methods. Our system achieved up to 470 fps when tracking two objects, 243 fps with three objects, and 178 fps with four objects. In terms of tracking accuracy, our system achieved the highest MOTA, IDF1, and HOTA scores with high-accuracy detection. Even with low detection accuracy, it demonstrated the potential of long-term association for high-speed tracking, achieving comparable or better IDF1 scores. We hope that our multi-processing architecture contributes to the advancement of MOT research and serves as a practical and efficient baseline for systems involving multiple asynchronous modules. Full article
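The hybrid idea of running a comparatively slow detector at low frequency while a classical tracker updates boxes on every frame can be sketched as a simple loop. The detector and tracker below are placeholder callables introduced for illustration, not the paper's modules, and the real system runs such components asynchronously in separate processes rather than in a single loop.

```python
def hybrid_mot(frames, detect, make_tracker, detect_every=10):
    """Low-frequency detection plus per-frame classical tracking.

    detect(frame) -> list of (x, y, w, h, label) boxes (slow, e.g. a CNN).
    make_tracker(frame, box) -> object with .update(frame) -> box (fast).
    Trackers are reinitialized whenever detection runs, which is where a
    detection-label-based tracker-management policy would plug in.
    """
    trackers, labels = [], []
    results = []
    for i, frame in enumerate(frames):
        if i % detect_every == 0:
            detections = detect(frame)                      # slow path
            trackers = [make_tracker(frame, d[:4]) for d in detections]
            labels = [d[4] for d in detections]
            boxes = [d[:4] for d in detections]
        else:
            boxes = [t.update(frame) for t in trackers]     # fast path
        results.append(list(zip(labels, boxes)))
    return results
```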

37 pages, 2036 KiB  
Article
GCN-Transformer: Graph Convolutional Network and Transformer for Multi-Person Pose Forecasting Using Sensor-Based Motion Data
by Romeo Šajina, Goran Oreški and Marina Ivašić-Kos
Sensors 2025, 25(10), 3136; https://doi.org/10.3390/s25103136 - 15 May 2025
Viewed by 1031
Abstract
Multi-person pose forecasting involves predicting the future body poses of multiple individuals over time, which requires modeling complex movement dynamics and interaction dependencies. Its relevance spans various fields, including computer vision, robotics, human–computer interaction, and surveillance. This task is particularly important in sensor-driven applications, where motion capture systems, including vision-based sensors and IMUs, provide crucial data for analyzing human movement. This paper introduces GCN-Transformer, a novel model for multi-person pose forecasting that leverages the integration of Graph Convolutional Network and Transformer architectures. We integrated novel loss terms during the training phase to enable the model to learn both interaction dependencies and the trajectories of multiple joints simultaneously. Additionally, we propose a novel pose forecasting evaluation metric called Final Joint Position and Trajectory Error (FJPTE), which assesses both local movement dynamics and global movement errors by considering the final position and the trajectory leading up to it, providing a more comprehensive assessment of movement dynamics. Our model uniquely integrates scene-level graph-based encoding and personalized attention-based decoding, introducing a novel architecture for multi-person pose forecasting that achieves state-of-the-art results across four datasets. The model is trained and evaluated on the CMU-Mocap, MuPoTS-3D, SoMoF Benchmark, and ExPI datasets, which are collected using sensor-based motion capture systems, ensuring its applicability in real-world scenarios. Comprehensive evaluations on these four datasets demonstrate that the proposed GCN-Transformer consistently outperforms existing state-of-the-art (SOTA) models according to the VIM and MPJPE metrics. Specifically, based on the MPJPE metric, GCN-Transformer shows a 4.7% improvement over the closest SOTA model on CMU-Mocap, a 4.3% improvement on MuPoTS-3D, a 5% improvement on the SoMoF Benchmark, and a 2.6% improvement on ExPI. Unlike other models, whose performance fluctuates across datasets, GCN-Transformer performs consistently, proving its robustness in multi-person pose forecasting and providing an excellent foundation for its application in different domains. Full article
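Of the reported metrics, MPJPE has a standard definition: the mean Euclidean distance between predicted and ground-truth joint positions. A minimal sketch, assuming arrays shaped (persons, frames, joints, 3):

```python
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean per-joint position error over persons, frames, and joints.

    pred, gt: arrays of shape (persons, frames, joints, 3) in the same units
    (for example, millimetres). Returns the average Euclidean joint error.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy check: a constant 10-unit offset along x yields an MPJPE of 10.
gt = np.zeros((2, 30, 15, 3))
pred = gt.copy()
pred[..., 0] += 10.0
print(mpjpe(pred, gt))  # 10.0
```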

32 pages, 2540 KiB  
Article
Formation Control of Wheeled Mobile Robots with Fault-Tolerance Capabilities
by Muhammad Shahab, Ali Nasir and Nezar M. Alyazidi
Robotics 2025, 14(5), 59; https://doi.org/10.3390/robotics14050059 - 27 Apr 2025
Viewed by 561
Abstract
This research investigates the impact of actuator faults on the formation control of multiple wheeled mobile robots, a critical aspect of coordinating multi-robot systems for applications such as surveillance, exploration, and transportation. When a fault occurs in any of the robots, it can disrupt the formation and adversely affect the system’s performance, compromising efficiency and reliability. While numerous studies have focused on fault-tolerant control strategies to maintain formation integrity, there is a notable gap in the literature regarding the relationship between controller gains and settling time under varying degrees of actuator loss. In this paper, we develop a kinematic model of wheeled mobile robots and implement a leader–follower-based formation control strategy. Actuator faults are systematically introduced with varying levels of effectiveness (e.g., 80%, 60%, and 40% of full capacity) to observe their effects on formation maintenance. We generate data correlating controller gains with settling time under different actuator loss conditions and fit a polynomial curve to derive an equation describing this relationship. Comprehensive MATLAB simulations are conducted to evaluate the proposed methodology. The results demonstrate the influence of actuator faults on the formation control system and provide valuable insights into optimizing controller gains for improved fault tolerance. These findings contribute to the development of more robust multi-robot systems capable of maintaining formation and performance despite actuator failures. Full article
(This article belongs to the Section Intelligent Robots and Mechatronics)
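The gain-versus-settling-time fitting step can be illustrated with NumPy's polynomial fit; the data points below are made-up placeholders for a single actuator-effectiveness level, not results from the paper.

```python
import numpy as np

# Hypothetical (gain, settling time in s) pairs at 60% actuator effectiveness.
gains = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
settling_times = np.array([9.8, 6.1, 4.5, 3.9, 3.6, 3.5])

# Fit a quadratic settling_time ~ a*gain**2 + b*gain + c.
coeffs = np.polyfit(gains, settling_times, deg=2)
model = np.poly1d(coeffs)

print("fitted coefficients:", np.round(coeffs, 3))
print("predicted settling time at gain 1.8:", round(float(model(1.8)), 2), "s")
```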

30 pages, 13157 KiB  
Article
Development of IoT-Based Hybrid Autonomous Networked Robots
by Maki K. Habib and Chimsom I. Chukwuemeka
Technologies 2025, 13(5), 168; https://doi.org/10.3390/technologies13050168 - 23 Apr 2025
Viewed by 540
Abstract
Autonomous Networked Robot (ANR) systems feature multi-robot systems (MRSs) and wireless sensor networks (WSNs). These systems help to extend coverage, maximize efficiency in data routing, and provide practical and reliable task management, among other benefits. This article presents the development and implementation of an IoT-based hybrid ANR system integrated with different cloud platforms. The system comprises two main components, the physical hybrid ANR and the simulation development environment (SDE) with hardware in the loop (HIL), together with the necessary core interfaces. Both components are integrated to facilitate system component development, simulation, testing, monitoring, and validation. The operational environment (local and/or distributed) of the designed system is divided into zones, and each zone comprises static IoT-based sensor nodes (SSNs) and a mobile robot with integrated onboard IoT-based sensor nodes (O-SSNs), called the mobile robot sensor node (MRSN). Global MRSNs (G-MRSNs) navigate spaces not covered by a zone. The mobile robots navigate within/around their designated spaces and to any of their SSNs. The SSNs and the O-SSN of each zone are supported by the ZigBee protocol, forming a WSN. The MRSNs and G-MRSNs communicate the data collected from different zones to the base station (BS) through the IoT base station gateway (IoT-BSG) using a wireless serial protocol. The base station analyzes and visualizes the received data through GUIs and communicates the data through the IoT/cloud using the Wi-Fi protocol. The developed system is demonstrated for event detection and surveillance. Experimental results of the implemented and simulated ANR system, together with HIL experiments, validate the performance of the developed IoT-based hybrid architecture. Full article
(This article belongs to the Special Issue IoT-Enabling Technologies and Applications)

24 pages, 4894 KiB  
Article
Design and Implementation of a Position-Based Coordinated Formation System for Underwater Multiple Small Spherical Robots
by Xihuan Hou, Shuxiang Guo, Zan Li, Huimin Shi, Na Yuan and Huiming Xing
Oceans 2025, 6(2), 21; https://doi.org/10.3390/oceans6020021 - 14 Apr 2025
Viewed by 793
Abstract
Owing to their excellent concealment and high mobility, multiple small spherical underwater robots are well suited to near-coast defense missions. Formations of such robots are particularly effective for tasks such as patrolling, reconnaissance, surveillance, and capturing sensitive targets. Moreover, some tasks, such as reconnaissance, surveillance, or target encirclement at fixed locations, require even greater flexibility and mobility. For this purpose, this paper proposes a position-based formation mechanism that is easily deployed on multiple spherical robots. A position planning method for the formation process is designed. This method integrates a virtual linkage strategy with an improved consensus algorithm and the artificial potential field (APF) method. The virtual linkage strategy computes the global desired target positions of the formation for the robots according to the predefined position of the virtual leader joint. The improved consensus algorithm and the APF plan the local desired positions between consecutive desired target positions, preventing collisions and excessive communication distances between robots. To verify the effectiveness of the proposed formation mechanism, simulations and experiments are conducted. The proposed formation framework therefore offers great potential for future practical marine operations of underwater systems of multiple small robots. Full article
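The artificial potential field component follows a standard formulation: an attractive pull toward a goal plus a repulsive push away from nearby robots or obstacles. A minimal 2D sketch of that textbook form, with made-up gains, is shown below; it is not the paper's tuned implementation.

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0, step=0.1):
    """One artificial-potential-field update for a 2D robot position.

    Attractive force: -k_att * (pos - goal).
    Repulsive force (per obstacle within influence distance d0):
        k_rep * (1/d - 1/d0) / d**2, directed away from the obstacle.
    Gains and distances here are illustrative values.
    """
    force = -k_att * (pos - goal)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            force += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return pos + step * force

# Toy usage: a robot is attracted to (5, 5) and deflected by an obstacle near its path.
pos = np.array([0.0, 0.0])
for _ in range(100):
    pos = apf_step(pos, goal=np.array([5.0, 5.0]),
                   obstacles=[np.array([2.0, 2.6])])
print(np.round(pos, 2))
```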

20 pages, 1282 KiB  
Review
The Role of Prophylactic Gastrectomy in Gastric Adenocarcinoma and Proximal Polyposis of the Stomach: A Systematic Review
by Cosmina Fugărețu, Valeriu Marin Șurlin, Catalin Misarca, Daniela Marinescu, Stefan Patrascu, Sandu Ramboiu, Radu Petre, Victor Dan Eugen Strâmbu and Michael Schenker
J. Clin. Med. 2025, 14(7), 2522; https://doi.org/10.3390/jcm14072522 - 7 Apr 2025
Viewed by 708
Abstract
Background/Objectives: Gastric adenocarcinoma and proximal polyposis of the stomach (GAPPS) is a recently discovered disease with autosomal dominant transmission. Patients with this condition have a higher risk of developing gastric cancer. There are numerous open questions regarding the natural history of this condition, as well as the diagnostic and therapeutic management of these patients. In this systematic review, we aimed to examine the current literature to determine the role of prophylactic gastrectomy in patients diagnosed with gastric adenocarcinoma and proximal polyposis of the stomach. Additional outcomes were Helicobacter pylori (HP) infection, treatment with proton pump inhibitors (PPIs), colonoscopic examination, and abdominal imaging, as these are important factors in the therapeutic decision. Methods: We performed a systematic review of the articles published in PubMed and Google Scholar, according to the PRISMA 2020 criteria. Results: We identified 24 studies that included 83 patients diagnosed with GAPPS, of whom 42 underwent prophylactic gastrectomy, 24 underwent endoscopic follow-up, and 17 were diagnosed with gastric cancer at the first gastroscopic examination. In the prophylactic gastrectomy specimens, malignant gastric disease was confirmed in 10% of cases. GAPPS has been diagnosed more frequently in women. Conclusions: So far, the specialized literature includes a limited number of patients diagnosed with GAPPS, and there are not yet guidelines for the diagnosis and treatment of these patients. Prophylactic gastrectomy and endoscopic surveillance are the only options for patients diagnosed with GAPPS without gastric cancer at the initial examination. For prophylactic gastrectomy, robotic and laparoscopic approaches were preferred. To establish the appropriate extent of lymphadenectomy in prophylactic gastrectomy, future research on gastrectomy specimens is necessary. Most of the included studies were deficient in terms of postoperative follow-up of patients; we therefore consider it useful to include these patients in a single database. For a comprehensive evaluation of these patients and an appropriate therapeutic decision, we consider it necessary to perform a colonoscopic examination, obtain abdominal imaging, and determine the Helicobacter pylori infection status. Full article
(This article belongs to the Special Issue Gastrointestinal Cancer: Outcomes and Therapeutic Management)

14 pages, 428 KiB  
Systematic Review
Minimally Invasive Salvage Approaches for Management of Recurrence After Primary Renal Mass Ablation
by Mohammadreza Askarpour and Alireza Aminsharifi
Cancers 2025, 17(6), 974; https://doi.org/10.3390/cancers17060974 - 13 Mar 2025
Cited by 1 | Viewed by 783
Abstract
Objectives: Thermal ablation has emerged as an effective, nephron-sparing treatment for small renal masses (SRMs), particularly in patients with comorbidities. However, tumor recurrence remains a challenge, necessitating evidence-based approaches for salvage management. This review examines the outcomes of minimally invasive modalities for managing recurrence following the primary ablation of SRMs. Methods: A literature review was conducted using the Medline database, following PRISMA guidelines. Studies published between 1981 and 2024 were screened based on predefined PICO criteria. Inclusion focused on patients with tumor recurrence after primary ablation therapy who underwent minimally invasive salvage treatments including repeat ablation, laparoscopic/robotic partial or radical nephrectomy, or active surveillance. Data extracted included patient demographics, initial treatments, recurrence timelines, salvage modalities, and outcomes. Results: Of 364 patients across 29 studies, 249 (68.4%) underwent re-ablation, 82 (22.5%) were treated with laparoscopic/robotic partial or radical nephrectomy, and 33 (9%) were managed with active surveillance. Among 249 patients with re-ablation, 179 (71.9%) showed no evidence of disease, 67 (26.9%) experienced recurrence, and outcomes were indeterminate in 3 (1.2%). Salvage minimally invasive partial or radical nephrectomy outcomes were available for 64 cases, with 63 (98.4%) showing no recurrence. In 33 patients under active surveillance, 4 died, 4 underwent surgery, and 25 remained under monitoring, though follow-up data were limited. Conclusion: Minimally invasive salvage modalities, such as re-ablation and laparoscopic/robotic partial or radical nephrectomy, effectively manage tumor recurrence, achieving >70% and >98% oncological success, respectively, despite technical challenges. Future studies directly comparing these modalities are essential for establishing standardized protocols for salvage management. Full article

27 pages, 3723 KiB  
Article
SESAME: Automated Security Assessment of Robots and Modern Multi-Robot Systems
by Manos Papoutsakis, George Hatzivasilis, Emmanouil Michalodimitrakis, Sotiris Ioannidis, Maria Michael, Antonis Savva, Panagiota Nikolaou, Eftychia Stokkou and Gizem Bozdemir
Electronics 2025, 14(5), 923; https://doi.org/10.3390/electronics14050923 - 26 Feb 2025
Viewed by 1335
Abstract
As robotic systems become more integrated into our daily lives, there is growing concern about cybersecurity. Robots used in areas such as autonomous driving, surveillance, surgery, home assistance, and industrial automation can be vulnerable to cyber-attacks, which could have serious real-world consequences. Modern robotic systems face a unique set of threats due to their evolving characteristics. This paper outlines the SESAME project’s methodology for the automated security analysis of multi-robot systems (MRS) and the production of Executable Digital Dependability Identities (EDDIs). Addressing security challenges in MRS involves overcoming complex factors such as increased connectivity, human–robot interactions, and a lack of risk awareness. The proposed methodology encompasses a detailed process, starting from system description and vulnerability identification and moving to the generation of attack trees and security EDDIs. The SESAME security methodology leverages structured repositories like Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), and Common Attack Pattern Enumeration and Classification (CAPEC) to identify potential vulnerabilities and associated attacks. The introduction of Template Attack Trees facilitates modeling potential attacks, helping security experts develop effective mitigation strategies. This approach not only identifies, but also connects, specific vulnerabilities to possible exploits, thereby generating comprehensive security assessments. By merging safety and security assessments, this methodology ensures the overall dependability of MRS, providing a robust framework to mitigate cyber–physical threats. Full article
(This article belongs to the Special Issue Cyber-Physical Systems: Recent Developments and Emerging Trends)
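To make the attack-tree idea concrete, here is a small generic Python sketch of an AND/OR attack-tree node whose leaves can carry identifiers such as CWE or CAPEC entries; the structure, field names, and example scenario are illustrative assumptions, not the SESAME EDDI format.

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    """A node in an AND/OR attack tree.

    Leaves can carry references such as CWE or CAPEC identifiers (the one
    shown below is an example); the `satisfied` flags would come from a
    vulnerability scan of the multi-robot system.
    """
    goal: str
    gate: str = "OR"                 # "AND" or "OR" over the children
    refs: list[str] = field(default_factory=list)
    children: list["AttackNode"] = field(default_factory=list)
    satisfied: bool = False          # for leaves: this attack step is feasible

    def feasible(self) -> bool:
        if not self.children:
            return self.satisfied
        results = [c.feasible() for c in self.children]
        return all(results) if self.gate == "AND" else any(results)

# Toy tree: hijack teleoperation via an unauthenticated interface OR a weak Wi-Fi setup.
root = AttackNode("Hijack robot teleoperation", "OR", children=[
    AttackNode("Exploit unauthenticated control interface", refs=["CWE-306"], satisfied=True),
    AttackNode("Join poorly secured Wi-Fi and spoof commands", "AND", children=[
        AttackNode("Crack Wi-Fi passphrase", satisfied=False),
        AttackNode("Replay motion commands", satisfied=True),
    ]),
])
print(root.feasible())  # True via the first branch
```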

19 pages, 5398 KiB  
Article
EHC-GCN: Efficient Hierarchical Co-Occurrence Graph Convolution Network for Skeleton-Based Action Recognition
by Ying Bai, Dongsheng Yang, Jing Xu, Lei Xu and Hongliang Wang
Appl. Sci. 2025, 15(4), 2109; https://doi.org/10.3390/app15042109 - 17 Feb 2025
Viewed by 942
Abstract
In tasks such as intelligent surveillance and human–computer interaction, developing rapid and effective models for human action recognition is crucial. Currently, Graph Convolution Networks (GCNs) are widely used for skeleton-based action recognition, but they face two main issues: (1) the insufficient capture of global joint responses, which makes it difficult to exploit the correlations between all joints, and (2) a tendency to be over-parameterized. In this paper, we therefore propose an Efficient Hierarchical Co-occurrence Graph Convolution Network (EHC-GCN). Employing a simple and practical hierarchical co-occurrence framework to adjust the degree of feature aggregation on demand, we first use spatial graph convolution to learn the local features of joints and then aggregate the global features of all joints. Secondly, we introduce depth-wise separable convolution layers to reduce the model parameters. Additionally, we apply a two-stream branch and an attention mechanism to further extract discriminative features. On two large-scale datasets, the proposed EHC-GCN achieves better or comparable performance to state-of-the-art methods on both 2D and 3D skeleton data, with fewer parameters and lower computational complexity, which makes it better suited to robot platforms with limited computing resources. Full article
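The depth-wise separable convolution used to cut parameters is a standard building block. Here is a short PyTorch sketch, an illustrative stand-in rather than the authors' exact layer configuration, showing the parameter saving over a regular convolution.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv2d(nn.Module):
    """Depth-wise convolution (per-channel) followed by a 1x1 point-wise mix."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

regular = nn.Conv2d(64, 128, 3, padding=1)
separable = DepthwiseSeparableConv2d(64, 128, 3)
print(n_params(regular), "vs", n_params(separable))  # ~73.9k vs ~9.0k weights
```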

19 pages, 30440 KiB  
Article
A Method for the Calibration of a LiDAR and Fisheye Camera System
by Álvaro Martínez, Antonio Santo, Monica Ballesta, Arturo Gil and Luis Payá
Appl. Sci. 2025, 15(4), 2044; https://doi.org/10.3390/app15042044 - 15 Feb 2025
Cited by 2 | Viewed by 1470
Abstract
LiDAR and camera systems are frequently used together to gain a more complete understanding of the environment in fields such as mobile robotics, autonomous driving, and intelligent surveillance. Accurately calibrating the extrinsic parameters is crucial for the accurate fusion of the data captured by both systems, which is equivalent to finding the transformation between the reference systems of the two sensors. Traditional calibration methods for LiDAR and camera systems are developed for pinhole cameras and are not directly applicable to fisheye cameras. This work proposes a target-based calibration method for LiDAR and fisheye camera systems that avoids the need to transform images to a pinhole camera model, reducing the computation time. Instead, the method uses the spherical projection of the image, obtained from the intrinsic calibration parameters, together with the corresponding point cloud, for LiDAR–fisheye calibration. Thus, unlike a pinhole-camera-based system, a wider field of view is provided, adding more information, which leads to a better understanding of the environment and makes it possible to cover a wider area with fewer image sensors. Full article
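The spherical projection at the core of the method can be illustrated in a few lines: each LiDAR point, once expressed in the camera frame, is mapped to a unit direction and then to longitude/latitude image coordinates. This is a generic equirectangular sketch under assumed axis conventions, not the authors' calibration code.

```python
import numpy as np

def points_to_spherical_image(points_cam: np.ndarray, width: int, height: int):
    """Project 3D points (N, 3) in the camera frame onto an equirectangular grid.

    Longitude = atan2(x, z), latitude = asin(y / r); both are mapped linearly
    to pixel coordinates. The axis conventions are assumptions for this sketch.
    """
    r = np.linalg.norm(points_cam, axis=1)
    valid = r > 1e-6
    x, y, z = points_cam[valid].T
    lon = np.arctan2(x, z)                  # [-pi, pi]
    lat = np.arcsin(y / r[valid])           # [-pi/2, pi/2]
    u = (lon / (2 * np.pi) + 0.5) * (width - 1)
    v = (lat / np.pi + 0.5) * (height - 1)
    return np.stack([u, v], axis=1)

# Toy usage: project a handful of random points into a 1024x512 spherical image.
pts = np.random.default_rng(1).normal(size=(5, 3))
print(np.round(points_to_spherical_image(pts, 1024, 512), 1))
```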

23 pages, 13710 KiB  
Article
Research on Indoor Positioning Systems and Autonomous Mechatronic Systems for Surveillance of Intrabuilding Zones
by Alexandru Vlăsceanu, Mihai Avram, Victor Constantin and Edgar Moraru
Appl. Sci. 2025, 15(2), 918; https://doi.org/10.3390/app15020918 - 17 Jan 2025
Cited by 1 | Viewed by 1142
Abstract
Given increasingly complex threats, adapting surveillance approaches to meet the necessary security levels is essential. The aim of this paper is to develop a surveillance architecture based on autonomous mechatronic systems (mobile robots and drones) designed to secure areas of strategic interest, covering a predefined space while providing enhanced security at minimal cost. Designing such a system is challenging due to the need for continuous monitoring, which raises issues of autonomy as well as the design of the navigation and command-and-control systems. This paper presents key surveillance systems, demonstrating their efficiency and further development potential. It then discusses the steps taken to enable a team of autonomous mobile robots to monitor a known indoor location by using a specialized positioning system. Steps are taken to deploy, implement, and configure this indoor global positioning system (GPS). Among those steps, a study is performed by attaching one of the mobile beacons to a linear axis and moving the axis to known points. Each position is read by both the axis and the positioning system, and the results are compared. Full article
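The beacon-on-a-linear-axis check boils down to comparing two position series at known points. A short sketch of that comparison (RMSE and worst-case error), using made-up readings rather than the paper's measurements:

```python
import numpy as np

# Commanded axis positions (ground truth) and hypothetical indoor-"GPS"
# beacon readings along the same line, both in millimetres.
axis_mm = np.array([0, 100, 200, 300, 400, 500], dtype=float)
beacon_mm = np.array([3, 96, 207, 298, 411, 494], dtype=float)

errors = beacon_mm - axis_mm
rmse = float(np.sqrt(np.mean(errors ** 2)))
print(f"RMSE: {rmse:.1f} mm, worst case: {np.max(np.abs(errors)):.1f} mm")
```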

17 pages, 7356 KiB  
Article
Increasing Neural-Based Pedestrian Detectors’ Robustness to Adversarial Patch Attacks Using Anomaly Localization
by Olga Ilina, Maxim Tereshonok and Vadim Ziyadinov
J. Imaging 2025, 11(1), 26; https://doi.org/10.3390/jimaging11010026 - 17 Jan 2025
Viewed by 1422
Abstract
Object detection in images is a fundamental component of many safety-critical systems, such as autonomous driving, video surveillance systems, and robotics. Adversarial patch attacks, which are easily implemented in the real world, can effectively defeat object detection by state-of-the-art neural-based detectors, posing a serious danger in various fields of activity. Existing defense methods against patch attacks are insufficiently effective, which underlines the need to develop new, reliable solutions. In this manuscript, we propose a method that helps to increase the robustness of neural network systems to adversarial input images. The proposed method consists of a Deep Convolutional Neural Network to reconstruct a benign image from the adversarial one; a Calculating Maximum Error block to highlight the mismatches between the input and reconstructed images; a Localizing Anomalous Fragments block to extract the anomalous regions from histograms of image fragments using the Isolation Forest algorithm; and a Clustering and Processing block to group and evaluate the extracted anomalous regions. The proposed method, based on anomaly localization, demonstrates high resistance to adversarial patch attacks while maintaining the high quality of object detection. The experimental results show that the proposed method is effective in defending against adversarial patch attacks. Using the YOLOv3 algorithm with the proposed defensive method for pedestrian detection on the INRIAPerson dataset under adversarial attacks, the mAP50 metric reaches 80.97%, compared to 46.79% without a defensive method. These results demonstrate that the proposed method is promising for improving the security of object detection systems. Full article
(This article belongs to the Section Image and Video Processing)
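The anomaly-localization step, an Isolation Forest over per-fragment histograms of the reconstruction error, can be sketched with scikit-learn. Fragment size, histogram bins, and contamination are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def anomalous_fragments(error_map: np.ndarray, frag: int = 32, bins: int = 16):
    """Flag image fragments whose error histograms look anomalous.

    error_map: per-pixel absolute difference between the input image and its
    reconstruction, values in [0, 1]. Returns (row, col) indices of fragments
    judged anomalous by an Isolation Forest over the fragments' histograms.
    """
    h, w = error_map.shape
    coords, feats = [], []
    for r in range(0, h - frag + 1, frag):
        for c in range(0, w - frag + 1, frag):
            patch = error_map[r:r + frag, c:c + frag]
            hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0), density=True)
            coords.append((r, c))
            feats.append(hist)
    labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(feats)
    return [coords[i] for i, lab in enumerate(labels) if lab == -1]

# Toy usage: a synthetic error map with one high-error square region.
err = np.random.default_rng(0).uniform(0.0, 0.1, size=(256, 256))
err[64:96, 128:160] = 0.9
print(anomalous_fragments(err))
```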
