Search Results (4)

Search Parameters:
Keywords = distance-estimated landmark vector

25 pages, 9886 KiB  
Article
DeepGun: Deep Feature-Driven One-Class Classifier for Firearm Detection Using Visual Gun Features and Human Body Pose Estimation
by Harbinder Singh, Oscar Deniz, Jesus Ruiz-Santaquiteria, Juan D. Muñoz and Gloria Bueno
Appl. Sci. 2025, 15(11), 5830; https://doi.org/10.3390/app15115830 - 22 May 2025
Viewed by 804
Abstract
The increasing frequency of mass shootings at public events and in public buildings underscores the limitations of traditional surveillance systems, which rely on human operators monitoring multiple screens. Delayed response times often prevent security teams from intervening before an attack unfolds. Since firearms are rarely seen in public spaces and constitute anomalous observations, firearm detection can be treated as an anomaly detection (AD) problem, for which one-class classifiers (OCCs) are well suited. To address this challenge, we propose a holistic firearm detection approach that integrates OCCs with visual hand-held gun features and human pose estimation (HPE). In the first stage, a variational autoencoder (VAE) learns latent representations of firearm-related instances, ensuring that the latent space is dedicated exclusively to the target class. Hand patches of variable size are extracted from each frame using body landmarks, with the patch size adjusted dynamically according to the subject’s distance from the camera. In the second stage, a unified feature vector is generated by combining the VAE-extracted latent features with landmark-based arm-positioning features. Finally, an isolation forest classifier (IFC)-based OCC model evaluates this unified feature representation to estimate the probability that a test sample belongs to the firearm-related distribution. By utilizing skeletal representations of human actions, our approach overcomes the limitations of appearance-based gun features extracted from the camera image, which are often affected by background variations. Experimental results on diverse firearm datasets validate the effectiveness of our anomaly detection approach, achieving an F1-score of 86.6%, accuracy of 85.2%, precision of 95.3%, recall of 74.0%, and average precision (AP) of 83.5%. These results demonstrate the superiority of our method over traditional approaches that rely solely on visual features.
(This article belongs to the Section Computing and Artificial Intelligence)
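As a rough illustration of the two-stage scoring this abstract describes, the sketch below fuses a (mocked) VAE latent code with flattened arm-landmark features and scores the result with an isolation forest trained on target-class samples only. All names, dimensions, and the random stand-in features are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

def unified_feature(latent, arm_landmarks):
    """Concatenate a VAE latent code with flattened arm-landmark features."""
    return np.concatenate([latent, np.asarray(arm_landmarks).ravel()])

# Stand-ins for VAE latents (dim 32) and arm landmarks (4 joints x 2 coords)
# of firearm-related training instances; a real pipeline would obtain these
# from the trained encoder and a pose estimator.
X_train = np.stack([
    unified_feature(rng.normal(size=32), rng.normal(size=(4, 2)))
    for _ in range(500)
])

# One-class setting: the isolation forest is fit on target-class samples only.
occ = IsolationForest(n_estimators=100, random_state=0).fit(X_train)

# Higher score_samples output = more consistent with the learned target-class
# distribution; a threshold on this score yields the final decision.
x_test = unified_feature(rng.normal(size=32), rng.normal(size=(4, 2)))
print(occ.score_samples(x_test[None, :]))
```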

23 pages, 1522 KiB  
Article
Local Homing Navigation Based on the Moment Model for Landmark Distribution and Features
by Changmin Lee and DaeEun Kim
Sensors 2017, 17(11), 2658; https://doi.org/10.3390/s17112658 - 17 Nov 2017
Cited by 7 | Viewed by 4575
Abstract
For local homing navigation, an agent is supposed to return home based on the surrounding environmental information. According to the snapshot model, the home snapshot and the current view are compared to determine the homing direction. In this paper, we propose a novel homing navigation method using the moment model. The suggested moment model also follows the snapshot theory, comparing the home snapshot with the current view, but it defines a moment of landmark inertia as the sum, over landmark particles, of each particle's feature multiplied by the square of its distance. The method thus uses both the range values of landmarks in the surrounding view and their visual features. The center of the moment can be estimated as the reference point, which is the unique convergence point of the moment potential from any view. The homing vector can easily be extracted from the centers of the moment measured at the current position and at the home location. The method effectively guides the homing direction in real environments as well as in simulation. In this paper, we take a holistic approach, using all pixels of the panoramic image as landmarks and the RGB color intensities as the visual features in the moment model, in which a set of three moment functions is encoded to determine the homing vector. We also tested visual homing with the moment model using only visual features, but the suggested moment model with both the visual features and the landmark distances shows superior performance. We demonstrate homing performance with various methods, classified by the status of the feature, the distance, and the coordinate alignment.
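One minimal reading of the moment model is sketched below: taking the reference point as the feature-weighted centroid (the point minimizing the sum of feature times squared distance), the homing vector is the offset between the centers computed from the current and home views. The scene, features, and sign conventions are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def moment_center(positions, features):
    """Feature-weighted centroid: the point c minimizing sum(f_i * |p_i - c|^2)."""
    w = np.asarray(features, dtype=float)
    return (w[:, None] * positions).sum(axis=0) / w.sum()

def homing_vector(cur_pos, cur_feat, home_pos, home_feat):
    """Offset of the current moment center from the home one; with aligned
    frames this points from the current position back toward home."""
    return moment_center(cur_pos, cur_feat) - moment_center(home_pos, home_feat)

# Toy scene: five landmark particles with scalar features (e.g. intensities),
# positions given relative to the agent. Moving the agent by -shift makes all
# relative positions grow by +shift.
home_view = np.array([[1.0, 2.0], [3.0, 0.5], [-1.0, 1.0], [0.0, -2.0], [2.0, 2.0]])
feat = np.array([0.9, 0.4, 0.7, 0.2, 0.5])
cur_view = home_view + np.array([0.5, -0.3])            # agent displaced by (-0.5, 0.3)
print(homing_vector(cur_view, feat, home_view, feat))   # ~ (0.5, -0.3), toward home
```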

24 pages, 2237 KiB  
Article
Landmark-Based Homing Navigation Using Omnidirectional Depth Information
by Changmin Lee, Seung-Eun Yu and DaeEun Kim
Sensors 2017, 17(8), 1928; https://doi.org/10.3390/s17081928 - 22 Aug 2017
Cited by 17 | Viewed by 6326
Abstract
A number of landmark-based navigation algorithms have been studied using feature extraction over visual information. In this paper, we apply the distance information of the surrounding environment in a landmark navigation model. We mount a depth sensor on a mobile robot in order to obtain omnidirectional distance information. The surrounding environment is represented as a circular arrangement of landmark vectors, which forms a snapshot. The depth snapshots at the current position and the target position are compared to determine the homing direction, inspired by the snapshot model. Here, we suggest a holistic view of panoramic depth information for homing navigation in which each sample point is taken as a landmark. The results are shown as a vector map of homing vectors. The performance of the suggested method is evaluated based on the angular errors and the homing success rate. Omnidirectional depth information about the surrounding environment can be a promising source for landmark-based homing navigation. We demonstrate that a holistic approach with omnidirectional depth information achieves effective homing navigation.
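The snapshot comparison this abstract describes can be illustrated with a toy average-landmark-vector (ALV) computation: with true ranges and aligned coordinate frames, the difference of the average landmark vectors at the two poses recovers the exact homing vector. The scene below is a hypothetical example, not the paper's evaluation setup.

```python
import numpy as np

def avg_landmark_vector(depths, bearings):
    """Mean of the landmark vectors d_i * (cos t_i, sin t_i) in a snapshot."""
    vecs = depths[:, None] * np.stack([np.cos(bearings), np.sin(bearings)], axis=1)
    return vecs.mean(axis=0)

def homing_direction(cur_d, cur_b, home_d, home_b):
    """With true distances and aligned frames, ALV(current) - ALV(home)
    equals (home - current): the vector back to the goal."""
    return avg_landmark_vector(cur_d, cur_b) - avg_landmark_vector(home_d, home_b)

def snapshot(agent_pos, landmarks):
    """Omnidirectional depth snapshot: range and bearing of every landmark."""
    rel = landmarks - agent_pos
    return np.linalg.norm(rel, axis=1), np.arctan2(rel[:, 1], rel[:, 0])

# Toy scene: 64 landmarks on a ring of radius 5, sampled from two poses.
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
landmarks = 5.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
home, cur = np.array([0.0, 0.0]), np.array([1.0, -0.5])
home_d, home_b = snapshot(home, landmarks)
cur_d, cur_b = snapshot(cur, landmarks)
print(homing_direction(cur_d, cur_b, home_d, home_b))  # ~ (-1.0, 0.5) = home - cur
```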

21 pages, 1159 KiB  
Article
Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines
by Deepak Ghimire and Joonwhoan Lee
Sensors 2013, 13(6), 7714-7734; https://doi.org/10.3390/s130607714 - 14 Jun 2013
Cited by 254 | Viewed by 17021
Abstract
Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors from individual landmarks, as well as from pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with the dynamic time warping (DTW) similarity distance between the feature vector of the input facial expression and the prototypical facial expression as a weak classifier, is used to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: multi-class AdaBoost with dynamic time warping, and a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
(This article belongs to the Section Physical Sensors)
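Since DTW similarity to per-class prototype sequences is the heart of the weak classifiers above, here is a generic textbook DTW distance over trajectories of landmark features. The prototypes and query below are random stand-ins; the paper's exact features and AdaBoost loop are not reproduced.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """DTW cost between two trajectories of feature vectors (n_a x d, n_b x d)."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy example: classify a landmark trajectory by its nearest prototype under DTW.
rng = np.random.default_rng(1)
prototypes = {"happy": rng.normal(size=(20, 8)), "surprise": rng.normal(size=(24, 8))}
query = prototypes["happy"][::2] + 0.05 * rng.normal(size=(10, 8))  # subsampled, noisy
print(min(prototypes, key=lambda c: dtw_distance(query, prototypes[c])))
```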