Search Results (6)

Search Parameters:
Keywords = Replay-Mobile dataset

26 pages, 2059 KiB  
Article
Continual Semi-Supervised Malware Detection
by Matthew Chin and Roberto Corizzo
Mach. Learn. Knowl. Extr. 2024, 6(4), 2829-2854; https://doi.org/10.3390/make6040135 - 10 Dec 2024
Cited by 2 | Viewed by 2041
Abstract
Detecting malware has become extremely important with the increasing exposure of computational systems and mobile devices to online services. However, the rapidly evolving nature of malicious software makes this task particularly challenging. Despite the significant number of machine learning works for malware detection proposed in the last few years, limited attention has been devoted to continual learning approaches, which could allow models to showcase effective performance in challenging and dynamic scenarios while being computationally efficient. Moreover, most of the research works proposed thus far adopt a fully supervised setting, which relies on fully labelled data and appears to be impractical in a rapidly evolving malware landscape. In this paper, we address malware detection from a continual semi-supervised one-class learning perspective, which only requires normal/benign data and empowers models with a greater degree of flexibility, allowing them to detect multiple malware types with different morphology. Specifically, we assess the effectiveness of two replay strategies on anomaly detection models and analyze their performance in continual learning scenarios with three popular malware detection datasets (CIC-AndMal2017, CIC-MalMem-2022, and CIC-Evasive-PDFMal2022). Our evaluation shows that replay-based strategies can achieve competitive performance in terms of continual ROC-AUC with respect to the considered baselines and bring new perspectives and insights into this topic.
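The replay idea described here pairs a buffer of previously seen benign samples with a one-class anomaly detector that is refit at each experience. Below is a minimal sketch of that pattern, using scikit-learn's IsolationForest and a random-subsample buffer as stand-ins for the paper's (unspecified) detector and replay policy; it is an illustration of the technique, not the authors' implementation.

```python
# Hedged sketch: replay-based continual one-class malware detection.
# Detector choice, buffer size, and sampling policy are assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

class ReplayOneClassDetector:
    def __init__(self, buffer_size=1000, seed=0):
        self.buffer = None                      # benign samples retained from past experiences
        self.buffer_size = buffer_size
        self.rng = np.random.default_rng(seed)
        self.model = IsolationForest(random_state=seed)

    def fit_experience(self, X_benign):
        # Train on the current benign data plus replayed samples from earlier experiences.
        X_train = X_benign if self.buffer is None else np.vstack([X_benign, self.buffer])
        self.model.fit(X_train)
        # Refresh the buffer with a random subset of everything seen so far.
        keep = min(self.buffer_size, len(X_train))
        self.buffer = X_train[self.rng.choice(len(X_train), size=keep, replace=False)]

    def anomaly_score(self, X):
        # Higher values indicate more anomalous (malware-like) samples.
        return -self.model.score_samples(X)

# Usage with synthetic benign feature vectors:
detector = ReplayOneClassDetector()
detector.fit_experience(np.random.default_rng(1).normal(size=(500, 16)))
```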

26 pages, 5534 KiB  
Article
Cyber Attack Detection for Self-Driving Vehicle Networks Using Deep Autoencoder Algorithms
by Fawaz Waselallah Alsaade and Mosleh Hmoud Al-Adhaileh
Sensors 2023, 23(8), 4086; https://doi.org/10.3390/s23084086 - 18 Apr 2023
Cited by 46 | Viewed by 11068
Abstract
Connected and autonomous vehicles (CAVs) present exciting opportunities for improving both the mobility of people and the efficiency of transportation systems. The small computers in CAVs are referred to as electronic control units (ECUs) and are often perceived as components of a broader cyber–physical system. Subsystems of ECUs are often networked together via a variety of in-vehicle networks (IVNs) so that data may be exchanged and the vehicle can operate more efficiently. The purpose of this work is to explore the use of machine learning and deep learning methods in defence against cyber threats to autonomous cars. Our primary emphasis is on identifying erroneous information implanted in the data buses of various automobiles. In order to categorise this type of erroneous data, the gradient boosting method is used, providing a productive illustration of machine learning. To examine the performance of the proposed model, two real datasets, namely the Car-Hacking and UNSW-NB15 datasets, were used. Real automated vehicle network datasets were used in the verification process of the proposed security solution. These datasets included spoofing, flooding and replay attacks, as well as benign packets. The categorical data were transformed into numerical form via pre-processing. Machine learning and deep learning algorithms, namely k-nearest neighbour (KNN), decision trees, long short-term memory (LSTM), and deep autoencoders, were employed to detect CAN attacks. According to the findings of the experiments, using the decision tree and KNN algorithms as machine learning approaches resulted in accuracy levels of 98.80% and 99%, respectively. On the other hand, the use of LSTM and deep autoencoder algorithms as deep learning approaches resulted in accuracy levels of 96% and 99.98%, respectively. The maximum accuracy was achieved when using the decision tree and deep autoencoder algorithms. Statistical analysis methods were used to analyse the results of the classification algorithms, and the determination coefficient for the deep autoencoder was found to reach R2 = 95%. The performance of all of the models built in this way surpassed that of those already in use, with almost perfect levels of accuracy being achieved. The system developed is able to overcome security issues in IVNs.
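A deep-autoencoder detector of this kind is typically trained only on benign traffic and flags frames whose reconstruction error is unusually high. The Keras sketch below illustrates that pattern; the layer sizes, the 0.99-quantile threshold, and the feature layout are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: autoencoder-based anomaly detection for in-vehicle network frames.
import numpy as np
from tensorflow.keras import layers, models

def build_autoencoder(n_features):
    inp = layers.Input(shape=(n_features,))
    enc = layers.Dense(32, activation="relu")(inp)
    enc = layers.Dense(8, activation="relu")(enc)
    dec = layers.Dense(32, activation="relu")(enc)
    out = layers.Dense(n_features, activation="linear")(dec)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")   # trained on benign frames only
    return model

def detect_attacks(model, X_benign_val, X_test, quantile=0.99):
    # Threshold chosen from reconstruction errors on held-out benign traffic.
    err_val = np.mean((X_benign_val - model.predict(X_benign_val)) ** 2, axis=1)
    threshold = np.quantile(err_val, quantile)
    err_test = np.mean((X_test - model.predict(X_test)) ** 2, axis=1)
    return err_test > threshold   # True = suspected spoofing/flooding/replay frame
```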

20 pages, 2499 KiB  
Article
Monocular Facial Presentation–Attack–Detection: Classifying Near-Infrared Reflectance Patterns
by Ali Hassani, Jon Diedrich and Hafiz Malik
Appl. Sci. 2023, 13(3), 1987; https://doi.org/10.3390/app13031987 - 3 Feb 2023
Cited by 2 | Viewed by 2769
Abstract
This paper presents a novel material spectroscopy approach to facial presentation–attack–defense (PAD). Best-in-class PAD methods typically detect artifacts in 3D space. This paper proposes that similar features can be achieved in a monocular, single-frame approach by using controlled light. A mathematical model is produced to show how live faces and their spoof counterparts have unique reflectance patterns due to geometry and albedo. A rigorous dataset is collected to evaluate this proposal: 30 diverse adults and their spoofs (paper mask, display replay, spandex mask, and COVID mask) under varied pose, position, and lighting, for 80,000 unique frames. A panel of 13 texture classifiers is then benchmarked to verify the hypothesis. The experimental results are excellent: the material spectroscopy process enables a conventional MobileNetV3 network to achieve a 0.8% average classification error rate, outperforming the selected state-of-the-art algorithms. This demonstrates that the proposed imaging methodology generates extremely robust features.
(This article belongs to the Special Issue Application of Biometrics Technology in Security)
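Because the classifier panel includes a conventional MobileNetV3 used as a binary live-vs-spoof texture classifier, the PyTorch sketch below shows what such a fine-tuning setup could look like. The small variant, the 224-pixel input, and the replacement head are assumptions here, not the paper's exact network or training recipe.

```python
# Hedged sketch: MobileNetV3 fine-tuned as a two-class (live vs. attack) texture classifier.
import torch
import torch.nn as nn
from torchvision import models

def build_pad_classifier():
    # weights=None keeps the sketch self-contained; pass weights="DEFAULT" for ImageNet pretraining.
    net = models.mobilenet_v3_small(weights=None)
    in_feats = net.classifier[-1].in_features
    net.classifier[-1] = nn.Linear(in_feats, 2)    # 2 classes: live vs. presentation attack
    return net

model = build_pad_classifier()
# NIR frames are single-channel; replicating to 3 channels is one common workaround.
logits = model(torch.randn(4, 3, 224, 224))
print(logits.shape)                                # torch.Size([4, 2])
```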

11 pages, 2438 KiB  
Article
Replay Attack Detection Based on High Frequency Missing Spectrum
by Junming Yuan, Mijit Ablimit and Askar Hamdulla
Information 2023, 14(1), 7; https://doi.org/10.3390/info14010007 - 22 Dec 2022
Cited by 5 | Viewed by 3073
Abstract
Automatic Speaker Verification (ASV) has benefits compared to other biometric verification methods, such as face recognition: it is convenient, low cost, and more privacy protected, so it is starting to be used in various practical applications. However, voice verification systems are vulnerable to unknown spoofing attacks and need to be upgraded to keep pace with forgery techniques. This paper investigates a low-cost attacking scenario in which a playback device is used to impersonate the real speaker. A replay attack only needs a recording device and a playback device, so it can be one of the most widespread spoofing methods. In this paper, we investigate some spectral clues in high-sampling-rate recording signals and utilize this property to effectively detect the replay attack. First, a small-scale genuine-replay dataset at high sample rates is constructed using low-cost mobile terminals; then, the signal features are investigated by comparing their spectra, and machine learning models are applied for evaluation. The experimental results verify that the high-frequency spectral clue in the replay signal provides a convenient and reliable way to detect the replay attack.
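The core cue is that playback hardware attenuates the upper band of a high-sample-rate recording, so the share of spectral energy above a cutoff frequency can separate genuine from replayed speech. A minimal NumPy sketch of that cue follows; the 16 kHz cutoff and the decision threshold are illustrative assumptions, and the paper additionally feeds such features to machine learning models.

```python
# Hedged sketch: high-frequency energy ratio as a replay-attack cue.
import numpy as np

def high_band_energy_ratio(signal, sample_rate, cutoff_hz=16000):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    high = spectrum[freqs >= cutoff_hz].sum()
    return high / (spectrum.sum() + 1e-12)

def looks_like_replay(signal, sample_rate, threshold=0.01):
    # Genuine high-sample-rate recordings retain measurable energy above the cutoff;
    # replayed audio tends to fall below the threshold.
    return high_band_energy_ratio(signal, sample_rate) < threshold
```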

28 pages, 5650 KiB  
Article
Enhanced Deep Learning Architectures for Face Liveness Detection for Static and Video Sequences
by Ranjana Koshy and Ausif Mahmood
Entropy 2020, 22(10), 1186; https://doi.org/10.3390/e22101186 - 21 Oct 2020
Cited by 19 | Viewed by 6500
Abstract
Face liveness detection is a critical preprocessing step in face recognition for avoiding face spoofing attacks, where an impostor can impersonate a valid user for authentication. While considerable research has recently been done on improving the accuracy of face liveness detection, the best current approaches use a two-step process of first applying non-linear anisotropic diffusion to the incoming image and then using a deep network for the final liveness decision. Such an approach is not viable for real-time face liveness detection. We develop two end-to-end real-time solutions where nonlinear anisotropic diffusion based on an additive operator splitting scheme is first applied to an incoming static image, which enhances the edges and surface texture and preserves the boundary locations in the real image. The diffused image is then forwarded to a pre-trained Specialized Convolutional Neural Network (SCNN) and the Inception network version 4, which identify the complex and deep features for face liveness classification. We evaluate the performance of our integrated approach using the SCNN and Inception v4 on the Replay-Attack and Replay-Mobile datasets. The entire architecture is created in such a manner that, once trained, face liveness detection can be accomplished in real time. We achieve promising results of 96.03% and 96.21% face liveness detection accuracy with the SCNN, and 94.77% and 95.53% accuracy with the Inception v4, on the Replay-Attack and Replay-Mobile datasets, respectively. We also develop a novel deep architecture for face liveness detection on video frames that uses diffusion of images followed by a deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to classify the video sequence as real or fake. Even though the use of a CNN followed by an LSTM is not new, combining it with diffusion (which has proven to be the best approach for single-image liveness detection) is novel. Performance evaluation of our architecture on the Replay-Attack dataset gave 98.71% test accuracy and a 2.77% Half Total Error Rate (HTER), and on the Replay-Mobile dataset gave 95.41% accuracy and a 5.28% HTER.
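For the video branch, per-frame CNN features are aggregated by an LSTM that labels the whole sequence as real or fake. The rough PyTorch sketch below shows that CNN-plus-LSTM arrangement, with a ResNet-18 backbone standing in for the paper's SCNN and with the diffusion preprocessing assumed to have been applied to the frames upstream; all hyperparameters are assumptions.

```python
# Hedged sketch: per-frame CNN features -> LSTM -> real/attack decision for a video clip.
import torch
import torch.nn as nn
from torchvision import models

class CnnLstmLiveness(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # placeholder backbone, 512-d feature per frame
        backbone.fc = nn.Identity()
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # real vs. presentation attack

    def forward(self, clips):                      # clips: (batch, frames, 3, H, W), diffused frames assumed
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])                  # classify from the final hidden state

logits = CnnLstmLiveness()(torch.randn(2, 8, 3, 224, 224))   # two clips of eight frames
```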

24 pages, 5227 KiB  
Article
Presentation Attack Face Image Generation Based on a Deep Generative Adversarial Network
by Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Kyoung Jun Noh and Kang Ryoung Park
Sensors 2020, 20(7), 1810; https://doi.org/10.3390/s20071810 - 25 Mar 2020
Cited by 8 | Viewed by 4461
Abstract
Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training PAD systems. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images using a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and can possibly enhance the performance of PAD systems. Our study is the first attempt to generate PA face images for a PAD system based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-Mobile), we show that the generated face images can capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
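The generation method builds on CycleGAN, whose generators translate real face images into presentation-attack-style images while a cycle-consistency term keeps the original content intact. The toy PyTorch sketch below illustrates that objective with placeholder networks; the actual method uses full CycleGAN generators, discriminators for both domains, and additional loss terms, so everything here is an assumption made for illustration.

```python
# Hedged sketch: cycle-consistency objective behind CycleGAN-style PA image generation.
import torch
import torch.nn as nn

def tiny_generator():
    # Placeholder image-to-image network (the paper relies on full CycleGAN generators).
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

def tiny_discriminator():
    return nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(16, 1, 4, stride=2, padding=1))

G, F_back, D_attack = tiny_generator(), tiny_generator(), tiny_discriminator()
l1, mse = nn.L1Loss(), nn.MSELoss()

real_face = torch.randn(2, 3, 64, 64)              # stand-in batch of real face crops
fake_attack = G(real_face)                         # real -> synthetic presentation attack
rec_face = F_back(fake_attack)                     # back-translation to the real domain
pred = D_attack(fake_attack)
loss_G = mse(pred, torch.ones_like(pred)) + 10.0 * l1(rec_face, real_face)
loss_G.backward()                                  # a generator optimizer step would follow
```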
