Article

Leveraging Universal Adversarial Perturbation and Frequency Band Filters Against Face Recognition

1 School of Electronic and Information Engineering, University of Electronic Science and Technology of China, Zhongshan Institute, Zhongshan 528402, China
2 School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(20), 3287; https://doi.org/10.3390/math12203287
Submission received: 3 September 2024 / Revised: 10 October 2024 / Accepted: 18 October 2024 / Published: 20 October 2024
(This article belongs to the Special Issue New Solutions for Multimedia and Artificial Intelligence Security)

Abstract: Universal adversarial perturbation (UAP) exhibits universality in that it is independent of specific images. Although previous investigations have shown that the classification of natural images is susceptible to universal adversarial attacks, the impact of UAP on face recognition has not been fully investigated. In this paper, we therefore assess the vulnerability of face recognition to UAP. We propose FaUAP-FBF, which exploits the frequency domain by learning high-, middle-, and low-band filters as an additional dimension for refining the facial UAP. The facial UAP and the filters are alternately and repeatedly learned from a training set. Furthermore, we convert non-target attacks into target attacks by customizing a target example, which is an out-of-distribution sample for the training set. Accordingly, non-target and target attacks form a uniform target attack. Finally, the variance of cosine similarity is incorporated into the adversarial loss, thereby enhancing the attacking capability. Extensive experiments on the LFW and CASIA-WebFace datasets show that FaUAP-FBF achieves a higher fooling rate and better objective stealthiness metrics across the evaluated network structures than existing universal adversarial attacks, which confirms the effectiveness of the proposed method. Our results also imply that UAP poses a real threat to face recognition systems and should be taken seriously when such systems are designed.

1. Introduction

Deep neural networks (DNNs) play an increasingly vital role in large-scale data-processing tasks, such as natural language processing [1], speech recognition [2], image classification [3], and face recognition [4]. Among them, face recognition is a powerful technique for confirming individual identity by analyzing the features of a facial image. Current state-of-the-art face recognition methods primarily utilize DNNs to extract facial features and accomplish the recognition task. Despite the significant progress of DNNs across a variety of tasks, however, they still suffer from security vulnerabilities. Attacks targeting DNNs, such as data poisoning [5], backdoor attacks [6], and adversarial attacks [7], pose severe threats to the integrity and reliability of DNNs. Adversarial attacks, in particular, were first discovered by Szegedy et al. [8]: by overlaying imperceptible, carefully crafted perturbations on input images, these attacks can lead DNNs to erroneous decisions.
Generally, adversarial perturbations can be tailored specifically for each image, but they can also be learned from an image set sampled from an independent and identical distribution and then used to contaminate any unseen image. The latter is known as universal adversarial perturbation (UAP). UAP was proposed by Moosavi-Dezfooli et al. [9] and can significantly diminish a neural network's performance. Owing to its generality, UAP has gained considerable attention in natural image classification, and subsequent studies have proposed various approaches, including optimization-based and generative-network-based methods, to generate perturbations. In the face recognition realm, most adversarial attacks are image-specific, meaning that perturbations are crafted for each facial image. For instance, in physical-world-oriented face adversarial attacks [10], a special kind of glasses is generated around the eyes, and the adversarial sample wearing the glasses is able to fool the face recognition network with high probability. In the digital domain, adversarial attacks deceive face recognition models by iteratively optimizing a small transparent patch on the face [11]. The approach proposed in [12] utilizes generative adversarial networks to produce adversarial samples with different makeup styles based on various face shapes to deceive the target model.
Since facial images are a subset of images, and inspired by the success of UAP on natural images, we argue that UAP is also feasible for fooling face recognition systems. Unlike natural image classification, however, facial image recognition is a fine-grained recognition task. Thus, UAPs designed for natural images are insufficient for facial images and need refining to adapt to the fine-grained case. In addition, existing UAPs for natural images are generated from either the spatial domain or the frequency domain rather than from both domains simultaneously; we believe a UAP designed to cover both domains tends to have more promising performance. Combining these considerations, we exploit the frequency domain as an additional dimension for refining perturbations produced in the spatial domain, yielding the facial UAP and frequency band filters (FaUAP-FBF). Concretely, in the frequency domain there is a notable band partition of the energy distribution of images: most of the energy concentrates in the low band and attenuates significantly in the middle and high bands. Furthermore, the band partition is directly connected to human visual perception; for instance, human vision is usually more sensitive to the low band. This prompts us to differentiate the importance and influence of perturbations in different bands, which can be implemented by learning different band filters. Under the proposed general framework, we also propose two innovative strategies. The first is that we convert non-target attacks into target attacks by customizing a target example. In essence, the customized target example is an out-of-distribution sample for the training set; hence, reducing the distance between the adversarial example and the customized example is aligned with an out-of-distribution non-target attack. Moreover, by using the customized example, the attack is immune to the uncertainty of the non-target victim. Accordingly, non-target and target attacks form a uniform target attack. The second is that, to enhance the attacking capacity, the distribution of cosine similarity between a batch of adversarial examples and the target example is considered, and the variance of that distribution is incorporated into the adversarial loss. This incorporation facilitates a more consistent alignment between the adversarial examples and the target example.
The main contributions of this paper are summarized as follows:
  • We assess the vulnerability of face recognition to UAP and exploit the frequency domain by learning high-, middle-, and low-band filters as an additional dimension for refining the facial UAP.
  • We customize a target example to convert the non-target attack into a target attack. The customized example is an out-of-distribution sample for the training set. Accordingly, non-target and target attacks form a uniform target attack.
  • We introduce the variance of cosine similarity between a batch of adversarial examples and the target example into the adversarial loss. Reducing this variance contributes to aligning the adversarial examples with the target example.
The organization of this paper is as follows: We briefly review related works in Section 2. In Section 3, we elaborate on the research motivation and the methodology for the proposed FaUAP-FBF. Then, we conduct extensive experiments and demonstrate the effectiveness of FaUAP-FBF in Section 4. Finally, we summarize the full paper and provide details of future work in Section 5.

2. Related Works

2.1. Background of Face Recognition

Face recognition is routinely divided into two sub-tasks: face verification [13], which determines whether a pair of facial images belongs to the same person, and face identification [14], which identifies the person in a facial image. Generally, current state-of-the-art face recognition systems use deep CNNs to extract feature representations from facial images and then compute the distance between two feature representations, thereby quantifying the similarity between the two facial images. The similarity is ultimately used to accomplish face recognition. The network structure and the loss function of the DNN are two essential factors impacting the performance of face recognition. Prevalent network structures include VGGNet [15], GoogLeNet [16], and ResNet [17]; for the loss, triplet loss [18], CosFace [19], and ArcFace [20] are well known and commonly used.
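To make the verification pipeline concrete, the following minimal sketch embeds two face crops with a generic embedding network and thresholds their cosine similarity; the `embed_net` argument and the threshold value are illustrative assumptions, not the networks or settings used in this paper.

```python
import torch
import torch.nn.functional as F

def verify(embed_net, img_a, img_b, threshold=0.3):
    """Face verification sketch: embed both images and compare them with cosine similarity.
    embed_net is any CNN mapping a (1, 3, H, W) face crop to a feature vector."""
    with torch.no_grad():
        feat_a = F.normalize(embed_net(img_a), dim=1)  # unit-length embedding
        feat_b = F.normalize(embed_net(img_b), dim=1)
    similarity = (feat_a * feat_b).sum(dim=1)          # cosine similarity of the embeddings
    return similarity.item() >= threshold              # same person if similarity exceeds the threshold
```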

2.2. Adversarial Attacks on Face Recognition

Like attacks on DNNs in general, existing attacks on face recognition fall into the physical domain and the digital domain. Physical domain attacks endeavor to generate specialized wearable items or physical objects to deceive face recognition systems. Sharif et al. [10] printed adversarial perturbations on eyeglass frames, which enables one to evade detection or impersonate others in face recognition. Komkov et al. [21] printed a rectangular strip of paper with a pattern and pasted it onto a hat; it does not cover the attacker's facial features but still fools the face recognition system. Ibsen et al. [22] printed a special facial image onto a T-shirt that could confuse the face recognition system. Zheng et al. [23] explored sticker attacks by considering the complexity of environmental changes under different physical conditions. Digital domain attacks endeavor to generate digital perturbations to deceive face recognition systems. Dabouei et al. [24] slightly modified the spatial locations of facial key points and achieved a successful attack on the face recognition system. Dong et al. [25] proposed an evolutionary attack in a decision-based black-box scenario, which characterizes the geometrical structure of the decision boundary along the search direction and shrinks the search space to improve black-box attack efficiency. Hussain et al. [26] proposed a real-time attack based on an adversarial transfer network, where facial images are fed into the network to rapidly generate adversarial facial images.

2.3. Universal Adversarial Attacks on Image Classification

Universal adversarial attacks aim to use a single perturbation on any example to fool DNNs. Owing to the convenience of generating once and using many times, such attacks have attracted significant attention from researchers. Moosavi-Dezfooli et al. [9] first demonstrated the existence of UAP for non-target attacks against CNNs on natural image classification tasks. Mopuri et al. [27] proposed the fast feature fool (FFF) algorithm, which generates universal adversarial perturbations without relying on data. Subsequent research connected UAP generation with generative adversarial networks (GANs). Hayes et al. [28] proposed the universal adversarial network, aimed at learning the distribution of perturbations rather than a single perturbation. Mopuri et al. [29] proposed a network for adversary generation (NAG) to create adversarial perturbations for a given CNN classifier; it utilizes the attributes of examples to model the distribution of adversarial perturbations and thereby achieves effectiveness and diversity in perturbation generation. Zhang et al. [30] offer a new perspective on the relationship between the carrier image and UAP. They suggest that perturbations hold the key features dominating model decisions, which enables the carrier image to be treated as noise; based on this idea, they designed a feature-dominant algorithm allowing the use of external data for both target and non-target universal attacks. Building upon UAP, Deng et al. [31] proposed generating UAP for texture images in the frequency domain by limiting the intensity of the perturbation with a JND model in the frequency domain. Hu and Sun [32] explored universal adversarial attacks on vision transformers. Zolfi et al. [33] proposed a universal adversarial mask that could be used in the real world, although it proved ineffective under a surveillance setting. Duan et al. [34] utilized the common gradient of the perturbations of multiple face images to optimize a universal adversarial perturbation and proposed a dominant feature loss to improve the attack capability of the perturbation. Qiao et al. [35] exploited universal adversarial perturbation as a watermark to defend against facial forgery across a wide range of forgery methods.
Most existing adversarial attacks against face recognition focus on image-specific perturbations. This paper follows the approach of universal adversarial attacks, aiming to generate a universal perturbation that can deceive face recognition on facial images across the entire dataset, namely, an image-agnostic perturbation.

3. Proposed Method

3.1. Refining UAP in Frequency Domain via Learnable Filters

For a general understanding of our idea, we here outline the general procedure of FaUAP-FBF, as shown in Figure 1. Lowercase letters and capital letters denote examples in the spatial domain and the frequency domain, respectively. The two domains are mutually converted by using the discrete cosine transform (DCT) and the inverse discrete cosine transform (IDCT): the DCT transforms an image block from the spatial domain to the frequency domain, and the IDCT reconstructs an image block from the frequency domain back to the spatial domain. Suppose there is a matrix $A \in \mathbb{R}^{n \times n}$, where $a_{i,j}$ represents the element at position $(i, j)$; after the DCT, we obtain the transformed matrix $B \in \mathbb{R}^{n \times n}$, where the element $b_{i,j}$ at position $(i, j)$ can be expressed as:
$$ b_{i,j} = c_i c_j \sum_{p=0}^{n-1} \sum_{q=0}^{n-1} a_{p,q} \cos\frac{(2p+1)i\pi}{2n} \cos\frac{(2q+1)j\pi}{2n}, $$
where $c_i = c_j = \sqrt{1/4n}$ for $i = j = 0$ and $c_i = c_j = \sqrt{1/2n}$ otherwise. $A$ can be reconstructed from $B$ by using the IDCT, i.e.,
$$ a_{i,j} = \sum_{p=0}^{n-1} \sum_{q=0}^{n-1} c_p c_q \, b_{p,q} \cos\frac{(2i+1)p\pi}{2n} \cos\frac{(2j+1)q\pi}{2n}. $$
For an 8 × 8 spatial block, DCT encapsulates 64 distinct frequency components. We employ the widely used Type-II DCT for both DCT and IDCT.
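As a quick illustration of the transform pair described above, the snippet below applies SciPy's Type-II DCT and its inverse to a single 8 × 8 block; the `norm="ortho"` normalization is SciPy's orthonormal convention and may differ from the constants above only in scaling.

```python
import numpy as np
from scipy.fft import dctn, idctn

block = np.random.rand(8, 8)               # one 8x8 spatial-domain block

# Orthonormal Type-II 2-D DCT and its inverse form an exact transform pair.
B = dctn(block, type=2, norm="ortho")      # 64 frequency coefficients of the block
recon = idctn(B, type=2, norm="ortho")     # back to the spatial domain

assert np.allclose(block, recon)           # IDCT(DCT(block)) recovers the original block
```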
The filters used in FaUAP-FBF consist of fixed filters and learnable filters for the high-, middle-, and low-frequency bands: the fixed filters separate the legitimate example into three non-overlapping frequency regions, and the learnable filters refine the perturbation in the three bands. Details of frequency-domain filtering with the fixed and learnable filters are shown in Figure 2a,b. {𝕗_l, 𝕗_m, 𝕗_h} and {f_l, f_m, f_h} denote the fixed filters and learnable filters, respectively. For the fixed filters, the low-, middle-, and high-frequency bands occupy hard-separated regions of the entire spectrum; each band region is sketched in yellow, and the values inside the corresponding region are set to 1 and those outside to 0. The fixed filters are further used to initialize the learnable filters, whose ultimate values lie between 0 and 1. The fixed and learnable filters are shown in Figure 1 and Figure 2.
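A minimal sketch of such fixed filters is given below: three 0/1 masks that partition the 8 × 8 DCT spectrum into non-overlapping regions. Splitting by the index sum i + j and the particular thresholds are illustrative assumptions; the paper's actual band boundaries are those depicted in Figure 2.

```python
import numpy as np

# Hard 0/1 masks partitioning the 8x8 DCT spectrum into three non-overlapping bands.
idx = np.add.outer(np.arange(8), np.arange(8))          # i + j for each DCT coefficient
f_low = (idx < 4).astype(np.float32)                    # low-frequency region
f_mid = ((idx >= 4) & (idx < 9)).astype(np.float32)     # middle-frequency region
f_high = (idx >= 9).astype(np.float32)                  # high-frequency region

assert np.array_equal(f_low + f_mid + f_high, np.ones((8, 8)))  # bands cover the spectrum exactly once

# The learnable filters are initialized from these masks and later relaxed to values in [0, 1].
```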
First, both the UAP and the legitimate images in a training set are converted to the frequency domain through the DCT, in which the UAP in the spatial domain is initialized with Gaussian noise and the three learnable band filters are initialized with the fixed filters. To calculate the loss, both the image and the perturbation need to be converted back into the spatial domain through the IDCT. During learning, the UAP in the spatial domain and the learnable filters are alternately and repeatedly updated in terms of a weighted combination of adversarial loss and stealthiness loss, and the UAP in the spatial domain is also constrained by the $l_\infty$-norm. The iteration continues until a certain criterion is satisfied. The objective function is formulated as follows:
$$ (\hat{v}, \hat{f}_l, \hat{f}_m, \hat{f}_h) = \arg\min_{v, f_l, f_m, f_h} \mathbb{E}_{x_{tr} \sim D} \Big[ L_{adv}\big(x_{adv}^{(tr)}, x_{tar}\big) + \lambda L_{ste}\big(x_{adv}^{(tr)}, x_{tr}\big) \Big] \quad \text{s.t.} \ \|v\|_\infty \le \xi \tag{1} $$
$$ x_{adv}^{(tr)} = x_{tr} + \tilde{v} \tag{2} $$
$$ x_{adv}^{(tr)(s)} = x_{tr}^{(s)} + \tilde{v}_s, \quad s = l, m, h \tag{3} $$
where $L_{adv}$ and $L_{ste}$ denote the adversarial loss and the stealthiness loss, respectively, and $\lambda$ controls the balance between them. $\hat{v}$ and $\hat{f}_l, \hat{f}_m, \hat{f}_h$ denote the optimal UAP and filters. $x_{tar}$ may be the customized target example or a genuine target example, and $\xi$ is a parameter controlling the strength of the perturbation. $x_{tr}$ and $x_{adv}^{(tr)}$ denote the legitimate training example and the corresponding adversarial example, related through Equation (2). From Figure 1, it can be seen that, in addition to the whole example in the spatial domain, the respective bands of the example in the spatial domain are also needed to calculate the stealthiness loss; they are obtained using Equation (3). The calculations of $\tilde{v}$ and $\{\tilde{v}_l, \tilde{v}_m, \tilde{v}_h\}$ are shown in Figure 2b.
Once $\hat{v}$ and $\hat{f}_l, \hat{f}_m, \hat{f}_h$ are obtained, they can be utilized to yield an adversarial example for a legitimate test example, as expressed in Equation (4):
$$ x_{adv}^{(te)} = x_{te} + \mathrm{IDCT}\big(\mathrm{DCT}(\hat{v}) \cdot \hat{f}_s\big), \quad s = l, m, h \tag{4} $$
We apply the frequency domain as an additional dimension to iteratively optimize the universal adversarial perturbation, allowing the optimizer to continually refine the perturbation toward the objective of fooling face recognition systems. The use of learnable filters ensures that the extracted frequency segments are not solely confined to the predefined high-, middle-, and low-frequency ranges. Incorporating these adjustable filters facilitates the capture of subtle yet valuable information within each frequency segment. Consequently, this approach aligns the generated perturbation more closely with the frequency-divided facial image. The visualization of the ultimately learned filters $\hat{f}_s, s = l, m, h$ is shown in Figure 3. The pipeline of FaUAP-FBF is provided in Algorithm 1, and the definitions of the notations used in this paper are listed in Table 1.
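The following sketch illustrates the spirit of Equation (4): the UAP is transformed blockwise to the DCT domain, weighted by the learned 8 × 8 band filters, and transformed back before being added to a test image. Summing the three band-filtered components into a single refined perturbation is an assumption made for this illustration, as is the NumPy/SciPy implementation.

```python
import numpy as np
from scipy.fft import dctn, idctn

def refine_uap(uap, filters):
    """Apply learned 8x8 band filters to a UAP in the blockwise DCT domain.
    uap: (H, W) array with H and W multiples of 8; filters: list of 8x8 arrays in [0, 1]."""
    H, W = uap.shape
    refined = np.zeros_like(uap)
    for i in range(0, H, 8):
        for j in range(0, W, 8):
            V = dctn(uap[i:i + 8, j:j + 8], norm="ortho")   # block DCT of the perturbation
            V_f = sum(V * f for f in filters)               # weight each frequency band
            refined[i:i + 8, j:j + 8] = idctn(V_f, norm="ortho")
    return refined

# x_adv_test = np.clip(x_test + refine_uap(uap, [f_low, f_mid, f_high]), 0.0, 1.0)
```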
Algorithm 1 The procedure of FaUAP-FBF
  • Input: Training set $D$, $x_{tar}$ (customized target example $x_{ct}$ or a genuine target example), substitute target model $F(\cdot)$, fooling rate $\delta$, $l_\infty$-norm restriction of the perturbation $\xi$, fixed filters {𝕗_s, s = l, m, h}, decision threshold $t$, learning rate $\eta$.
  • Output: Universal adversarial perturbation $v$ and learnable filters {f_s, s = l, m, h}
  1: Initialize $v \leftarrow$ Gaussian noise, $\{f_s\} \leftarrow \{𝕗_s\}$
  2: while FR $< \delta$ do
  3:   for each batch of examples $\{x_i\}$ in $D$ do
  4:     Perform the DCT to obtain $\{X_i\}$; use {𝕗_s} to obtain $\{X_s^{(i)}\}$
  5:     Perform the DCT to obtain $V$; use $\{f_s\}$ to obtain $V_s$
  6:     Perform the IDCT to obtain $\{x_{adv}^{(i)}\} = \{x_i\} + \mathrm{IDCT}(V \cdot f_s)$, $s = l, m, h$
  7:     if average Similarity$(F(x_{adv}^{(i)}), F(x_i)) > t$ or average Similarity$(F(x_{adv}^{(i)}), F(x_{tar})) < t$ then
  8:       $(\Delta v, \Delta\{f_s\}) \leftarrow -\eta \, \nabla_{v, \{f_s\}} L_{all}$
  9:       Update the learnable filters $\{f_s\} \leftarrow \{f_s\} + \Delta\{f_s\}$
  10:      Update the perturbation $v \leftarrow v + \Delta v$
  11:      Clip $v$ to satisfy the $l_\infty$-norm restriction $\xi$
  12:      Update $V$, $V_s$, $\{x_{adv}^{(i)}\}$
  13:    end if
  14:  end for
  15: end while
  16: return $v$ and $\{f_s\}$
We abstract seven computation units and summarize the overall and dominant time required by FaUAP-FBF in terms of these units, the parameters of the training set, and the hyper-parameters of the learning algorithm; the results are listed in Table 2. Specifically, there are two phases, forward and backward computation. The forward computation involves five units, namely the DCT, the IDCT, filtering, and feature extraction by VGG and by the substitute target model, whose consumed times are denoted as $T_{DCT}$, $T_{IDCT}$, $T_{filt}$, $T_{VGG}$, and $T_{tar}$, respectively. The backward computation involves two units, namely the filter gradient computation and the perturbation gradient computation, whose consumed times are denoted as $T_{grad}$ for $f \in \mathbb{R}^{3 \times 3}$ and $T_{grad}$ for $v \in \mathbb{R}^{p \times q}$, respectively. Here, $p$ and $q$ denote the width and height of the UAP, $N$ and $M$ denote the number of training examples and the number of $8 \times 8$ blocks, and $l$ and $n$ are the number of learning iterations and the batch size. Notice that in the forward computation, the legitimate image set only needs to be processed once, before the iterations that learn the UAP and filters.
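Because step 8 of Algorithm 1 back-propagates the overall loss to both the UAP and the band filters, the DCT/IDCT must be differentiable. One simple way to obtain this, sketched below, is to express the 8 × 8 Type-II DCT as plain matrix products in PyTorch; the orthonormal normalization used here is an implementation choice, not necessarily the paper's.

```python
import math
import torch

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix D: D @ A @ D.T is the 2-D DCT of an n x n block,
    and D.T @ B @ D is its inverse. Matrix products keep the transform differentiable."""
    p = torch.arange(n, dtype=torch.float32)
    k = torch.arange(n, dtype=torch.float32).unsqueeze(1)
    D = torch.cos((2 * p + 1) * k * math.pi / (2 * n))
    D[0, :] *= math.sqrt(1.0 / n)       # DC row scaling
    D[1:, :] *= math.sqrt(2.0 / n)      # remaining rows
    return D

D = dct_matrix()
block = torch.rand(8, 8, requires_grad=True)   # e.g., one block of the learnable UAP
B = D @ block @ D.T                            # DCT coefficients; gradients flow through
recon = D.T @ B @ D                            # IDCT recovers the block
assert torch.allclose(block, recon, atol=1e-5)
```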

3.2. Non-Target Attack via Customizing a Target Example

Unlike the conventional non-target attack, which forces the adversarial example away from an image of a specified person, we convert the non-target attack, which would otherwise be specific to varying victims, into a unique target attack by customizing an example that is utilized as the target. Specifically, a customized target example $x_{ct}$ is sought by maximizing its distance from the overall distribution of the legitimate dataset. The flowchart is shown in Figure 4.
First, an image subset $\{x_i\}, i = 1, 2, \ldots, n$, containing one facial image per identity, is selected from the legitimate dataset $D$, and its average yields a mean image $x_o$ associated with the face dataset:
$$ x_o = \frac{1}{n} \sum_{i=1}^{n} x_i, \quad x_i \in D. \tag{5} $$
Subsequently, $x_{ct}$ is initialized with Gaussian noise and iteratively updated by progressively increasing the Euclidean distance between the embeddings of $x_{ct}$ and $x_o$, where the embeddings $F(x_{ct})$ and $F(x_o)$ are extracted from the substitute target model (an ArcFace model). The procedure leads to $\hat{x}_{ct}$, which significantly deviates from the distribution of the legitimate dataset and serves as our customized target example. It can be formulated as follows:
$$ \hat{x}_{ct} = \arg\max_{x_{ct}} \big\| F(x_{ct}) - F(x_o) \big\|_2 \tag{6} $$
where $F$ refers to the substitute target model and $F(\cdot)$ denotes the embedding of the input image.
Since $\hat{x}_{ct}$ differs significantly from the average of the legitimate dataset, decreasing the distance between the adversarial example and $\hat{x}_{ct}$ is consistent with increasing the distance between the adversarial example and the image of an uncertain victim, thus achieving the desired effect of a non-target attack that is immune to specific non-target victims. Under a genuine target attack, the only necessary alteration is to replace the customized target example with the image of the desired victim, thereby unifying the non-target and target attacks into a uniform target attack.
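A minimal sketch of the customization step in Equations (5) and (6) is given below; the optimizer, step count, learning rate, and pixel-range clamping are illustrative assumptions rather than the paper's exact settings.

```python
import torch

def customize_target(F_model, x_mean, steps=500, lr=0.01):
    """Start from Gaussian noise and push the embedding of x_ct away from the embedding
    of the mean face x_mean, using the substitute target model F_model."""
    x_ct = torch.randn_like(x_mean, requires_grad=True)     # initialize with Gaussian noise
    feat_mean = F_model(x_mean).detach()                    # fixed embedding of the mean image
    optimizer = torch.optim.Adam([x_ct], lr=lr)
    for _ in range(steps):
        dist = torch.norm(F_model(x_ct) - feat_mean, p=2)   # Euclidean distance in embedding space
        loss = -dist                                        # maximize the distance
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        x_ct.data.clamp_(0.0, 1.0)                          # keep a valid image range
    return x_ct.detach()
```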

3.3. Loss Setting

3.3.1. Stealthiness Loss

The role of the stealthiness loss is to control the visual concealment of the adversarial perturbation. We utilize the VGG (Visual Geometry Group) network [15] to assess the stealthiness loss. VGG stacks multiple convolutional and pooling layers to extract hierarchical feature maps from images, using uniformly sized convolutional kernels and a layered hierarchy to increase the network's depth. In particular, we use VGG-19 in our framework. VGG-19 includes 16 convolution-plus-ReLU layers, 5 max-pooling layers, and 3 fully-connected-plus-ReLU layers, and all convolution kernels are 3 × 3 in size. The shallow layers primarily extract low-level features such as edges, textures, and colors, which are sensitive to the local structures and fine details present in images. The deeper layers extract more advanced semantic and higher-level features capable of capturing the abstract and global information in images, enabling the network to recognize semantic objects more holistically. For the stealthiness loss, low-level information such as texture is more likely to be noticed by the human eye and is more aligned with image quality assessment; thus, we use the feature maps given by a shallow layer of the VGG network to assess the stealthiness of the perturbation, as described in Equation (7):
$$ L_{ste} = \mathbb{E}_{x_{tr} \sim D} \Big[ \big\| \varphi_j(x_{tr}) - \varphi_j(x_{adv}^{(tr)}) \big\|_2 + \sum_{s=l,m,h} \big\| \varphi_j(x_{tr}^{(s)}) - \varphi_j(x_{adv}^{(tr)(s)}) \big\|_2 \Big] \tag{7} $$
where $L_{ste}$ contains two parts: the first evaluates the discrepancy between the whole legitimate example and the whole adversarial example, and the second evaluates the discrepancy between them in the low-, middle-, and high-frequency bands, respectively; $\varphi_j$ is the feature map of the $j$th layer of the VGG network. We employ the ninth layer in our experiments.
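A sketch of this perceptual term is shown below; using the first nine layers of torchvision's VGG-19 `features` module and a plain L2 distance are assumptions made for illustration, and `weights=None` merely keeps the sketch self-contained where a pretrained model would normally be loaded.

```python
import torch
import torchvision

# Shallow VGG-19 feature extractor used as a perceptual (stealthiness) measure.
vgg_shallow = torchvision.models.vgg19(weights=None).features[:9].eval()
for p in vgg_shallow.parameters():
    p.requires_grad_(False)

def stealthiness_loss(x, x_adv, x_bands, x_adv_bands):
    """Whole-image term plus the three per-band terms, in the spirit of Equation (7).
    x, x_adv: (B, 3, H, W); x_bands, x_adv_bands: lists of three band-limited versions."""
    loss = torch.norm(vgg_shallow(x) - vgg_shallow(x_adv), p=2)
    for xb, xab in zip(x_bands, x_adv_bands):               # low/middle/high band pairs
        loss = loss + torch.norm(vgg_shallow(xb) - vgg_shallow(xab), p=2)
    return loss
```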

3.3.2. Adversarial Loss

The role of the adversarial loss is to control the attacking performance of the adversarial perturbation. In face recognition, the cosine similarity between two embeddings is the most frequently used measure of the discrepancy between two examples. We employ one minus the cosine similarity between the adversarial example and the target example as one term of the adversarial loss, as shown in Equation (8):
$$ L_{sim} = \mathbb{E}_{x_{tr} \sim D} \Big[ 1 - \cos\big( F(x_{adv}^{(tr)}), F(x_{tar}) \big) \Big] \tag{8} $$
where x tar refers to the target example, which may be the customized target example or a genuine target example.
To further enhance the efficacy of adversarial loss, we incorporate the variance of cosine similarity in the training set as the other item into the adversarial loss. The variance represents the divergence of cosine similarity distribution. By progressively reducing the variance, we effectively narrow the gap between the adversarial example and the target example, thereby improving the fooling success rate. The variance item in adversarial loss is defined in Equations (9) and (10):
$$ L_{var} = \mathbb{E}_{x_{tr} \sim D} \Big[ \big( \cos\big( F(x_{adv}^{(tr)}), F(x_{tar}) \big) - \mu \big)^2 \Big] \tag{9} $$
$$ \mu = \mathbb{E}_{x_{tr} \sim D} \Big[ \cos\big( F(x_{adv}^{(tr)}), F(x_{tar}) \big) \Big] \tag{10} $$
We record some of the data produced during the learning process, compute histograms of the cosine similarity values, and show the histograms at four instants, labeled $T_1$, $T_2$, $T_3$, and $T_4$, in Figure 5. The horizontal coordinate is the cosine similarity between an adversarial example and the customized target example, and the vertical coordinate is the number of occurrences of that value. $T_1$ is an initial instant, $T_2$ and $T_3$ are two intermediate instants with $T_2$ earlier than $T_3$, and $T_4$ is the ending instant. This evolution demonstrates that not only does the cosine similarity itself gradually approach 1, but the aggregation of the cosine similarities is also strengthened by the ending instant. The result confirms the usefulness of introducing the variance term into the adversarial loss for boosting the attacking success.
The adversarial loss is the combination of $L_{sim}$ and $L_{var}$ weighted by $\kappa$, as written in Equation (11):
$$ L_{adv} = L_{sim} + \kappa L_{var} \tag{11} $$
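The combined adversarial term can be computed directly from a batch of embeddings, as in the sketch below; the default weight value is illustrative (the actual values of κ used in the experiments are listed in Table 3).

```python
import torch
import torch.nn.functional as F

def adversarial_loss(feat_adv, feat_tar, kappa=0.5):
    """L_sim plus kappa * L_var over a batch, in the spirit of Equations (8)-(11).
    feat_adv: (B, d) embeddings of adversarial examples; feat_tar: (d,) target embedding."""
    cos = F.cosine_similarity(feat_adv, feat_tar.unsqueeze(0).expand_as(feat_adv), dim=1)  # (B,)
    l_sim = (1.0 - cos).mean()            # pull adversarial embeddings toward the target
    l_var = cos.var(unbiased=False)       # shrink the spread of similarities within the batch
    return l_sim + kappa * l_var
```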

3.3.3. Overall Loss

The overall loss $L_{all}$ is the combination of $L_{adv}$ and $L_{ste}$ weighted by $\lambda$, as written in Equation (12):
$$ L_{all} = L_{adv} + \lambda L_{ste} \tag{12} $$

4. Experimental Results and Analysis

In this section, we conduct comprehensive experiments to evaluate the effectiveness of the proposed FaUAP-FBF. First, we describe the overall experimental setting. Then, we compare FaUAP-FBF with other methods; the results demonstrate that FaUAP-FBF achieves a superior attacking rate on face recognition among the compared attack methods. Last, we conduct ablation experiments to reveal the influence of multiple factors on FaUAP-FBF.

4.1. Experimental Setup

Our experiments are implemented in PyTorch and accelerated with a single GTX TITAN XP GPU (12 GB). We employ two commonly used face datasets, LFW and CASIA-WebFace, for training. Both datasets are organized as pairs of images of the same individuals. From the LFW dataset, which encompasses over 5000 identities, we use 6000 pairs of facial images for training, and 2000 pairs not present in the training set are selected for testing. As regards CASIA-WebFace, considering its extensive coverage of diverse facial identities, we select 1000 identities and use 6000 pairs of facial images for training; similarly, we choose 2000 pairs different from those in the training set for testing. We employ the ArcFace loss to pre-train three substitute target networks: MobileNetV1 [36], MobileFaceNet [37], and IResNet50 [38]. The perturbation constraint ξ is set to 0.12, the batch size is set to 10 pairs of images, and the learning rate η is set to 0.01. Table 3 lists the loss parameters κ and λ for the different datasets and substitute target networks. The three objective evaluation metrics for stealthiness are SSIM, PSNR, and LPIPS: larger SSIM and PSNR values indicate better image quality, whereas a smaller LPIPS value indicates better image quality.
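The three stealthiness metrics can be computed with standard packages, as in the sketch below; the scikit-image functions and the AlexNet backbone for the official `lpips` package are implementation choices assumed here.

```python
import lpips
import numpy as np
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # learned perceptual metric

def stealthiness_metrics(x, x_adv):
    """x, x_adv: H x W x 3 float arrays in [0, 1]; returns (SSIM, PSNR, LPIPS)."""
    ssim = structural_similarity(x, x_adv, channel_axis=2, data_range=1.0)
    psnr = peak_signal_noise_ratio(x, x_adv, data_range=1.0)
    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() * 2 - 1  # LPIPS expects [-1, 1]
    lp = lpips_fn(to_tensor(x), to_tensor(x_adv)).item()
    return ssim, psnr, lp
```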

4.2. Results

4.2.1. Non-Target Attack via the Customized Target Example

In this experiment, we test the performance of a non-target attack via the customized target example. We randomly select 6000 facial images from LFW and compute their average to initialize the customized target example, and IResNet50 and MobileFaceNet are taken as substitute target models. To the best of our knowledge, there are no existing universal perturbation attack methods tailored to face recognition. Thus, we compare our proposed FaUAP-FBF with two existing methods: the UAP generated from natural images [9] and the FTGAP generated from texture images [31]. "Random" is also compared as a baseline. The results are presented in Table 4.
Our proposed FaUAP-FBF achieves an approximate fooling rate of 80% on the respective test sets, with the highest fooling rate of 85.02% achieved on the LFW dataset with the IResNet50 model. Owing to the correlation between stealthiness and the image quality of the adversarial examples, the objective SSIM, PSNR, and LPIPS values also attain favorable stealthiness compared to the other methods. As shown in Table 4, the UAP designed for natural images does exhibit a favorable fooling rate; however, since it generates perturbations in the spatial domain, its stealthiness metrics are less competitive than those of the other two methods. FTGAP, on the other hand, generates perturbations in the frequency domain but lacks a fine exploitation of that domain; consequently, its fooling rate is inferior to that of our proposed FaUAP-FBF. The legitimate and adversarial examples adopted in this experiment are shown in Figure 6. Among all adversarial examples, those generated by FaUAP-FBF appear to have better visual concealment than the other methods.

4.2.2. Target Attack via a Specified Target Example

In this experiment, we test the performance of a target attack via a specified target example. The datasets and substitute target models are identical to those in the non-target attack setting. The results are listed in Table 5. The fooling rates reach approximately 80% and favorable stealthiness metrics are obtained. The results fully confirm the effectiveness of FaUAP-FBF for unified non-target and target attacks.

4.2.3. Black-Box Attacks

The previous experiments use the same target model during learning and testing, which can be regarded as a white-box attack. In real scenarios, however, the learned UAP is usually used to cheat an unknown target model, namely a black-box attack. Thus, in this experiment we evaluate the attacking capability of FaUAP-FBF under the black-box setting. Besides IResNet50 and MobileFaceNet, we also take MobileNetV1 as a target model for testing. All the results are listed in Table 6. For a complete comparison, we also show the fooling rates of the white-box attacks. It is evident that the white-box attack achieves an approximate fooling rate of 80% in the diagonal positions. However, the black-box fooling rates in the non-diagonal positions are remarkably reduced, implying that our proposed FaUAP-FBF is deficient in black-box settings. We argue that the distinct distributions of embeddings across different target models lead to the insufficient generalization of the UAP. The purpose of this experiment is to offer valuable insights and inspiration for future investigations toward attaining a competent UAP for black-box attacks.

4.3. Ablation Study

The motivation of the ablation study is to evaluate the roles of the essential components of FaUAP-FBF. We use LFW as the dataset and IResNet50 and MobileFaceNet as substitute target models.

4.3.1. Impact of Learnable Filters

In this experiment, we explore the role of the learnable filters. Learnable filters adapt to the training samples to attain dynamic filter characteristics for the high-, middle-, and low-frequency components, respectively. For comparison, we impose fixed filters on both the legitimate example and the UAP. The fooling rates and objective stealthiness metrics are presented in Table 7. Comparatively, the learnable filters obtain a more balanced trade-off between fooling rate and stealthiness.

4.3.2. Impact of Frequency Separation

In this experiment, we explore the role of the three-band frequency separation. For comparison, we directly generate the perturbation across the entire frequency spectrum. The fooling rate and objective stealthiness metrics presented in Table 8 indicate that frequency separation leads to a better balance between fooling rate and stealthiness.

4.3.3. Impact of Customized Target Example

In this experiment, we explore the role of the customized target example. Since the target example is customized to differ significantly from the average of the legitimate dataset, decreasing the distance between the adversarial example and the customized target example is consistent with increasing the distance between the adversarial example and the image of any victim, thus achieving the desired effect of a non-target attack immune to specific non-target victims. For comparison, we employ a random image as the target example. The fooling rate and objective stealthiness metrics are presented in Table 9. The results show that adopting the customized target example improves both the fooling rate and the stealthiness compared to the random target example.

4.3.4. Impact of Variance of Cosine Similarity in Loss

In this experiment, we explore the role of the variance of cosine similarity in the loss. By continuously reducing this variance, we enforce the adversarial examples within a batch to cluster more tightly and move closer to the target. For comparison, we remove the variance of cosine similarity from the total loss. The results in Table 10 show that removing the variance term significantly lessens the fooling rates, while the stealthiness remains comparable. The result confirms the merit of the variance of cosine similarity in the loss.

5. Conclusions

In this paper, we leverage UAP and frequency band filters against face recognition in a method named FaUAP-FBF. We assess the vulnerability of face recognition systems to UAP and exploit the frequency domain by learning high-, middle-, and low-band filters as an additional dimension for refining the UAP. In addition, we propose a customized target example for non-target attacks that is immune to specific non-target victims, thereby unifying non-target and target attacks into a uniform target attack. Finally, by introducing the variance of cosine similarity into the adversarial loss, we obtain improved attacking performance. Experimental results validate the efficacy of FaUAP-FBF. However, the remarkable reduction of fooling rates in the black-box setting implies that our proposed FaUAP-FBF is deficient as a black-box attack. Our future work will further exploit the frequency domain to tailor competent universal adversarial attacks in the black-box setting. In addition, we have not deeply considered the intrinsic characteristics of facial images in FaUAP-FBF, for instance, the importance of key facial regions for the trade-off between fooling rate and stealthiness when using frequency-domain filters as an additional dimension for refining the UAP; this is also an essential direction for future work.

Author Contributions

Conceptualization, L.Z.; methodology, L.Z., X.J., and G.S.; software, B.H. and X.J.; validation, L.Z., B.H., and X.J.; supervision, G.S.; project administration, X.J. and G.S.; funding acquisition, L.Z. and G.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the National Natural Science Foundation of China (Grant Nos. U23B2023 and 61901096), the CCF-Ant Privacy Computing Special Research Fund (Grant No. CCF-AFSG RF20220019.a), and the Guangdong Basic and Applied Basic Research Foundation (Grant No. 2023A1515010815).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kang, Y.; Cai, Z.; Tan, C.W.; Huang, Q.; Liu, H. Natural language processing (NLP) in management research: A literature review. J. Manag. Anal. 2020, 7, 139–172. [Google Scholar] [CrossRef]
  2. Li, J.; Zhang, X.; Li, F.; Huang, L. Speech emotion recognition based on optimized deep features of dual-channel complementary spectrogram. Inf. Sci. 2023, 649, 119649. [Google Scholar] [CrossRef]
  3. Zhang, K.; Hao, W.; Yu, X.; Shao, T. An interpretable image classification model Combining a fuzzy neural network with a variational autoencoder inspired by the human brain. Inf. Sci. 2024, 661, 119885. [Google Scholar] [CrossRef]
  4. Li, L.; Mu, X.; Li, S.; Peng, H. A review of face recognition technology. IEEE Access 2020, 8, 139110–139120. [Google Scholar] [CrossRef]
  5. Liu, H.; Ditzler, G. Data poisoning against information-theoretic feature selection. Inf. Sci. 2021, 573, 396–411. [Google Scholar] [CrossRef]
  6. Li, Y.; Jiang, Y.; Li, Z.; Xia, S.T. Backdoor learning: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 5–22. [Google Scholar] [CrossRef]
  7. Zhu, H.; Zheng, H.; Zhu, Y.; Sui, X. Boosting the transferability of adversarial attacks with adaptive points selecting in temporal neighborhood. Inf. Sci. 2023, 641, 119081. [Google Scholar] [CrossRef]
  8. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2013, arXiv:1312.6199. [Google Scholar]
  9. Moosavi-Dezfooli, S.M.; Fawzi, A.; Fawzi, O.; Frossard, P. Universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1765–1773. [Google Scholar]
  10. Sharif, M.; Bhagavatula, S.; Bauer, L.; Reiter, M.K. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 Acm Sigsac Conference on Computer and Communications Security, Vienna, Austria, 24–28 October 2016; pp. 1528–1540. [Google Scholar]
  11. Parmar, R.; Kuribayashi, M.; Takiwaki, H.; Raval, M.S. On fooling facial recognition systems using adversarial patches. In Proceedings of the 2022 International Joint Conference on Neural Networks (IJCNN), Padua, Italy, 18–23 July 2022; pp. 1–8. [Google Scholar] [CrossRef]
  12. Hu, S.; Liu, X.; Zhang, Y.; Li, M.; Zhang, L.Y.; Jin, H.; Wu, L. Protecting facial privacy: Generating adversarial identity masks via style-robust makeup transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 15014–15023. [Google Scholar]
  13. Ranjan, R.; Bansal, A.; Zheng, J.; Xu, H.; Gleason, J.; Lu, B.; Nanduri, A.; Chen, J.C.; Castillo, C.D.; Chellappa, R. A fast and accurate system for face detection, identification, and verification. IEEE Trans. Biom. Behav. Identity Sci. 2019, 1, 82–96. [Google Scholar] [CrossRef]
  14. Adjabi, I.; Ouahabi, A.; Benzaoui, A.; Taleb-Ahmed, A. Past, present, and future of face recognition: A review. Electronics 2020, 9, 1188. [Google Scholar] [CrossRef]
  15. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  16. Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
  17. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
  18. Hermans, A.; Beyer, L.; Leibe, B. In defense of the triplet loss for person re-identification. arXiv 2017, arXiv:1703.07737. [Google Scholar]
  19. Wang, H.; Wang, Y.; Zhou, Z.; Ji, X.; Gong, D.; Zhou, J.; Li, Z.; Liu, W. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 5265–5274. [Google Scholar]
  20. Deng, J.; Guo, J.; Xue, N.; Zafeiriou, S. Arcface: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4690–4699. [Google Scholar]
  21. Komkov, S.; Petiushko, A. Advhat: Real-world adversarial attack on arcface face id system. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 819–826. [Google Scholar]
  22. Ibsen, M.; Rathgeb, C.; Brechtel, F.; Klepp, R.; Pöppelmann, K.; George, A.; Marcel, S.; Busch, C. Attacking face recognition with t-shirts: Database, vulnerability assessment and detection. IEEE Access 2023, 11, 57867–57879. [Google Scholar] [CrossRef]
  23. Zheng, X.; Fan, Y.; Wu, B.; Zhang, Y.; Wang, J.; Pan, S. Robust physical-world attacks on face recognition. Pattern Recognit. 2023, 133, 109009. [Google Scholar] [CrossRef]
  24. Dabouei, A.; Soleymani, S.; Dawson, J.; Nasrabadi, N. Fast geometrically-perturbed adversarial faces. In Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa Village, HI, USA, 7–11 January 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1979–1988. [Google Scholar]
  25. Dong, Y.; Su, H.; Wu, B.; Li, Z.; Liu, W.; Zhang, T.; Zhu, J. Efficient decision-based black-box adversarial attacks on face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7714–7722. [Google Scholar]
  26. Hussain, S.; Huster, T.; Mesterharm, C.; Neekhara, P.; An, K.; Jere, M.; Sikka, H.; Koushanfar, F. Reface: Real-time adversarial attacks on face recognition systems. arXiv 2022, arXiv:2206.04783. [Google Scholar]
  27. Mopuri, K.R.; Garg, U.; Babu, R.V. Fast feature fool: A data independent approach to universal adversarial perturbations. arXiv 2017, arXiv:1707.05572. [Google Scholar]
  28. Hayes, J.; Danezis, G. Learning universal adversarial perturbations with generative models. In Proceedings of the 2018 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 24 May 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 43–49. [Google Scholar]
  29. Mopuri, K.R.; Ojha, U.; Garg, U.; Babu, R.V. Nag: Network for adversary generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 742–751. [Google Scholar]
  30. Zhang, C.; Benz, P.; Imtiaz, T.; Kweon, I.S. Understanding adversarial examples from the mutual influence of images and perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 14521–14530. [Google Scholar]
  31. Deng, Y.; Karam, L.J. Frequency-tuned universal adversarial attacks on texture recognition. IEEE Trans. Image Process. 2022, 31, 5856–5868. [Google Scholar] [CrossRef]
  32. Hu, H.; Sun, G. Inheritance Attention Matrix-Based Universal Adversarial Perturbations on Vision Transformers. IEEE Signal Process. Lett. 2021, 28, 1923–1927. [Google Scholar] [CrossRef]
  33. Zolfi, A.; Avidan, S.; Elovici, Y.; Shabtai, A. Adversarial mask: Real-world universal adversarial attack on face recognition models. In Proceedings of the 2023 European Conference on Machine Learning and Knowledge Discovery in Databases, Turin, Italy, 18 September 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 304–320. [Google Scholar]
  34. Duan, W.; Gao, C.; Li, P.; Zhu, C. Universal Adversarial Attack for Face Recognition Based on Commonality Gradient. Comput. Syst. Appl. 2024, 33, 222–230. (In Chinese) [Google Scholar]
  35. Qiao, T.; Zhao, B.; Shi, R.; Han, M.; Hassaballah, M.; Retraint, F.; Luo, X. Scalable Universal Adversarial Watermark Defending against Facial Forgery. IEEE Trans. Inf. Forensics Secur. 2024, 19, 8998–9011. [Google Scholar] [CrossRef]
  36. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar]
  37. Chen, S.; Liu, Y.; Gao, X.; Han, Z. Mobilefacenets: Efficient cnns for accurate real-time face verification on mobile devices. In Proceedings of the Biometric Recognition: 13th Chinese Conference, CCBR 2018, Urumqi, China, 11–12 August 2018; pp. 428–438. [Google Scholar]
  38. Duta, I.C.; Liu, L.; Zhu, F.; Shao, L. Improved residual networks for image and video recognition. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 9415–9422. [Google Scholar] [CrossRef]
Figure 1. Diagram of FaUAP-FBF. First, images in a training set are converted to the frequency domain through the DCT, the UAP is initialized with Gaussian noise, and the three learnable band filters are initialized with the fixed filters. Subsequently, the UAP and filters are alternately and repeatedly updated in terms of a weighted combination of adversarial loss and stealthiness loss, and the UAP is also constrained by the $l_\infty$-norm. During each iteration, the filtered perturbation and the legitimate example have to be converted back to the spatial domain using the IDCT to calculate the adversarial loss and stealthiness loss. $x_{ct}$ is proposed to implement a unique non-target attack that is immune to the varying victim. The iteration continues until a certain criterion is fulfilled.
Figure 2. (a) DCT, filtering in the frequency domain and IDCT for the image. (b) DCT, filtering in the frequency domain and IDCT for the perturbation.
Figure 3. Visualization of the ultimately learned filters $\hat{f}_l, \hat{f}_m, \hat{f}_h$. All filters are 8 × 8 in size.
Figure 4. Flowchart of customizing target example.
Figure 5. Histograms of cosine similarity values at T 1 , T 2 , T 3 , and T 4 instants. The cosine similarity increases from left to right along the horizontal direction. The histogram continuously moves from left to right, and the aggregation of the histogram is also enhanced as the training proceeds.
Figure 6. Some legitimate examples vs. the adversarial examples: (a,e,i,m) are legitimate examples; (b,f,j,n) are generated using UAP [9]; (c,g,k,o) are generated using FTGAP [31]; and (d,h,l,p) are generated using FaUAP-FBF.
Table 1. Definitions of used notations.
Notation | Definition
v, V | spatial-domain / frequency-domain perturbation
x, X | spatial-domain / frequency-domain legitimate example
V_l, V_m, V_h | low/middle/high-frequency component of the perturbation
X_l, X_m, X_h | low/middle/high-frequency component of the legitimate example
𝕗_l, 𝕗_m, 𝕗_h | low/middle/high-frequency band fixed filter
f_l, f_m, f_h | low/middle/high-frequency band learnable filter
x_tr, x_adv^(tr) | spatial-domain legitimate example / adversarial example in the training set
x_te, x_adv^(te) | spatial-domain legitimate example / adversarial example in the test set
x_tr^(l), x_tr^(m), x_tr^(h) | low/middle/high-frequency component of the spatial-domain legitimate example in the training set
x_adv^(tr)(l), x_adv^(tr)(m), x_adv^(tr)(h) | low/middle/high-frequency component of the spatial-domain adversarial example in the training set
x_tar, x_ct | spatial-domain target example / customized target example
Table 2. Computation Analysis for FaUAP-FBF during training.
Phase | Computation | Time
Forward | legitimate image set | N × M × (T_DCT + 3 × T_IDCT + T_filt) + N × 4 × T_VGG
Forward | adversarial image set | l × (N/n) × N × (4 × T_VGG + T_tar)
Forward | perturbation | l × (N/n) × M × (T_DCT + 3 × T_IDCT + T_filt)
Backward | filters | l × (N/n) × 3 × T_grad for f ∈ R^(3×3)
Backward | perturbation | l × (N/n) × T_grad for v ∈ R^(p×q)
Table 3. The hyperparameter setting for the loss function.
Dataset | Substitute Target Model | κ | λ
LFW | IResNet50 | 0.5 | 0.09
LFW | MobileFaceNet | 0.42 | 0.042
CASIA-WebFace | IResNet50 | 0.2 | 0.045
CASIA-WebFace | MobileFaceNet | 0.2 | 0.004
Table 4. Comparison results of fooling rate (%) and stealthiness.
Dataset | Substitute Target Model | Method | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
LFW | IResNet50 | Random | 20.79 | 0.40 | 18.93 | 0.62
LFW | IResNet50 | UAP [9] | 76.62 | 0.76 | 27.58 | 0.25
LFW | IResNet50 | FTGAP [31] | 82.14 | 0.89 | 31.70 | 0.24
LFW | IResNet50 | FaUAP-FBF | 85.02 | 0.92 | 33.49 | 0.14
LFW | MobileFaceNet | Random | 23.16 | 0.36 | 18.08 | 0.63
LFW | MobileFaceNet | UAP [9] | 79.05 | 0.85 | 29.72 | 0.23
LFW | MobileFaceNet | FTGAP [31] | 79.36 | 0.85 | 30.52 | 0.19
LFW | MobileFaceNet | FaUAP-FBF | 80.94 | 0.90 | 32.13 | 0.18
CASIA-WebFace | IResNet50 | Random | 37.05 | 0.49 | 20.88 | 0.46
CASIA-WebFace | IResNet50 | UAP [9] | 71.15 | 0.78 | 28.04 | 0.29
CASIA-WebFace | IResNet50 | FTGAP [31] | 76.25 | 0.85 | 30.46 | 0.26
CASIA-WebFace | IResNet50 | FaUAP-FBF | 78.55 | 0.90 | 32.13 | 0.12
CASIA-WebFace | MobileFaceNet | Random | 30.25 | 0.51 | 21.40 | 0.44
CASIA-WebFace | MobileFaceNet | UAP [9] | 76.30 | 0.76 | 27.54 | 0.30
CASIA-WebFace | MobileFaceNet | FTGAP [31] | 78.41 | 0.87 | 30.67 | 0.22
CASIA-WebFace | MobileFaceNet | FaUAP-FBF | 80.38 | 0.88 | 31.35 | 0.23
Table 5. Specified target attack results of fooling rate (%) and stealthiness.
Dataset | Substitute Target Model | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
LFW | IResNet50 | 84.94 | 0.92 | 33.25 | 0.10
LFW | MobileFaceNet | 82.91 | 0.91 | 32.40 | 0.11
CASIA-WebFace | IResNet50 | 81.52 | 0.92 | 32.98 | 0.12
CASIA-WebFace | MobileFaceNet | 80.03 | 0.89 | 31.33 | 0.14
Table 6. Fooling rates of black-box attacks (%).
Database | Learning Model | Test: IResNet50 | Test: MobileFaceNet | Test: MobileNetV1
LFW | IResNet50 | 85.02 | 11.35 | 15.85
LFW | MobileFaceNet | 42.73 | 80.94 | 35.14
LFW | MobileNetV1 | 48.28 | 23.14 | 79.21
CASIA-WebFace | IResNet50 | 78.55 | 29.50 | 24.10
CASIA-WebFace | MobileFaceNet | 57.25 | 80.38 | 55.15
CASIA-WebFace | MobileNetV1 | 52.85 | 44.15 | 80.22
Table 7. Comparison between fixed filters and learnable filters.
Substitute Target Model | Method | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
IResNet50 | FaUAP-FBF | 85.02 | 0.92 | 33.49 | 0.14
IResNet50 | Fixed filter | 84.48 | 0.88 | 31.56 | 0.20
MobileFaceNet | FaUAP-FBF | 80.94 | 0.90 | 32.13 | 0.18
MobileFaceNet | Fixed filter | 80.15 | 0.86 | 30.73 | 0.25
Table 8. Comparison between full frequency and frequency separation.
Substitute Target Model | Method | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
IResNet50 | FaUAP-FBF | 85.02 | 0.92 | 33.49 | 0.14
IResNet50 | Full-frequency | 81.63 | 0.89 | 32.09 | 0.21
MobileFaceNet | FaUAP-FBF | 80.94 | 0.90 | 32.13 | 0.18
MobileFaceNet | Full-frequency | 77.16 | 0.86 | 30.64 | 0.21
Table 9. Comparison between customized target example and random target example.
Substitute Target Model | Method | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
IResNet50 | FaUAP-FBF | 85.02 | 0.92 | 33.49 | 0.14
IResNet50 | Random | 82.68 | 0.91 | 33.06 | 0.15
MobileFaceNet | FaUAP-FBF | 80.94 | 0.90 | 32.13 | 0.18
MobileFaceNet | Random | 78.14 | 0.88 | 31.52 | 0.19
Table 10. Comparison between with and without variance adversarial loss item.
Substitute Target Model | Method | FR ↑ | SSIM ↑ | PSNR ↑ | LPIPS ↓
IResNet50 | FaUAP-FBF | 85.02 | 0.92 | 33.49 | 0.14
IResNet50 | No-var | 80.24 | 0.92 | 32.87 | 0.14
MobileFaceNet | FaUAP-FBF | 80.94 | 0.90 | 32.13 | 0.18
MobileFaceNet | No-var | 75.97 | 0.89 | 31.79 | 0.19

