Article

Robust Deep Active Learning via Distance-Measured Data Mixing and Adversarial Training

1 School of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China
2 Sendelta International Academy Shenzhen, Shenzhen 518100, China
3 College of Bigdata and Internet, Shenzhen Technology University, Shenzhen 518118, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(11), 1159; https://doi.org/10.3390/e27111159
Submission received: 3 October 2025 / Revised: 6 November 2025 / Accepted: 13 November 2025 / Published: 14 November 2025

Abstract

Accurate uncertainty estimation in unlabeled data represents a fundamental challenge in active learning. Traditional deep active learning approaches suffer from a critical limitation: uncertainty-based selection strategies tend to concentrate excessively around noisy decision boundaries, while diversity-based methods may miss samples that are crucial for decision-making. This over-reliance on confidence metrics when employing deep neural networks as backbone architectures often results in suboptimal data selection. We introduce Distance-Measured Data Mixing (DM2), a novel framework that estimates sample uncertainty through distance-weighted data mixing to capture inter-sample relationships and the underlying data manifold structure. This approach enables informative sample selection across the entire data distribution while maintaining focus on near-boundary regions without overfitting to the most ambiguous instances. To address noise and instability issues inherent in boundary regions, we propose a boundary-aware feature fusion mechanism integrated with fast gradient adversarial training. This technique generates adversarial counterparts of selected near-boundary samples and trains them jointly with the original instances, thereby enhancing model robustness and generalization capabilities under complex or imbalanced data conditions. Comprehensive experiments across diverse tasks, model architectures, and data modalities demonstrate that our approach consistently surpasses strong uncertainty-based and diversity-based baselines while significantly reducing the number of labeled samples required for effective learning.

1. Introduction

Deep neural networks typically require extensive labeled datasets for effective training, making data annotation a slow, expensive, and complex process [1]. Active learning (AL) addresses this challenge by strategically selecting the most informative samples from an unlabeled pool for annotation, thereby reducing the overall labeling burden [2]. A prevalent approach prioritizes samples with low-confidence predictions, as these high-uncertainty instances are empirically proven to provide valuable information for model improvement [3]. However, strategies that rely exclusively on uncertainty estimation often concentrate sample selection within narrow regions of the feature space, resulting in inadequate coverage of the overall data distribution and potential amplification of label noise [4]. Incorporating diversity considerations into the selection process can mitigate these issues by ensuring broader distributional coverage and capturing richer information content across the data manifold.
Recent active learning research has therefore pursued two complementary objectives: uncertainty estimation and diversity promotion. Uncertainty-based methods such as Least Confidence [3,5,6] prioritize samples exhibiting minimal predictive confidence. Deep Bayesian techniques further enhance uncertainty estimation by utilizing posterior predictive distributions to refine entropy-based and mutual-information-based selection criteria  [7,8,9,10,11]. Alternative approaches employ auxiliary models to improve uncertainty estimation or guide the selection process  [12,13,14,15,16,17]. Complementing these uncertainty-focused strategies, diversity-oriented methods such as VAAL [18] explore varied regions within the latent space, while BADGE [19] selects points via k-means++ in gradient-embedding space to jointly encourage diversity and gradient-driven uncertainty. Mixing-based active learners like Alpha-Mix also synthesize inputs, but they score and select anchors before mixing and do not explicitly target boundary sensitivity within the AL loop. A brief comparison of pipeline choices highlights these differences: while BADGE relies on gradient embeddings for selection diversity and Alpha-Mix leverages mixing as post-selection augmentation, DM2 couples metric-driven neighbor matching with mixed-sample scoring and adversarial perturbations, thereby unifying diversity, calibrated uncertainty near the boundary, and robustness within a single AL loop. Despite these advances, a fundamental trade-off remains: uncertainty-centric querying can oversample noisy boundary points, whereas diversity-only selection may overlook the most decision-relevant instances in complex, imbalanced, or noisy data—motivating designs like DM2 that explicitly integrate boundary awareness with coverage.
From a decision boundary perspective, samples with high uncertainty typically reside near class separators, where noise and label ambiguity are most prevalent. Excessive emphasis on such points can propagate labeling errors and compromise training quality. To investigate this phenomenon, we conducted a preliminary study using Least Confidence on CIFAR-10 [20] with MobileNet [21] as the backbone architecture. Following an initial training phase, we selected 1000 samples per iteration based on lowest confidence scores and visualized the selections using t-SNE, comparing direct top-k selection (offset 0) with an offset strategy that skips the top 300 lowest-confidence samples. As illustrated in Figure 1, the offset strategy distributes selections more broadly across the feature space, enhancing both coverage and diversity. This approach achieved 84.11% accuracy compared to 83.5% without offset, demonstrating that strategically shifting away from the most uncertain samples can capture richer information content and improve overall performance.
Motivated by these observations and drawing inspiration from Alpha-Mix [1], which employs tuned mixing strategies to introduce variability while preserving salient features, we propose a distance-measured data mixing framework (DM2) for deep active learning that simultaneously addresses uncertainty estimation, diversity promotion, and robustness enhancement. DM2 introduces three algorithmic novelties: (i) neighbor selection in the representation space using a combined $\ell_1 + \ell_2$ distance to robustly pair anchors with semantically proximate yet distinct neighbors; (ii) scoring on mixed samples rather than on anchors, which directly estimates the informativeness of interpolated boundary cases; (iii) an explicit boundary-aware adversarial augmentation step integrated into each query round to probe model fragility near decision surfaces. Building upon this foundation, we further introduce a boundary-aware feature fusion mechanism via adversarial training: we generate adversarial counterparts for selected near-boundary samples using fast gradient methods and train them jointly with the original instances. This approach enhances generalization capabilities and robustness in complex, noisy environments by stabilizing the learning process around decision boundaries.
Our contributions are summarized as follows:
  • We introduce Distance-Measured Data Mixing (DM2) Active Learning, a novel deep active learning framework that estimates sample uncertainty through distance-weighted mixing of data samples. By exploiting inter-sample relationships and distributional structure, this method selects informative instances across the data manifold, including near-boundary regions, thereby enhancing the diversity of queried samples.
  • To address noise susceptibility in challenging scenarios, we augment Distance-Measured Data Mixing with adversarial training (DM2-AT). We generate fast gradient adversarial samples for selected near-boundary instances and train them jointly with the original data, improving model robustness and generalization performance under complex data distributions.
  • Comprehensive experiments across diverse tasks, model architectures, and data modalities demonstrate that our method achieves superior performance while significantly reducing labeling requirements compared to existing approaches.

2. Related Work

Uncertainty-based approaches select the most ambiguous unlabeled samples according to the current model. Since the model is initially trained with a limited dataset, these ambiguous samples provide valuable information for subsequent training rounds. (1) Prominent uncertainty selection methods include Least Confidence [5] and Entropy Sampling [22]. The Margin Sampling method [6] evaluates the difference between the confidence levels of the highest and second-highest prediction classes. BatchBALD [23] selects samples by maximizing the joint information gain of a batch. (2) Bayesian framework approaches focus on model parameters [4,24], often integrating Bayesian belief networks with Monte Carlo sampling [25]. Deep Bayesian approximation methods like MC-Dropout [7] are employed to address the challenge of probabilistic prediction. Query by Committee (QBC) methods facilitate multi-model training [26], while adversarial training methods [10,11,18] provide additional robustness. Furthermore, the variance between predicted probabilities within a set [27] has been proposed as a measure of uncertainty. (3) Model-based active learning trains a separate model for active instance selection. VAAL [18] uses a variational autoencoder (VAE) to model the data distribution. CoreGCN [13] employs Graph Convolutional Networks (GCNs) to represent relationships between examples. LL4AL [14] integrates a lightweight module to learn the prediction error of unlabeled examples, capturing the learning loss in active learning. ProbCover [28] is a novel active learning algorithm designed for low-budget scenarios, aiming to maximize probability coverage. Methods like ISAL and ent-gn [15,16] propose using influence functions [17] to estimate potential model changes, thereby informing training strategies. These techniques often prioritize points near the decision boundary. However, they may overlook valuable data away from the decision boundary by relying solely on predicted class likelihood.
In active learning, diversity refers to selecting representative and varied samples for labeling. Methods often assign confidence scores based on classifier uncertainty and sample diversity [29]. One strategy uses entropy and mutual information within a CRF graphical model for query selection [30]. The state-of-the-art method Coreset [12] focuses on selecting data with diverse representations. BADGE [19] explores the relationship between data diversity and uncertainty using Bayesian methods and clustering. CoreGCN [13] employs graph embeddings for diverse data selection. Hierarchical agglomerative clustering (HAC) [31] distributes uncertain examples across clusters. BatchBALD [23] also considers sample diversity by selecting complementary samples to avoid redundancy. Recent advances utilize parameters from the final neural layer, while Alpha-Mix [1] employs feature mixing with alpha values. Noise Stability [32] introduces a greedy algorithm that adds noise to highlight differences between samples. These methods propose complex paradigms for modeling diversity, enhancing effectiveness but increasing computational complexity and model coupling.

3. Distance-Measured Data Mixing

This section presents our theoretical framework for Distance-Measured Data Mixing (DM2), as illustrated in Figure 2. Our approach focuses on selecting samples that effectively balance diversity and uncertainty considerations. The process begins by feeding unlabeled data through the model to extract feature-layer embeddings. We then compute similarity distances between samples within these embedding spaces and select multiple similar instances for linear mixing according to predetermined proportions. The model evaluates these mixed samples to determine their confidence levels. Finally, we rank samples by confidence scores and select data indices with the lowest-confidence values, adding the corresponding original samples to the labeled candidate set to complete each selection round.

3.1. Formal Definition

Given an unlabeled data pool U and an initially empty labeled data pool L, the objective of active learning is to select a subset of samples $X_i$ from U based on a predefined annotation budget (e.g., selecting 1000 samples at a time from a pool of 50,000 samples). These selected samples are annotated by human experts and added to L, such that $L \leftarrow L \cup \{(X_i, Y_i)\}$, where $Y_i$ represents the newly acquired annotations. The selected samples are subsequently removed from the unlabeled pool: $U \leftarrow U \setminus X_i$.
A neural network model is defined as a function $f_\theta$, where $\theta$ denotes the model parameters. For input data $U = \{X_1, X_2, \ldots, X_n\}$, the model generates predictions $\hat{Y}_i = f_\theta(X_i)$. The model is trained by minimizing the cross-entropy loss function:
$$\mathcal{L}(Y, \hat{Y}) = -\frac{1}{N}\sum_{i=1}^{N}\sum_{c=1}^{C} y_{i,c}\,\log \hat{y}_{i,c},$$
where N represents the number of samples, C denotes the number of classes, and $\hat{y}_{i,c}$ is the predicted probability that the i-th sample belongs to class c. The optimization objective is to minimize this loss function:
$$\theta^{*} = \arg\min_{\theta}\ \mathcal{L}\big(Y, f_\theta(X)\big).$$
In subsequent data selection phases, the trained model evaluates samples from the unlabeled pool U to obtain confidence scores for each instance. Based on these confidence estimates, the method selects samples to be added to the labeled pool L, initiating the next cycle of training and data selection.

3.2. Feature Extraction

The feature representation from the final layer of a convolutional neural network (CNN) captures the highest-level, most abstract features from input images. These features effectively encapsulate global information and complex patterns, making them particularly valuable for tasks such as classification and recognition.
For each sample in the unlabeled data pool $U = \{X_1, X_2, \ldots, X_n\}$, we extract features from the last convolutional layer of the CNN. The output features are denoted as $F_{\text{Conv}}(X_i) \in \mathbb{R}^d$, where d represents the feature dimension. Here, $U_i$ denotes the feature representation of the i-th sample extracted through this layer. The feature extraction process is formalized as
$$U_i = F_{\text{Conv}}(X_i).$$

3.3. Distance Measured

Euclidean distance effectively captures geometric relationships in continuous feature spaces, making it well-suited for detecting subtle differences between samples and demonstrating high sensitivity to small variations in data. In contrast, Manhattan distance is particularly effective for measuring differences in discrete or sparse feature spaces.
To ensure fair comparisons and prevent magnitude differences from skewing similarity, we first normalize feature vectors. Let U i and U j denote the original feature vectors of samples i and j, respectively. We define their normalized counterparts as
$$\tilde{U}_i := \frac{U_i}{\|U_i\|_2}, \qquad \tilde{U}_j := \frac{U_j}{\|U_j\|_2},$$
where $\|\cdot\|_2$ denotes the Euclidean (L2) norm.
Because Euclidean and Manhattan distances have different scales, directly averaging them can introduce bias. We therefore compute each distance on the normalized vectors and additionally normalize each distance by the feature dimension d to align scales before aggregation. The (per-dimension) Euclidean and Manhattan distances are then defined as
$$d_E(\tilde{U}_i, \tilde{U}_j) = \sqrt{\frac{1}{d}\sum_{k=1}^{d}\left(\tilde{U}_{ik} - \tilde{U}_{jk}\right)^2},$$
$$d_M(\tilde{U}_i, \tilde{U}_j) = \frac{1}{d}\sum_{k=1}^{d}\left|\tilde{U}_{ik} - \tilde{U}_{jk}\right|.$$
The combined distance metric, obtained by averaging the normalized distances in Equations (4) and (5), is expressed as
$$d_c(U_i, U_j) = \frac{d_E(\tilde{U}_i, \tilde{U}_j) + d_M(\tilde{U}_i, \tilde{U}_j)}{2}.$$
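To make the distance computation concrete, the following is a minimal sketch (not the authors' released code) of Equations (4)–(6) for a batch of feature vectors. It assumes the per-dimension Euclidean distance is the root-mean-square difference of the reconstructed Equation (4); tensor names are illustrative.

```python
import torch

def combined_distance(features: torch.Tensor) -> torch.Tensor:
    """Pairwise combined distance d_c over an (m x d) feature matrix.

    Sketch of Eqs. (4)-(6): L2-normalize each feature vector, compute the
    per-dimension Euclidean and Manhattan distances, then average them.
    Builds an (m, m, d) intermediate, so it is meant for modest pool sizes.
    """
    m, d = features.shape
    f = features / features.norm(dim=1, keepdim=True).clamp_min(1e-12)  # Eq. (4) normalization
    diff = f.unsqueeze(1) - f.unsqueeze(0)           # (m, m, d) pairwise differences
    d_e = ((diff ** 2).sum(dim=2) / d).sqrt()        # per-dimension Euclidean distance
    d_m = diff.abs().sum(dim=2) / d                  # per-dimension Manhattan distance
    return (d_e + d_m) / 2                           # combined metric d_c, Eq. (6)
```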

3.4. Linear Data Mixing

The distance function $d_c(U_i, U_j)$ calculates the similarity between feature samples $U_i$ and $U_j$. In this step, we select n samples $\{X_1, X_2, \ldots, X_n\}$ that are most similar to sample $X_i$ for linear mixing. We denote the set of nearest neighbors of $U_i$ based on $d_c(U_i, U_j)$ as $N_i$, and employ the parameter $\lambda$ to control the degree of mixing. This process operates directly at the data level rather than on feature representations. The mixing formula is expressed as
$$\hat{X}_i = \lambda \cdot X_i + (1-\lambda)\cdot \sum_{j\in N_i} X_j,$$
where $\hat{X}_i$ represents the mixed data sample derived from $X_i$, $N_i$ denotes the index set of the n nearest neighbors of sample $X_i$, and $\lambda$ controls the mixing weight of the original sample $X_i$. The linearly mixed sample $\hat{X}_i$ is then fed into the pre-trained model for prediction:
$$P_i = f_\theta(\hat{X}_i).$$
The output of the classification model, $P_i$, is typically a vector representing the predicted logits for each class. This output is converted into a probability distribution using the softmax function:
$$\text{Softmax}(P_i) = \left[\frac{\exp(P_{i1})}{\sum_{c}\exp(P_{ic})}, \ldots, \frac{\exp(P_{iC})}{\sum_{c}\exp(P_{ic})}\right],$$
where C represents the number of classes and $P_{ic}$ denotes the model’s logit score for sample $\hat{X}_i$ belonging to class c.
The probability distribution from the classification model’s output is utilized to determine the confidence level for each sample. The highest predicted probability typically serves as a confidence indicator:
$$C_i = \max\big(\text{softmax}(P_i)\big).$$
This represents the model’s maximum predicted probability for sample $\hat{X}_i$, indicating its confidence level. Samples are ranked by their confidence scores, and those with the lowest confidence are selected for the next active learning batch. When confidence levels are sorted in ascending order, $\text{sort}[0]$ corresponds to the index of the sample with the lowest confidence, while $\text{sort}[N-1]$ represents the highest-confidence sample. From this sorted arrangement, we select the n samples with the lowest confidence scores for annotation. The indices of these selected samples constitute the active learning dataset $D_a$:
$$D_a = \{X_{\text{sort}[0]}, X_{\text{sort}[1]}, \ldots, X_{\text{sort}[n-1]}\}.$$
We return the selected n sample indices for active learning annotation. The corresponding samples are retrieved from the original unlabeled dataset U = X 1 , X 2 , , X m using these indices and added to the labeled pool L. This process is repeated iteratively throughout the entire active learning cycle, as shown in Algorithm 1.
Algorithm 1 Distance-Measured Data Mixing Active Learning (DM2)
1: Input: $f_\theta$: randomly initialized neural network, U: unlabeled data pool,
2:        L: initial labeled data pool, B: query budget per round,
3:        T: number of acquisition rounds, k: number of neighbors, $\lambda$: MixUp parameter
4: Output: L: updated labeled pool
5: Begin:
6:   for round ← 1 to T do
7:     Train the model $f_\theta$ on the current labeled pool L.
8:     Extract features $\{F_i = F_{\text{conv}}(x_i)\}$ for all samples $\{x_i\} \in U$.
9:     for each sample $x_i \in U$ do
10:      Compute the distance $\delta_{ij} = \|F_i - F_j\|_1$ to all other samples $x_j \in U$.
11:      Identify $N(x_i)$, the set of k-nearest neighbors of $x_i$ based on $\delta_{ij}$.
12:    end for
13:    Initialize an empty set of acquisition scores, $S \leftarrow \{\}$.
14:    for each sample $x_i \in U$ do
15:      Randomly select one neighbor $x_j$ from its neighbor set $N(x_i)$.
16:      Generate a synthetic sample: $\hat{x}_i = \lambda x_i + (1-\lambda)x_j$.
17:      Calculate model output probabilities for the synthetic sample: $p(\hat{y} \mid \hat{x}_i) = \text{Softmax}(f_\theta(\hat{x}_i))$.
18:      Compute the uncertainty score $S_i = \text{Entropy}\big(p(\hat{y} \mid \hat{x}_i)\big)$.
19:    end for
20:    Select a set of B samples $X_B = \{x_i \in U\}$ corresponding to the highest scores in S.
21:    Query the true labels $Y_B$ for the selected samples in $X_B$.
22:    Add the newly labeled data to the labeled pool: $L \leftarrow L \cup (X_B, Y_B)$.
23:    Remove the selected samples from the unlabeled pool: $U \leftarrow U \setminus X_B$.
24:  end for
25: Return L
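As a concrete reading of Algorithm 1, the following is a minimal PyTorch sketch of one acquisition round (steps 8–20), not the authors' released code. It reuses the combined_distance helper sketched in Section 3.3, and it assumes that model.features(x) returns penultimate-layer embeddings while model(x) returns logits; these method names and the (x, _) loader format are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dm2_select(model, pool_loader, budget, k=10, lam=0.5, device="cuda"):
    """One DM2 acquisition round (sketch of Algorithm 1, steps 8-20)."""
    xs, feats = [], []
    for x, _ in pool_loader:                      # iterate over the unlabeled pool U
        xs.append(x)
        feats.append(model.features(x.to(device)).flatten(1).cpu())
    X = torch.cat(xs)                             # raw inputs, for data-level mixing
    Fm = torch.cat(feats)                         # feature matrix (m x d)

    D = combined_distance(Fm)                     # pairwise d_c from the earlier sketch
    D.fill_diagonal_(float("inf"))                # exclude self-matches
    nbrs = D.topk(k, largest=False).indices       # k nearest neighbors per anchor

    # Mix each anchor with one randomly chosen neighbor (step 15-16),
    # then score the mixed inputs by predictive entropy (steps 17-18).
    j = nbrs[torch.arange(len(X)), torch.randint(0, k, (len(X),))]
    x_mix = lam * X + (1 - lam) * X[j]
    probs = F.softmax(model(x_mix.to(device)), dim=1)   # scores the pool in one pass for brevity
    scores = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    return scores.topk(budget).indices.cpu()      # indices of the B most uncertain anchors
```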

4. Adversarial Training for Boundary Data Feature Fusion

This section presents an active learning algorithm that integrates adversarial training with feature fusion for boundary data samples. This method employs adversarial training to enhance model robustness, thereby strengthening performance when processing noisy and complex data. Simultaneously, the active learning strategy reduces the required number of training samples, lowering overall training costs. Specifically, in our boundary data feature fusion approach for active learning, samples selected in each round are initially augmented through adversarial training to generate adversarial counterparts. These adversarial samples are subsequently merged with the existing labeled pool, enabling the model to fully exploit the augmented data during updates.
The advantage of this approach lies in its incorporation of active learning properties to reduce labeled data requirements while leveraging adversarial training to enhance the model’s classification capabilities, particularly when confronting noise and interference in real-world scenarios. Through this combination, the model’s information recognition performance is significantly improved, achieving more accurate classification in complex environments and demonstrating adaptability across diverse application scenarios. This algorithm not only improves model stability and classification accuracy but also reduces training sample requirements while adapting to large-scale dataset challenges, offering substantial practical value, especially in applications requiring rapid deployment and efficient training. The complete methodological process is illustrated in Figure 3.

4.1. FGSM Adversarial Training

Adversarial training serves as a method to improve model generalization by incorporating adversarial samples into the training dataset. This approach compels the model to learn from these challenging examples, thereby improving its ability to defend against adversarial perturbations. The Fast Gradient Sign Method (FGSM) represents one of the most widely used techniques to generate adversarial samples, and FGSM adversarial training constitutes a training methodology that employs FGSM to generate adversarial samples and incorporate them into the training set [33].
FGSM is an algorithm that efficiently generates adversarial perturbations by computing the gradient of the loss function with respect to the input. The fundamental principle involves applying a small perturbation along the direction of the loss function’s gradient to input samples, thereby causing the model to produce erroneous predictions. This perturbation is computed individually for each input sample, making it inherently sample-specific [34]. The FGSM generation process follows these steps:
Calculate the gradient: For each input sample and its corresponding label, we first compute the gradient of the loss function with respect to the input:
$$\nabla_x L(\theta, x, y),$$
where $L(\theta, x, y)$ represents the loss function with model parameters $\theta$, input sample x, and true label y. This gradient indicates the direction in which small changes to the input would most significantly increase the loss.
Generate adversarial perturbations: Using the computed gradient to generate adversarial perturbations, the key principle of FGSM involves computing the sign of the gradient (representing the direction of the gradient) and adding perturbations along that direction. The perturbation magnitude is controlled by a small constant parameter:
$$\text{adv}_{\text{sample}} = x + \epsilon\cdot\text{sign}\big(\nabla_x L(\theta, x, y)\big),$$
where $\text{sign}(\cdot)$ is the sign function that extracts the sign of each element in the gradient vector, and $\epsilon$ is the hyperparameter that controls the perturbation magnitude. This formula represents the process of applying a small perturbation to input samples along the direction of the loss function’s gradient. The sign function ensures that the perturbation moves in the direction that would maximally increase the loss, while the $\epsilon$ parameter bounds the perturbation size to maintain the adversarial sample’s similarity to the original input.
The generated adversarial samples are incorporated alongside the original samples during the training process, enabling the model to learn correct predictions when confronted with adversarially perturbed inputs. This approach enhances the model’s robustness by exposing it to challenging examples that lie near the decision boundary.
In adversarial training, the training process comprises two complementary components. First, positive sample training follows the traditional approach by utilizing original data for model training. Second, adversarial sample training incorporates adversarial samples generated using FGSM into the training data. During each training step, a batch of data is selected from the training set, where each sample consists of an input x and its corresponding label y. FGSM is then applied to generate adversarial perturbations for each sample, producing the corresponding adversarial examples.
The training procedure calculates losses for both the original samples and their adversarial counterparts, combining these losses for backpropagation to update model parameters, where $L(\theta, x, y)$ represents the loss computed on the original sample and $L(\theta, \text{adv}_{\text{sample}}, y)$ denotes the loss computed on the adversarial sample:
$$L_{\text{total}} = \frac{1}{2}\Big(L(\theta, x, y) + L(\theta, \text{adv}_{\text{sample}}, y)\Big),$$
where this combined loss function ensures that the model simultaneously learns to make correct predictions on both normal and adversarially perturbed inputs.
For all hyperparameters, unless otherwise specified, we adopt the following settings in all experiments. FGSM perturbation magnitude $\epsilon$: we search over $\epsilon \in \{1/255, 2/255, 4/255, 8/255\}$ for image inputs (pixel range $[0, 1]$) and report the results for the selected value in each experiment; the default is $\epsilon = 8/255$. Loss mixing coefficient $\lambda$: we weight the clean and adversarial losses as $L_{\text{total}} = (1-\lambda)\,L(\theta, x, y) + \lambda\,L(\theta, \text{adv}_{\text{sample}}, y)$, tune $\lambda$ over $\{0.3, 0.5, 0.7\}$, and use $\lambda = 0.5$ by default (corresponding to Equation (14)). Number of neighbors k: for modules that require neighbor retrieval (e.g., k-NN consistency regularization or neighborhood-based augmentation, where applicable in our pipeline), we select k from $\{5, 10, 20\}$ with a default of $k = 10$.
FGSM adversarial training constitutes an effective method for improving model robustness. By generating adversarial samples and incorporating them into the training data, this approach enhances the model’s ability to adapt to input perturbations, enabling the model to maintain performance when confronted with adversarial examples during inference.
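A minimal PyTorch sketch of the FGSM adversarial training step described above, assuming image inputs in [0, 1] and the defaults reported in this section ($\epsilon = 8/255$, equal loss weighting); the function name and the clamping convention are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_training_step(model, optimizer, x, y, epsilon=8/255, lam_adv=0.5):
    """One combined clean + FGSM adversarial update (sketch of Eqs. (12)-(14))."""
    # FGSM: perturb the input along the sign of the input gradient of the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_for_grad = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss_for_grad, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()  # keep a valid pixel range

    # Combined objective: weighted clean loss and adversarial loss (Eq. (14) with 0.5/0.5).
    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(x), y)
    loss_adv = F.cross_entropy(model(x_adv), y)
    loss = (1 - lam_adv) * loss_clean + lam_adv * loss_adv
    loss.backward()
    optimizer.step()
    return loss.item()
```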

4.2. Adversarial Training for Sample Selection

This method combines active learning and adversarial training to enhance model stability and performance through the following process:
First, the trained model performs forward propagation to extract feature representations from the final layer for each sample, as expressed in Equation (3). We then select the most representative samples by computing pairwise similarity using a combined Manhattan and Euclidean distance metric, as shown in Equation (6). This combination leverages the strengths of both metrics to achieve stable similarity calculations, particularly for features with varying scales.
After identifying the n most similar samples through inter-sample similarity calculations, we merge them using the MixUp fusion method to generate new training instances that enhance model generalization, as shown in Equation (7). The model evaluates classification confidence and returns the corresponding index list for active learning selection.
Following sample selection by the boundary data feature fusion algorithm, the original images undergo forward propagation through the neural network to obtain prediction results via the fully connected layer. FGSM then calculates the gradient of the loss function with respect to the input image through backpropagation, revealing how each pixel should be modified to maximize the loss. Larger gradient magnitudes indicate greater pixel impact on the loss function.
The perturbation is computed and adversarial samples are generated according to
$$\delta = \epsilon\cdot\text{sign}\big(\nabla_x L(\theta, x, y)\big),$$
where $\epsilon$ represents the perturbation step size, $\nabla_x L(\theta, x, y)$ denotes the gradient of the loss function with respect to input sample x, $L(\theta, x, y)$ is the model’s loss function, and $\text{sign}(\cdot)$ is the sign function. The parameter $\epsilon$ determines the perturbation magnitude: smaller values (e.g., $\epsilon = 0.03$) are used for simpler tasks like MNIST or SVHN, while larger values (e.g., $\epsilon = 0.1$) are selected for complex tasks like CIFAR-10. Larger perturbations make training for robust performance more challenging.
The calculated perturbations are added to the original samples to generate adversarial samples:
$$H = x + \delta,$$
where H represents the adversarial sample resulting from adding the perturbation δ to the original input x.
Algorithm overview: We work with a model $f_\theta$ that iteratively improves using an unlabeled pool U and a labeled set L. For each $x \in U$, we extract a feature embedding $F_{\text{conv}}(x) \in \mathbb{R}^d$ from the model’s penultimate layer and measure pairwise similarity using the combined distance $D(x_i, x_j) = \|F_{\text{conv}}(x_i) - F_{\text{conv}}(x_j)\|_1 + \|F_{\text{conv}}(x_i) - F_{\text{conv}}(x_j)\|_2$. Each example x has a neighborhood $N(x)$ consisting of its top-n most similar peers under D. We synthesize interpolated examples $\hat{x} = \lambda x + (1-\lambda)x'$ using MixUp with $\lambda \sim \text{Beta}(\alpha, \alpha)$ and $x' \in N(x)$ to probe the decision boundary and calibrate uncertainty.
Based on uncertainty $u(x; f_\theta)$, we select the B most uncertain samples to form batch $X_B$ for labeling, obtaining labels $Y_B$. For each selected sample $x \in X_B$, we create an adversarial counterpart using FGSM: we form a pseudo-label $y(x)$ from the model’s current prediction, compute the input gradient $g(x) = \nabla_x L(f_\theta(x), y(x))$, and craft a perturbation $\delta(x) = \varepsilon\cdot\text{sign}(g(x))$, producing the adversarial example $H(x) = x + \delta(x)$ (clipped to the valid input range). The labeled set is augmented with both clean and adversarial pairs $\{(x, y), (H(x), y)\}$ for $x \in X_B$, $y \in Y_B$.
Training minimizes the combined objective $L_{\text{total}} = (1-\lambda_{\text{adv}})\,L_{\text{orig}} + \lambda_{\text{adv}}\,L_{\text{adv}}$, where $\lambda_{\text{adv}}$ balances clean accuracy and robustness, and $\varepsilon$ controls perturbation strength. This cycle—feature extraction, neighborhood identification, uncertainty-based selection, and adversarial augmentation—repeats until convergence. By selecting samples that are uncertain and lie in dense feature regions, then training on their adversarial variants, the algorithm enhances model robustness while reducing labeling costs, as shown in Algorithm 2.
Algorithm 2 Distance-Measured Data Mixing with Adversarial Training (DM2-AT)
1: Input: Model $f_\theta$, unlabeled pool U, labeled pool L, batch size B, neighbor count n, MixUp parameter $\alpha$, FGSM step size $\epsilon$
2: Output: Trained model $f_\theta$
3: Begin:
4:   while model has not converged do
5:     Extract features $F_{\text{conv}}(x_i)$ for all $x_i \in U$.
6:     For each $x_i \in U$, find its top-n neighbors $N(x_i)$ using the combined L1+L2 distance.
7:     Generate a synthetic set $\hat{X}$ via MixUp on pairs from U and their neighbors.
8:     Score $\hat{X}$ with model uncertainty to select the B most uncertain original samples $X_B \subseteq U$.
9:     for each $x_i \in X_B$ do
10:      $g_i \leftarrow \nabla_x L(f_\theta(x_i), y_i)$   // $y_i$ is the model’s predicted label
11:      $H_i \leftarrow x_i + \epsilon\cdot\text{sign}(g_i)$
12:    end for
13:    Query true labels $Y_B$ for the selected samples $X_B$.
14:    Update $L \leftarrow L \cup \{(X_B, Y_B), (H_B, Y_B)\}$ and $U \leftarrow U \setminus X_B$.
15:    Retrain $f_\theta$ on L using a combined loss for original and adversarial samples.
16:  end while
17: Return $f_\theta$
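The FGSM step inside Algorithm 2 (steps 9–12) and the pool update (step 14) can be sketched as follows, assuming image inputs in [0, 1]; pseudo-labels come from the model's current predictions as described in Section 4.2, and the variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def adversarial_counterparts(model, x_batch, epsilon=8/255):
    """FGSM counterparts for a selected batch (sketch of Algorithm 2, steps 9-12)."""
    x = x_batch.clone().detach().requires_grad_(True)
    logits = model(x)
    pseudo_y = logits.argmax(dim=1)                  # y_i: the model's predicted label
    loss = F.cross_entropy(logits, pseudo_y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()  # H_i = x_i + eps * sign(g_i)

# Pool update (Algorithm 2, step 14): after querying true labels y_selected for
# x_selected, both clean and adversarial pairs enter the labeled pool.
# labeled_x = torch.cat([labeled_x, x_selected, adversarial_counterparts(model, x_selected)])
# labeled_y = torch.cat([labeled_y, y_selected, y_selected])
```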

5. Theoretical Analysis

5.1. Notation and Setup

Let $U = \{X_1, \ldots, X_m\}$ denote the unlabeled pool and L the labeled pool. A model $f_\theta$ with parameters $\theta$ produces class probabilities $\hat{Y}_i = f_\theta(X_i)$ and is trained by minimizing the cross-entropy loss in (1), with optimal parameters $\theta^*$ given by (2). Feature embeddings are extracted by $U_i = F_{\text{Conv}}(X_i) \in \mathbb{R}^d$ as in (3). Distances are measured by $d_E$ and $d_M$ in (4)–(5) and combined as $d_c$ in (6). For each anchor $X_i$, a neighbor set $N_i$ is defined as the indices of the n nearest neighbors under $d_c$. Mixed inputs are formed at the data level by
$$\hat{X}_i = \lambda X_i + (1-\lambda)\sum_{j\in N_i} X_j \quad\text{with } \lambda \in [0, 1],$$
as in (7). The model outputs logits $P_i = f_\theta(\hat{X}_i)$, which are mapped to probabilities via softmax (9), and the confidence is $C_i = \max(\text{softmax}(P_i))$ in (10). The acquisition set comprises the indices of the n lowest-confidence samples, cf. (11).

5.2. Geometric Rationale for Distance-Weighted Mixing

We analyze the effect of DM2 on two axes crucial for active learning: (i) uncertainty exposure at decision boundaries; (ii) diversity preservation through local neighborhood mixing.
We work under standard conditions often met in deep representation spaces: (A1) The embedding $F_{\text{Conv}}$ is locally Lipschitz: $\|F_{\text{Conv}}(X) - F_{\text{Conv}}(X')\| \le L\,\|X - X'\|$ for some $L > 0$. (A2) The classifier head of $f_\theta$ is $L_f$-Lipschitz in input space on compact domains. (A3) Nearby points under $d_c$ have high label correlation: there exists $\eta \in [0, 1)$ such that for $j \in N_i$, $P[Y_j \ne Y_i] \le \eta$; equivalently, neighborhoods are label-homogeneous with bounded noise. (A4) Calibration around the decision boundary: near regions where class posteriors are close (small margin), the confidence $\max_c \hat{y}_{ic}$ decreases monotonically with the distance to the margin hyper-surface.
Assumptions (A1)–(A3) capture that $d_c$ is a surrogate for semantic proximity, while (A4) links geometric proximity to predictive uncertainty.
Consider a first-order expansion of $f_\theta$ w.r.t. the input:
$$f_\theta(\hat{X}_i) = f_\theta\!\left(\lambda X_i + (1-\lambda)\sum_{j\in N_i} X_j\right) \approx \lambda f_\theta(X_i) + (1-\lambda)\sum_{j\in N_i} f_\theta(X_j) + \varepsilon_i,$$
with a remainder term $\|\varepsilon_i\| \le \frac{1}{2}L_f\left\|\lambda X_i + (1-\lambda)\sum_{j\in N_i} X_j - X_i\right\|^2$ by (A2). Thus, to first order, the logits on the mixed input approximate an average of neighbor logits. When $N_i$ is label-homogeneous, the average logit sharpens the predicted class; when $N_i$ straddles a class boundary, the average logit becomes ambiguous, lowering confidence.

5.3. Uncertainty Amplification Near Class Boundaries

Define the pointwise margin for logits P(x) as
$$\gamma(x) \triangleq P_{(1)}(x) - P_{(2)}(x),$$
the gap between the top-two logits. By softmax monotonicity, smaller $\gamma(x)$ implies lower confidence C(x).
Lemma 1.
Margin reduction under heterogeneous neighborhoods. Let $X_i$ have neighbors $N_i$ with class proportions $\pi_c$ ($\sum_c \pi_c = 1$), and suppose $f_\theta$ is locally linear around $\{X_i\}\cup\{X_j : j\in N_i\}$. Then, for the mixed input $\hat{X}_i$ with weight $\lambda \in (0, 1)$,
$$\gamma(\hat{X}_i) \le \lambda\,\gamma(X_i) + (1-\lambda)\,\Delta_i,$$
where $\Delta_i$ is the top-two logit gap of the neighbor-averaged prediction $\bar{P}_i = \frac{1}{|N_i|}\sum_{j\in N_i} P(X_j)$. If $N_i$ spans multiple classes so that $\bar{P}_i$ is class-ambiguous, then $\Delta_i$ is small, and hence $\gamma(\hat{X}_i) \lesssim \lambda\,\gamma(X_i)$, yielding $C_i$ reduced relative to $C(X_i)$.
Proof. 
Local linearity yields $P(\hat{X}_i) \approx \lambda P(X_i) + (1-\lambda)\bar{P}_i$. Let $a(\cdot)$ denote the top-two logit gap (an affine functional restricted to the two dominant coordinates). Then $a(\lambda p + (1-\lambda)\bar{p}) = \lambda a(p) + (1-\lambda)a(\bar{p})$ for any logits $p, \bar{p}$ sharing the same top-two ordering; otherwise, the gap cannot increase beyond the convex combination by the triangle inequality. Hence, $\gamma(\hat{X}_i) \le \lambda\,\gamma(X_i) + (1-\lambda)\,\Delta_i$. If neighbors are heterogeneous, $\Delta_i$ is small due to averaging conflicting logits, which lowers $\gamma(\hat{X}_i)$ and therefore $C_i$ by softmax monotonicity in the gap. □
Samples whose neighbor sets N i cross decision boundaries are systematically assigned lower confidence after mixing and are thus prioritized by DM2. This aligns selection with true boundary regions where labels are most informative for reducing model uncertainty.
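For intuition, consider a small worked example with illustrative numbers (not taken from the paper): a two-class anchor with logits $P(X_i) = (3.0,\ 1.0)$, a heterogeneous neighbor average $\bar{P}_i = (1.1,\ 1.0)$, and $\lambda = 0.5$. Then
$$\gamma(X_i) = 2.0, \qquad \Delta_i = 0.1,$$
$$P(\hat{X}_i) \approx \tfrac{1}{2}P(X_i) + \tfrac{1}{2}\bar{P}_i = (2.05,\ 1.0),$$
$$\gamma(\hat{X}_i) \approx 1.05 = \tfrac{1}{2}\gamma(X_i) + \tfrac{1}{2}\Delta_i < \gamma(X_i) = 2.0,$$
so the mixed input sits closer to the decision boundary and receives a lower confidence score, which is exactly the behavior Lemma 1 exploits.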

5.4. Diversity Preservation via Distance Coupling

Let G be the k-NN graph on $\{U_i\}$ under $d_c$. DM2 forms mixed inputs anchored at many distinct nodes together with their local neighborhoods. If the acquisition selects the n lowest-confidence anchors after mixing, these anchors tend to be located on edges or cuts of G that cross clusters. Under mild clusterability:
(A5) The embedding decomposes into r well-separated clusters $\{C_1, \ldots, C_r\}$ with inter-cluster distances larger than intra-cluster distances under $d_c$.
Then, boundary regions appear around each cut $(C_a, C_b)$; mixed inputs that pool neighbors from both $C_a$ and $C_b$ have reduced confidence near each cut. Consequently, the n lowest-confidence anchors are spread across multiple cuts, promoting diversity without explicit diversity regularizers.

5.5. Stability of Mixed Confidence Under Neighbor Noise

Consider neighbor noise: a fraction $\rho$ of $N_i$ are erroneous neighbors (e.g., misembedded or outliers). Let $P_i^{\text{true}}$ be the average logits over true semantic neighbors and $P_i^{\text{noise}}$ over noisy neighbors. Then
$$P(\hat{X}_i) \approx \lambda P(X_i) + (1-\lambda)\left[(1-\rho)P_i^{\text{true}} + \rho P_i^{\text{noise}}\right].$$
If $\|P_i^{\text{noise}} - P_i^{\text{true}}\| \le \delta$ (bounded contamination), the perturbation to the logits is at most $(1-\lambda)\rho\delta$, so the induced confidence change satisfies
$$\left|C(\hat{X}_i) - \tilde{C}(\hat{X}_i)\right| \le L_{\text{sm}}\,(1-\lambda)\,\rho\,\delta,$$
where $L_{\text{sm}}$ is the Lipschitz constant of the softmax–max operator. Thus, DM2 confidence is robust to small neighbor noise for moderate $\lambda$.

5.6. Choice of the Combined Distance d c

The combined metric $d_c = \frac{1}{2}(d_E + d_M)$ inherits the following:
(i) Metric property: Since $d_E$ and $d_M$ are metrics on $\mathbb{R}^d$, any positive weighted sum is a metric. Hence $d_c$ satisfies non-negativity, symmetry, and the triangle inequality.
(ii) Sensitivity balance: $d_E$ is sensitive to dense directions, while $d_M$ is robust to sparse, axis-aligned deviations. Averaging thus mitigates anisotropy and promotes stable neighbor sets in heterogeneous embeddings.
$d_c$ defined by (6) is a metric on $\mathbb{R}^d$.
Proof. 
For all $x, y, z \in \mathbb{R}^d$: non-negativity and identity of indiscernibles follow from those of $d_E$ and $d_M$. Symmetry is immediate. For the triangle inequality,
$$d_c(x, z) = \frac{1}{2}\big[d_E(x, z) + d_M(x, z)\big] \le \frac{1}{2}\big[d_E(x, y) + d_E(y, z) + d_M(x, y) + d_M(y, z)\big] = d_c(x, y) + d_c(y, z). \;\square$$

5.7. Acquisition Optimality Under a Localized Fisher Criterion

Let $\Sigma(x)$ denote the conditional Fisher information of $f_\theta$ at input x with respect to the parameters $\theta$ (under the model distribution). For classification with softmax outputs, points near the decision boundary tend to have a larger Fisher trace $\text{tr}\,\Sigma(x)$, which correlates with higher expected gradient magnitude.
Define the mixed-point Fisher score
$$\Phi_i(\lambda) \triangleq \mathbb{E}\!\left[\left\|\nabla_\theta\,\mathcal{L}\big(Y, f_\theta(\hat{X}_i)\big)\right\|^2 \,\Big|\, X_i, N_i\right] \approx \text{tr}\,\Sigma(\hat{X}_i).$$
Under (A1)–(A4) and local linearization, if $N_i$ is heterogeneous, $\hat{X}_i$ approaches the boundary and $\text{tr}\,\Sigma(\hat{X}_i)$ increases. Therefore, selecting minimum-confidence $\hat{X}_i$ approximately maximizes $\Phi_i(\lambda)$ among anchors, aligning DM2 with a proxy of information gain.
Theorem 1.
Informative selection under DM2. Suppose (A1)–(A5) hold and that $f_\theta$ is locally linear in a neighborhood containing $\{X_i\}\cup\{X_j : j\in N_i\}$. Then, for any fixed $\lambda \in (0, 1)$, ranking anchors $X_i$ by ascending confidence $C_i$ on mixed inputs $\hat{X}_i$ is equivalent to ranking by a non-increasing function of the margin $\gamma(\hat{X}_i)$ and thus, up to a monotone transform, by $\text{tr}\,\Sigma(\hat{X}_i)$. Consequently, the DM2 acquisition set approximates a maximizer of the localized Fisher score among anchors, favoring boundary-spanning, diverse regions of the data manifold.
Proof sketch. 
Softmax confidence is a monotone function of the logit gap $\gamma(\hat{X}_i)$; hence, ordering by $C_i$ equals ordering by $\gamma(\hat{X}_i)$. Under local linearization and (A4), smaller $\gamma(\hat{X}_i)$ implies proximity to the decision boundary, where the Fisher information increases for multinomial logistic models. Thus, ranking by $C_i$ approximates ranking by $\text{tr}\,\Sigma(\hat{X}_i)$. Cluster separation (A5) ensures that anchors selected across different cuts yield coverage of multiple boundary regions (diversity). □

5.8. On the Mixing Coefficient λ

The coefficient λ trades off anchor faithfulness and boundary probing.
If $\lambda \to 1$, $\hat{X}_i \to X_i$, recovering standard uncertainty sampling. If $\lambda \to 0$, $\hat{X}_i$ collapses to neighbor averages, which may over-smooth and obscure fine boundaries. Under (A3), there exists an interval $\Lambda \subset (0, 1)$ such that for $\lambda \in \Lambda$, heterogeneous neighborhoods strictly reduce $\gamma(\hat{X}_i)$ relative to $\gamma(X_i)$ while homogeneous neighborhoods preserve or increase it. Therefore, DM2 self-selects anchors with heterogeneous $N_i$.
Existence of a beneficial mixing range. Assume there exist anchors with $\gamma(X_i) > 0$ and heterogeneous $N_i$ such that $\Delta_i < \gamma(X_i)$ in Lemma 1. Then, for any $\lambda \in (0, 1)$,
$$\gamma(\hat{X}_i) \le \lambda\,\gamma(X_i) + (1-\lambda)\,\Delta_i < \gamma(X_i).$$
Thus, confidence strictly decreases for such anchors; conversely, if $N_i$ is homogeneous with a large margin, confidence is non-decreasing for $\lambda$ near 1.
Proof. 
Immediate from Lemma 1 and the strict inequality $\Delta_i < \gamma(X_i)$. □

5.9. Considerations and Summary

Let $m = |U|$ and d be the feature dimension. Computing pairwise $d_c$ naively costs $O(m^2 d)$; approximate k-NN reduces this to near-linear time in m. Mixing and forward passes scale as $O(mn\cdot C_f)$, where $n = |N_i|$ and $C_f$ is the model inference cost. Hence, with approximate neighbors and mini-batched evaluation, DM2 scales to large pools.
Mixing within $d_c$-based neighborhoods yields mixed inputs whose logits approximate convex combinations of neighbor logits. Heterogeneous neighborhoods reduce the logit margin and thus confidence, preferentially surfacing boundary samples for labeling. The acquisition is robust to moderate neighbor noise and approximates selection by localized Fisher information. The combined distance $d_c$ is a proper metric that balances Euclidean and Manhattan sensitivities, stabilizing neighbor selection.

6. Experimental Results

We evaluate our method against state-of-the-art and baseline active learning approaches, including random selection, least confidence selection, and entropy sampling. Our approach is validated on both balanced and imbalanced image classification tasks using MobileNet [21] architectures. Experiments employ 7 to 10 active learning cycles with labeling budgets ranging from 20 to over 2500 samples per cycle. Data selections follow standard active learning practices without replacement, and all results are averaged over 5 runs. All experiments are implemented using PyTorch 2.2 [35].
For MNIST [36], we employ a CNN classification model with the Adam optimizer [37] at learning rate $10^{-3}$ and batch size 96, training for 50 epochs per cycle. For CIFAR-10, CIFAR-10s [38], CIFAR-10C [39], and SVHN [40], we use MobileNet [21] with the SGD optimizer [41], initial learning rate 0.1, batch size 128, momentum 0.9, and weight decay $5\times10^{-4}$. Training proceeds for 200 epochs with learning rate decay to 0.01 at epoch 160. We compare against established baselines including Random Selection [3], Entropy [22], Least Confidence [5], Margin [6], BALD [4], CoreSet [12], EntropyBayesian [7], UncertainGCN [42], BADGE [19], ProbCover [28], Alpha-Mix [1], and NoiseStability [32], using identical parameters for fair comparison.
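For reference, the CIFAR-10/SVHN training loop described above can be sketched as follows; the hyperparameters are the ones reported in this section, while the model and data loader are simple placeholders rather than the exact experimental code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholders: the paper trains MobileNet on CIFAR-10/SVHN; a toy model and a
# single random batch stand in here so the sketch runs end to end.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
train_loader = [(torch.randn(128, 3, 32, 32), torch.randint(0, 10, (128,)))]  # batch size 128

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# Learning rate decays from 0.1 to 0.01 at epoch 160 of 200, as reported.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[160], gamma=0.1)

for epoch in range(200):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
    scheduler.step()
```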
Table 1 demonstrates that our method outperforms all other active learning approaches across most datasets. The DM2 method achieves superior performance compared to all baseline active learning methods on all datasets except MNIST. For the MNIST dataset, the simplicity and limited data volume result in the boundary-based selection model failing to learn useful information more rapidly than simpler selection strategies. The excellent performance on the SVHN dataset demonstrates that the DM2 method exhibits strong stability when handling imbalanced datasets. This robustness likely stems from the method’s design, which avoids data imbalance issues that can negatively impact auxiliary model training and subsequently degrade task model performance. In experiments with CIFAR-10s containing noisy data, the DM2 method successfully identifies samples conducive to model learning and demonstrates superior noise stability compared to competing approaches.
To further validate our method’s robustness, we conducted additional experiments using CIFAR-10s and SVHN datasets with ResNet18 [43] and VGG16 [44] architectures. Standard data augmentation techniques were applied during training, including random horizontal flipping and cropping. As shown in Table 2, our method consistently outperforms all other active learning approaches across these more complex architectures.

6.1. Robustness for Adversarial Training

The comparative experimental results for different active learning strategies are presented in Table 3. The results demonstrate that our method incorporating adversarial training achieves higher model accuracy compared to other active learning approaches, indicating that the boundary data feature fusion algorithm with adversarial training proposed in this work effectively improves model performance.
The experiments also reveal that conventional active learning methods often struggle to enhance performance on test sets in complex environments. Traditional active learning approaches learn only from clean datasets and exhibit reduced effectiveness when recognizing data in challenging conditions. By integrating adversarial training with the method from Section 4, our approach generates adversarial samples based on selected data, enhancing the model’s ability to learn from difficult-to-classify samples. This enables more effective integration and learning from data in complex environments, thereby improving overall performance.
The experimental results in Table 3 show that the DM2-AT method proposed in this work significantly outperforms competing methods across different models and datasets, confirming the effectiveness of our approach. The method increases learning challenges by generating adversarial samples through adversarial training on selected samples. When facing real-world scenarios with complex environments, this approach demonstrates stronger capability in identifying samples under noise interference, efficiently reducing the data requirements for building machine learning models while achieving superior performance.

6.2. Convergence Analysis

To analyze the performance trends of our method across different data selection cycles, we plotted convergence graphs based on the results from Table 1 and Table 2. Figure 4 presents trend plots for six different datasets and model combinations. All experimental values are averaged over three runs using consistent learning rates and parameters across all methods for fair comparison.
Figure 4 is organized into two groups: the upper panels show results for DM2, while the lower panels display results for DM2-AT. The method demonstrates gradual improvement across subsequent epochs, particularly in CIFAR-10 experiments using MobileNet and VGG16 architectures, where our approach surpasses competing methods. For CIFAR-10s, the model quickly learns to select more relevant data, ultimately outperforming all alternatives. On SVHN, our method demonstrates clear superiority by the 6th round, consistently outperforming other approaches throughout the remaining cycles.
Our method exhibits significant advantages across multiple datasets and model architectures, demonstrating broad applicability. In contrast, competing methods show less adaptability and inconsistent performance. Panel (e) reveals that the CIFAR-10C dataset presents challenges, with the trend chart showing some fluctuations in data selection performance. In panel (f), experiments using CIFAR-10C with ResNet34 show smoother progression and higher performance compared to MobileNet.
The ResNet18 model trained on SVHN-C in panel (h) exhibits favorable convergence trends. Except for the 3000th iteration where DM2-AT did not surpass several competing methods, it achieved excellent results across all other iterations. In panel (g), SVHN-C initially presents challenges during early training. However, after processing 5000 data points, the model rapidly identifies samples with stronger feature information, leading to sharp performance improvements and ultimately achieving superior results. These findings demonstrate the effectiveness of our active learning approach that fuses features from adversarial training boundary data.

6.3. Time Efficiency

The computational efficiency of active learning methods depends on the cost of sample distance calculations and subset selection procedures, as presented in Table 4. We evaluated the time efficiency of several effective methods using identical hyperparameters from our experiments. Our methods demonstrate computational efficiency comparable to state-of-the-art algorithms and exhibit favorable scalability as the annotation budget or number of categories increases.

6.4. Ablation Study

The active learning method proposed in this work incorporates two key components: boundary data feature fusion and adversarial training. Since removing adversarial training yields a method similar to that in Section 3, we conduct ablation experiments focusing on sample distances, fusion ratios, and perturbation values.
To assess method validity, we replace components in the ablation study: using Euclidean distance instead of the combined Euclidean and Manhattan distance, employing equal-weight fusion instead of adaptive fusion ratios, and using a fixed perturbation value of 0.05 instead of adaptive values. Table 5 presents the ablation results, demonstrating that adversarial training significantly enhances fault tolerance and robustness.
According to the experimental results in Table 5, replacing the combined distance metric with Euclidean distance alone struggles to accurately capture data distributions across different datasets, frequently leading to model confusion and suboptimal performance. Ablation experiments using equal-weight fusion show that uniform fusion causes merged features to lose distinctiveness, resulting in deteriorated model recognition performance.
The ablation study confirms that the adversarial training-based boundary data feature fusion algorithm enhances sample efficiency in active learning while improving the model’s generalization capability across diverse datasets and challenging conditions.

7. Conclusions

Active learning represents a prominent research direction for deep neural networks, enabling efficient model training with reduced sample requirements. We propose a simple yet stable method that exploits inter-sample relationships and data distribution characteristics. Through uncertainty prediction based on similarity measures and weighted mixing strategies, our approach demonstrates superior performance in both theoretical analysis and experimental evaluation across multiple tasks. The integration of adversarial training with boundary data feature fusion further enhances model robustness and generalization capability in complex environments.
Future work will focus on more challenging scenarios where the computational efficiency and cost reduction benefits of active learning become increasingly significant. We aim to extend our approach to handle larger-scale datasets and more complex domain adaptation problems, where traditional supervised learning approaches face substantial annotation costs and computational constraints.

Author Contributions

Conceptualization, S.S.; Methodology, X.W.; Software, X.W. and S.D.; Validation, X.W.; Formal analysis, S.S.; Investigation, X.W. and J.J.; Data curation, X.W. and S.D.; Writing—original draft, S.S.; Writing—review & editing, S.S.; Visualization, S.D.; Supervision, J.J.; Project administration, J.J.; Funding acquisition, J.J. All authors have read and agreed to the published version of the manuscript.

Funding

This paper was supported by Natural Science Foundation of Jilin Province (Grant No. YDZJ202401610ZYTS), Shenzhen Science and Technology Program (Grant No. JCYJ20250604145014018) and Natural Science Foundation of Top Talent of SZTU (Grant No. GDRC202413).

Data Availability Statement

The data presented in this study are openly available in [CIFAR-10] [https://www.cs.utoronto.ca/~kriz/learning-features-2009-TR.pdf] (accessed on 1 October 2025) [20].

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Parvaneh, A.; Abbasnejad, E.; Teney, D.; Haffari, G.R.; Van Den Hengel, A.; Shi, J.Q. Active learning by feature mixing. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12237–12246. [Google Scholar]
  2. Munjal, P.; Hayat, N.; Hayat, M.; Sourati, J.; Khan, S. Towards robust and reproducible active learning using neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 223–232. [Google Scholar]
  3. Settles, B. Active Learning Literature Survey. Technical Report. 2009. Available online: https://minds.wisconsin.edu/handle/1793/60660 (accessed on 1 January 2024).
  4. Gal, Y.; Islam, R.; Ghahramani, Z. Deep bayesian active learning with image data. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1183–1192. [Google Scholar]
  5. Lewis, D.D. A sequential algorithm for training text classifiers: Corrigendum and additional data. In Proceedings of the ACM SIGIR Forum, New York, NY, USA, 3–6 July 1994; Volume 29, pp. 13–19. [Google Scholar]
  6. Kremer, J.; Steenstrup Pedersen, K.; Igel, C. Active learning with support vector machines. WIREs Data Min. Knowl. Discov. 2014, 4, 313–326. [Google Scholar] [CrossRef]
  7. Gal, Y.; Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In Proceedings of the International Conference on Machine Learning, PMLR, New York, NY, USA, 20–22 June 2016; pp. 1050–1059. [Google Scholar]
  8. Freund, Y.; Seung, H.S.; Shamir, E.; Tishby, N. Selective sampling using the query by committee algorithm. Mach. Learn. 1997, 28, 133–168. [Google Scholar] [CrossRef]
  9. Gorriz, M.; Carlier, A.; Faure, E.; Giro-i Nieto, X. Cost-effective active learning for melanoma segmentation. arXiv 2017, arXiv:1711.09168. [Google Scholar] [CrossRef]
  10. Ducoffe, M.; Precioso, F. Adversarial active learning for deep networks: A margin based approach. arXiv 2018, arXiv:1802.09841. [Google Scholar] [CrossRef]
  11. Mayer, C.; Timofte, R. Adversarial sampling for active learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 3071–3079. [Google Scholar]
  12. Sener, O.; Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv 2017, arXiv:1708.00489. [Google Scholar]
  13. Caramalau, R.; Bhattarai, B.; Kim, T.K. Sequential graph convolutional network for active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 9583–9592. [Google Scholar]
  14. Yoo, D.; Kweon, I.S. Learning loss for active learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 93–102. [Google Scholar]
  15. Liu, Z.; Ding, H.; Zhong, H.; Li, W.; Dai, J.; He, C. Influence selection for active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 9274–9283. [Google Scholar]
  16. Wang, T.; Li, X.; Yang, P.; Hu, G.; Zeng, X.; Huang, S.; Xu, C.Z.; Xu, M. Boosting active learning via improving test performance. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–1 March 2022; Volume 36, pp. 8566–8574. [Google Scholar]
  17. Koh, P.W.; Liang, P. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning, PMLR, Sydney, Australia, 6–11 August 2017; pp. 1885–1894. [Google Scholar]
  18. Sinha, S.; Ebrahimi, S.; Darrell, T. Variational adversarial active learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5972–5981. [Google Scholar]
  19. Ash, J.T.; Zhang, C.; Krishnamurthy, A.; Langford, J.; Agarwal, A. Deep batch active learning by diverse, uncertain gradient lower bounds. arXiv 2019, arXiv:1906.03671. [Google Scholar]
  20. Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images. Master’s Thesis, University of Toronto, Toronto, ON, Canada, 2009. [Google Scholar]
  21. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv 2017, arXiv:1704.04861. [Google Scholar] [CrossRef]
  22. Wang, D.; Shang, Y. A new active labeling method for deep learning. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 112–119. [Google Scholar]
  23. Kirsch, A.; van Amersfoort, J.; Gal, Y. BatchBALD: Efficient and Diverse Batch Acquisition for Deep Bayesian Active Learning. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada, 8–14 December 2019; Volume 32. Available online: https://proceedings.neurips.cc/paper_files/paper/2019/file/95323660ed2124450caaac2c46b5ed90-Paper.pdf (accessed on 1 January 2024).
  24. Houlsby, N.; Huszár, F.; Ghahramani, Z.; Lengyel, M. Bayesian active learning for classification and preference learning. arXiv 2011, arXiv:1112.5745. [Google Scholar] [CrossRef]
  25. Loquercio, A.; Segu, M.; Scaramuzza, D. A general framework for uncertainty estimation in deep learning. IEEE Robot. Autom. Lett. 2020, 5, 3153–3160. [Google Scholar] [CrossRef]
  26. Kuo, W.; Häne, C.; Yuh, E.; Mukherjee, P.; Malik, J. Cost-sensitive active learning for intracranial hemorrhage detection. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, 16–20 September 2018; Proceedings, Part III 11. Springer: Berlin/Heidelberg, Germany, 2018; pp. 715–723. [Google Scholar]
  27. Beluch, W.H.; Genewein, T.; Nürnberger, A.; Köhler, J.M. The power of ensembles for active learning in image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9368–9377. [Google Scholar]
  28. Yehuda, O.; Dekel, A.; Hacohen, G.; Weinshall, D. Active learning through a covering lens. Adv. Neural Inf. Process. Syst. 2022, 35, 22354–22367. [Google Scholar]
  29. Elhamifar, E.; Sapiro, G.; Yang, A.; Sasrty, S.S. A convex optimization framework for active learning. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 209–216. [Google Scholar]
  30. Hasan, M.; Roy-Chowdhury, A.K. Context aware active learning of activity recognition models. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4543–4551. [Google Scholar]
  31. Citovsky, G.; DeSalvo, G.; Gentile, C.; Karydas, L.; Rajagopalan, A.; Rostamizadeh, A.; Kumar, S. Batch active learning at scale. Adv. Neural Inf. Process. Syst. 2021, 34, 11933–11944. [Google Scholar]
  32. Li, X.; Yang, P.; Gu, Y.; Zhan, X.; Wang, T.; Xu, M.; Xu, C. Deep active learning with noise stability. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 26–27 February 2024; Volume 38, pp. 13655–13663. [Google Scholar]
  33. Yang, C.; Wu, Q.; Li, H.; Chen, Y. Generative poisoning attack method against neural networks. arXiv 2017, arXiv:1703.01340. [Google Scholar] [CrossRef]
  34. Guo, R.; Chen, Q.; Liu, H.; Wang, W. Adversarial robustness enhancement for deep learning-based soft sensors: An adversarial training strategy using historical gradients and domain adaptation. Sensors 2024, 24, 3909. [Google Scholar] [CrossRef] [PubMed]
  35. Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic Differentiation in Pytorch. 2017. Available online: https://openreview.net/pdf?id=BJJsrmfCZ (accessed on 1 January 2024).
  36. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
  37. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  38. Wang, Z.; Qinami, K.; Karakozis, I.C.; Genova, K.; Nair, P.; Hata, K.; Russakovsky, O. Towards fairness in visual recognition: Effective strategies for bias mitigation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 8919–8928. [Google Scholar]
  39. Wang, H.; Xiao, C.; Kossaifi, J.; Yu, Z.; Anandkumar, A.; Wang, Z. Augmax: Adversarial composition of random augmentations for robust training. Adv. Neural Inf. Process. Syst. 2021, 34, 237–250. [Google Scholar]
  40. Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; Ng, A.Y. Reading digits in natural images with unsupervised feature learning. In Proceedings of the NIPS Workshop on Deep Learning and Unsupervised Feature Learning, Granada, Spain, 12–17 December 2011; Volume 2011, p. 4. [Google Scholar]
  41. Bottou, L. Large-scale machine learning with stochastic gradient descent. In Proceedings of the COMPSTAT’2010: 19th International Conference on Computational Statistics, Paris, France, 22–27 August 2010; Keynote, Invited and Contributed Papers. Springer: Berlin/Heidelberg, Germany, 2010; pp. 177–186. [Google Scholar]
  42. Zhao, X.; Chen, F.; Hu, S.; Cho, J.H. Uncertainty aware semi-supervised learning on graph data. Adv. Neural Inf. Process. Syst. 2020, 33, 12827–12836. [Google Scholar]
  43. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  44. Mittal, S. Image Classification of Satellite Using VGG16 Model. In Proceedings of the 2024 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024; pp. 401–404. [Google Scholar] [CrossRef]
Figure 1. The left side of the figure illustrates a heat map of the data distribution for the Least Confidence method, achieving 83.5% accuracy. The right side depicts the data distribution when offsetting 300 data points, resulting in 84.11% accuracy. This comparison highlights the impact of offsetting on sample diversity and model performance.
Figure 2. Overview of DM2. Unlabeled data are fed into the model to obtain feature-layer embeddings. Sample similarity distances are computed on these embeddings, and similar samples are selected for linear blending. The model is then evaluated on the mixed samples to measure prediction confidence, and the original samples corresponding to low-confidence mixtures are added to the labeled data pool.
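The selection workflow summarized in the Figure 2 caption can be expressed in a few lines of PyTorch. The sketch below is our illustration, not the authors' released code: the function name select_dm2, the nearest-labeled-neighbor pairing, and the exponential mapping from distance to mixing coefficient (controlled by alpha_scale) are assumptions chosen only to show how distance-weighted mixing and confidence-based querying fit together.

```python
import torch
import torch.nn.functional as F


def select_dm2(classifier_head, unlabeled_feats, labeled_feats, budget, alpha_scale=0.5):
    """Distance-weighted feature mixing followed by confidence-based querying.

    classifier_head: module mapping feature embeddings to class logits.
    unlabeled_feats: (N, D) embeddings of the unlabeled pool.
    labeled_feats:   (M, D) embeddings of already labeled data.
    budget:          number of samples to query.
    alpha_scale:     scale of the distance-derived mixing coefficient (assumed).
    """
    # Distance from every unlabeled embedding to its nearest labeled neighbor.
    dists = torch.cdist(unlabeled_feats, labeled_feats)      # (N, M)
    min_dist, nn_idx = dists.min(dim=1)                      # (N,), (N,)

    # Map distance to a mixing coefficient in (0, alpha_scale]:
    # closer pairs are blended more strongly (an illustrative choice).
    alpha = alpha_scale * torch.exp(-min_dist).unsqueeze(1)  # (N, 1)
    mixed = (1.0 - alpha) * unlabeled_feats + alpha * labeled_feats[nn_idx]

    # Confidence of the classifier on the mixed embeddings.
    with torch.no_grad():
        probs = F.softmax(classifier_head(mixed), dim=1)
    confidence = probs.max(dim=1).values

    # Query the originals whose mixed versions the model is least confident about.
    return torch.topk(-confidence, k=budget).indices
```

In practice, classifier_head would be the final classification layer of the trained backbone, and the returned indices identify the original (unmixed) samples to send for annotation before the next active-learning round.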
Figure 3. Overview of adversarial training for sample selection, where the input consists of near-boundary data selected by DM2 from Section 3. After generating adversarial samples, the original samples are added to the labeling pool alongside their adversarial counterparts. Through iterative selection of both original and adversarial sample sets, the model’s robustness is enhanced.
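For a concrete picture of this step, the following is a minimal PyTorch sketch of fast-gradient adversarial generation for the selected near-boundary samples. It is our illustration under stated assumptions, not the paper's exact implementation: the function name fgsm_counterparts and the epsilon = 8/255 perturbation budget are ours, and the authors' settings may differ.

```python
import torch
import torch.nn.functional as F


def fgsm_counterparts(model, images, labels, epsilon=8 / 255):
    """Generate one FGSM adversarial counterpart per selected near-boundary image.

    Returns the originals together with their perturbed copies so that both
    can be trained on jointly, as in Figure 3.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()

    # Single fast-gradient-sign step, clipped back to the valid pixel range.
    adversarial = (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()

    # Joint training batch: originals plus adversarial copies, labels reused.
    joint_x = torch.cat([images.detach(), adversarial], dim=0)
    joint_y = torch.cat([labels, labels], dim=0)
    return joint_x, joint_y
```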
Figure 4. Classification performance of different methods across various datasets and models. The plots are divided into two groups: DM2 experiments are shown in the upper panels and DM2-AT experiments in the lower panels. Our method shows consistent advantages across multiple datasets and models, demonstrating its broad applicability.
Table 1. Accuracy (%) of experimental results with different datasets.
| Methods | MNIST | CIFAR-10 | CIFAR-10s | SVHN | Avg |
|---|---|---|---|---|---|
| Entropy [22] (IJCNN14) | 92.78 ± 0.14 | 84.00 ± 0.13 | 64.00 ± 1.42 | 70.91 ± 9.01 | 77.92 |
| Margin [6] (WIREs14) | 93.42 ± 0.13 | 83.62 ± 0.07 | 63.47 ± 0.37 | 69.64 ± 4.84 | 77.53 |
| Least Confidence [5] | 92.98 ± 0.13 | 83.51 ± 0.04 | 63.89 ± 1.79 | 70.52 ± 2.25 | 77.73 |
| Random [3] | 87.78 ± 0.32 | 81.62 ± 0.19 | 62.94 ± 0.01 | 69.82 ± 3.42 | 75.54 |
| EntropyBayesian [7] (ICML16) | 92.05 ± 0.65 | 83.62 ± 0.25 | 61.82 ± 0.56 | 71.21 ± 4.13 | 77.18 |
| CoreSet [12] | 89.45 ± 1.12 | 83.04 ± 0.14 | 63.78 ± 0.72 | 71.41 ± 1.94 | 76.85 |
| UncertainGCN [42] (NIPS20) | 87.56 ± 2.47 | 83.51 ± 0.10 | 61.40 ± 4.43 | 70.65 ± 0.07 | 75.78 |
| ProbCover [28] (NIPS22) | 88.75 ± 0.68 | 81.63 ± 0.17 | 64.13 ± 1.88 | 69.04 ± 12.58 | 75.89 |
| BALD [4] (ICML17) | 92.55 ± 1.22 | 82.03 ± 0.22 | 63.62 ± 1.81 | 70.36 ± 3.12 | 77.14 |
| BADGE [19] | 92.35 ± 0.06 | 83.83 ± 0.05 | 62.46 ± 1.69 | 71.94 ± 2.35 | 77.65 |
| Alpha-Mix [1] (CVPR22) | 93.10 ± 0.46 | 84.22 ± 0.05 | 62.58 ± 0.72 | 69.43 ± 0.28 | 77.25 |
| NoiseStability [32] (AAAI24) | 92.63 ± 0.37 | 83.87 ± 0.04 | 62.55 ± 0.36 | 69.87 ± 19.85 | 77.23 |
| DM2 (Ours) | 93.15 ± 0.32 | 84.27 ± 0.01 | 64.71 ± 3.82 | 72.23 ± 2.41 | 78.59 |
Bold denotes the best result and underline denotes the second result in each column.
Table 2. Accuracy (%) of experimental results under large scale models.
| Methods | SVHN (ResNet18) | CIFAR-10 (VGG16) | Avg |
|---|---|---|---|
| Entropy [22] | 92.94 ± 0.04 | 75.97 ± 5.56 | 84.46 |
| Margin [6] | 92.99 ± 0.01 | 79.51 ± 0.24 | 86.25 |
| Least Confidence [5] | 93.02 ± 0.01 | 79.57 ± 0.18 | 86.30 |
| Random [3] | 91.32 ± 0.09 | 77.84 ± 0.08 | 84.58 |
| EntropyBayesian [7] | 91.38 ± 0.09 | 77.12 ± 3.34 | 84.27 |
| CoreSet [12] | 92.48 ± 0.01 | 77.10 ± 0.08 | 84.79 |
| UncertainGCN [42] | 91.57 ± 0.06 | 78.82 ± 0.05 | 85.20 |
| ProbCover [28] | 90.88 ± 0.19 | 77.62 ± 0.17 | 84.25 |
| BALD [4] | 92.67 ± 0.02 | 75.51 ± 1.2 | 84.09 |
| BADGE [19] | 93.04 ± 0.06 | 79.20 ± 0.76 | 86.12 |
| Alpha-Mix [1] | 92.69 ± 0.03 | 79.06 ± 0.34 | 85.88 |
| NoiseStability [32] | 92.91 ± 0.04 | 79.10 ± 0.42 | 86.01 |
| DM2 (Ours) | 93.04 ± 0.04 | 79.71 ± 0.09 | 86.38 |
Bold denotes the best result in each column.
Table 3. Comparison of accuracy results of different methods on different models (%).
| Methods | MNIST-C (CNN) | SVHN-C (MobileNet) | CIFAR10-C (MobileNet) | SVHN-C (ResNet18) | CIFAR10-C (ResNet34) |
|---|---|---|---|---|---|
| Entropy [22] | 84.74 ± 2.54 | 72.51 ± 1.25 | 67.21 ± 0.54 | 79.53 ± 0.89 | 74.25 ± 2.56 |
| Least Confidence [5] | 85.62 ± 1.67 | 73.66 ± 0.72 | 68.35 ± 1.85 | 81.20 ± 2.56 | 75.61 ± 1.87 |
| Margin [6] | 84.83 ± 1.34 | 73.24 ± 1.54 | 67.04 ± 1.78 | 79.98 ± 2.28 | 74.32 ± 1.43 |
| Random [3] | 82.61 ± 3.78 | 70.28 ± 2.66 | 63.27 ± 0.28 | 78.61 ± 3.95 | 73.01 ± 1.22 |
| EntropyBayesian [7] | 83.34 ± 1.32 | 71.64 ± 2.84 | 63.59 ± 2.72 | 78.91 ± 0.31 | 73.20 ± 0.18 |
| CoreSet [12] | 84.52 ± 2.54 | 72.91 ± 1.52 | 62.18 ± 3.36 | 79.01 ± 1.02 | 73.61 ± 2.45 |
| UncertainGCN [42] | 83.11 ± 1.98 | 71.37 ± 1.73 | 64.36 ± 1.84 | 78.23 ± 2.43 | 72.93 ± 1.95 |
| ProbCover [28] | 80.13 ± 2.75 | 70.01 ± 3.43 | 65.23 ± 0.76 | 79.54 ± 3.61 | 73.81 ± 3.68 |
| BALD [4] | 84.55 ± 1.43 | 72.29 ± 2.85 | 64.87 ± 2.05 | 79.02 ± 0.88 | 72.77 ± 2.84 |
| BADGE [19] | 85.78 ± 0.74 | 73.35 ± 1.94 | 65.79 ± 1.96 | 80.11 ± 1.96 | 74.32 ± 1.29 |
| Alpha-Mix [1] | 85.91 ± 0.32 | 73.59 ± 1.22 | 67.91 ± 2.67 | 81.32 ± 1.61 | 75.29 ± 2.71 |
| NoiseStability [32] | 85.26 ± 1.22 | 73.03 ± 2.51 | 66.52 ± 1.43 | 80.89 ± 2.27 | 73.04 ± 1.43 |
| DM2-AT (Ours) | 86.63 ± 0.91 | 74.56 ± 2.02 | 69.05 ± 1.77 | 82.02 ± 2.51 | 76.63 ± 1.82 |
Bold denotes the best result in each column.
Table 4. Running time comparison of the sampling methods.
| Method | MNIST (s) | CIFAR-10 (min) |
|---|---|---|
| BADGE | 2 | 2.07 |
| EntropyBayesian | 4 | 3.25 |
| BALD | 16 | 4.58 |
| NoiseStability | 20 | 4.21 |
| Ours | 3 | 2.07 |
Bold denotes the best result in each column.
Table 5. Ablation study: accuracy comparison results (%).
| Datasets | -Distance | -Fusion Ratio | -Perturbation | DM2-AT |
|---|---|---|---|---|
| MNIST-C (CNN) | 83.45 ± 0.97 | 84.29 ± 1.45 | 85.62 ± 0.51 | 86.63 ± 0.91 |
| CIFAR10-C (MobileNet) | 73.32 ± 1.44 | 72.24 ± 2.31 | 73.52 ± 1.32 | 74.56 ± 2.02 |
| SVHN-C (MobileNet) | 66.49 ± 1.41 | 66.22 ± 0.83 | 67.13 ± 1.42 | 69.05 ± 1.77 |
| SVHN-C (ResNet18) | 78.89 ± 1.53 | 80.35 ± 2.89 | 81.02 ± 0.84 | 82.02 ± 2.51 |
| CIFAR10-C (ResNet34) | 74.93 ± 0.71 | 75.44 ± 1.37 | 75.58 ± 2.32 | 76.63 ± 1.82 |
Bold denotes the best result in each column.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
