Article

Leveraging Asymmetric Adaptation with Dynamic Sparse LoRA for Enhanced Nuance in LLM-Based Offensive Language Detection

1 School of National Security, People's Public Security University of China, Beijing 710041, China
2 School of Police Administration, People's Public Security University of China, Beijing 461000, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1076; https://doi.org/10.3390/sym17071076
Submission received: 4 June 2025 / Revised: 22 June 2025 / Accepted: 27 June 2025 / Published: 7 July 2025

Abstract

The challenge of detecting nuanced, context-dependent offensive language highlights the need for Large Language Model (LLM) adaptation strategies that can effectively address inherent data and task asymmetries. Standard Parameter-Efficient Finetuning (PEFT) methods like Low-Rank Adaptation (LoRA), while efficient, often employ a more uniform, or symmetric, update mechanism that can be suboptimal for capturing such linguistic subtleties. In this paper, we propose Dynamic Sparse LoRA (DS-LoRA), a novel technique that leverages asymmetric adaptation to enhance LLM finetuning for nuanced offensive language detection. DS-LoRA achieves this by (1) incorporating input-dependent gating mechanisms, enabling the asymmetric modulation of LoRA module contributions based on instance-specific characteristics, and (2) promoting asymmetric sparsity within LoRA update matrices via L1 regularization. This dual asymmetric strategy empowers the model to selectively engage and refine only the most pertinent parameters for a given input, fostering a more parsimonious and contextually aware adaptation. Extensive experiments on benchmark datasets demonstrate that DS-LoRA significantly outperforms standard LoRA and other strong baselines, particularly in identifying subtle and contextually ambiguous offensive content, underscoring the benefits of its asymmetric adaptive capabilities.

1. Introduction

The landscape of data mining and machine learning, particularly in natural language processing, has been dramatically reshaped by the advent of Large Language Models (LLMs) [1,2,3]. These powerful models process vast, diverse, and often inherently asymmetric data, pushing the boundaries of automated text understanding. In this context, inherently asymmetric data refers to datasets where the distribution of crucial information is not uniform. This can manifest as severe class imbalance (e.g., far more non-offensive than offensive examples) or as a non-uniform spread of linguistic features, where the cues for a specific class are subtle and appear in only a small, unpredictable subset of instances. As we delve deeper into LLM adaptation, the intricate interplay between symmetry and asymmetry in their architecture, data, and training processes becomes increasingly apparent, presenting both opportunities and challenges for future research. Specifically, the task of detecting offensive language—encompassing hate speech, toxicity, and cyberbullying [4,5]—underscores this interplay. Offensive language itself is an asymmetric linguistic phenomenon, characterized by subtlety, indirectness, irony, and high dependence on socio-cultural context [6,7,8], making its detection a formidable challenge even for advanced LLMs.
While LLMs demonstrate remarkable capabilities [9,10], finetuning these massive models for specific downstream tasks like nuanced offensive language detection can be computationally prohibitive [11]. This has spurred the development of Parameter-Efficient Finetuning (PEFT) techniques [12,13,14]. Among these, Low-Rank Adaptation (LoRA) [15] has emerged as a popular approach, injecting trainable low-rank matrices into LLM layers to drastically reduce trainable parameters. LoRA has been successfully applied to various tasks, including instruction following [16,17].
However, standard LoRA, despite its successes, often employs a relatively symmetric adaptation strategy. It typically uses a fixed, predetermined rank for its decomposition matrices across all targeted layers and for all inputs. Furthermore, the LoRA update matrices are inherently dense within their low-rank structure. This static and uniform (or symmetric) allocation may not be optimal for tasks defined by asymmetry, such as detecting subtle offensive language. Different layers or input instances might benefit from varying degrees of adaptation [18]. For instance, overt offensive expressions might require minimal adjustment from the base LLM, while nuanced microaggressions or sarcastic abuse might necessitate substantial, fine-grained, and thus more asymmetric parameter shifts. A dense update mechanism within the low-rank projection can also lead to an inefficient use of the parameter budget when only a sparse, asymmetric subset of features truly needs adjustment.
To address these limitations and better harness the power of LLMs for asymmetrically complex tasks, we propose Dynamic Sparse LoRA (DS-LoRA). DS-LoRA is an innovative extension to LoRA that explicitly incorporates asymmetry into the finetuning process, making it tailored for capturing the subtle nuances of offensive language. It introduces two key enhancements that foster asymmetric adaptation:
  • We incorporate lightweight, learnable gating mechanisms that dynamically and asymmetrically scale the contribution of each LoRA module based on the input instance. This allows the model to “decide” how much adaptation is needed for a given piece of text, effectively treating different inputs asymmetrically.
  • We apply L1 regularization to the LoRA matrices during training. This encourages parameter-level asymmetry by promoting sparsity within the low-rank update matrices themselves, compelling the model to learn which specific low-rank components are crucial and pruning redundant ones.
By combining dynamic, input-dependent gating (introducing input-level asymmetry) with learned parameter-level sparsity (introducing structural asymmetry in the updates), DS-LoRA creates a more adaptive, parsimonious, and ultimately more effective finetuning strategy. This approach enables the LLM to make highly selective and targeted adjustments, better attuned to the specific, often asymmetric, characteristics of the input text. This capability is crucial for distinguishing subtle offensive content from benign statements, directly addressing how LLMs can leverage data asymmetry for improved representation and how model design can incorporate asymmetric elements for optimized performance. Furthermore, our analysis of the learned gate activations provides insights into the model’s internal adaptive behavior, revealing how DS-LoRA dynamically allocates capacity in response to different input types.
We evaluate DS-LoRA by finetuning recent LLMs on established benchmark datasets for offensive language detection. The experimental results demonstrate that DS-LoRA significantly outperforms standard LoRA and other strong baselines. Our contributions, reframed through the lens of symmetry and asymmetry, are threefold:
  • We propose DS-LoRA, a novel PEFT method that integrates dynamic gating and learned sparsity to achieve an asymmetric and adaptive LoRA framework, specifically designed for nuanced NLP tasks characterized by inherent data and linguistic asymmetries.
  • We demonstrate through extensive experiments that DS-LoRA significantly outperforms standard LoRA and other competitive baselines in detecting offensive language, especially in challenging cases involving asymmetrically expressed subtlety and context dependency, while maintaining or improving parameter efficiency.
  • We provide an analysis of the learned gate behaviors and sparsity patterns, offering insights into how DS-LoRA achieves its performance gains through dynamic asymmetric adaptation, contributing to a better understanding of adaptive and potentially more explainable finetuning mechanisms.

2. Related Work

The automatic detection of offensive language has been a focal point of NLP research for over a decade, driven by the need to moderate online content and mitigate online harm [19]. Early approaches, relying on lexicons and traditional machine learning models like SVMs with engineered features [20], often struggled with the implicit, context-dependent, and thus highly asymmetric nature of offensive language. These methods lacked the capacity to capture the deeper semantic understanding required to discern subtle offensive cues from benign language.
The advent of deep learning, with models like CNNs [21], RNNs [22], and attention-based architectures [23], brought significant advancements by learning hierarchical feature representations. More recently, pretrained transformer-based models such as BERT [24] and RoBERTa [25] have set new benchmarks by learning rich contextual embeddings [26,27]. However, finetuning these powerful models presents challenges, especially in adapting them to the nuanced, asymmetric variations of offensiveness without catastrophic forgetting or extensive labeled data. The inherent asymmetry of the offensive language detection task—where offensive instances might be a minority class or manifest in diverse, non-uniform ways—poses a particular challenge for adaptation strategies that treat all data or model components uniformly.
The emergence of LLMs with billions of parameters has further revolutionized the field, yet their size makes full finetuning impractical [28]. This has spurred research into PEFT techniques, which adapt LLMs by training only a small fraction of parameters. Prominent PEFT methods include adapter tuning [13,29], prompt tuning [13,30], and prefix tuning [14]. LoRA [15] posits that weight changes during adaptation have a low intrinsic rank. While significantly reducing trainable parameters, standard LoRA and many of its variants often apply updates in a relatively dense manner within the low-rank space and can employ static configurations. Variants like LoRA-FA [31] (which freezes matrix A), QLoRA [32] (which quantizes the model), and AdaLoRA [18] (which adaptively allocates rank budget) have introduced improvements. AdaLoRA, for instance, takes a step towards a more asymmetric adaptation by allocating rank based on matrix importance. However, these methods typically maintain dense LoRA matrices within their allocated ranks and may not fully exploit instance-specific dynamic adjustments or explicit parameter-level sparsity. Such approaches might be considered more symmetric in how they apply adaptation within the LoRA module once the rank is determined, potentially overlooking the benefits of a more granular, asymmetric engagement of parameters based on specific input nuances.
Recent advancements in PEFT, particularly within the LoRA family, have made significant strides. Methods like AdaLoRA [18] have introduced a level of adaptivity by allocating parameter budgets based on matrix importance, acknowledging that not all components should be treated equally. This represents a step towards structural asymmetry. Other variants have focused on quantization for efficiency (QLoRA [32]) or improving the update mechanism itself (DoRA [33]). A key finding from this body of work is that moving beyond a one-size-fits-all adaptation strategy generally yields better performance. However, a common limitation in many of these approaches is that the adaptation, once configured, is applied in a relatively static or input-agnostic manner. For instance, even with an adaptively allocated rank, the resulting update is typically applied with the same intensity regardless of whether an input is simple and straightforward or complex and nuanced. This leaves a critical research gap: the exploration of methods that can dynamically modulate their adaptation strength on a per-instance basis. For tasks characterized by high linguistic asymmetry, such as detecting subtle offensive language, a model that can “decide” how much finetuning to apply for each specific input could be significantly more effective. This motivates our work, where we will propose a framework designed explicitly to introduce this form of dynamic, input-dependent asymmetry.

3. Methodology

In this section, we first briefly review the standard LoRA technique, highlighting aspects relevant to the theme of symmetric versus asymmetric adaptation. We then introduce our proposed DS-LoRA, detailing its core components, input-dependent gating and LoRA parameter sparsification, which together enable a more asymmetric and adaptive finetuning approach. Finally, we describe the overall model architecture and training procedure.

3.1. Preliminaries: LoRA

LLMs typically consist of multiple layers, with matrix multiplications being a core operation. Full finetuning of an LLM involves updating all its weights $W_0 \in \mathbb{R}^{d \times k}$. LoRA hypothesizes that the change in weights during adaptation, $\Delta W$, has a low intrinsic rank. Therefore, LoRA freezes the pretrained weights $W_0$ and injects a trainable rank decomposition module representing $\Delta W$. Specifically, for a given layer, the update $\Delta W$ is approximated by two smaller matrices, $A \in \mathbb{R}^{r \times k}$ and $B \in \mathbb{R}^{d \times r}$, where $r \ll \min(d, k)$ is the rank of the adaptation. The forward pass of a LoRA-adapted layer becomes as shown in Equation (1):

$$h_{\text{out}} = W_0 x + \Delta W x = W_0 x + s \cdot B A x \tag{1}$$

where $x \in \mathbb{R}^k$ is the input, $h_{\text{out}} \in \mathbb{R}^d$ is the output, and $s$ is a scaling factor, often set to $\alpha / r$, where $\alpha$ is a hyperparameter. Only $A$ and $B$ are trained. While this significantly reduces the number of trainable parameters, matrices $A$ and $B$ themselves are typically dense. This means that all parameters within this low-rank adaptation are updated, representing a somewhat uniform or symmetric treatment of all components within the adaptation subspace defined by $r$. Matrix $A$ is typically initialized with a random Gaussian distribution, and $B$ is initialized to zero, so $\Delta W$ is zero at the beginning of training, ensuring that the adaptation starts from the pretrained model's state.
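To make the preceding formulation concrete, the following PyTorch sketch implements a minimal LoRA-adapted linear layer per Equation (1); the class and variable names are ours for illustration, not from an official release.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-adapted linear layer (Equation (1))."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # W_0 (and bias) stay frozen
            p.requires_grad = False
        d, k = base.out_features, base.in_features
        self.A = nn.Parameter(torch.randn(r, k) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(d, r))         # zero init => ΔW = 0
        self.scale = alpha / r                           # s = α / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h_out = W_0 x + s · B A x
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```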

3.2. DS-LoRA: Introducing Asymmetric Adaptation

Our proposed DS-LoRA enhances standard LoRA by introducing two key mechanisms designed to foster a more asymmetric and input-sensitive adaptation: (1) an input-dependent gating mechanism to dynamically control the influence of each LoRA module, leading to input-level asymmetry, and (2) L1 regularization to promote sparsity within the LoRA matrices A and B, resulting in parameter-level asymmetry. To provide a clearer picture of our method’s architectural innovations, Figure 1 visually contrasts the data flow in a standard LoRA layer with that of our proposed DS-LoRA layer, highlighting the introduction of the dynamic gate and the use of sparse update matrices.

3.2.1. Input-Dependent Gating: Asymmetric Modulation of LoRA Influence

To allow the model to adapt its LoRA contributions based on the specific input instance, thereby introducing asymmetry in how intensely adaptation is applied, we introduce a learnable gating mechanism for each LoRA module. Given an input $x$ to a LoRA-adapted layer, a small gate controller network $f_{\text{gate}}$ computes a scalar gate value $g(x) \in [0, 1]$. This gate value then modulates the output of the LoRA path, allowing for instance-specific adaptation strength.
The gate controller $f_{\text{gate}}$ is implemented as a small Multi-Layer Perceptron (MLP). For an input $x \in \mathbb{R}^k$ (which is the input to the original linear transformation $W_0 x$), the gate value is computed as in Equation (2),

$$g(x) = \sigma\big(W_{g_2} \cdot \mathrm{ReLU}(W_{g_1} x + b_{g_1}) + b_{g_2}\big) \tag{2}$$

or in a simpler form without a hidden layer if the hidden dimension of $f_{\text{gate}}$ is set to 0 (effectively making it a linear layer followed by a sigmoid) as in Equation (3),

$$g(x) = \sigma(W_g x + b_g) \tag{3}$$

where $W_{g_1}$, $b_{g_1}$, $W_{g_2}$, and $b_{g_2}$ (or just $W_g$ and $b_g$) are learnable parameters of the gate controller, and $\sigma$ is the sigmoid function, ensuring the gate value is bounded between 0 and 1. We detach $x$ before feeding it to $f_{\text{gate}}$ ($x_{\text{gate}} = x.\mathrm{detach}()$) to prevent the gate's gradients from directly influencing the representation $x$ being processed by the main LoRA path, simplifying learning dynamics. The gate parameters are trained jointly with the LoRA matrices.
The modified forward pass with this asymmetric gating mechanism is in Equation (4):

$$h_{\text{out}} = W_0 x + g(x) \cdot s \cdot B A x \tag{4}$$

This allows the LoRA update to be scaled down (when $g(x) \to 0$) for inputs where the pretrained model is already sufficient or the specific LoRA adaptation is not beneficial, and scaled up (when $g(x) \to 1$) when the adaptation is crucial (a stronger asymmetric modification). This dynamic scaling introduces input-level asymmetry to the adaptation process.
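A minimal sketch of the gate controller described by Equations (2) and (3) follows; names are again illustrative, and implementation details beyond what is stated above are our assumptions.

```python
import torch
import torch.nn as nn

class GateController(nn.Module):
    """Computes g(x) in [0, 1]: MLP form (Equation (2)) or linear form (Equation (3))."""
    def __init__(self, in_dim: int, hidden_dim: int = 16):
        super().__init__()
        if hidden_dim > 0:   # two-layer MLP gate (Equation (2))
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 1),
            )
        else:                # linear gate (Equation (3))
            self.net = nn.Linear(in_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Detach so gate gradients do not flow back into the main LoRA path
        return torch.sigmoid(self.net(x.detach()))
```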

3.2.2. LoRA Parameter Sparsification: Asymmetric Parameter Engagement

While LoRA reduces the total number of trainable parameters, LoRA matrices A and B are typically dense, implying a symmetric engagement of all parameters within the low-rank projection. We hypothesize that for nuanced tasks, only a sparse, asymmetric subset of these low-rank adaptation parameters might be truly necessary and impactful. To encourage this parameter-level asymmetry, we incorporate an L1 regularization term into the overall training loss. This technique is often referred to as promoting L1 sparsity. The L1 norm, or the sum of the absolute values of the parameters, has a unique property when used as a penalty in machine learning: it encourages many of the less important parameter weights to become exactly zero during training. This effectively “switches off” or prunes redundant components within the LoRA matrices, forcing the model to rely only on a sparse, critical subset of parameters for its updates. The L1 penalty is defined as in Equation (5):
$$\mathcal{L}_{\text{sparse}} = \lambda_{L_1} \sum_i \left( \|A_i\|_1 + \|B_i\|_1 \right) \tag{5}$$

where the sum is over all LoRA modules $i$ applied in the model, $\|\cdot\|_1$ denotes the L1 norm (sum of absolute values of the elements), and $\lambda_{L_1}$ is a hyperparameter controlling the strength of the sparsity penalty. This penalty encourages many elements in $A_i$ and $B_i$ to become zero during training, leading to a sparser, and thus more asymmetric, effective $\Delta W_i$. This focuses the adaptation on the most impactful parameter adjustments, potentially improving generalization and interpretability by highlighting which parts of the low-rank space are most critical for the task.
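Continuing the sketches above, the penalty of Equation (5) can be accumulated over all DS-LoRA modules in a few lines, assuming each module exposes its A and B matrices as in our earlier LoRALinear sketch.

```python
import torch

def l1_sparsity_loss(lora_modules, lam_l1: float) -> torch.Tensor:
    """Equation (5): lambda_L1 * sum_i (||A_i||_1 + ||B_i||_1)."""
    penalty = sum(m.A.abs().sum() + m.B.abs().sum() for m in lora_modules)
    return lam_l1 * penalty
```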

3.2.3. DS-LoRA Layer Forward Pass

The complete forward pass for a DS-LoRA layer combines the original weight computation with the asymmetrically gated and potentially sparse LoRA path and the scaling factor as in Equation (6):
$$h_{\text{out}}^{\text{DS-LoRA}} = W_0 x + g(x) \cdot \frac{\alpha}{r} \cdot B A x \tag{6}$$

All parameters of $W_0$ are frozen. The trainable parameters for each DS-LoRA layer are those in the asymmetrically active matrices $A$ and $B$ and its corresponding gate controller $f_{\text{gate}}$.
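Combining the two components, a DS-LoRA layer as in Equation (6) can be sketched by extending the LoRALinear and GateController classes above; this is a hypothetical composition for illustration, not the released implementation.

```python
class DSLoRALinear(LoRALinear):
    """Gated, sparsity-regularised LoRA layer (Equation (6))."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0,
                 gate_hidden: int = 16):
        super().__init__(base, r, alpha)
        self.gate = GateController(base.in_features, gate_hidden)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)  # shape (..., 1); broadcasts over the output dim
        return self.base(x) + g * self.scale * ((x @ self.A.T) @ self.B.T)
```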

3.3. Model Architecture and Training

3.3.1. Base Model and DS-LoRA Application

We apply DS-LoRA to the query ($W_q$) and value ($W_v$) projection matrices within the self-attention sub-layers of each transformer block. These matrices are common targets for LoRA due to their significant role in shaping the attention mechanism's behavior. The selection of these specific matrices for asymmetric adaptation is based on their critical influence on how the model processes input sequences. The original weights of the LLMs, except for the newly introduced DS-LoRA parameters (LoRA matrices $A$ and $B$ and gate controller parameters), are kept frozen during training. For offensive language detection, we append a linear classification head on top of the LLM, which takes the hidden state corresponding to the last input token and projects it to the number of target classes (e.g., offensive vs. non-offensive).
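For Hugging Face Llama-style backbones, wrapping the target projections and attaching the classification head could look roughly as follows. The attribute paths (layers[i].self_attn.q_proj / v_proj) follow that library's Llama implementation and may differ for other architectures; the sketch reuses the DSLoRALinear class from Section 3.2.

```python
import torch
import torch.nn as nn

def apply_ds_lora(backbone, r: int = 16, alpha: float = 32.0,
                  gate_hidden: int = 16):
    """Wrap the q/v projections of every attention block with DS-LoRA."""
    for block in backbone.layers:             # Llama-style module layout
        attn = block.self_attn
        attn.q_proj = DSLoRALinear(attn.q_proj, r, alpha, gate_hidden)
        attn.v_proj = DSLoRALinear(attn.v_proj, r, alpha, gate_hidden)
    return backbone

class OffensiveClassifier(nn.Module):
    """Frozen LLM backbone plus a linear head on the last token's hidden state."""
    def __init__(self, backbone, hidden_size: int, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.backbone(input_ids=input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state                # (batch, seq, hidden)
        last_idx = attention_mask.sum(dim=1) - 1      # last non-padding token
        pooled = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.head(pooled)
```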

3.3.2. Training Objective

The model is trained to minimize a composite loss function $\mathcal{L}_{\text{total}}$, which combines the standard classification loss with the L1 sparsity regularization term designed to promote asymmetry in the LoRA parameters, as in Equation (7):

$$\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{CE}} + \mathcal{L}_{\text{sparse}} \tag{7}$$
  • $\mathcal{L}_{\text{CE}}$ is the standard cross-entropy loss for the offensive language classification task, as in Equation (8):

$$\mathcal{L}_{\text{CE}} = -\sum_{j=1}^{N} \sum_{c=1}^{C} y_{j,c} \log(p_{j,c}) \tag{8}$$

    where $N$ is the batch size, $C$ is the number of classes, $y_{j,c}$ is the true label (1 if sample $j$ belongs to class $c$; 0 otherwise), and $p_{j,c}$ is the model's predicted probability of sample $j$ belonging to class $c$.
  • $\mathcal{L}_{\text{sparse}}$ is the L1 sparsity regularization term defined in Equation (5), encouraging the asymmetric engagement of LoRA parameters.
The hyperparameter $\lambda_{L_1}$ balances the contribution of the classification objective and the drive towards parameter-level asymmetry.

3.3.3. DS-LoRA Finetuning Algorithm

The overall finetuning process using DS-LoRA, which facilitates dynamic asymmetric adaptation, is summarized in Algorithm 1.
Algorithm 1 DS-LoRA Finetuning Algorithm for Asymmetric Adaptation
  • Require: Pretrained LLM $M_{\theta_0}$ (e.g., Llama-3 8B)
  • Require: Training dataset $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$
  • Require: LoRA rank $r$, LoRA alpha $\alpha$, L1 sparsity coefficient $\lambda_{L_1}$ (controls degree of parameter asymmetry)
  • Require: Learning rate $\eta$, number of epochs $E$, batch size $B_s$
  • Require: Target modules for DS-LoRA (e.g., $W_q$, $W_v$ in attention layers)
  1: Initialize DS-LoRA parameters for asymmetric adaptation:
  2: for all target linear layers $W_0$ in $M_{\theta_0}$ do
  3:     Initialize LoRA matrices $A$ (Kaiming uniform) and $B$ (zeros)
  4:     Initialize gate controller $f_{\text{gate}}$ parameters (e.g., Xavier uniform) for input-dependent asymmetry
  5:     Replace $W_0$ with the DS-LoRA layer (Equation (6))
  6: end for
  7: Freeze all original parameters $\theta_0$ in $M_{\theta_0}$
  8: Let $\theta_{\text{DS-LoRA}}$ be the set of all trainable parameters ($A$, $B$, $f_{\text{gate}}$ parameters)
  9: Initialize optimizer (e.g., AdamW) for $\theta_{\text{DS-LoRA}}$
 10: for epoch $= 1$ to $E$ do
 11:     Shuffle $D$
 12:     for each batch $\{(x_b, y_b)\}$ in $D$ do
 13:         Compute model output $\hat{y}_b = M_{\theta_0, \theta_{\text{DS-LoRA}}}(x_b)$ (incorporating asymmetric DS-LoRA updates)
 14:         Compute classification loss $\mathcal{L}_{\text{CE}}(y_b, \hat{y}_b)$
 15:         Compute L1 sparsity loss $\mathcal{L}_{\text{sparse}}$ (Equation (5)) over all $A$, $B$ matrices to promote parameter asymmetry
 16:         Compute total loss $\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{CE}} + \mathcal{L}_{\text{sparse}}$
 17:         Perform backpropagation: $\nabla_{\theta_{\text{DS-LoRA}}} \mathcal{L}_{\text{total}}$
 18:         Update $\theta_{\text{DS-LoRA}}$ using the optimizer
 19:     end for
 20: end for
 21: return Trained model $M_{\theta_0, \theta_{\text{DS-LoRA}}}$ with learned asymmetric adaptation capabilities
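The training loop of Algorithm 1 translates directly into PyTorch. The sketch below reuses the illustrative classes from Section 3.2 and assumes a data loader yielding (input_ids, attention_mask, labels) batches; it is not the authors' released code.

```python
import torch.nn as nn
from torch.optim import AdamW

def finetune_ds_lora(model, train_loader, lam_l1: float = 1e-5,
                     lr: float = 2e-4, epochs: int = 5, device: str = "cuda"):
    """Sketch of Algorithm 1: only DS-LoRA parameters (A, B, gates) and the
    classification head are trainable; the backbone stays frozen."""
    lora_modules = [m for m in model.modules() if isinstance(m, DSLoRALinear)]
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = AdamW(trainable, lr=lr)
    ce = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            logits = model(input_ids.to(device), attention_mask.to(device))
            loss = ce(logits, labels.to(device))                   # L_CE
            loss = loss + l1_sparsity_loss(lora_modules, lam_l1)   # + L_sparse
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```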

4. Experimental Setup

We detail the experimental setup and describe the base models, datasets used for training and evaluation, evaluation metrics, baseline methods, and implementation details.

4.1. Base Models

We selected two recent and powerful open source Large Language Models (LLMs) as base models for our experiments to demonstrate the generalizability of DS-LoRA:
  • Llama-3 8B Instruct: A decoder-only transformer model from Meta AI with 8 billion parameters. We specifically use the instruction-tuned variant, which has been aligned for better instruction following and safety, providing a strong foundation for downstream task adaptation.
  • Gemma 7B Instruct: A decoder-only transformer model from Google with 7 billion parameters, also an instruction-tuned variant. Gemma models are built using architectures and techniques similar to those of Google's Gemini models.
For both models, we utilized their official Hugging Face Transformers library [34] implementations. During finetuning with DS-LoRA or other PEFT methods, the original weights of these base models were kept frozen, and only the adaptation parameters were updated. A linear classification head was added on top of the base model’s last hidden state output to predict the offensive language class.

4.2. Datasets

We conducted experiments on widely recognized public benchmark datasets for offensive language detection, chosen to cover a range of offensive phenomena and annotation schemes:
  • OLID [35]: This dataset, from SemEval-2019 Task 6, contains English tweets annotated for three hierarchical levels. We focus on Sub-task A: Offensive language identification (OFF vs. NOT). This task requires identifying whether a tweet contains any form of offensive language, including insults, threats, and profanity. We use the official training, development, and test splits.
  • HateXplain [27]: This dataset provides fine-grained annotations for English posts from Twitter and Gab, distinguishing between hate speech, offensive language, and normal language. Crucially, it also includes human-annotated rationales (token-level explanations) for each classification, which, while not directly used for training our classification model, underscores the dataset’s focus on nuanced and explainable offensiveness. We use the three-class classification task (hate, offensive, normal) and also report a binary offensive vs. normal version for comparison.
For all datasets, we adhere to the standard training, validation, and testing splits provided by the dataset creators to ensure fair comparison with prior work. Preprocessing steps include minimal cleaning, such as the normalization of user mentions and URLs, and tokenization using the respective base model’s tokenizer.
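As an illustration of the minimal cleaning step described above, a regex-based normalizer might look as follows. The placeholder tokens @USER and HTTPURL are common conventions we assume here; the exact placeholders used in our pipeline are not specified in the text.

```python
import re

def normalize_tweet(text: str) -> str:
    """Map user mentions and URLs to placeholder tokens, then trim whitespace."""
    text = re.sub(r"@\w+", "@USER", text)                     # user mentions
    text = re.sub(r"https?://\S+|www\.\S+", "HTTPURL", text)  # URLs
    return text.strip()
```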

4.3. Evaluation Metrics

To comprehensively evaluate the performance of our models, we use standard classification metrics, which include the Accuracy, Precision, Recall, F1-Score, and Macro-F1. Given that offensive language is often a minority class, the F1-score (especially Macro-F1 or F1 for the positive class) and Recall for the offensive class are particularly important indicators of a model’s practical utility.
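For the binary setting, these metrics can be computed, for example, with scikit-learn; encoding the offensive class as label 1 is an assumption for illustration.

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_recall_fscore_support)

def evaluate(y_true, y_pred):
    """Accuracy, Precision/Recall/F1 for the offensive class, and Macro-F1."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1_offensive": f1,
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }
```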

4.4. Baselines

We compare DS-LoRA against several strong baselines and existing PEFT methods:
  • Zero-Shot LLM: The base Llama-3 8B and Gemma 7B models without any finetuning, using carefully crafted prompts to perform offensive language classification in a zero-shot manner.
  • Full Finetuning: Full finetuning of the base LLMs. Due to computational constraints, this might be limited to finetuning only the top few layers or a smaller version of the model as an indicative upper bound.
  • LoRA [15]: The original LoRA implementation applied to the same target modules ( W q , W v ) as DS-LoRA, using various ranks (r) for comparison.
  • Adapters [36]: Finetuning using Houlsby adapters inserted into the transformer layers.
  • AdaLoRA [18]: An adaptive version of LoRA that dynamically allocates the rank budget to weight matrices based on their importance, introducing a form of structural asymmetry.
  • QLoRA [32]: A highly efficient adaptation method that enables finetuning on quantized four-bit models. It is the de facto standard for memory-efficient finetuning and serves as a powerful efficiency- and performance-oriented baseline.
  • DoRA [33]: A recent state-of-the-art method that decomposes the pretrained weights into magnitude and direction components for finetuning. DoRA has been shown to improve the performance and training stability of LoRA.
For all PEFT methods, including DS-LoRA and standard LoRA, we tune relevant hyperparameters (e.g., rank r, LoRA α , learning rate, adapter bottleneck dimension) on the development set of each dataset.

4.5. Implementation Details

For DS-LoRA and standard LoRA, we target the query ($W_q$) and value ($W_v$) projection matrices in the self-attention mechanisms of all transformer layers. The LoRA rank $r$ is explored in the set $\{4, 8, 16, 32\}$. The LoRA scaling factor $\alpha$ is typically set to $2r$ or $r$. For DS-LoRA, the gate controller $f_{\text{gate}}$ is implemented as a two-layer MLP with a hidden dimension of $d_{\text{gate}} \in \{16, 32\}$ or a simple linear layer if $d_{\text{gate}} = 0$, with ReLU activation in the hidden layer and a sigmoid output. The L1 sparsity regularization coefficient $\lambda_{L_1}$ is tuned from the set $\{10^{-4}, 10^{-5}, 10^{-6}, 0\}$. Based on this tuning, we selected a two-layer MLP with a hidden dimension of 16 for all main experiments, as it provided the best balance of performance and parameter efficiency. We trained for a maximum of $E = 5$ to 10 epochs, depending on the dataset size, with early stopping based on the validation set's Macro-F1 score. Batch sizes were chosen based on GPU memory constraints, typically ranging from 4 to 16 per GPU. Experiments were run on four NVIDIA A100 GPUs.

5. Results and Analysis

In this section, we present the empirical results of our proposed DS-LoRA compared against baseline methods on the OLID and HateXplain datasets. We interpret these results through the lens of how asymmetric adaptation benefits performance on nuanced tasks. We then present ablation studies to understand the individual contributions of DS-LoRA's key components and provide further analysis of their behavior, particularly how they manifest asymmetry in model adaptation.

5.1. Main Results

Table 1 and Table 2 summarize the performance of DS-LoRA and baseline methods on the OLID and HateXplain datasets, respectively. We report the Accuracy, Precision, Recall, F1-Score, and Macro-F1. All PEFT methods were applied to both Llama-3 8B and Gemma 7B base models.
As shown in Table 1 and Table 2, DS-LoRA consistently outperforms all baseline methods across both datasets and for both the Llama-3 8B and Gemma 7B base models. This suggests that our method is more effective than the more uniform approaches of standard LoRA or other baselines. On OLID, DS-LoRA (Llama-3 8B) achieves a Macro-F1 of 81.3%, an improvement of 1.2% over standard LoRA. Notably, the F1-score for the “Offensive” class sees a more significant gain of 1.8% (77.7% vs. 75.9%). This highlights DS-LoRA’s enhanced ability to correctly identify the target minority class, likely due to its more targeted, asymmetric adjustments that can better capture the specific nuances of this class compared to a more symmetric update applied across all inputs. Similar trends are observed for the Gemma 7B model.
The performance gains from DS-LoRA’s asymmetric approach are even more pronounced on the more challenging HateXplain dataset, which features finer-grained distinctions and thus greater asymmetries in class characteristics. For Llama-3 8B, DS-LoRA achieves a Macro-F1 of 72.0%, surpassing standard LoRA by 1.8%. The F1-scores for both “Hate” (66.5% vs. 64.0%) and “Offensive” (70.1% vs. 67.5%) classes show substantial improvements. This suggests that the dynamic gating (enabling input-level asymmetry) and sparsity mechanisms (enabling parameter-level asymmetry) in DS-LoRA are particularly beneficial for capturing the subtle, often asymmetrically expressed cues that differentiate these categories. These results indicate that DS-LoRA’s adaptive asymmetric nature allows it to make more precise adjustments to the base LLM, which is crucial for nuanced classification tasks where a one-size-fits-all (symmetric) adaptation is suboptimal. The number of trainable parameters for DS-LoRA is only marginally larger than that for standard LoRA (due to the small gate controller) but remains significantly lower than for partial full finetuning or standard adapter approaches with larger bottleneck dimensions, demonstrating efficient use of parameters through its asymmetric design.

5.2. Ablation Studies

To understand the individual contributions of the key asymmetric components of DS-LoRA—input-dependent gating (Gate), which introduces input-level asymmetry, and L1 sparsity regularization (L1), which fosters parameter-level asymmetry—we conducted ablation studies on the OLID dataset using Llama-3 8B. The results are presented in Table 3.
The ablation results in Table 3 clearly demonstrate that both the input-level asymmetry introduced by the gating mechanism and the parameter-level asymmetry from L1 sparsity regularization contribute positively to DS-LoRA's performance. Adding only the gating mechanism to standard LoRA ("LoRA + Gate") improves the F1(Offensive) score by 1.2% and Macro-F1 by 0.8%. This suggests that dynamically and asymmetrically scaling LoRA module contributions based on input characteristics is highly beneficial, allowing the model to respond differently to varying inputs. Similarly, incorporating only L1 sparsity ("LoRA + L1 Sparsity") provides a modest improvement, indicating that encouraging sparser, more asymmetric LoRA update matrices helps in refining the adaptation by focusing on critical parameters. The full DS-LoRA model, combining both asymmetric components, achieves the best performance, underscoring the synergistic effect of dynamic input-level asymmetry and learned parameter-level asymmetry. This synergy allows for a more nuanced and effective adaptation than either component alone or a more symmetric baseline.
The design of the gate controller ($f_{\text{gate}}$) offers a trade-off between expressive power and parameter overhead. A more complex gate might better model the decision of when to apply adaptation, but at the cost of more trainable parameters. To justify our design choice, we conducted an ablation study on the architecture of the gate controller itself. We evaluated three configurations on the OLID dataset with Llama-3 8B:
  • Linear Gate: A simple linear layer followed by a sigmoid ($d_{\text{gate}} = 0$), as described in Equation (3).
  • MLP ($d_{\text{gate}} = 16$): A two-layer MLP with a hidden dimension of 16 and ReLU activation. This is the configuration used in our main experiments.
  • MLP ($d_{\text{gate}} = 32$): A two-layer MLP with a hidden dimension of 32 and ReLU activation.
The results are presented in Table 4. Even the simplest linear gate provides a substantial performance boost over standard LoRA, improving the Macro-F1 score by 0.7%. This confirms the fundamental benefit of the input-dependent gating mechanism. Introducing a hidden layer ($d_{\text{gate}} = 16$) yields a further improvement, reaching the best Macro-F1 score of 81.3%. This suggests that the non-linearity and increased capacity of the MLP allow it to learn a more effective gating function. However, increasing the hidden dimension to 32 did not lead to additional gains and in fact resulted in a slight drop in performance, likely due to overfitting in this small control network. Therefore, we selected the MLP with a hidden dimension of 16 as our default architecture for all experiments. It strikes the optimal balance, providing a clear performance benefit over a simpler linear gate without adding unnecessary complexity or parameters.

5.3. Analysis of DS-LoRA Components

5.3.1. Sensitivity to LoRA Rank r

We investigate the performance of DS-LoRA and standard LoRA across different LoRA ranks ($r \in \{4, 8, 16, 32\}$) on the HateXplain dataset (Macro-F1) using Llama-3 8B. Figure 2 illustrates the results, comparing DS-LoRA's asymmetric approach with standard LoRA's more symmetric use of rank.
Figure 2 shows that DS-LoRA consistently outperforms standard LoRA across all tested ranks. Both methods generally improve with increasing rank (adaptation capacity), but DS-LoRA exhibits a stronger performance curve. Its advantage over standard LoRA is maintained, suggesting that its asymmetric adaptation mechanisms (dynamic gating and sparsity) enable more effective utilization of the available rank. Even with a larger adaptation capacity (higher r), the ability to asymmetrically allocate and refine this capacity provides a distinct advantage. Interestingly, DS-LoRA with r = 8 already surpasses standard LoRA with r = 16 . This indicates that the asymmetric and targeted nature of DS-LoRA allows for achieving comparable or better results with a lower intrinsic rank (and thus fewer parameters within the LoRA matrices before sparsity) compared to a more symmetric, dense application of a higher rank.

5.3.2. Impact of Sparsity Coefficient $\lambda_{L_1}$

We analyze the effect of the L1 sparsity coefficient $\lambda_{L_1}$ on DS-LoRA's performance (Macro-F1, OLID, Llama-3 8B, $r = 16$) and the resulting parameter-level asymmetry (sparsity level) in the LoRA matrices.
Figure 3 illustrates that increasing $\lambda_{L_1}$ generally leads to higher parameter-level asymmetry (sparsity) in the LoRA $A$ and $B$ matrices. Performance (Macro-F1) initially improves with moderate sparsity, peaking around $\lambda_{L_1} = 10^{-5}$. At this point, over 50% of LoRA weights become zero (high asymmetry), yet the model delivers its best F1 score. This strongly suggests that a significant portion of parameters in a standard dense LoRA (which exhibits parameter-level symmetry by updating all elements of $A$ and $B$) might be redundant or even detrimental for nuanced tasks. The introduced asymmetry allows the model to focus its capacity. However, an excessively high $\lambda_{L_1}$ (e.g., $10^{-4}$) can lead to over-sparsification (extreme asymmetry) and a slight degradation in performance, indicating a trade-off between parsimony from asymmetry and model capacity. The optimal $\lambda_{L_1}$ effectively prunes less important LoRA parameters, creating a beneficial level of parameter-level asymmetry that allows the model to focus on the most discriminative low-rank updates.

5.3.3. Qualitative Analysis of Gate Activations

To further investigate the behavior of the dynamic gating mechanism—a core driver of input-level asymmetry in DS-LoRA—we analyzed the average gate activation values ($g(x)$) for DS-LoRA modules across different layer groups of the Llama-3 8B model. These activations were conditioned on the ground-truth input category from the HateXplain dataset, which itself contains categories with asymmetric levels of offensiveness. Table 5 presents these average activations, including the standard deviation to indicate variance. This analysis provides a window into how the model asymmetrically allocates adaptive capacity based on input.
Several key observations emerge from Table 5, clearly demonstrating input-asymmetric adaptation. Firstly, across all layer groups, inputs categorized as "Normal" consistently exhibit the lowest average gate activations (e.g., an overall model average of 0.31 ± 0.05). This suggests that DS-LoRA appropriately reduces LoRA module contributions when the base model's pretrained knowledge is likely sufficient, effectively choosing a more symmetric response (closer to the frozen base LLM) for benign inputs. This conserves parameter updates and prevents unnecessary asymmetric shifts for inputs that do not require them. Secondly, inputs identified as "Hate Speech" trigger significantly higher average gate activations (overall model average of 0.60 ± 0.10), particularly in the middle layers (average of 0.68 ± 0.14). This indicates that the model learns to engage the LoRA adaptations more strongly—an asymmetric increase in adaptation intensity—when encountering highly offensive content that requires specialized, asymmetric adjustments from the base model's representations. Inputs classified as "Offensive (Non-Hate)" generally show gate activations between those for "Normal" and "Hate Speech," reflecting an intermediate level of asymmetric adaptation.
The higher engagement in middle layers for “Hate Speech” is particularly noteworthy in the context of asymmetric feature learning. These layers in LLMs are often associated with capturing more complex semantic relationships and abstract features. The increased LoRA activity here might suggest that DS-LoRA is making crucial fine-grained adjustments in these intermediate representations to better distinguish severe forms of offensive language—which are distinct from benign content—from merely offensive or normal text. This layer-specific and category-specific dynamic modulation of LoRA pathways, a clear manifestation of input-level asymmetry, supports our hypothesis that the gating mechanism enables a more targeted, efficient, and asymmetric use of adaptation parameters, contributing to the improved performance on nuanced distinctions. This directly relates to how asymmetry can contribute to more explainable LLM behaviors by revealing where and how adaptation is applied differently.
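Such per-layer, per-category statistics can be collected with forward hooks on the gate controllers. The instrumentation sketch below is ours, not from a released implementation, and assumes the DSLoRALinear class from Section 3.2.

```python
gate_records = []  # (layer_index, mean g(x)) logged per forward pass

def register_gate_hooks(model):
    """Attach hooks that log the average gate activation of each DS-LoRA layer."""
    ds_layers = [m for m in model.modules() if isinstance(m, DSLoRALinear)]
    for i, layer in enumerate(ds_layers):
        def hook(module, inputs, output, idx=i):
            gate_records.append((idx, output.mean().item()))
        layer.gate.register_forward_hook(hook)
```

After running the model over a labelled evaluation set, the records can be grouped by layer and ground-truth category to produce averages like those in Table 5.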
To assess the broader applicability and robustness of our proposed DS-LoRA, we extend our evaluation beyond English social media. The core hypothesis of our work is that the benefits of asymmetric adaptation are particularly salient for nuanced tasks, a characteristic not limited to a single domain or language. Therefore, we tested DS-LoRA’s performance in two new settings: a different domain (English forum comments) and a different language (German).

5.4. Cross-Domain and Cross-Lingual Performance

We used the publicly available Jigsaw/Wikipedia Toxicity Classification dataset (https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge, accessed on 22 June 2025). This dataset consists of a large number of comments from Wikipedia talk page edits, which have been human-rated for toxicity. We frame this as a binary classification task (toxic vs. non-toxic). This domain differs from Twitter in its longer text format and more formal (though often still adversarial) conversational style, providing a robust test for domain generalization.
To evaluate cross-lingual performance, we used the GermEval 2018 Task 1 dataset [37], a standard benchmark for identifying offensive language in German tweets. We focused on the binary classification subtask (OFFENSIVE vs. OTHER). For this experiment, we used a base model with multilingual capabilities, specifically Llama-3 8B Instruct, to ensure a strong foundation for the German language.
For both new datasets, we followed the same experimental protocol as before, comparing our DS-LoRA against the standard LoRA baseline. The results of our generalization experiments are summarized in Table 6.
On the Wikipedia Toxicity dataset, DS-LoRA achieves an F1-score of 86.1%, a significant improvement of 1.3% over the standard LoRA baseline. This suggests that the ability to dynamically modulate adaptation strength and prune redundant parameters is beneficial even for longer-form text and different definitions of problematic content. In the cross-lingual setting on the German GermEval 2018 dataset, DS-LoRA continues to outperform its symmetric counterpart, achieving an F1-score of 80.4% compared to 78.8% for standard LoRA. This result is particularly compelling, as it demonstrates that our method is not reliant on English-specific linguistic features. The core mechanism—allowing the model to learn how much and which parts of the LoRA module to apply based on the input—is a general principle that translates well to other languages.

5.5. Analysis of Computational Cost

While the primary goal of DS-LoRA is to improve model performance on nuanced tasks, it is crucial to assess its practicality by analyzing the computational overhead it introduces. In this section, we quantify the trade-offs in terms of training time, memory usage, and inference latency.
  • Training Time is measured as the average wall clock time per training epoch.
  • Peak GPU Memory is the maximum allocated GPU memory during the training process, as reported by PyTorch v2.3.1.
  • Inference Latency is the average time to perform a forward pass for a single sample, measured with a batch size of 1 to isolate the per-sample processing time.
All methods are configured with a LoRA rank of r = 16 .
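Per-sample latency can be measured along the following lines; this is a sketch assuming a CUDA device and a sample dict of input_ids/attention_mask, with warm-up iterations to stabilise kernel timings.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, sample, n_warmup: int = 10, n_runs: int = 100):
    """Average forward-pass time for a single sample (batch size 1), in seconds."""
    model.eval()
    for _ in range(n_warmup):            # warm-up passes
        model(**sample)
    torch.cuda.synchronize()             # ensure queued kernels have finished
    start = time.perf_counter()
    for _ in range(n_runs):
        model(**sample)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / n_runs
```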
The results of our computational cost analysis are presented in Table 7. DS-LoRA exhibits a 12.4% increase in training time per epoch compared to standard LoRA. This overhead is attributable to two main sources:
  • L1 Sparsity Loss Calculation: At each training step, the model must compute the L1 norm over all parameters in the LoRA matrices (A and B). While computationally simple, this operation iterates over millions of parameters, adding a noticeable cost to the loss computation phase.
  • Gate Controller Computation: The forward and backward passes for the small gate controller networks at each adapted layer contribute additional floating-point operations.
This is a one-time cost during the training phase. The memory footprint sees only a marginal increase of 2.2%, primarily for storing the gate parameters and their optimizer states, confirming that the gating mechanism is lightweight.
During inference, the L1 regularization term is absent. The primary source of overhead is the forward pass through the gating networks, which must be executed for each input at every adapted layer. This results in a 9.4% increase in per-sample latency compared to standard LoRA in our measurements.
DS-LoRA was designed to maximize performance, and this analysis confirms it does so at a modest computational cost. For the OLID task, this ∼10–12% increase in computational resources yields a 1.2% absolute improvement in Macro-F1 over standard LoRA and a 0.3% improvement over the highly competitive DoRA.
It is also important to consider the nature of the induced sparsity. Our latency measurements use standard dense matrix multiplication kernels. The significant parameter sparsity (∼50%, as shown in Figure 3) created by L1 regularization could potentially lead to faster inference if deployed on hardware or with software libraries (e.g., NVIDIA's cuSPARSELt) that are optimized for sparse computations. In such scenarios, the latency cost of the gate controllers could be partially or even fully offset.

5.6. Case Study

To provide a more intuitive understanding of how DS-LoRA’s asymmetric adaptation helps in practice, we present a qualitative analysis of specific examples where standard LoRA fails but DS-LoRA succeeds. These cases, drawn from the HateXplain test set, highlight the types of nuanced and context-dependent offensive language that benefit most from our method.
For each example, we compare the predictions of the base model (Llama-3 8B Instruct, zero-shot), standard LoRA, and DS-LoRA. We also report the overall average gate activation ($\bar{g}(x)$) from DS-LoRA, which serves as a proxy for how much the model decided to "engage" its finetuned knowledge for that specific instance. A higher value indicates that DS-LoRA applied a stronger adaptation.
Table 8 showcases several such examples, and they reveal a clear pattern. Standard LoRA, with its uniform adaptation strategy, struggles with text where the offensiveness is not derived from explicit keywords but from sarcasm, subtext, coded language, or stereotypes. In these cases, the base LLM's initial inclination is to classify the text as non-offensive based on a literal reading. Standard LoRA's fixed-strength adaptation is often insufficient to overturn this strong initial bias. In contrast, DS-LoRA's gating mechanism learns to identify these challenging, ambiguous instances. The high average gate activation scores ($\bar{g}(x) > 0.5$) for all these cases indicate that the model "knew" it was facing a difficult input and responded by dynamically increasing the influence of the LoRA updates. This ability to apply adaptation strength on demand is precisely what allows DS-LoRA to capture the contextual nuances that other methods miss, providing a more robust and discerning model for offensive language detection.

6. Limitations and Future Work

Despite the promising results, our work has several limitations that open avenues for future research.
Computational Overhead of Gating. The core innovation of DS-LoRA, the dynamic gating mechanism, introduces computational overhead. As quantified in our cost analysis (Section 5.5), the gate controllers add a modest but non-negligible cost to both training time (∼12%) and inference latency (∼9%). While we argue this is a worthwhile trade-off for the gains in accuracy, for applications with extremely strict latency or energy budgets (e.g., on-device deployment), this overhead might be a limiting factor. Future work could explore more computationally efficient gate architectures, such as using parameter-free functions or sharing gate controllers across layers to reduce this cost.
Risk of Gate Controller Overfitting. The gate controller is a small neural network that is trained jointly with the LoRA matrices. Like any neural network, it is susceptible to overfitting, especially on datasets with limited size or diversity. The gate could potentially learn spurious correlations in the training data, leading it to incorrectly modulate the LoRA adaptation for out-of-distribution samples. While our experiments on established benchmarks did not indicate this was a major issue, it remains a potential limitation. Future research could investigate applying specific regularization techniques (e.g., dropout or weight decay) directly to the gate controller's parameters to improve its generalization.
Focus on Classification Tasks. Our evaluation focused exclusively on text classification. While DS-LoRA demonstrated strong performance, its applicability to generative tasks (e.g., summarization, machine translation, instruction following) has not been tested. It is an open question how dynamic, instance-specific gating would affect the coherence and quality of generated text. Applying and evaluating DS-LoRA in a generative setting is a promising direction for future work.

7. Conclusions

In this paper, we proposed DS-LoRA, designed to leverage asymmetric adaptation for enhancing nuanced offensive language detection in LLMs. DS-LoRA explicitly breaks from more symmetric or uniform adaptation strategies by incorporating two key mechanisms: input-dependent gating, which facilitates input-level asymmetry by dynamically scaling adaptation strength based on instance characteristics, and L1 sparsity regularization, which promotes parameter-level asymmetry by encouraging sparse, targeted updates within LoRA modules. Our extensive experiments on the OLID and HateXplain datasets demonstrated that DS-LoRA’s capabilities allow it to significantly outperform standard LoRA and other strong baselines.

Author Contributions

Writing—original draft, Y.W.; Writing—review & editing, J.S.; Visualization, B.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by The People’s Public Security University of China Basic Science Fee Project (grant number: 2023JKF02ZK04).

Data Availability Statement

The data presented in this study is available at: https://sites.google.com/site/offensevalsharedtask/olid (accessed on 22 June 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhu, S.; Supryadi; Xu, S.; Sun, H.; Pan, L.; Cui, M.; Du, J.; Jin, R.; Branco, A.; Xiong, D. Multilingual Large Language Models: A Systematic Survey. arXiv 2024, arXiv:2411.11072. [Google Scholar]
  2. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H.W.; Sutton, C.; Gehrmann, S.; et al. PaLM: Scaling Language Modeling with Pathways. J. Mach. Learn. Res. 2023, 24, 1–113. [Google Scholar]
  3. Shanahan, M. Talking about large language models. Commun. ACM 2024, 67, 68–79. [Google Scholar] [CrossRef]
  4. Fortuna, P.; Nunes, S. A survey on automatic detection of hate speech in text. ACM Comput. Surv. (Csur) 2018, 51, 1–30. [Google Scholar] [CrossRef]
  5. Mutanga, R.T.; Naicker, N.; Olugbara, O.O. Detecting hate speech on twitter network using ensemble machine learning. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 331–339. [Google Scholar] [CrossRef]
  6. Davidson, T.; Warmsley, D.; Macy, M.; Weber, I. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, Montreal, QC, Canada, 15–18 May 2017; Volume 11, pp. 512–515. [Google Scholar]
  7. Van Bruwaene, D.; Huang, Q.; Inkpen, D. A multi-platform dataset for detecting cyberbullying in social media. Lang. Resour. Eval. 2020, 54, 851–874. [Google Scholar] [CrossRef]
  8. Shi, X.; Liu, X.; Xu, C.; Huang, Y.; Chen, F.; Zhu, S. Cross-lingual offensive speech identification with transfer learning for low-resource languages. Comput. Electr. Eng. 2022, 101, 108005. [Google Scholar] [CrossRef]
  9. Zhu, S.; Pan, L.; Li, B.; Xiong, D. LANDeRMT: Dectecting and Routing Language-Aware Neurons for Selectively Finetuning LLMs to Machine Translation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Bangkok, Thailand, 11–16 August 2024; pp. 12135–12148. [Google Scholar]
  10. Zhu, S.; Cui, M.; Xiong, D. Towards robust in-context learning for machine translation with large language models. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), Torino, Italy, 20–25 May 2024; pp. 16619–16629. [Google Scholar]
  11. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.; Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. A survey on evaluation of large language models. ACM Trans. Intell. Syst. Technol. 2024, 15, 1–45. [Google Scholar] [CrossRef]
  12. Gui, A.; Ye, J.; Xiao, H. G-adapter: Towards structure-aware parameter-efficient transfer learning for graph transformer networks. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 12226–12234. [Google Scholar]
  13. Lester, B.; Al-Rfou, R.; Constant, N. The Power of Scale for Parameter-Efficient Prompt Tuning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Punta Cana, Dominican Republic, 7–11 November 2021; Association for Computational Linguistics: Vienna, Austria, 2021; pp. 3045–3059. [Google Scholar]
  14. Li, X.L.; Liang, P. Prefix-Tuning: Optimizing Continuous Prompts for Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Bangkok, Thailand, 1–6 August 2021; pp. 4582–4597. [Google Scholar]
  15. Hu, E.J.; Shen, Y.; Wallis, P.; Allen-Zhu, Z.; Li, Y.; Wang, S.; Wang, L.; Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 25–29 April 2022. [Google Scholar]
  16. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. LLaMA: Open and Efficient Foundation Language Models. arXiv 2023, arXiv:2302.13971. [Google Scholar]
  17. Gallegos, I.O.; Rossi, R.A.; Barrow, J.; Tanjim, M.M.; Kim, S.; Dernoncourt, F.; Yu, T.; Zhang, R.; Ahmed, N.K. Bias and fairness in large language models: A survey. Comput. Linguist. 2024, 50, 1097–1179. [Google Scholar] [CrossRef]
  18. Zhang, Q.; Chen, M.; Bukharin, A.; He, P.; Cheng, Y.; Chen, W.; Zhao, T. Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning. In Proceedings of the International Conference on Learning Representations (ICLR), Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  19. Pradhan, R.; Chaturvedi, A.; Tripathi, A.; Sharma, D.K. A review on offensive language detection. In Advances in Data and Information Sciences: Proceedings of ICDIS 2019; Springer: Singapore, 2020; pp. 433–439. [Google Scholar]
  20. Nobata, C.; Tetreault, J.; Thomas, A.; Mehdad, Y.; Chang, Y. Abusive language detection in online user content. In Proceedings of the 25th International Conference on World Wide Web, Montreal, QC, Canada, 11–15 April 2016; pp. 145–153. [Google Scholar]
  21. Gambäck, B.; Sikdar, U.K. Using convolutional neural networks to classify hate-speech. In Proceedings of the First Workshop on Abusive Language Online, Vancouver, BC, Canada, 4 August 2017; pp. 85–90. [Google Scholar]
  22. Badjatiya, P.; Gupta, S.; Gupta, M.; Varma, V. Deep learning for hate speech detection in tweets. In Proceedings of the 26th International Conference on World Wide Web Companion, Perth, Australia, 3–7 April 2017; pp. 759–760. [Google Scholar]
  23. Pamungkas, E.W.; Patti, V. Cross-domain and cross-lingual abusive language detection: A hybrid approach with deep learning and a multilingual lexicon. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, Florence, Italy, 28 July–2 August 2019; pp. 363–370. [Google Scholar]
  24. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; Volume 1 (Long and Short Papers), pp. 4171–4186. [Google Scholar]
25. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv 2019, arXiv:1907.11692. [Google Scholar]
  26. Caselli, T.; Basile, V.; Mitrovic, J.; Granitzer, M. HateBERT: Retraining BERT for Abusive Language Detection in English. In Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), Bangkok, Thailand, 6 August 2021; Association for Computational Linguistics: Stroudsburg, PA, USA, 2021; pp. 17–25. [Google Scholar]
  27. Mathew, B.; Saha, P.; Yimam, S.M.; Biemann, C.; Goyal, P.; Mukherjee, A. HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtually, 2–9 February 2021; Volume 35, pp. 14867–14875. [Google Scholar]
  28. Zhu, S.; Pan, L.; Xiong, D. FEDS-ICL: Enhancing translation ability and efficiency of large language model by optimizing demonstration selection. Inf. Process. Manag. 2024, 61, 103825. [Google Scholar] [CrossRef]
  29. Zhang, D.; Feng, T.; Xue, L.; Wang, Y.; Dong, Y.; Tang, J. Parameter-Efficient Fine-Tuning for Foundation Models. arXiv 2025, arXiv:2501.13787. [Google Scholar]
  30. Xie, T.; Li, T.; Zhu, W.; Han, W.; Zhao, Y. PEDRO: Parameter-Efficient Fine-tuning with Prompt DEpenDent Representation MOdification. arXiv 2024, arXiv:2409.17834. [Google Scholar]
  31. Zhang, L.; Zhang, L.; Shi, S.; Chu, X.; Li, B. Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning. arXiv 2023, arXiv:2308.03303. [Google Scholar]
  32. Dettmers, T.; Pagnoni, A.; Holtzman, A.; Zettlemoyer, L. QLoRA: Efficient Finetuning of Quantized LLMs. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, USA, 10–16 December 2023; Volume 36, pp. 10088–10115. [Google Scholar]
33. Liu, S.Y.; Lin, C.Y.; Lee, H.Y.; Wang, Y.C.F. DoRA: Weight-Decomposed Low-Rank Adaptation. arXiv 2024, arXiv:2402.09353. [Google Scholar]
  34. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; et al. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Online, 16–20 November 2020; pp. 38–45. [Google Scholar]
  35. Zampieri, M.; Malmasi, S.; Nakov, P.; Rosenthal, S.; Farra, N.; Kumar, R. Predicting the Type and Target of Offensive Posts in Social Media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, MN, USA, 2–7 June 2019; pp. 1415–1420. [Google Scholar]
36. Houlsby, N.; Giurgiu, A.; Jastrzebski, S.; Morrone, B.; De Laroussilhe, Q.; Gesmundo, A.; Attariyan, M.; Gelly, S. Parameter-efficient transfer learning for NLP. In Proceedings of the International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 2790–2799. [Google Scholar]
37. Wiegand, M.; Siegel, M.; Ruppenhofer, J. Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language. In Proceedings of the 14th Conference on Natural Language Processing (KONVENS 2018), Vienna, Austria, 21 September 2018. [Google Scholar]
Figure 1. Visual comparison of the forward pass in a standard LoRA layer versus a DS-LoRA layer. (a) In standard LoRA, the low-rank update is dense and applied uniformly to all inputs. (b) Our DS-LoRA introduces two key architectural changes: (1) a lightweight gate controller that computes an input-dependent scalar g(x) to dynamically modulate the update strength, and (2) the LoRA matrices A and B are encouraged to be sparse through L1 regularization, focusing the adaptation on the most critical parameters.
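To make the forward pass in Figure 1b concrete, the sketch below shows how a gated, L1-regularized LoRA layer could be written in PyTorch. This is a minimal illustration reconstructed from the caption, not the authors' released implementation; the class name DSLoRALinear, the initialization scheme, and the gate width d_gate are our own assumptions.

```python
# Minimal sketch of a DS-LoRA layer (illustrative, not the authors' code).
import torch
import torch.nn as nn

class DSLoRALinear(nn.Module):
    """Frozen base linear layer plus a gated, sparsity-regularized low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32, d_gate: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # the pretrained weights W0 stay frozen
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        # Low-rank update matrices: A maps d_in -> r, B maps r -> d_out
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scaling = alpha / r
        # Lightweight gate controller: a small MLP emitting a scalar g(x) in (0, 1)
        self.gate = nn.Sequential(
            nn.Linear(d_in, d_gate), nn.ReLU(),
            nn.Linear(d_gate, 1), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g = self.gate(x)                    # input-dependent modulation, shape (..., 1)
        delta = (x @ self.A.T) @ self.B.T   # low-rank update B A x
        return self.base(x) + g * self.scaling * delta

    def l1_penalty(self) -> torch.Tensor:
        # L1 norm of A and B; adding this to the loss drives entries toward zero
        return self.A.abs().sum() + self.B.abs().sum()
```

Because the gate ends in a sigmoid, g(x) near 0 leaves the layer effectively identical to the frozen base model, while g(x) near 1 applies the full low-rank correction; this is the input-level asymmetry the figure describes.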
Figure 2. Macro-F1 score vs. LoRA rank r. DS-LoRA’s asymmetric adaptation consistently outperforms standard LoRA’s more symmetric approach across ranks on HateXplain (Llama-3 8B).
Figure 3. Demonstration of the trade-off between performance and parameter sparsity, controlled by the L1 regularization coefficient (λ_L1) on the OLID dataset (Llama-3 8B, r = 16). Top: the Macro-F1 score peaks at λ_L1 = 10⁻⁵. Bottom: at this optimal coefficient, approximately 55% of the LoRA update weights are pruned to zero. This shows that introducing a significant level of parameter-level asymmetry (sparsity) is beneficial, but excessive pruning (over-sparsification) degrades performance.
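For readers who want to see where λ_L1 from Figure 3 would sit in training code, here is a hedged sketch of the combined objective and of one way to measure the reported sparsity. It reuses the hypothetical DSLoRALinear module sketched above; the helper names and the numerical zero threshold are illustrative.

```python
# Sketch of the L1-regularized objective and a sparsity probe (illustrative).
import torch

lam_l1 = 1e-5  # the coefficient at which Macro-F1 peaks in Figure 3

def training_loss(task_loss: torch.Tensor, lora_layers) -> torch.Tensor:
    # Total loss = task loss + lambda_L1 * sum of L1 norms of all LoRA matrices
    return task_loss + lam_l1 * sum(layer.l1_penalty() for layer in lora_layers)

@torch.no_grad()
def lora_sparsity(lora_layers, tol: float = 1e-8) -> float:
    # Fraction of LoRA update weights whose magnitude is numerically zero;
    # Figure 3 reports roughly 0.55 at lam_l1 = 1e-5
    zeros, total = 0, 0
    for layer in lora_layers:
        for m in (layer.A, layer.B):
            zeros += (m.abs() < tol).sum().item()
            total += m.numel()
    return zeros / total
```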
Table 1. Performance comparison on the OLID dataset. DS-LoRA’s asymmetric adaptation consistently yields superior results. The best results for each base model are in bold.
Base Model | Method | Trainable Params | Acc. | P(O) | R(O) | F1(O) | Macro-F1
Llama-3 8B | Zero-Shot Prompting | 0 | 72.3 | 65.8 | 55.2 | 60.1 | 69.5
Llama-3 8B | Full Finetuning | ∼700M | 80.5 | 76.2 | 73.5 | 74.8 | 79.2
Llama-3 8B | Adapters | ∼5.8M | 79.8 | 75.1 | 71.9 | 73.5 | 78.1
Llama-3 8B | Standard LoRA | ∼4.2M | 81.2 | 77.0 | 74.8 | 75.9 | 80.1
Llama-3 8B | QLoRA (r = 16) | ∼4.2M | 81.4 | 77.3 | 75.1 | 76.2 | 80.3
Llama-3 8B | AdaLoRA (r = 16) | ∼4.3M | 81.9 | 77.8 | 75.9 | 76.8 | 80.7
Llama-3 8B | DoRA (r = 16) | ∼4.4M | 82.2 | 78.1 | 76.5 | 77.3 | 81.0
Llama-3 8B | DS-LoRA | ∼4.5M | 82.5 | 78.3 | 77.2 | 77.7 | 81.3
Gemma 7B | Zero-Shot Prompting | 0 | 71.5 | 64.5 | 53.8 | 58.7 | 68.8
Gemma 7B | Full Finetuning | ∼650M | 79.6 | 75.0 | 72.1 | 73.5 | 78.3
Gemma 7B | Adapters | ∼5.2M | 78.9 | 74.2 | 70.5 | 72.3 | 77.2
Gemma 7B | Standard LoRA | ∼3.9M | 80.4 | 76.1 | 73.5 | 74.8 | 79.2
Gemma 7B | QLoRA (r = 16) | ∼3.9M | 80.7 | 76.4 | 74.0 | 75.2 | 79.5
Gemma 7B | AdaLoRA (r = 16) | ∼4.0M | 81.1 | 76.9 | 74.7 | 75.8 | 79.8
Gemma 7B | DoRA (r = 16) | ∼4.1M | 81.5 | 77.3 | 75.4 | 76.3 | 80.2
Gemma 7B | DS-LoRA | ∼4.1M | 81.8 | 77.5 | 76.0 | 76.7 | 80.5
Table 2. Performance comparison on the HateXplain dataset, showcasing the advantage of DS-LoRA for finer-grained distinctions. The best results for each base model are in bold.
Base Model | Method | Trainable Params | Acc. | P(Macro) | R(Macro) | F1(H) | F1(Off) | Macro-F1
Llama-3 8B | Zero-Shot Prompting | 0 | 60.2 | 58.1 | 55.3 | 45.1 | 50.3 | 56.5
Llama-3 8B | Full Finetuning (Top 2 layers) | ∼700M | 70.5 | 69.2 | 68.0 | 62.3 | 65.8 | 68.8
Llama-3 8B | Adapters (d_bottleneck = 64) | ∼5.8M | 69.3 | 67.8 | 66.5 | 60.1 | 63.2 | 67.0
Llama-3 8B | Standard LoRA (r = 16) | ∼4.2M | 71.8 | 70.5 | 69.9 | 64.0 | 67.5 | 70.2
Llama-3 8B | QLoRA (r = 16) | ∼4.2M | 72.0 | 70.7 | 70.1 | 64.4 | 67.9 | 70.4
Llama-3 8B | AdaLoRA (r = 16) | ∼4.3M | 72.6 | 71.4 | 70.8 | 65.2 | 68.9 | 71.1
Llama-3 8B | DoRA (r = 16) | ∼4.4M | 73.0 | 71.8 | 71.2 | 65.8 | 69.5 | 71.5
Llama-3 8B | DS-LoRA (ours, r = 16) | ∼4.5M | 73.5 | 72.3 | 71.8 | 66.5 | 70.1 | 72.0
Gemma 7B | Zero-Shot Prompting | 0 | 59.1 | 57.0 | 54.1 | 43.8 | 49.0 | 55.2
Gemma 7B | Full Finetuning (Top 2 layers) | ∼650M | 69.2 | 68.0 | 66.8 | 60.9 | 64.1 | 67.5
Gemma 7B | Adapters (d_bottleneck = 64) | ∼5.2M | 68.1 | 66.5 | 65.2 | 58.8 | 61.9 | 65.7
Gemma 7B | Standard LoRA (r = 16) | ∼3.9M | 70.6 | 69.1 | 68.5 | 62.5 | 66.0 | 68.8
Gemma 7B | QLoRA (r = 16) | ∼3.9M | 70.9 | 69.5 | 68.8 | 62.9 | 66.4 | 69.1
Gemma 7B | AdaLoRA (r = 16) | ∼4.0M | 71.4 | 70.1 | 69.3 | 63.7 | 67.2 | 69.5
Gemma 7B | DoRA (r = 16) | ∼4.1M | 71.8 | 70.6 | 69.9 | 64.4 | 67.9 | 70.0
Gemma 7B | DS-LoRA (ours, r = 16) | ∼4.1M | 72.3 | 71.0 | 70.2 | 65.1 | 68.7 | 70.5
Table 3. Ablation study of DS-LoRA’s asymmetric components on OLID (Llama-3 8B, r = 16). Bolded values indicate the best results among the method configurations.
Method Configuration | F1(O) | Macro-F1
Standard LoRA (Baseline, more symmetric) | 75.9 | 80.1
LoRA + Gate (Input-level asymmetry) | 77.1 | 80.9
LoRA + L1 Sparsity (Parameter-level asymmetry) | 76.5 | 80.4
DS-LoRA (Combined asymmetries) | 77.7 | 81.3
Table 4. Ablation study of the gate controller architecture on OLID (Llama-3 8B, r = 16). The two-layer MLP offers a slight advantage over a simple linear gate. Bolded values indicate the best results among the method configurations.
Gate Architecture | Added Params | F1(O) | Macro-F1
Standard LoRA (No Gate) | 0 | 75.9 | 80.1
Linear Gate (d_gate = 0) | ∼0.1M | 77.0 | 80.8
MLP (d_gate = 16) | ∼0.3M | 77.7 | 81.3
MLP (d_gate = 32) | ∼0.6M | 77.6 | 81.2
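The two gate variants in Table 4 differ only in whether a hidden layer precedes the sigmoid output. A short sketch of how both could be constructed (the factory function make_gate and its defaults are our own naming, not from the paper):

```python
# Illustrative construction of the gate controllers compared in Table 4.
import torch.nn as nn

def make_gate(d_in: int, d_gate: int = 16) -> nn.Module:
    if d_gate == 0:
        # Linear gate (d_gate = 0): a single projection straight to the scalar g(x)
        return nn.Sequential(nn.Linear(d_in, 1), nn.Sigmoid())
    # Two-layer MLP gate: marginally more parameters, best Macro-F1 at d_gate = 16
    return nn.Sequential(
        nn.Linear(d_in, d_gate), nn.ReLU(),
        nn.Linear(d_gate, 1), nn.Sigmoid(),
    )
```

Table 4 suggests the extra nonlinearity is worth its ∼0.2M additional parameters at d_gate = 16, while doubling the width to 32 brings no further gain.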
Table 5. Average gate activation values (g(x) ± std. dev.) demonstrating input-level asymmetric adaptation per layer group and input category on HateXplain (Llama-3 8B, DS-LoRA r = 16). Values range from 0 (LoRA path fully closed, more symmetric to the base LLM) to 1 (LoRA path fully open, more asymmetric adaptation). Bolded values indicate the highest activations.
Layer Group | Normal | Offensive (Non-Hate) | Hate Speech
Early Layers (1–10) | 0.35 ± 0.08 | 0.45 ± 0.10 | 0.52 ± 0.11
Middle Layers (11–21) | 0.28 ± 0.06 | 0.58 ± 0.12 | 0.68 ± 0.14
Late Layers (22–32) | 0.31 ± 0.07 | 0.50 ± 0.09 | 0.61 ± 0.13
Overall Model Average | 0.31 ± 0.05 | 0.51 ± 0.08 | 0.60 ± 0.10
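The statistics in Table 5 could, in principle, be gathered by logging each layer's gate value during evaluation and averaging by layer group and gold category. The sketch below illustrates that aggregation; the record format and helper names are assumptions, not the authors' tooling.

```python
# Illustrative aggregation of gate activations into the groups of Table 5.
from collections import defaultdict
import numpy as np

def layer_group(layer_idx: int) -> str:
    # Group boundaries follow Table 5 for a 32-layer model
    if layer_idx <= 10:
        return "Early Layers (1-10)"
    if layer_idx <= 21:
        return "Middle Layers (11-21)"
    return "Late Layers (22-32)"

def summarize_gates(records):
    # records: (layer_idx, gold_category, gate_value) triples collected with
    # forward hooks while running the evaluation set through the model
    buckets = defaultdict(list)
    for layer_idx, category, g in records:
        buckets[(layer_group(layer_idx), category)].append(g)
    return {k: (float(np.mean(v)), float(np.std(v))) for k, v in buckets.items()}
```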
Table 6. Generalization performance comparison on cross-domain (Wikipedia Toxicity) and cross-lingual (GermEval 2018) datasets. DS-LoRA’s asymmetric adaptation demonstrates superior performance in both settings. Bolded values indicate the best results.
Base model: Llama-3 8B Instruct.
Dataset | Method | Acc. | P(Off.) | R(Off.) | F1(Off.)
Wikipedia Toxicity | Standard LoRA (r = 16) | 89.2 | 85.1 | 84.5 | 84.8
Wikipedia Toxicity | DS-LoRA (ours, r = 16) | 90.5 | 86.3 | 85.9 | 86.1
GermEval 2018 (DE) | Standard LoRA (r = 16) | 82.1 | 79.5 | 78.2 | 78.8
GermEval 2018 (DE) | DS-LoRA (ours, r = 16) | 83.4 | 80.8 | 80.1 | 80.4
Table 7. Computational cost comparison on the OLID dataset using Llama-3 8B. Costs are benchmarked against standard LoRA. Bolded values indicate the best results.
Method | Training Time/Epoch (min) | Peak GPU Memory (GB) | Inference Latency (ms/sample)
Standard LoRA (r = 16) | 25.1 | 18.2 | 85
DoRA (r = 16) | 26.5 (+5.6%) | 18.4 (+1.1%) | 89 (+4.7%)
DS-LoRA (ours, r = 16) | 28.2 (+12.4%) | 18.6 (+2.2%) | 93 (+9.4%)
Table 8. Qualitative examples from the HateXplain dataset. We show cases where the base model and standard LoRA fail, but DS-LoRA correctly identifies the offensive nature of the text by applying a stronger, asymmetric adaptation, as indicated by the high average gate activation score.
Example Text | True Label | Base LLM | Std. LoRA | DS-LoRA | DS-LoRA ḡ(x)
“So you agree? You think you’re just an animal? Good boy.” | Offensive | Normal | Normal | Offensive | 0.68
Analysis: This is a classic case of dehumanizing sarcasm. The literal words “good boy” are positive, confusing models that rely on surface-level cues. DS-LoRA’s high gate activation (0.68) shows it recognized the need for deep semantic analysis, engaging its finetuned knowledge to understand the derogatory context.
“Another day, another group of ’undocumented shoppers’ cleaning out the local convenience store.” | Hate Speech | Offensive | Offensive | Hate Speech | 0.75
Analysis: The phrase ’undocumented shoppers’ is a coded, dehumanizing slur for immigrants. While standard LoRA correctly identifies it as offensive, it fails to capture the severity required for the “Hate Speech” label. DS-LoRA applies an even stronger adaptation (gate value of 0.75), allowing it to make the finer-grained distinction.
“I’m not surprised she got the promotion. It always helps to be the ’diversity hire’.” | Offensive | Normal | Normal | Offensive | 0.62
Analysis: This is a subtle microaggression that dismisses someone’s achievements by attributing them to affirmative action. The offensive nature is entirely contextual and implied. Standard LoRA misses this nuance, while DS-LoRA’s gates activate strongly, indicating the model recognized the subtext and applied a necessary corrective adaptation.
“Great, another article about feminism. I can already hear the shrieking.” | Offensive | Normal | Normal | Offensive | 0.59
Analysis: This text uses dismissive and stereotypical language (“shrieking”) to attack a protected group’s ideology. The base model and standard LoRA likely see the topic “feminism” and the seemingly neutral phrase “I can hear” as benign. DS-LoRA correctly identifies the derogatory association, applying a significant adaptation to classify it as offensive.