Deep Learning Short Text Sentiment Analysis Based on Improved Particle Swarm Optimization

Abstract: Manually tuning the hyperparameters of a deep learning model is not only a time-consuming and labor-intensive process, but it can also easily lead to issues such as overfitting or underfitting, preventing the model from converging fully. To address this challenge, we present a BiLSTM-TCSA model (BiLSTM combined with TextCNN and Self-Attention) for deep learning-based sentiment analysis of short texts, utilizing an improved particle swarm optimization (IPSO). This approach mimics the global random search behavior observed in bird foraging, allowing for adaptive optimization of model hyperparameters. In this methodology, an initial step involves employing a Generative Adversarial Network (GAN) mechanism to generate a substantial corpus of perturbed text, augmenting the model's resilience to disturbances. Subsequently, global semantic insights are extracted through Bidirectional Long Short-Term Memory (BiLSTM) processing. Leveraging Convolutional Neural Networks for Text (TextCNN) with diverse convolution kernel sizes enables the extraction of localized features, which are then concatenated to construct multi-scale feature vectors. Concluding the process, feature vector refinement and the classification task are accomplished through the integration of Self-Attention and Softmax layers. Empirical results underscore the effectiveness of the proposed approach in sentiment analysis tasks involving succinct texts containing limited information. Across four distinct datasets, our method attains accuracy rates of 91.38%, 91.74%, 85.49%, and 94.59%, respectively. This performance constitutes a notable advancement compared with conventional deep learning models and baseline approaches.


Introduction
Text sentiment analysis typically involves the exploration and examination of the emotional inclination present within textual commentaries. This encompasses the extraction of subjectively imbued elements such as viewpoints, stances, attitudes, and opinions, followed by an assessment of the emotional underpinnings of those elements. The proliferation of the internet and the rapid evolution of social media platforms and online comment forums have propelled text sentiment analysis into a realm of scholarly inquiry that has commanded considerable focus in recent years.
Currently, proposed methods for text sentiment analysis can be broadly categorized into two main groups. The first category entails machine learning approaches, encompassing techniques such as support vector machines [1], K-nearest neighbor classification algorithms [2], naive Bayes [3], and Latent Dirichlet Allocation [4]. While these algorithms exhibit commendable performance, they predominantly hinge upon manually extracted text features, leading to inefficiencies throughout the entire process. The second approach centers around deep learning, a prevailing paradigm within the realm of natural language processing. Prominent examples encompass Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). The former, CNN, adeptly discerns local features from textual matrices through convolutional and pooling operations. Notably, Kim [5] introduced a CNN-based sentence classification model that yields promising classification results across multiple public datasets. However, it struggles to effectively grasp global context features. Recognizing the shortcomings in obtaining comprehensive textual semantics and contextual insights, several works [6][7][8] increase the number of layers to enlarge the receptive field of the convolutional kernel, but the effect is not significant. For this reason, many researchers began to change the structure of the convolution and pooling layers from a horizontal perspective. Guo [9] constructed an augmented CNN model, harnessing skip convolution and K-max pooling operations, thereby refining the extraction of features from concise texts. Additionally, Wang [10] fortified the TextCNN architecture through two distinct avenues: the creation of N-gram discontinuous sliding windows and K-max average pooling. Despite these advancements, owing to the sparse nature of textual content and the inherent limitations in capturing temporal dimensions, even these enhanced models grapple with inadequacies in fully acquiring the nuances of text semantics.
RNN possesses the capability to extract global information to a certain extent. Irsoy [11] harnessed RNN for extracting sequence features in sentiment analysis tasks. Similarly, Xin [12] employed a specialized RNN structure to capture text features for subsequent analysis. However, due to its inherent limitations in "memory" capacity, RNN encounters issues such as gradient explosion or vanishing gradients when text length exceeds a certain range, thereby impacting experimental outcomes. In response to this challenge, Hochreiter and Gers [13,14] introduced the LSTM network, and in subsequent advancements, Cho [15] amalgamated the input and forget gates into update gates, culminating in the development of Gated Recurrent Units (GRUs). This not only effectively resolves the gradient vanishing or explosion problem but also refines the internal processing units, enabling more adept capture of context information. Consequently, GRUs outperform RNNs on numerous tasks. Tai [16] introduced textual topological structure information into LSTM models, constructing LSTM models based on both dependency trees and phrase structure trees. This novel approach modifies the network structure to make gate vectors and internal state updates dependent on the states of multiple relevant subunits, effectively amalgamating syntactic features such as dependency relationships and phrase composition in short texts. This results in more accurate semantic representations for short texts. Zhao's [17] proposed sequential hybrid model initially employs BiLSTM to capture the global features of the text and subsequently utilizes CNN to extract local semantic features and perform classification. This approach mitigates the loss of positional information and contributes to enhanced results.
In recent years, attention mechanisms [18] have been introduced to assign significant attention weights to vectors, highlighting key information. The integration of attention mechanisms enhances feature expression, resulting in models with higher accuracy that intuitively emphasize the importance of different words. Yang [19] combined a hierarchical attention network with Bidirectional Gated Recurrent Units (Bi-GRU) for English short text classification. This approach segments text into three hierarchical levels (words, sentences, and documents), introducing an attention mechanism between each level to assign varying weights to different words and sentences and progressively select crucial information. Building upon Yang's work, Zhou [20] proposed a hybrid attention network that combines both word-level and character-level information for classifying Chinese short texts. By merging the attention mechanism with BiLSTM, this model captures both word and character embeddings, thus extracting the most pivotal word and character information. Cheng Yan's model [21] integrates a multi-channel CNN with Bi-GRU. This combination exploits the complementarity of local features and contextual information, with the incorporation of an attention mechanism further identifying pivotal information within the text.
While the aforementioned methods have yielded promising results, the configuration of hyperparameters significantly influences the performance of deep learning predictive models. Relying on manual parameter tuning based on empirical knowledge introduces considerable errors, thereby limiting the model's full potential. To address this issue, Feurer [22] introduced an advanced interface capable of automatically assessing multiple preprocessing and model fitting pipelines. They delved into the concept of achieving automated machine learning (AutoML) through meta-learning, with the aim of streamlining the tedious manual tuning process involved in hyperparameter optimization for deep learning models. Wang [23] proposed an efficient and robust method for automated hyperparameter tuning using Gaussian processes. This approach effectively handles various hyperparameter combinations and employs Gaussian process regression to predict model performance. In contrast, Jeremy [24] breaks away from tradition through a method known as Automatic Gradient Descent (AGD), which allows deep learning models to be trained without the need for hyperparameters.
This paper presents an innovative approach that integrates an IPSO into a deep learning-based sentiment analysis model for text. Leveraging BiLSTM, TextCNN, and the Attention Mechanism, the model adeptly extracts profound semantic information from text. Furthermore, it introduces a Generative Adversarial Network (GAN) mechanism to generate perturbed texts for training and combines this with the IPSO to optimize the model's hyperparameters effectively. Experimental results further underscore the efficacy and feasibility of this approach.

Related Works

IPSO
In general, deep learning models often have numerous hyperparameters, and their search space can be extensive. When employing manual search or grid search techniques, the process can become impractical due to the presence of continuous hyperparameter variables; this not only makes the search cumbersome but also incurs high computational time costs. Assuming that a three-dimensional function 'Z = g(x, y)' needs to find its optimal integer solution, with the search range of 'x' denoted as 'm' and the search range of 'y' denoted as 'n', the scope of grid search grows quadratically as the search ranges 'm' and 'n' continue to expand. In Figure 1, the horizontal axes represent the expansion of the search domains 'm' and 'n', while the vertical axis represents the number of parameter tuning iterations; redder areas indicate a higher number of required iterations. To address the challenge of manually and exhaustively searching for optimization solutions, various specialized algorithms have been proposed. These include the Pathfinder Algorithm (PA) [25], Multi-Verse Optimizer (MVO) [26], Moth Flame Optimizer (MFO) [27], and Arithmetic Optimization Algorithm (AOA) [28].
The PA algorithm draws inspiration from the behavior of ants searching for the shortest path. It employs communication based on pheromones to guide the search for the optimal path in the problem space. However, its applicability is limited when it comes to problems unrelated to pathfinding or routing.
The MVO algorithm is based on the concept of multiple universes, where each universe represents a set of solutions. These universes interact, share information, and compete to find the best solutions, making the algorithm suitable for multimodal optimization problems. However, when a large number of populations are set, the computational resources required increase significantly, which may not be acceptable.
The MFO algorithm is inspired by the behavior of moths being attracted to flames. Moths represent solutions, and they are attracted to better solutions while avoiding the "flames", or undesirable regions. The algorithm iteratively updates solutions to find the best or a near-optimal solution. It is easy to implement and applicable to various optimization problems, but it can easily become stuck in local optima, especially in complex high-dimensional spaces.
The AOA algorithm is a general optimization framework based on arithmetic operations such as addition, subtraction, multiplication, and division. It iteratively improves solutions based on the solutions themselves and has a wide range of applications. However, for complex problems, it may require a substantial amount of computational resources.
In comparison to the algorithms mentioned above, the Particle Swarm Optimization (PSO) algorithm boasts several advantages: it exhibits minimal dependence on initial information, it does not readily become trapped in local minima, its computational complexity is low, and it is relatively straightforward to comprehend. Originally introduced by Kennedy and Eberhart [29], PSO is a global random search algorithm inspired by the foraging behavior of birds. In complex constrained problems, particles represent candidate solutions under multiple optimization constraints, and a particle swarm constitutes a collection of these candidate solutions. The dimensionality of the particles is determined by the number of parameters to be optimized, while the number of particles in the swarm can be adjusted based on factors such as the scale of the optimization problem, the available computational resources, and the optimization objectives.
The algorithm sets the position state 'X_i' and velocity state 'V_i' to represent the current state of particle 'i'. Throughout the iterative process, particles continually update their positions and velocities based on the interplay between the individual and global best values of the current state. Upon locating the present global and individual optimal solutions, particle information is iteratively updated according to the following Equations (1)-(4) [30,31]:

$$X_i = (x_{i1}, x_{i2}, \ldots, x_{in}) \quad (1)$$

$$V_i = (v_{i1}, v_{i2}, \ldots, v_{in}) \quad (2)$$

$$V_i^{t+1} = w V_i^{t} + c_1 \, rand_1 \, (pbest_i - X_i^{t}) + c_2 \, rand_2 \, (gbest - X_i^{t}) \quad (3)$$

$$X_i^{t+1} = X_i^{t} + V_i^{t+1} \quad (4)$$

Among them, 'X_i' and 'V_i' represent the collection of positions and velocities of particle 'i' in each dimension of the current n-dimensional space. The most crucial step in the particle swarm optimization algorithm lies in Equation (3), which updates the velocity information of the current particle in a specific dimension. Here, 'w' denotes the weight of the current velocity, 'c_1' and 'c_2' represent learning factors, 'rand_1' and 'rand_2' are random numbers in (0, 1), and 'pbest' and 'gbest' stand for the individual best value found by the current particle and the global best value found by all particles, respectively. Therefore, in Equation (3), the updated velocity 'V^{t+1}' depends not only on the current velocity 'V^{t}' but also on the individual best 'pbest' of the particle and the global best solution 'gbest' found by all particles. Finally, the current position information of the particle is updated using Equation (4).
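As a concrete illustration of how Equations (3) and (4) translate into code, the following is a minimal NumPy sketch of a single PSO iteration for a minimization problem; the array shapes, the fixed parameter defaults, and the helper name `pso_step` are illustrative assumptions rather than details taken from this work.

```python
import numpy as np

def pso_step(X, V, pbest, pbest_val, gbest, fitness, w=0.8, c1=2.0, c2=2.0):
    """One PSO iteration following Equations (3) and (4), for minimization.
    X, V: (n_particles, dim) arrays of positions and velocities."""
    n, d = X.shape
    r1, r2 = np.random.rand(n, d), np.random.rand(n, d)
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)   # Eq. (3): velocity update
    X = X + V                                                   # Eq. (4): position update
    vals = np.apply_along_axis(fitness, 1, X)
    improved = vals < pbest_val                                 # refresh individual bests
    pbest[improved], pbest_val[improved] = X[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)]                         # global best across the swarm
    return X, V, pbest, pbest_val, gbest
```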
To validate the superior performance of the PSO algorithm, this paper compares the PSO algorithm with the PA, MVO, MFO, and AOA algorithms using the CEC benchmark test function set. The mathematical formulas for the CEC standard benchmark functions are presented in Table 1, and the results of the various algorithms on these functions are presented in Table 2.

While PSO exhibits notable performance in optimization compared to other algorithms, this paper aims to further enhance its performance. Specifically, we propose improvements to the inertia weight 'w' and the learning factors 'c_1' and 'c_2' within the PSO.

Improvement of the Inertia Weight 'w'
The inertia weight refers to the extent to which the current velocity is influenced by the previous generation's velocity. It is directly proportional to the global search capability and inversely proportional to the local search capability. During the early iterations, the particle's optimal state lies in possessing a strong global search capability, enabling it to explore new regions and discover potential possibilities. However, as the iterations progress, there arises a need to enhance the particle's local search capability to ensure smooth convergence. Therefore, a novel approach is introduced to dynamically adjust the inertia weight 'w' using a nonlinear function, as given in Equation (5). The motivation for using '(4/π) × arctan(t/t_max)' is twofold: it exhibits the characteristics of a convex function and can effectively adapt to the iterative process of the weight 'w' operation. In the early stages of iteration, although individual diversity is required to explore unknown domains, the portion controlled by the inertia weight does not dominate. Therefore, it is essential to apply a certain degree of decay to 'w' in the early stages, maintaining a relatively high level of acceleration in the 'w' decay. However, as the iteration progresses, in order to expedite particle convergence to the global optimum, it becomes necessary to smoothly reduce 'w' to its minimum value. Consequently, in the later stages, the acceleration of the 'w' decay should be kept at a lower level. Assuming 'w_max = 0.8' and 'w_min = 0.2', the variation of the 'w' decay acceleration during the iteration process is illustrated in Figure 2.


Improvement of Learning Factors
The learning factors 'c_1' and 'c_2' respectively determine the extent to which particles are influenced by individual information and group information. When 'c_1' is set to 0, the particle's state is entirely governed by global information, resulting in reduced diversity within the group and making it difficult for particles to escape local optima. Conversely, when 'c_2' is set to 0, the particle's state relies solely on individual information, displaying a self-centric behavior that disregards the contributions of the group, which can lead to a slower convergence rate of the algorithm. Hence, a nonlinear function is introduced to dynamically adjust the learning factors, following Equations (6) and (7), where 'c_1s' and 'c_1e' represent the initial and final values of 'c_1' (with 'c_1s' greater than 'c_1e'), and 'c_2s' and 'c_2e' denote the initial and final values of 'c_2' (with 'c_2s' less than 'c_2e'). From the perspective of 'c_1' during the iteration process, it is necessary to decay 'c_1' slowly using the concave function characteristic of '(t/t_max)^3'. This is because, in the early stages of iteration, particles need to possess sufficient individual skill to explore unknown territories; therefore, the rate of 'c_1' decay in the early stages should be slow and its gradient small. However, in the later stages, when particles no longer need to explore unknown areas because their independent characteristics are suppressed or even cleared, the acceleration of the 'c_1' decay should increase. The variation of 'c_1' during iterations is illustrated in Figure 3a. On the contrary, for 'c_2', in the early stages of iteration it is necessary to suppress the global characteristics of particles, allowing them to move with their own "self-will"; therefore, only a slight increase in 'c_2' is required in the early stages. As the iterations progress to the later stages, the global optimum must exert a significant "attraction" on the particles, so 'c_2' should undergo a substantial increment. The concave function characteristic of '(t/t_max)^3' can meet the requirements for the incremental changes of 'c_2' during the iteration. Based on this description, the variation of 'c_2' is shown in Figure 3b.
The specific computation process of the IPSO algorithm is illustrated in Figure 4. The algorithm consists of the following six steps:
(1) Randomly generate an initial batch of particles (along with initialized particle velocities and positions) to constitute the current population.
(2) Calculate the fitness of each particle.
(3) Update the individual best and global best solutions for each particle based on their fitness function values. The update equations are listed as Equation (8), where 'p_i' represents the fitness value of the particle at the current iteration 'i', 'g_i' represents the global best solution at iteration 'i', and 'n' is the maximum number of iterations.
(4) Update the velocity and position of each particle according to Equations (3) and (4).
(5) Calculate the fitness of each particle again.
(6) Check whether the iteration is complete. If it is, output the current best solution; otherwise, return to step (3).
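Putting the six steps together, a minimal IPSO loop might look as follows. It reuses the `inertia_weight` and `learning_factors` sketches above, and the swarm size, search bounds, and velocity initialization are illustrative defaults rather than the settings used in the experiments.

```python
import numpy as np

def ipso(fitness, dim, n_particles=20, t_max=200, bounds=(-5.0, 5.0)):
    """Minimal IPSO loop following steps (1)-(6), for minimization problems."""
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (n_particles, dim))            # step (1): init positions
    V = np.random.uniform(-1.0, 1.0, (n_particles, dim))         #           and velocities
    pbest = X.copy()
    pbest_val = np.apply_along_axis(fitness, 1, X)                # step (2): initial fitness
    gbest = pbest[np.argmin(pbest_val)]
    for t in range(t_max):
        w = inertia_weight(t, t_max)                              # nonlinear schedules
        c1, c2 = learning_factors(t, t_max)
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # step (4), Eq. (3)
        X = np.clip(X + V, lo, hi)                                  # step (4), Eq. (4)
        vals = np.apply_along_axis(fitness, 1, X)                   # step (5)
        improved = vals < pbest_val                                  # step (3): update bests
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()                                    # step (6): final output
```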

Performance Analysis of the IPSO
To evaluate the performance of the IPSO, this study utilizes three CEC standard benchmark functions to demonstrate the algorithm's improvements, namely 'holder_table', 'easom', and 'rastrigin', with their expressions shown in Equations (10)-(12). Since it involves finding the global optimum of multivariate functions, the fitness function in the IPSO is the test function itself.

Figure 5 illustrates the three-dimensional plots of the test functions 'holder_table', 'easom', and 'rastrigin'. During testing, a particle swarm size of 20 and a particle dimensionality of 2 were configured, with a maximum iteration count of 200. The parameters were set as 'w_max' = 0.8 and 'w_min' = 0.4. A comparative analysis was conducted on the evolution of the fitness curves before and after the improvement, as depicted in Figure 6. From the comparison in these figures, it can be observed that the performance of PSO is not optimal, and it exhibits certain shortcomings compared to IPSO. Comparing Figure 6a with Figure 6b, as well as Figure 6c with Figure 6d, it can be seen that in the early iterations, PSO lacks sufficient individual diversity, leading to insufficient capability to escape local optima and causing it to become trapped in local optima briefly and frequently. By examining Figure 6e,f, it becomes evident that the global characteristics of particles in PSO do not transition with the iterations, resulting in unstable update speeds. In contrast, particles in IPSO exhibit significant individual diversity in the early stages, allowing them to thoroughly explore the unknown space and converge to the global optimum quickly in the initial iterations. Furthermore, as the iterations progress, the particles in IPSO quickly stabilize.
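For reference, the three benchmark functions can be coded from their commonly used textbook definitions, which is a convenient way to reproduce this comparison; the definitions below are the standard forms, and the commented call at the end is only an illustrative usage of the `ipso` sketch above.

```python
import numpy as np

def holder_table(p):
    x, y = p
    return -abs(np.sin(x) * np.cos(y) * np.exp(abs(1 - np.sqrt(x**2 + y**2) / np.pi)))

def easom(p):
    x, y = p
    return -np.cos(x) * np.cos(y) * np.exp(-((x - np.pi)**2 + (y - np.pi)**2))

def rastrigin(p):
    p = np.asarray(p)
    return np.sum(p**2 - 10 * np.cos(2 * np.pi * p) + 10)

# Example: minimize Rastrigin in 2D with the IPSO sketch above.
# best_x, best_val = ipso(rastrigin, dim=2, n_particles=20, t_max=200)
```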

BiLSTM
In the LSTM network, although the model circumvents the issues of vanishing or exploding gradients seen in RNN, its unidirectional nature restricts it to utilizing information from the 'preamble', leaving it unaware of the 'postamble'. In practical scenarios, predictive outcomes often necessitate access to information from the entire input sequence. Consequently, the prevalent approach involves the application of BiLSTM [32]. Building upon the foundation of LSTM, BiLSTM integrates input sequence information from both the forward and backward directions. For the output at time 't', the forward LSTM layer incorporates information up to and including time 't', while the backward LSTM layer incorporates information from 't' onwards in the input sequence. The outputs from both forward and backward layers are then averaged to obtain the BiLSTM layer's output. The architecture of the BiLSTM network is illustrated in Figure 7.

The gating mechanisms within the hidden layers of the LSTM structure determine the transmission of information and have the capacity to learn critical dependencies for the current information. The forget gate governs the discarding of information that is deemed unimportant for classification, the input gate determines which information needs to be updated, and the output gate determines the information to be output. Let the values of the forget gate, input gate, and output gate at time 't' be denoted as 'f_t', 'i_t', and 'o_t', respectively; the gate activation function 'δ' is the sigmoid function. The forget gate update is represented in Equation (13):

$$f_t = \delta(W_f \cdot [h_{t-1}, x_t] + b_f) \quad (13)$$

The input gate update is computed using Equation (14):

$$i_t = \delta(W_i \cdot [h_{t-1}, x_t] + b_i) \quad (14)$$

The output gate update is given by Equation (15):

$$o_t = \delta(W_o \cdot [h_{t-1}, x_t] + b_o) \quad (15)$$

The cell state is updated according to Equation (16):

$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c \cdot [h_{t-1}, x_t] + b_c) \quad (16)$$

The hidden state at time 't' is updated by Equation (17):

$$h_t = o_t \odot \tanh(c_t) \quad (17)$$

Here, the BiLSTM output 'h^(t)' is composed of the forward and backward hidden states, 'W_f', 'W_i', 'W_o', and 'W_c' represent the weight matrices of the forget gate, input gate, output gate, and updated cell state, respectively, and 'b_f', 'b_i', 'b_o', and 'b_c' are the corresponding bias terms. The LSTM cell structure is illustrated in Figure 8.
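In the Keras framework used later in this paper, a stacked bidirectional encoder of this kind can be sketched as follows; the vocabulary size, sequence length, and hidden dimension are placeholder values rather than the tuned hyperparameters reported in this work.

```python
from tensorflow.keras import layers, models

def build_bilstm_encoder(vocab_size=20000, embed_dim=100, seq_len=100, hidden_dim=128):
    """Two stacked BiLSTM layers over word embeddings; forward and backward
    outputs are averaged (merge_mode='ave') and the full sequence of hidden
    states is returned for the downstream convolution and attention stages."""
    inputs = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    x = layers.Bidirectional(layers.LSTM(hidden_dim, return_sequences=True),
                             merge_mode="ave")(x)
    x = layers.Bidirectional(layers.LSTM(hidden_dim, return_sequences=True),
                             merge_mode="ave")(x)
    return models.Model(inputs, x)
```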

TextCNN
The TextCNN model proposed by Kim [5] consists of one-dimensional convolutional layers and one-dimensional max-pooling layers. Its input is a text matrix composed of word vectors, and the width of the convolutional kernels is the same as the width of the text matrix; in other words, the width of the convolutional kernels is equal to the dimension of the word vectors, and the kernels only move vertically. The model structure is illustrated in Figure 9. After convolving a text vector with a kernel, the resulting output is a vector with a shape of (sentence length − kernel height + 1, 1). The calculation of the convolution operation is shown in Equation (18):

$$c = f(w_i \ast M + b) \quad (18)$$
In the equation, 'c' represents the feature vector obtained from the convolutional kernel output, 'f' is the activation function of the convolutional layer, 'b' stands for the bias term, 'w_i' denotes convolutional kernels of different sizes, and 'M' represents the text matrix.
Due to the varying sizes assigned to the convolutional kernels 'w_i', they can extract text features at different scales. Therefore, the text features 'k' preserved after passing through the pooling layer represent the multi-scale characteristics of the text. The calculation of the pooling operation is demonstrated in Equation (19):

$$k = \max\{c\} \quad (19)$$
Subsequently, the obtained features 'k' from each operation are concatenated to form a text feature vector, which serves as the input for the subsequent stage of operations.
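A corresponding Keras sketch of the multi-scale convolution, pooling, and concatenation described by Equations (18) and (19) is given below; the filter count is a placeholder and `textcnn_block` is simply an illustrative helper name.

```python
from tensorflow.keras import layers

def textcnn_block(x, num_filters=64, kernel_sizes=(3, 4, 5)):
    """Multi-scale 1D convolutions over a (seq_len, embed_dim) tensor,
    max-pooled over time and concatenated into one feature vector."""
    pooled = []
    for k in kernel_sizes:
        c = layers.Conv1D(num_filters, k, activation="relu")(x)   # Eq. (18)
        pooled.append(layers.GlobalMaxPooling1D()(c))             # Eq. (19)
    return layers.Concatenate()(pooled)
```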

Self-Attention
The design of the Self-Attention [18] mechanism is inspired by human visual processing, where a preview of the global information is taken to gauge the attention towards local details. This approach allows for a better focus on key information while selectively suppressing irrelevant data. The structure of the mechanism is depicted in Figure 10. Self-Attention is fundamentally a mapping relationship composed of 'Query', 'Key', and 'Value' components for each piece of local information. Its primary computation process can be summarized in the following steps:
Step 1: Compute the similarity between the 'Query' of a specific piece of local information and each 'Key' to obtain the corresponding weights, as calculated in Equation (20):

$$s_i = f(Query, Key_i) \quad (20)$$

Step 2: Normalize the weight values through the 'Softmax' function to prevent the occurrence of extremely large or small values. This normalization is computed according to Equation (21):

$$a_i = \mathrm{Softmax}(s_i) = \frac{\exp(s_i)}{\sum_j \exp(s_j)} \quad (21)$$

Step 3: Compute the final 'Attention' by performing a weighted sum of 'a_i' with the corresponding 'Value' components, as calculated in Equation (22):

$$\mathrm{Attention} = \sum_i a_i \cdot Value_i \quad (22)$$
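A minimal single-head implementation of these three steps, written as a Keras layer, is sketched below; the learned 'Query'/'Key'/'Value' projections and the dot-product similarity are common choices and are assumed here rather than taken from the original formulation.

```python
import tensorflow as tf
from tensorflow.keras import layers

class SelfAttention(layers.Layer):
    """Single-head self-attention over a (batch, seq_len, dim) tensor,
    following the Query/Key/Value steps of Equations (20)-(22)."""
    def __init__(self, dim):
        super().__init__()
        self.wq = layers.Dense(dim)
        self.wk = layers.Dense(dim)
        self.wv = layers.Dense(dim)

    def call(self, x):
        q, k, v = self.wq(x), self.wk(x), self.wv(x)
        scores = tf.matmul(q, k, transpose_b=True)     # Eq. (20): similarity of Query and Keys
        weights = tf.nn.softmax(scores, axis=-1)       # Eq. (21): normalized weights
        return tf.matmul(weights, v)                   # Eq. (22): weighted sum of Values
```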



Methodology

Noise Injection and Adversarial Training
Given the limited semantic information in short texts coupled with their high dimensionality, overfitting can become a challenge. In the context of deep learning models, two approaches are typically employed to mitigate overfitting and enhance generalization ability:
1. Utilizing dropout: randomly deactivating different weights during training.
2. Leveraging adversarial networks for noise injection: adversarial networks are used to generate noisy data, introducing necessary perturbations.
These strategies collectively aim to prevent overfitting and enhance the model's ability to generalize.
The principle of dropout involves randomly deactivating a subset of parameters during each training iteration. In each training cycle, a random selection of parameters is chosen to be hidden, with their values set to zero. Since different parameters are hidden in each training iteration, it is akin to multiple instances of different sub-networks being combined and stacked. This results in diverse computational outcomes being compounded and outputted, leading to a richer representation of information features. In comparison to the original network, the network at this stage is not only simplified but also captures more stochastic factors, consequently reducing the occurrence of overfitting.
The GAN [33,34] is a generative modeling technique based on differentiable generative networks, rooted in the principles of game theory. Miyato [35] introduced adversarial training and virtual training to semi-supervised text classification tasks, effectively mitigating the occurrence of overfitting. Chen [36] combined the attention mechanism with a multitasking-focused GAN network. Under the attention mechanism, a subset of the original text is allocated for adversarial training. Within the GAN architecture, the generator and discriminator compete with each other to achieve a "Nash equilibrium", as illustrated in Figure 11.
In Equation (21): 'x' represents the real samples input into the network; 'E' represents the expectation of inputting real samples; 'D(·)' signifies the output values of the discriminator's judgment; 'G(·)' signifies the output values of the generator.
To address the limitations of having a single perturbation parameter, fixed normalization, and an inability to accurately control the noise magnitude within the model, a combination of an adversarial training network and dropout is employed. This approach generates a substantial number of noisy samples, enhancing both model performance and robustness, thereby mitigating the risk of overfitting.

Model Architecture
To emphasize the essential information for sentiment analysis within short texts, this paper introduces a novel text sentiment analysis BiLSTM-TCSA model for binary classification of sentences, which integrates word embedding techniques with BiLSTM, TextCNN, and Self-Attention. The proposed model utilizes Global Vectors for Word Representation (GloVe) [37] pre-trained word embeddings derived from a vast corpus to represent text vectors, and further fine-tunes these embeddings on the dataset during the entire training process.
Prior to training, a certain quantity of perturbed texts is generated using a GAN and incorporated into the training dataset. This approach not only enhances the robustness of the model but also effectively addresses overfitting issues. For learning contextual dependencies, a dual-layer BiLSTM mechanism incorporating the ResNet concept is employed to fuse the initially independent word embeddings. This mechanism establishes connections between contextual semantic information, yielding comprehensive global semantic features. For learning local semantic relationships, a TextCNN with convolutional kernels of sizes (3, 4, 5) is employed. The convolutional operations are followed by pooling, resulting in local semantic features that are concatenated. The Self-Attention mechanism is then employed to enhance the sensitivity of sentiment word features while suppressing non-keyword features. This enhances the model's focus on text information and improves classification efficiency.
Regarding the overall model architecture, during the training process, the IPSO is employed to iteratively search for the optimal combination of model hyperparameters. This ensures that the model achieves maximum convergence. The holistic structure of the model is illustrated in Figure 12.
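A compact Keras sketch of how these components could be assembled is given below; the layer sizes, the treatment of each kernel-scale feature as one attention "token", and the reuse of the `SelfAttention` layer sketched earlier are illustrative assumptions, and the residual connections and GAN-based text perturbation described above are omitted for brevity.

```python
from tensorflow.keras import layers, models

def build_bilstm_tcsa(vocab_size=20000, embed_dim=100, seq_len=100,
                      hidden_dim=128, dropout=0.5):
    """Sketch of the BiLSTM-TCSA pipeline: embeddings -> stacked BiLSTM ->
    multi-scale TextCNN -> self-attention -> softmax classifier."""
    inputs = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, embed_dim)(inputs)   # GloVe weights would be loaded here
    x = layers.Bidirectional(layers.LSTM(hidden_dim, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(hidden_dim, return_sequences=True))(x)
    pooled = []
    for k in (3, 4, 5):                                    # multi-scale local features
        c = layers.Conv1D(64, k, activation="relu")(x)
        pooled.append(layers.GlobalMaxPooling1D()(c))
    feats = layers.Concatenate()(pooled)                   # (batch, 3 * 64)
    feats = layers.Reshape((3, 64))(feats)                 # one "token" per kernel size
    att = SelfAttention(64)(feats)                         # refine concatenated features
    att = layers.Flatten()(att)
    att = layers.Dropout(dropout)(att)
    outputs = layers.Dense(2, activation="softmax")(att)   # binary sentiment classes
    return models.Model(inputs, outputs)
```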

Experimental Environment
The experiments were conducted using the open-source deep learning framework Keras (version 2.4.3), implemented in the Python 3.8 programming language on the PyCharm platform. The experimental setup was as follows: the operating system was Windows 10, the CPU was an AMD Ryzen 7 5800H, and the GPU was an NVIDIA GeForce GTX 1650. A summary of the experimental environment information is presented in Table 3.

Datasets
To validate the rationality and effectiveness of the BiLSTM-TCSA model, four widely used public datasets were selected: the IMDB movie review dataset, the MR English movie review dataset, the SST-2 dataset from the Stanford Sentiment Treebank, and the Hotel Reviews dataset containing user reviews from renowned hotels worldwide.
(1) The IMDB: this movie review dataset comprises user reviews about movies from the IMDB website. For the experiment, all 50,000 records from this dataset were obtained.
(2) The MR: this dataset is a binary-class English movie review dataset, containing an equal number of positive and negative sentiment samples, totaling 10,662 instances.
(3) The SST-2: this dataset originates from the Stanford Sentiment Treebank and is designed specifically for binary sentiment classification, containing a total of 70,044 sentiment-labeled comments.
(4) The Hotel Reviews: this dataset, hereinafter referred to as the HR dataset, consists of user-generated polarity sentiment reviews about renowned hotels worldwide, collected by the Whale Community. After eliminating irrelevant comments, this dataset comprises a total of 877,640 valuable entries.
To further enhance the robustness of the model, several sets of texts were sampled from each dataset and fed into the GAN to generate perturbed texts. The number of generated perturbed texts accounted for 1/10 of the dataset's sample size. Subsequently, these perturbed texts were combined with the original dataset. Following the principle of maintaining a balanced proportion of positive and negative samples, a training set to testing set ratio of 7:3 was established, and this combined dataset was used for model training. The relevant statistical information for each dataset is presented in Table 4.

Metrics
In the sentiment analysis experiments, this study employs accuracy, recall, and F1 score as evaluation metrics for the models. The specific calculation formulas are given in Equations (24)-(26):

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \quad (24)$$

$$Recall = \frac{TP}{TP + FN} \quad (25)$$

$$F1 = \frac{2 \times TP}{2 \times TP + FP + FN} \quad (26)$$

where TP (True Positive) represents the number of samples correctly classified as positive sentiment, FP (False Positive) represents the number of samples incorrectly classified as positive sentiment, FN (False Negative) represents the number of samples incorrectly classified as negative sentiment, and TN (True Negative) represents the number of samples correctly classified as negative sentiment.
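The three metrics can be computed directly from the confusion-matrix counts, as in the short helper below; the function name is illustrative.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, recall and F1 from binary confusion-matrix counts, following Eqs. (24)-(26)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, recall, f1
```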

Hyperparameter Optimization Based on IPSO
The parameters for the IPSO were initially set with a particle count of m = 15. The dimension of the optimization hyperparameters was defined as d = 5, and the experiment was conducted over a total of k = 100 iterations.
The optimization of hyperparameters for the network model is elaborated upon below. In this study, the network model is constructed within the Keras framework. It employs a two-layer BiLSTM and utilizes the cross-entropy loss as its loss function; the optimizer of choice is Stochastic Gradient Descent (SGD). The hyperparameters selected for optimization encompass the dropout rate, the BiLSTM hidden layer dimension (hd), the learning rate, the batch size, and the convolutional kernel stride. When fine-tuning deep learning models with the IPSO, the model must be abstracted as a 'model' function, and the fitness function for the deep learning model typically adopts the loss function used by the model; here, the fitness function is set to the cross-entropy loss. The operational procedure is illustrated in Equations (27) and (28). Upon observation, it is apparent that, although the model includes a self-attention mechanism not explicitly listed in the 'model' function, the parameters within this mechanism adapt through backpropagation during iteration. In contrast, the BiLSTM, GAN, and TextCNN components each have their own set of hyperparameters that require optimization. During the iterative process, the designated learning rate and batch size are also chosen as hyperparameters for optimization, as indicated in Equations (29)-(31). The optimal hyperparameter search results obtained through IPSO are presented in Table 5.
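As an illustration of how the model is exposed to the IPSO, the fitness evaluation can be sketched as a function that trains one candidate configuration and returns its validation cross-entropy. Here `build_bilstm_tcsa` refers to the model sketch given earlier, the short three-epoch budget is an assumption made to keep the search affordable, and `x_train`, `y_train`, `x_val`, `y_val` are assumed to be the preprocessed data splits.

```python
import tensorflow as tf

def model_fitness(hparams):
    """IPSO fitness: train one candidate hyperparameter vector briefly and
    return the validation cross-entropy loss (lower is better)."""
    dropout, hidden_dim, learning_rate, batch_size, kernel_stride = hparams
    # kernel_stride is part of the searched vector but is not wired into this
    # simplified constructor sketch.
    model = build_bilstm_tcsa(hidden_dim=int(hidden_dim), dropout=float(dropout))
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=float(learning_rate)),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    history = model.fit(x_train, y_train, batch_size=int(batch_size), epochs=3,
                        validation_data=(x_val, y_val), verbose=0)
    return history.history["val_loss"][-1]

# best_hparams, best_loss = ipso(model_fitness, dim=5, n_particles=15, t_max=100)
```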

Comparative Model Experiments and Results Analysis
To validate the performance of our model and analyze the impact of the GAN mechanism, considering the characteristics of text structure and sentiment analysis tasks, this study conducted comparative experiments on four datasets: IMDB, MR, SST-2, and HR. Our model was compared with classical models such as FastText, TextCNN-MultiChannel, DCNN, RCNN, BiLSTM+EMB_Att, BiLSTM-CNN, MLACNN, and MC-AttCNN-AttBiGRU, as well as with our model without the GAN. The details of the comparative models are as follows:
(1) FastText [38]: an open-source rapid text classification approach from Facebook. Its fundamental idea is to sum the N-gram vectors of the text and average them to obtain the vector representation of the text, which is then fed into a softmax classifier to yield the classification outcome.
(2) TextCNN-MultiChannel [5]: the multi-channel convolutional neural network proposed by Kim. It utilizes convolutional kernels with window sizes of 3, 4, and 5 in the convolutional layers to extract local text features of various scales; these features are concatenated after max-pooling and classified with the Softmax operation.
(3) DCNN [39]: the DCNN employs wide convolutions for feature extraction, allowing it to effectively utilize the edge information present in the text.
(4) RCNN [40]: the RCNN employs a recurrent structure to learn word representations while capturing contextual information as much as possible. Additionally, it incorporates max pooling to automatically identify the pivotal role of words in text classification, thereby capturing critical semantics within the text.
(5) BiLSTM+EMB_Att [41]: this model employs an attention mechanism to directly learn the weight distribution of each word's sentiment tendency from the foundational word vectors. It then utilizes a BiLSTM to capture the semantic information within the text, followed by a fusion process for classification.
(6) BiLSTM-CNN [17]: this model initially employs BiLSTM to extract contextual semantic information, subsequently utilizes multiple convolutional layers to extract local semantics and fuse them, and ultimately performs classification with the Softmax function.
Analysis of the comparative experimental results in Table 6 reveals that, on a macroscopic level, individual deep learning models perform consistently lower across various metrics compared with deep stacked models. This discrepancy can be attributed to the heightened emphasis of deep-layered network models on fully harnessing features, showcasing a more pronounced utilization compared with singular models. At a micro level, the TextCNN-MultiChannel model outperforms FastText across various datasets in terms of accuracy. This is attributed to the fact that the FastText model, relying on text N-gram vectors and simple averaging of word vectors, not only fails to conduct feature extraction but also introduces substantial noise. In contrast, the TextCNN-MultiChannel model employs three different sizes of convolutional kernels for localized feature extraction, resulting in superior features with less noise. Building upon the foundation of TextCNN-MultiChannel, the DCNN model augments the convolutional kernel width to incorporate edge information within the text, achieving the utilization and comprehension of global textual information. As a result, DCNN and TextCNN-MultiChannel exhibit comparable performances in practical
experiments.RCNN focuses not only on temporal continuity but also effectively employs CNN for feature extraction, merging global and local features.This compensates for DCNN's limitations in temporal feature extraction capability, resulting in slightly improved performance compared to DCNN.Both BiLSTM+EMB_Att and BiLSTM-CNN belong to stacked networks, and they exhibit significant advantages in feature extraction and utilization compared to single-network models.BiLSTM+EMB_Att combines BiLSTM with the attention mechanism to effectively extract keywords relevant to sentiment analysis.BiLSTM-CNN, on the other hand, first extracts contextual semantic features and then employs CNN for local convolutional operations on global features.While their approaches differ, both models achieve similar outcomes, with accuracies of 84.49% and 85.75%, 88.84% and 88.57%, 81.32% and 80.60%, and 89.03% and 89.69% on the four datasets, respectively.MLACNN synthesizes the strengths of both BiLSTM+EMB_Att and BiLSTM-CNN.It incorporates TextCNN for local semantic understanding and BiLSTM for global context comprehension, further utilizing attention to appropriately allocate weights to features.In comparison to BiLSTM-CNN, MLACNN showcases a roughly 1% improvement across three metrics.Differentiating itself from MLACNN, MC-AttCNN-AttBiGRU integrates a dual-channel processing mechanism.After connecting word vectors with attention, it leverages TextCNN and BiGRU for parallel processing.Experimental results indicate that MC-AttCNN-AttBiGRU, with its dual-channel processing approach, outperforms MLACNN in terms of accuracy.
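Several of the baselines above (TextCNN-MultiChannel, and later MLACNN and MC-AttCNN-AttBiGRU) rely on the same multi-kernel convolution idea described for model (2). The following is a minimal PyTorch sketch of that idea with window sizes 3, 4, and 5, max-pooling, concatenation, and a Softmax classifier; the vocabulary size, embedding dimension, filter count, and class count are illustrative assumptions rather than values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiKernelTextCNN(nn.Module):
    """Minimal multi-kernel TextCNN: conv windows of 3/4/5 words,
    max-pooling over time, concatenation, then a linear + softmax classifier."""
    def __init__(self, vocab_size=20000, embed_dim=128, num_filters=100,
                 kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # One 1-D convolution per window size, operating over the embedding channels.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, embed_dim, seq_len)
        # Max-pool each feature map over the time dimension, then concatenate the scales.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)            # (batch, num_filters * 3)
        return F.softmax(self.fc(features), dim=1)     # class probabilities

# Usage sketch: a batch of 8 padded sequences of length 50.
probs = MultiKernelTextCNN()(torch.randint(0, 20000, (8, 50)))
```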

Table 6. Comparative experimental results of each model on the IMDB, SST-2, MR, and HR datasets.
In comparison to the aforementioned models, our proposed model not only incorporates a module that processes both global and local features through Self-Attention, but also improves robustness by introducing a GAN-based perturbed-text generation mechanism and integrating the generated perturbed text into training. This augmentation further strengthens the model's fitting capacity and resilience against disturbances. Moreover, by using the IPSO during training, the model's relevant hyperparameters are tuned to their optimal values, fully unleashing its performance potential. In the experimental validation, our model achieves accuracy rates of 91.23%, 91.74%, 85.49%, and 94.59% on the four datasets, respectively, and its Recall and F1 scores also outperform those of the other comparative models.
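To make the composition of the proposed model more concrete, the sketch below assembles the pipeline described above in PyTorch: a BiLSTM for global context, multi-kernel TextCNN convolutions over the BiLSTM outputs for local features, Self-Attention over the resulting multi-scale feature vectors, and a Softmax classifier. The layer sizes, the per-scale max-pooling, and the single-head attention are illustrative assumptions; the GAN-based perturbed-text generation and the IPSO hyperparameter tuning that wrap around this network in the full method are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMTCSASketch(nn.Module):
    """Illustrative BiLSTM -> multi-kernel TextCNN -> Self-Attention -> Softmax stack."""
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=100,
                 num_filters=96, kernel_sizes=(3, 4, 5), num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.convs = nn.ModuleList(
            nn.Conv1d(2 * hidden_dim, num_filters, k) for k in kernel_sizes
        )
        self.attn = nn.MultiheadAttention(num_filters, num_heads=1, batch_first=True)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        emb = self.embedding(token_ids)                  # (batch, seq_len, embed_dim)
        global_feats, _ = self.bilstm(emb)               # global context: (batch, seq_len, 2*hidden_dim)
        x = global_feats.transpose(1, 2)                 # channels-first for Conv1d
        # One max-pooled local feature vector per kernel size -> a short "multi-scale" sequence.
        scales = torch.stack(
            [F.relu(conv(x)).max(dim=2).values for conv in self.convs], dim=1
        )                                                # (batch, n_scales, num_filters)
        refined, _ = self.attn(scales, scales, scales)   # Self-Attention refines the scale vectors
        pooled = refined.mean(dim=1)                     # sentence-level representation
        return F.softmax(self.fc(pooled), dim=1)

probs = BiLSTMTCSASketch()(torch.randint(0, 20000, (4, 40)))
```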

Impact Analysis of GAN
To assess the impact of the GAN on the model's robustness against perturbations, this section uses several of the comparative models from the previous section together with the model presented in this paper; the experimental results are reported in Table 7 to facilitate analysis. The comparative experiments show that the perturbed text generated during training with the GAN mechanism helps the models identify ambiguous text in sentiment analysis, yielding an improvement of roughly 1% in accuracy for every model on the datasets and partially mitigating the interference caused by perturbed text. The GAN mechanism in our proposed model also performs well, producing accuracy improvements of 1.14%, 0.86%, 1.13%, and 1.35% on the four datasets, respectively.

Ablation and Validation Experiments
To validate the effectiveness of the IPSO, we selected BiLSTM, LSTM, TextCNN, and Attention (and their combinations) as control groups and coupled each with the complete IPSO algorithm; a comparative experiment was then conducted against the same models without the IPSO. The optimized hyperparameters chosen for each model are listed in Table 8.

Table 8 reports the optimized hidden layer dimension, learning rate, batch_size, strides, and dropout for each model, including LSTM, LSTM-Attention, BiLSTM-Attention, TextCNN-Attention, and BiLSTM-TCSA (Ours).

After selecting the hyperparameters that need optimization for the models, we conducted ablation experiments to demonstrate the rationale for, and effectiveness of, incorporating the IPSO into hyperparameter selection. The accuracy of the experimental results is shown in Figures 13-16.
From the above bar charts, it is evident that the IPSO has identified suitable hyperparameters for each model, fully unleashing their performance potential. The accuracy of each model across the four datasets improves by roughly 0.5% to 1% on average, with LSTM-TextCNN, BiLSTM-TextCNN, LSTM-Attention, and the proposed model seeing even larger average improvements of 1.18%, 1.05%, 1.13%, and 1.065%, respectively. For our model, BiLSTM-TCSA, comparative experiments were also conducted: across the four datasets, tuning the hyperparameters with the IPSO algorithm outperforms manual tuning in accuracy by 0.89%, 1.58%, 0.61%, and 1.18%, respectively. This is because manual tuning relies on brute-force search, which becomes intractable for continuous variables, where the number of hyperparameter combinations is effectively infinite; in such cases, the quality of manually selected hyperparameters largely depends on luck. In contrast, the IPSO algorithm iteratively updates particles and shares information within the swarm, ultimately finding the best combination in the search space. This highlights how the IPSO algorithm outperforms manual, experience-based selection of model hyperparameters.

Conclusions and Future Work
The proposed deep learning model for short-text sentiment analysis, built around the IPSO, effectively extracts deep textual features, and its robustness against disturbances is enhanced through the generation of a large amount of perturbed text using a GAN. Experimental results indicate that the effect of the IPSO varies with the combination of model hyperparameters. The algorithm dynamically and adaptively adjusts the inertia weight and learning factors, which not only helps prevent particles from becoming trapped in local optima, but also enables the model to converge fully, rapidly, and effectively, unleashing its full performance potential.
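For reference, a common way of realizing the adaptive schedules described here is a nonlinearly (accelerated) decaying inertia weight together with time-varying learning factors, for example:

$$
w(t)=w_{\max}-\left(w_{\max}-w_{\min}\right)\left(\frac{t}{T}\right)^{2},\qquad
c_{1}(t)=c_{1}^{\mathrm{start}}-\left(c_{1}^{\mathrm{start}}-c_{1}^{\mathrm{end}}\right)\frac{t}{T},\qquad
c_{2}(t)=c_{2}^{\mathrm{start}}+\left(c_{2}^{\mathrm{end}}-c_{2}^{\mathrm{start}}\right)\frac{t}{T},
$$

where $t$ is the current iteration and $T$ the iteration budget. These particular forms are offered only as an illustration; the exact schedules used by the IPSO in this work may differ.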
In subsequent experimental research, further effort will be devoted to refining the PSO. We will also consider horizontal comparisons with other parameter optimization algorithms, such as Bayesian optimization, genetic algorithms, and simulated annealing, with the aim of further improving metrics such as the model's prediction accuracy.

Figure 1. Parameter tuning iterations in grid search.

Figure 2. The accelerated decay of 'w' across iterations.

(4) Updating particle velocities and positions using Equations (…).
(5) Calculate the fitness for each particle again.
(6) Check whether the iteration is complete. If it is, output the current best solution; otherwise, proceed to (3).
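The steps above outline the core PSO loop. The sketch below is a minimal Python/NumPy rendering of that loop with an adaptively decaying inertia weight; the fitness function (e.g., validation performance of the model under a candidate hyperparameter setting), the bounds, and the particular decay schedule are placeholders rather than the paper's exact formulation.

```python
import numpy as np

def ipso_minimize(fitness, bounds, n_particles=20, n_iters=50,
                  w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, seed=0):
    """Minimal PSO loop with a nonlinearly decaying inertia weight (illustrative IPSO sketch).
    `fitness` maps a position vector to a score to be minimized (e.g., validation loss)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T        # bounds: [(low, high), ...] per dimension
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for t in range(n_iters):
        w = w_max - (w_max - w_min) * (t / n_iters) ** 2   # accelerated decay of the inertia weight
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # (4) Update particle velocities and positions.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        # (5) Re-evaluate the fitness of every particle and refresh personal/global bests.
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
        # (6) Continue until the iteration budget is exhausted, then return the best solution.
    return gbest, pbest_val.min()

# Toy usage: minimize a quadratic over two "hyperparameters".
best, best_val = ipso_minimize(lambda x: float(((x - 3.0) ** 2).sum()),
                               bounds=[(0.0, 10.0), (0.0, 10.0)])
```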

Figure 11. Architecture of the GAN network model. The generator captures the underlying distribution of the training data, aiming to generate synthetic data that conforms to the same distribution; it expects the discriminator to classify the generated data as real training data. The discriminator, guided by the standards set by the training data, encourages the generator to produce more "realistic" synthetic data. Through the adversarial interplay and continuous self-improvement of these two networks, the objective is to generate training data that closely resembles real data. The loss function of the adversarial network is defined by Equation (23):

$$\min_{G}\max_{D} V(D,G)=\mathbb{E}_{x\sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]+\mathbb{E}_{z\sim p_{z}(z)}\big[\log\big(1-D(G(z))\big)\big]$$
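A minimal PyTorch rendering of the adversarial objective in Equation (23) is sketched below. The generator and discriminator here are stand-in fully connected modules (the paper's generator produces perturbed text, which is not modeled in this snippet), so treat it only as an illustration of how the two loss terms interact during alternating updates.

```python
import torch
import torch.nn as nn

# Stand-in networks; the paper's generator produces perturbed text rather than raw vectors.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 64))               # noise z -> fake sample
D = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # sample -> P(real)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(8, 64)   # placeholder batch standing in for real training vectors
z = torch.randn(8, 16)      # noise input to the generator

# Discriminator step: maximize log D(x) + log(1 - D(G(z))) (minimized here via the BCE equivalent).
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(G(z).detach()), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: drive D(G(z)) toward "real", the non-saturating form of minimizing log(1 - D(G(z))).
g_loss = bce(D(G(z)), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```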

(1) Dropout rate, denoted 'd_p', representing the random deactivation rate of the dropout layer in the generators of the GAN, with d_p ∈ (0, 1); (2) BiLSTM hidden layer dimension, denoted 'h_d', indicating the dimensionality of the output vectors of the BiLSTM layer, with h_d ∈ [50, 200]; (3) Learning rate, denoted 'lr', which controls the magnitude of updates to the internal parameters during model training, with lr ∈ [0.0001, 0.01]; (4) Batch size, indicating the number of samples used in each training iteration, with batch_size ∈ {16, 32, 64, 128}; (5) Convolutional kernel strides, denoted 'strides', representing the step length of each movement of the convolutional kernel in the TextCNN layer, with strides ∈ {1, 2, 3, 4};
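As a hedged illustration of how the search space defined in (1)-(5) might be encoded for the optimizer, the snippet below maps each hyperparameter to its stated range and samples one candidate configuration; the dictionary layout and the sampling helper are assumptions made for illustration, not code from the paper.

```python
import random

# Search space taken from the ranges listed above; continuous ranges as (low, high) tuples,
# discrete choices as lists.
SEARCH_SPACE = {
    "d_p":        (0.0, 1.0),            # dropout rate of the GAN generators (open interval in the paper)
    "h_d":        (50, 200),             # BiLSTM hidden layer dimension
    "lr":         (0.0001, 0.01),        # learning rate
    "batch_size": [16, 32, 64, 128],     # samples per training iteration
    "strides":    [1, 2, 3, 4],          # TextCNN convolution kernel stride
}

def sample_candidate(space, rng=random):
    """Draw one hyperparameter configuration from the search space (illustrative only)."""
    cand = {}
    for name, spec in space.items():
        if isinstance(spec, list):                       # discrete choice
            cand[name] = rng.choice(spec)
        elif all(isinstance(v, int) for v in spec):      # integer range
            cand[name] = rng.randint(*spec)
        else:                                            # continuous range
            cand[name] = rng.uniform(*spec)
    return cand

print(sample_candidate(SEARCH_SPACE))
```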

(7) MLACNN [42]: the MLACNN model employs CNN to extract local semantic information and utilizes LSTM-Attention to obtain global semantic information; the outputs from both components are fused to achieve classification. (8) MC-AttCNN-AttBiGRU [21]: this model incorporates the Attention mechanism over the word embeddings and employs both TextCNN and BiGRU to capture local and global features, respectively; the features extracted by the two components are then fused and used for classification.

Figure 13. Analysis of IPSO's Impact on Model Performance (IMDB).

Table 8. The optimized hyperparameters for each model.

Table 2 .
The final convergence values of various algorithms on the CEC standard benchmark functions.
Here, 'W_f', 'W_i', 'W_o', and 'W_c' represent the weight matrices for the forget gate, input gate, output gate, and updated cell state, respectively, and 'b_f', 'b_i', 'b_o', and 'b_c' are the corresponding bias terms. The LSTM cell structure is illustrated in Figure 8.
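For completeness, the standard LSTM gate equations in which these weight matrices and bias terms appear are, in the usual formulation (shown here for reference; the paper's own equation numbering is not reproduced):

$$
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right), &
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right), \\
\tilde{c}_t &= \tanh\!\left(W_c\,[h_{t-1}, x_t] + b_c\right), &
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right), &
h_t &= o_t \odot \tanh\!\left(c_t\right).
\end{aligned}
$$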

Table 4 .
Information for each dataset.

Table 7 .
The optimized hyperparameters for each model (without GAN/with GAN).
