Article

A Novel AB-CNN Model for Multi-Classification Sentiment Analysis of e-Commerce Comments

School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou 450002, China
*
Author to whom correspondence should be addressed.
Electronics 2023, 12(8), 1880; https://doi.org/10.3390/electronics12081880
Submission received: 26 February 2023 / Revised: 5 April 2023 / Accepted: 12 April 2023 / Published: 16 April 2023

Abstract

Despite the success of dichotomous sentiment analysis, it does not capture the full range of users' emotional colors, which can be far richer than a simple positive or negative label. Moreover, the complexity and imbalanced nature of Chinese text present a formidable obstacle. To address these shortcomings, a three-classification method is employed and a novel AB-CNN model is proposed, incorporating an attention mechanism, BiLSTM, and a CNN. The model uses a word vector model to extract features from sentences and vectorize them; the attention layer computes the weighted-average attention of each text to obtain an emotion-relevant representation; BiLSTM then reads the text information in both directions, further strengthening the emotional signal; finally, softmax classifies the emotional polarity. Tested on a public e-commerce dataset, the proposed model demonstrates superior performance compared with existing classifiers.

1. Introduction

Sentiment analysis, also known as sentiment tendency analysis or opinion mining, is the process of extracting information from user opinions [1]: people’s attitudes, emotions, and views are obtained through the analysis of text, audio, and images. For text, it amounts to analyzing, processing, and interpreting emotionally charged language. The Internet has seen an influx of such text, prompting researchers to move from the initial analysis of emotional words to the more complex analysis of emotional sentences and articles. Accordingly, the granularity of text processing varies, and sentiment analysis can be divided into three levels: word-level, sentence-level, and document-level research [2]. By application, sentiment analysis falls into two categories: analysis of social platform reviews and analysis of e-commerce platform reviews; the former focuses on posts from social platforms, while the latter focuses on product reviews from e-commerce platforms. For instance, a positive review such as “This phone is cost-effective and runs smoothly” indicates that the consumer is satisfied with the product. A neutral review such as “The overall feel of the phone is so-so!” implies that the consumer accepts the product without enthusiasm. A negative review such as “This phone is rubbish, it lags constantly!” implies that the consumer is not satisfied with the product. Sentiment analysis of e-commerce reviews helps consumers quickly grasp the public opinion of a product, which makes it valuable to both consumers and e-commerce websites, whereas sentiment analysis of social platform reviews is mostly used for public opinion monitoring and information prediction.
This paper focuses on sentiment analysis of comments on e-commerce platforms, categorizing ratings into three classes: negative (one and two stars), neutral (three stars), and positive (four and five stars). We then load the e-commerce review dataset into a trained deep-learning model for research and analysis. Our method combines an attention mechanism, BiLSTM, and a CNN to create an AB-CNN-based model and perform classification prediction.
The remainder of this article is organized as follows: Section 2 reviews related studies; Section 3 outlines the relevant theory; Section 4 presents and briefly introduces the proposed model; Section 5 reports the experiments and analyzes the results; Section 6 concludes and summarizes the work.

2. Related Work

Sentiment analysis of text has remained a popular research topic as methods have evolved from emotional dictionaries, through traditional machine learning, to deep learning. Accordingly, research in this field centers on three families of methods: emotional-dictionary-based, machine-learning-based, and deep-learning-based.

2.1. The Emotional-Dictionary-Based Methods

Sentiment analysis based on sentiment dictionaries obtains the emotional value of each sentence in a text and then determines the text’s emotional tendency through a weighted calculation. For example, Zargari et al. [3] proposed an N-Gram sentiment dictionary method based on global intensifiers, which increases the coverage of emotion phrases in the dictionary. Yan et al. [4] constructed a comprehensive dictionary of emotional polarity through manual annotation, including basic, negative, degree-adverb, and flip dictionaries, to enhance the accuracy of sentiment analysis. Wang et al. [5] proposed an enhanced NTU sentiment dictionary, constructed by collecting sentiment statistics of words across several sentiment annotation works, which provides rich semantic information; its application to the polarity classification of words has achieved good results. Zhang et al. [6] trained Weibo texts on adverbial, network, and negative dictionaries to obtain updated sentiment values. Building on an existing sentiment lexicon, Jia and Li [7] improved the calculation of the sentiment intensity of sentiment words of different sentiment categories; by combining the sentiment lexicon with semantic rules, the sentiment intensity of micro-blogs of different categories is calculated and sentiment classification is realized. Xu et al. [8] established an extended emotional lexicon comprising basic, scene-specific, and polysemous emotional words, which further improved the emotional classification of texts. Tran and Phan [9] proposed a sentiment dictionary for Vietnamese containing over 100,000 Vietnamese emotional words. Wang et al. [10] proposed a U-shaped acoustic-word sentiment lexicon feature model based on an acoustic words emotion dictionary (AWED), which builds an emotional information model at the acoustic vocabulary level for different emotional categories; the results show significant improvements in unweighted average recall across all four sentiment classification tasks.

2.2. The Machine-Learning-Based Methods

Sentiment analysis using machine learning leverages traditional machine learning algorithms to extract features from labeled or unlabeled datasets and then produce sentiment analysis results. For instance, Miao [11] proposed a micro-blog emotion-mining method based on word2vec and a support vector machine (SVM): the word vectors trained by word2vec are weighted and different expected word frequencies are calculated, followed by sentiment analysis with the SVM. Xue et al. [12] constructed a sentiment classifier based on the Naive Bayes principle and used it to analyze the sentiment tendency of test texts. Wawre et al. [13] compared the SVM and Naive Bayes (NB) machine learning methods and found that NB achieved a higher classification accuracy than SVM on large datasets. Kamal et al. [14] proposed a combination of rule-based and machine learning methods to identify emotional polarity, achieving good results. Rathor et al. [15] compared and analyzed the SVM, NB, and maximum entropy (ME) machine learning algorithms using weighted letters, and the results show that all three algorithms achieved good classification effects. Zhang et al. [16] proposed an AE-SVM algorithm and achieved satisfactory results on an employee sentiment analysis dataset of company evaluations. Mitroi et al. [17] proposed a novel topical document embedding (TOPICDOC2VEC) to detect the polarity of text; TOPICDOC2VEC concatenates document embeddings (DOC2VEC) with topic embeddings (TOPIC2VEC), and experimental results show that TOPICDOC2VEC embeddings outperform DOC2VEC embeddings in detecting document polarity.

2.3. The Deep-Learning-Based Methods

Deep learning has achieved remarkable success in image processing, leading to increased interest in sentiment analysis based on this technology. Current deep learning models include CNNs, LSTM, BiLSTM, RNNs, and attention mechanisms. For instance, Teng et al. [18] proposed a topic classification model based on LSTM, which is capable of processing vector, array, and high-dimensional data. Yin et al. [19] used a BiGRU neural network layer for feature enhancement through superposition and reuse, achieving faster convergence through continuous enhancement; this network outperformed other classification models. He et al. [20] extracted word-embedding and sequence features from vocabulary-based word vectors, fused the two features as SVM input, and finally determined the emotional polarity of the text. Zeng et al. [21] proposed the PosATT-LSTM model, which takes into account the significance of the connection between context words and context location. Zhou et al. [22] proposed a Chinese sentiment analysis method that combined word2vec and stacked BiLSTM, resulting in improved performance. Su et al. [23] proposed an AEB-CNN model incorporating emoji attention and a CNN, which enhanced the accuracy of sentiment analysis. Truică et al. [24] proposed a new document-topic-embedding model for document-level polarity detection in large texts, using general and specific contextual cues obtained from document embeddings (DOC2VEC) and topic modeling, and achieved promising results. Petrescu et al. [25] proposed a new ensemble architecture, EDSA-Ensemble (Event Detection Sentiment Analysis Ensemble), which uses event detection and sentiment analysis to improve polarity detection of current events from social media; the ensemble achieves good results with both deep learning and machine learning models.

3. Related Theory

3.1. Attention Mechanism

The attention mechanism in deep learning borrows from the way human vision operates, as described by Treisman [26]. Physiologically, when humans observe their environment, they quickly scan the panorama and, guided by brain signals, rapidly focus on the target area, forming a focus of attention that captures more detail while suppressing irrelevant information.
When using NLP for text tasks, the attention mechanism can be employed to prioritize the text content that requires attention, thus increasing the model’s running speed, reducing complexity, shortening training time, and improving prediction accuracy. In sentiment analysis, the attention layer on the CNN is used to focus on words or sentences related to emotions, discarding other text information that is not emotionally relevant.
Given an input text sequence $X$, a query vector $q$ is used to identify important information. The query is carried out over $X$, with each word contributing its own attention; words with emotional color contribute more.
To reduce complexity, an attention variable $u \in [1, N]$ is defined to represent the index of the selected query information. Figure 1 illustrates the calculation process of the attention mechanism: only emotion-related words or sentences are selected from $X$ and fed into the model for training, instead of the entire text content.
The attention mechanism consists of three steps: text information input, calculation of the attention weight coefficient $\alpha$, and weighted averaging of the input information.
1. Text information input: $X = [X_1, X_2, \ldots, X_N]$ represents $N$ items of input text information.
2. The attention weight between the $i$th word and $q$ is calculated using Equation (1):

$\alpha_i = P(u = i \mid X, q) = \mathrm{softmax}(s(X_i, q))$ (1)

where $\alpha_i$ is the attention weight coefficient and $s(X_i, q)$ is the scoring function. The main scoring functions are given by Equations (2)–(5):

Additive model: $s(X_i, q) = V^{T}\tanh(W X_i + U q)$ (2)

Dot-product model: $s(X_i, q) = X_i^{T} q$ (3)

Scaled dot-product model: $s(X_i, q) = \dfrac{X_i^{T} q}{\sqrt{d}}$ (4)

Bilinear model: $s(X_i, q) = X_i^{T} W q$ (5)

where $W$, $U$, and $V$ are learnable parameters of the network model and $d$ denotes the dimension of the input word vector.
3. The attention weight coefficients $\alpha_i$ encode the input text information $X$. The degree of attention paid to the $i$th item of information with respect to the context query vector $q$ is calculated using Equation (6):

$\mathrm{attention}(q, X) = \sum_{i=1}^{N} \alpha_i X_i$ (6)
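To make Equations (1)–(6) concrete, the NumPy sketch below scores a toy five-word sequence with each of the four models and then forms the attention-weighted average; the toy sizes and the randomly initialized parameters $W$, $U$, and $V$ are illustrative assumptions, not values from this paper.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
N, d = 5, 8                      # 5 words, 8-dimensional vectors (toy sizes)
X = rng.normal(size=(N, d))      # input word vectors X_1 .. X_N
q = rng.normal(size=d)           # query vector

# Parameters of the scoring functions, randomly initialized for illustration
W = rng.normal(size=(d, d))
U = rng.normal(size=(d, d))
V = rng.normal(size=d)

scores = {
    "additive":   np.array([V @ np.tanh(W @ X[i] + U @ q) for i in range(N)]),  # Eq. (2)
    "dot":        X @ q,                                                        # Eq. (3)
    "scaled_dot": (X @ q) / np.sqrt(d),                                         # Eq. (4)
    "bilinear":   X @ W @ q,                                                    # Eq. (5)
}

for name, s in scores.items():
    alpha = softmax(s)            # Eq. (1): attention weights
    context = alpha @ X           # Eq. (6): weighted average of the inputs
    print(name, alpha.round(3), context.shape)
```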

3.2. BiLSTM

Recurrent Neural Networks (RNNs) suffer from issues such as vanishing gradients, exploding gradients, and a limited ability to capture long-range information. Long Short-Term Memory (LSTM) [27], a variant of the RNN, was introduced to address these issues; its “memory over time” capability allows it to quickly learn the relationship between the input text data and its context.
Compared with a basic RNN, LSTM improves two aspects:
1. New internal state: LSTM introduces an internal state $c_t \in \mathbb{R}^{D}$ and outputs information to the hidden layer's external state $h_t \in \mathbb{R}^{D}$. The internal state is calculated using Equations (7) and (8):

$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$ (7)

$h_t = o_t \odot \tanh(c_t)$ (8)

Here $f_t$, $i_t$, and $o_t$ are the gates that form the path through which information passes, $\odot$ denotes the element-wise vector product, and $\tilde{c}_t \in \mathbb{R}^{D}$ is the candidate memory state at the current moment, obtained with a nonlinear function as in Equation (9):

$\tilde{c}_t = \tanh(W_c x_t + U_c h_{t-1} + b_c)$ (9)

2. Gating mechanism: the LSTM network controls the flow of information through three gates: the forget gate $f_t$, the input gate $i_t$, and the output gate $o_t$. The values of these gates range from 0 to 1, indicating the proportion of text information that is allowed to pass through. Equations (10)–(12) are as follows:

$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$ (10)

$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$ (11)

$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$ (12)

where $\sigma(\cdot)$ denotes the logistic function, $x_t$ the current input, and $h_{t-1}$ the external state of the preceding moment.
BiLSTM [28] feeds an input sequence into two independent LSTMs, which process the sequence in the forward and reverse directions to extract features. The final representation of each word is created by merging the output vectors of the two LSTMs.
The structure of the BiLSTM model is illustrated in Figure 2. Its design incorporates both past and future information into the features obtained at moment $t$. The output of the forward LSTM layer at moment $t$ is denoted $\overrightarrow{h_t}$, and the output of the backward LSTM layer is $\overleftarrow{h_t}$. Experiments demonstrate that the BiLSTM model extracts text features more efficiently and effectively than a single-LSTM model. Furthermore, the two LSTMs share the same word-embedding vector list but are otherwise parameterized independently.
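The following NumPy sketch walks one time step of an LSTM cell exactly as written in Equations (7)–(12); a BiLSTM would run the same step over the sequence in both directions and concatenate the two hidden states. All sizes and the random parameter initialization are assumptions made only for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, params):
    """One LSTM step following Equations (7)-(12)."""
    Wf, Uf, bf, Wi, Ui, bi, Wo, Uo, bo, Wc, Uc, bc = params
    f_t = sigmoid(Wf @ x_t + Uf @ h_prev + bf)        # forget gate, Eq. (10)
    i_t = sigmoid(Wi @ x_t + Ui @ h_prev + bi)        # input gate,  Eq. (11)
    o_t = sigmoid(Wo @ x_t + Uo @ h_prev + bo)        # output gate, Eq. (12)
    c_tilde = np.tanh(Wc @ x_t + Uc @ h_prev + bc)    # candidate state, Eq. (9)
    c_t = f_t * c_prev + i_t * c_tilde                # new internal state, Eq. (7)
    h_t = o_t * np.tanh(c_t)                          # new external state, Eq. (8)
    return h_t, c_t

rng = np.random.default_rng(0)
D, E = 4, 6                                  # hidden size and input size (toy values)
params = [rng.normal(size=s) for s in
          [(D, E), (D, D), (D,)] * 4]        # W, U, b for the f, i, o gates and the candidate
h, c = np.zeros(D), np.zeros(D)
for x_t in rng.normal(size=(3, E)):          # run a length-3 toy sequence forward
    h, c = lstm_step(x_t, h, c, params)
print(h.round(3))
```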

3.3. CNN

Convolutional Neural Networks (CNNs) [29] have been widely adopted in the field of computer vision due to their effectiveness. Starting from a convolutional layer, the network is further enhanced by the addition of layers such as pooling, dropout, and padding. Subsequently, GoogLeNet, VGGNet, and ResNet, the most renowned CNNs in image recognition, were developed, allowing neural networks to classify images with accuracy rivaling that of humans. CNNs have efficient feature extraction and classification capabilities, which can also be applied to text information treated as a one-dimensional image, as illustrated in Figure 3.
The words in the input text are expressed as a matrix $X$, with each element $X_i$ representing a sequence vector in the word-embedding layer. This matrix is then used as the input of the CNN.
The features of the text are extracted by sliding a convolution kernel $k \in \mathbb{R}^{n \times d}$ over the original input text sequence. The $n$-gram convolution is carried out with a sliding-window scan of step size $s$, yielding $N - n + 1$ features per convolution kernel. A pooling layer then selects the text features with the highest weight while ignoring the unimportant ones, thus obtaining the final word vector represented by the text features.
The pooled text features are then connected to the predicted category labels. Once all text features are obtained, the probability of each category label is calculated, and the classification result is the label with the maximum probability.
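As a minimal sketch of the text-CNN pipeline in Figure 3 (embedding, sliding convolution, max pooling, softmax classification), the Keras snippet below uses the framework versions listed in Section 5.2; the vocabulary size and layer sizes are placeholder assumptions rather than settings from this paper.

```python
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

vocab_size, seq_len, emb_dim = 20000, 200, 128            # placeholder sizes
model = Sequential([
    Embedding(vocab_size, emb_dim, input_length=seq_len),  # word matrix X
    Conv1D(filters=250, kernel_size=3, activation="relu"), # sliding n-gram convolution
    GlobalMaxPooling1D(),                                  # keep the strongest feature per filter
    Dense(3, activation="softmax"),                        # three sentiment classes
])
model.summary()
```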

4. AB-CNN Model

This paper’s model structure, comprising an input layer, word-embedding layer, convolutional layer, attention layer, BiLSTM layer, fully connected layer, and output layer, is depicted in Figure 4.

4.1. Word2vec Word-Vector-Embedding Layer

Let $m$ be a text composed of $n$ words, expressed as $m = \{m_1, m_2, \ldots, m_n\}$. The input text sequence is converted to word vectors using word2vec, with an encoding dimension of 128, and initialized. The resulting vectorized form of the text is shown in Equation (13):

$m_{1:n} = m_1 \oplus m_2 \oplus \cdots \oplus m_n$ (13)

where $n$ denotes the length of each comment text sequence, each word is represented by a vector of $h$ dimensions, $m_i$ is the vector of the $i$th word in the sentence, and $\oplus$ is the concatenation operator.
The text sequence $m$ is segmented and converted into an $n \times h$-dimensional vector matrix. This is then embedded into a low-dimensional word vector through an embedding layer, thus completing the conversion of text to a numerical vector.
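A hedged sketch of this preprocessing step: jieba (the segmenter named in Section 5.2) splits each comment into words, gensim trains 128-dimensional word2vec vectors, and Keras utilities turn the comments into padded index sequences. gensim itself, the toy comments, and every parameter value other than the 128-dimensional encoding are assumptions.

```python
import jieba
from gensim.models import Word2Vec
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

comments = ["这款手机性价比很高，运行流畅", "手机整体手感一般般！"]   # toy comments
tokenized = [list(jieba.cut(c)) for c in comments]                  # Chinese word segmentation

# 128-dimensional word2vec vectors (gensim 3.x uses `size`; 4.x renames it `vector_size`)
w2v = Word2Vec(tokenized, size=128, window=5, min_count=1)

tokenizer = Tokenizer()
tokenizer.fit_on_texts([" ".join(t) for t in tokenized])
seqs = tokenizer.texts_to_sequences([" ".join(t) for t in tokenized])
X = pad_sequences(seqs, maxlen=200)        # each comment becomes a fixed-length index vector
print(X.shape, w2v.wv["手机"].shape)        # (2, 200) and (128,)
```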

4.2. CNN Layer

The output of the embedding layer is used as the input of the convolutional layer for the text sequence $m$. Applying $k$ convolutional filters $D = \{\varphi_1, \varphi_2, \ldots, \varphi_k\}$ of length $l$ to the word vector matrix yields the new feature of the $i$th window of the text sequence $m$, as shown in Equation (14):

$n_i = f(D^{T} \cdot m_{i:i+l-1} + b)$ (14)

where $b$ is a bias term, $D^{T}$ the weight, and $f$ a nonlinear function (ReLU). When the filter is applied to every window of the sentence, $\{m_{1:l}, m_{2:l+1}, \ldots, m_{n-l+1:n}\}$, the text feature representation is as shown in Equation (15):

$N = [n_1, n_2, \ldots, n_{n-l+1}]$ (15)

A max-pooling operation is then applied to $N \in \mathbb{R}^{n-l+1}$, taking the maximum value $\hat{N} = \max(N)$ as the characteristic of the filter. This ensures the most significant feature, the one with the greatest value, is retained. The output of the convolutional layer is $Y$, as shown in Equation (16):

$Y = [N_1, N_2, \ldots, N_{n-l+1}]$ (16)

A dropout layer is then added after the convolutional layer to prevent over-fitting.

4.3. Attention Mechanism Layer

The convolutional layer extracts the important features of the text, while the attention layer identifies the words related to emotional polarity, reducing the running time and complexity of the model. The attention mechanism is applied to the convolutional layer's output $Y$, using a query vector $q$ over the text feature inputs. The attention weight coefficient for each text feature $N_i$ is calculated as shown in Equation (17):

$\alpha_i = \mathrm{softmax}(s(N_i, q)) = \dfrac{\exp(s(N_i, q))}{\sum_{j=1}^{n-l+1} \exp(s(N_j, q))}$ (17)

where $j$ indexes the text features $N_j$ in the softmax normalization: summing over all features yields the probability distribution of the $i$th feature, namely the weight coefficient $\alpha_i$, for $i \in \{1, 2, \ldots, n-l+1\}$. Any of the four scoring models introduced earlier can be chosen for the attention calculation function $s(N_i, q)$.
After encoding the input text information $Y$ in this way, the weighted-average attention signal of the text is obtained, as shown in Equation (18):

$\bar{T} = \mathrm{attention}(q, N) = \sum_{i=1}^{n-l+1} \alpha_i N_i$ (18)

The attention signal $\bar{T}$ is then mapped onto the corresponding input text feature $N_i$ to obtain a text matrix with an attention mechanism, expressed as $\bar{T} \cdot N_i$.
Finally, after attention is extracted through the convolution operation, the attention is fused with the original features, as shown in Equation (19):

$\omega_i = \mu_1 \cdot N_i + \mu_2 \cdot N_i \cdot \bar{T}, \quad i \in \{1, 2, \ldots, n-l+1\}$ (19)

where $\mu_1$ is the weight of the original word vector and $\mu_2$ is the weight of the attention signal. The word vector after the integration of attention can be expressed as $\omega = [\omega_1, \omega_2, \ldots, \omega_{n-l+1}]$.
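The NumPy sketch below follows Equations (17)–(19) over a toy convolutional feature map: dot-product scoring is used for $s(N_i, q)$, and the fusion weights $\mu_1$ and $\mu_2$ are set to illustrative values; the elementwise reading of $N_i \cdot \bar{T}$ is one plausible interpretation, not the authors' confirmed implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
L, d = 6, 8                         # n - l + 1 feature positions, feature dimension (toy sizes)
N_feat = rng.normal(size=(L, d))    # convolutional features N_1 .. N_{n-l+1}
q = rng.normal(size=d)              # query vector

alpha = softmax(N_feat @ q)         # Eq. (17): attention weights (dot-product scoring)
T_bar = alpha @ N_feat              # Eq. (18): weighted-average attention signal

mu1, mu2 = 1.0, 0.5                           # illustrative fusion weights
omega = mu1 * N_feat + mu2 * (N_feat * T_bar)  # Eq. (19): fuse original features with attention
print(omega.shape)                             # (L, d) word vectors after attention fusion
```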

4.4. BiLSTM Layer

The text word vector $\omega$ carrying emotional polarity, output by the attention layer, is fed as input to the BiLSTM layer. The two LSTMs integrate the input sequence's information in the forward and backward directions, thereby enhancing the emotional hue of the input text and improving the model's classification performance.
At the current moment, the output of the forward LSTM layer carries information from the current and preceding positions of the input sequence, while the output of the backward LSTM layer carries information from the current and subsequent positions.
The two LSTMs combine the input sequence's information in both the forward and backward directions and splice the resulting word vectors to generate the BiLSTM output, which significantly enhances accuracy. The forward output $\overrightarrow{h_t}$ and backward output $\overleftarrow{h_t}$ at time $t$ are shown in Equations (20) and (21):

$\overrightarrow{h_t} = \overrightarrow{LSTM}(h_{t-1}, \omega_t, c_{t-1})$ (20)

$\overleftarrow{h_t} = \overleftarrow{LSTM}(h_{t+1}, \omega_t, c_{t+1})$ (21)

The BiLSTM output at moment $t$, which carries the emotional color of the $i$th text feature vector, is shown in Equation (22):

$H_t^{i} = [\overrightarrow{h_t}, \overleftarrow{h_t}]$ (22)

The semantic information of the text sequence extracted by the BiLSTM network is output as $Q = [H_t^{1}, H_t^{2}, \ldots, H_t^{n-l+1}]$.
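As a sketch of this layer in Keras, a Bidirectional LSTM with return_sequences=True yields the sequence $Q = [H_t^{1}, \ldots, H_t^{n-l+1}]$ by concatenating the forward and backward states at every position; the input feature-map size below is an assumption.

```python
from keras.layers import Input, Bidirectional, LSTM
from keras.models import Model

features = Input(shape=(198, 250))                    # (n - l + 1) positions x k filters; assumed sizes
Q = Bidirectional(LSTM(64, return_sequences=True),
                  merge_mode="concat")(features)      # H_t^i = [h_forward ; h_backward], Eqs. (20)-(22)
Model(features, Q).summary()                          # output shape: (None, 198, 128)
```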

4.5. Softmax Classification Output Layer

The input text is vectorized in the embedding layer using word2vec. The convolutional layer is then employed to extract significant features. The attention layer extracts semantic features associated with emotion, while BiLSTM extracts text context information to further strengthen the emotional hue of the extracted semantic features, producing a deeper semantic feature representation. Finally, the result $Q$ obtained by the BiLSTM network is fed to a softmax classifier, yielding the final emotion classification result, as shown in Equation (23):

$y = \mathrm{softmax}(W_c Q + b_c)$ (23)

where $W_c$ is the weight matrix and $b_c$ is the bias term.
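Putting the layers of Section 4 together, the following Keras sketch assembles an AB-CNN-style network end to end with the hyperparameters later reported in Table 7; the attention layer is approximated with a Dense score followed by a softmax over positions, which is one plausible reading of Equations (17)–(19) rather than the authors' exact implementation, and the vocabulary size is an assumption.

```python
from keras.models import Model
from keras.layers import (Input, Embedding, Conv1D, Dropout, Dense,
                          Bidirectional, LSTM, Softmax, Multiply)

vocab_size, seq_len, emb_dim = 20000, 200, 128          # vocabulary size is assumed

inp = Input(shape=(seq_len,))
x = Embedding(vocab_size, emb_dim)(inp)                 # word2vec-style embedding layer (Section 4.1)
x = Conv1D(250, 3, activation="relu")(x)                # convolutional feature extraction (Section 4.2)
x = Dropout(0.45)(x)

scores = Dense(1)(x)                                    # one attention score per position
alpha = Softmax(axis=1)(scores)                         # normalize scores over the sequence
x = Multiply()([x, alpha])                              # re-weight features by attention (Section 4.3)

x = Bidirectional(LSTM(64))(x)                          # forward + backward context (Section 4.4)
out = Dense(3, activation="softmax")(x)                 # three-class output, Eq. (23)

model = Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```

Any of the four scoring functions from Section 3.1 could replace the Dense-based scoring here with only local changes to the two attention lines.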

5. Experimental Analysis

This section outlines the implementation of the model experiment, including dataset partitioning, evaluation metrics, and hyperparameter selection. The model's performance is then evaluated against other deep learning models and through ablation experiments.

5.1. Dataset Introduction

The public dataset used in this article, crawled from Jingdong Mall, contains 21,091 comments on products such as electronic products, books, and home appliances. After screening, the comments were labeled as positive, negative, or neutral, giving 8033 positive, 4355 neutral, and 8703 negative reviews; 16,873 of them are used for training and the remaining 4218 for testing, as illustrated in Table 1.

5.2. Data Partitioning and Training Process

This paper’s model training process was completed on Windows 10 OS using an Intel (R) Core (TM) i7-5500U 2.40GHz processor with 16GB RAM. Python 3.7 was used as the programming language, Pycharm as the development tool, jieba0.38 for Chinese word segmentation, and Tensorflow1.15.0 and Keras2.3.1 as the deep-learning-based architecture. The ratio of training set to test set was 4:1.

5.3. Evaluation Metric

5.3.1. Accuracy Rate

The model’s ability to classify samples in the test set accurately as positive, neutral, or negative reflects its ability to judge the entire dataset. The proportion of correctly classified samples in the whole sample can be calculated using the following formula:
a c c u r a c y = i = 1 n T P i i = 1 n T P i + F P i
In this paper, n = 3 represents the accuracy of the three classifications.

5.3.2. Kappa Coefficient

The Kappa coefficient is a statistical measure of consistency that ranges from 0 to 1; the interpretation of its value is shown in Table 2. A larger coefficient indicates that the model classifies the data more consistently. It is calculated using Equation (25):

$K = \dfrac{P_o - P_e}{1 - P_e}$ (25)

where $P_o$ represents the overall classification accuracy and $P_e$ is given by Equation (26):

$P_e = \dfrac{a_1 \times b_1 + a_2 \times b_2 + \cdots + a_m \times b_m}{n \times n}, \quad i = 1, 2, \ldots, m$ (26)

where $b_i$ represents the predicted number of samples of class $i$, $a_i$ represents the actual number of samples of class $i$, and $n$ is the total number of samples.
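The NumPy sketch below computes Equations (25) and (26) from an illustrative 3x3 confusion matrix (the matrix values are made up for the example, not taken from Figure 11); the result can be cross-checked with sklearn.metrics.cohen_kappa_score.

```python
import numpy as np

# Illustrative 3x3 confusion matrix: rows = actual class, columns = predicted class
C = np.array([[1390,  60, 140],
              [  70, 791,  15],
              [ 130,  25, 1597]])

n  = C.sum()                      # total number of samples
Po = np.trace(C) / n              # overall classification accuracy, P_o
Pe = (C.sum(axis=1) * C.sum(axis=0)).sum() / (n * n)   # Eq. (26): sum of a_i * b_i over n^2
kappa = (Po - Pe) / (1 - Pe)      # Eq. (25)
print(round(kappa, 4))
```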

5.3.3. Weighted F1 Score

The F1 score is an indicator used in statistics to measure the accuracy of a binary classification model. It takes into account both the precision and recall of the classification model, can be regarded as a harmonic mean of the two, and takes values in [0, 1]. Since this article deals with a multi-classification problem, the weighted F1 score is selected, which averages the per-class F1 scores weighted by class support.
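In practice all three reported metrics can be obtained with scikit-learn, as the hedged sketch below shows on toy label arrays; y_true and y_pred are hypothetical placeholders for the test labels and model predictions.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

y_true = [0, 0, 1, 1, 2, 2, 2, 1]          # toy ground-truth labels
y_pred = [0, 1, 1, 1, 2, 2, 0, 1]          # toy predictions

print("accuracy   :", accuracy_score(y_true, y_pred))
print("kappa      :", cohen_kappa_score(y_true, y_pred))
print("weighted F1:", f1_score(y_true, y_pred, average="weighted"))  # per-class F1 weighted by support
```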

5.4. Parameter Selection

The hyperparameters used in this paper are tuned sequentially: each is trained individually and the selected values are then combined for training the model. Hyperparameter tuning is performed on the training data only.
Selecting an adequate input text length is our main challenge. If the input is too short, the sentiment of the text cannot be accurately captured, which will impact the model’s final performance. If the text is too lengthy, it can result in a high number of zero values in the word vector, thus reducing the model’s training accuracy and affecting the final evaluation metric.
Figure 5 and Figure 6 demonstrate that the majority of the texts in the dataset are shorter than 200 words, with only a small portion exceeding 200. Sentences with a text length of less than 200 words appear most frequently, and at a text length of 201 the cumulative frequency of sentences reaches 0.94. Consequently, this paper considers both the length of the text and its frequency of occurrence and selects 200 as the length of the input text.
The selection of the number of iterations is a critical factor in determining the model’s quality. Too many iterations can lead to over-fitting, while too few can prevent the model from reaching its best state. As Figure 7 and Table 3 demonstrate, the model’s performance begins to decline when the number of iterations exceeds 16, and the performance of the model improves when it is less than 16. After analyzing the experimental data, it is concluded that 16 is the optimal number of iterations.
Model training is susceptible to over-fitting, as evidenced by a low loss function on the training data and a high prediction accuracy, yet a large loss function and low accuracy on the test data. To prevent this, we introduce a dropout value, which makes the model more generalizable by reducing the complex co-adaptive relationships between neurons. Experimentation has shown that the model performs best when the dropout value is 0.45, thus preventing over-fitting. The outcomes are depicted in Table 4 and Figure 8.
The batch size, which is the number of samples selected for one training step, influences the optimization degree and speed of the model. By setting the batch size, the model selects a batch of data for processing at each step of the training process. If the batch size is too large [30], the network tends to converge to sharp minima, potentially resulting in poor generalization. To ensure the best training effect, an appropriate batch size should be chosen. Experimentally, when the batch size is set to 16, the convergence accuracy is maximized, as illustrated in Table 5 and Figure 9.
The learning rate determines whether the objective function can converge to a local minimum and how quickly it does so. An appropriate learning rate makes the objective function converge to a local minimum within a reasonable time. If the learning rate is too large, the loss will explode; if it is too small, the loss will barely change for a long time. In this paper, we experiment with different learning rates, observe the relationship between the learning rate and the loss, and find the learning rate at which the loss decreases fastest. The results, shown in Table 6 and Figure 10, indicate that the model performs best at a learning rate of 0.0001, at which the loss decreases the fastest.
Comparison training was performed with other hyperparameter value combinations to ensure that the hyperparameter combination in this paper is optimal. Finally, the hyperparameters of the model in this paper, as well as those of the comparison model, are detailed in Table 7.
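For reference, the Table 7 settings can be expressed as a training configuration; `model`, `X_train`, and `y_train` are hypothetical names carried over from the earlier sketches, and the validation split used for tuning is an assumption.

```python
from keras.optimizers import Adam

# Hyperparameters from Table 7
EMB_DIM       = 128     # word-vector dimension
KERNEL_SIZE   = 3       # convolution kernel size
NUM_FILTERS   = 250     # number of convolution kernels
LSTM_UNITS    = 64      # BiLSTM hidden-layer size
MAX_LEN       = 200     # maximum input text length
EPOCHS        = 16
DROPOUT       = 0.45
BATCH_SIZE    = 16
LEARNING_RATE = 1e-4

# `model`, `X_train`, `y_train` come from the earlier sketches (hypothetical names);
# y_train is assumed to be one-hot encoded (e.g., via keras.utils.to_categorical)
model.compile(optimizer=Adam(lr=LEARNING_RATE),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train,
          epochs=EPOCHS, batch_size=BATCH_SIZE,
          validation_split=0.1)   # tuning on training data only, as described above
```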

5.5. Model Comparison

To assess the performance of the proposed model, we conducted comparative experiments using eight deep learning models with similar architectures [19,23,31,32,33,34,35,36].
Analysis of Table 8 reveals that the proposed model exhibits a superior accuracy rate, Kappa coefficient, and weighted F1 score. This is due to the incorporation of the attention mechanism and BiLSTM network into the CNN. The BiLSTM network facilitates the extraction of features from the input text sequence, taking into account both past and future information; adding it alone (CNN+BiLSTM) already yields a 1.07% improvement in accuracy and a 2.26% improvement in weighted F1 score over the CNN model alone. The attention mechanism further enhances the model's performance by allowing it to focus on emotion-related words or sentences while discarding emotion-irrelevant text content. As a result, the proposed model outperforms the other deep learning models.
The confusion matrix in Figure 11 reveals that the prediction accuracies of the positive, neutral, and negative labels in the test set were 87.42%, 90.30%, and 95.83%, respectively. Notably, the accuracy for neutral and negative emotions exceeded 90%, indicating that this model is effective at multi-class sentiment analysis.
As shown in Table 9, three mis-predicted examples were selected. Analysis suggests that the prediction errors stem from ambiguity in the label assignment and from the tendency of some users to give a good rating even when they are not fully satisfied with the product. In addition, the semantic complexity of the Chinese dataset makes the comments harder to interpret, which also hinders correct recognition by the model.

5.6. Ablation Experiment

To assess the impact of the attention mechanism and BiLSTM on model performance, an ablation experiment was conducted.
The results of the ablation experiment, shown in Table 10 and Figure 12, reveal that introducing the attention mechanism alone into the sentiment analysis model yields poor performance, with an accuracy rate of 60.36%, a weighted F1 score of 0.5542, and a Kappa coefficient of 0.3731. When only BiLSTM is used, the model can process text context information, resulting in an improved accuracy rate, Kappa coefficient, and weighted F1 score. The pure attention mechanism takes the shortest time, only 10.3 min, but despite the short training time its performance is the worst. BiLSTM and the CNN alone also take less time to train than the model proposed in this paper, but their accuracy rate, Kappa coefficient, and weighted F1 score are lower; the remaining combined models require more training time than the proposed model without matching its performance.
The combination of the attention mechanism and BiLSTM enables the model not only to consider text information from both directions but also to focus on emotion-related sentences, thus improving the model's performance without a large increase in training time. This paper's model yields an accuracy 1.85% higher than that of the CNN alone, 31.15% higher than that of ATT alone, and 0.78% higher than that of CNN+BiLSTM, which demonstrates that the combination significantly enhances the model's feature extraction and classification capabilities, resulting in optimal performance.

6. Conclusions

Sentiment analysis is a significant branch of NLP, and its application to e-commerce platforms is highly valued by both consumers and businesses. This paper proposes a model architecture, AB-CNN, which combines an attention mechanism and BiLSTM with a CNN to enhance the accuracy of multi-classification models. The attention mechanism extracts words or sentences related to emotion, while BiLSTM simultaneously captures contextual text information, further strengthening the emotional signal and improving the model's classification prediction performance. Finally, the proposed model is benchmarked against existing literature on similar architectures, yielding the best experimental results.
The limitations of the current work include the following: (1) we only compared models with similar architectures, and (2) the hyperparameter settings favored our proposed approach. In future work, the model could be improved by using BERT or Transformer-based pre-training models. Additionally, given the complexity of Chinese text, a powerful Chinese sentiment dictionary could be introduced to improve prediction accuracy.

Author Contributions

Conceptualization, H.L.; Methodology, H.L. and Y.L.; Software, Y.M.; Validation, Y.L.; Resources, H.Z.; Writing—original draft preparation, Y.L.; Writing—review and editing, H.Z. and H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Project of Science and Technology Tackling Key Problems in Henan Province of China under grant no. 222102210234.

Data Availability Statement

The data presented in this study can be provided upon request.

Acknowledgments

The authors would like to thank the editors and the anonymous reviewers for their helpful comments and suggestions, which have improved the presentation.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hu, R.; Rui, L.; Zeng, P.; Chen, L.; Fan, X. Text sentiment analysis: A review. In Proceedings of the 2018 IEEE 4th International Conference on Computer and Communications (ICCC), Chengdu, China, 7–10 December 2018; pp. 2283–2288.
  2. Peng, H.; Cambria, E.; Hussain, A. A review of sentiment analysis research in Chinese language. Cogn. Comput. 2017, 9, 423–435.
  3. Zargari, H.; Zahedi, M.; Rahimi, M. GINS: A Global intensifier-based N-Gram sentiment dictionary. J. Intell. Fuzzy Syst. 2021, 40, 11763–11776.
  4. Yan, X.; Huang, T. Research on construction of Tibetan emotion dictionary. In Proceedings of the 2015 18th International Conference on Network-Based Information Systems, Taipei, Taiwan, 2–4 September 2015; pp. 570–572.
  5. Wang, S.M.; Ku, L.W. ANTUSD: A large Chinese sentiment dictionary. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), Portorož, Slovenia, 23–28 May 2016; pp. 2697–2702.
  6. Zhang, S.; Wei, Z.; Wang, Y.; Liao, T. Sentiment analysis of Chinese micro-blog text based on extended sentiment dictionary. Future Gener. Comput. Syst. 2018, 81, 395–403.
  7. Jia, K.; Li, Z. Chinese micro-blog sentiment classification based on emotion dictionary and semantic rules. In Proceedings of the 2020 International Conference on Computer Information and Big Data Applications (CIBDA), Guiyang, China, 17–19 April 2020; pp. 309–312.
  8. Xu, G.; Yu, Z.; Yao, H.; Li, F.; Meng, Y.; Wu, X. Chinese text sentiment analysis based on extended sentiment dictionary. IEEE Access 2019, 7, 43749–43762.
  9. Tran, T.K.; Phan, T.T. A hybrid approach for building a Vietnamese sentiment dictionary. J. Intell. Fuzzy Syst. 2018, 35, 967–978.
  10. Wei, W.; Cao, X.; Li, H.; Shen, L.; Feng, Y.; Watters, P.A. Improving speech emotion recognition based on acoustic words emotion dictionary. Nat. Lang. Eng. 2021, 27, 747–761.
  11. Miao, G.H. Emotion Mining and Simulation Analysis of Microblogging Based on Word2vec and SVM. J. Electron. Sci. Technol. 2018, 31, 81–83.
  12. Xue, J.; Liu, K.; Lu, Z.; Lu, H. Analysis of Chinese Comments on Douban Based on Naive Bayes. In Proceedings of the 2nd International Conference on Big Data Technologies, Jinan, China, 28–30 August 2019; pp. 121–124.
  13. Wawre, S.V.; Deshmukh, S.N. Sentiment classification using machine learning techniques. Int. J. Sci. Res. 2016, 5, 819–821.
  14. Kamal, A.; Abulaish, M. Statistical features identification for sentiment analysis using machine learning techniques. In Proceedings of the 2013 International Symposium on Computational and Business Intelligence, New Delhi, India, 24–26 August 2013; pp. 178–181.
  15. Rathor, A.S.; Agarwal, A.; Dimri, P. Comparative study of machine learning approaches for Amazon reviews. Procedia Comput. Sci. 2018, 132, 1552–1561.
  16. Zhang, Y.; Wang, L. Design of Employee Comment Sentiment Analysis Platform Based on AE-SVM Algorithm. J. Phys. Conf. Ser. 2020, 1575, 012019.
  17. Mitroi, M.; Truică, C.O.; Apostol, E.S.; Florea, A.M. Sentiment analysis using topic-document embeddings. In Proceedings of the 2020 IEEE 16th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania, 3–5 September 2020; pp. 75–82.
  18. Teng, F.; Zheng, C.; Li, W. Multidimensional topic model for oriented sentiment analysis based on long short-term memory. J. Comput. Appl. 2016, 36, 2252.
  19. Yin, X.; Liu, C.; Fang, X. Sentiment analysis based on BiGRU information enhancement. J. Phys. Conf. Ser. 2021, 1748, 032054.
  20. He, J.; Zou, M.; Liu, P. Convolutional neural networks for Chinese sentiment classification of social network. In Proceedings of the 2017 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 6–9 August 2017; pp. 1877–1881.
  21. Zeng, J.; Ma, X.; Zhou, K. Enhancing attention-based LSTM with position context for aspect-level sentiment classification. IEEE Access 2019, 7, 20462–20471.
  22. Zhou, J.; Lu, Y.; Dai, H.N.; Wang, H.; Xiao, H. Sentiment analysis of Chinese microblog based on stacked bidirectional LSTM. IEEE Access 2019, 7, 38856–38866.
  23. Su, Y.J.; Chen, C.H.; Chen, T.Y.; Cheng, C.C. Chinese microblog sentiment analysis by adding emoticons to attention-based CNN. J. Internet Technol. 2020, 21, 821–829.
  24. Truică, C.O.; Apostol, E.S.; Șerban, M.L.; Paschke, A. Topic-based document-level sentiment analysis using contextual cues. Mathematics 2021, 9, 2722.
  25. Petrescu, A.; Truică, C.O.; Apostol, E.S.; Paschke, A. EDSA-Ensemble: An Event Detection Sentiment Analysis Ensemble Architecture. arXiv 2023, arXiv:2301.12805.
  26. Treisman, A. Features and objects in visual processing. Sci. Am. 1986, 255, 114B–125.
  27. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
  28. Huang, P.; Zheng, L.; Wang, Y.; Zhu, H.J. Sentiment Analysis of Chinese Text Based on CNN-BiLSTM Serial Hybrid Model. In Proceedings of the 2021 10th International Conference on Computing and Pattern Recognition, Shanghai, China, 15–17 October 2021; pp. 309–313.
  29. Xu, F.; Zhang, X.; Xin, Z.; Yang, A. Investigation on the Chinese text sentiment analysis based on convolutional neural networks in deep learning. Comput. Mater. Con. 2019, 58, 697–709.
  30. Keskar, N.S.; Mudigere, D.; Nocedal, J.; Smelyanskiy, M.; Tang, P.T.P. On large-batch training for deep learning: Generalization gap and sharp minima. arXiv 2016, arXiv:1609.04836.
  31. Long, F.; Zhou, K.; Ou, W. Sentiment analysis of text based on bidirectional LSTM with multi-head attention. IEEE Access 2019, 7, 141960–141969.
  32. Liao, S.; Wang, J.; Yu, R.; Sato, K.; Cheng, Z. CNN for situations understanding based on sentiment analysis of twitter data. Procedia Comput. Sci. 2017, 111, 376–381.
  33. Zhang, W.; Li, L.; Zhu, Y.; Yu, P.; Wen, J. CNN-LSTM neural network model for fine-grained negative emotion computing in emergencies. Alex. Eng. J. 2022, 61, 6755–6767.
  34. Jiang, M.; Zhang, W.; Zhang, M.; Wu, J.; Wen, T. An LSTM-CNN attention approach for aspect-level sentiment classification. J. Comput. Methods Sci. Eng. 2019, 19, 859–868.
  35. Gan, C.; Feng, Q.; Zhang, Z. Scalable multi-channel dilated CNN–BiLSTM model with attention mechanism for Chinese textual sentiment analysis. Future Gener. Comput. Syst. 2021, 118, 297–309.
  36. Miao, Y.; Ji, Y.; Peng, E. Application of CNN-BiGRU Model in Chinese short text sentiment analysis. In Proceedings of the 2019 2nd International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 20–22 December 2019; pp. 510–514.
Figure 1. The computational process of the attention mechanism.
Figure 2. BiLSTM model structure.
Figure 3. CNN text classification model.
Figure 4. Structure of the AB-CNN model.
Figure 5. Sentence length and frequency statistics.
Figure 6. Graph of the cumulative distribution function of sentence length.
Figure 7. Selection of epochs.
Figure 8. Selection of dropout value.
Figure 9. Selection of batch size.
Figure 10. Selection of learning rate.
Figure 11. Test set confusion matrix.
Figure 12. Comparison of experimental ablation models.
Table 1. Introduction to the dataset.

Category of Emotion | Examples of Dataset Contents | Train Sets | Test Sets
Positive | “The baby is fine, the seller is very nice!” | 6443 | 1590
Neutral | “The sound function is better! But there are drawbacks!” | 3479 | 876
Negative | “No delivery at all! Waste of money!” | 6951 | 1752
Total | — | 16,873 | 4218
Table 2. Kappa coefficient table.

Coefficient | 0–0.2 | 0.2–0.4 | 0.4–0.6 | 0.6–0.8 | 0.8–1.0
Level | Slight | Fair | Moderate | Substantial | Almost perfect
Table 3. Selection of epochs.

Epochs | Accuracy | Kappa | W-F1 Score
4 | 0.8367 | 0.7431 | 0.6801
8 | 0.8978 | 0.8397 | 0.8722
12 | 0.9033 | 0.8483 | 0.8934
16 | 0.9061 | 0.8528 | 0.9126
20 | 0.8917 | 0.8304 | 0.8843
24 | 0.8774 | 0.8082 | 0.8542
Table 4. Selection of dropout value.

Dropout | Accuracy | Kappa | W-F1 Score
0.15 | 0.9045 | 0.8507 | 0.8118
0.25 | 0.8985 | 0.8411 | 0.8431
0.35 | 0.8988 | 0.8414 | 0.8846
0.45 | 0.9078 | 0.8555 | 0.9213
0.55 | 0.8940 | 0.8342 | 0.8943
0.65 | 0.8895 | 0.8264 | 0.8671
Table 5. Selection of batch size.

Batch Size | Accuracy | Kappa | W-F1 Score
16 | 0.9083 | 0.8563 | 0.8943
32 | 0.8971 | 0.8389 | 0.8617
64 | 0.9014 | 0.8455 | 0.8562
128 | 0.8836 | 0.8166 | 0.8215
256 | 0.8793 | 0.8111 | 0.7358
Table 6. Selection of learning rate.

Learning Rate | Accuracy | Loss | Kappa | W-F1 Score
0.01 | 0.3770 | 1.0600 | 0.000 | 0.6452
0.001 | 0.8696 | 0.6178 | 0.7951 | 0.7213
0.0001 | 0.9002 | 0.3036 | 0.8438 | 0.8843
0.00001 | 0.8867 | 0.3721 | 0.8222 | 0.6774
0.000001 | 0.5142 | 0.9329 | 0.2133 | 0.5342
Table 7. The setting of model hyperparameters.

Hyperparameter | Hyperparameter Value
Dimension of word vector | 128
Convolution kernel size | 3
Number of convolution kernels | 250
BiLSTM hidden-layer size | 64
Maximum input text length | 200
Epoch number | 16
Dropout value | 0.45
Batch size | 16
Learning rate | 0.0001
Table 8. Deep learning models performance comparison.

Methods | Accuracy | Kappa | W-F1 Score
BiGRU [19] | 0.9004 | 0.8441 | 0.8317
ATT+CNN [23] | 0.9125 | 0.8629 | 0.8622
ATT+BiLSTM [31] | 0.8976 | 0.8397 | 0.8803
CNN [32] | 0.8966 | 0.8384 | 0.8546
LSTM+CNN [33] | 0.8791 | 0.8103 | 0.8671
ATT+LSTM+CNN [34] | 0.9016 | 0.8503 | 0.8869
CNN+BiLSTM [35] | 0.9073 | 0.8555 | 0.8772
CNN+BiGRU [36] | 0.8976 | 0.8402 | 0.8643
Proposed | 0.9151 | 0.8673 | 0.8976
Table 9. Examples of mispredictions.

Contents | True | Predict
It’s okay, but it’s too slow, the logistics took 4 days. | Positive | Negative
I don’t know if it’s good or not, I bought it for a friend, so I’ll give it five points. | Positive | Negative
Color, shape, okay, but the design is not so good. | Positive | Negative
Table 10. Comparison of experimental ablation models.

Methods | Accuracy | Kappa | W-F1 Score | Time (mins)
CNN | 0.8966 | 0.8384 | 0.8546 | 36.4
ATT | 0.6036 | 0.3731 | 0.5542 | 10.3
BiLSTM | 0.8770 | 0.8068 | 0.8518 | 39.2
ATT+CNN | 0.9125 | 0.8629 | 0.8622 | 49.5
ATT+BiLSTM | 0.8976 | 0.8397 | 0.8803 | 51.2
CNN+BiLSTM | 0.9073 | 0.8555 | 0.8772 | 54.3
Proposed | 0.9151 | 0.8673 | 0.8976 | 47.8
