Article

Grouped Contrastive Learning of Self-Supervised Sentence Representation

1 College of Computer Science, Sichuan University, Chengdu 610065, China
2 Chengdu Ruibei Yingte Information Technology Co., Ltd., Chengdu 610054, China
3 Sichuan Zhiqian Technology Co., Ltd., Chengdu 610065, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9873; https://doi.org/10.3390/app13179873
Submission received: 21 July 2023 / Revised: 28 August 2023 / Accepted: 30 August 2023 / Published: 31 August 2023
(This article belongs to the Special Issue High-Performance Computing, Networking and Artificial Intelligence)

Abstract

This paper proposes Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR), a method that learns an effective and meaningful representation of sentences. Previous works take maximizing the similarity between two vectors as the objective of contrastive learning, which suffers from the high dimensionality of the vectors. In addition, most previous works adopt discrete data augmentation to obtain positive samples and directly employ a contrastive framework from computer vision to perform contrastive training, which can hamper contrastive training because text data are discrete and sparse compared with image data. To solve these issues, we design a novel contrastive learning framework, GCLSR, which divides the high-dimensional feature vector into several groups and computes each group's contrastive loss separately, thereby exploiting more local information and eventually obtaining a more fine-grained sentence representation. In addition, GCLSR introduces a new self-attention mechanism and a continuous, partial-word vector augmentation (PWVA). For discrete and sparse text data, self-attention helps the model focus on informative words by measuring the importance of every word in a sentence. With PWVA, GCLSR obtains high-quality positive samples for contrastive learning. Experimental results demonstrate that our proposed GCLSR achieves encouraging results on the challenging datasets of the semantic textual similarity (STS) task and transfer tasks.

1. Introduction

Representation learning of sentences involves learning a meaningful representation for a sentence. Most downstream tasks in natural language processing (NLP) are implemented with sentence representation [1,2,3,4,5].
Recently, researchers have achieved great advances in sentence representation based on contrastive learning with pre-trained language models [6,7,8,9,10]. On the one hand, large-scale pre-trained language models (PLMs), typified by BERT [11], are trained with unlabeled data and have improved the state-of-the-art results in most downstream tasks. Therefore, PLMs are applied to various real scenarios, such as text generation [8], named entity recognition [12], question answering [13], and translation [13]. On the other hand, unsupervised representation learning based on contrastive learning has advanced the development of computer vision [14,15,16,17]. Therefore, many researchers combine PLMs with contrastive learning to conduct sentence representation tasks [18,19]. For example, Wu et al. [20] adopt back-translation as the data augmentation method to produce positive samples for contrastive learning and use PLMs as the backbone to obtain semantic features of sentences, achieving a promising result for sentence representation. Gao et al. [21] take the standard dropout mask of the transformer as the data augmentation method and cosine similarity as the contrastive objective function, producing a meaningful sentence representation.
However, there are issues with implementing contrastive learning in sentence representation: (a) An appropriate data augmentation method is needed to produce positive samples for contrastive learning. In contrastive training, the semantic gap within a positive pair should be narrow; improper data augmentation may therefore change the semantic information of sentences, making it difficult to improve performance. (b) Text data are sparse and discrete. Unlike image data, where the information between adjacent pixels is continuous, the information in text data is discrete, so the model may fail to learn distinguishing features through contrastive learning. (c) Computing similarity between high-dimensional vectors can lose local information. Generally, the objective of contrastive learning is to maximize the similarity of high-dimensional vectors, which does not exploit the local information of the vectors well and can affect performance.
To solve the above issues, we propose Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR). GCLSR adopts continuous and partial data augmentation to obtain high-quality positive samples for contrastive learning. Because text data are discrete and sparse, GCLSR employs a self-attention mechanism to focus on informative words by measuring the importance of every word in a sentence. To better exploit high-dimensional feature vectors, GCLSR uses grouped contrastive learning to disentangle more of their local information.
The contributions of this paper are summarized as follows:
  • We propose a new data augmentation method called partial-word vector augmentation (PWVA) to obtain positive samples for contrastive learning. PWVA performs data augmentation on only part of the word vectors in the word embedding space of a sentence. In this way, the positive sample pairs retain more of the original semantic information, which enhances and facilitates contrastive learning.
  • We design a new computation method of self-attention to help the model focus on the informative words of a sentence. Experimental results show that the use of self-attention can enhance the representation of discrete and sparse text data.
  • We design a new paradigm of contrastive learning called the Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR), which can make use of more local information of high-dimensional feature vectors.
  • We evaluate GCLSR on different datasets. Experimental results demonstrate that our proposed GCLSR achieves promising results on sentence representation. Additionally, we further investigate the effectiveness of GCLSR through an ablation study and explore possible implementation schemes based on our method.
The rest of this paper is organized as follows: The related works on representation learning based on contrastive learning, text data augmentation, and self-attention are introduced in Section 2. Our proposed GCLSR is presented in Section 3. Section 4 and Section 5 respectively evaluate and investigate GCLSR. Conclusions and future work are presented in Section 6.

2. Related Work

2.1. Representation Learning Based on Contrastive Learning

Contrastive learning obtains promising results in representation learning [14,17,22]. Generally, a Siamese network is used to construct the contrastive framework and conduct contrastive training [14].
In the computer vision domain, contrastive learning achieves significant improvements in image representation. SimCLR [14] uses an encoder and a projection head as the contrastive framework, which advances the state-of-the-art results for image representation. BYOL [17] designs a momentum encoder to avoid collapse during contrastive training and obtains encouraging results. SimSiam [16] uses BYOL's contrastive framework but changes the way the network's parameters are updated. Surprisingly, the stop-gradient operation not only prevents the model from collapsing but also yields competitive results on image representation benchmarks.
Given contrastive learning's promising results for image representation, researchers have started to adopt it to obtain high-quality sentence representations. CERT [23] augments a sentence by back-translation and performs contrastive training using the contrastive framework of MoCo. CMV-BERT [24] adopts different tokenizers to conduct sentence augmentation and performs contrastive training with the SimSiam framework. ConSERT [25] performs data augmentation (such as token shuffling, adversarial attack, cutoff, and dropout) on the word vectors of BERT to obtain positive samples and achieves encouraging results. More details about contrastive learning in sentence representation are shown in Table 1.
While contrastive learning has achieved great success in sentence representation, the aforementioned methods still have deficiencies that hamper performance: (1) Most methods directly reuse a contrastive learning pipeline designed for computer vision, which can hamper contrastive training because text data are discrete and sparse compared with image data. (2) Well-performing pre-trained models (such as BERT) are adopted as the backbone network of contrastive learning, so these works cannot evaluate how well a lightweight model performs on sentence representation with contrastive learning; after all, pre-trained models already work well on NLP tasks. (3) Improper data augmentation can change the original semantics of a sentence. Most methods use discrete data augmentation to produce positive samples for contrastive training, which can deteriorate the original semantics. Different from the aforementioned methods, we design a dedicated contrastive learning framework for sentence representation, namely GCLSR. To obtain high-quality positive samples, GCLSR uses partial-word vector augmentation (PWVA), a continuous form of data augmentation that preserves more of the original semantics of sentences. Further, GCLSR uses a lightweight model, TextCNN, to explore the effectiveness of contrastive learning for sentence representation.

2.2. Text Data Augmentation

Data augmentation is an effective strategy to improve the performance and stability of training. Wei et al. [26] proposed a popular data augmentation method called EDA for text classification and achieved promising results. Wang et al. [27] use k-nearest-neighbor word vectors as positive samples. Guo et al. [28] obtain positive sample pairs by performing linear interpolation between word vectors.
While many data augmentation methods obtain encouraging results in NLP, none is dedicated to contrastive training. Generally, the positive samples used for contrastive training are produced by data augmentation, so the above approaches cannot be directly employed to generate positive samples, for the following reasons: (1) High semantic similarity should be preserved between positive samples; otherwise, the contrastive model can easily collapse. (2) Data augmentation should be applied to only part of the data when producing positive samples. In this way, more of the original semantics is preserved between positive samples, which helps the model learn distinguishing features more easily. (3) The augmentation of text should be as continuous as possible. Most of the methods above, such as EDA and back-translation, are discrete, which may make contrastive training unstable and hurt the generalization of the model. Different from existing data augmentation methods, our proposed PWVA is a continuous data augmentation strategy. PWVA augments only part of the words of a sentence in the word embedding space, which preserves more of the original semantics between positive samples and facilitates contrastive training.

2.3. Self-Attention in Language Model

Great progress has been made in the development of attention since Bahdanau et al. [29] adopted attention to enhance the performance of NLP tasks. Devlin et al. [11] designed an attention-based encoder to process sentences and achieved great performance on various NLP tasks. However, it requires additional operations (such as position-wise feed-forward networks and layer normalization) to ensure stable training, which makes it difficult to deploy on practical, lightweight computational platforms. Different from the method proposed by Devlin et al. [11], we design a self-attention mechanism with low computational cost that measures the importance of words in a sentence without any additional operations. In addition, to help a lightweight model measure the importance of a word to a sentence, we slightly rewrite the computation of self-attention. In this way, the use of self-attention in contrastive learning can help the model focus on the informative words of a sentence. The details of our proposed self-attention method are given in Section 3.

3. Methodology

As discussed above, contrastive learning can be conducted by mainly obtaining positive samples and designing a contrastive framework. In this paper, we propose a Grouped Contrastive Learning of self-supervised Sentence Representation (GCLSR). Figure 1 illustrates the overall architecture and training pipeline of GCLSR. As shown in Figure 1, GCLSR contains three parts: partial-word vector augmentation (introduced in Section 3.1), self-attention (introduced in Section 3.2), and the GCLSR network (introduced in Section 3.3). The upper right plot includes the details of the GCLSR network. The lower right plot is the visualization of PWVA (introduced in Section 3.1).

3.1. Partial Word Vector Augmentation

As discussed above, data augmentation in contrastive learning is performed to obtain positive samples. However, most existing methods are discrete and applied to every word of a sentence, which can deteriorate the original semantic information of discrete and sparse text data. Therefore, we design a continuous and partial-word vector augmentation (PWVA) for contrastive learning. A word vector has fixed dimensionality, and every element in it is a real value, so a word vector can be treated as a 1-D discrete signal and processed with digital-signal-processing techniques. Our proposed PWVA implements data augmentation based on this insight. Specifically, PWVA consists of two probabilistic choices. Let $W = \{w_i \in \mathbb{R}^d\}_{i=1}^{N}$ be the $N$ word vectors of dimensionality $d$. The first probabilistic choice of PWVA is:
$w_{aug} = \rho\left(A_{gwn}(w_i), A_{rzs}(w_i), A_{ifft}(w_i), A_{rbn}(w_i); p_1, p_2, p_3, p_4\right),$ (1)
where $\rho(\cdot)$ is a function that chooses a data augmentation strategy from $A_{gwn}$ (Gaussian White Noise, GWN), $A_{rzs}$ (Random Zero Setting, RZS), $A_{ifft}$ (Inverse Fast Fourier Transformation, IFFT), and $A_{rbn}$ (Random Background Noise, RBN) with probabilities $p_1$, $p_2$, $p_3$, and $p_4$, respectively, and $w_{aug}$ denotes the augmented word vectors. The second choice of PWVA can be expressed as:
$w_{pwva} = \varrho\left(w_{aug}, w_i; p\right),$ (2)
where $\varrho(\cdot)$ is a function that selects the final PWVA output $w_{pwva}$ from $w_{aug}$ and $w_i$ with probability $p$. The visualization of PWVA is shown in the lower right plot of Figure 1. The four data augmentation strategies employed in Equation (1) are explained below.
  • Gaussian White Noise (GWN)
    In order to improve the robustness of our model, we introduce Gaussian white noise (as illustrated in Figure 2a) into the word vectors. This approach is inspired by the work of Uchaikin et al. [30]. Gaussian white noise can be mathematically represented as follows:
    $A_{gwn}(w_i) = w_i + \lambda \cdot \mathcal{N}(0, 1),$
    where $\lambda$ represents the trade-off parameter, and $\mathcal{N}(0, 1)$ refers to the standard normal distribution.
  • Random Zero Setting (RZS)
    To mitigate data dependence and enhance generalization ability, we employ random zero setting, $A_{rzs}(w_i) = \mathrm{Dropout}(w_i)$ (as illustrated in Figure 2b), which randomly sets some components of a word vector to zero.
  • Inverse Fast Fourier Transformation (IFFT)
    To exploit features in the frequency domain, we apply the fast Fourier transform (FFT) to the word vectors and then the inverse fast Fourier transform (IFFT), as illustrated in Figure 2c, to convert them back to the time domain. The word vectors are slightly modified by this round trip, which enhances the resilience of the data boundary. The IFFT-based augmentation can be expressed as follows:
    $A_{ifft}(w_i) = \mathrm{Real}\left(\mathrm{IFFT}\left(\mathrm{FFT}(w_i)\right)\right),$
    where $\mathrm{Real}(\cdot)$ denotes the real part.
  • Random Background Noise (RBN)
    Random background noise cannot be learned by a model, as stated in [31]. Therefore, to enhance training stability, we introduce random background noise into the word vectors, as depicted in Figure 2d. Random background noise (RBN) is formulated as follows:
    $A_{rbn}(w_i) = w_i + \mathrm{uniform}(0, 0.1),$
    where $\mathrm{uniform}(0, 0.1)$ denotes noise drawn from the uniform distribution on $[0, 0.1]$.
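To make the two probabilistic choices concrete, here is a minimal NumPy sketch of PWVA. Only the four augmentation operations and the two-stage selection follow the description above; the value of $\lambda$, the strategy and selection probabilities, and all function names are illustrative assumptions.

```python
import numpy as np

def gwn(w, lam=0.01):
    """Gaussian white noise: w + lambda * N(0, 1) (lambda is illustrative)."""
    return w + lam * np.random.randn(*w.shape)

def rzs(w, drop=0.1):
    """Random zero setting: dropout-style zeroing of vector components."""
    return w * (np.random.rand(*w.shape) > drop)

def ifft_aug(w):
    """FFT -> IFFT round trip; keep the real part (slightly perturbs the data boundary)."""
    return np.real(np.fft.ifft(np.fft.fft(w)))

def rbn(w):
    """Random background noise drawn from uniform(0, 0.1)."""
    return w + np.random.uniform(0.0, 0.1, size=w.shape)

def pwva(W, p_strategies=(0.25, 0.25, 0.25, 0.25), p_keep=0.5):
    """Partial-word vector augmentation on word vectors W of shape (N, d).

    First choice: pick one of the four strategies per word with probabilities p_strategies.
    Second choice: with probability p_keep, keep the original word vector, so only
    part of the words in a sentence are actually augmented.
    """
    strategies = (gwn, rzs, ifft_aug, rbn)
    out = W.copy()
    for i in range(W.shape[0]):
        w_aug = strategies[np.random.choice(len(strategies), p=p_strategies)](W[i])
        out[i] = W[i] if np.random.rand() < p_keep else w_aug
    return out

# Two augmented views of the same sentence (AUG1 and AUG2) form a positive pair.
W = np.random.randn(12, 300)        # 12 words, 300-dim word2vec-style vectors
aug1, aug2 = pwva(W), pwva(W)
```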
In summary, PWVA, being continuous and partial, can enhance and facilitate contrastive learning. The "continuous" property ensures that there is no semantic gap in the augmented word vectors, while the "partial" property retains more of the original semantics of the word vectors. In this way, the model can readily acquire a richer set of discriminative features by learning the disparities between the original word vectors and their augmented counterparts. In contrast, existing methods learn distinguishing features only between two augmented views, which makes contrastive training more difficult. This insight is the main contribution of PWVA. In addition, as shown in the lower right plot of Figure 1, we present a visualization of the PWVA process. Specifically, we apply PWVA twice to the word vector space to obtain two positive views, AUG1 and AUG2. In AUG1 and AUG2, the orange boxes represent augmented word vectors, while the yellow boxes indicate word vectors that have not been augmented. As a result, there are four possible combinations, B1, B2, B3, and B4, between AUG1 and AUG2. For example, B1 represents the scenario where word vector W1 in AUG1 is augmented while W1 in AUG2 remains unchanged. The term "partial" indicates that some word vectors in both AUG1 and AUG2 are not augmented, thus preserving more of their original semantics for contrastive learning. The results of the corresponding ablation study are presented in Section 5.

3.2. Self-Attention of the Word Vectors

We perform PWVA to obtain high-quality positive samples. However, a lightweight model may not effectively capture the importance of each word in a sentence. Therefore, we design a self-attention mechanism to capture the importance of words and facilitate contrastive training. Self-attention has been applied in many scenarios with great success. Inspired by the work of [11], we design a dedicated self-attention method to help the model focus on informative word vectors in discrete and sparse text data. Note that the word vectors are produced by pre-trained word2vec [32] before data augmentation is carried out, so the word vectors already contain some semantic information. Applying self-attention to the word vectors then makes the model focus on the features that are useful for distinguishing semantic information. The details are shown in Figure 3. Note that our self-attention method differs from BERT's in two main respects: (1) we fill the scores of padding tokens with a constant value of $1 \times 10^{9}$ after computing the scores; (2) we first compute the importance of each word to the sentence and then multiply it by the original word vector. More details are given in Algorithm 1. To verify the effectiveness of our method, we conduct an experiment comparing performance on an STS task with the state-of-the-art model proposed by [11]. We observe that our proposed method increases the average Spearman's correlation from 62.97 to 66.75 (+3.78) with the same time complexity $O(n^2)$. In addition, we visualize the process of our proposed self-attention method. As shown in Figure 3, let $N$ be the number of words in a sentence, $MASK$ be the mask matrix (whose entries equal 0 for padding tokens), and $s_{i,j} = x_i \cdot x_j$ be the attention of word $x_i$ to word $x_j$ in the sentence. In particular, $S_i = \sum_{j=1}^{N} s_{i,j}\ (i = 1, 2, \ldots, N)$ can represent the importance of word $x_i$ to the sentence. We observe from Figure 3 that the words with the largest "importance" values are "comic", "relief", "performances", and "immaculate", which helps the model attend to the crucial information in the sentence.
Algorithm 1: Self-attention of Word Vectors
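Since Algorithm 1 summarizes the computation compactly, the following NumPy sketch spells out the steps described in the text: dot-product scores between word vectors, filling the scores of padding tokens, summing each row to obtain per-word importance, and re-weighting the original word vectors. The fill value, the normalization of the importance scores, and all names are our assumptions; Algorithm 1 in the paper is authoritative.

```python
import numpy as np

def word_importance_attention(X, mask, pad_fill=0.0):
    """Re-weight word vectors by their importance to the sentence.

    X: (N, d) word vectors; mask: (N,) with 1 for real tokens and 0 for padding.
    """
    scores = X @ X.T                                          # s_ij = x_i . x_j
    scores = np.where(mask[None, :] > 0, scores, pad_fill)    # fill scores at padding positions
    importance = scores.sum(axis=1)                           # S_i = sum_j s_ij
    importance = importance / (np.abs(importance).sum() + 1e-12)  # normalization (our assumption)
    return X * importance[:, None]                            # multiply importance by the original vectors

X = np.random.randn(16, 300)                                  # 16 token slots, 300-dim word2vec vectors
mask = np.array([1] * 10 + [0] * 6)                           # the last 6 positions are padding
X_att = word_importance_attention(X, mask)
```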

3.3. Grouped Contrastive Learning

With PWVA and self-attention, the construction of positive samples is complete. Next, we introduce grouped contrastive learning to obtain the sentence representation. The usual contrastive learning pipeline first performs data augmentation to produce positive samples, then obtains the features computed by the backbone, and finally computes the contrastive loss [16]. Unfortunately, computing a contrastive loss between high-dimensional vectors does not exploit the local information of the vectors well. To solve this issue, we propose GCLSR to mitigate this drawback during contrastive training. As shown in Figure 1, GCLSR consists of two branches. The first branch includes the backbone, a projector [14], and a predictor [17], while the other includes only the backbone and a projector. In particular, in order to make use of the local information of the features, we first divide the features of the projector and predictor into $M$ groups of dimensionality $D$. The grouped features of the projector and predictor are denoted as $Fea_{Pro} = \{Pro_i \in \mathbb{R}^D\}_{i=1}^{M}$ and $Fea_{Pre} = \{Pre_i \in \mathbb{R}^D\}_{i=1}^{M}$, respectively. Finally, we use the negative mean of the cosine similarity as the contrastive loss [17]:
$\mathcal{L} = \mathrm{Mean}\left(\sum_{i=1}^{M}\left(-\frac{Pre_i \cdot Pro_i}{\|Pre_i\|_2 \, \|Pro_i\|_2}\right)\right),$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm. In addition, we adopt a symmetrized loss to improve performance:
$\mathcal{L}_{sym} = \frac{1}{2}\,\mathrm{Mean}\left(\sum_{i=1}^{M}\left(-\frac{Pre_i^{(1)} \cdot Pro_i^{(2)}}{\|Pre_i^{(1)}\|_2 \, \|Pro_i^{(2)}\|_2} - \frac{Pre_i^{(2)} \cdot Pro_i^{(1)}}{\|Pre_i^{(2)}\|_2 \, \|Pro_i^{(1)}\|_2}\right)\right),$
where the superscripts (1) and (2) index the two augmented views.
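Below is a minimal PyTorch sketch of the grouped loss above, assuming the projector and predictor outputs of the two augmented views have already been computed. The stop-gradient on the projector branch follows SimSiam [16]; the sum-over-groups/mean-over-batch reading of Mean(·), the value of num_groups, and all names are our assumptions (the "grouping size" in Table 11 may instead denote the per-group dimensionality).

```python
import torch
import torch.nn.functional as F

def grouped_neg_cosine(pre, pro, num_groups=16):
    """Grouped negative cosine similarity: split features into groups, compare group-by-group.

    pre: predictor output of one view, shape (batch, D_total)
    pro: projector output of the other view, shape (batch, D_total)
    """
    pro = pro.detach()                                   # stop-gradient on the projector branch
    pre_groups = pre.chunk(num_groups, dim=-1)           # M groups of D = D_total / M dimensions
    pro_groups = pro.chunk(num_groups, dim=-1)
    losses = [-F.cosine_similarity(p, z, dim=-1) for p, z in zip(pre_groups, pro_groups)]
    return torch.stack(losses).sum(dim=0).mean()         # sum over groups, mean over the batch

def symmetric_grouped_loss(pre1, pro1, pre2, pro2, num_groups=16):
    """Symmetrized grouped loss over the two augmented views."""
    return 0.5 * (grouped_neg_cosine(pre1, pro2, num_groups)
                  + grouped_neg_cosine(pre2, pro1, num_groups))
```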

4. Experiments

We systematically assess the efficacy of our novel approach, denoted as GCLSR, across seven distinct tasks focused on semantic textual similarity (STS). Moreover, we rigorously examine its performance on an additional set of seven transfer tasks. Worth highlighting is our deliberate choice of a lightweight model—TextCNN—as the foundational architecture. This decision allows us to meticulously probe the potential of contrastive learning in enhancing sentence representations. It is pertinent to underscore that our intention is not to draw comparisons to the prevailing state-of-the-art benchmarks. Furthermore, it is essential to emphasize that both the STS experiments and the transfer tasks are conducted under a fully unsupervised setting. Notably, during the training phase, no STS datasets—comprising training, validation, or test sets—are employed. This approach ensures the integrity of our experimental setup and validates the intrinsic strength of our proposed methodology.

4.1. Implementation Settings

Unless explicitly stated otherwise, we adhere to the ensuing configuration for the pre-training phase of our contrastive self-supervised methodology:
  • Backbone. We use TextCNN [33] as the default backbone. Specifically, the filter region size is [1,1,1,6,15,20]. The number of filters is 300. Note that we do not use a fully-connected (FC) layer or dropout at the end of the backbone, because this makes the results worse.
  • Projector. The projection layers include three FC layers. Every output of an FC layer has a batch normalization [34] and ReLU, except for the last FC layer. The dimension of the hidden and output layers is 4096.
  • Predictor. The prediction head has two FC layers. The hidden layer is followed by batch normalization (BN) and ReLU, while the output layer has neither. The hidden and output layers have dimensions of 1024 and 4096, respectively, forming a bottleneck architecture that substantially enhances the model's robustness [16].
  • Optimizer. We use SGD as the optimizer. The learning rate (LR) follows the linear scaling rule $\mathrm{LR} = base\_lr \times BatchSize/128$, with $base\_lr = 0.03$, and is decayed with a cosine schedule [35]. The weight decay is 0.001, and we use a 5-epoch warm-up. Additionally, the momentum is 0.9 before the warm-up epochs and 0.8 after them, which makes the model more robust (more details are given in Section 5). A minimal configuration sketch is given after this list.
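The bullets above fully specify the architecture and optimizer, so the following PyTorch sketch is one possible reading of them. The hyperparameters (region sizes, 300 filters, 4096/1024 dimensions, base LR 0.03, weight decay 0.001, batch size 512, 20 training epochs) come from the text and tables; the max-pooling in the backbone, the warm-up handling, and all module and variable names are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Backbone: parallel 1-D convolutions with region sizes (1,1,1,6,15,20), 300 filters each."""
    def __init__(self, dim=300, n_filters=300, region_sizes=(1, 1, 1, 6, 15, 20)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, n_filters, kernel_size=k) for k in region_sizes])

    def forward(self, x):                  # x: (batch, seq_len, dim) word vectors
        x = x.transpose(1, 2)              # -> (batch, dim, seq_len)
        feats = [conv(x).relu().max(dim=-1).values for conv in self.convs]  # max-pool over time (assumption)
        return torch.cat(feats, dim=-1)    # (batch, n_filters * len(region_sizes))

def projector(in_dim, dim=4096):
    """Projector: three FC layers; BN + ReLU after every layer except the last."""
    return nn.Sequential(
        nn.Linear(in_dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
        nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
        nn.Linear(dim, dim))

def predictor(dim=4096, hidden=1024):
    """Predictor: two FC layers forming a 4096 -> 1024 -> 4096 bottleneck."""
    return nn.Sequential(
        nn.Linear(dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(inplace=True),
        nn.Linear(hidden, dim))

backbone = TextCNN()
proj, pred = projector(in_dim=300 * 6), predictor()

# Optimizer: SGD with the linear LR scaling rule, 0.001 weight decay, and momentum 0.9
# (lowered to 0.8 after the 5 warm-up epochs; warm-up and the momentum switch are omitted here).
batch_size, base_lr = 512, 0.03
lr = base_lr * batch_size / 128
params = list(backbone.parameters()) + list(proj.parameters()) + list(pred.parameters())
optimizer = torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=0.001)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)  # cosine decay over 20 epochs
```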

4.2. Semantic Textual Similarity Task

The goal of the semantic textual similarity (STS) task is to evaluate the similarity between two sentences by directly computing the cosine distance [36]. The cosine distance is then correlated with a labeled similarity score (from 0 to 5) using the Pearson or Spearman correlation to obtain a matching score, which reflects the semantic similarity between the two sentences. We train our self-supervised GCLSR model with pre-trained word2vec on $10^4$ sentences randomly sampled from English Wikipedia [21]. Training stops at 20 epochs, and the best checkpoint on the validation datasets is used for testing. Finally, we use the SentEval toolkit [36] to measure our proposed method on 7 STS tasks, i.e., STS 2012–2016 [37,38,39,40,41], STS Benchmark [42], and SICK-Relatedness [43]. In these datasets, sentence pairs come from news articles, news conversations, forum discussions, headlines, and image and video descriptions. Following [21], (a) we employ the Spearman correlation as the only metric to evaluate the quality of sentence representation in STS tasks, as [21] argues that the Spearman correlation better suits the needs of the evaluation; (b) no additional networks are applied on top of the sentence representation; in other words, we directly calculate the Spearman correlation on cosine similarities (a minimal sketch is given below); (c) given that the STS data of every year include several sub-datasets, we concatenate all sub-datasets to calculate the Spearman correlation. This "concatenate" aggregation over the different subsets better matches practical applications than the alternatives.
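As a concrete illustration of point (b), the sketch below computes the metric exactly as stated: the cosine similarity of each sentence pair is correlated with its gold score via the Spearman correlation, with no learned regressor on top. The embedding dimensionality and the placeholder arrays are ours; the actual evaluation uses the SentEval toolkit.

```python
import numpy as np
from scipy.stats import spearmanr

def sts_spearman(emb_a, emb_b, gold_scores):
    """emb_a, emb_b: (n_pairs, dim) sentence embeddings; gold_scores: (n_pairs,) labels in [0, 5]."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)                 # cosine similarity of each sentence pair
    rho, _ = spearmanr(cosine, gold_scores)      # Spearman correlation with the gold scores
    return rho

emb_a, emb_b = np.random.randn(100, 1800), np.random.randn(100, 1800)   # placeholder embeddings
gold = np.random.uniform(0, 5, size=100)                                # placeholder gold scores
print(sts_spearman(emb_a, emb_b, gold))
```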
The evaluation results are shown in Table 2. We observe that all the variants we propose work well and outperform averaged word2vec embeddings. Specifically, we improve the average Spearman correlation from 44.16 to 66.75 compared with averaged word2vec embeddings, from 56.70 to 66.75 compared with the pre-trained language model BERT, from 56.57 to 66.75 compared with RoBERTa, and from 61.80 to 66.75 compared with CLEAR. Furthermore, our proposed data augmentation PWVA outperforms the compared methods EDA and back-translation on STS tasks under the proposed contrastive learning framework GCLSR_base. The application of self-attention (GCLSR_base+PWVA+self-att.) and grouping (GCLSR_base+PWVA+self-att.+GP) also improves performance. The experimental results show that GCLSR_base+PWVA+self-att.+GP achieves a better result than GCLSR_base+EDA+self-att.+GP and GCLSR_base+TransL.+self-att.+GP. In addition, we note that strong performance on the STS tasks does not inherently translate into improved results on the transfer tasks. Consequently, it is prudent to primarily consider the STS results for the purpose of comparison.

4.3. Transfer Task

The transfer tasks are used to evaluate the performance of downstream tasks using the sentence representation [36]. Generally, a classifier is added on top of the sentence representation model to evaluate transfer performance. Note that the classifier (consisting of linear layers) is trained while the sentence representation model is frozen, as sketched below. Our proposed method is tested across a spectrum of tasks, including MR [44], CR [45], SUBJ [46], MPQA [47], SST-2 [48], TREC [49], and MRPC [50]. The pre-training stage is the same as for the STS tasks. The evaluation results are shown in Table 3. We find that the overall tendency of the results is the same as for the STS tasks. However, two anomalies need to be explained. (1) The pre-trained model BERT_base obtains a better result than our proposed model on the transfer tasks. Firstly, we only chose $10^4$ sentences from Wikipedia to perform pre-training; secondly, the number of parameters of our model is far smaller than that of BERT_base. As a result, pre-training takes only a short time (about 1 h on a single Tesla V100 GPU). (2) The application of self-attention and grouping slightly hurts the performance compared with GCLSR_base+PWVA. A possible explanation is that PWVA is applied to the word vectors, which can change their original semantic information. In addition, we do not perform joint training with the downstream tasks, which means the model cannot fully digest the learned contrastive features.
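A minimal sketch of this frozen-encoder protocol follows, with a logistic-regression model standing in for the linear classifier trained by SentEval; the embedding and label arrays are placeholders rather than real task data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder frozen sentence embeddings and labels for one downstream task (e.g., MR).
train_emb, train_y = np.random.randn(2000, 1800), np.random.randint(0, 2, size=2000)
test_emb, test_y = np.random.randn(500, 1800), np.random.randint(0, 2, size=500)

clf = LogisticRegression(max_iter=1000)      # the only trainable part: a linear classifier
clf.fit(train_emb, train_y)                  # the sentence encoder itself stays frozen
print("accuracy:", clf.score(test_emb, test_y))
```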

5. Further Investigation of GCLSR

We design a novel contrastive learning paradigm, namely GCLSR, that consists of three crucial components, i.e., (a) data augmentation, (b) self-attention, and (c) grouped contrastive learning, to study the performance of contrastive learning on sentence representation. Experimental results show that our proposed GCLSR achieves promising results. However, some experimental settings of GCLSR influence the performance of sentence representation, such as warm-up, weight decay, etc. Therefore, we conduct ablation experiments to analyze them further. All experiments are conducted on STS 2012–2015.

5.1. Effect of Batch Size

Given that a large batch size could impact performance, as shown in previous works [14], we conduct an ablation experiment to study it. Table 4 shows the comparison results for batch sizes from 64 to 1024. We use the same linear scaling rule, $\mathrm{LR} = base\_lr \times Batch\_size/128$ (with $base\_lr = 0.3$), for all experiments.
Table 4 reports the results for batch sizes from 64 to 1024. In contrast to previous conclusions, our model is insensitive to batch size. In fact, performance with a batch size of 1024 is worse than with 512, and a small batch size of 64 also achieves competitive performance. A reasonable explanation is that the contrastive loss does not include negative examples.

5.2. Effect of Weight Decay

We find that the value of the weight decay influences performance dramatically. We conjecture that perturbation of the model weights can influence contrastive self-supervised training. Therefore, we perform an experiment to investigate it. The results are shown in Table 5.
The experimental results show that an improper weight decay can make the model stop training early, resulting in underfitting and poor performance.

5.3. Effect of the LR of the Predictor

As mentioned by [16], the predictor with a constant LR (without decay) can obtain good image representation. Therefore, we design an experiment to verify whether the same settings can obtain good sentence representation. The results are shown in Table 6.
The experimental results show that a predictor with a constant LR obtains better sentence representations than one with a decayed LR. Specifically, as shown in Table 6 and Figure 4, the model stops training early (at the 9th epoch) when the LR of the predictor is small or is decayed with the schedule. Additionally, the model needs a larger learning rate (LR = 1) than vision tasks (LR = 0.1) to obtain better results. A possible explanation is that the predictor must adapt to the latest representations, so it is not necessary to force the predictor to converge before the model is trained sufficiently [16].
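As a minimal sketch of this setting, the predictor can be placed in its own parameter group with a constant LR of 1 while the remaining parameters follow the cosine decay; `backbone`, `proj`, and `pred` refer to the modules from the Section 4.1 sketch, and the main-branch LR of 0.12 simply applies the Section 4.1 scaling rule to a batch size of 512.

```python
import math
import torch

optimizer = torch.optim.SGD(
    [{"params": list(backbone.parameters()) + list(proj.parameters()), "lr": 0.12},
     {"params": pred.parameters(), "lr": 1.0}],       # constant, larger LR for the predictor
    momentum=0.9, weight_decay=0.001)

scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer,
    lr_lambda=[lambda epoch: 0.5 * (1.0 + math.cos(math.pi * epoch / 20)),  # cosine decay over 20 epochs
               lambda epoch: 1.0])                                          # no decay for the predictor
```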

5.4. Effect of the SGD Momentum

In general, an optimizer with momentum can accelerate training because the update of the next step is based on the former steps. In other words, the gradient has a certain initial velocity (the network remembers the direction of gradient descent), which helps the network escape local optima. In our proposed method, the momentum is set to 0.9 before the warm-up epochs and 0.8 after the warm-up epochs. More details are shown in Table 7.
We observe that a small momentum takes more time to train the model and does not necessarily achieve the best performance. A large momentum can save training time, but the model can miss the optimum because of large update steps in the vicinity of the optimal point. Therefore, we set the momentum to (0.9, 0.8) to accelerate training before the warm-up epochs and to slow the update steps after them, achieving better performance.

5.5. Effect of the Warm-Up

In the training phase, the LR is scheduled with a warm-up, i.e., it first increases linearly to the maximum and then decays to the minimum, which can make the model more robust. Given that the parameters of the model are randomly initialized, it is inappropriate to employ a large LR in the first few updates of training because the noise in the data may influence performance. The comparison results are shown in Table 8.
Overall, the performance for different numbers of warm-up epochs is comparable. However, a short warm-up can make the model stop early, especially for data with much noise.

5.6. Effect of the Region Size of TextCNN

The region size is a crucial parameter of TextCNN. Therefore, we design different region sizes to investigate their impacts. The results are shown in Table 9.
The experimental results show that performance can be influenced dramatically by the region size. Specifically, region size 1 is crucial for obtaining good results, as observed by comparing region sizes (1,2,3,4,5,6) and (2,3,4,5,6). A possible explanation is that region size 1 enhances the representation of every word itself in a sentence without noise from other words. Furthermore, we increase the region size to study it further. The results show that, although large region sizes obtain a better result than a small region size (1,1,1,2,3,4) on STS tasks, they perform worse on transfer tasks (a large region size reduces performance by 0.3 percentage points). We argue that a large region size can capture more context information, but at the same time, much noise is also added to the representation.

5.7. Effect of Data Augmentation

Data augmentation affects the quality of the positive samples used for contrastive learning, which directly influences the performance and robustness of the model. Consequently, we propose two hypotheses for discrete text data: (1) partial data augmentation preserves more of the original semantic information; (2) continuous data augmentation guarantees that there is no semantic gap in the augmented data. We conduct an experiment to verify them. The results are shown in Table 10 and Figure 4.
As shown by the experimental results, the continuous and partial PWVA improves performance compared with No Aug. (from 65.91 to 67.66) and Full Aug. (from 66.28 to 67.66), which verifies our two hypotheses about data augmentation for contrastive learning. In addition, the model works reasonably well even without data augmentation. A possible explanation is that the random initialization of unrecognized words can itself be regarded as a form of data augmentation, improving stability and robustness.

5.8. Effect of the Size of Groups

Grouping the features of the projector and predictor can alleviate the loss of local information caused by computing the contrastive loss between high-dimensional vectors. Therefore, we conduct an experiment to study the effect of the size of the feature grouping. The results are shown in Table 11.
Generally speaking, different grouping sizes achieve comparable performance on STS tasks. Although the performance gap between feature grouping and no grouping is small, as observed in Figure 4, the stability and robustness of the model with feature grouping are better than without grouping. This verifies that exploiting local information via feature grouping can help the model mine more information for contrastive learning and slightly advance the performance of sentence representation (from 66.66 to 66.75).

6. Conclusions and Future Work

Previous works used large pre-trained language models (such as BERT and RoBERTa) to perform sentence representation but could not evaluate the performance of a lightweight model on sentence representation using contrastive learning. In this paper, we propose a lightweight model, GCLSR, to investigate the effectiveness of contrastive learning for sentence representation. GCLSR consists of the continuous and partial data augmentation PWVA, self-attention, and grouped contrastive learning. GCLSR obtains high-quality positive samples with more original semantics from PWVA; self-attention helps GCLSR focus on informative words; and grouped contrastive learning makes use of more local information of the features. The experimental results show that our proposed GCLSR produces meaningful sentence representations. Additionally, the findings on PWVA have practical implications: PWVA conducts data augmentation in a partial and continuous manner in the word embedding space. However, there are some limitations. For example, our proposed method is evaluated on a lightweight model, i.e., TextCNN, and achieves promising results, while its effectiveness on a large model remains uncertain. In the future, we intend to further combine contrastive learning with self-attention. In addition, we will apply our proposed method to a large pre-trained language model (such as BERT or GPT) to obtain better sentence representations.

Author Contributions

Conceptualization, Q.W.; methodology, Q.W.; software, Q.W. and W.Z.; validation, T.L.; formal analysis, W.Z.; investigation, Q.W.; resources, Q.W.; data curation, T.L.; writing—original draft preparation, Q.W.; writing—review and editing, Q.W.; visualization, W.Z. and D.P.; project administration, D.P.; funding acquisition, D.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research is financially supported by the Sichuan Science and Technology Planning Project (2021YFG0301, 2021YFG0317, 2023YFQ0020, 2023YFG0033, 2023ZHCG0016, 2022YFQ0014, 2022YFH0021), Chengdu Science and Technology Project (2023-XT00-00004-GX).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Our code and training data for GCLSR are available at https://github.com/qianandfei/GCLSR (accessed on 21 July 2023).

Acknowledgments

The authors would like to thank Sichuan Science and Technology Planning Project (2021YFG0301, 2021YFG0317, 2023YFQ0020, 2023YFG0033, 2023ZHCG0016, 2022YFQ0014, 2022YFH0021), Chengdu Science and Technology Project (2023-XT00-00004-GX) for financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhao, D.; Wang, J.; Lin, H.; Chu, Y.; Wang, Y.; Zhang, Y.; Yang, Z. Sentence representation with manifold learning for biomedical texts. Knowl.-Based Syst. 2021, 218, 106869. [Google Scholar] [CrossRef]
  2. Li, B.; Zhou, H.; He, J.; Wang, M.; Yang, Y.; Li, L. On the sentence embeddings from pre-trained language models. arXiv 2020, arXiv:2011.05864. [Google Scholar]
  3. Logeswaran, L.; Lee, H. An efficient framework for learning sentence representations. arXiv 2018, arXiv:1803.02893. [Google Scholar]
  4. Kim, T.; Yoo, K.M.; Lee, S.g. Self-Guided Contrastive Learning for BERT Sentence Representations. arXiv 2021, arXiv:2106.07345. [Google Scholar]
  5. Zhang, D.; Li, S.W.; Xiao, W.; Zhu, H.; Nallapati, R.; Arnold, A.O.; Xiang, B. Pairwise supervised contrastive learning of sentence representations. arXiv 2021, arXiv:2109.05424. [Google Scholar]
  6. Ethayarajh, K. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. arXiv 2019, arXiv:1909.00512. [Google Scholar]
  7. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv 2019, arXiv:1910.10683. [Google Scholar]
  8. Dong, L.; Yang, N.; Wang, W.; Wei, F.; Liu, X.; Wang, Y.; Gao, J.; Zhou, M.; Hon, H.W. Unified language model pre-training for natural language understanding and generation. arXiv 2019, arXiv:1905.03197. [Google Scholar]
  9. Wu, L.; Hu, J.; Teng, F.; Li, T.; Du, S. Text semantic matching with an enhanced sample building method based on contrastive learning. Int. J. Mach. Learn. Cybern. 2023, 14, 3105–3112. [Google Scholar] [CrossRef]
  10. Ma, X.; Li, H.; Shi, J.; Zhang, Y.; Long, Z. Importance-aware contrastive learning via semantically augmented instances for unsupervised sentence embeddings. Int. J. Mach. Learn. Cybern. 2023, 14, 2979–2990. [Google Scholar] [CrossRef]
  11. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  12. Liu, P.; Guo, Y.; Wang, F.; Li, G. Chinese named entity recognition: The state of the art. Neurocomputing 2022, 473, 37–53. [Google Scholar] [CrossRef]
  13. Yu, P.; Weizhong, Q. Three-stage question answering model based on BERT. J. Comput. Appl. 2022, 42, 64. [Google Scholar]
  14. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
  15. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 9729–9738. [Google Scholar]
  16. Chen, X.; He, K. Exploring simple siamese representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15750–15758. [Google Scholar]
  17. Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.H.; Buchatskaya, E.; Doersch, C.; Pires, B.A.; Guo, Z.D.; Azar, M.G.; et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv 2020, arXiv:2006.07733. [Google Scholar]
  18. Reimers, N.; Gurevych, I. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv 2019, arXiv:1908.10084. [Google Scholar]
  19. Giorgi, J.M.; Nitski, O.; Bader, G.D.; Wang, B. Declutr: Deep contrastive learning for unsupervised textual representations. arXiv 2020, arXiv:2006.03659. [Google Scholar]
  20. Wu, Z.; Wang, S.; Gu, J.; Khabsa, M.; Sun, F.; Ma, H. Clear: Contrastive learning for sentence representation. arXiv 2020, arXiv:2012.15466. [Google Scholar]
  21. Gao, T.; Yao, X.; Chen, D. SimCSE: Simple Contrastive Learning of Sentence Embeddings. arXiv 2021, arXiv:2104.08821. [Google Scholar]
  22. Wang, Q.; Zhang, W.; Lei, T.; Cao, Y.; Peng, D.; Wang, X. CLSEP: Contrastive learning of sentence embedding with prompt. Knowl.-Based Syst. 2023, 266, 110381. [Google Scholar] [CrossRef]
  23. Fang, H.; Wang, S.; Zhou, M.; Ding, J.; Xie, P. Cert: Contrastive self-supervised learning for language understanding. arXiv 2020, arXiv:2005.12766. [Google Scholar]
  24. Zhu, W.; Cheung, D. CMV-BERT: Contrastive multi-vocab pretraining of BERT. arXiv 2020, arXiv:2012.14763. [Google Scholar]
  25. Yan, Y.; Li, R.; Wang, S.; Zhang, F.; Wu, W.; Xu, W. ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer. arXiv 2021, arXiv:2105.11741. [Google Scholar]
  26. Wei, J.; Zou, K. Eda: Easy data augmentation techniques for boosting performance on text classification tasks. arXiv 2019, arXiv:1901.11196. [Google Scholar]
  27. Wang, W.Y.; Yang, D. That’s so annoying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic categorization of annoying behaviors using# petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, 17–21 September 2015; pp. 2557–2563. [Google Scholar]
  28. Guo, H.; Mao, Y.; Zhang, R. Augmenting data with mixup for sentence classification: An empirical study. arXiv 2019, arXiv:1905.08941. [Google Scholar]
  29. Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473. [Google Scholar]
  30. Uchaikin, V.V.; Zolotarev, V.M. Chance and Stability: Stable Distributions and Their Applications; Walter de Gruyter: Berlin, Germany, 2011. [Google Scholar]
  31. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow; O’Reilly Media, Inc.: Sebastopol, CA, USA, 2022. [Google Scholar]
  32. Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
  33. Kim, Y. Convolutional Neural Networks for Sentence Classification. arXiv 2014, arXiv:1408.5882. [Google Scholar]
  34. Ioffe, S.; Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv 2015, arXiv:1502.03167. [Google Scholar]
  35. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  36. Conneau, A.; Kiela, D. Senteval: An evaluation toolkit for universal sentence representations. arXiv 2018, arXiv:1803.05449. [Google Scholar]
  37. Agirre, E.; Cer, D.; Diab, M.; Gonzalez-Agirre, A. Semeval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of the SEM 2012: The First Joint Conference on Lexical and Computational Semantics—Volume 1: Proceedings of the Main Conference and the Shared Task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), Montréal, Canada, 7–8 June 2012; pp. 385–393.
  38. Agirre, E.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Guo, W. * SEM 2013 shared task: Semantic textual similarity. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, Atlanta, GA, USA, 13–14 June 2013; pp. 32–43. [Google Scholar]
  39. Agirre, E.; Banea, C.; Cardie, C.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Guo, W.; Mihalcea, R.; Rigau, G.; Wiebe, J. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), Dublin, Ireland, 23–24 August 2014; pp. 81–91. [Google Scholar]
  40. Agirre, E.; Banea, C.; Cardie, C.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Guo, W.; Lopez-Gazpio, I.; Maritxalar, M.; Mihalcea, R.; et al. Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, CO, USA, 4–5 June 2015; pp. 252–263. [Google Scholar]
  41. Agirre, E.; Banea, C.; Cer, D.; Diab, M.; Gonzalez Agirre, A.; Mihalcea, R.; Rigau Claramunt, G.; Wiebe, J. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the SemEval-2016, 10th International Workshop on Semantic Evaluation, San Diego, CA, USA, 16–17 June 2016; pp. 497–511. [Google Scholar]
  42. Cer, D.; Diab, M.; Agirre, E.; Lopez-Gazpio, I.; Specia, L. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv 2017, arXiv:1708.00055. [Google Scholar]
  43. Marelli, M.; Menini, S.; Baroni, M.; Bentivogli, L.; Bernardi, R.; Zamparelli, R. A SICK cure for the evaluation of compositional distributional semantic models. In Proceedings of the LREC 2014, Reykjavik, Iceland, 26–31 May 2014; pp. 216–223. [Google Scholar]
  44. Pang, B.; Lee, L. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. arXiv 2005, arXiv:cs/0506075. [Google Scholar]
  45. Hu, M.; Liu, B. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, 22–25 August 2004; pp. 168–177. [Google Scholar]
  46. Pang, B.; Lee, L. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. arXiv 2004, arXiv:cs/0409058. [Google Scholar]
  47. Wiebe, J.; Wilson, T.; Cardie, C. Annotating expressions of opinions and emotions in language. Lang. Resour. Eval. 2005, 39, 165–210. [Google Scholar] [CrossRef]
  48. Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C.D.; Ng, A.Y.; Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, 18–21 October 2013; pp. 1631–1642. [Google Scholar]
  49. Voorhees, E.M.; Tice, D.M. Building a question answering test collection. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Athens, Greece, 24–28 July 2000; pp. 200–207. [Google Scholar]
  50. Dolan, W.B.; Brockett, C. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), Jeju Island, Republic of Korea, 4 October 2005. [Google Scholar]
Figure 1. The GCLSR architecture.
Figure 2. Four word vector augmentation methods of the PWVA. Note that we perform data augmentation on word vectors of a sentence rather than on the original sentence. (a) Gaussian White Noise: Amplitudes of noise added to the word vectors follow the standard normal distribution. (b) Random Zero Setting: The data corresponding to the black curve are set to zero. (c) Inverse Fast Fourier Transformation: Differences from the data boundary can be observed in the right zoom window. (d) Random Background Noise: Amplitudes of noise added to the word vectors follow a uniform distribution.
Figure 3. The self-attention of word vectors.
Figure 4. Comparison experiment results. "All" means that all of our proposed components are used. "lr0.08" means that the learning rate of the predictor is 0.08. "non-att": the proposed self-attention is not used during training. "non-aug": no data augmentation is applied in training, i.e., the two channels of the network receive the same input. "non-group": feature grouping is not adopted to make use of the local information of features.
Table 1. Some contrastive learning methods in sentence representation. P: the loss includes only positive samples. P + N: the loss includes positive and negative samples, i.e., $\mathcal{L} = -\log \frac{\exp(sim(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \exp(sim(z_i, z_k)/\tau)}\ (k \neq i)$, where $z_i$ and $z_j$ are the positive samples, $z_k$ represents the negative samples, $\tau$ is a temperature parameter, and $sim(\cdot)$ denotes the similarity. Discrete/continuous and full: data augmentation is discrete/continuous and performed on every word of a sentence.
Model | Backbone | Data Augmentation | Loss | Framework
CERT [23] | Pre-trained BERT | Back-translation (Discrete and Full) | P + N | Based on MoCo
CMV-BERT [24] | ALBERT (3 layers) | Multi-tokenizers (Discrete and Full) | P | Based on SimSiam
CLEAR [20] | Transformer | Substitution (Discrete and Full) | P + N | Based on SimCLR
ConSERT [25] | Pre-trained BERT | Dropout (Continuous and Full) | P + N | Based on SimCLR
SimCSE [21] | Pre-trained BERT | Dropout (Continuous and Full) | P + N | Based on SimCLR
Table 2. The evaluation of sentence representation on STS tasks. All results are computed with the Spearman correlation. *: results from [21]; **: results from [20]; the remaining results are evaluated by us. GCLSR_base means that the model consists only of a backbone, projector, and predictor and receives the same two word vectors as input, i.e., no data augmentation is used. EDA is a text data augmentation method proposed by [26]. TransL denotes back-translation data augmentation. Self-att and GP denote the self-attention mechanism and feature grouping, respectively.
Model | STS12 | STS13 | STS14 | STS15 | STS16 | STS-B | SICK-R | Avg.
Word2vec embeddings (avg.) | 33.75 | 43.20 | 36.95 | 55.23 | 54.85 | 36.24 | 48.90 | 44.16
BERT_base (first-last avg.) * | 39.70 | 59.38 | 49.67 | 66.03 | 66.19 | 53.87 | 62.06 | 56.70
RoBERTa_base (first-last avg.) * | 40.88 | 58.74 | 49.07 | 65.63 | 61.48 | 58.55 | 61.63 | 56.57
CLEAR ** | 49.00 | 48.90 | 57.40 | 63.60 | 65.60 | 75.60 | 72.50 | 61.80
BERT_base-flow * | 58.40 | 67.10 | 60.85 | 75.16 | 71.22 | 68.66 | 64.47 | 66.55
BERT_base-whitening * | 57.83 | 66.90 | 60.90 | 75.08 | 71.31 | 68.24 | 63.73 | 66.28
GCLSR_base | 57.47 | 68.70 | 64.03 | 72.84 | 67.90 | 65.64 | 59.55 | 65.16
GCLSR_base+EDA | 57.33 | 68.09 | 63.65 | 72.01 | 66.63 | 65.34 | 59.71 | 64.68
GCLSR_base+TransL | 60.53 | 66.09 | 63.50 | 72.83 | 67.39 | 66.09 | 59.78 | 65.17
GCLSR_base+PWVA | 58.92 | 67.94 | 64.41 | 73.54 | 68.72 | 66.16 | 59.85 | 65.65
GCLSR_base+PWVA+self-att. | 57.62 | 71.00 | 65.83 | 75.51 | 69.81 | 67.41 | 59.41 | 66.66
GCLSR_base+EDA+self-att.+GP | 58.13 | 68.85 | 64.14 | 73.40 | 66.92 | 65.31 | 59.61 | 65.19
GCLSR_base+TransL.+self-att.+GP | 58.80 | 68.00 | 64.70 | 73.74 | 68.50 | 67.24 | 59.67 | 65.81
GCLSR_base+PWVA+self-att.+GP | 57.81 | 71.01 | 65.83 | 75.62 | 70.01 | 67.58 | 59.34 | 66.75
Table 3. The results of transfer tasks (accuracy, %). *: results from [21]; the remaining results are evaluated by us.
Model | MR | CR | SUBJ | MPQA | SST-2 | TREC | MRPC | Avg.
Word2vec embeddings (avg.) | 75.91 | 77.56 | 89.31 | 87.18 | 80.89 | 77.40 | 72.17 | 80.06
BERT_base (first-last avg.) * | 78.66 | 86.25 | 94.37 | 88.66 | 84.40 | 92.80 | 69.54 | 84.94
GCLSR_base | 76.78 | 79.02 | 90.21 | 88.35 | 81.77 | 83.00 | 73.28 | 81.77
GCLSR_base+EDA | 76.56 | 79.55 | 90.54 | 88.54 | 81.66 | 84.40 | 72.99 | 82.03
GCLSR_base+TransL | 76.61 | 79.52 | 90.41 | 88.74 | 81.27 | 84.00 | 73.04 | 81.94
GCLSR_base+PWVA | 76.76 | 80.08 | 90.66 | 88.59 | 81.60 | 85.20 | 73.57 | 83.35
GCLSR_base+PWVA+self-att. | 76.97 | 78.81 | 90.98 | 88.36 | 80.51 | 84.60 | 73.74 | 82.00
GCLSR_base+EDA+self-att.+GP | 76.90 | 79.58 | 90.57 | 88.50 | 81.82 | 85.60 | 72.75 | 82.25
GCLSR_base+TransL.+self-att.+GP | 76.98 | 79.32 | 90.41 | 88.54 | 81.16 | 84.00 | 73.28 | 81.96
GCLSR_base+PWVA+self-att.+GP | 77.69 | 79.87 | 90.89 | 88.99 | 81.44 | 83.80 | 73.39 | 82.30
Table 4. The effect of batch size.
Batch Size | STS12 | STS13 | STS14 | STS15 | Avg.
64 | 56.35 | 72.56 | 66.59 | 74.73 | 67.56
128 | 56.20 | 72.77 | 66.72 | 75.03 | 67.68
256 | 55.94 | 72.66 | 66.67 | 75.22 | 67.62
512 (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
1024 | 55.73 | 72.44 | 66.45 | 75.40 | 67.51
Table 5. The effect of weight decay.
Weight Decay | STS12 | STS13 | STS14 | STS15 | Avg.
0.0001 | 55.04 | 70.79 | 65.55 | 73.36 | 66.19
0.001 (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
0.01 | 55.22 | 70.48 | 65.43 | 74.12 | 66.31
0.1 | 54.94 | 70.26 | 64.75 | 72.58 | 65.63
Table 6. The effect of the LR of the predictor. Decay: the LR of the predictor reduces with a cosine decay.
LR | STS12 | STS13 | STS14 | STS15 | Avg.
Decay | 54.64 | 70.58 | 65.33 | 72.92 | 65.87
0.08 | 55.22 | 70.17 | 65.17 | 73.10 | 65.92
0.2 | 56.24 | 72.22 | 66.53 | 75.25 | 67.56
0.5 | 56.14 | 72.59 | 66.73 | 75.36 | 67.71
1 (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
Table 7. The effect of the SGD momentum (Mot). (0.9,0.8) means that the momentum is 0.9 before warm-up and 0.8 after warm-up.
Momentum | STS12 | STS13 | STS14 | STS15 | Avg.
0.8 | 55.47 | 72.45 | 66.50 | 74.84 | 67.32
0.9 | 55.27 | 71.96 | 66.27 | 75.18 | 67.17
0.99 | 54.76 | 70.86 | 65.69 | 73.93 | 66.24
(0.9, 0.8) (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
Table 8. The effect of the warm-up. 1, 2, 3, 4, and 5 denote the epoch at which the warm-up ends and the LR starts to decay.
Warm-Up | STS12 | STS13 | STS14 | STS15 | Avg.
1 | 55.62 | 72.52 | 66.54 | 74.75 | 67.36
2 | 55.45 | 72.38 | 66.42 | 74.90 | 67.29
3 | 55.62 | 72.54 | 66.56 | 75.09 | 67.45
4 | 55.64 | 72.55 | 66.60 | 75.07 | 67.47
5 (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
Table 9. The effect of the region size of TextCNN.
Region Size | STS12 | STS13 | STS14 | STS15 | Avg.
(1,2,3,4,5,6) | 55.36 | 67.92 | 63.33 | 73.55 | 65.04
(2,3,4,5,6) | 53.85 | 62.36 | 59.82 | 70.80 | 61.71
(1,1,1,2,3,4) | 55.52 | 71.72 | 65.79 | 74.90 | 66.98
(1,1,1,4,5,6) | 56.71 | 71.35 | 65.53 | 75.19 | 67.20
(1,1,1,1,1,1) | 57.81 | 71.01 | 65.83 | 75.62 | 67.57
(1,1,1,6,15,20) (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
(1,1,1,20,30,40) | 57.44 | 70.82 | 66.00 | 75.35 | 67.40
Table 10. The effect of data augmentation. No Aug.: no word vectors are subject to the data augmentation. Full Aug.: all of the word vectors of a sentence are augmented with our proposed four data augmentation strategies. Partial Aug.: word vectors are augmented by our proposed PWVA.
Augmentation | STS12 | STS13 | STS14 | STS15 | Avg.
No Aug. | 54.65 | 70.63 | 65.39 | 72.96 | 65.91
Full Aug. | 55.31 | 70.51 | 65.81 | 73.49 | 66.28
Partial Aug. (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
Table 11. The effect of the size of feature grouping. No grouping: feature grouping is not performed.
Grouping Size | STS12 | STS13 | STS14 | STS15 | Avg.
No grouping | 55.77 | 72.69 | 66.67 | 75.33 | 67.62
4 | 55.77 | 72.62 | 66.61 | 75.50 | 67.63
8 | 55.79 | 72.63 | 66.65 | 75.35 | 67.61
16 (ours) | 55.79 | 72.72 | 66.73 | 75.40 | 67.66
32 | 55.73 | 72.45 | 66.57 | 75.39 | 67.54
128 | 55.75 | 72.63 | 66.59 | 75.26 | 67.56
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
