Article

Contrastive Learning Penalized Cross-Entropy with Diversity Contrastive Search Decoding for Diagnostic Report Generation of Reduced Token Repetition

1 State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
2 School of Information and Communication Engineering, Communication University of China, Beijing 100024, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(7), 2817; https://doi.org/10.3390/app14072817
Submission received: 26 January 2024 / Revised: 16 March 2024 / Accepted: 20 March 2024 / Published: 27 March 2024

Abstract

Medical imaging description and disease diagnosis are vitally important yet time-consuming. Automated diagnosis report generation (DRG) from medical imaging description can reduce clinicians' workload and improve their routine efficiency. To address this natural language generation task, fine-tuning a pre-trained large language model (LLM) is cost-effective and indispensable, and its success has been witnessed in many downstream applications. However, semantic inconsistency of sentence embeddings has been widely observed in the form of undesirable repetitions or unnaturalness in text generation. To address the underlying issue of the anisotropic distribution of token representations, in this study, a contrastive learning penalized cross-entropy (CLpCE) objective function is implemented to enhance the semantic consistency and accuracy of token representations by guiding the fine-tuning procedure towards a specific task. Furthermore, to improve the diversity of token generation in text summarization and to prevent sampling from the unreliable tail of token distributions, a diversity contrastive search (DCS) decoding method is designed to restrict report generation to a probable candidate set with maintained semantic coherence. In addition, a novel metric named the maximum of token repetition ratio (maxTRR) is proposed to estimate token diversity and to help determine the candidate output. Based on the LLM of a Chinese-version generative pre-trained Transformer 2 (GPT-2), the proposed CLpCE with DCS (CLpCEwDCS) decoding framework is validated on 30,000 desensitized text samples from the “Medical Imaging Diagnosis Report Generation” track of the 2023 Global Artificial Intelligence Technology Innovation Competition. Using four kinds of metrics evaluated from n-gram word matching, semantic relevance, and content similarity, as well as the maxTRR metric, extensive experiments reveal that the proposed framework effectively maintains semantic coherence and accuracy (BLEU-1, 0.4937; BLEU-2, 0.4107; BLEU-3, 0.3461; BLEU-4, 0.2933; METEOR, 0.2612; ROUGE, 0.5182; CIDER, 1.4339) and improves text generation diversity and naturalness (maxTRR, 0.12). The phenomenon of dull or repetitive text generation is common when fine-tuning pre-trained LLMs for natural language processing applications. This study might shed some light on relieving this issue by developing comprehensive strategies to enhance the semantic coherence, accuracy, and diversity of sentence embeddings.

1. Introduction

Text summarization aims to compress a long text document into a short and human-readable form that retains the most important information of the source document [1]. There are two broad kinds of approaches, extractive and abstractive. Extractive approaches generate summaries by retrieving the most relevant and important phrases or sentences from the original text, while abstractive approaches delve into the meaning and semantics and utilize natural language generation techniques to create a new and comprehensive text summary [2].
As a specific application of text summarization, diagnosis report generation (DRG) aims to summarize and generate diagnostic reports from the text description of medical imaging findings. It is part of the medical report generation [3] task, which concentrates on using deep learning networks to generate diagnosis reports from medical image input. Clinically, medical imaging description and disease diagnosis are predominant in radiologists' daily work. This work is vitally important yet tedious and time-consuming. Accurate DRG from medical imaging description in an automated manner can decrease clinicians' workload dramatically, subsequently improving their routine efficiency. However, medical imaging diagnostic reports involve field-specific vocabulary, complex organizational structure and detailed visual description [4]. Due to the professional demands of disease diagnosis, treatment planning and therapeutic delivery, higher demands are placed on DRG quality, including precise comprehension of medical terminology, content understanding and reasoning capabilities, coherent diagnosis and ambiguity avoidance.
Abstractive approaches for high-quality DRG have been developed. Traditional methods mainly rely on statistics and shallow learning, such as using maximum entropy models to predict words or constructing feature engineering and classifiers to generate key sentences. These methods might be unable to handle large-scale text document inputs [5]. On the other hand, significant progress has been made in the field of natural language processing (NLP) by using deep learning networks [6], such as recurrent neural networks (RNNs) [7] and long short-term memory (LSTM) networks [8]. One milestone is the attention mechanism of the Transformer [9], which builds encoder–decoder-based sequence-to-sequence models that concentrate on the essential information in the input text. Later, as a cost-effective approach, a great deal of attention has been paid to pre-trained large language models (LLMs), such as bidirectional encoder representations from Transformers (BERT) [10] and the generative pre-trained Transformer (GPT) [11]. Through pre-training on large-scale corpora, LLMs can effectively improve performance in numerous downstream tasks, including but not limited to clinical note summarization [12], biomedical natural language tasks [13] and text-to-image generation [14], and LLMs can even outperform medical experts in clinical text summarization [15], which could help clinicians focus more on patient care.
Unfortunately, when transferring a pre-trained LLM to a specific application, semantic inconsistency of sentence embeddings has been widely observed in the form of dull repetitions and undesirable text generation. It might be derived from the inconsistent representation of sentence embeddings, anisotropic distributions of token generation, and a narrow subset of the entire representation space [16,17,18]. When the distance between different tokens in a representation space is close, these tokens have high cosine similarity. A showcase reveals that cosine similarities between tokens within a sentence could be larger than 0.95, and therefore, duplicate tokens will unavoidably be generated at different stages [19].
To solve or to relieve this degradation problem, many attempts have been made. One feasible way is to map the generated sentence vectors into an isotropic and uniform distribution space. For instance, BERT-flow [20] turns the sentence representations from the BERT encoder into a smooth and isotropic Gaussian distribution space using a reversible flow transformation. It achieves significant improvement on several semantic textual similarity tasks. Wang and his colleagues [21] design a dual-stream attention mechanism and use a positional residual strategy to improve the robustness of extractive summarization. A summarization method based on a two-layer Transformer [22] employs BART (bidirectional and autoregressive Transformer) [23] and T5 (text-to-text transfer converter) [24] to ensure summary coherence. Another promising way comes from contrastive learning (CL). Traditional text augmentation is used to construct positive and negative sample pairs from the augmented sentence set, and the training objective is to pull the embeddings of positive pairs closer and push those of negative pairs farther apart. Debiased CL [25] is this kind of approach: it samples appropriate same-label data points, since negative pairs sampled from different labels or classes improve performance [26], and it achieves consistent improvement on language, vision and reinforcement learning benchmarks. The contrastive learning for sentence representation (CLEAR) method [27] employs multiple sentence-level augmentation strategies during pre-training, and different sentence augmentation strategies result in improvement on specific tasks. Token-aware CL (TaCL) [18] is a continual pre-training approach that is fully unsupervised and requires no additional samples. It embraces a teacher model and a student model, both initialized with the same pre-trained BERT. The objective function contains a masked language modeling term, a next sentence prediction term and a token-aware contrastive learning term for learning an isotropic and discriminative distribution of token representations. To reduce the impact of summary false negatives and effectively maintain spatial consistency, a metric score is employed to dynamically penalize positive and negative samples during model training [28]. In extractive multi-document summarization, a contrastive hierarchical discourse graph is designed to capture complex discourse relationships and global topic coherence, and it shows excellent performance [29]. Moreover, compared to the greedy search (GS) [30] and nucleus search (NS) [31] decoding methods, some other decoding methods seem more promising for relieving this anisotropy problem [32]. For instance, the contrastive search (CS) decoding method injects CL into the text decoding stage, and its performance is verified to be better than that of traditional decoding methods [19]. On open-ended text generation, an empirical study [33] of CS and contrastive decoding indicates that CS substantially outperforms contrastive decoding in terms of diversity and coherence metrics. The fidelity-enriched contrastive search (FECS) method [34] augments the CS framework with context-aware regularization terms, and in both abstractive summarization and dialogue generation tasks, it has been confirmed to improve semantic coherence among tokens, mitigate repetition, and strengthen fidelity to the provided source labels in the generated output.
To reduce the number of repeated tokens in text generation when using encoder–decoder models, a repetition reduction module (RRM) [35] is proposed to supervise the training procedure by capturing the consistency of a sentence sample between the encoding and decoding sides.
In this study, a contrastive learning penalized cross-entropy with diversity contrastive search (CLpCEwDCS) decoding framework is proposed. To improve the consistency of sentence embeddings and to relieve the anisotropy issue, CL is integrated into the fine-tuning stage, and a novel objective function is formed as contrastive learning penalized cross-entropy (CLpCE). Moreover, in the decoding stage, a diversity contrastive search (DCS) decoding method is designed to balance the diversity and quality of report generation. For mitigating degenerative behaviors, the core idea of the DCS decoding method differs from that of the FECS method [34]: FECS promotes diversity by augmenting the CS framework with a faithfulness reward term, while DCS determines the outcome via the estimation of the maximum token repetition ratio (maxTRR) of candidate outputs. Specifically, the proposed maxTRR metric estimates token repetitions in the token space before text generation, whereas the measures of word-, phrase-, and sentence-level consecutive repetitions [36] or subsentence-level consecutive repetitions [35] are used for performance evaluation after text generation. Overall, the contributions of this study can be summarized as follows:
  • An objective function CLpCE is designed for balancing both unsupervised and supervised learning in the model fine-tuning stage to enhance the consistency of feature representation of sentence embeddings.
  • A novel decoding method DCS is developed to improve the representation diversity and to relieve anisotropic distributions of token generation with maintained quality of text summarization.
  • A supplementary metric named the maximum of token repetition ratio (maxTRR) is implemented, which estimates token repetition and determines the outcome of text generation.
  • The effectiveness of the proposed CLpCEwDCS decoding framework is verified, and competitive performance and better diversity are observed on the DRG task.
The remainder of this paper is organized as follows: Section 2 presents the relevant techniques of GPT-2 and contrastive learning of sentence embeddings. The data collection, the proposed framework, experiment design, implementation details and parameter settings are shown in Section 3. We then report the DRG accuracy and diversity and the effect of the diversity control in Section 4. After that, we discuss the results and some limitations of this work in Section 5, and conclude this work and future directions in Section 6.

2. Related Techniques

This section introduces related techniques and computing theories, including GPT-2 decoder block, contrastive learning of sentence embeddings in semantic representation, and contrastive search decoding.

2.1. GPT-2 Decoder Block

Figure 1 shows the structure of the Transformer decoder block and the GPT-2 decoder block. In comparison to the Transformer decoder block, the GPT-2 decoder block is simplified by removing the encoder–decoder multi-head attention module, retaining the masked multi-head self-attention and feed-forward sublayers.
GPT-2 is a large model trained in an unsupervised manner. For an unlabeled text sequence $t = \{t_1, \ldots, t_i, \ldots, t_n\}$, it is trained by maximizing the likelihood function

$$\mathcal{L}_{PT}(t) = \sum_{i} \log P(t_i \mid t_{i-k}, t_{i-k+1}, \ldots, t_{i-1}; \Theta), \tag{1}$$

where $PT$ stands for “pre-training”, $\Theta$ denotes the model parameters, and the $k$ historical tokens $\{t_{i-k}, t_{i-k+1}, \ldots, t_{i-1}\}$ are used to predict the current token $t_i$.
In the fine-tuning stage for a specific task, labeled samples are used for supervised learning. For an input sequence set $\{(t^j, y^j)\}$ with $t^j = \{t_1^j, \ldots, t_i^j, \ldots, t_n^j\}$ and label $y^j$, GPT-2 is fine-tuned by optimizing the loss function

$$\mathcal{L}_{FT}(t, y) = \sum_{(t, y)} \log P(y \mid t) = \sum_{(t, y)} \log \left\{ \mathrm{softmax}\left(h_n^{[L]} \cdot W_Y\right) \right\}, \tag{2}$$

in which $FT$ stands for “fine-tuning”, $h_n^{[L]}$ denotes the hidden-state output of the last token of the sequence $t$ from the last layer of the GPT-2 decoder block, and $W_Y \in \mathbb{R}^{d \times l}$ is the weight matrix of the fully connected layer, where $d$ is the embedding dimension and $l$ is the number of labels.
GPT-2 is pre-trained on large and diverse multi-domain data, and its parameters are shared across different tasks; both factors enhance its generalization capacity on specific downstream applications.
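For illustration, the following minimal PyTorch sketch shows how the likelihood of Equation (1) is typically computed as a shifted token-level cross-entropy; the function name and tensor shapes are illustrative and are not taken from the released code.

```python
import torch.nn.functional as F

def causal_lm_loss(logits, input_ids):
    """Negative log-likelihood corresponding to Equation (1): each token t_i is
    predicted from its history t_<i, so logits and targets are shifted by one.

    logits:    [batch, seq, vocab] output of the GPT-2 decoder stack
    input_ids: [batch, seq] the token sequence t
    """
    shift_logits = logits[:, :-1, :]   # predictions for positions 1..n-1
    shift_labels = input_ids[:, 1:]    # targets t_2..t_n
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )
```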

2.2. Contrastive Learning of Sentence Embeddings

Contrastive learning of sentence embeddings can improve semantic representation by minimizing the distance between similar samples and maximizing the distance between dissimilar samples [37]. For a small batch of sentence pairs $D = \{(x_i, x_i^+)\}$, where $x_i^+$ is the positive sample of $x_i$ and the two form a semantically related sentence pair, the training objective function of $(x_i, x_i^+)$ is

$$\mathcal{L}_i = -\log \frac{e^{\cos(h_i, h_i^+)/\tau}}{\sum_{j=1}^{N} e^{\cos(h_i, h_j^+)/\tau}}, \qquad \cos(h_i, h_i^+) = \frac{h_i^{\mathrm{T}} \cdot h_i^+}{\lVert h_i \rVert \cdot \lVert h_i^+ \rVert}, \tag{3}$$

where $\tau$ is the temperature coefficient, $(h_i, h_i^+)$ are the sentence vector representations of $(x_i, x_i^+)$ obtained through the pre-trained model $h = f_\Theta(x)$, and $\cos(h_i, h_i^+)$ calculates the cosine similarity between $h_i$ and $h_i^+$.
Simple contrastive learning of sentence embeddings (SimCSE) [38] is an efficient framework. Its core principle can be described as follows. For a small batch of sentences $\{x_i\}_{i=1}^{N}$, $x_i^+$ is set equal to $x_i$, and then independently sampled dropout masks are applied to $(x_i, x_i^+)$ to obtain forward sentence pairs. In general, Transformers place dropout after the feed-forward layer and the attention layer. Thus, $h_i^{m} = f_\Theta(x_i, m)$, where $m$ is a random dropout mask. By exploiting the random-mask property of dropout, the same input is fed into the encoder twice to obtain two different dropout masks $\{m, m^+\}$.
In SimCSE [38], the embeddings of the forward sentence pairs and the training objective function can be expressed as

$$h_i^{m_i} = f_\Theta(x_i, m_i), \quad h_i^{m_i^+} = f_\Theta(x_i^+, m_i^+), \quad \mathcal{L}_i = -\log \frac{e^{\cos(h_i^{m_i}, h_i^{m_i^+})/\tau}}{\sum_{j=1}^{N} e^{\cos(h_i^{m_i}, h_j^{m_j^+})/\tau}}, \tag{4}$$

in which $m$ comes from the built-in dropout of the Transformer. It should be noted that no extra dropout structure is added in our model, and the random noise brought by dropout can be viewed as a form of data augmentation.
In the CL field, compared to traditional text augmentation methods, using the built-in dropout mask in pre-trained models leads to simpler implementation, higher-quality sentence embeddings and better performance on numerous unsupervised and supervised downstream tasks [38].
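To make the dropout-based construction of positive pairs concrete, a minimal PyTorch sketch in the spirit of Equation (4) is given below; the encoder interface (a Transformer returning last_hidden_state), the last-token pooling and the temperature value are assumptions for illustration rather than details of the released implementation.

```python
import torch
import torch.nn.functional as F

def simcse_loss(encoder, input_ids, attention_mask, tau=0.05):
    """Dropout-based contrastive objective in the spirit of Equation (4):
    the same batch is encoded twice so that the built-in dropout yields two
    different masks; the second view of each sentence is its positive, and
    the other sentences in the batch act as negatives."""
    # Two forward passes -> two dropout masks m and m+ for the same inputs.
    h1 = encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, -1, :]
    h2 = encoder(input_ids, attention_mask=attention_mask).last_hidden_state[:, -1, :]

    # Pairwise cosine similarities scaled by the temperature tau.
    sim = F.cosine_similarity(h1.unsqueeze(1), h2.unsqueeze(0), dim=-1) / tau

    # Diagonal entries are the positive pairs; the InfoNCE objective reduces
    # to a cross-entropy over the rows of the similarity matrix.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```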

2.3. Contrastive Search Decoding

In order to ensure that the generated output is semantically coherent with the previously generated prefix text, the key idea of CS decoding is to identify the most likely candidate set and to guarantee that the output has sufficient discriminative capacity. Given the previously generated text $x_{<t}$, the choice of $x_t$ at time $t$ should satisfy

$$x_t^* = \arg\max_{v \in V^{(k)}} \left\{ (1 - \alpha) \times p_\Theta(v \mid x_{<t}) - \alpha \times \left( \max\{\cos(h_v, h_{x_j}) : 1 \le j \le t-1\} \right) \right\}, \tag{5}$$

in which $V^{(k)}$ denotes the set of the $k$ most probable candidates from the distribution $p_\Theta(\cdot \mid x_{<t})$, $p_\Theta(v \mid x_{<t})$ stands for the model confidence, i.e., the probability of the candidate $v$, and $\max\{\cos(h_v, h_{x_j}) : 1 \le j \le t-1\}$ is the degradation penalty, which measures the similarity between the candidate $v$ and all tokens in the text $x_{<t}$.
A larger degradation penalty means that the candidate $v$ is more similar to the previous text $x_{<t}$ and is thus more likely to repeat the previous content. The parameter $\alpha \in [0, 1]$ is used to balance the two components. When $\alpha = 0$, CS decoding degenerates into GS decoding.
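A minimal sketch of one CS decoding step following Equation (5) is given below; the function signature and the sources of the hidden representations are illustrative assumptions, not the exact implementation of [19].

```python
import torch.nn.functional as F

def contrastive_search_step(logits, candidate_hidden, prefix_hidden, k=5, alpha=0.7):
    """One contrastive search step following Equation (5).

    logits:           [vocab] next-token logits, i.e., p_theta(. | x_<t) before softmax
    candidate_hidden: [vocab, d] hidden representations h_v of candidate tokens
    prefix_hidden:    [t-1, d] hidden representations of the generated tokens x_<t
    """
    probs = F.softmax(logits, dim=-1)
    top_p, top_idx = probs.topk(k)                 # V^(k): the k most probable candidates

    cand = F.normalize(candidate_hidden[top_idx], dim=-1)
    prev = F.normalize(prefix_hidden, dim=-1)
    # Degradation penalty: maximum cosine similarity to any previously generated token.
    penalty = (cand @ prev.T).max(dim=-1).values

    score = (1 - alpha) * top_p - alpha * penalty  # model confidence vs. degradation penalty
    return top_idx[score.argmax()]                 # x_t^*
```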

3. Materials and Methods

This section presents the data collection and outlines the proposed framework. Subsequently, the experiment design, evaluation metrics, implementation details, and parameter settings are described for performance comparison.

3.1. Data Collection

The dataset comes from the “Medical Imaging Diagnosis Report Generation” track of a nationwide open competition “2023 Global Artificial Intelligence Technology Innovation Competition” (https://gaiic.caai.cn/ai2023/, accessed on 19 March 2024) hosted by the Chinese Association for Artificial Intelligence. It is the newest and highest-quality dataset with the purpose of generating medical diagnosis reports according to medical image descriptions.
The dataset consists of 30,000 plain-text data samples, including descriptions of patient scans and corresponding diagnostic reports in Chinese. For instance, a text sample shows “Image Description” as “There is a local bone defect in the left parietal bone. There are small areas of decreased density adjacent to the lateral ventricles on both sides. An arc-shaped cerebrospinal fluid density shadow is observed below the right frontal skull. The ventricular system is enlarged, and the sulci, fissures, and cisterns of the brain are widened. There is no displacement of the midline structures. Poor pneumatization is observed in both mastoids, with increased density inside.” and its “Diagnosis Report” is as “There is a local defect in the left parietal bone, which may require surgical intervention. There are also scattered ischemic lesions in the brain. Additionally, there is a small amount of subdural effusion in the right frontal region, and the patient has bilateral mastoiditis”.
To avoid issues such as privacy leakage, the dataset provided for the competition is desensitized on a character-by-character basis. Thus, the aforementioned text sample becomes “Image Description” of the desensitized data “(14 108 28 30 15 13 294 29 20 18 23 21 25 32 16 14 39 27 14 47 46 69 70 11 24 42 26 37 61 24 10 79 46 62 19 13 31 95 19 28 20 18 10 22 12 38 41 17 23 21 36 53 25 10)” and “Diagnosis Report” of the desensitized data “(22 12 38 41 17 81 10)”.

3.2. The Proposed CLpCEwDCS Decoding Framework

This sub-section gives the reasons for backbone network selection and then elaborates on the formulation of the CLpCE objective function and the DCS decoding procedure. During DCS decoding, we construct a set of candidate token sequence outputs and select the final outcome through the comparison of the maxTRR values.

3.2.1. The Backbone Network Selection

In this study, the Chinese version of GPT-2 [39] is used as the backbone network for fine-tuning on the DRG task. The reasons for using the GPT-2 model are manifold. Above all, this model holds promise in validating the effectiveness of the proposed framework, encompassing both the objective function and the DCS decoding method for the DRG task, all while accommodating our limited computing resources. Secondly, compared to some other accessible models [23,40], GPT-2 was released earlier, and its pre-trained model is readily available and user-friendly. It should be noted that some other advanced models, such as GPT-4 [41], are powerful, but these models are not open-sourced, and using GPT-4 Turbo entails considerable token-based expenses. Based on the BERT tokenizer, the GPT-2 model can be re-trained for general language models and also supports a large training corpus. The pre-trained model was downloaded from GitHub (https://github.com/Morizeyao/GPT2-Chinese, accessed on 19 March 2024).
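The exact loading procedure is not detailed here, but a plausible sketch with the Hugging Face Transformers library is shown below; the checkpoint path is a placeholder for the locally downloaded GPT2-Chinese weights.

```python
from transformers import BertTokenizerFast, GPT2LMHeadModel

# Placeholder path of the GPT-2 Chinese checkpoint downloaded from the
# GPT2-Chinese repository; replace it with the actual local directory.
CKPT_DIR = "path/to/gpt2-chinese"

tokenizer = BertTokenizerFast.from_pretrained(CKPT_DIR)  # GPT2-Chinese relies on a BERT tokenizer
model = GPT2LMHeadModel.from_pretrained(CKPT_DIR)
model.train()  # ready for CLpCE fine-tuning (Section 3.2.2)
```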

3.2.2. The CLpCE Objective Function

As an objective function, CE is widely used in the optimization procedure of text generation. For a text input $x$ containing $m$ sentences of length $n$, assuming the corresponding distribution is $y$ and the predicted distribution is $\hat{y}$, the CE loss is calculated as in Equation (6).

$$\mathcal{L}_{CE} = -\frac{1}{mn} \sum_{i=1}^{n} \sum_{j=1}^{m} \left\{ y_{i,j} \times \log(\hat{y}_{i,j}) + (1 - y_{i,j}) \times \log(1 - \hat{y}_{i,j}) \right\} \tag{6}$$
For the same input as in Equation (6), the objective function of CL for the text $x$ can be calculated as in Equation (7); notably, the parameters are defined the same as those in Equation (4).

$$\mathcal{L}_{CL} = -\sum_{i=1}^{n} \log \left( \frac{e^{\cos(h_i, h_i^+)/\tau}}{\sum_{j=1}^{N} e^{\cos(h_i, h_j^+)/\tau}} \right) \tag{7}$$
Inspired by CL [37] and SimCSE [38], CLpCE is designed to guide the fine-tuning process of GPT-2. The optimization goal of CLpCE is defined as in Equation (8), where the parameter $\beta \in [0, 1]$ is used to adjust the proportion of the loss functions. It should be mentioned that when $\beta = 0$ and $\beta = 1$, the CLpCE objective function degenerates into CE and CL, respectively.

$$\mathcal{L}_{CLpCE} = (1 - \beta) \times \mathcal{L}_{CE} + \beta \times \mathcal{L}_{CL} \tag{8}$$
Figure 2 shows the model fine-tuning procedure. It consists of CE-based supervised learning and CL-based unsupervised learning parts, both of which are weighted by β in the CLpCE objective function.
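A minimal sketch of Equation (8) as used inside one fine-tuning step is given below; the helper names reuse the illustrative sketches of Section 2 and are not taken from the released code.

```python
def clpce_loss(ce_loss, cl_loss, beta=0.6):
    """CLpCE objective of Equation (8): beta = 0 recovers plain CE and
    beta = 1 recovers pure contrastive learning."""
    return (1.0 - beta) * ce_loss + beta * cl_loss

# Inside one fine-tuning step (the two terms computed as in the earlier sketches):
#   loss = clpce_loss(causal_lm_loss(logits, input_ids),
#                     simcse_loss(encoder, input_ids, attention_mask),
#                     beta=0.6)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
```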

3.2.3. The DCS Decoding

Essentially, CS is a GS decoding method with an additional degradation penalty term. When handling long texts, GS is prone to getting stuck in local optima and generating duplicate tokens [32]. To overcome this anisotropy problem, a penalty term is added, which measures the similarity between the current candidate token and the previous tokens. However, CS considers only the word with the highest probability at the current time, and the generated text lacks diversity.
The DCS decoding enriches diversity while keeping the probability difference from the best token acceptable. Given the previously generated text $x_{<t}$, the output $x_t^l$ at time $t$ via DCS can be described as the token generation

$$x_t^l = \left\{ (1 - \psi) \times p_\Theta(v \mid x_{<t}) - \psi \times \left( \max\{\cos(h_v, h_{x_j}) : 1 \le j \le t-1\} \right) \right\}. \tag{9}$$
To enhance text generation diversity, DCS decoding uses the tokens with the highest probabilities to form a candidate set. Firstly, the token $x_t^*$ with the highest probability ($p_{max} = p_{x_t^*}$) at the current time is added to the candidate set. Then, the probabilities ($p$) of the remaining tokens are compared to the highest probability. If the probability difference is no more than $\rho \times p_{max}$, the token is added to the candidate set as well (Equation (10)).

$$x_t^m : \left( p_{max} - p_{x_t^m} \right) \le \rho \times p_{max} \tag{10}$$
After that, the selection of candidate tokens yields different output token sequences $\{seq^l\}_{l=1}^{k}$ with $seq^l = \{x_{<t}, x_t^l\}$ for text generation. In the end, among the generated output token sequences $\{seq^l\}_{l=1}^{k}$, the final outcome is determined by the maxTRR values as

$$out = \min \left\{ maxTRR(seq^l) \right\}_{l=1}^{k}. \tag{11}$$
In Equations (9)–(11), “max” and “min” denote the maximization and minimization operations, respectively. Since the parameter $\rho$ dictates the quality of token generation, its value should be carefully defined. The maxTRR metric is defined in Equation (12), and it quantifies the token diversity of a candidate output of text summarization. Notably, when only the token with $p_{max}$ is selected, DCS degenerates into the CS decoding strategy.
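The two key operations of DCS, candidate set construction (Equation (10)) and outcome selection by maxTRR (Equation (11)), can be sketched as follows; the generation of each candidate continuation is omitted, and the function names are illustrative rather than taken from the released code.

```python
import torch
import torch.nn.functional as F

def dcs_candidate_tokens(logits, rho=0.10):
    """Candidate set of Equation (10): keep the most probable token and every
    token whose probability is within rho * p_max of it."""
    probs = F.softmax(logits, dim=-1)
    p_max = probs.max()
    keep = (p_max - probs) <= rho * p_max
    return torch.nonzero(keep, as_tuple=False).flatten()  # candidate token ids

def dcs_select_output(candidate_sequences, max_trr):
    """Equation (11): among the candidate token sequences {seq^l}, return the
    one with the smallest maxTRR value, i.e., the most diverse output."""
    return min(candidate_sequences, key=max_trr)
```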

3.3. Experiment Design

Extensive experiments are conducted to validate the effectiveness of the proposed CLpCEwDCS decoding framework. In each experiment, the dataset is shuffled and randomly divided into a training set and a testing set with an 8:2 ratio for model building and validation.
Specifically, the effectiveness of the objective function CLpCE is validated with different $\beta$ values ($\{0.0, 0.1, \ldots, 0.9, 1.0\}$), and different decoding methods, DCS (ours), CS [38], GS [30], NS [31] and top-k search (TkS) [42], are used for decoding. The general trend and evaluation metric values are presented.
In addition, the diversity of the DCS decoding method is explored by using different control threshold ρ values. The generation accuracy, token candidate diversity, and visual perception of the output examples are illustrated.

3.4. Evaluation Metrics

Four kinds of evaluation metrics are used to quantify the text generation quality from various perspectives. The first metric is bilingual evaluation understudy (BLEU) [43], which is commonly used in machine translation evaluation. It measures the word overlap between generated and reference translations based on n-gram matching and fragment accuracy evaluation. This study involves BLEU-1, BLEU-2, BLEU-3, and BLEU-4, and higher scores indicate better text matching.
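As an illustration of n-gram matching, the snippet below scores a hypothetical desensitized report with NLTK; the paper does not specify the scoring implementation, so the library choice and the candidate text are assumptions for illustration only.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["22 12 38 41 17 81 10".split()]   # desensitized reference report from Section 3.1
candidate = "22 12 38 41 17 10".split()        # a hypothetical generated report

smooth = SmoothingFunction().method1
bleu1 = sentence_bleu(reference, candidate, weights=(1, 0, 0, 0), smoothing_function=smooth)
bleu4 = sentence_bleu(reference, candidate, weights=(0.25, 0.25, 0.25, 0.25), smoothing_function=smooth)
print(round(bleu1, 4), round(bleu4, 4))
```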
The second one is the evaluation of translation with explicit ordering (METEOR) [44]. It obtains the final score by exact word matching and semantic similarity at the word level via weighted fusion. A higher value reveals better word matching and semantic similarity.
The third one is recall-oriented understudy for gisting evaluation (ROUGE) [45]. It calculates the score based on the length of the longest common subsequence. A higher metric score denotes that the generated summary is more similar in content to the reference summary.
The fourth one is consensus-based image description evaluation (CIDER) [46]. It considers many factors such as consistency, semantic relevance and n-gram similarity comprehensively. A higher score shows better consistency and greater semantic similarity between the generated description and the reference description.
Besides, a supplementary metric (maxTRR) is implemented in this study for evaluating token diversity. It is defined as the maximum repetition ratio of the tokens, and a lower value indicates higher representation diversity in text generation. Assuming $s$ tokens are generated in a candidate output ($seq^l$) and the $j$th token $T_j$ appears $t_{T_j}$ times, the maxTRR can be formulated as

$$maxTRR(seq^l) = \frac{\max \left\{ t_{T_j} \right\}_{j=1}^{s}}{\sum_{j=1}^{s} t_{T_j}}, \tag{12}$$

in which the denominator represents the total number of all $s$ tokens, and the numerator is the maximum number of times any token appears.
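A direct implementation of Equation (12) is straightforward; the short sketch below is for illustration, and the example token list is hypothetical.

```python
from collections import Counter

def max_trr(tokens):
    """maxTRR of Equation (12): the count of the most frequent token divided by
    the total number of generated tokens; lower values indicate higher diversity."""
    counts = Counter(tokens)
    return max(counts.values()) / len(tokens)

# Hypothetical 7-token output in which one token appears 4 times, giving
# maxTRR = 4/7 (the same ratio as the CS output for Case B in Table 3).
print(max_trr(["190", "12", "190", "190", "38", "190", "10"]))  # 0.5714...
```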

3.5. Implementation Details and Parameter Settings

The algorithms are implemented with Python (version 3.10), PyTorch (version 2.0.0 + cu118) and Transformers (version 4.28.1). The code is deployed on a 64-bit Windows 10 system (Intel(R) Core(TM) i9-12900K, 3.2 GHz, 128 GB RAM) with a 24 GB GPU card (NVIDIA GeForce RTX 3080). The code is available online (https://github.com/NicoYuCN/nlpMIDRG, accessed on 19 March 2024).
During model fine-tuning, the parameters of batch size (32), learning rate (0.0005), iteration number (10 epochs), maximum length of input text (230), maximum length of generated text (80) and optimizer (AdamW [47]) are defined, and the other parameters are set with default values.
For the decoding methods, the weighting parameter of CS is set to $\alpha = 0.70$ as suggested in [19], $k = 5$ is set for TkS, a probability threshold of 0.71 is used for NS, and the other parameters are kept at their default values.
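For reference, the reported hyperparameters can be collected into a single configuration sketch as below; the dictionary keys and the checkpoint path are illustrative and do not reproduce the released code.

```python
from torch.optim import AdamW
from transformers import GPT2LMHeadModel

# Hyperparameters reported in Section 3.5, gathered into one configuration sketch.
config = {
    "batch_size": 32,
    "learning_rate": 5e-4,
    "epochs": 10,
    "max_input_len": 230,   # maximum length of the input text
    "max_output_len": 80,   # maximum length of the generated report
    "beta": 0.6,            # CLpCE weighting (Equation (8))
    "alpha": 0.70,          # degradation-penalty weight for CS decoding
    "top_k": 5,             # k for the TkS baseline
    "nucleus_p": 0.71,      # probability threshold for the NS baseline
    "rho": 0.10,            # DCS diversity control threshold (Equation (10))
}

model = GPT2LMHeadModel.from_pretrained("path/to/gpt2-chinese")  # placeholder checkpoint path
optimizer = AdamW(model.parameters(), lr=config["learning_rate"])
```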

4. Results

This section reports the DRG accuracy and diversity achieved through various decoding methods. It also includes ablation studies examining the impact of the parameter $\beta$ in the CLpCE objective function (Equation (8)) on DRG accuracy and of the parameter $\rho$ in the DCS decoding (Equation (10)) on diversity control. In addition, the achievements of the first-tier teams in the competition are summarized.

4.1. DRG Accuracy

Table 1 presents the text summarization accuracy. For each DRG model, the highest value of each metric is in boldface. The results suggest that the optimal value of $\beta$ in CLpCE is 0.60 regardless of the decoding method. On the other hand, no obvious difference is found among the highest metric values of the DCS and CS decoding methods, and GS decoding achieves generally higher ROUGE and CIDER values.
Table 1 indicates the superiority of the CLpCE objective function over CE or CL alone. When using DCS for decoding with $\beta = 0.6$, CLpCE improves report generation performance with increases of about 0.03 on BLEU and METEOR, 0.015 on ROUGE and 0.09 on CIDER. This phenomenon can also be found when using other decoding methods.
Figure 3 shows the general trend of DRG accuracy when using different weighting values ($\beta \in \{0.0, \ldots, 1.0\}$) and decoding methods (DCS, CS, GS, NS, and TkS). From the perspective of $\beta$ values, compared to $\beta = 0.0$, the other $\beta$ values lead to a slight increase ($\le 0.03$) in metric values, except for $\beta = 1.0$. From the perspective of decoding methods, TkS and NS give inferior results, with CIDER values of less than 1.10 and 1.35, respectively. The other decoding methods obtain slightly better performance, and the CIDER value from GS decoding is correspondingly higher.

4.2. DRG Diversity

Table 2 shows the representation diversity of text summarization using the CLpCE objective function ($\beta = 0.6$). It reveals that the proposed DCS decoding method achieves the lowest maxTRR value (0.12 ± 0.09), followed by the CS and GS decoding methods. On the other hand, the maxTRR values of all the decoding methods indicate that more than 6 out of 50 generated tokens are identical, which causes unnaturalness or undesirable repetitions in text generation.
To enhance the understanding of DRG diversity, two cases decoded with CS and DCS are shown in Table 3 for visual perception. The token with the maximum number of repetitions is underlined, and the maxTRR is shown at the end of the output (CS) or the candidate output (DCS). Case A is a relatively short desensitized data input, and its text summarization seems good because of the low token repetition ratio. CS decoding generates 11 tokens, and no tokens are the same. DCS decoding yields four candidates, while the fourth candidate has three identical tokens out of twelve (maxTRR, 25%). Case B is much longer. The CS decoding method yields seven tokens, and one token (“190”) appears four times; thus, maxTRR = 4/7. On the other hand, all four candidates from the DCS decoding method show much lower token repetition ratios, and the third and fourth outputs contain up to 30 tokens. Therefore, DCS decoding could provide more choices of text summarization output to balance DRG accuracy and representation diversity for improved naturalness of diagnostic report generation.

4.3. The Effect of the Diversity Control

Table 4 shows the effect of the control threshold $\rho$ on diverse text generation. Given the CLpCEwDCS decoding framework ($\beta = 0.60$), it is found that the evaluation metric values show no obvious difference as the $\rho$ value increases, which indicates that DCS decoding maintains the token generation quality as $\rho$ increases.
Figure 4 shows the average candidate numbers over fifty experiments. The dotted red line with ♢ shows $\rho = 0.01$, and the dashed blue line with ∘ indicates $\rho = 0.10$. It is found that more candidate outputs of text summarization are generated when the control threshold $\rho$ increases. When $\rho = 0.10$, the number of candidate outputs can be larger than 1.4, which has the potential to maintain text generation coherence and decrease the token repetition ratio in DRG text summarization.

4.4. Achievement of the First-Tier Teams on the Competition

According to the report of the “Medical Imaging Diagnosis Report Generation” competition, the results achieved by the first-tier teams are shown in Table 5. All the teams explore the tricks of exponential moving average of weights, the fast gradient method and regularized dropout [48] for improved robustness and accuracy. Teams B, C and D additionally use stochastic weight averaging [49] and label smoothing [50], and team E further integrates an extract loss and sentence shuffling in the fine-tuning stage.
Based on the metric scores provided by the competition track, minor differences are observable among the results of the first-tier participants (Table 5). It is found that the teams focus on BART [23] and/or the Chinese Pre-trained unbalanced Transformer (CPT) [40], either base or large models, for the DRG task. Team A proposes noise-aware similarity bucketing [51] and generates the text summary output with the best prompt matching, team B designs a graph beam search with priority queue (GBPQ) to speed up the reasoning procedure, and team C utilizes the retrieval augmented generation (RAG) [52] strategy. These models outperform the proposed framework by 0.114 to 0.192 in terms of the score values. The score comparison also suggests that our framework, dedicated to improving diversity, maintains DRG accuracy and coherence well.

5. Discussion

Accurate and automatic DRG improves clinical efficiency, and fine-tuning a pre-trained LLM is indispensable for realizing this specific application task. However, anisotropic degeneration or semantic inconsistency of sentence embeddings has been widely observed in unnatural and undesirable text generation. To address this issue, the CLpCEwDCS decoding framework is proposed and evaluated on this challenging task. In addition, a supplementary metric (maxTRR) is designed to evaluate token diversity in text summarization, which is also important in DCS decoding.
The CLpCE objective improves the consistency and accuracy of sentence embeddings. In comparison to the CE objective function, the proposed CLpCE function leads to higher DRG quality regardless of the decoding method. It increases the values of the evaluation metrics (Figure 3) and obtains superior performance when $\beta = 0.60$ (Table 1). Specifically, when using DCS for decoding, CLpCE ($\beta = 0.60$) improves BLEU-1 and BLEU-2 by 0.03, BLEU-3 and BLEU-4 by 0.02, METEOR and ROUGE by 0.01, and CIDER by 0.09 over the CE objective function ($\beta = 0.00$). Notably, this phenomenon can also be observed when using other decoding methods. It indicates that CLpCE can quantitatively improve DRG quality in terms of fragment accuracy, word matching, semantic similarity and content consistency. The main reason is the penalty term. CL is a self-supervised representation learning method that contrasts semantically similar and dissimilar pairs of samples [25]. Its purpose is to minimize the distance between samples from the same distribution and to maximize the distance between samples from different distributions. Consequently, in the sentence embedding space, intra-class tokens can stay close, while inter-class tokens are kept far apart. Thereby, the CL penalty term benefits LLM fine-tuning and guides the procedure towards a specific application task; in this study, it improves DRG quality.
The DCS decoding method relieves the anisotropic degeneration issue by decreasing the frequency of token repetition. It achieves DRG quality competitive with the CS and GS decoding methods (Table 1). Most importantly, it produces more candidate outputs of text summarization in the token space (Figure 4) and decreases the token repetition ratio (Table 2) by using the minimum of the maxTRR values, while the generation cohesion and accuracy are maintained well (Table 4). Of particular interest is the proposed maxTRR metric (Equation (12)), whose value is applied to determine the final token sequence output (Equation (11)) in an automated fashion. Additionally, two case examples further reveal that DCS decoding provides more candidate outputs of text generation with fewer repeated and over-frequent tokens (Table 3). It should be admitted that there is a discrepancy between the human and model word distributions, and further training on more data cannot rectify this discrepancy [26,53]. Interestingly, DCS decoding shows the potential to decrease the discrepancy by improving output diversity. It keeps accuracy and coherence on par with the CS decoding method and outperforms other traditional methods [19]. Therefore, using a small control threshold value ($\rho = 0.10$) could keep the dissimilar tokens with the highest probabilities and generate diverse text summarization.
According to the track report, our framework falls slightly short of the state-of-the-art results in the DRG competition (Table 5). A close look at these models reveals that BART and CPT models are preferred due to their focus on text summarization tasks. Conversely, as a general generation model, GPT-2 supports a broad spectrum of downstream applications, and a slight drop in the score value is understandable. Meanwhile, the first-tier teams utilize NLP tricks, including but not limited to the exponential moving average of parameters, the fast gradient method and regularized dropout, and these tricks contribute to the improved performance of text generation. The proposed framework stands to benefit from these techniques if they are appropriately integrated into the fine-tuning stage.
There are several limitations in the current study. On the DRG task, the proposed framework has been verified to effectively relieve the anisotropic degeneration problem, while its feasibility and generalizability for other NLP applications remain to be verified. However, such verification involves large-scale data processing and a massive time cost that is beyond our budget due to limited funding and computing resources. Secondly, as a result of technological evolution, more powerful LLMs [41,54,55] with hundreds of billions of parameters are now available, while utilizing these models requires additional expenses and heavy computing resources. The investigation into whether the proposed framework, employing advanced models, would enhance the DRG task is currently underway. Thirdly, besides contrastive learning [37], other fine-tuning and decoding strategies, such as fidelity-enriched contrastive search [34], self-supervised learning [56], and reinforcement learning with human feedback [57], could reduce the dependency on labeled data samples. Last but not least, combining other data sources, such as dialogues, images, videos and human feedback [58], could broaden the application fields of the proposed framework.

6. Conclusions

When fine-tuning pre-trained LLMs for specific downstream application tasks, the anisotropic degeneration problem has been widely witnessed. To address this problem, the CLpCEwDCS decoding framework is implemented; it augments the CE objective function with a CL penalty term for accurate representation of sentence embeddings and designs a DCS decoding method that improves output diversity by selecting the candidate token sequence with the minimum maxTRR value. The framework has been verified to be effective on the DRG task with five types of evaluation metrics, and further improvement could be achieved by using more advanced models, proper fine-tuning strategies, multi-modal data learning and generalizability verification.
In the field of medical imaging, there is a long way to go before a fully automated medical image report generator can be used to facilitate clinical decision making. The proposed framework, aimed at generating accurate and natural diagnostic reports from medical image descriptions, could be further enhanced by integrating more powerful LLMs and effective fine-tuning strategies. On the other hand, most attention should be directed towards addressing other challenges, such as medical image understanding, vision–language alignment, and interpretation of diagnosis reports, in order to expedite the realization of automated and precise medical imaging diagnostic report generation.

Author Contributions

Conceptualization, T.Z., J.M. and S.Y.; methodology, J.M. and Y.Y.; software, J.M. and Y.Y.; validation, J.M. and Y.Y.; formal analysis, T.Z. and S.Y.; investigation, S.Y.; resources, J.M.; data curation, J.M. and Y.Y.; writing—original draft preparation, J.M.; writing—review and editing, T.Z. and S.Y.; visualization, J.M. and Y.Y.; supervision, S.Y.; project administration, S.Y.; funding acquisition, T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Key R&D Program of China (Grant No. 2023YFF0904604) and the Fundamental Research Funds for the Central Universities (Grant No. CUC23ZDTJ014).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset supporting the current study is available online at https://gaiic.caai.cn/ai2023/, accessed on 19 March 2024.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
DRG	Diagnostic report generation
LLM	Large language model
CLpCE	Contrastive learning penalized cross-entropy
DCS	Diversity contrastive search
maxTRR	Maximum of token repetition ratio
GPT	Generative pre-trained Transformer
CLpCEwDCS	CLpCE with DCS
BLEU	Bilingual evaluation understudy
METEOR	Evaluation of translation with explicit ordering
ROUGE	Recall-oriented understudy for gisting evaluation
CIDER	Consensus-based image description evaluation
NLP	Natural language processing
RNN	Recurrent neural network
LSTM	Long short-term memory
BERT	Bidirectional encoder representations from Transformers
BART	Bidirectional and autoregressive Transformer
T5	Text-to-text transfer converter
CL	Contrastive learning
CLEAR	Contrastive learning for sentence representation
TaCL	Token-aware contrastive learning
GS	Greedy search
NS	Nucleus search
CS	Contrastive search
FECS	Fidelity-enriched contrastive search
RRM	Repetition reduction module
PT	Pre-training
FT	Fine-tuning
SimCSE	Simple contrastive learning of sentence embeddings
TkS	Top-k search
GBPQ	Graph beam search with priority queue
RAG	Retrieval augmented generation

References

  1. Kryscinski, W.; Keskar, N.S.; McCann, B.; Xiong, C.; Socher, R. Neural text summarization: A critical evaluation. arXiv 2019, arXiv:1908.08960. [Google Scholar]
  2. Allahyari, M.; Pouriyeh, S.; Assefi, M.; Safaei, S.; Trippe, E.D.; Gutierrez, J.B.; Kochut, K. Text summarization techniques: A brief survey. arXiv 2017, arXiv:1707.02268. [Google Scholar] [CrossRef]
  3. Pang, T.; Li, P.; Zhao, L. A survey on automatic generation of medical imaging reports based on deep learning. Biomed. Eng. Online 2022, 22, 48. [Google Scholar] [CrossRef] [PubMed]
  4. Chen, Z.; Varma, M.; Delbrouck, J.; Paschali, M.; Blankemeier, L.; Van Veen, D.; Valanarasu, J.; Youssef, A.; Cohen, J.; Reis, E. CheXagent: Towards a Foundation Model for Chest X-Ray Interpretation. arXiv 2024, arXiv:2401.12208. [Google Scholar]
  5. Jones, K.S. Automatic summarising: The state of the art. Inf. Process. Manag. 2007, 43, 1449–1481. [Google Scholar] [CrossRef]
  6. Minaee, S.; Kalchbrenner, N.; Cambria, E.; Nikzad, N.; Chenaghlu, M.; Gao, J. Deep learning–based text classification: A comprehensive review. ACM Comput. Surv. 2021, 54, 1–40. [Google Scholar] [CrossRef]
  7. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
  8. Van Houdt, G.; Mosquera, C.; Napoles, G. A review on the long short-term memory model. Artif. Intell. Rev. 2020, 53, 5929–5955. [Google Scholar] [CrossRef]
  9. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 6000–6010. [Google Scholar]
  10. Devlin, J.; Chang, M.-W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  11. Paulus, R.; Xiong, C.; Socher, R. A deep reinforced model for abstractive summarization. arXiv 2017, arXiv:1705.04304. [Google Scholar]
  12. Chuang, Y.; Tang, R.; Jiang, X.; Hu, X. SPeC: A soft prompt-based calibration on performance variability of large language model in clinical notes summarization. J. Biomed. Inform. 2024, 151, 104606. [Google Scholar] [CrossRef] [PubMed]
  13. Tian, S.; Jin, Q.; Yeganova, L.; Lai, P.; Zhu, Q.; Chen, X.; Yang, Y.; Chen, Q.; Kim, W.; Comeau, D. Opportunities and challenges for ChatGPT and large language models in biomedicine and health. Briefings Bioinform. 2024, 25, bbad493. [Google Scholar] [CrossRef] [PubMed]
  14. Li, J.; Li, D.; Savarese, S.; Hoi, S. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv 2023, arXiv:2301.12597. [Google Scholar]
  15. Van Veen, D.; Van Uden, C.; Blankemeier, L.; Delbrouck, J.; Aali, A.; Bluethgen, C.; Pareek, A.; Polacin, M.; Reis, E.; Seehofnerová, A. Adapted large language models can outperform medical experts in clinical text summarization. Nat. Med. 2024. [Google Scholar] [CrossRef]
  16. Dong, Y.; Cordonnier, J.-B.; Loukas, A. Attention is not all you need: Pure attention loses rank doubly exponentially with depth. In Proceedings of the 38th International Conference on Machine Learning, Virtual Event, 18–24 July 2021; pp. 2793–2803. [Google Scholar]
  17. Ethayarajh, K. How contextual are contextualized word representations? comparing the geometry of BERT, ELMO, and GPT-2 embeddings. arXiv 2019, arXiv:1909.00512. [Google Scholar]
  18. Su, Y.; Liu, F.; Meng, Z.; Lan, T.; Shu, L.; Shareghi, E.; Collier, N. Tacl: Improving bert pre-training with token-aware contrastive learning. arXiv 2021, arXiv:2111.04198. [Google Scholar]
  19. Su, Y.; Lan, T.; Wang, Y.; Yogatama, D.; Kong, L.; Collier, N. A contrastive framework for neural text generation. Adv. Neural Inf. Process. Syst. 2022, 35, 21548–21561. [Google Scholar]
  20. Li, B.; Zhou, H.; He, J.; Wang, M.; Yang, Y.; Li, L. On the sentence embeddings from pre-trained language models. arXiv 2020, arXiv:2011.05864. [Google Scholar]
  21. Wang, Z.; Zeng, J.; Tao, H.; Zhong, L. RBPSum: An extractive summarization approach using Bi-stream attention and position residual connection. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–8. [Google Scholar]
  22. Abanoub, G.E.; Fawzy, A.M.; Waly, R.R.; Gomaa, W.H. Generate descriptions of medical dialogues through two-layers Transformer-based summarization. Intell. Method Syst. Appl. 2023, 32–37. [Google Scholar] [CrossRef]
  23. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv 2019, arXiv:1910.13461. [Google Scholar]
  24. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res. 2020, 21, 5485–5551. [Google Scholar]
  25. Chuang, C.-Y.; Robinson, J.; Lin, Y.-C.; Torralba, A.; Jegelka, S. Debiased contrastive learning. Adv. Neural Inf. Process. Syst. 2020, 33, 8765–8775. [Google Scholar]
  26. Welleck, S.; Kulikov, I.; Roller, S.; Dinan, E.; Cho, K.; Weston, J. Neural text generation with unlikelihood training. arXiv 2019, arXiv:1908.04319. [Google Scholar]
  27. Wu, Z.; Wang, S.; Gu, J.; Khabsa, M.; Sun, F.; Ma, H. CLEAR: Contrastive learning for sentence representation. arXiv 2020, arXiv:2012.15466. [Google Scholar]
  28. Tan, C.; Sun, X. CoLRP: A contrastive learning abstractive text summarization method with ROUGE penalty. In Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia, 18–23 June 2023; pp. 1–7. [Google Scholar]
  29. Mai, T.P.; Nguyen, Q.A.; Can, D.C.; Le, H.Q. Contrastive hierarchical discourse graph for vietnamese extractive multi-document summarization. In Proceedings of the 2023 International Conference on Asian Language Processing (IALP), Singapore, 18–20 November 2023; pp. 118–123. [Google Scholar]
  30. Klein, G.; Kim, Y.; Deng, Y.; Senellart, J.; Rush, A. OpenNMT: Open-Source Toolkit for Neural Machine Translation. Annu. Meet. Assoc. Comput. Linguist. Syst. Demonstr. 2017, 35, 67–72. [Google Scholar]
  31. Holtzman, A.; Buys, J.; Du, L.; Forbes, M.; Choi, Y. The curious case of neural text degeneration. arXiv 2019, arXiv:1904.09751. [Google Scholar]
  32. Fu, Z.; Lam, W.; So, A.; Shi, B. A theoretical analysis of the repetition problem in text generation. Proc. AAAI Conf. Artif. Intell. 2021, 35, 12848–12856. [Google Scholar] [CrossRef]
  33. Su, Y.; Xu, J. An empirical study on contrastive search and contrastive decoding for open-ended text generation. arXiv 2022, arXiv:2211.10797. [Google Scholar]
  34. Chen, W.L.; Wu, C.K.; Chen, H.H.; Chen, C.C. Fidelity-enriched contrastive search: Reconciling the faithfulness-diversity trade-off in text generation. arXiv 2023, arXiv:2310.14981. [Google Scholar]
  35. Zhang, Y.; Kamigaito, H.; Aoki, T.; Takamura, H.; Okumura, M. Generic Mechanism for Reducing Repetitions in Encoder-Decoder Models. J. Nat. Lang. Process. 2023, 30, 401–431. [Google Scholar] [CrossRef]
  36. Xu, J.; Liu, X.; Yan, J.; Cai, D.; Li, H.; Li, J. Learning to break the loop: Analyzing and mitigating repetitions for neural text generation. Adv. Neural Inf. Process. Syst. 2022, 35, 3082–3095. [Google Scholar]
  37. Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality reduction by learning an invariant mapping. IEEE Comput. Vis. Pattern Recognit. 2006, 2, 1735–1742. [Google Scholar]
  38. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A simple framework for contrastive learning of visual representations. Int. Conf. Mach. Learn. 2020, 119, 1597–1607. [Google Scholar]
  39. Du, Z. GPT2-Chinese: Tools for Training GPT2 Model in Chinese Language; GitHub Repository, 2019. [Google Scholar]
  40. Shao, Y.; Geng, Z.; Liu, Y.; Dai, J.; Yan, H.; Yang, F.; Zhe, L.; Bao, H.; Qiu, X. CPT: A pre-trained unbalanced transformer for both chinese language understanding and generation. arXiv 2021, arXiv:2109.05729. [Google Scholar]
  41. Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; Anadkat, S.; et al. GPT-4 technical report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
  42. Fan, A.; Lewis, M.; Dauphin, Y. Hierarchical neural story generation. arXiv 2018, arXiv:1805.04833. [Google Scholar]
  43. Papineni, K.; Roukos, S.; Ward, T.; Zhu, W.-J. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, Philadelphia, PA, USA, 6–12 July 2002; pp. 311–318. [Google Scholar]
  44. Banerjee, S.; Lavie, A. METEOR: An automatic metric for mt evaluation with improved correlation with human judgments. In Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization; Association for Computational Linguistics: Toronto, ON, Canada, 2005; pp. 65–72. [Google Scholar]
  45. Lin, C.-Y. ROUGE: A Package for Automatic Evaluation of Summaries. Text Summarization Branches Out. 2004; pp. 74–81. Available online: https://aclanthology.org/W04-1013.pdf (accessed on 19 March 2024).
  46. Vedantam, R.; Lawrence Zitnick, C.; Parikh, D. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4566–4575. [Google Scholar]
  47. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  48. Wu, L.; Li, J.; Wang, Y.; Meng, Q.; Qin, T.; Chen, W.; Zhang, M.; Liu, T. R-drop: Regularized dropout for neural networks. Adv. Neural Inf. Process. Syst. 2021, 34, 10890–10905. [Google Scholar]
  49. Izmailov, P.; Podoprikhin, D.; Garipov, T.; Vetrov, D.; Wilson, A. Averaging weights leads to wider optima and better generalization. arXiv 2018, arXiv:1803.05407. [Google Scholar]
  50. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2818–2826. [Google Scholar]
  51. Wu, X.; Gao, Y.; Zhang, H.; Yang, Y.; Guo, W.; Lu, J. The Solution for the CVPR2023 NICE Image Captioning Challenge. arXiv 2023, arXiv:2310.06879. [Google Scholar]
  52. Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin, V.; Goyal, N.; Küttler, H.; Lewis, M.; Yih, W.; Rocktäschel, T. Retrieval-augmented generation for knowledge-intensive nlp tasks. Adv. Neural Inf. Process. Syst. 2020, 33, 9459–9474. [Google Scholar]
  53. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
  54. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 2020, 33, 1877–1901. [Google Scholar]
  55. Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; Tang, J. GLM: General language model pretraining with autoregressive blank infilling. arXiv 2022, arXiv:2103.10360. [Google Scholar]
  56. Baevski, A.; Hsu, W.-N.; Xu, Q.; Babu, A.; Gu, J.; Auli, M. Data2vec: A general framework for self-supervised learning in speech, vision and language. In Proceedings of the 39th International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 1298–1312. [Google Scholar]
  57. Uc-Cetina, V.; Navarro-Guerrero, N.; Martin-Gonzalez, A.; Weber, C.; Wermter, S. Survey on reinforcement learning for language processing. Artif. Intell. Rev. 2023, 56, 1543–1575. [Google Scholar] [CrossRef]
  58. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A. Training language models to follow instructions with human feedback. Adv. Neural Inf. Process. Syst. 2022, 35, 27730–27744. [Google Scholar]
Figure 1. The structure of Transformer and GPT-2 decoder blocks.
Figure 2. The CLpCE-based model fine-tuning procedure. L C E guides the supervised learning and L C L directs the unsupervised learning, both parts contributing to the fine-tuning of pre-trained LLMs for accurate feature representation towards a specific task.
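For readers who prefer code to diagrams, the sketch below illustrates one way the CLpCE combination in Figure 2 can be written: a token-level cross-entropy term for the supervised part and an InfoNCE-style contrastive term over sentence embeddings for the unsupervised part, mixed by the weight β (β = 0 reducing to plain CE and β = 1 to pure CL, matching the labels in Table 1). The function name, the temperature τ, and the construction of the positive pairs are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def clpce_loss(logits, labels, sent_emb_a, sent_emb_b, beta=0.6, tau=0.05):
    """Illustrative CLpCE-style objective (not the authors' exact code).

    logits:     (batch, seq_len, vocab) next-token predictions
    labels:     (batch, seq_len) gold token ids
    sent_emb_a: (batch, dim) sentence embeddings, first view
    sent_emb_b: (batch, dim) sentence embeddings of the same sentences, second view
    beta:       mixing weight; beta=0 -> pure CE, beta=1 -> pure CL (as in Table 1)
    """
    # Supervised part: token-level cross-entropy for the generation task.
    ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))

    # Unsupervised part: an InfoNCE-style contrastive term that pulls the two
    # views of the same sentence together and pushes other sentences apart.
    a = F.normalize(sent_emb_a, dim=-1)
    b = F.normalize(sent_emb_b, dim=-1)
    sim = a @ b.t() / tau                                  # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0), device=a.device)     # positives on the diagonal
    cl = F.cross_entropy(sim, targets)

    return (1.0 - beta) * ce + beta * cl
```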
Figure 3. The effect of different β values and decoding methods on DRG text summarization. In the plot, the horizontal axis denotes the β values in the CLpCE objective function, and the vertical axis presents the values of evaluation metrics. Specifically, combinations of different types of lines, markers and colors are used for identifying different metric values of a DRG model (BLEU-1, solid black line with ★; BLEU-2, dashed black line with ∘; BLEU-3, dotted black line with ♢; BLEU-4, dash-dotted black line with □; METEOR, dashed red line with ⊳; ROUGE, dashed green line with △; and CIDER, dashed blue line with ▽).
Figure 4. The effect of control threshold ρ on the text generation diversity (ρ = 0.01, dotted red line with ♢; ρ = 0.10, dashed blue line with ∘).
Table 1. Evaluation of CLpCE-guided DRG models under different decoding methods. The parenthesized β values mark the corresponding objective functions: CE (β = 0.0), the proposed CLpCE (β = 0.6), and CL (β = 1.0).
Decoding   β            BLEU-1   BLEU-2   BLEU-3   BLEU-4   METEOR   ROUGE    CIDER
DCS        0.0 (CE)     0.4638   0.3838   0.3223   0.2724   0.2487   0.5057   1.3607
           0.1          0.4870   0.4049   0.3414   0.2893   0.2585   0.5170   1.4179
           0.2          0.4864   0.4047   0.3414   0.2893   0.2586   0.5186   1.4118
           0.3          0.4794   0.3987   0.3363   0.2853   0.2561   0.5154   1.4162
           0.4          0.4784   0.3979   0.3358   0.2851   0.2560   0.5179   1.4391
           0.5          0.4805   0.3997   0.3372   0.2860   0.2564   0.5154   1.4147
           0.6 (CLpCE)  0.4937   0.4107   0.3461   0.2933   0.2612   0.5182   1.4339
           0.7          0.4855   0.4040   0.3409   0.2894   0.2586   0.5199   1.4459
           0.8          0.4854   0.4033   0.3400   0.2884   0.2582   0.5195   1.4533
           0.9          0.4780   0.3968   0.3342   0.2834   0.2549   0.5147   1.4132
           1.0 (CL)     0.0232   0.0013   0.0002   0.0000   0.0264   0.0284   0.0002
CS         0.0 (CE)     0.4645   0.3843   0.3227   0.2727   0.2491   0.5059   1.3611
           0.1          0.4858   0.4039   0.3406   0.2887   0.2579   0.5166   1.4196
           0.2          0.4866   0.4045   0.3410   0.2890   0.2585   0.5178   1.4125
           0.3          0.4793   0.3987   0.3364   0.2856   0.2562   0.5159   1.4101
           0.4          0.4767   0.3965   0.3346   0.2841   0.2552   0.5169   1.4255
           0.5          0.4810   0.4003   0.3378   0.2867   0.2568   0.5162   1.4240
           0.6 (CLpCE)  0.4939   0.4112   0.3470   0.2943   0.2616   0.5198   1.4477
           0.7          0.4856   0.4042   0.3410   0.2894   0.2587   0.5196   1.4395
           0.8          0.4864   0.4043   0.3408   0.2892   0.2586   0.5198   1.4525
           0.9          0.4798   0.3984   0.3355   0.2845   0.2558   0.5156   1.4208
           1.0 (CL)     0.0233   0.0012   0.0000   0.0000   0.0266   0.0286   0.0002
GS         0.0 (CE)     0.4681   0.3887   0.3274   0.2773   0.2489   0.5095   1.4090
           0.1          0.4846   0.4036   0.3410   0.2898   0.2580   0.5210   1.4567
           0.2          0.4881   0.4063   0.3431   0.2914   0.2592   0.5231   1.4684
           0.3          0.4796   0.3999   0.3381   0.2875   0.2568   0.5199   1.4477
           0.4          0.4809   0.4002   0.3376   0.2865   0.2567   0.5214   1.4542
           0.5          0.4834   0.4034   0.3413   0.2904   0.2580   0.5213   1.4682
           0.6 (CLpCE)  0.4901   0.4088   0.3458   0.2941   0.2611   0.5246   1.4861
           0.7          0.4894   0.4077   0.3443   0.2925   0.2602   0.5247   1.4835
           0.8          0.4865   0.4053   0.3424   0.2910   0.2591   0.5244   1.4864
           0.9          0.4812   0.3998   0.3370   0.2860   0.2559   0.5186   1.4583
           1.0 (CL)     0.0122   0.0009   0.0000   0.0000   0.0126   0.0169   0.0000
NS         0.0 (CE)     0.4654   0.3790   0.3136   0.2616   0.2422   0.4859   1.2368
           0.1          0.4765   0.3907   0.3254   0.2728   0.2492   0.4996   1.3073
           0.2          0.4800   0.3944   0.3290   0.2763   0.2511   0.5017   1.2831
           0.3          0.4775   0.3925   0.3278   0.2757   0.2501   0.5009   1.3221
           0.4          0.4793   0.3939   0.3285   0.2759   0.2504   0.5010   1.3049
           0.5          0.4798   0.3944   0.3292   0.2766   0.2512   0.5017   1.3014
           0.6 (CLpCE)  0.4858   0.3991   0.3326   0.2789   0.2535   0.5044   1.3143
           0.7          0.4799   0.3942   0.3288   0.2758   0.2511   0.5033   1.3322
           0.8          0.4803   0.3942   0.3286   0.2758   0.2511   0.5029   1.3259
           0.9          0.4737   0.3878   0.3226   0.2703   0.2473   0.4961   1.2776
           1.0 (CL)     0.0184   0.0007   0.0000   0.0000   0.0216   0.0226   0.0003
TkS        0.0 (CE)     0.4499   0.3554   0.2852   0.2304   0.2283   0.4542   0.9854
           0.1          0.4627   0.3686   0.2986   0.2436   0.2360   0.4701   1.0701
           0.2          0.4664   0.3712   0.3004   0.2447   0.2371   0.4718   1.0741
           0.3          0.4582   0.3651   0.2956   0.2410   0.2342   0.4681   1.0553
           0.4          0.4638   0.3695   0.2988   0.2434   0.2361   0.4710   1.0584
           0.5          0.4624   0.3687   0.2987   0.2437   0.2359   0.4700   1.0584
           0.6 (CLpCE)  0.4730   0.3775   0.3059   0.2496   0.2402   0.4715   1.0676
           0.7          0.4654   0.3713   0.3008   0.2449   0.2376   0.4712   1.0555
           0.8          0.4702   0.3745   0.3032   0.2470   0.2389   0.4743   1.0807
           0.9          0.4613   0.3672   0.2969   0.2421   0.2352   0.4689   1.0557
           1.0 (CL)     0.0206   0.0016   0.0000   0.0000   0.0225   0.0266   0.0002
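For context on the decoding columns above (DCS, the proposed diversity contrastive search; CS, contrastive search; and the standard greedy search, nucleus sampling, and top-k sampling baselines), the fragment below sketches the scoring rule that contrastive-style decoding shares: each top-k candidate token is ranked by its model confidence minus a degeneration penalty, namely its maximum representation similarity to the tokens generated so far. The function, the tensor shapes, and the balance weight alpha are illustrative placeholders; how DCS additionally uses the control threshold ρ and the maxTRR metric to steer output diversity is described in the main text and is not reproduced here.

```python
import torch
import torch.nn.functional as F

def contrastive_search_step(probs, cand_ids, cand_hidden, ctx_hidden, alpha=0.6):
    """One decoding step in the spirit of contrastive search (illustrative).

    probs:       (k,) model probabilities of the top-k candidate tokens
    cand_ids:    (k,) token ids of the candidates
    cand_hidden: (k, dim) hidden states the model would produce for each candidate
    ctx_hidden:  (t, dim) hidden states of the tokens generated so far
    alpha:       trade-off between confidence and the degeneration penalty
    """
    cand = F.normalize(cand_hidden, dim=-1)
    ctx = F.normalize(ctx_hidden, dim=-1)

    # Degeneration penalty: how similar each candidate is to the existing context.
    # A candidate that closely repeats previous tokens receives a high penalty.
    penalty = (cand @ ctx.t()).max(dim=-1).values          # (k,)

    score = (1.0 - alpha) * probs - alpha * penalty
    best = torch.argmax(score)
    return cand_ids[best]
```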
Table 2. Representation diversity of text summarization.
Method     DCS            CS             GS             NS             TkS
maxTRR     0.12 ± 0.09    0.22 ± 0.13    0.24 ± 0.15    0.27 ± 0.13    0.29 ± 0.16
Table 3. Examples of DRG text summarization for diversity analysis. The annotation (maxTRR, m/n) after each output gives the repetition count m of the token that determines maxTRR and the output length n; a minimal computation sketch is given after the table.
Case A input:
14 108 30 13 20 18 23 21 10 14 32 16 39 27 47 51 31 29 20 18 10 24 42 26 37 61 24 10 40 13 45 163 45 39 159 49 50 204 37 21 157 155 10
Case A, CS output:
150 50 107 104 113 110 15 13 31 29 20 (maxTRR, 1/11)
Case A, DCS outputs:
(1) 150 50 107 66 17 81 76 33 81 10 (maxTRR, 1/10)
(2) 150 50 107 80 33 17 13 31 81 60 49 29 (maxTRR, 1/12)
(3) 150 50 107 80 33 17 81 76 33 31 81 60 49 29 (maxTRR, 1/14)
(4) 150 50 65 107 29 113 15 29 20 60 49 29 (maxTRR, 3/12)
Case B input:
83 12 38 41 17 1074 96 17 552 48 17 27 131 17 89 65 69 70 11 149 58 51 36 82 11 34 38 41 17 40 153 44 23 21 25 11 263 256 567 28 59 11 199 54 894 141 126 231 11 45 83 207 281 240 353 300 212 491 302 237 297 300 212 11 113 110 104 259 207 281 315 286 258 280 11 22 12 96 16 35 12 38 41 17 178 58 36 82 10 22 279 33 91 72 78 11 33 24 122 61 24 10 22 12 62 33 628 51 171 82 11 33 686 170 1119 11 22 12 119 17 143 175 105 744 26 37 72 78 11 22 12 38 41 17 210 143 170 179 10
Case B, CS output:
190 57 190 190 190 79 10 (maxTRR, 4/7)
Case B, DCS outputs:
(1) 49 75 100 344 282 11 57 49 77 75 100 57 92 10 (maxTRR, 2/14)
(2) 49 75 100 344 282 49 57 49 77 75 100 57 92 10 (maxTRR, 3/14)
(3) 49 369 142 49 180 372 11 369 372 11 180 372 11 440 439 139 420 11 117 175 13 29 440 439 11 202 191 200 487 365 175 98 10 (maxTRR, 2/33)
(4) 49 369 142 49 180 372 11 369 372 11 180 372 11 440 439 139 420 11 117 487 384 440 439 11 202 191 175 98 278 10 (maxTRR, 2/30)
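Reading maxTRR as the highest per-token occurrence count divided by the output length reproduces most of the fractions in Table 3 (for example, 4/7 for the case B CS output, where token 190 occurs four times among seven tokens); the published metric may additionally ignore separator-like tokens in some outputs, so the helper below, with a name of our choosing, is only a minimal sketch of that reading.

```python
from collections import Counter

def max_token_repetition_ratio(token_ids, ignore=()):
    """Illustrative maxTRR: highest per-token count over sequence length.

    `ignore` can hold separator-like token ids if the metric is meant to
    skip them; whether the paper does so is left open here.
    """
    kept = [t for t in token_ids if t not in ignore]
    counts = Counter(kept)
    return max(counts.values()) / len(kept)

# Case B, CS output from Table 3: 190 occurs 4 times among 7 tokens -> 4/7 ≈ 0.571.
print(max_token_repetition_ratio([190, 57, 190, 190, 190, 79, 10]))
```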
Table 4. DRG accuracy of DCS decoding using different control threshold values ρ.
ρ       BLEU-1   BLEU-2   BLEU-3   BLEU-4   METEOR   ROUGE    CIDER
0.00    0.4939   0.4112   0.3470   0.2943   0.2616   0.5198   1.4477
0.01    0.4939   0.4111   0.3470   0.2942   0.2612   0.5190   1.4459
0.05    0.4939   0.4113   0.3466   0.2940   0.2613   0.5188   1.4445
0.10    0.4937   0.4107   0.3461   0.2933   0.2612   0.5182   1.4339
Table 5. Scores achieved by the first-tier teams in the competition.
Team   Main Procedure in Diagnosis Report Generation               Score
A      CPT-base + noise-aware similarity bucketing + fine-tuning   2.327
B      BART-large + GBPQ + fine-tuning                             2.297
C      (CPT-base + BART-base) + RAG + fine-tuning                  2.285
D      BART-large + fine-tuning                                    2.272
E      BART-large + fine-tuning                                    2.263
F      BART-large + fine-tuning                                    2.249
Ours   GPT2-Chinese + fine-tuning + CLpCEwDCS decoding             2.135
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
