Correction published on 17 January 2024, see Sensors 2024, 24(2), 586.
Article

Cross-Parallel Transformer: Parallel ViT for Medical Image Segmentation

College of Engineering and Design, Hunan Normal University, Changsha 410081, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(23), 9488; https://doi.org/10.3390/s23239488
Submission received: 31 October 2023 / Revised: 22 November 2023 / Accepted: 23 November 2023 / Published: 29 November 2023 / Corrected: 17 January 2024
(This article belongs to the Section Sensing and Imaging)

Abstract:
Medical image segmentation primarily relies on hybrid models that combine a Convolutional Neural Network with sequential Transformers. The latter leverage multi-head self-attention to achieve comprehensive global context modelling. However, despite their success in semantic segmentation, this feature extraction process is inefficient and demands substantial computational resources, which hinders the network's robustness. To address this issue, this study presents two new models: PTransUNet (PT model) and C-PTransUNet (C-PT model). The C-PT module refines the Vision Transformer by substituting its sequential design with a parallel one: it strengthens the feature extraction capability of Multi-Head Self-Attention via self-correlated feature attention and channel feature interaction, while streamlining the Feed-Forward Network to lower computational demands. On the public Synapse dataset, the PT and C-PT models improve DSC accuracy by 0.87% and 3.25%, respectively, in comparison with the baseline model. In terms of parameter count and FLOPs, the PT model matches the baseline model, whereas the C-PT model reduces the parameter count by 29% and FLOPs by 21.4% relative to the baseline. The segmentation models proposed in this study therefore offer benefits in both accuracy and efficiency.

1. Introduction

Medical image segmentation is a vital research area due to properties that distinguish it from RGB image segmentation, and its significance in medical applications further underlines its necessity. The encoder–decoder structure based on the Convolutional Neural Network (CNN) was a pioneering development in this field [1,2]. It provided large receptive fields and rich contextual information in the deep layers of the network, making it adaptable to multiscale input images, and it offered an end-to-end training model that received significant attention at the time. This innovation gave rise to the foundational U-Net framework [3], based on the “U”-shaped network structure, sparking a wave of research enthusiasm. The U-Net network structure is characterized by its simplicity, featuring a fully symmetric encoder–decoder architecture with skip connections. Owing to its outstanding performance, it has dominated the field of medical image segmentation. However, CNNs with local receptive fields are limited in extracting global features for tasks with long-range dependencies and cannot fully capture global information, which restricts the CNN's ability to realize its full potential.
Recently, network models based on the Transformer architecture [4] have been challenging the dominant position of CNN and gaining prominence, primarily due to their self-attention mechanism, which possesses the capability to model long-range contextual information. This addresses the limitations of CNNs, making them shine in the field of medical imaging. The concept of incorporating Transformer modules into the network architecture of U-Net has reignited a research wave centered around Transformer-based approaches in the domain of medical image segmentation. On one hand, most researchers have been exploring how to embed serial Transformer modules into the U-Net structure, leading to a series of classic networks such as TransUNet [5], Swin-Unet [6], UNETR [7] and so on [8,9,10,11,12]. Undeniably, serial Transformer network models have significantly improved the accuracy of medical image segmentation. Notably, the TransUNet [5] model was the first to apply the Vision Transformer (ViT) [13] to the field of medical image segmentation, leveraging the global contextual modeling capabilities of Transformers in conjunction with the local feature extraction characteristics of CNN. This has provided highly effective solutions for the medical domain. It is evident that their competitive advantage has been achieved through increased model complexity, which inevitably comes with high computational and memory costs, potentially impacting the practical application of these models in clinical medical segmentation [14]. On the other hand, there has been relatively less research on the application of parallel Transformer modules in medical image segmentation [15,16,17]. This is because traditional parallel modes tend to increase network parameters and feature dimensions, which can impact network efficiency and accuracy. However, the emergence of parallel ViT [18] has provided a new direction for applying parallel Transformers to medical image segmentation research. Under the condition of maintaining the same parameter count, replacing serial ViT [13] with parallel ViT can increase network width while reducing network depth. Parallel ViT achieves this by reducing the depth of the modules, optimizing network training, and making network training less challenging compared with serial ViT. However, despite maintaining the same parameter count, it still incurs high computational costs. Additionally, the reduction in network depth weakens its semantic representation and contextual awareness, limiting the applicability of parallel ViT in medical image segmentation. Therefore, there is a need for an effective parallel structure that can simultaneously enhance the accuracy and efficiency of medical image segmentation, breaking the limitations of parallel ViT applications.
In this research, we reevaluated the design of the parallel ViT structure, addressing its shortcomings in semantic representation and the issue of high computational costs. We improved and optimized a classic end-to-end network, TransUNet [5], and introduced the Cross-Parallel Transformer module (C-PT block) to address these challenges. We replaced the serial ViT block in TransUNet with the parallel ViT and C-PT block, naming them PTransUNet (PT model) and C-PTransUNet (C-PT model), respectively. Specifically, we first enhanced TransUNet by incorporating the parallel ViT. Without changing the model’s parameter count and FLOPs, this approach aimed to reduce the optimization complexity of the network. The goal was to accelerate global context modeling on top of rich local features, leading to better feature representation. Next, we improved the proposed C-PT block in TransUNet, leveraging semantic information fusion and cross-attention from the left and right branches of the parallel Transformer. This enhancement strengthened the long-range modeling capabilities and high-order spatial interactions of the parallel ViT in a collaborative manner to obtain more robust feature representations. Furthermore, in order to reduce computational expenses and minimize memory usage, we streamlined the Feed-Forward Network (FFN) [19] architecture within the parallel ViT, replacing it with the Rectified Linear Unit (ReLU) activation function. This overall enhancement improved the segmentation performance. We conducted experiments that compared the performance impact of parallel Transformer structures applied to the TransUNet network model from the perspectives of depth and parallelism. We assessed the model’s segmentation accuracy and efficiency, and through ablation experiments, we analyzed the effectiveness of the C-PT block design.
The contributions of this paper are as follows:
  • We propose an enhanced PTransUNet model by utilizing parallel ViT rather than serial ViT in the TransUNet medical image segmentation model. This model demonstrates an improvement in segmentation efficiency in comparison to the TransUNet model.
  • We present an enhanced C-PTransUNet model and introduce a cross-parallel Transformer module to upgrade the parallel ViT. The MHSA part employs the Dendrite Net (DD) [20,21] layer to enhance semantic representation and long-range spatial dependencies, and it achieves self-crossover attention over semantic features through feature fusion. In the Feed-Forward Network section, we adopt an aggressive streamlining approach: the fully connected layers are removed and only a combination of normalization and activation functions is retained. This reduces computational overhead while preserving the nonlinear feature representation.
  • On the public dataset Synapse [22], we experimentally evaluate the PT model and C-PT model, and the findings demonstrate that parallel ViT achieves superior accuracy and efficiency compared with sequential ViT for medical image segmentation.

2. Related Work

2.1. Vision Transformers Development

The Transformer, originally designed for Natural Language Processing, has been successfully applied to Computer Vision as the ViT model. It competes favourably with conventional CNN approaches in tasks including image classification, target detection, and semantic segmentation. Its success is attributable to its dynamic attention mechanism and long-range modelling capabilities, which demonstrate robust feature learning. ViT divides the image into multiple small patches, which are then turned into sequences that serve as input features. An N-layer Transformer processes these sequences to produce a thorough feature representation of the entire image. With the self-attention mechanism, the Transformer captures long-distance dependencies among image features and enables higher-order spatial information exchange. It excels in global relational modelling, expanding the receptive field and acquiring rich contextual details, which effectively compensates for the limited global modelling capability of CNNs.
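As an illustration of this patch-to-sequence step, the following minimal PyTorch sketch embeds a 224 × 224 image into 196 patch tokens; the dimensions follow the common ViT-B defaults, and the variable names are ours rather than from any released implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of ViT-style patchification: a 224x224 RGB image split into
# 16x16 patches yields 14*14 = 196 tokens, each embedded as a 768-d vector.
# Dimensions follow the common ViT-B defaults; the names are ours.
patch_embed = nn.Conv2d(in_channels=3, out_channels=768, kernel_size=16, stride=16)

image = torch.randn(1, 3, 224, 224)                      # one input image
tokens = patch_embed(image).flatten(2).transpose(1, 2)   # (1, 196, 768) token sequence
print(tokens.shape)
```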
Recently, a variety of novel models based on the ViT backbone have emerged, which can be categorized into sequential and parallel Transformer architectures. Sequential ViT models include DeiT [23], CeiT [24], Swin Transformer [25], T2T-ViT [26], PVT [27], DeepViT [28], and others. Because the Transformer module attends to global contextual information, building relationships between pixels across the whole image, it cannot capture local visual features the way a standard CNN does through its inductive bias, which makes ViT training more difficult and slows convergence. Touvron et al. [23] proposed the DeiT model, which learns the inductive bias of image data by distilling knowledge from a CNN teacher model into a Transformer student model; this enhances feature extraction via convolutional bias and also accelerates model convergence. For sequence input, ViT partitions the input image into numerous patch blocks; these fixed partitions lose the image's local features. The Swin Transformer [25] model adopts the idea of dynamic attention to neighbouring pixels, using sliding windows to model globally in the spatial dimension while performing self-attention within each patch block and attention computation across blocks; this dynamic generation of attention weights reduces the computational complexity of self-attention and improves local feature extraction. Every ViT Transformer layer operates on image features at the same resolution, resulting in a high computational cost. Yuan et al. [26] proposed the T2T-ViT model, which utilizes a deep and narrow hierarchical Transformer architecture to enhance features, though still at a high computational cost. Wang et al. [27] suggested the Pyramid Vision Transformer (PVT), with a progressively shrinking feature-pyramid hierarchy that acquires multi-scale feature maps and a spatial-reduction attention (SRA) layer that reduces the computational cost of processing high-resolution feature maps. DeepViT [28] is designed with Re-Attention, which re-performs the self-attention operation with features from multiple layers at a lower computational cost, relieving the feature-saturation problem of deep ViTs and allowing the network to learn more complex representations.
Currently, research on serial Transformer structures is thriving, while research on parallel Transformer structures remains limited [16,17,29]. Parallel ViT [18], proposed by the Meta team, was the first such improvement: serially connected Transformer blocks are converted to parallel processing by decreasing the model's depth while increasing its width. Because the residual contribution becomes smaller as the network deepens, the parallel arrangement can be approximated as equivalent to the sequential ViT [13], and the number of model parameters and FLOPs remains unchanged. Depth [30,31] and width [32] are two critical factors in neural network architecture. To boost performance, most ViT variants [23,24,25,26,27,28] increase depth by concatenating Transformer blocks. Deep networks are difficult to optimize, and the model's separability is affected by the size of the feature dimension. There are still few studies on expanding ViT width [18]; the main concerns are that parallel ViT raises the computational cost of the network, increases model complexity, and produces feature dimensions so large that the model overfits easily.

2.2. Transformer-Based Medical Image Segmentation Method

Replacing the convolutional blocks of the U-shaped network with Transformer modules capable of global feature extraction is a promising avenue for applying the Transformer to medical image segmentation. TransUNet [5] was the pioneering network model to implement the ViT for medical image segmentation. Because a CNN captures only local information, embedding the Transformer in the encoder to extract global features from the CNN-encoded image blocks provides long-distance dependencies and rich spatial information. To achieve accurate segmentation, the decoder up-samples the encoded features and localizes them with the CNN's low-level features. The TransUNet model incorporates the self-attention mechanism into the U-Net architecture to enhance contextual comprehension, but this comes with a notable computational expense. The Swin-Unet [6] model, inspired by the Swin Transformer [25] module, replaces the U-Net network's convolutional layers directly, leading to the first pure-Transformer structure for medical image segmentation. The input image undergoes a non-overlapping patch operation before being fed into the Transformer encoder to learn a global deep feature representation; the decoder then combines the encoded features with up-sampled features to recover the feature map and perform segmentation prediction. This approach resolves the difficulty convolution has in learning global semantic information effectively. UNETR [7] converts the 3D segmentation task into a sequence-to-sequence prediction problem: its encoder learns long-range semantic features with a pure Transformer architecture, while its decoder recovers high-resolution features with a CNN structure. UNETR adopts this hybrid Transformer-CNN approach because ViT, although excellent at extracting global features, does not perform well in acquiring local semantic information, and the Transformer carries a greater computational overhead than a CNN. As previously mentioned, conventional architectures such as TransUNet [5], Swin-Unet [6], and UNETR [7] utilize ViT or Swin Transformer modules to enhance feature extraction by increasing network depth (i.e., the series connectivity pattern of the blocks [33]). Increasing network depth clearly has a large impact on model performance, yet it may not be the ideal choice when network optimization, separability, and computational cost are considered.
The proposed Cross-Parallel Transformer (C-PT) module aims to improve model performance while reducing computational overhead, and to resolve the issues outlined above by better balancing the choice between depth and parallelism. In this study, we verify the module's effectiveness on the classical 2D medical image segmentation network TransUNet. Furthermore, this work represents a bold attempt to apply parallel Transformers to medical image segmentation. To illustrate the Transformers' connection patterns, Table 1 lists several conventional networks. In particular, PTransUNet is built with parallel ViT, while C-PTransUNet is built on the proposed C-PT block, which strengthens the parallel ViT. We revised a portion of the ViT structure and examined a more efficient Transformer connection. Our results indicate that this modification improves TransUNet, even though only a fraction of its architecture is altered.

3. Methods Section

3.1. Cross-Parallel Transformer Module

In this paper, we propose a cross-parallel Transformer module (C-PT), as illustrated in Figure 1. The module applies parallel design principles to enhance image feature extraction and improve image segmentation accuracy by increasing its width. As opposed to the conventional design of the Transformer module, this paper presents a cross-parallel transformer module comprising the cross-parallel multi-head self-attention block (C-PMHSA) and an activation function block. The C-PMHSA incorporates the multi-head self-attention residual block (MHSA) of the conventional Transformer module. At the same time, the multilayer perceptron (MLP) within the standard Transformer module is simplified as an activation function block. The connection composition is very concise, retaining only the Layer Normalization (LN) and GELU [34] activation functions. The key aim of this proposal is to fully utilize the Transformer’s parallel nature, improve the multi-head self-attention mechanism’s feature extraction ability [35], simplify the FFN structure in the latter stages, and minimize computational complexity without compromising module performance.

3.1.1. Cross-Parallel Multi-Head Self-Attention Blocks (C-PMHSA)

We have revised both the internal structure and connection of the two parallel MHSA modules to create a new cross-parallel MHSA structure. This new structure defines the parallel structure as left-branching and right-branching Transformer blocks, known as left and right blocks.
The image features are layer normalized once in each of the left and right blocks before being mapped to the query matrix ($Q_*$), key matrix ($K_*$), and value matrix ($V_*$) of the corresponding block through the linear transformation matrices $W_{Q\_*}$, $W_{K\_*}$, and $W_{V\_*}$ of the left and right blocks, respectively. These matrices are calculated as follows:
$Q_* = W_{Q\_*}(\mathrm{LN}_*(x)),$
$K_* = W_{K\_*}(\mathrm{LN}_*(x)),$
$V_* = W_{V\_*}(\mathrm{LN}_*(x)),$
where $\mathrm{LN}_*$ denotes the layer normalization operator, with $*$ representing the left or right block, and $x$ signifying the position-encoded input block for the image.
Unlike the conventional Transformer module, we introduce a DD layer [20], i.e., a white-box model, after the $W_{Q\_*}$, $W_{K\_*}$, and $W_{V\_*}$ linear layers in the left and right blocks. The DD layer proceeds through a linear layer before conducting a Hadamard product operation with the input; its primary aim is to enhance the nonlinear representation of the model by emulating dendritic processing in neurons. In order to avoid excessive computational complexity and enhance the model’s generalization ability, we propose incorporating a single layer of Dendrite Net in the left and right blocks. While the Hadamard product operation of the DD layer can suppress low-interest regions in the image features, it may also diminish high-interest features. In this regard, we addressed the feature loss issue resulting from the DD layer by incorporating residual connections [36] and implemented proximity connections to reinforce the acquisition of highly relevant features. The mapping matrices of the left and right blocks are calculated as follows:
$\hat{Q}_* = (W_{Q\_*}^{10} Q_*) \circ Q_* + W_{Q\_*}^{10} Q_*,$
$\hat{K}_* = (W_{K\_*}^{10} K_*) \circ K_* + W_{K\_*}^{10} K_*,$
$\hat{V}_* = (W_{V\_*}^{10} V_*) \circ V_* + W_{V\_*}^{10} V_*,$
where $W_{Q\_*}^{10}$, $W_{K\_*}^{10}$, and $W_{V\_*}^{10}$ represent the weight matrices of the DD layer, $\circ$ indicates the Hadamard product of the DD layer, $Q_*$, $K_*$, and $V_*$ denote the original feature inputs of the DD layer, and $*$ denotes the same as above.
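For illustration, a minimal PyTorch sketch of a single DD layer with the residual branch described by the equations above is given below; the module and variable names are ours, and the feature dimension is only an example.

```python
import torch
import torch.nn as nn

class DendriteResidual(nn.Module):
    """One Dendrite Net (DD) layer with a residual branch, following the
    equations above: out = (W x) ∘ x + W x. Names are ours."""

    def __init__(self, dim: int):
        super().__init__()
        # W^{10}: a plain linear map applied before the Hadamard product.
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim), e.g. the Q, K, or V projection of one branch.
        wx = self.weight(x)      # W^{10} x
        return wx * x + wx       # Hadamard product plus the residual on W^{10} x

q = torch.randn(2, 196, 768)     # hypothetical patch tokens
dd = DendriteResidual(768)
print(dd(q).shape)               # torch.Size([2, 196, 768])
```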
Subsequently, the feature information from both the left and right blocks is combined in the left block before undergoing the multi-head self-attention operation. This involves adding the mapping matrices $\hat{Q}_*$, $\hat{K}_*$, and $\hat{V}_*$ of both blocks to maximize the utilization of the extracted image features, improving feature expression capability and reducing feature loss. The corresponding mapping matrices $\hat{Q}_*$ and $\hat{K}_*$ of the left and right blocks undergo an LN to reduce data redundancy and hasten convergence. The attention input matrices are computed as follows:
$\hat{Q}_{left} = \mathrm{LN}_{Q\_left}(\hat{Q}_{left} + \hat{Q}_{right}),$
$\hat{K}_{left} = \mathrm{LN}_{K\_left}(\hat{K}_{left} + \hat{K}_{right}),$
$\hat{V}_{left} = \hat{V}_{left} + \hat{V}_{right},$
$\hat{Q}_{right} = \mathrm{LN}_{Q\_right}(\hat{Q}_{right}),$
$\hat{K}_{right} = \mathrm{LN}_{K\_right}(\hat{K}_{right}),$
where $\mathrm{LN}_{Q\_left}$, $\mathrm{LN}_{K\_left}$, $\mathrm{LN}_{Q\_right}$, and $\mathrm{LN}_{K\_right}$ are layer normalization operators.
In this paper, the multi-head self-attention operations within the left and right blocks retain the QKV computation introduced in the regular Transformer module [4]. This involves mapping the matrices Q, K, and V to distinct subspaces for attention computation, while maintaining the same number of parameters; the attention results of the distinct subspaces are then concatenated to capture the correlations between the image blocks. In particular, the left block performs a self-attention calculation, whereas the right block performs a self-cross-attention calculation. The input matrices $\hat{Q}_{left}$, $\hat{K}_{left}$, and $\hat{V}_{left}$ for the left block attention computation incorporate the fused feature information from the left and right blocks, whilst primarily concentrating on their own contextual features. The input matrices $\hat{Q}_{right}$ and $\hat{K}_{right}$ used for the right block attention computation contain feature information from this layer, whereas the input matrix $\hat{V}_{right}$ is the fusion of the left block output matrix $O_{left}$ and the right block mapping matrix $\hat{V}_{right}$; this guides the right block to generate highly correlated image features using the image feature information from the left block. Similarly, this paper retains the Dropout [37] layer after the softmax and the MHSA to reduce co-adaptation between neurons and address potential overfitting. Additionally, the outputs of the left and right blocks are fused for feature alignment to prevent feature loss and then connected to the input with a residual, which mitigates the training difficulty caused by gradient degradation. The left block's self-attention ($\mathrm{Att}_{left}$) and output ($O_{left}$) are defined as follows:
$\mathrm{Att}_{left}(\hat{Q}_{left}, \hat{K}_{left}, \hat{V}_{left}) = \mathrm{Drop\_soft}_{left}\left(\mathrm{softmax}\left(\frac{\hat{Q}_{left}(\hat{K}_{left})^{T}}{\sqrt{d_k}}\right)\right)\hat{V}_{left},$
$\mathrm{head}_{left}^{i} = \mathrm{Att}_{left}(\hat{Q}_{left} W_i^{Q}, \hat{K}_{left} W_i^{K}, \hat{V}_{left} W_i^{V}),$
$\mathrm{MHSA}_{left}(\hat{Q}_{left}, \hat{K}_{left}, \hat{V}_{left}) = \mathrm{Concat}(\mathrm{head}_{left}^{1}, \ldots, \mathrm{head}_{left}^{h}),$
$O_{left} = \mathrm{Drop\_att}_{left}(\hat{W}_{left}\,\mathrm{MHSA}_{left}),$
where Equations (13) and (14) can be located in reference [4].
The right block’s self-crossing attention ( Att r i g h t ) and output ( O r i g h t ) are defined as follows:
$\hat{V}_{right} = \hat{V}_{right} + O_{left},$
$\mathrm{Att}_{right}(\hat{Q}_{right}, \hat{K}_{right}, \hat{V}_{right}) = \mathrm{Drop\_soft}_{right}\left(\mathrm{softmax}\left(\frac{\hat{Q}_{right}(\hat{K}_{right})^{T}}{\sqrt{d_k}}\right)\right)\hat{V}_{right},$
$\mathrm{head}_{right}^{i} = \mathrm{Att}_{right}(\hat{Q}_{right} W_i^{Q}, \hat{K}_{right} W_i^{K}, \hat{V}_{right} W_i^{V}),$
$\mathrm{MHSA}_{right}(\hat{Q}_{right}, \hat{K}_{right}, \hat{V}_{right}) = \mathrm{Concat}(\mathrm{head}_{right}^{1}, \ldots, \mathrm{head}_{right}^{h}),$
$O_{right} = \mathrm{Drop\_att}_{right}(\hat{W}_{right}\,\mathrm{MHSA}_{right}),$
where Equations (18) and (19) refer to the same as above.
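To make the data flow of the C-PMHSA block concrete, the following PyTorch sketch wires the left and right branches, the DD layers, the feature fusion, and the two attention operations as described above. It is an illustrative reconstruction under our own naming and ViT-B-like default hyperparameters, not the released implementation; the residual connection to the block input is left to the enclosing C-PT layer (Section 3.2).

```python
import math
import torch
import torch.nn as nn

class DendriteResidual(nn.Module):
    """DD layer with residual: (W x) ∘ x + W x (see Section 3.1.1)."""
    def __init__(self, dim):
        super().__init__()
        self.weight = nn.Linear(dim, dim, bias=False)

    def forward(self, x):
        wx = self.weight(x)
        return wx * x + wx

class Branch(nn.Module):
    """One parallel branch: LN, then Q/K/V projections, each followed by a DD layer."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.q, self.k, self.v = (nn.Linear(dim, dim) for _ in range(3))
        self.dd_q, self.dd_k, self.dd_v = (DendriteResidual(dim) for _ in range(3))

    def forward(self, x):
        h = self.norm(x)
        return self.dd_q(self.q(h)), self.dd_k(self.k(h)), self.dd_v(self.v(h))

def attention(q, k, v, heads, drop):
    """Multi-head scaled dot-product attention with dropout on the softmax weights."""
    b, n, d = q.shape
    dh = d // heads
    split = lambda t: t.view(b, n, heads, dh).transpose(1, 2)   # (b, heads, n, dh)
    q, k, v = split(q), split(k), split(v)
    att = drop(torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(dh), dim=-1))
    return (att @ v).transpose(1, 2).reshape(b, n, d)

class CPMHSA(nn.Module):
    """Illustrative reconstruction of the cross-parallel MHSA block (names are ours)."""
    def __init__(self, dim=768, heads=12, p_drop=0.1):
        super().__init__()
        self.left, self.right = Branch(dim), Branch(dim)
        self.heads = heads
        self.norm_q_l, self.norm_k_l = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.norm_q_r, self.norm_k_r = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.proj_l, self.proj_r = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.drop_soft, self.drop_att = nn.Dropout(p_drop), nn.Dropout(p_drop)

    def forward(self, x):
        ql, kl, vl = self.left(x)
        qr, kr, vr = self.right(x)
        # Left branch: fuse features from both branches, then self-attention.
        q_left, k_left = self.norm_q_l(ql + qr), self.norm_k_l(kl + kr)
        v_left = vl + vr
        o_left = self.drop_att(self.proj_l(
            attention(q_left, k_left, v_left, self.heads, self.drop_soft)))
        # Right branch: its value matrix is guided by the left-branch output.
        q_right, k_right = self.norm_q_r(qr), self.norm_k_r(kr)
        v_right = vr + o_left
        o_right = self.drop_att(self.proj_r(
            attention(q_right, k_right, v_right, self.heads, self.drop_soft)))
        # Fuse the two outputs; the residual to the block input is added by the
        # enclosing C-PT layer (Section 3.2).
        return o_left + o_right

tokens = torch.randn(2, 196, 768)        # hypothetical patch tokens
print(CPMHSA()(tokens).shape)            # torch.Size([2, 196, 768])
```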

3.1.2. Activation Function Block

The FFN segment of the regular Transformer features the conventional MLP connection, comprising fully connected layers and activation functions. It upscales and then downscales the matrix dimension and achieves significant global feature interaction, but carries a heavy computational overhead. The global nature of the MLP is crucial to the Transformer module [4]; however, in vision MLPs, only spatial location features are perceived globally [38], and the channels do not interact. The self-attention mechanism compensates for this limitation by utilizing the information between channels, thereby allowing global features to interact attentively. In this paper, the C-PMHSA is proposed as a replacement for the MHSA in the conventional Transformer. This enhancement improves the feature extraction capability of the multi-head self-attention mechanism [35], albeit at the cost of higher computational effort. To alleviate the computational pressure in the C-PT module, this paper therefore removes the computationally expensive MLP in favour of a simple activation function block. The activation function block, which omits fully connected layers, consists of an LN layer and the GELU activation function. GELU smooths the input distribution [34], thereby alleviating gradient vanishing and improving model training efficiency.
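As a sketch, the activation function block amounts to the following composition; the feature dimension of 768 is illustrative and not prescribed by the original design.

```python
import torch
import torch.nn as nn

# Sketch of the activation function block that replaces the FFN/MLP:
# only Layer Normalization followed by GELU, with no fully connected layers.
activation_block = nn.Sequential(
    nn.LayerNorm(768),
    nn.GELU(),
)

tokens = torch.randn(2, 196, 768)
print(activation_block(tokens).shape)    # torch.Size([2, 196, 768])
```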

3.2. Improvements to the TransUNet Model

The TransUNet model is a medical image segmentation network that utilizes the Transformer module and was the first of its kind; consequently, it delivers competitive results in 2D medical image segmentation. Its encoder employs a hybrid CNN-Transformer architecture, wherein certain CNN blocks in the U-Net encoder are replaced by Transformer blocks. This cleverly reduces the computational cost caused by the large input matrices typical of a pure Transformer module. Our proposed C-PT module is an evolutionary outcome derived from ViT, whose success is attributed to three factors: (1) patch-level rather than pixel-level inputs [13], (2) the multi-head self-attention mechanism [4], and (3) the GELU activation function [34]. Since the TransUNet model employs a sequential ViT architecture that is challenging to optimize and computationally costly, this significantly affects the segmentation accuracy and efficiency of the network model.
Therefore, we propose two refined models: (i) the PTransUNet network model, which substitutes the sequential ViTs of the TransUNet model with parallel ViTs, preserving the other network structures and leaving the parameter count and FLOPs of the sequential ViTs unchanged, and (ii) the C-PTransUNet network model, which replaces TransUNet's sequential ViT with C-PT blocks. In particular, each layer of the C-PT block utilizes two MHSAs for feature extraction within the C-PMHSA. Compared with TransUNet, our proposed network model can also be expanded in depth: increasing the number of C-PMHSA layers can improve feature extraction, benefiting the model's long-range modelling and global spatial interaction. All other aspects of the model retain the original structure and parameters of the TransUNet model. Figure 2 depicts the network structure of C-PTransUNet. The output of layer $\ell$ is expressed as follows:
$z_{\ell}' = \mathrm{C\text{-}PMHSA}(z_{\ell-1}) + z_{\ell-1},$
$z_{\ell} = \mathrm{GELU}(\mathrm{LN}(z_{\ell}')),$
where $z_{\ell}$ represents the image feature of layer $\ell$ in the encoder.
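A minimal sketch of one C-PT encoder layer following these two equations is shown below. For self-containment, a standard multi-head attention module stands in for the C-PMHSA block sketched in Section 3.1.1, and the 7-layer stack in the usage example mirrors the configuration used later for the Synapse experiments.

```python
import torch
import torch.nn as nn

class CPTLayer(nn.Module):
    """Sketch of one C-PT encoder layer following the two equations above.
    nn.MultiheadAttention stands in for the C-PMHSA block purely to keep the
    example self-contained; see the sketch in Section 3.1.1 for that block."""

    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.cpmhsa = nn.MultiheadAttention(dim, heads, batch_first=True)  # stand-in
        self.norm = nn.LayerNorm(dim)
        self.act = nn.GELU()

    def forward(self, z_prev):
        # z'_l = C-PMHSA(z_{l-1}) + z_{l-1}
        attn_out, _ = self.cpmhsa(z_prev, z_prev, z_prev)
        z_mid = attn_out + z_prev
        # z_l = GELU(LN(z'_l)): the simplified activation block replaces the MLP
        return self.act(self.norm(z_mid))

encoder = nn.Sequential(*[CPTLayer() for _ in range(7)])  # e.g. a 7-layer C-PT stack
z = torch.randn(2, 196, 768)                              # hypothetical patch tokens
print(encoder(z).shape)                                   # torch.Size([2, 196, 768])
```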

4. Experiments and Results

In this section, we conducted experiments using publicly available datasets for medical image segmentation tasks. We evaluated the findings from both qualitative and quantitative perspectives and investigated the scalability of the module in ablation experiments.

4.1. Dataset

To validate the model, we selected the datasets used in the TransUNet model, namely the multi-organ CT image segmentation dataset of the MICCAI Multi-Atlas Abdominal Labelling Challenge (Synapse) [22] and the MRI image segmentation dataset of the Automated Cardiac Diagnostics Challenge (ACDC) [39]. Synapse consists of CT scans of eight abdominal organs from 30 cases, comprising a total of 3779 cross-sectional slices. Following the standard dataset division, we partitioned the dataset into a training set of 18 cases (2211 slices) and a test set of 12 cases (1568 slices). For the eight abdominal organ segmentation tasks, covering the aorta, gallbladder, left kidney, right kidney, liver, pancreas, spleen, and stomach, we report the Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95). The ACDC dataset includes cardiac magnetic resonance images from 100 cases, comprising 1902 cross-sectional slices in total. We divided this dataset into three subsets: a training set of 70 cases (1300 slices), a validation set of 10 cases (186 slices), and a test set of 20 cases (416 slices). We primarily report the DSC for the right ventricle (RV), left ventricle (LV), and myocardium (MYO).

4.2. Implementation Details

Our experiments were conducted using the PyTorch framework on a single NVIDIA A5000 GPU with 24 GB of memory. To ensure an objective comparison with the baseline TransUNet, we applied the same data augmentation techniques as the TransUNet model to prevent overfitting, and we set the corresponding input resolutions (224 × 224, 320 × 320) and patch size P = 16. The same optimizer and parameters [5], comprising a learning rate of 0.01, momentum of 0.9, and weight decay of 1 × 10−4, were used for training the model. Following the TransUNet setup, we set the batch size to 24 and the number of training iterations to 14 k for the Synapse dataset [5]. Keeping the original configuration, we preserved the ImageNet [40] pre-trained parameters of ResNet-50 [36] used in the TransUNet design. We replaced the 12-layer Transformer component of the encoder with a C-PT block of the most suitable depth and employed some of the ViT pre-training parameters to improve training effectiveness, after which we performed standard training to adjust the network weights. We additionally used 2D inputs for prediction and then reconstructed the predictions in 3D for evaluation. In particular, all Synapse experiments with a 512 input image size in this paper were performed with a batch size of 6 and a learning rate of 0.0025, which differs from the TransUNet conditions.
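For reference, the optimizer settings stated above correspond to the following hedged sketch; the placeholder model is ours and merely stands in for the actual segmentation network.

```python
import torch

# Hedged sketch of the stated optimizer settings (SGD, learning rate 0.01,
# momentum 0.9, weight decay 1e-4); `model` is a placeholder standing in for
# the actual segmentation network.
model = torch.nn.Conv2d(1, 9, kernel_size=1)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,
    momentum=0.9,
    weight_decay=1e-4,
)
print(optimizer)
```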
Six evaluation metrics were used in our experiments: the DSC and the HD95, in addition to Accuracy, F1 Score, Sensitivity, and Precision. DSC represents the degree of overlap between the predicted and labelled images. HD95 represents the 95th percentile of the distances between the points of the predicted image and the corresponding points of the labelled image. Accuracy assesses the overall correctness of a classification model, whereas the F1 Score is the harmonic mean of a model's precision and recall. Sensitivity is the proportion of positive instances that are correctly identified, and Precision is the proportion of actual positive instances among the predicted positive samples. The mathematical formulas for these performance metrics are as follows:
$DSC = \frac{2|A \cap B|}{|A| + |B|},$
$HD95 = \max\{d_{AB}, d_{BA}\},$
$Accuracy = \frac{TP + TN}{TP + TN + FP + FN},$
$F1\ Score = \frac{2TP}{2TP + FP + FN},$
$Sensitivity\ (Recall) = \frac{TP}{TP + FN},$
$Precision = \frac{TP}{TP + FP},$
where $A$ and $B$ indicate the predicted result and true label regions of the image, respectively, $|A \cap B|$ represents the size of their intersection, while $|A|$ and $|B|$ signify their sizes. $d_{AB}$ is the 95th percentile distance from the predicted outcome to the true label, and $d_{BA}$ is the 95th percentile distance from the true label to the predicted outcome. TP is True Positives, TN is True Negatives, FP is False Positives, and FN is False Negatives.
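The following NumPy/SciPy sketch illustrates how these metrics can be computed for binary masks. It is an illustrative implementation under our own naming; the HD95 computation shown here operates on all foreground points rather than extracted surfaces, which is a simplification.

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between the two foreground point sets."""
    a, b = np.argwhere(pred), np.argwhere(gt)    # foreground coordinates
    d = cdist(a, b)                              # pairwise Euclidean distances
    d_ab = np.percentile(d.min(axis=1), 95)      # prediction -> label
    d_ba = np.percentile(d.min(axis=0), 95)      # label -> prediction
    return max(d_ab, d_ba)

def confusion_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Accuracy, F1 Score, Sensitivity, and Precision from TP/TN/FP/FN counts."""
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }

pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
gt = np.zeros((64, 64), dtype=bool); gt[12:32, 12:32] = True
print(dice(pred, gt), hd95(pred, gt), confusion_metrics(pred, gt))
```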

4.3. Comparison of Baseline Models

4.3.1. Synapse Dataset Result

Table 2 illustrates the effect of two improved models, PT and C-PT, on the Synapse dataset for multiple abdominal organs, whilst predominantly comparing the assessment outcomes for three distinct input image dimensions.
(1) Input image size of 224: The best performance for our C-PT model was achieved when using the 7-layer C-PT module, with average DSC and HD95 scores reaching 80.73% and 21.15 mm respectively—both surpassing other classical methods. Additionally, it also demonstrated superior segmentation performance compared with the latest CTC-Net [41] and TransDeepLab [42] models. Compared with the baseline model, both of our methods achieve better results. The PT model and the C-PT model outperform the baseline TransUNet [5] by 0.87% and 3.25%, respectively, in terms of average DSC accuracy. Additionally, they shorten the edge gap by 5.31 mm and 10.54 mm in terms of average HD95.
(2) Input image size of 320: The PT model and C-PT model perform marginally better than the baseline method in DSC evaluation metrics, achieving gains of 0.47% and 1.25%, respectively. However, the PT model’s edge prediction is slightly weaker than that of the baseline method in terms of HD, and the C-PT model’s structure is shorter by 2.17 mm than the baseline network.
(3) Input image size of 512: The C-PT model utilizes a 7-layer C-PT module to optimize performance. While the PT model demonstrates an improvement of 0.49% and 1.28% over nnUNet [43] and TransUNet [5], respectively, in average DSC, its HD metric fails to meet the desired standard. The C-PT model, on the other hand, exhibits a potential advantage, with its DSC accuracy slightly surpassing that of the baseline method and its HD metric reduced by 4.57 mm. Overall, according to Table 2, our proposed C-PT model demonstrates superior performance to both the baseline and PT models across the overall evaluation metrics, whereas the PT model only outperforms the baseline model in terms of mean DSC. Notably, the C-PT (320 × 320) model is better than nnUNet (512 × 512) [43] in terms of mean DSC and also reduces the edge prediction error by 3.63 mm in terms of mean HD.
Table 3 compares the results of four evaluation metrics in the Synapse dataset. The findings demonstrate that the C-PT model outperforms the baseline model regarding Accuracy, F1 Score, Sensitivity, and Precision, while the PT model is marginally superior to the baseline model, displaying mainly comparable performance.
Table 2. Comparative results of mainstream 2D models on the Synapse abdominal multi-organ segmentation dataset. The best results are shown in bold. Higher DSC and lower HD95 indicate better performance.

| Methods | Average DSC↑ | Average HD95↓ | Size | Aorta | Gallbladder | Left Kidney | Right Kidney | Liver | Pancreas | Spleen | Stomach |
|---|---|---|---|---|---|---|---|---|---|---|---|
| V-Net [44] | 68.81 | - | - | 75.34 | 51.87 | 77.10 | 80.75 | 87.84 | 40.05 | 80.56 | 56.98 |
| DARR [45] | 69.77 | - | - | 74.74 | 53.77 | 72.31 | 73.24 | 94.08 | 54.18 | 89.90 | 45.96 |
| R50+U-Net [3] | 74.68 | 36.87 | - | 84.18 | 62.84 | 79.79 | 71.29 | 93.35 | 48.23 | 84.41 | 73.92 |
| TransClaw UNet [46] | 78.09 | - | 224 | 85.87 | 61.38 | 84.83 | 79.36 | 94.28 | 57.65 | 87.74 | 73.55 |
| U-Net [3] | 78.2 | 31.96 | 224 | 88.31 | 70.2 | 79.38 | 71.57 | 93.75 | 57.53 | 86.31 | 78.52 |
| R50 VIT CUP [13] | 71.29 | 32.87 | 224 | 73.73 | 55.13 | 75.80 | 72.20 | 91.51 | 45.99 | 81.99 | 73.95 |
| CGNET [47] | 75.08 | - | 224 | 83.48 | 65.32 | 77.91 | 72.04 | 91.92 | 57.37 | 85.47 | 77.12 |
| AttUNet [48] | 75.59 | 36.97 | - | 55.92 | 63.91 | 79.20 | 72.71 | 93.56 | 49.37 | 87.19 | 74.95 |
| Swin-UNet [6] | 79.13 | 21.55 | 224 | 85.47 | 66.53 | 83.28 | 79.61 | 94.29 | 56.58 | 90.66 | 76.60 |
| TransUNet [5] | 77.48 | 31.69 | 224 | 87.23 | 63.13 | 81.87 | 77.02 | 94.08 | 55.86 | 85.08 | 75.62 |
| UCTransNet [10] | 78.23 | 26.75 | 224 | - | - | - | - | - | - | - | - |
| FFUNet [8] | 79.09 | 31.65 | 224 | 86.68 | 67.09 | 81.13 | 73.73 | 93.67 | 64.17 | 90.92 | 75.32 |
| CTC-Net [41] | 78.41 | 22.52 | 224 | 86.46 | 63.53 | 83.71 | 80.79 | 93.78 | 59.73 | 86.87 | 72.39 |
| TransDeepLab [42] | 80.16 | 21.25 | 224 | 86.04 | 69.16 | 84.08 | 79.88 | 93.53 | 61.19 | 89.00 | 78.40 |
| Ours 1 | 78.35 | 26.38 | 224 | 86.65 | 60.86 | 82.18 | 77.50 | 94.57 | 58.28 | 89.65 | 77.12 |
| Ours 2 | 80.73 | 21.15 | 224 | 88.33 | 65.99 | 83.84 | 82.27 | 94.54 | 63.36 | 88.28 | 79.27 |
| TransUNet [5] | 81.41 | 23.28 | 320 | 90.37 | 64.98 | 85.51 | 81.60 | 94.67 | 68.34 | 88.41 | 77.40 |
| Ours 1 | 81.88 | 30.17 | 320 | 89.54 | 64.90 | 84.40 | 80.18 | 95.39 | 66.27 | 91.28 | 83.12 |
| Ours 2 | 82.66 | 21.11 | 320 | 89.21 | 67.97 | 85.54 | 81.18 | 95.09 | 66.82 | 90.79 | 84.71 |
| nnUNet [43] | 82.36 | 24.74 | 512 | 90.96 | 65.57 | 81.92 | 78.36 | 95.96 | 69.36 | 91.12 | 85.60 |
| TransUNet [5] | 81.57 | 26.89 | 512 | 90.45 | 66.20 | 79.73 | 74.99 | 95.25 | 74.24 | 88.61 | 83.14 |
| Ours 1 | 82.85 | 28.12 | 512 | 91.01 | 63.85 | 84.38 | 80.6 | 95.84 | 70.19 | 91.61 | 85.33 |
| Ours 2 | 81.85 | 22.32 | 512 | 90.94 | 60.51 | 84.99 | 79.38 | 95.28 | 68.67 | 93.39 | 81.71 |

1 Representing the PTransUNet model. 2 Representing the C-PTransUNet model.
Table 3. Comparative results of baseline and enhanced models on the Synapse dataset. The best results are shown in bold. Higher values (in percentages) of Accuracy, F1 Score, Sensitivity, and Precision indicate better performance. The input image size is uniformly 224.

| Evaluating Indicator | Methods | Average | Aorta | Gallbladder | Left Kidney | Right Kidney | Liver | Pancreas | Spleen | Stomach |
|---|---|---|---|---|---|---|---|---|---|---|
| Accuracy | TransUNet | 99.88 | 99.97 | 99.98 | 99.94 | 99.93 | 99.92 | 99.91 | 99.90 | 99.70 |
| | Ours 1 | 99.89 | 99.97 | 99.98 | 99.94 | 99.93 | 99.74 | 99.91 | 99.93 | 99.73 |
| | Ours 2 | 99.90 | 99.97 | 99.98 | 99.95 | 99.94 | 99.75 | 99.92 | 99.92 | 99.76 |
| F1 Score | TransUNet | 76.53 | 87.61 | 53.24 | 81.99 | 76.89 | 93.76 | 58.44 | 86.07 | 74.24 |
| | Ours 1 | 77.31 | 86.65 | 52.53 | 82.18 | 77.50 | 94.57 | 58.28 | 89.65 | 77.12 |
| | Ours 2 | 79.69 | 88.33 | 57.65 | 83.84 | 82.27 | 94.54 | 63.36 | 88.28 | 79.28 |
| Sensitivity | TransUNet | 76.53 | 88.36 | 50.49 | 87.67 | 73.00 | 95.04 | 55.59 | 90.53 | 71.56 |
| | Ours 1 | 76.85 | 87.09 | 51.62 | 82.99 | 76.62 | 95.81 | 54.80 | 92.01 | 73.89 |
| | Ours 2 | 79.56 | 89.88 | 55.32 | 82.47 | 83.30 | 95.83 | 60.63 | 91.44 | 77.66 |
| Precision | TransUNet | 80.29 | 87.19 | 62.71 | 78.58 | 83.81 | 92.60 | 73.12 | 84.51 | 79.86 |
| | Ours 1 | 80.36 | 86.72 | 58.79 | 81.72 | 80.16 | 93.46 | 70.95 | 88.54 | 82.62 |
| | Ours 2 | 82.37 | 87.30 | 64.71 | 87.45 | 81.94 | 93.42 | 74.80 | 86.24 | 83.15 |

1 Representing the PTransUNet model. 2 Representing the C-PTransUNet model.

4.3.2. ACDC Dataset Result

Table 4 illustrates the impact of PT and C-PT models on the ACDC cardiac dataset. With regard to average DSC metrics, our C-PT model performs slightly lower than the newest CTC-Net model but displays a potential competitive edge over other models. Our C-PT model outperforms other models in segmenting the Myo region. In a comparison of the same parameter configuration between the TransUNet and C-PT model, the C-PT model exhibited superior segmentation DSC coefficients and HD metrics. Therefore, the results indicated that the C-PT model is more robust.
Table 5 presents the outcomes of the performance evaluation of our model and the baseline model across four performance metrics. The C-PT model surpasses the baseline model in terms of Sensitivity and F1 Score metrics, while slightly lagging behind the baseline model in terms of Precision. The Precision of the PT model was better than that of the baseline model, although it scored slightly lower in terms of F1 Score and Sensitivity. In terms of accuracy, both the C-PT model and the PT model were comparable to the baseline model.

4.4. Parallelism Experiment

We examined the influence of varying numbers of parallel branches on the performance of the PT and C-PT models and contrasted their performance with that of the baseline model [5]. As an arbitrary combination of parallelism and depth would change the number of parameters and FLOPs of the ViT modules, we chose to keep the total number of modules constant for each network model and to reallocate modules across different branches. For instance, TransUNet's sequential ViT-B/12 × 1 can be reorganized into PT's parallel ViT-B/6 × 2, 4 × 3, or 3 × 4. Here, ViT-B stands for the ViT-Base module [13], and 6 × 2, 4 × 3, and 3 × 4 denote 6-layer 2-branch, 4-layer 3-branch, and 3-layer 4-branch configurations, respectively; the layers and branches refer to the module's depth and degree of parallelism. The reorganization does not alter the fundamental parameters of each module [18], such as the Hidden Size and MLP Size. Since the C-PT modules used in the C-PT model are adapted from parallel ViT-B/6 × 2 and have a structure distinct from the parallel ViT-B blocks, we controlled how depth and parallelism were combined simply by fixing the total number of C-PT modules. The combination methodology is identical to the one used for the PT model, and the C-PT blocks require fewer parameters and FLOPs than the parallel ViT-B blocks.
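To make the reorganization concrete, the following PyTorch sketch builds a parallel ViT in the spirit of [18], where a 12 × 1 serial stack and a 6 × 2 parallel stack contain exactly the same sub-blocks and therefore the same number of parameters. The class and function names are ours, and the sketch omits patch embedding and position encoding.

```python
import torch
import torch.nn as nn

def sub_block(dim=768, mlp_dim=3072, heads=12):
    """One ViT-B sub-block: (LN, MHSA, LN, MLP), the usual serial building unit."""
    attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    mlp = nn.Sequential(nn.Linear(dim, mlp_dim), nn.GELU(), nn.Linear(mlp_dim, dim))
    return nn.ModuleList([nn.LayerNorm(dim), attn, nn.LayerNorm(dim), mlp])

class ParallelViT(nn.Module):
    """Sketch of parallel ViT in the spirit of [18]: `depth` layers, each summing
    `branches` residual sub-blocks, e.g. 12x1 (serial baseline) versus the 6x2
    configuration used by the PT model. The total number of sub-blocks, and
    hence the parameter count, is unchanged by the reorganization."""

    def __init__(self, depth=6, branches=2, dim=768):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ModuleList(sub_block(dim) for _ in range(branches))
            for _ in range(depth))

    def forward(self, x):
        for layer in self.layers:
            # All attention branches of a layer read the same input ...
            x = x + sum(attn(ln1(x), ln1(x), ln1(x))[0] for ln1, attn, _, _ in layer)
            # ... and so do all MLP branches.
            x = x + sum(mlp(ln2(x)) for _, _, ln2, mlp in layer)
        return x

serial = ParallelViT(depth=12, branches=1)    # ViT-B/12 x 1
parallel = ParallelViT(depth=6, branches=2)   # ViT-B/6 x 2
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(serial) == count(parallel))       # True: identical parameter count
print(parallel(torch.randn(2, 196, 768)).shape)
```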
Figure 3 displays the outcomes of three parallelism experiments of the PT and C-PT models conducted on varying image sizes belonging to the Synapse multi-organ segmentation dataset. In Figure 3a, the PT model’s 3 × 4 and 6 × 2 structure proves advantageous over the baseline model for Stomach and Spleen organ segmentation. For Left Kidney and Right Kidney organ segmentation, the PT model’s 4 × 3 and 6 × 2 structure is slightly superior to the baseline model. The overall structure is similar to the baseline model for Liver and Aorta organ segmentation. However, the PT/3 × 4 model exhibits feature redundancy when segmenting 512-sized images, leading to a reduction in the accuracy of Gallbladder organ segmentation. The PT model exhibits variability in segmentation accuracy across organs with variations in parallelism, while the PT/6 × 2 model consistently demonstrates superior segmentation performance across multiple organs compared to the PT/4 × 3 and PT/3 × 4 models. In Figure 3b, it is evident that the segmentation performance of the three parallelisms of the C-PT model improves significantly at an image size of 224. Similarly, the C-PT/6 × 2 model shows comparable results to the baseline for multi-organ segmentation once the input image size is increased to 320. Finally, increasing the image size from 320 to 512 further enhances the performance. The accuracy of the C-PT/4 × 3 and C-PT/3 × 4 models fluctuates during the segmentation of the Stomach, Spleen, Left Kidney, and Right Kidney organs, with varying levels of precision. Furthermore, it should be noted that an increase in parallelism can impact the segmentation stability of the C-PT model.
We adjust the optimal module width of the segmentation network model by varying the degree of parallel branching, so as to enhance segmentation performance. From the experimental data presented in Table 6, it is apparent that the PT model has an equal number of parameters across the different modes, including the 6-layer 2-branch, 4-layer 3-branch, and 3-layer 4-branch configurations, with similar FLOPs for identical input image sizes. Interestingly, with input image sizes of 224 and 320, the average DSC of PT/4 × 3 is slightly greater than that of PT/6 × 2 and PT/3 × 4, while the average HD reaches its optimal level at an input size of 320. Increasing the input size to 512, PT/6 × 2 achieves the best average DSC, while it is slightly weaker than PT/3 × 4 in terms of average HD. Although increasing the input size improves the PT model's performance at parallelism levels 3 and 4, at 512 the model acquires redundant features, leading to a degradation of segmentation performance. The performance of the C-PT models at varying parallelism can be found in Table 7. Because the parallel structure of the C-PT block lies in its MHSA component, the number of activation function blocks decreases as parallelism increases. It therefore becomes apparent that the number of parameters differs between the three models, although the total number of C-PT modules remains the same; nevertheless, the FLOPs of all three C-PT models are the same. Considering the DSC and HD aspects together, it is evident that with an image size of 224, DSC increases as branching increases while HD decreases. In contrast, at an image size of 320, the overall performance deteriorates with increased branching. For an image size of 512, C-PT/3 × 4 exhibits superior DSC and HD metrics, as well as a greater number of parameters compared with C-PT/6 × 2. However, owing to the higher probability of redundant features and unstable segmentation performance, increasing parallelism is an unsuitable substitute for depth scaling. In summary, considering the segmentation model's performance and computational overhead, both the PT model and the C-PT model in this paper use a degree of parallelism of 2.

4.5. Depth Experiment

Figure 4 shows the outcome of scaling the depth of the PT and C-PT models for three image sizes using the Synapse multi-organ segmentation dataset, with a parallelism of 2. In Figure 4a, for the PT model used in multiple organ segmentation, its Dice similarity coefficient (DSC) does not exhibit a consistent increase with depth, which is only observed in the cases of 224-Gallbladder, 224-Left Kidney, 224-Right Kidney, 320-Left Kidney, 320-Right Kidney, 320-Pancreas and 512-Pancreas. As the image scale increases, the PT-6L model displays a more pronounced advantage in segmentation accuracy compared with PT models of other depths. Compared with the baseline model, the PT model demonstrates a competitive advantage when the depth is set to 6 (without altering the number of consecutive ViT parameters and FLOPs). In Figure 4b, the C-PT/7L model achieved better segmentation results in multi-organ segmentation as the image scale increased from 224 to 512. It is noteworthy that the C-PT/8L model outperformed the C-PT/7L model in the segmentation of Gallbladder, Left Kidney, Spleen, and Stomach organs at an image scale of 320. The C-PT model may enhance segmentation performance through enhancing the depth of appropriate modules.
To investigate how variations in module depth affect the performance of the PTransUNet (PT) and C-PTransUNet (C-PT) networks, we adjust the depth by expanding and compressing the number of layers, based on the optimal number of parallel branches determined in the previous experiments. Our main focus is the performance of the PT network model at depths ranging from 5 to 7 layers and of the C-PT network model at depths ranging from 5 to 9 layers. Table 8 shows the performance of the PT segmentation model across various depths. The size of the input image has an impact on the PT segmentation model, with performance gains occurring at an input size of 224; however, increasing the size to 320 or 512 results in a decrease in performance, as an over-deep network structure leads to feature saturation and overfitting. The experiments indicate that the parallel ViT module of the PT model attains optimal performance at 6 layers. Similarly, Table 9 illustrates how depth impacts the performance of the C-PT model. The model achieves better segmentation performance with 7 layers of depth at the different scales, except that at an image size of 320 the C-PT model reaches its highest performance at 8 layers. In summary, segmentation performance decreases when the module depth grows too large; thus, the PT model employs 6 layers of depth, and the C-PT model uses 7 or 8 layers of depth to acquire features, which may relieve the model's optimization issue to a certain degree.

4.6. Ablation Experiment

In order to validate the efficiency of the C-PT module, this paper conducted combined ablation studies under various settings, such as varying the numbers of DD and MLP layers.
We examined the impact of utilizing varying quantities of DD layers on the performance of the C-PT model. As shown in Figure 5a, it is evident that C-PT/1dd-7L can achieve a DSC advantage on the Liver, Left Kidney, Right Kidney, Gallbladder, and Pancreas organs. However, altering the number of DD layers to 2 or reducing it to 0 did not improve the DSC performance. In Figure 5b, it is observed that the DD layer does not significantly enhance Liver segmentation, but it does improve Pancreas segmentation significantly. This indicates that the DD layer has the potential to correctly segment challenging targets. Table 10 illustrates that the introduction of the DD layer has resulted in improved average DSC and HD scores as compared with the baseline model. This outcome highlights the efficacy of incorporating the “DD layer and activation function”. In terms of average DSC and HD indices, the C-PT/1DD model outperforms the C-PT/0DD and C-PT/2DD models. The incorporation of the DD layer significantly enhances the feature extraction of the segmentation model. The quantity of parameters and FLOPs are significantly impacted by the DD layer. The 7-layer C-PT/2DD model, when compared with the baseline model, still maintains a competitive advantage concerning the number of parameters and FLOPs. However, an excess of DD layers results in feature saturation within the module, which hinders the model’s convergence and negatively impacts segmentation performance. Therefore, we introduced a 1-layer DD module with a combination of activation functions, resulting in lower computational overhead and improved segmentation performance compared to the serial ViT baseline model.
We investigated the impact of varying numbers of MLP layers on the C-PT model following the addition of an equal number of DD layers. Figure 5c displays that C-PT/0MLP (with only the C-PMHSA component retained) demonstrated excellent DSC performance for Gallbladder and Spleen organ segmentation, while C-PT/2MLP showed superior results for Pancreas and Gallbladder segmentation. In Figure 5d, it is evident that the quantity of MLP layers has a minimal effect on DSC performance improvement for Liver and Aorta segmentation. However, it has a significant influence on segmenting Left Kidney, Right Kidney, and Pancreas organs. Table 11 indicates that C-PT/0MLP and C-PT/1MLP exhibit similar performance in segmentation metrics. The addition of two MLP layers in parallel leads to improved performance, but an excessive number of MLP layers increases the number of parameters and FLOPs, imposing significant computational overhead. Notably, C-PT/1DD introduces only the activation function block, which has a similar computational cost as C-PT/0MLP, yet achieves better segmentation performance than both C-PT/1MLP and C-PT/2MLP. This validates the possibility of improving the feature extraction capacity of the MHSA component, thereby reducing the computational burden of the MLP.

4.7. Visualising Results on Synapse Dataset

Figure 6 displays the visualization outcomes from various methods on the Synapse dataset. The visualization results reveal that our C-PT model outperforms the baseline model in segmenting the Stomach and Liver. Additionally, there is a low probability of actual background regions being predicted as organ areas. This advantage stems from our designed C-PMHSA module, where the output from the left layer guides the right layer to focus on regions containing organs rather than freely learning features. This study indicates that by enhancing the MHSA feature extraction capability and reducing the FFN computational overhead, the method can be modelled more effectively over long distances, leading to improved segmentation results.

5. Conclusions

This paper examines the effect of incorporating parallel ViT and C-PT modules to enhance the medical segmentation baseline model, TransUNet, in terms of segmentation efficiency and performance from the perspective of improving ViT. This study proposes the PTransUNet (PT) model and C-PTransUNet (C-PT) model for medical image segmentation. On the basis of the research findings, it can be concluded that:
(1) The PT model demonstrates superior DSC performance compared with the TransUNet baseline model while maintaining the same number of parameters and FLOPs. Additionally, the parallel ViT proves more appropriate for the baseline model than the serial ViT for feature learning at a deeper level.
(2) At an input size of 224, the C-PT model decreases parameter count by 29% and FLOPs by 21.4% compared with the baseline model, while also improving DSC accuracy by 3.25% and shortening the HD edge gap by 10.54 mm relative to the baseline. The C-PT model exhibits superior segmentation performance and higher efficiency than the baseline model employed.
(3) The C-PT module demonstrates improved performance and efficiency when compared with the parallel ViT module within the baseline model. This is attributed to the design of the C-PMHSA and the streamlined MLP. The MHSA block's feature extraction capability is enhanced to ensure the overall performance of the C-PT module, while the FFN block is replaced with an activation function to reduce the number of parameters and FLOPs of the C-PT module.
This study confirms that the parallel ViT and the proposed C-PT module achieve superior segmentation performance compared to the serial ViT under the baseline model. A future study will examine how the C-PT module can be implemented in 3D medical image segmentation. Moreover, the study will explore the performance benefits of the module by applying it to other network models.

Author Contributions

Conceptualization, D.W. and Z.W.; methodology, D.W.; software, D.W.; validation, D.W., Z.W. and B.Y.; formal analysis, L.C.; investigation, D.W.; resources, Z.W.; data curation, D.W. and Z.W.; writing—original draft preparation, D.W.; writing—review and editing, B.Y., L.C. and H.X.; visualization, D.W.; supervision, Z.W.; project administration, B.Y.; funding acquisition, B.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted following the principles outlined in the Declaration of Helsinki and received approval from the Institutional Review Board of Hunan Normal University (protocol code: 568, approval date: 7 November 2023).

Data Availability Statement

Two publicly available datasets were used in this manuscript: the Multi-Organ CT Image Segmentation Dataset for the MICCAI Multi-Atlas Abdominal Labelling Challenge (Synapse) and the MRI Image Segmentation Dataset for the Automated Cardiac Diagnostics Challenge (ACDC). These datasets can be found at https://www.synapse.org/#!Synapse:syn3193805/wiki/217789 (accessed on 5 May 2023), and https://www.creatis.insa-lyon.fr/Challenge/acdc/ (accessed on 5 May 2023), respectively.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Abbreviations

The following abbreviations are used in this manuscript:
PT: PTransUNet network
C-PT: C-PTransUNet network
CNN: Convolutional neural network
ViT: Vision Transformer
MHSA: Multi-Head Self-Attention
FFN: Feed-Forward Network
DSC: Dice similarity coefficient
HD95: the 95% Hausdorff Distance
FLOPs: Floating-point operations
MLP: Multilayer Perceptron
LN: Layer Normalization
GELU: Gaussian Error Linear Unit

References

  1. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440. [Google Scholar]
  2. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
  3. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  4. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  5. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
  6. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 205–218. [Google Scholar]
  7. Hatamizadeh, A.; Tang, Y.; Nath, V.; Yang, D.; Myronenko, A.; Landman, B.; Roth, H.R.; Xu, D. Unetr: Transformers for 3d medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 574–584. [Google Scholar]
  8. Xie, J.; Zhu, R.; Wu, Z.; Ouyang, J. FFUNet: A novel feature fusion makes strong decoder for medical image segmentation. IET Signal Process. 2022, 16, 501–514. [Google Scholar] [CrossRef]
  9. Zhou, H.-Y.; Guo, J.; Zhang, Y.; Yu, L.; Wang, L.; Yu, Y. nnformer: Interleaved transformer for volumetric segmentation. arXiv 2021, arXiv:2109.03201. [Google Scholar]
  10. Wang, H.; Cao, P.; Wang, J.; Zaiane, O.R. Uctransnet: Rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual, 22 February–2 March 2022; pp. 2441–2449. [Google Scholar]
  11. Gao, Y.; Zhou, M.; Metaxas, D.N. UTNet: A hybrid transformer architecture for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; pp. 61–71. [Google Scholar]
  12. Peiris, H.; Hayat, M.; Chen, Z.; Egan, G.; Harandi, M. A robust volumetric transformer for accurate 3D tumor segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 162–172. [Google Scholar]
  13. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S. An image is worth 16×16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  14. Ansari, M.Y.; Abdalla, A.; Ansari, M.Y.; Ansari, M.I.; Malluhi, B.; Mohanty, S.; Mishra, S.; Singh, S.S.; Abinahed, J.; Al-Ansari, A. Practical utility of liver segmentation methods in clinical surgeries and interventions. BMC Med. Imaging 2022, 22, 97. [Google Scholar]
  15. Liu, Z.; Shen, L. Medical image analysis based on transformer: A review. arXiv 2022, arXiv:2208.06643. [Google Scholar]
  16. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–41. [Google Scholar] [CrossRef]
  17. Shamshad, F.; Khan, S.; Zamir, S.W.; Khan, M.H.; Hayat, M.; Khan, F.S.; Fu, H. Transformers in medical imaging: A survey. Med. Image Anal. 2023, 88, 102802. [Google Scholar] [CrossRef]
  18. Touvron, H.; Cord, M.; El-Nouby, A.; Verbeek, J.; Jégou, H. Three things everyone should know about vision transformers. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 497–515. [Google Scholar]
  19. Bebis, G.; Georgiopoulos, M. Feed-forward neural networks. IEEE Potentials 1994, 13, 27–31. [Google Scholar] [CrossRef]
  20. Liu, G.; Wang, J. Dendrite net: A white-box module for classification, regression, and system identification. IEEE Trans. Cybern. 2021, 52, 13774–13787. [Google Scholar] [CrossRef] [PubMed]
  21. Liu, G. It may be time to perfect the neuron of artificial neural network. TechRxiv 2023. [Google Scholar] [CrossRef]
  22. Landman, B.; Xu, Z.; Igelsias, J.; Styner, M.; Langerak, T.; Klein, A. Miccai multi-atlas labeling beyond the cranial vault–workshop and challenge. In Proceedings of the MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge, Munich, Germany, 5–9 October 2015; p. 12. [Google Scholar]
  23. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. Proc. Int. Conf. Mach. Learn. 2021, 139, 10347–10357. [Google Scholar]
  24. Yuan, K.; Guo, S.; Liu, Z.; Zhou, A.; Yu, F.; Wu, W. Incorporating convolution designs into visual transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 579–588. [Google Scholar]
  25. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
  26. Yuan, L.; Chen, Y.; Wang, T.; Yu, W.; Shi, Y.; Jiang, Z.-H.; Tay, F.E.; Feng, J.; Yan, S. Tokens-to-token vit: Training vision transformers from scratch on imagenet. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 558–567. [Google Scholar]
  27. Wang, W.; Xie, E.; Li, X.; Fan, D.-P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 568–578. [Google Scholar]
  28. Zhou, D.; Kang, B.; Jin, X.; Yang, L.; Lian, X.; Jiang, Z.; Hou, Q.; Feng, J. Deepvit: Towards deeper vision transformer. arXiv 2021, arXiv:2103.11886. [Google Scholar]
  29. Lin, T.; Wang, Y.; Liu, X.; Qiu, X. A survey of transformers. AI Open 2022, 3, 111–132. [Google Scholar] [CrossRef]
  30. Delalleau, O.; Bengio, Y. Shallow vs. deep sum-product networks. Adv. Neural Inf. Process. Syst. 2011, 24. [Google Scholar]
  31. Eldan, R.; Shamir, O. The power of depth for feedforward neural networks. In Proceedings of the Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; pp. 907–940. [Google Scholar]
  32. Lu, Z.; Pu, H.; Wang, F.; Hu, Z.; Wang, L. The expressive power of neural networks: A view from the width. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar]
  33. Zhang, Y.; Liu, H.; Hu, Q. Transfuse: Fusing transformers and cnns for medical image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; pp. 14–24. [Google Scholar]
  34. Dubey, S.R.; Singh, S.K.; Chaudhuri, B.B. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing 2022, 503, 92–108. [Google Scholar] [CrossRef]
  35. Sukhbaatar, S.; Grave, E.; Lample, G.; Jegou, H.; Joulin, A. Augmenting self-attention with persistent memory. arXiv 2019, arXiv:1907.01470. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  37. Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 2014, 15, 1929–1958. [Google Scholar]
  38. Zhao, Y.; Wang, G.; Tang, C.; Luo, C.; Zeng, W.; Zha, Z.-J. A battle of network structures: An empirical study of cnn, transformer, and mlp. arXiv 2021, arXiv:2108.13002. [Google Scholar]
  39. Bernard, O.; Lalande, A.; Zotti, C.; Cervenansky, F.; Yang, X.; Heng, P.-A.; Cetin, I.; Lekadir, K.; Camara, O.; Ballester, M.A.G. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Trans. Med. Imaging 2018, 37, 2514–2525. [Google Scholar] [CrossRef]
  40. Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  41. Zhang, S.; Xu, Y.; Wu, Z.; Wei, Z. CTC-Net: A Novel Coupled Feature-Enhanced Transformer and Inverted Convolution Network for Medical Image Segmentation. In Proceedings of the Asian Conference on Pattern Recognition, Kitakyushu, Japan, 5–8 November 2023; pp. 273–283. [Google Scholar]
  42. Azad, R.; Heidari, M.; Shariatnia, M.; Aghdam, E.K.; Karimijafarbigloo, S.; Adeli, E.; Merhof, D. Transdeeplab: Convolution-free transformer-based deeplab v3+ for medical image segmentation. In Proceedings of the International Workshop on PRedictive Intelligence in MEdicine, Singapore, 22 September 2022; pp. 91–102. [Google Scholar]
  43. Isensee, F.; Jaeger, P.F.; Kohl, S.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 2021, 18, 203–211. [Google Scholar] [CrossRef] [PubMed]
  44. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571. [Google Scholar]
  45. Fu, S.; Lu, Y.; Wang, Y.; Zhou, Y.; Shen, W.; Fishman, E.; Yuille, A. Domain adaptive relational reasoning for 3d multi-organ segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020; pp. 656–666. [Google Scholar]
  46. Chang, Y.; Menghan, H.; Guangtao, Z.; Xiao-Ping, Z. Transclaw u-net: Claw u-net with transformers for medical image segmentation. arXiv 2021, arXiv:2107.05188. [Google Scholar]
  47. Wu, T.; Tang, S.; Zhang, R.; Cao, J.; Zhang, Y. Cgnet: A light-weight context guided network for semantic segmentation. IEEE Trans. Image Process. 2020, 30, 1169–1179. [Google Scholar] [CrossRef]
  48. Schlemper, J.; Oktay, O.; Schaap, M.; Heinrich, M.; Kainz, B.; Glocker, B.; Rueckert, D. Attention gated networks: Learning to leverage salient regions in medical images. Med. Image Anal. 2019, 53, 197–207. [Google Scholar] [CrossRef]
  49. Huang, X.; Deng, Z.; Li, D.; Yuan, X. Missformer: An effective medical image segmentation transformer. arXiv 2021, arXiv:2109.07162. [Google Scholar] [CrossRef]
Figure 1. (a) Transformer module. (b) Parallel Transformer module. (c) Cross-Parallel Transformer module. QKV denotes the Query, Key, and Value of the self-attention mechanism. After passing through the linear layers and the DD layer in the left and right branches, Q, K, and V are mapped to $\hat{Q}^{*}$, $\hat{K}^{*}$, and $\hat{V}^{*}$, respectively, where $*$ denotes the left block or the right block.
Figure 2. C-PTransUNet network architecture. Replacing the sequential Transformer layer within TransUNet with a parallel Transformer layer produces the PT network model, while substituting a cross-parallel Transformer layer produces the C-PT network model.
Figure 3. Comparison of DSC for multiple organ segmentation in parallelism experiments. (a) Parallelism experiments with different sizes of PTransUNet models; (b) Parallelism experiments with different sizes of C-PTransUNet models.
Figure 4. Comparison of DSC for multiple organ segmentation in depth experiments. (a) Depth experiments with different sizes of PTransUNet models; (b) Depth experiments with different sizes of C-PTransUNet models.
Figure 5. Ablation experiments on the DD and MLP layers. (a) DSC performance under different numbers of DD layers; (b) DSC distribution over the eight abdominal organs under different numbers of DD layers; (c) DSC performance under different numbers of MLP layers; (d) DSC distribution over the eight abdominal organs under different numbers of MLP layers.
Figure 6. Visualization of model segmentation results on the Synapse dataset. The Aorta, Gallbladder, Left Kidney, Right Kidney, Liver, Pancreas, Spleen, and Stomach are represented by pink, brown, orange, yellow, purple, green, blue, and red, respectively.
Table 1. Transformer connection method used by each network model.

Model | Sequential Transformer | Parallel Transformer
TransUNet | ✓ |
Swin-Unet | ✓ |
UNETR | ✓ |
PTransUNet | | ✓
C-PTransUNet | | ✓
Table 4. Comparative results of mainstream 2D models on the ACDC dataset. The best results are shown in bold. Higher DSC is better; lower HD95 is better. The input image size is uniformly 224.

Methods | Average DSC↑ | Average HD95↓ | RV DSC↑ | RV HD95↓ | Myo DSC↑ | Myo HD95↓ | LV DSC↑ | LV HD95↓
R50 U-Net [3] | 87.55 | - | 87.1 | - | 80.63 | - | 94.92 | -
R50 Att-Unet [48] | 86.75 | - | 87.58 | - | 79.2 | - | 93.47 | -
VIT CUP [13] | 81.45 | - | 81.46 | - | 70.71 | - | 92.18 | -
R50 VIT CUP [13] | 87.57 | - | 86.07 | - | 81.88 | - | 94.75 | -
SwinUNet [6] | 90.00 | - | 88.55 | - | 85.62 | - | 95.83 | -
MISSFormer [49] | 87.90 | - | 86.36 | - | 85.75 | - | 91.59 | -
UNETR [7] | 87.38 | - | 88.49 | - | 82.04 | - | 91.62 | -
CTC-Net [41] | 90.77 | - | 90.09 | - | 85.52 | - | 96.72 | -
TransUNet | 89.59 | 6.91 | 88.54 | 18.11 | 86.13 | 1.36 | 94.12 | 1.28
Ours 1 | 89.49 | 6.50 | 88.16 | 16.91 | 86.29 | 1.32 | 94.02 | 1.26
Ours 2 | 90.44 | 5.30 | 90.02 | 13.34 | 86.51 | 1.34 | 94.80 | 1.22

1 represents the PTransUNet model. 2 represents the C-PTransUNet model. "-" indicates that no value was reported in the original paper.
Table 5. Comparative results of the baseline and enhanced models on the ACDC dataset. The best results are shown in bold. Higher values of Accuracy, F1 Score, Sensitivity, and Precision (in %) indicate better performance. The input image size is uniformly 224.

Methods | Average Sens↑ | Average Prec↑ | RV Sens↑ | RV Prec↑ | Myo Sens↑ | Myo Prec↑ | LV Sens↑ | LV Prec↑
TransUNet | 87.01 | 87.16 | 82.52 | 79.58 | 85.23 | 86.48 | 93.29 | 95.43
Ours 1 | 86.64 | 87.19 | 81.84 | 79.21 | 85.29 | 86.72 | 92.79 | 95.66
Ours 2 | 87.90 | 86.84 | 83.76 | 79.05 | 85.89 | 86.64 | 94.08 | 94.82

Methods | Average Acc↑ | Average F1-S↑ | RV Acc↑ | RV F1-S↑ | Myo Acc↑ | Myo F1-S↑ | LV Acc↑ | LV F1-S↑
TransUNet | 99.80 | 86.71 | 99.79 | 80.37 | 99.73 | 85.65 | 99.89 | 94.13
Ours 1 | 99.80 | 86.61 | 99.79 | 79.99 | 99.73 | 85.82 | 99.89 | 94.02
Ours 2 | 99.80 | 87.08 | 99.79 | 80.88 | 99.73 | 86.04 | 99.89 | 94.32

1 represents the PTransUNet model. 2 represents the C-PTransUNet model. Acc, F1-S, Sens, and Prec denote Accuracy, F1 Score, Sensitivity, and Precision, respectively.
Table 6. Impact of parallelism on the performance of PTransUNet segmentation models.

Model | Size | Branches | Layer | DSC↑ | HD↓ | Params (M) | FLOPs (G) | Train Time
PT/6 × 2 | 224 | 2 | 6 | 78.35 | 26.38 | 88.91 | 24.73 | 1:43:03
PT/6 × 2 | 320 | 2 | 6 | 81.88 | 30.17 | 88.91 | 50.5 | 3:33:34
PT/6 × 2 | 512 | 2 | 6 | 82.85 | 28.12 | 88.91 | 129.56 | 10:23:42
PT/4 × 3 | 224 | 3 | 4 | 78.9 | 26.53 | 88.91 | 24.73 | 1:42:33
PT/4 × 3 | 320 | 3 | 4 | 81.91 | 25.33 | 88.91 | 50.5 | 3:30:20
PT/4 × 3 | 512 | 3 | 4 | 81.84 | 33.25 | 88.91 | 129.56 | 10:23:45
PT/3 × 4 | 224 | 4 | 3 | 78.49 | 25.28 | 88.91 | 24.73 | 1:42:39
PT/3 × 4 | 320 | 4 | 3 | 80.96 | 29.87 | 88.91 | 50.5 | 3:28:56
PT/3 × 4 | 512 | 4 | 3 | 74.39 | 27.48 | 88.91 | 129.56 | 10:28:56
Table 7. Impact of parallelism on the performance of C-PTransUNet segmentation models.

Model | Size | Branches | Layer | DSC↑ | HD↓ | Params (M) | FLOPs (G) | Train Time
C-PT/6 × 2 | 224 | 2 | 6 | 78.87 | 24.51 | 55.17 | 17.8 | 1:26:59
C-PT/6 × 2 | 320 | 2 | 6 | 81.2 | 28.86 | 55.17 | 36.37 | 2:58:21
C-PT/6 × 2 | 512 | 2 | 6 | 80.24 | 26.07 | 55.17 | 93.37 | 8:58:56
C-PT/4 × 3 | 224 | 3 | 4 | 79.55 | 26.97 | 55.16 | 17.8 | 1:25:38
C-PT/4 × 3 | 320 | 3 | 4 | 79.03 | 38.38 | 55.16 | 36.35 | 2:57:19
C-PT/4 × 3 | 512 | 3 | 4 | 75.12 | 18.17 | 55.16 | 93.34 | 8:58:51
C-PT/3 × 4 | 224 | 4 | 3 | 79.67 | 25.02 | 31.48 | 17.8 | 1:26:02
C-PT/3 × 4 | 320 | 4 | 3 | 80.58 | 33.51 | 31.48 | 36.36 | 2:57:53
C-PT/3 × 4 | 512 | 4 | 3 | 82.63 | 26.11 | 31.48 | 93.37 | 8:58:54
Table 8. The impact of depth on the performance of PTransUNet segmentation models.

Model | Size | Branches | Layer | DSC↑ | HD↓ | Params (M) | FLOPs (G) | Train Time
PT | 224 | 2 | 5 | 77.31 | 31.52 | 75.39 | 21.95 | 1:28:56
PT | 224 | 2 | 6 | 78.35 | 26.38 | 88.91 | 24.73 | 1:43:03
PT | 224 | 2 | 7 | 79.18 | 27.82 | 102.43 | 27.51 | 1:51:51
PT | 320 | 2 | 5 | 81.11 | 27.63 | 75.39 | 44.82 | 3:07:48
PT | 320 | 2 | 6 | 81.88 | 30.17 | 88.91 | 50.5 | 3:33:34
PT | 320 | 2 | 7 | 81.37 | 23.58 | 102.43 | 56.18 | 3:51:43
PT | 512 | 2 | 5 | 80.82 | 37.47 | 75.39 | 114.97 | 9:09:44
PT | 512 | 2 | 6 | 82.85 | 28.12 | 88.91 | 129.56 | 10:23:42
PT | 512 | 2 | 7 | 73.66 | 27.44 | 102.43 | 144.14 | 11:37:46
Table 9. The impact of depth on the performance of C-PTransUNet segmentation models.

Model | Size | Branches | Layer | DSC↑ | HD↓ | Params (M) | FLOPs (G) | Train Time
C-PT/6 × 2 | 224 | 2 | 5 | 78.78 | 30.36 | 47.28 | 16.18 | 1:19:27
C-PT/6 × 2 | 224 | 2 | 6 | 78.87 | 24.51 | 55.17 | 17.8 | 1:26:59
C-PT/6 × 2 | 224 | 2 | 7 | 80.73 | 21.15 | 63.07 | 19.43 | 1:34:25
C-PT/6 × 2 | 224 | 2 | 8 | 79.17 | 27.52 | 70.96 | 21.05 | 1:41:10
C-PT/6 × 2 | 224 | 2 | 9 | 78.59 | 28.78 | 78.86 | 22.68 | 1:48:22
C-PT/4 × 3 | 320 | 2 | 5 | 80.47 | 31.62 | 47.28 | 33.04 | 2:41:18
C-PT/4 × 3 | 320 | 2 | 6 | 81.2 | 28.86 | 55.17 | 36.37 | 2:58:21
C-PT/4 × 3 | 320 | 2 | 7 | 81.26 | 29.43 | 63.07 | 39.69 | 3:14:48
C-PT/4 × 3 | 320 | 2 | 8 | 82.66 | 21.11 | 70.96 | 43.01 | 3:36:37
C-PT/4 × 3 | 320 | 2 | 9 | 80.51 | 25.91 | 78.86 | 46.34 | 3:52:46
C-PT/3 × 4 | 512 | 2 | 5 | 80.28 | 28.27 | 47.28 | 84.82 | 8:05:24
C-PT/3 × 4 | 512 | 2 | 6 | 80.24 | 26.07 | 55.17 | 93.37 | 8:58:56
C-PT/3 × 4 | 512 | 2 | 7 | 81.85 | 22.32 | 63.07 | 101.93 | 9:52:13
C-PT/3 × 4 | 512 | 2 | 8 | 81.52 | 23.21 | 70.96 | 110.48 | 10:47:44
C-PT/3 × 4 | 512 | 2 | 9 | 82.1 | 24.89 | 78.86 | 119.03 | 11:44:54
Table 10. The impact of the DD layer on the performance of C-PTransUNet segmentation models.

Model | DSC↑ | HD↓ | DD Layer | MLP | Block Layer | Params (M) | FLOPs (G)
C-PT/0DD | 77.77 | 29.93 | 0 | f | 6 | 34.89 | 13.64
C-PT/0DD | 77.8 | 29.03 | 0 | f | 7 | 39.41 | 14.57
C-PT/0DD | 77.47 | 33.91 | 0 | f | 8 | 43.93 | 15.51
C-PT/1DD | 78.87 | 24.51 | +1 | f | 6 | 55.17 | 17.8
C-PT/1DD | 80.73 | 21.15 | +1 | f | 7 | 63.07 | 19.43
C-PT/1DD | 79.17 | 27.52 | +1 | f | 8 | 70.96 | 21.05
C-PT/2DD | 77.29 | 28.64 | +2 | f | 6 | 75.45 | 21.96
C-PT/2DD | 78.69 | 28.57 | +2 | f | 7 | 86.72 | 24.28
C-PT/2DD | 78.24 | 30.76 | +2 | f | 8 | 97.99 | 26.61

Note: 0 indicates that no DD layer was used, +1 that one DD layer was used, +2 that two DD layers were used, and f that only the activation function was used. DD Layer and MLP are the components of the C-PT block.
Table 11. The impact of the MLP layer on the performance of C-PTransUNet segmentation models.

Model | DSC↑ | HD↓ | DD Layer | MLP | Block Layer | Params (M) | FLOPs (G)
C-PT/0MLP | 77.66 | 28.23 | +1 | 0 | 6 | 55.16 | 17.8
C-PT/0MLP | 77.48 | 35.41 | +1 | 0 | 7 | 63.06 | 19.42
C-PT/0MLP | 78.2 | 27.68 | +1 | 0 | 8 | 70.95 | 21.05
C-PT/1MLP | 77.31 | 28.88 | +1 | +1 | 6 | 82.19 | 23.35
C-PT/1MLP | 77.82 | 27.72 | +1 | +1 | 7 | 94.59 | 25.9
C-PT/1MLP | 78.05 | 27.6 | +1 | +1 | 8 | 106.99 | 24.45
C-PT/2MLP | 78.17 | 31.38 | +1 | +2 | 6 | 109.22 | 28.9
C-PT/2MLP | 79.08 | 28.14 | +1 | +2 | 7 | 126.13 | 32.38
C-PT/2MLP | 78.25 | 29.54 | +1 | +2 | 8 | 143.03 | 35.86

Note: One DD layer was used in all experiments, while the number of MLP layers was varied among 0, 1, and 2.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
