Article

NSBR-Net: A Novel Noise Suppression and Boundary Refinement Network for Breast Tumor Segmentation in Ultrasound Images

College of Computer Engineering, Jimei University, Xiamen 361021, China
* Authors to whom correspondence should be addressed.
Algorithms 2024, 17(6), 257; https://doi.org/10.3390/a17060257
Submission received: 11 May 2024 / Revised: 4 June 2024 / Accepted: 7 June 2024 / Published: 12 June 2024

Abstract

Breast tumor segmentation of ultrasound images provides valuable tumor information for early detection and diagnosis. However, speckle noise and blurred boundaries in breast ultrasound images present challenges for tumor segmentation, especially for malignant tumors with irregular shapes. Recent vision transformers have shown promising performance in handling the variation through global context modeling. Nevertheless, they are often dominated by features of large patterns and lack the ability to recognize negative information in ultrasound images, which leads to the loss of breast tumor details (e.g., boundaries and small objects). In this paper, we propose a novel noise suppression and boundary refinement network, NSBR-Net, to simultaneously alleviate speckle noise interference and blurred boundary problems of breast tumor segmentation. Specifically, we propose two innovative designs, namely, the Noise Suppression Module (NSM) and the Boundary Refinement Module (BRM). The NSM filters noise information from the coarse-grained feature maps, while the BRM progressively refines the boundaries of significant lesion objects. Our method demonstrates superior accuracy over state-of-the-art deep learning models, achieving significant improvements of 3.67% on Dataset B and 2.30% on the BUSI dataset in mDice for testing malignant tumors.

1. Introduction

Breast tumors are a prevalent health concern for women that significantly impacts their well-being and lives. As a result, regular breast screening and diagnosis play a crucial role in formulating effective treatment plans and improving survival rates. Due to the flexibility and convenience of ultrasound imaging, it has become a conventional modality for breast tumor screening. In recent years, many deep learning methods based on ultrasound images have been proposed for breast tumor segmentation. However, complex ultrasound patterns continue to pose the following challenges: (1) blurred boundaries caused by low contrast between the foreground and background, and (2) segmentation disruption due to speckle noise (as illustrated in Figure 1).
The impressive non-linear learning capability of deep networks has led to significant successes in medical image segmentation with the Fully Convolutional Network (FCN) and U-Net [1,2]. Motivated by this, many deep learning approaches have emerged for segmenting breast tumors from ultrasound images. In 2018, Almajalid et al. [3] were the first to systematically evaluate the impact of different FCN variants on breast tumor segmentation and achieved segmentation results that outperformed traditional methods. AAU-Net [4] integrates a hybrid adaptive attention module instead of the conventional convolution block, enhancing feature extraction across diverse receptive fields. NU-Net [5] utilizes sub-networks of varying depths with shared weights to attain robust representations of breast tumors.
The transformer has garnered scholarly attention for its attention mechanism and complete elimination of convolution. Subsequent investigations [6,7,8] have explored the integration of transformer structures in image recognition. Notably, ViT [9] stands out as the pioneering work to apply a pure transformer to image classification, substantiating the viability of transformer architectures for computer vision tasks. In the realm of medical image segmentation, the efficacy of vision transformers has been substantiated by PVT-CASCADE [10] and DuAT [11]. DuAT proposes a Dual-Aggregation Transformer Network to address the challenge of capturing both global and local spatial features, while PVT-CASCADE introduces an innovative attention-based decoder leveraging the multi-stage feature representation of the vision transformer. These advancements underscore the transformative impact of transformer architectures in the medical image field.
To further address the issue of blurred tumor boundaries, two optimization strategies have been widely used: expanding the receptive field and applying attention mechanisms. The dilated convolution operation is a commonly used strategy to expand the receptive field. For example, Hu et al. [12] obtained a large receptive field for breast tumors by using dilated convolutions in deeper network layers. In terms of attention mechanisms, Lee et al. [13] proposed a channel attention module to further improve the performance of U-Net for breast tumor segmentation. Yan et al. [14] proposed an attention-enhanced U-Net with hybrid dilated convolution, merging dilated convolutions with an attention mechanism. Although progress has been made by these methods, the optimization paradigm from fine to coarse granularity struggles to capture prominent object regions in deeper convolutional layers, where object regions and boundaries stand as two crucial features distinguishing normal tissue from breast tumors. Thus, we propose an iteratively enhanced Boundary Refinement Module (BRM) based on a global map, emphasizing a pattern from coarse to fine granularity. Our motivation arises from clinical practice, where clinicians initially approximate the location of a breast tumor and then meticulously extract its silhouette mask based on local features. In the NSBR-Net model, we adopt a two-step approach: first predicting the coarse region and then implicitly modeling the boundaries using axial reverse attention. This strategy offers two advantages: better learning ability and improved generalization capability.
In ultrasound imaging, the inherent nature of speckle noise tends to degrade image quality and complicates the distinction between breast tissue and noise artifacts [15], making accurate tumor detection more challenging. Moreover, speckle noise significantly impacts segmentation accuracy by propagating across various convolutional layers at different scales. Current methods primarily leverage the concept of deep supervision to develop refined networks [16], exploring neighboring decisions to correct potential errors induced by speckle noise. However, we propose addressing noise influence from a more fundamental perspective by introducing “frequency”. In an intriguing experiment, we examined the network’s performance variation when eliminating high-frequency information (detail and noise) [17] in deeper layers. We used the mainstream method UNet [18] to evaluate the impact of high frequencies on breast tumor segmentation in the BUSI testset (ultrasound images, including both benign and malignant breast tumors) [19]. Building upon [20], we employed multiple pooling operations on the last two stages of the UNet architecture to filter out the high-frequency information and keep only the low-frequency information. As shown in Figure 2, we observed a substantial improvement in model performance when the network solely contained low-frequency information, indicating that speckle noise within high-frequency information disrupts spatial consistency. To address this phenomenon, we introduced a Noise Suppression Module, decoupling high- and low-frequency information in feature maps and denoising the high-frequency components with a Gaussian filter. While following prior works’ principles, NSBR-Net also incorporates a deep supervision mechanism.
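For readers who wish to reproduce this observation, the following is a minimal sketch (not the authors' code) of how average pooling followed by upsampling acts as a low-pass filter on a deep feature map; the tensor shape and pooling size are illustrative assumptions.

import torch
import torch.nn.functional as F

def keep_low_frequency(feat, pool_size=4):
    # Average pooling acts as a low-pass filter; upsampling restores the original resolution.
    h, w = feat.shape[-2:]
    low = F.adaptive_avg_pool2d(feat, (max(h // pool_size, 1), max(w // pool_size, 1)))
    return F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)

# Example: keep only the low-frequency content of a hypothetical last-stage feature map
deep_feat = torch.randn(1, 512, 22, 22)
low_only = keep_low_frequency(deep_feat)      # smooth, low-frequency part
high_residual = deep_feat - low_only          # discarded detail and speckle noise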
Our method, built upon a transformer-based encoder, incorporates BRM and NSM for breast tumor segmentation in ultrasound images. Its efficacy was validated through extensive experiments on breast ultrasound datasets, which demonstrated significant improvements over existing methods. Our contributions include the following:
  • We present a novel breast tumor segmentation framework, termed NSBR-Net. Unlike existing CNN-based methods, we adopt the pyramid vision transformer as an encoder to extract more robust features.
  • To support our framework, we introduce two simple modules. Specifically, NSM is utilized to suppress speckle noise within high-frequency information, while BRM performs boundary refinement based on coarse regions.
  • Comparative experiments against leading medical image segmentation models demonstrate the superior efficacy of our method on breast ultrasound datasets.
This paper is structured as follows: Section 2 outlines the dataset, method (including NSM and BRM), loss function, and experimental settings (including evaluation protocols and implementation details). Section 3 presents a qualitative and quantitative comparison of different methods and their corresponding analyses. Section 4 discusses the results, limitations, and future research directions, while Section 5 is dedicated to the conclusion.

2. Materials and Methods

2.1. Datasets

We conducted experiments on two widely used public breast ultrasound datasets, the BUSI dataset [19] and Dataset B [21]. The BUSI dataset contains 780 images acquired by two types of ultrasound equipment (LOGIQ E9 ultrasound and LOGIQ E9 Agile ultrasound system) in Baheya Hospital. The average image size is 500 × 500 pixels. Radiologists manually segmented the boundaries of breast lesions in each ultrasound image using Matlab. Dataset B contains 163 breast US images from different women, which were scanned with a Siemens ACUSON Sequoia C512 system 17L5 HD linear array transducer (8.5 MHz) at the UDIAT Diagnostic Centre of the Parc Taulí Corporation, Sabadell (Spain), in 2012. In the ground truth (GT) images, the boundaries of breast lesions were delineated by professional radiologists. In addition, the STU dataset [37] comprises 42 breast ultrasound (BUS) images with an average size of 128 × 128 pixels. These images were obtained by the Imaging Department of the First Affiliated Hospital of Shantou University using a GE Voluson E10 ultrasonic diagnostic system. Due to the limited number of images in the STU dataset, it was solely employed as external validation data to assess the segmentation network's generalization performance.

2.2. Method

An overview of NSBR-Net is shown in Figure 3. Upon inputting an ultrasound image, we initially extracted four levels of feature maps of various scales sequentially utilizing the pyramid vision transformer (PVT) block [22]. We input the feature maps from the last three stages into NSM individually for the suppression of speckle noise, followed by utilizing a parallel partial decoder (PD) [23] to generate high-level semantic global maps. Lastly, a set of reverse axial attention mechanisms was employed to refine the tumor boundaries progressively. Detailed expositions of NSM and BRM are presented as follows. The overall pseudocode of our method is presented in Algorithm 1.
Algorithm 1 Pseudocode of NSBR-Net in a Pytorch-like Style
# Operator: BI, bilinear interpolation
# Pass the input image through the PVT backbone: four feature stages, shallow to deep
pvt = backbone(x)                                    # pvt[0], ..., pvt[3]
# Translayer: apply convolution, batch normalization, and ReLU to each stage
pvt_transformed = [Translayer(pvt[i]) for i in range(4)]
# NSM modules: suppress speckle noise in the three high-level stages
nsm_outputs = [NSM(f) for f in pvt_transformed[1:]]
# Partial decoder (PD): aggregate the denoised high-level features into a global map
pd_output = PD(*nsm_outputs)
# BRM modules: refine boundaries progressively from the deepest stage to the shallowest
global_maps, stage_outputs = {}, {}
global_maps[3] = BI(pd_output)
stage_outputs[3] = BRM(pvt_transformed[3], global_maps[3]) + global_maps[3]
for i in range(2, -1, -1):
    global_maps[i] = BI(stage_outputs[i + 1])
    stage_outputs[i] = BRM(pvt_transformed[i], global_maps[i]) + global_maps[i]
# Deep supervision outputs: the global map and the four refined stage maps, upsampled to input size
predictions = [BI(pd_output)] + [BI(stage_outputs[i]) for i in range(4)]
return predictions

2.2.1. Noise Suppression Module

Speckle noise is a complex physical characteristic observed in ultrasound images that frequently poses challenges in accurate object localization. The adoption of frequency representation presents a novel approach to discerning differences between categories, potentially revealing insights overlooked by human visual perception. To mitigate this, we propose a Noise Suppression Module (NSM) that considers speckle noise suppression from a frequency perspective, as illustrated in Figure 4.
Low-pass filter (LPF). Low-frequency components occupy most of the energy in an image and represent most of the semantic information. A low-pass filter allows signals below the cutoff frequency to pass, while signals above the cutoff frequency are blocked. Thus, we employed typical average pooling as a low-pass filter. However, the cutoff frequencies of different images are different. To adapt to this, we utilized channel splitting [20], where the feature map is partitioned into multiple groups. This allowed us to apply different kernels and strides to each group, thereby generating low-pass filters with different cutoff frequencies. For the mth group, we have
$\mathrm{LPF}_m(v_m) = \mathrm{Up}\big(\Gamma_{s \times s}(v_m)\big),$
where Up(·) represents upsampling, Γ_{s×s} denotes adaptive average pooling with an output size of s × s, s ∈ {1, 2, 3, 6}, and v_m stands for the mth channel split from the input feature map, m ∈ {1, 2, 3, 4}.
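The group-wise low-pass filter can be sketched in a few lines of PyTorch. In the sketch below, a channel count divisible by four and bilinear upsampling for Up(·) are assumptions, since the paper does not specify them.

import torch
import torch.nn.functional as F

def low_pass_filter(x, sizes=(1, 2, 3, 6)):
    # Split channels into len(sizes) groups (v_m), pool each group to s x s (Gamma_{s x s}),
    # then upsample back to the input resolution (Up) and concatenate the groups.
    h, w = x.shape[-2:]
    groups = torch.chunk(x, chunks=len(sizes), dim=1)
    outs = []
    for v_m, s in zip(groups, sizes):
        pooled = F.adaptive_avg_pool2d(v_m, output_size=s)
        outs.append(F.interpolate(pooled, size=(h, w), mode="bilinear", align_corners=False))
    return torch.cat(outs, dim=1)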
High-pass filter (HPF). High-frequency information is crucial for preserving details in segmentation. As a typical high-pass operator, convolution can filter out irrelevant low-frequency redundant components and retain favorable high-frequency components. The high-frequency components determine the perceived sharpness of the image, and the cutoff frequency of the high-pass filter differs for each image. Similar to the LPF, we partitioned the feature map into multiple groups. For each group, we used a convolution layer with a different kernel to simulate the cutoff frequencies of different high-pass filters. For the nth group, we have
$\mathrm{HPF}_n(v_n) = \Lambda_{k \times k}(v_n),$
where Λ_{k×k} denotes a depthwise convolution layer with a kernel size of k × k, k ∈ {1, 3, 5, 7}, and v_n stands for the nth channel split from the input feature map, n ∈ {1, 2, 3, 4}. The continuous accumulation of speckle noise within the internal high frequencies often yields adverse effects on the extracted high-frequency information. Therefore, we employed Gaussian filtering on the high-frequency features to effectively eliminate noise.
$W(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}},$
$G(x, y) = \sum_{i=0}^{k}\sum_{j=0}^{k} W(i, j) \cdot I(x + i, y + j),$
where W(x, y) represents the Gaussian weight matrix and σ is the standard deviation of the Gaussian function. G(x, y) represents the filtered value at the spatial coordinates (x, y), k stands for the window size of the Gaussian filter, and I represents the high-frequency feature produced by the high-pass filter. The final output, F_NSM, is obtained by summing the denoised high-frequency information with the low-frequency information:
$F_{NSM} = G\big(\big[\mathrm{HPF}_n(v_n)\big]\big) + \big[\mathrm{LPF}_m(v_m)\big],$
where [·] indicates the concatenation operation, which integrates feature maps of different cutoff frequencies.
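A corresponding sketch of the high-pass branch and the Gaussian denoising step is given below. The kernel size and standard deviation of the Gaussian filter are assumptions, and low_pass_filter refers to the earlier LPF sketch.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HighPassFilter(nn.Module):
    # Group-wise high-pass filter: depthwise convolutions with kernel sizes {1, 3, 5, 7} (Lambda_{k x k}).
    def __init__(self, channels, kernels=(1, 3, 5, 7)):
        super().__init__()
        group_ch = channels // len(kernels)
        self.convs = nn.ModuleList(
            nn.Conv2d(group_ch, group_ch, k, padding=k // 2, groups=group_ch) for k in kernels
        )

    def forward(self, x):
        groups = torch.chunk(x, chunks=len(self.convs), dim=1)
        return torch.cat([conv(v_n) for conv, v_n in zip(self.convs, groups)], dim=1)

def gaussian_blur(x, k=3, sigma=1.0):
    # Depthwise Gaussian filtering used to denoise the high-frequency features (W and G above).
    coords = torch.arange(k, dtype=x.dtype, device=x.device) - (k - 1) / 2
    g1d = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    kernel = torch.outer(g1d, g1d)
    kernel = (kernel / kernel.sum()).view(1, 1, k, k).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, kernel, padding=k // 2, groups=x.shape[1])

def nsm(x, hpf):
    # F_NSM: denoised high-frequency part plus low-frequency part (low_pass_filter from the sketch above).
    return gaussian_blur(hpf(x)) + low_pass_filter(x)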

2.2.2. Boundary Refinement Module

As discussed above, our global map S_g was derived from the deepest segment of the network via the partial decoder (PD), which can only capture a relatively rough location of the breast tumor, without structural details. To address this issue, we propose a Boundary Refinement Module (BRM) to progressively mine discriminative breast tumor regions by erasing foreground objects, as illustrated in Figure 5. Our method can sequentially mine complementary regions and details by erasing the existing estimated tumor regions from high-level side-output features, where the existing estimation is upsampled from the deeper layer. Simultaneously, we introduce axial attention [24] for further saliency analysis of the intermediate features F_m^i, i ∈ {1, 2, 3, 4}, which can be represented by the following equation:
$F_{Axial} = \mathrm{Attention}_{axial}(F_m^i).$
This consideration primarily addresses the complexity of ultrasound images, requiring increased focus on the object regions. The reverse attention weight W_Reverse is a de facto technique for salient object detection in the computer vision community [23,25] and can be formulated as
$W_{Reverse} = \Theta\big(\sigma(\mathrm{Up}(S_g))\big),$
where Up(·) denotes an upsampling operation, σ(·) is the sigmoid function, and Θ(·) is a reverse operation that subtracts the input from a matrix E in which all elements are 1. It is worth noting that the erasing strategy driven by reverse attention can eventually refine the imprecise and coarse estimation into an accurate and complete prediction map. Finally, we obtain the output boundary refinement features F_BRM by multiplying the axial attention output feature F_Axial by the reverse attention weight W_Reverse, as below:
$F_{BRM} = W_{Reverse} \times F_{Axial}.$
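The BRM computation can be sketched as follows. The axial-attention block here is a simplified stand-in built from standard multi-head attention applied along the height and then the width axis, and the head count and bilinear interpolation for Up(·) are assumptions; the authors' exact axial-attention design may differ.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleAxialAttention(nn.Module):
    # Simplified axial attention: self-attention along the height axis, then along the width axis.
    def __init__(self, channels, heads=4):
        super().__init__()
        self.h_attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.w_attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.permute(0, 3, 2, 1).reshape(b * w, h, c)      # columns as sequences
        seq, _ = self.h_attn(seq, seq, seq)
        x = seq.reshape(b, w, h, c).permute(0, 3, 2, 1)
        seq = x.permute(0, 2, 3, 1).reshape(b * h, w, c)      # rows as sequences
        seq, _ = self.w_attn(seq, seq, seq)
        return seq.reshape(b, h, w, c).permute(0, 3, 1, 2)

def brm(feature, global_map, axial):
    # F_BRM = W_Reverse * F_Axial, with W_Reverse = 1 - sigmoid(Up(S_g)).
    f_axial = axial(feature)
    up = F.interpolate(global_map, size=feature.shape[-2:], mode="bilinear", align_corners=False)
    w_reverse = 1.0 - torch.sigmoid(up)
    return w_reverse * f_axial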

2.3. Loss Function

Our loss function comprises the summation of weighted intersection over union (IoU) losses and weighted binary cross entropy (BCE) losses across multiple output layers, represented by the following equation:
$L(S, G) = L_{IoU}^{w}(S, G) + L_{BCE}^{w}(S, G),$
where S stands for the side-output, G represents the ground truth, and L_IoU^w(S, G) and L_BCE^w(S, G) denote the weighted IoU loss [26] and the weighted binary cross entropy (BCE) loss [27], respectively. Thus, the total loss for the proposed method can be formulated as
$L_{total} = L(S_g, G) + \sum_{i=1}^{4} L(S_i, G),$
where S_g is the global map and S_i represents the side-outputs of different scales.
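The combined loss can be written compactly in PyTorch. The sketch below assumes the boundary-aware pixel weighting popularized by PraNet-style implementations of the weighted BCE and weighted IoU losses cited above; the exact weighting used by the authors is an assumption, and predictions are expected as logits (pre-sigmoid).

import torch
import torch.nn.functional as F

def structure_loss(pred, mask):
    # L(S, G): weighted BCE + weighted IoU; pixels near the mask boundary get larger weights.
    weit = 1 + 5 * torch.abs(F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(pred, mask, reduction="none")
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    pred = torch.sigmoid(pred)
    inter = ((pred * mask) * weit).sum(dim=(2, 3))
    union = ((pred + mask) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)
    return (wbce + wiou).mean()

def total_loss(global_map, side_outputs, mask):
    # L_total = L(S_g, G) + sum over the four side outputs L(S_i, G).
    return structure_loss(global_map, mask) + sum(structure_loss(s, mask) for s in side_outputs)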

2.4. Experimental Settings

2.4.1. Evaluation Protocols

For quantitative comparison, we report three widely used metrics, including the mean Dice coefficient (mDice), mean intersection over union (mIoU), and mean absolute error (MAE). mDice and mIoU focus on the internal consistency of objects, while MAE represents the average value of the absolute error between the prediction and ground truth.
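For reference, a minimal sketch of the three metrics on a single image is given below; averaging over the test set yields mDice, mIoU, and MAE. Thresholding the prediction at 0.5 before computing Dice and IoU is an assumption.

import numpy as np

def dice_iou_mae(pred, gt, eps=1e-8):
    # pred: probability map in [0, 1]; gt: binary ground-truth mask.
    mae = np.abs(pred - gt).mean()
    pred_bin = (pred >= 0.5).astype(np.float64)
    inter = (pred_bin * gt).sum()
    dice = (2 * inter + eps) / (pred_bin.sum() + gt.sum() + eps)
    iou = (inter + eps) / (pred_bin.sum() + gt.sum() - inter + eps)
    return dice, iou, mae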

2.4.2. Implementation Details

We utilized a PVT [22] model pre-trained on ImageNet [28] as the backbone and conducted end-to-end training employing the AdamW optimizer [29]. The initial learning rate and the weight decay were both set to 1 × 10⁻⁴. Further, we resized the input images to 352 × 352 with a mini-batch size of 8 for 100 epochs. Given the diverse scales of objects in medical imaging, multi-scale training was adopted following previous work [30]. For AAU-Net, the results were adopted from [4]; for all other comparative models, we obtained the results by running their open-source code under the same experimental settings. All experiments were carried out using PyTorch [31] on a single NVIDIA GeForce RTX 3060 GPU with 12 GB of memory. Regarding speckle noise in ultrasound images, we did not apply any denoising preprocessing to the dataset images; noise suppression was handled solely by our method.
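The training settings above can be summarized in a short loop. This is a sketch, not the released training script: `model` and `train_loader` are placeholders, `total_loss` reuses the loss sketch from Section 2.3, and the multi-scale rates follow Polyp-PVT [30] as an assumption.

import torch
import torch.nn.functional as F

def train(model, train_loader, epochs=100, image_size=352):
    # AdamW, lr = 1e-4, weight decay = 1e-4, 352 x 352 inputs, 100 epochs, mini-batch size 8.
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-4)
    scale_rates = (0.75, 1.0, 1.25)                   # multi-scale training (assumed rates)
    for _ in range(epochs):
        for images, masks in train_loader:
            for rate in scale_rates:
                size = int(round(image_size * rate / 32) * 32)   # keep sizes divisible by 32
                imgs = F.interpolate(images, size=(size, size), mode="bilinear", align_corners=False)
                gts = F.interpolate(masks, size=(size, size), mode="bilinear", align_corners=False)
                preds = model(imgs)                   # global map followed by four side outputs
                loss = total_loss(preds[0], preds[1:], gts)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()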

3. Results

3.1. Comparison with State-of-the-Art Methods

The quantitative evaluation results on the BUSI dataset are presented in Table 1. For breast tumors with indistinct boundaries, NSBR-Net achieves consistent improvements over the baseline models in terms of mDice, mIoU, and MAE. In clinical diagnosis, the segmentation and localization of malignant tumors are paramount. For malignant tumors, these metrics are improved by 2.30%, 2.39%, and 0.23%, respectively, compared to the second-best results. Compared to AAU-Net [4] in terms of mDice, our model achieves a 5.32% improvement on malignant tumors.
We present the quantitative performance of our model on Dataset B in Table 2. Our model achieves good performance across the entire dataset, with an additional improvement of 3.67% in mDice specifically in the testing of malignant tumors compared to the transformer-based method DuAT [11].
The visualization of our method and comparative methods on the BUSI dataset and Dataset B is shown in Figure 6. Our method has the best performance in segmenting tumors. For instance, in the case of blurred tumor boundaries (first row and third row), other methods exhibit significant instances of false negatives, while our method addresses this issue well. Specifically, the reason lies in the BRM’s ability to extract complementary regions and details of tumors and enhance the saliency analysis of intermediate features through the attention mechanism, enabling increased focus on object regions. Similarly, under speckle noise (second row and fourth row) conditions, NSBR-Net exhibits no issues with false positives.

3.2. Robustness Analysis

External Validation. Due to variations between different datasets, a model may perform well on the training dataset but not generalize effectively to external data. To further assess the robustness of the proposed method in this paper, we utilized STU [37] as an external dataset to evaluate models trained on Dataset B [21]. As demonstrated in Table 3, our method outperforms others on three evaluation metrics in the external validation dataset. Compared to the second-best results, these metrics show improvements of 1.38%, 2.06%, and 0.39%, respectively. These findings suggest that our method exhibits insensitivity to input data variations and has good generalization capabilities.

3.3. Ablation Study

Effectiveness of Different Network Components. In Table 4, we employ a transformer-based encoder combined with a partial decoder (PD) as our baseline. Note that our partial decoder is only deployed on the high-level features and achieves an mDice score of 78.79%. This not only showcases the effectiveness of the transformer encoder but also highlights a natural advantage over conventional CNN-based methods. We further investigate the contribution of the Noise Suppression Module. We observe that adding NSM improves the baseline performance, increasing the mDice score from 78.79% to 80.31%. This improvement suggests that introducing the NSM component enables our model to enhance the quality of global maps. We then verify the performance enhancement after integrating the Boundary Refinement Module, observing a noticeable improvement of 1.93% in the mDice score compared to the baseline. This substantiates that BRM enables our model to accurately distinguish breast tumors. Finally, by simultaneously integrating the two primary components, we achieve a performance boost of 3.04%. This indicates that the fusion of high-quality coarse-grained information with refined boundary recovery is crucial and indispensable for localizing breast tumors.
To further validate the boundary refinement capability of BRM, we present the visualization of ablative experiment results in Figure 7. We selected tumors of varying sizes for analysis, with the first and third rows representing small tumors and the second and fourth rows representing large tumors. It can be observed that regardless of the size of the tumor, the segmentation results of the model incorporating BRM (second and fourth columns) are better fitted compared to those without BRM (third column). This further confirms that BRM indeed refines the boundary segmentation results, enabling more accurate localization of lesion boundaries, which is crucial for clinical applications.
Quantitative Comparison of Variants of NSM. As reported in Table 5, we compare variants of the NSM module to demonstrate the impact of frequency information on speckle noise suppression. It is noteworthy that retaining only the low-pass filter within the NSM resulted in exceptional performance, surpassing an mDice score of 81%. However, introducing high-frequency information without denoising led to performance degradation, indicating that the cumulative noise error within the high-frequency data impaired the model's performance. Additionally, leveraging Gaussian filtering on top of the high-frequency filter effectively mitigated noise errors, preserving valuable high-frequency information and contributing to a 0.76% performance boost.
Quantitative Comparison of Backbone Architecture. As reported in Table 6, we present the performance of our proposed NSBR-Net method with different backbone architectures on the BUSI dataset. Specifically, we compare three different backbone architectures: UNet [18], Res2Net [38], and PVT (ours) [22], using three evaluation metrics: mDice, mIoU, and MAE. The results demonstrate the effectiveness of our method, showing adaptability and robustness across all backbones. Furthermore, the comparison indicates that the transformer-based encoder is capable of extracting more robust features, leading us to ultimately select PVT as the backbone.

4. Discussion

Breast cancer is a prevalent gynecological disease which poses a significant threat to women’s health. With the development of deep learning, intelligent analysis based on ultrasound imaging is increasingly widely used in clinical pre-screening and is becoming the mainstream trend. However, the segmentation of breast tumors suffers from the inherent limitations of ultrasound images, including the presence of speckle noise and the issue of indistinct boundaries in malignant tumors.
Speckle noise is inevitable in ultrasound images, causing strong interference during neural network training, thereby reducing the model’s generalization ability. Simultaneously, the blurred boundaries of malignant tumors also lead to decreased model accuracy, which is detrimental to intelligent diagnosis. To address these issues, we discussed how to effectively suppress speckle noise from a frequency perspective and designed a coarse-to-fine paradigm, namely NSBR-Net, which shows outstanding segmentation performance and brings new insights into ultrasound image analysis.
The proposed breast tumor segmentation model exhibited superior accuracy over competing algorithms, achieving mDice, mIoU, and MAE scores of 81.83%, 73.50%, and 3.55% on the BUSI dataset and 81.48%, 73.08%, and 1.74% on Dataset B, respectively. Compared to the state-of-the-art transformer-based method, there was a significant improvement of 3.67% on Dataset B and 2.30% on BUSI in mDice, in the testing of malignant tumors, which holds great significance for the clinical diagnosis of cancer. Moreover, as can be seen in Figure 6, NSBR-Net showed no issues of false positives, unlike other segmentation models, which are obviously affected by speckle noise. We attribute this capability to our NSM module, which filters out noise information from coarse-grained feature mappings while preserving detailed boundary information.
Considering the significant individual differences in breast tumors (shown in Figure 6), one of our future research directions will focus on developing appropriate data augmentation algorithms to expand the sample space and enhance the generalization capabilities of our model further. Since our model is designed for two-dimensional ultrasound images, another future direction is to extend the methodology to three-dimensional images. Furthermore, considering the requirement for real-time performance in clinical diagnostic assistance, enhancing the computational efficiency of the model is also a crucial direction for future research. Therefore, we plan to conduct in-depth evaluations and optimizations of the model’s computational performance in our future studies.
Compared to ultrasound, mammography is irreplaceable due to its ability to clearly detect tiny calcifications within breast tissue. Thus, mammography is also a commonly employed method for breast cancer screening, with a wealth of AI-related research conducted in this area, including machine learning methods [39] and deep learning methods [40]. Digital Breast Tomosynthesis (DBT), which significantly mitigates the issue of missed detections caused by overlapping breast fibroglandular tissue in mammography, has also emerged as a widely adopted new technology. By combining deep learning methods with other breast cancer screening techniques, our future endeavors aim to broaden the scope of our research, facilitating more comprehensive and accurate diagnostic tools for breast pathology assessment.

5. Conclusions

This paper proposed a novel noise suppression and boundary refinement network (NSBR-Net) for breast tumor segmentation, which utilizes a pyramid vision transformer backbone as the encoder to explicitly extract more powerful and robust features. The core idea is to suppress the cumulative high-frequency internal errors caused by speckle noise and optimize the boundary refinement process from a practical perspective. We validated NSBR-Net’s effectiveness through quantitative and qualitative comparisons with current cutting-edge models, demonstrating its superior accuracy overall. We anticipate that this research will inspire further innovative approaches to address the segmentation of breast tumors in ultrasound images.

Author Contributions

Conceptualization, Y.S. and Z.H.; methodology, Y.S., Z.H. and J.S.; software, Y.S.; validation, Z.H.; formal analysis, G.C.; investigation, Y.S. and Z.H.; resources, G.C., J.S. and Z.G.; data curation, Y.S.; writing—original draft preparation, Y.S. and Z.H.; writing—review and editing, Y.S., Z.H. and J.S.; visualization, J.S.; supervision, G.C., J.S. and Z.G.; project administration, Z.G.; funding acquisition, Z.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 42371457, Grant 41971424, Grant 42301468, and Grant 61902330; in part by the Key Project of the Natural Science Foundation of Fujian Province, China, under Grant 2022J02045; in part by the Natural Science Foundation of Fujian Province, China, under Grant 2022J01337, Grant 2022J01819, Grant 2023J01801, Grant 2023J01799, Grant 2022J05157, and Grant 2022J011394; in part by the Natural Science Foundation of Xiamen, China, under Grant 3502Z20227048 and Grant 3502Z20227049; and in part by the Startup Fund of Jimei University under Grant ZQ2022031.

Institutional Review Board Statement

Ethical review and approval were waived for this study because all data used in this study are from public datasets.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI        Artificial intelligence
NSBR-Net  Novel Noise Suppression and Boundary Refinement Network
NSM       Noise Suppression Module
BRM       Boundary Refinement Module
FCN       Fully Convolutional Network
CNN       Convolutional Neural Network
BUS       breast ultrasound
PD        partial decoder
HPF       high-pass filter
LPF       low-pass filter
IoU       intersection over union
BCE       binary cross entropy

References

  1. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar]
  2. Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. Unet 3+: A full-scale connected unet for medical image segmentation. In Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
  3. Almajalid, R.; Shan, J.; Du, Y.; Zhang, M. Development of a deep-learning-based method for breast ultrasound image segmentation. In Proceedings of the 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), Orlando, FL, USA, 17–20 December 2018; pp. 1103–1108. [Google Scholar]
  4. Chen, G.; Li, L.; Dai, Y.; Zhang, J.; Yap, M.H. AAU-Net: An Adaptive Attention U-Net for Breast Lesions Segmentation in Ultrasound Images. IEEE Trans. Med. Imaging 2023, 42, 1289–1300. [Google Scholar] [CrossRef] [PubMed]
  5. Chen, G.; Li, L.; Zhang, J.; Dai, Y. Rethinking the unpretentious U-net for medical ultrasound image segmentation. Pattern Recognit. 2023, 142, 109728. [Google Scholar] [CrossRef]
  6. Hu, H.; Zhang, Z.; Xie, Z.; Lin, S. Local relation networks for image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3464–3473. [Google Scholar]
  7. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
  8. Zhao, H.; Jia, J.; Koltun, V. Exploring self-attention for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 10076–10085. [Google Scholar]
  9. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  10. Rahman, M.M.; Marculescu, R. Medical image segmentation via cascaded attention decoding. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 6222–6231. [Google Scholar]
  11. Tang, F.; Xu, Z.; Huang, Q.; Wang, J.; Hou, X.; Su, J.; Liu, J. DuAT: Dual-aggregation transformer network for medical image segmentation. In Proceedings of the Chinese Conference on Pattern Recognition and Computer Vision (PRCV), Shenzhen, China, 14–17 October 2023; pp. 343–356. [Google Scholar]
  12. Hu, Y.; Guo, Y.; Wang, Y.; Yu, J.; Li, J.; Zhou, S.; Chang, C. Automatic tumor segmentation in breast ultrasound images using a dilated fully convolutional network combined with an active contour model. Med. Phys. 2019, 46, 215–228. [Google Scholar] [CrossRef] [PubMed]
  13. Lee, H.; Park, J.; Hwang, J.Y. Channel attention module with multiscale grid average pooling for breast cancer segmentation in an ultrasound image. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2020, 67, 1344–1353. [Google Scholar]
  14. Yan, Y.; Liu, Y.; Wu, Y.; Zhang, H.; Zhang, Y.; Meng, L. Accurate segmentation of breast tumors using AE U-net with HDC model in ultrasound images. Biomed. Signal Process. Control 2022, 72, 103299. [Google Scholar] [CrossRef]
  15. Maity, A.; Pattanaik, A.; Sagnika, S.; Pani, S. A comparative study on approaches to speckle noise reduction in images. In Proceedings of the 2015 International Conference on Computational Intelligence and Networks, Odisha, India, 12–13 January 2015; pp. 148–155. [Google Scholar]
  16. Qi, W.; Wu, H.; Chan, S. Mdf-net: A multi-scale dynamic fusion network for breast tumor segmentation of ultrasound images. IEEE Trans. Image Process. 2023, 32, 4842–4855. [Google Scholar] [CrossRef]
  17. Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief review of image denoising techniques. Vis. Comput. Ind. Biomed. Art 2019, 2, 7. [Google Scholar] [CrossRef]
  18. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  19. Al-Dhabyani, W.; Gomaa, M.; Khaled, H.; Fahmy, A. Dataset of breast ultrasound images. Data Brief 2020, 28, 104863. [Google Scholar] [CrossRef] [PubMed]
  20. Dong, B.; Wang, P.; Wang, F. Head-free lightweight semantic segmentation with linear transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 516–524. [Google Scholar]
  21. Yap, M.H.; Pons, G.; Marti, J.; Ganau, S.; Sentis, M.; Zwiggelaar, R.; Davison, A.K.; Marti, R. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J. Biomed. Health Inform. 2017, 22, 1218–1226. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 568–578. [Google Scholar]
  23. Fan, D.P.; Ji, G.P.; Zhou, T.; Chen, G.; Fu, H.; Shen, J.; Shao, L. Pranet: Parallel reverse attention network for polyp segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 263–273. [Google Scholar]
  24. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 6000–6010. [Google Scholar]
  25. Kim, T.; Lee, H.; Kim, D. Uacanet: Uncertainty augmented context attention for polyp segmentation. In Proceedings of the 29th ACM International Conference on Multimedia, Virtual Event, 20–24 October 2021; pp. 2167–2175. [Google Scholar]
  26. Máttyus, G.; Luo, W.; Urtasun, R. Deeproadmapper: Extracting road topology from aerial images. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 3438–3446. [Google Scholar]
  27. De Boer, P.T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67. [Google Scholar] [CrossRef]
  28. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  29. Loshchilov, I.; Hutter, F. Decoupled weight decay regularization. arXiv 2017, arXiv:1711.05101. [Google Scholar]
  30. Dong, B.; Wang, W.; Fan, D.P.; Li, J.; Fu, H.; Shao, L. Polyp-pvt: Polyp segmentation with pyramid vision transformers. arXiv 2021, arXiv:2108.06932. [Google Scholar] [CrossRef]
  31. Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. Pytorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32, 8026–8037. [Google Scholar]
  32. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
  33. Jha, D.; Riegler, M.A.; Johansen, D.; Halvorsen, P.; Johansen, H.D. Doubleu-net: A deep convolutional neural network for medical image segmentation. In Proceedings of the 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), Rochester, MN, USA, 28–30 July 2020; pp. 558–564. [Google Scholar]
  34. Wei, J.; Hu, Y.; Zhang, R.; Li, Z.; Zhou, S.K.; Cui, S. Shallow attention network for polyp segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021; pp. 699–708. [Google Scholar]
  35. Valanarasu, J.M.J.; Patel, V.M. Unext: Mlp-based rapid medical image segmentation network. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 23–33. [Google Scholar]
  36. Lou, A.; Guan, S.; Ko, H.; Loew, M.H. CaraNet: Context axial reverse attention network for segmentation of small medical objects. In Proceedings of the Medical Imaging 2022: Image Processing, San Diego, CA, USA, 20–24 February 2022; Volume 12032, pp. 81–92. [Google Scholar]
  37. Zhuang, Z.; Li, N.; Joseph Raj, A.N.; Mahesh, V.G.; Qiu, S. An RDAU-NET model for lesion segmentation in breast ultrasound images. PLoS ONE 2019, 14, e0221535. [Google Scholar] [CrossRef] [PubMed]
  38. Gao, S.H.; Cheng, M.M.; Zhao, K.; Zhang, X.Y.; Yang, M.H.; Torr, P. Res2net: A new multi-scale backbone architecture. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 43, 652–662. [Google Scholar] [CrossRef] [PubMed]
  39. Cai, S.; Liu, P.Z.; Luo, Y.M.; Du, Y.Z.; Tang, J.N. Breast microcalcification detection algorithm based on contourlet and asvm. Algorithms 2019, 12, 135. [Google Scholar] [CrossRef]
  40. Sakaida, M.; Yoshimura, T.; Tang, M.; Ichikawa, S.; Sugimori, H. Development of a Mammography Calcification Detection Algorithm Using Deep Learning with Resolution-Preserved Image Patch Division. Algorithms 2023, 16, 483. [Google Scholar] [CrossRef]
Figure 1. Challenges in a breast ultrasound image segmentation task. The red lines are the boundaries of the breast tumors; the regions just inside and outside these lines show where the boundaries are blurred. The areas inside the blue circles are typical regions of speckle noise, which appears as an irregular distribution of distinct bright and dark spots.
Figure 2. The network’s performance variation when eliminating high-frequency information. The metrics, including the mean Dice coefficient (mDice) and mean intersection over union (mIoU), both assess the internal consistency of objects within the segmentation results.
Figure 3. The framework of our proposed NSBR-Net primarily comprises the pyramid vision transformer, partial decoder (PD) [23], Noise Suppression Module, and Boundary Refinement Module.
Figure 4. Overall architecture of NSM. It is composed of a low-pass filter (LPF) and a high-pass filter (HPF).
Figure 5. Overall architecture of BRM, which contains reverse attention and axial attention.
Figure 6. Qualitative comparison of different methods on BUSI [19] (first and second rows) and Dataset B [21] (third and fourth rows). The red curve is the ground truth boundary, and the green curves are the segmentation results of these methods.
Figure 7. Visualization results of the ablative experiment on BUSI [19] (first and second rows) and Dataset B [21] (third and fourth rows). The red curve is the ground truth boundary, and the green curves are the segmentation results of the different components.
Table 1. Quantitative comparison of different methods on BUSI [19] to validate our model’s learning ability. ↑ denotes the higher the better, and ↓ denotes the lower the better. Red indicates the best results and blue represents the second-best results.

Method | All: mDice↑ / mIoU↑ / MAE↓ | Benign: mDice↑ / mIoU↑ / MAE↓ | Malignant: mDice↑ / mIoU↑ / MAE↓
UNet [18] | 0.6943 / 0.6033 / 0.0496 | 0.7219 / 0.6362 / 0.0380 | 0.6232 / 0.5183 / 0.0798
Attention U-Net [32] | 0.6934 / 0.6016 / 0.0509 | 0.7247 / 0.6374 / 0.0378 | 0.6125 / 0.5092 / 0.0845
UNet++ [1] | 0.7023 / 0.6070 / 0.0509 | 0.7212 / 0.6301 / 0.0398 | 0.6538 / 0.5476 / 0.0796
UNet3+ [2] | 0.7055 / 0.6139 / 0.0493 | 0.7358 / 0.6433 / 0.0388 | 0.6487 / 0.5414 / 0.0765
PraNet [23] | 0.7698 / 0.6847 / 0.0413 | 0.7841 / 0.7037 / 0.0320 | 0.7330 / 0.6272 / 0.0654
DoubleU-Net [33] | 0.7735 / 0.6870 / 0.0461 | 0.8016 / 0.7179 / 0.0333 | 0.7010 / 0.5885 / 0.0790
UACANet [25] | 0.7473 / 0.6650 / 0.0442 | 0.7593 / 0.6773 / 0.0353 | 0.7163 / 0.6089 / 0.0672
SANet [34] | 0.7708 / 0.6842 / 0.0458 | 0.7929 / 0.7074 / 0.0351 | 0.7136 / 0.6065 / 0.0732
UNext [35] | 0.7171 / 0.6258 / 0.0436 | 0.7366 / 0.6509 / 0.0332 | 0.6668 / 0.5613 / 0.0702
CaraNet [36] | 0.7769 / 0.6968 / 0.0383 | 0.7947 / 0.7199 / 0.0287 | 0.7289 / 0.6267 / 0.0633
AAU-Net [4] | 0.7751 / 0.6882 / – | 0.8088 / 0.7333 / – | 0.7154 / 0.6060 / –
DuAT [11] | 0.8017 / 0.7163 / 0.0406 | 0.7864 / 0.7037 / 0.0314 | 0.7185 / 0.6104 / 0.0767
PVT-CASCADE [10] | 0.8118 / 0.7270 / 0.0380 | 0.8374 / 0.7582 / 0.0245 | 0.7456 / 0.6465 / 0.0619
NSBR-Net (Ours) | 0.8183 / 0.7350 / 0.0355 | 0.8375 / 0.7601 / 0.0262 | 0.7686 / 0.6704 / 0.0596
Table 2. Quantitative comparison of different methods on Dataset B [21] to validate our model’s learning ability. ↑ denotes the higher the better, and ↓ denotes the lower the better. Red indicates the best results and blue represents the second-best results.

Method | All: mDice↑ / mIoU↑ / MAE↓ | Benign: mDice↑ / mIoU↑ / MAE↓ | Malignant: mDice↑ / mIoU↑ / MAE↓
UNet [18] | 0.7339 / 0.6462 / 0.0251 | 0.7496 / 0.6581 / 0.0174 | 0.7034 / 0.6233 / 0.0401
Attention U-Net [32] | 0.7407 / 0.6590 / 0.0231 | 0.7594 / 0.6786 / 0.0164 | 0.7045 / 0.6211 / 0.0362
UNet++ [1] | 0.7345 / 0.6493 / 0.0210 | 0.7596 / 0.6776 / 0.0141 | 0.6858 / 0.5945 / 0.0345
UNet3+ [2] | 0.6769 / 0.5944 / 0.0247 | 0.6634 / 0.5798 / 0.0213 | 0.7031 / 0.6227 / 0.0313
PraNet [23] | 0.7681 / 0.6714 / 0.0178 | 0.8008 / 0.7010 / 0.0100 | 0.7047 / 0.6140 / 0.0331
DoubleU-Net [33] | 0.7672 / 0.6765 / 0.0192 | 0.7933 / 0.7016 / 0.0118 | 0.7164 / 0.6277 / 0.0333
UACANet [25] | 0.7600 / 0.6711 / 0.0185 | 0.7917 / 0.7044 / 0.0109 | 0.6984 / 0.6065 / 0.0331
SANet [34] | 0.7535 / 0.6722 / 0.0195 | 0.7726 / 0.6899 / 0.0118 | 0.7165 / 0.6378 / 0.0344
UNext [35] | 0.6948 / 0.5984 / 0.0208 | 0.6966 / 0.5986 / 0.0139 | 0.6912 / 0.5982 / 0.0343
CaraNet [36] | 0.7742 / 0.6971 / 0.0171 | 0.8051 / 0.7300 / 0.0096 | 0.7144 / 0.6331 / 0.0315
AAU-Net [4] | 0.7814 / 0.6910 / – | – / – / – | – / – / –
DuAT [11] | 0.8046 / 0.7219 / 0.0161 | 0.8464 / 0.7616 / 0.0085 | 0.7233 / 0.6449 / 0.0308
PVT-CASCADE [10] | 0.8119 / 0.7321 / 0.0180 | 0.8586 / 0.7819 / 0.0099 | 0.7213 / 0.6355 / 0.0338
NSBR-Net (Ours) | 0.8148 / 0.7308 / 0.0174 | 0.8431 / 0.7624 / 0.0100 | 0.7600 / 0.6694 / 0.0317
Table 3. Performance of models trained on Dataset B [21] and evaluated on STU [37]. ↑ denotes the higher the better and ↓ denotes the lower the better. Red indicates the best results and blue represents the second-best results.

Method | mDice↑ | mIoU↑ | MAE↓
UNet [18] | 0.7838 | 0.6834 | 0.0492
Attention U-Net [32] | 0.7476 | 0.6380 | 0.0561
CaraNet [36] | 0.7708 | 0.6635 | 0.0434
DuAT [11] | 0.8739 | 0.7842 | 0.0275
NSBR-Net (Ours) | 0.8877 | 0.8048 | 0.0236
Table 4. Ablation study on the effectiveness of different components on the BUSI dataset. Red values enclosed in parentheses refer to improvement compared with the partial decoder.

PD | NSM | BRM | mDice (%) | mIoU (%)
✓ |   |   | 78.79 | 70.16
✓ | ✓ |   | 80.31 (+1.52) | 71.49 (+1.33)
✓ |   | ✓ | 80.72 (+1.93) | 72.10 (+1.94)
✓ | ✓ | ✓ | 81.83 (+3.04) | 73.50 (+3.34)
Table 5. Quantitative comparison of variants of NSM on the BUSI dataset. Red values enclosed in parentheses refer to improvements compared with our model with only the low-pass filter within NSM, while blue values indicate reductions.

LPF | HPF (without denoising) | HPF | mDice (%) | mIoU (%)
✓ |   |   | 81.07 | 72.34
✓ | ✓ |   | 80.31 (−0.76) | 71.49 (−0.85)
✓ |   | ✓ | 81.83 (+0.76) | 73.50 (+1.16)
Table 6. Quantitative comparison of backbone architecture on the BUSI dataset. ↑ denotes the higher the better and ↓ denotes the lower the better. Red indicates the best results and blue represents the second-best results.

Backbone | mDice↑ | mIoU↑ | MAE↓
UNet [18] | 0.7359 | 0.6395 | 0.8758
Res2Net [38] | 0.7641 | 0.6777 | 0.0413
PVT (Ours) [22] | 0.8183 | 0.7350 | 0.0355
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
