Article

The Efficacy of Semantics-Preserving Transformations in Self-Supervised Learning for Medical Ultrasound

1 David R. Cheriton School of Computer Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
2 Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
3 Schulich School of Medicine and Dentistry, Western University, London, ON N6A 3K7, Canada
* Author to whom correspondence should be addressed.
Bioengineering 2025, 12(8), 855; https://doi.org/10.3390/bioengineering12080855
Submission received: 17 June 2025 / Revised: 26 July 2025 / Accepted: 4 August 2025 / Published: 8 August 2025
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)

Abstract

Data augmentation is a central component of joint embedding self-supervised learning (SSL). Approaches that work for natural images may not always be effective in medical imaging tasks. This study systematically investigated the impact of data augmentation and preprocessing strategies in SSL for lung ultrasound. Three data augmentation pipelines were assessed: (1) a baseline pipeline commonly used across imaging domains, (2) a novel semantics-preserving pipeline designed for ultrasound, and (3) a distilled set of the most effective transformations from both pipelines. Pretrained models were evaluated on multiple classification tasks: B-line detection, pleural effusion detection, and COVID-19 classification. Experiments revealed that semantics-preserving data augmentation resulted in the greatest performance for COVID-19 classification—a diagnostic task requiring global image context. Cropping-based methods yielded the greatest performance on the B-line and pleural effusion object classification tasks, which require strong local pattern recognition. Lastly, semantics-preserving ultrasound image preprocessing resulted in increased downstream performance for multiple tasks. Guidance regarding data augmentation and preprocessing strategies was synthesized for developers working with SSL in ultrasound.

1. Introduction

Automated interpretation of medical ultrasound images is increasingly implemented using deep learning [1]. Deep neural networks (DNNs) achieve strong performance for applications in ultrasound imaging, such as distinguishing benign from malignant liver lesions [2], estimating left ventricular end-diastolic and end-systolic volumes [3], and screening for pneumothorax [4]. Several studies have found that artificial intelligence-based interpretation methods have exhibited strong accuracy across multiple tasks and improved the accessibility of point-of-care ultrasound; however, they struggle to perform well in some disease conditions or when images are poorly acquired [5].
Despite early successes, investigators are limited by the lack of publicly available datasets [6,7]. Where possible, researchers instead use private collections of ultrasound examinations, which may contain far more samples. Given the expense of manual annotation, many are turning to self-supervised learning (SSL) methods to pretrain DNNs on large, unlabelled collections of ultrasound data [8]. These SSL-pretrained backbone DNNs may then be fine-tuned for supervised learning tasks of interest.
An important category of SSL methods for computer vision is the joint embedding architecture, which is characterized by training DNNs to produce similar vector representations for pairs of related images. The most common method for retrieving related pairs of images from unlabelled datasets is to apply random transformations (i.e., data augmentation) to an image, producing two distorted views. The choice of random transformations steers the invariance relationships learned by the backbone.
In this study, we proposed and assessed data preprocessing and data augmentation strategies designed to preserve semantic content in medical ultrasound images (Figure 1). We compared handcrafted domain-specific augmentation methods against standard SSL data augmentation practices. We found that ultrasound-specific transformations resulted in the greatest improvement in performance for COVID-19 classification—a diagnostic task—on a public dataset. Experiments also revealed that standard cropping-based augmentation strategies outperformed ultrasound-specific transformations for object classification tasks in lung ultrasound (LU). Lastly, ultrasound-specific semantics-preserving preprocessing was found to be instrumental to the success of pretrained backbones. In summary, our contributions are as follows:
  • Semantics-preserving image preprocessing for SSL in ultrasound;
  • Semantics-preserving data augmentation methods designed for ultrasound images;
  • Comparison of multiple data augmentation strategies for SSL for multiple types of LU tasks;
  • Recommendations for developers working with unlabelled ultrasound datasets.
To our knowledge, this study is the first to quantify the impact of data augmentation methods for SSL with ultrasound. We are hopeful that the results and lessons from this study may contribute to the development of foundation models for medical ultrasound.

2. Background

2.1. Data Augmentation in Self-Supervised Learning

The joint embedding class of SSL methods is characterized by the minimization of an objective function that, broadly speaking, encourages similarity of related pairs of inputs. Semantically related pairs of images (i.e., positive pairs) are sampled from unlabelled datasets according to a pairwise relationship. If the SSL pairwise relationship is satisfied for samples exhibiting the same class, SSL methods will likely improve the performance of a classifier [9]. Most joint embedding methods rely on data augmentation to define the pairwise relationship. Some studies have used metadata or known relationships between samples to identify related pairs [10,11,12]; however, the availability of such information is rare. The choice of data augmentation transformations is therefore crucial, as it dictates the invariances learned [13]. However, the set of useful invariances differs by the image modality and downstream problem(s) of interest. Despite this, studies continue to adopt the data augmentation pipeline popularized by leading SSL methods, such as SimCLR [14], BYOL [15], Barlow Twins [16], and VICReg [17]. These methods utilized the same pipeline, but with minor hyperparameter variations. The pipeline includes the following transformations: random crops, horizontal reflection, colour jitter, Gaussian blur, and solarization. Hereafter, we refer to this baseline pipeline as StandardAug. Random rotation is an example of a transformation not found in the StandardAug pipeline that represents an important invariance relationship for many tasks in medical imaging. For example, random rotation has been applied in SSL pretraining with magnetic resonance exams of the prostate [18]. Moreover, the authors of that study did not use StandardAug’s Gaussian blur transformation because it may have rendered the images uninterpretable.
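For concreteness, the StandardAug pipeline can be approximated in a few lines of TorchVision. The probabilities and parameter ranges below mirror the settings described in Appendix C, but the exact values are assumptions for illustration rather than the authors' released implementation.

import torchvision.transforms as T

# Approximate sketch of the StandardAug pipeline (probabilities are assumed values).
standard_aug = T.Compose([
    T.RandomResizedCrop(128, scale=(0.08, 1.0), ratio=(0.75, 1.33)),          # A00: crop and resize
    T.RandomHorizontalFlip(p=0.5),                                             # A01: horizontal reflection
    T.RandomApply([T.ColorJitter(0.4, 0.4, 0.2, 0.1)], p=0.8),                 # A02: colour jitter
    T.RandomGrayscale(p=0.2),                                                  # A03: conversion to grayscale
    T.RandomApply([T.GaussianBlur(kernel_size=13, sigma=(0.1, 2.0))], p=0.5),  # A04: Gaussian blur
    T.RandomSolarize(threshold=128, p=0.1),                                    # A05: solarization
])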

2.2. Joint Embedding Self-Supervision in Ultrasound

Recent studies have examined the use of joint embedding SSL methods for ultrasound interpretation tasks, such as echocardiogram view classification [19], left ventricle segmentation [20], and breast tumour classification [21]. Some have proposed positive pair sampling schemes customized for ultrasound. The Ultrasound Contrastive Learning (USCL) method and its successors explored contrastive learning methods where the positive pairs were weighted sums of images from the same ultrasound video [22,23,24]. Other methods have studied the use of images from the same video as positive pairs [12,25]. In these studies, the set of transformations was a subset of the StandardAug data augmentation pipeline, occasionally with different hyperparameters. Few studies have proposed ultrasound-specific data augmentation methods for SSL. A recent study by Chen et al. [26] applied BYOL and SimCLR to pretrain 3D convolutional DNNs with specialized data augmentation for lung consolidation detection in ultrasound videos, observing that temporal transformations were contributory to their problem. This study builds on the previous literature by proposing and comparing domain-specific data augmentation and preprocessing methods for multiple types of downstream tasks.

3. Materials and Methods

3.1. Datasets and Tasks

We assessed the methods in this publication using a combination of public and private data. COVIDx-US is a public COVID-19 LU dataset consisting of 242 publicly sourced videos, acquired from a variety of manufacturers and sites [27]. Each example is annotated with one of the following classes: normal, COVID-19 pneumonia, non-COVID-19 pneumonia, and other lung pathology. Referred to as COVID hereafter, the task is a four-class image classification problem. Since there is no standard test partition, we split the data by patient identifier into training (70%), validation (15%), and test (15%) splits.
The second data source is a private collection of lung ultrasound examinations, which we refer to as LUSData. Access to these data was granted by the research ethics boards at Western University (REB 116838) and the University of Waterloo (REB 43986). LUSData contains videos of parenchymal and pleural views of the lung. A subset of the parenchymal views has labels for the presence of A-lines or B-lines (i.e., the AB classification task). A-lines are reverberation artifacts that indicate normal lung tissue, while B-lines are axial artifacts that indicate fluid or thickening in the lung. A subset of the pleural views is labelled for the presence or absence of pleural effusion (i.e., the PE classification task), which is an accumulation of fluid around the lungs. A small fraction of the parenchymal views in LUSData possess bounding box labels for the pleural line (i.e., the PL object detection task). Most exams in LUSData originated from a local healthcare centre, but a subset were acquired at another centre, which we adopted as an external test set. The labelled examples in the local dataset were split into training (70%), validation (15%), and test (15%) splits by patient. Table 1 provides the video and class counts of LUSData. Further dataset details are in Appendix A. All models in this study were trained on images rather than videos. Classification labels apply to every image in the video. However, individual images within each video labelled for the PL task have bounding box annotations.
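As an illustration of the patient-wise splitting described above, the sketch below uses scikit-learn's GroupShuffleSplit to produce 70/15/15 partitions; the videos DataFrame and its patient_id column are hypothetical placeholders, not the authors' data structures.

from sklearn.model_selection import GroupShuffleSplit

# Hypothetical patient-level 70/15/15 split; `videos` is assumed to be a DataFrame
# with one row per video and a `patient_id` column.
def split_by_patient(videos, seed=0):
    outer = GroupShuffleSplit(n_splits=1, test_size=0.30, random_state=seed)
    train_idx, rest_idx = next(outer.split(videos, groups=videos["patient_id"]))
    rest = videos.iloc[rest_idx]
    inner = GroupShuffleSplit(n_splits=1, test_size=0.50, random_state=seed)
    val_idx, test_idx = next(inner.split(rest, groups=rest["patient_id"]))
    return videos.iloc[train_idx], rest.iloc[val_idx], rest.iloc[test_idx]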

3.2. Semantics-Preserving Preprocessing

The field of view (FOV) in ultrasound images is typically surrounded by burnt-in scan parameters, logos, and other details. We estimated the shape of the FOV and masked out all extraneous graphical entities using ultrasound cleaning software (UltraMask, Deep Breathe Inc., London, ON, Canada, https://www.deepbreathe.ai/products, accessed on 16 June 2025). Semantic information only exists within the FOV of the ultrasound, which typically occupies a fraction of the images. Scaling transformations, such as random cropping, could produce views that largely contain background. Accordingly, we cropped the cleaned images to the smallest rectangle that encapsulates the FOV mask to maximize semantic content in ultrasound images. Figure 2 depicts this semantics-preserving preprocessing workflow. The process was applied to all images in LUSData and COVIDx-US.
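A minimal sketch of the final cropping step is shown below, assuming a binary FOV mask has already been produced by the masking tool; the array layout is an assumption for illustration.

import numpy as np

# Crop a masked ultrasound image to the smallest rectangle enclosing its FOV mask.
# `image` is assumed to be H x W x C and `fov_mask` a binary H x W array.
def crop_to_fov(image: np.ndarray, fov_mask: np.ndarray) -> np.ndarray:
    masked = image * fov_mask[..., None]          # zero out everything outside the FOV
    rows = np.flatnonzero(fov_mask.any(axis=1))   # rows containing FOV pixels
    cols = np.flatnonzero(fov_mask.any(axis=0))   # columns containing FOV pixels
    return masked[rows.min():rows.max() + 1, cols.min():cols.max() + 1]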

3.3. Ultrasound-Specific Data Augmentation

Joint embedding SSL is effective when positive pairs contain similar information with respect to downstream tasks [9]. Several SSL studies applied to photographic or medical imaging datasets adopted variations of the StandardAug data augmentation pipeline. The core aim of our study was to determine if semantics-preserving data augmentation would better equip pretrained feature extractors for downstream LU tasks than the commonly applied StandardAug pipeline.
We refer to a data augmentation pipeline as an ordered sequence of transformations, each applied with some probability. For clarity, we assign each transformation an alphanumeric identifier and express a data augmentation pipeline as an ordered sequence of identifiers. The StandardAug pipeline transformations are detailed in Table 2. The table also includes an estimate of the time to transform a single image. Details on how the runtime estimates were calculated are in Appendix B.
We designed the AugUS-O pipeline, which was intended to preserve semantic information in the entire ultrasound FOV while imposing nontrivial differences across invocations. The transformations in AugUS-O are listed below.
B00: Probe Type Change: Inspired by Zeng et al.’s work [28], this transformation resamples an ultrasound image according to a different field of view (FOV) shape. Linear FOV shapes are converted to curvilinear shapes, while curvilinear and phased arrays are converted to linear ones.
B01: Convexity Change: The shape of convex FOVs can vary, depending on the manufacturer, depth, and field of view of the probe. This transformation modifies the FOV shape such that the distance between x1 and x2 is altered, mimicking a change in θ0.
B02: Wavelet Denoising: As an alternative to the commonly used Gaussian blur transformation, this transformation denoises an image by thresholding it in wavelet space, according to Birgé and Massart’s method [29].
B03: Contrast-Limited Adaptive Histogram Equalization: This transformation enhances contrast by applying locally informed equalization [30].
B04: Gamma Correction: In contrast to standard brightness change transforms, gamma correction applies a nonlinear change in pixel intensity.
B05: Brightness and Contrast Change: The brightness and contrast of the image are modified by applying a linear transform to the pixel values.
B06: Depth Change Simulation: Changing the depth controls on an ultrasound probe determines how far the visible region extends from the probe. This transformation simulates a change in depth by applying a random zoom while preserving the FOV shape.
B07: Speckle Noise Simulation: Speckle noise, Gaussian noise, and salt and pepper (S&P) noise are prevalent in ultrasound [31]. This transformation applies Singh et al.’s [32] synthetic speckle noise algorithm to the image.
B08: Gaussian Noise Simulation: Multiplicative Gaussian noise is independently applied to each pixel.
B09: Salt and Pepper Noise Simulation: A small, random assortment of pixels is set to black or white.
B10: Horizontal Reflection: The image is reflected about the central vertical axis.
B11: Rotation and Shift: The image is rotated and translated by a random angle and vector, respectively.
Refer to Figure 3 for a visual example of each transformation in AugUS-O. Algorithmic details and parameter settings for the StandardAug and AugUS-O pipelines are in Appendix C and Appendix D, respectively. As is common in stochastic data augmentation, each transformation was applied with some probability. Table 3 gives the entire sequence of transformations and the probability with which each is applied. Visuals of positive pairs produced using the StandardAug and AugUS-O augmentation pipelines can be found in Figure 4a and Figure 4b, respectively.
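The sketch below illustrates how such a pipeline can be composed: each transformation is paired with its application probability and applied independently in sequence. The transformation callables named in the comment are placeholders, and Table 3 remains the authoritative source for the ordering and probabilities.

import random

# Schematic composition of a stochastic augmentation pipeline. Each entry pairs a
# callable transformation with the probability of applying it to a given image.
def make_pipeline(steps):
    def apply(image):
        for transform, prob in steps:
            if random.random() < prob:
                image = transform(image)
        return image
    return apply

# e.g., augus_o = make_pipeline([(probe_type_change, 0.2), (clahe, 0.5), ...])  # placeholder callables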
We conducted an informal assessment of the similarity of positive pairs. Positive pairs were produced for 50 randomly sampled images, using both the StandardAug and the AugUS-O pipelines. The pairs were presented in random order to one of the authors, who is an expert in point-of-care ultrasound. They were aware of the two pipelines but were not told which pipeline produced each pair. The expert was asked to mark the pairs they believed conveyed the same clinical impression. We observed that 58 % of pairs produced with the StandardAug pipeline were marked as similar, whereas 70 % of the AugUS-O pairs were marked as similar. While not conclusive, this manual evaluation added credence to the semantics-preserving intention of the design.

3.4. Discovering Semantically Contributory Transformations

A major aim of this work was to explore the utility of various data augmentation schemes during pretraining. As such, we conducted leave-one-out analysis for each of the StandardAug and AugUS-O pipelines to estimate the impact of each transformation on the models’ ability to solve downstream classification tasks. We pretrained separate models on the unlabelled portion of LUSData, using an altered version of a pipeline with one transformation omitted. We then conducted 10-fold cross-validation on the LUSData training set for downstream classification tasks for each pretrained model. The median cross-validation test performance for each model pretrained using an ablated pipeline was compared to a baseline model that was pretrained with the entire pipeline. The experiment was conducted for both the StandardAug and AugUS-O pipelines. Any transformations that, when omitted, resulted in worsened performance on either AB or PE were deemed contributory.

3.5. Training Protocols

We adopted the MobileNetV3Small architecture [33] for all experiments in this study and pretrained using the SimCLR method [14]. MobileNetV3Small was chosen due to its real-time inference capability on mobile devices and its use in prior work by VanBerlo et al. for similar tasks [34]. Local inference on edge devices is especially important in point-of-care ultrasound imaging, as modern ultrasound devices are used in austere settings with limited internet access. The SimCLR projector was a 2-layer multilayer perceptron with 576 nodes per layer. Images were resized to 128 × 128 pixels prior to the forward pass, which is consistent with prior work for similar tasks [34]. Unless otherwise stated, backbones (i.e., feature extractors) were initialized using ImageNet-pretrained weights [35] and were pretrained using the LARS optimizer [36] with a batch size of 1024, a base learning rate of 0.2, and a linear warmup with a cosine decay schedule. Pretraining was conducted for 3 epochs with 0.3 warmup epochs for LUSData, and 100 epochs with 10 warmup epochs for COVIDx-US.
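A rough sketch of the pretraining model is given below: an ImageNet-initialized MobileNetV3Small feature extractor followed by a two-layer projector with 576 nodes per layer. Details beyond those stated above (e.g., the ReLU between projector layers) are assumptions.

import torch.nn as nn
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights

class SimCLRModel(nn.Module):
    def __init__(self, proj_dim: int = 576):
        super().__init__()
        base = mobilenet_v3_small(weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1)
        # Convolutional feature extractor followed by global pooling (576-d output).
        self.backbone = nn.Sequential(base.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        # 2-layer MLP projector with 576 nodes per layer.
        self.projector = nn.Sequential(
            nn.Linear(576, proj_dim), nn.ReLU(inplace=True), nn.Linear(proj_dim, proj_dim)
        )

    def forward(self, x):
        h = self.backbone(x)   # representation used for downstream tasks
        z = self.projector(h)  # embedding fed to the SimCLR contrastive loss
        return h, z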
To conduct supervised evaluation, a perceptron classification head was appended to the final pooling layer of the backbone. Classifiers were trained using stochastic gradient descent with a momentum of 0.9 and a batch size of 512. The learning rates for the backbone and head were 0.002 and 0.02 , respectively; each was annealed according to a cosine decay schedule. Training was conducted for 10 epochs on LUSData and 30 epochs on COVIDx-US. Unless otherwise stated, the weights corresponding to the epoch with the lowest validation loss were retained for test set evaluation.
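The optimizer configuration for fine-tuning can be expressed with per-group learning rates, as sketched below; the backbone stand-in and head are hypothetical placeholders, not the actual training code.

import torch
import torch.nn as nn

# Hypothetical fine-tuning model: a pretrained backbone (placeholder here) plus a
# single-layer classification head, optimized with separate learning rates.
model = nn.ModuleDict({
    "backbone": nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 576)),  # stand-in for MobileNetV3Small
    "head": nn.Linear(576, 2),
})
optimizer = torch.optim.SGD(
    [
        {"params": model["backbone"].parameters()},          # uses the default lr (0.002)
        {"params": model["head"].parameters(), "lr": 0.02},  # larger lr for the new head
    ],
    lr=0.002,
    momentum=0.9,
)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)  # 10 epochs for LUSData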
Although this study focused on classification tasks, we also evaluated backbones on the PL object detection task using the Single Shot Detector (SSD) method [37]. SSL-pretrained backbones were used as the convolutional feature extractor. Architectural and training details for SSD are in Appendix E.
Self-supervised pretraining was conducted using virtual machines equipped with an Intel E5-2683 v4 Broadwell CPU at 2.1 GHz and 2 Nvidia Tesla P100 GPUs each with 12 GB of VRAM. Supervised training was conducted using the same hardware, except with a single GPU. Source code for the experiments and transformations is available in a public GitHub repository (https://github.com/bvanberl/aug_us_ssl, release v1).

4. Results

4.1. Transformation Leave-One-Out Analysis

Leave-one-out analysis was conducted to discover which transformations in each of the StandardAug and AugUS-O pipelines were contributory to downstream task performance. We pretrained backbones using versions of each pipeline with one transformation omitted. The private LUSData training set was split by patient into 10 disjoint subsets. For each pretrained backbone, 10-fold cross-validation was conducted to obtain estimates of the performance of linear classifiers trained on its output feature vectors. The maximum validation area under the receiver operating characteristic curve (AUC) across epochs was recorded. Omitted transformations that resulted in a statistically significant reduction in validation AUC for either the AB or PE task were included in a third pipeline.
We conducted statistical testing for each of the StandardAug and AugUS-O pipelines, and for each of the AB and PE tasks (described in Section 3.1). Friedman’s test for multiple comparisons [38] was conducted with a significance level of 0.05. When significant differences were found, we performed the Wilcoxon Signed-Rank Test [39] to compare the test AUCs of each ablated model to the baseline’s test AUC values. To control for false positives, the Holm-Bonferroni correction [40] was applied to keep the family-wise significance level at 0.05.
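The testing procedure can be sketched with SciPy as below: Friedman's test across all conditions, followed by Wilcoxon signed-rank tests of each ablated pipeline against the full-pipeline baseline under a Holm-Bonferroni correction. The fold-wise AUC arrays are hypothetical inputs, and this is an illustrative reconstruction of the procedure rather than the authors' analysis script.

from scipy.stats import friedmanchisquare, wilcoxon

def compare_to_baseline(baseline_aucs, ablated_aucs_by_name, alpha=0.05):
    # Omnibus test across the baseline and all ablated conditions.
    _, p = friedmanchisquare(baseline_aucs, *ablated_aucs_by_name.values())
    if p >= alpha:
        return {}
    # Pairwise Wilcoxon signed-rank tests against the baseline.
    pvals = {name: wilcoxon(baseline_aucs, aucs).pvalue
             for name, aucs in ablated_aucs_by_name.items()}
    # Holm-Bonferroni: compare the i-th smallest p-value to alpha / (m - i).
    ordered = sorted(pvals.items(), key=lambda kv: kv[1])
    significant, m = {}, len(ordered)
    for i, (name, pv) in enumerate(ordered):
        if pv >= alpha / (m - i):
            break  # remaining (larger) p-values cannot be significant under Holm
        significant[name] = pv
    return significant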
Table 4 details the results of the leave-one-out analysis. Friedman’s test detected differences in performance on both the AB and PE tasks when pretrained using the StandardAug pipeline, but only the AB task exhibited differences when pretrained with the AugUS-O pipeline. As shown in Table 4, the set of transformations that exhibited statistically significant reductions in test AUC for at least one task when excluded were crop and resize (A00), colour jitter (A02), CLAHE (B03), and rotation and shift (B11). Appendix F provides all test statistics from this investigation. Of note is the sharp decrease in performance without the random crop and resize (A00), indicating that it is a critical transformation.
We then constructed a distilled pipeline consisting only of these transformations. Hereafter referred to as AugUS-D, the pipeline is expressed as the following sequence: [B03, A02, B11, A00]. Figure 4c provides some examples of positive pairs produced with AugUS-D. For more examples of pairs produced by each pipeline, see Appendix G.

4.2. Object Classification Task Evaluation

The StandardAug, AugUS-O, and AugUS-D pipelines were compared in terms of their performance on multiple downstream tasks. Model backbones were pretrained using each of the data augmentation pipelines on the union of the unlabelled and training sets in LUSData. Linear evaluation and fine-tuning experiments were performed according to the procedure explained in Section 3.5. In this section, we present results on the two object classification tasks: A-line vs. B-line classification (AB) and pleural effusion classification (PE).
Linear classifiers indicate the usefulness of pretrained backbones, as the only trainable weights for supervised learning are those belonging to the perceptron head. Table 5 reports the test set performance of linear classifiers for each task and data augmentation pipeline. On the private dataset, the AugUS-D and StandardAug pipelines performed comparably well on the AB task. AugUS-D attained greater performance metrics than the StandardAug pipeline on PE. To provide a visual perspective on linear classifier performance, we produced two-dimensional t-distributed Stochastic Neighbour Embeddings (t-SNEs) of the feature vectors outputted by pretrained backbones [41]. Shown in Figure 5, the separability of the visual representations is consistent with linear classifier performance.
We fine-tuned the pretrained models, allowing the backbone’s weights to be trainable in addition to the model head. Table 5 gives the test set performance of the fine-tuned classifiers. We observed similar performance differences among the different augmentation pipelines, but note some additional findings. The model pretrained using AugUS-O on LUSData performed comparably to the other pipelines on AB but exhibited extremely poor performance on the PE test set. Although this might suggest overfitting to the training set, examination of training metrics revealed that training and validation metrics were close, with a validation AUC of 0.861. Nonetheless, fine-tuned models that were pretrained with the StandardAug and AugUS-D pipelines yielded strong performance on both tasks in LUSData.
Linear and fine-tuned classifiers for the AB and PE tasks were also evaluated on the external portion of the LUSData test set. Metrics for external test set predictions are provided in Table 6. Most classifiers exhibited degraded performance on external data, compared to the local test set. Overall, the relative performance of the classifiers on the external test set was reflective of their performance on local test data. SimCLR-pretrained linear and fine-tuned classifiers achieved greater performance than those initialized with ImageNet-pretrained or randomly sampled weights on external AB and PE test sets. Note the difference in the manufacturer and probe type distributions between the local and external datasets (see Table A1)—there was a far greater proportion of videos gathered by linear probes and Philips-manufactured devices in the external test set. Unlike the local test evaluation, the network trained from scratch performed comparably on AB to the SimCLR-pretrained models that utilized the AugUS-O and AugUS-D pipelines. However, the SimCLR-pretrained model that utilized the StandardAug pipeline achieved the greatest AB test AUC by a margin of 0.029 among the fine-tuned models. On the PE task, the classifier originating from the same pretrained model that utilized StandardAug achieved the greatest AUC by a margin of 0.019 among the fine-tuned models. In the linear classifier setting, models pretrained using StandardAug and AugUS-D performed comparably well on the external test set, achieving greater AUC than the model pretrained using AugUS-O. As with the local test set, the pretrained models that incorporated the StandardAug and AugUS-D pipelines achieved the greatest test AUC, while the pretrained model that utilized AugUS-O performed the worst.
Although MobileNetV3Small was the backbone architecture used for these experiments, we repeated the above evaluations using the more commonly employed ResNet18 architecture [42]. Similar trends were observed regarding the greater test performance attained by models pretrained with cropping-based pipelines. However, the fine-tuned models greatly overfit, likely due to ResNet18’s much greater capacity. The ResNet18 models that achieved the greatest test performance were the linear classifiers trained on frozen backbones. Notably, the trend persisted when evaluating on external test data. Detailed results for ResNet18 can be found in Appendix H.

4.3. Diagnostic Classification Task Evaluation

Models pretrained on LUSData were also evaluated on the COVID-19 multi-class problem (COVID). Unlike the AB and PE tasks, COVID is a diagnostic task that involves global image understanding, as the relationship between objects is pertinent. Multiple findings on lung ultrasound have been observed in the context of COVID-19 pneumonia, including B-lines, pleural line abnormalities, and consolidation [43].
Linear classifiers were trained on the COVID training set and evaluated on the COVID test set. As shown in Table 7, AugUS-O was observed to have the greatest test multiclass AUC, which was considered the primary metric of interest. Looking at the t-SNE visualizations in Figure 5, AugUS-O corresponds to the only visualization where the representations for the COVID-19 Pneumonia and non-COVID-19 Pneumonia classes are clustered together.
Table 7 provides test metrics for fine-tuned COVID classifiers. Again, AugUS-O exhibited the best performance. Moreover, fine-tuned models generally performed worse than the linear classifiers trained on feature vectors from SSL-pretrained models; they likely suffered from overfitting, as COVIDx-US is a smaller dataset.
Unlike the AB and PE tasks, the COVID models described above were pretrained on a different dataset (LUSData) than their downstream data. We therefore repeated the linear classification and fine-tuning experiments using models pretrained on the COVIDx-US training set. Table 7 reports the results of the linear and fine-tuning evaluations. As was observed for backbones pretrained on LUSData, pretraining with the AugUS-O pipeline resulted in the greatest test set AUC.
The trends observed for the COVID task are different than those observed for the AB and PE tasks. Regardless of the augmentation pipeline, SimCLR-pretrained weights resulted in better performance than ImageNet-pretrained or random weight initialization. On object classification tasks, models pretrained using the StandardAug and AugUS-D pipelines performed the best. However, on the diagnostic COVID task, AugUS-O performed best. Recall that AugUS-O was designed to retain semantic information, while both StandardAug and AugUS-D contain the very impactful crop and resize (C&R) transform that can obscure large portions of the image. Object classification tasks require scale invariance, which is enforced by applying C&R during SSL pretraining. Diagnostic tasks, on the other hand, require global image context for interpreters to make a decision, which is preserved best by the AugUS-O pipeline.

4.4. Object Detection Task Evaluation

Recall that the PL task is an object detection problem geared toward localizing the pleural line. We evaluated the pretrained models on PL to explore whether the trends observed for object-centric LU classification tasks would hold for an object detection task, where locality understanding is explicit. We considered two evaluation settings: one in which the pretrained backbone’s weights were held constant, and another in which the backbone’s weights were trainable. Table 8 reports the average precision at a 50% intersection over union threshold (AP@50) evaluated on the LUSData test set. When the backbone weights were frozen, SimCLR pretraining with AugUS-O resulted in the greatest test AP@50. These trends differed from the results observed for AB and PE classification, which both require object recognition. We speculate that aggressive cropping during pretraining produced positive pairs in which one image contained a pleural line while the other did not, making it difficult to learn representations for pleural line objects when pretraining with the StandardAug or AugUS-D pipelines. The performance of the frozen pretrained backbones was strong overall, considering the low capacity of the backbone and the small, narrow shape of pleural line objects. When fine-tuning end-to-end, SimCLR pretraining with AugUS-D resulted in the greatest test AP@50.

4.5. Label Efficiency Assessment

Experiments were conducted to test the robustness of pretrained models in settings where few labelled samples are available. The experiment was conducted only for the AB and PE classification tasks because there were enough unique videos and patients in the training set to create several disjoint subsets. Backbones were fine-tuned on 20 subsets of approximately 5 % of the LUSData training set, split by patient, yielding 20 performance estimates for low-label settings. Splitting was conducted at the patient level to heighten the difficulty of the task and to limit dependence between subsets. Baseline estimates without SSL pretraining were obtained via initialization with random weights and with ImageNet-pretrained weights, resulting in five different performance conditions. Figure 6 displays boxplots for test AUC distributions under each condition. Friedman’s test indicated that there were significant differences among the median test AUC across conditions, for both the AB and PE tasks. Post-hoc Wilcoxon Signed-Rank Tests were then conducted for each pair of conditions, using the Bonferroni correction with a family-wise error rate of α = 0.05 . The median test AUCs of SimCLR-pretrained models were significantly greater than those initialized with random or ImageNet-pretrained weights for both the AB and PE tasks. All medians were significantly different for AB, except for the SimCLR-pretrained models using the StandardAug and AugUS-D pipelines, which achieved the greatest performance. Notably, these pipelines both utilize the crop and resize transformation. No significant differences were observed between any of the SimCLR-pretrained models for PE. Appendix I provides the test statistics for the above comparisons.

4.6. Impact of Semantics-Preserving Preprocessing

As outlined in Section 3.2, all ultrasound images were cropped to the smallest rectangle enclosing the FOV because the areas outside the FOV are bereft of information. Since pipelines containing the crop and resize transformation (C&R) would be more likely to result in positive pairs that do not cover the FOV, it was hypothesized that cropping to the FOV as a preprocessing step would result in stronger pretrained backbones. To investigate the effect of this semantics-preserving preprocessing step, we pretrained backbones on LUSData using each data augmentation pipeline and evaluated them on the AB, PE, and COVID tasks. Table 9 compares the performance of each backbone with and without the preprocessing step. Performance on the AB task did not change. However, test AUC on both the PE and COVID tasks was consistently lower when the semantics-preserving preprocessing was not applied. Note that far less labelled data is available for PE and COVID than for AB. Based on these experiments, FOV cropping is a valuable semantics-preserving preprocessing step for multiple LU classification tasks.

4.7. Impact of Cropping in Object Classification Tasks

The leave-one-out analysis for transformations yielded the striking finding that crop and resize (C&R) was the most effective transformation in the StandardAug pipeline for the two object classification tasks: AB and PE. Moreover, both pipelines containing C&R resulted in the greatest downstream test performance on AB and PE. Ordinarily, crops are taken at random locations in an image, covering between 8% and 100% of the original image’s area. Aggressive crops can create situations in which positive pairs do not contain the same objects of interest. Figure 7 shows how C&R could produce positive pairs with different semantic content. Despite this, the results indicated that pipelines containing C&R led to improved performance for the object-centric AB and PE tasks. The exceptional influence of C&R warranted further investigation into its hyperparameters.
We investigated the impact of the minimum crop area, c, as a hyperparameter. Models were pretrained with the AugUS-D pipeline, using values for c in the range [0.05, 0.9]. Linear evaluation was conducted for the AB and PE tasks. As shown in Figure 8, smaller values of c yielded better performance, peaking near c = 0.1. The default assignment of c = 0.08 was already a reasonable choice for these two tasks. Additional experiments elucidating the effects of C&R hyperparameters can be found in Appendix J.
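A sketch of this sweep is shown below: each value of c parameterizes the crop-and-resize transform, and the surrounding pretraining and linear-evaluation loop (omitted here) is run once per value.

import torchvision.transforms as T

for c in [0.05, 0.1, 0.2, 0.4, 0.6, 0.9]:  # candidate minimum crop areas
    crop_and_resize = T.RandomResizedCrop(128, scale=(c, 1.0), ratio=(0.75, 1.33))
    # ...assemble the AugUS-D pipeline around this transform, pretrain, then run linear evaluation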
Another concern with C&R is that it could result in crops covering the black background on images with a convex FOV. Despite the semantics-preserving preprocessing (described in Figure 2), the top left and right corners of such images provide no information. To characterize the robustness of pretraining under these circumstances, we repeated the experiments sweeping over c ∈ [0.05, 0.9] but first applied the probe type change transformation (i.e., B00) to every convex FOV. Thus, all inputs to the model were linear FOVs devoid of non-semantic background. A by-product of this transformation is that the near fields of convex images are horizontally stretched. As seen in Figure 8, this change resulted in a slight decrease in performance for both the AB and PE tasks. Evidently, the detriment of spatial distortion outweighed the benefit of guaranteeing that crops were positioned over semantic regions.
Overall, it is clear that aggressive C&R is beneficial for distinguishing between A-lines and B-lines and detecting pleural effusions on LU. Both are object-centric classification tasks. Even though some crops may not contain the object, the backbone would be exposed to several paired instances of transformed portions of objects during pretraining, potentially facilitating texture and shape recognition. Conversely, solving diagnostic tasks such as COVID requires a holistic assessment of the FOV, wherein the co-occurrence of objects is contributory to the overall impression.

5. Conclusions

This study proposed and evaluated data augmentation and preprocessing strategies for self-supervised learning in ultrasound. A commonly employed baseline pipeline (StandardAug) was compared to a handcrafted semantics-preserving pipeline (AugUS-O) and a hybrid pipeline (AugUS-D) composed from the first two. Evaluation of LU interpretation tasks revealed a dichotomy between the utility of the pipelines. Pipelines featuring the cropping transformation (StandardAug and AugUS-D) were most useful for object classification and detection tasks in LU. On the other hand, AugUS-O—designed to preserve semantics in LU—resulted in the greatest performance on a diagnostic task that required global context. Additionally, ultrasound field of view cropping was found to be a beneficial preprocessing step for multiple LU classification tasks, regardless of the data augmentation strategy.
Based on the results of this study, we provide guidance for machine learning practitioners seeking to apply self-supervised pretraining for tasks in ultrasound imaging. First, developers should use semantics-preserving preprocessing during pretraining that crops images to the bounds of the ultrasound FOV. When considering data augmentation strategies for pretraining, semantics-preserving transformations should be considered for tasks requiring holistic interpretation of images, while cropping-based transformations should be leveraged for object-centric downstream tasks.
Some limitations are acknowledged in this study. For example, SimCLR was the only SSL objective that was investigated, and all downstream tasks were confined to the lung. Moreover, some of the transformations introduced in this work constitute computationally expensive preprocessing steps, as they are applied with nonzero probability to each image. Lastly, while AugUS-O was composed of several transformations, we acknowledge that it does not encapsulate all possible transformations that could preserve semantic information in ultrasound images.
Future work should apply this study’s methods to assess the impact of data augmentation pipelines for ultrasound diagnostic tasks outside of the lung and for other SSL methods. Future studies could also compare data augmentation strategies for localization and segmentation downstream tasks in ultrasound.

Author Contributions

Conceptualization, B.V., J.H. and A.W.; methodology, B.V.; software, B.V.; validation, B.V.; formal analysis, B.V.; investigation, B.V.; resources, R.A.; data curation, B.V. and R.A.; writing—original draft preparation, B.V.; writing—review and editing, B.V., J.H., A.W. and R.A.; visualization, B.V.; supervision, J.H. and A.W.; project administration, B.V.; funding acquisition, B.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) grant number 186945.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Research Ethics Boards of Western University (116838; 28 January 2021) and the University of Waterloo (43986; 21 December 2021).

Informed Consent Statement

Not applicable.

Data Availability Statement

The LUSData dataset is not readily available due to restrictions imposed by the data owner; it is proprietary and thus cannot be shared. The COVIDx-US dataset is available to the public in the COVID-US repository at https://github.com/nrc-cnrc/COVID-US, as presented by Ebadi et al. [27].

Acknowledgments

Computational resource support was provided by Compute Ontario (https://computeontario.ca), BC DRI Group, and the Digital Research Alliance of Canada (alliancecan.ca).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Dataset Details

This section provides further details regarding the composition of the LUSData and COVIDx-US datasets, stratified by different attributes. Table A1 provides a breakdown of the characteristics of the unlabelled data, training set, validation set, local test set, and external test set. As can be seen in the table, the external test set’s manufacturer and probe type distributions differ greatly from those in the local test set. As such, the external test set constitutes a meaningful assessment of generalizability for models trained using the LUSData dataset. Table A2 provides similar information for the COVIDx-US dataset [27]. Note that the probe types listed in Table A1 and Table A2 are predictions produced by a product (UltraMask, Deep Breathe Inc., London, ON, Canada) and not from metadata accompanying the examinations.
Table A1. Characteristics of the ultrasound videos in the LUSData dataset. The number of videos possessing known values for a variety of attributes is displayed. Percentages of the total in each split are provided as well, but some do not sum to 100 due to rounding.
Attribute | Unlabelled | Train | Validation | Local Test | External Test
Probe Type
Phased Array | 50,769 (85.6%) | 5146 (90.6%) | 1051 (88.8%) | 1062 (85.0%) | 586 (63.4%)
Curved Linear | 6601 (11.1%) | 439 (7.7%) | 108 (9.1%) | 167 (13.4%) | 92 (9.9%)
Linear | 1939 (3.3%) | 94 (1.7%) | 25 (2.1%) | 20 (1.6%) | 247 (26.7%)
Manufacturer
Sonosite | 53,663 (90.5%) | 4386 (77.2%) | 848 (71.6%) | 963 (77.1%) | 626 (67.7%)
Mindray | 4045 (6.8%) | 847 (14.9%) | 216 (18.2%) | 153 (12.2%) | 55 (5.9%)
Philips | 66 (0.1%) | 50 (0.9%) | 6 (0.5%) | 11 (0.9%) | 244 (26.4%)
Esaote | 233 (0.4%) | 4 (0.1%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%)
GE § | 10 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) | 0 (0.0%)
Depth (cm)
Mean [STD] | 14.3 [4.5] | 13.0 [3.8] | 13.1 [3.7] | 12.7 [3.8] | 11.1 [4.8]
Unknown | 1606 (2.7%) | 407 (7.2%) | 115 (9.7%) | 122 (9.8%) | 0 (0.0%)
Environment
ICU | 43,839 (73.9%) | 3722 (65.5%) | 706 (59.6%) | 727 (58.2%) | 0 (0.0%)
ER | 13,280 (22.4%) | 760 (13.4%) | 206 (17.4%) | 253 (20.3%) | 0 (0.0%)
Ward | 2033 (3.4%) | 173 (3.0%) | 25 (2.1%) | 49 (3.9%) | 0 (0.0%)
Urgent Care | 129 (0.2%) | 3 (0.1%) | 0 (0.0%) | 1 (0.1%) | 0 (0.0%)
Unknown | 28 (0.0%) | 1021 (18.0%) | 247 (20.9%) | 219 (17.5%) | 925 (100.0%)
Patient Sex
Male | 30,300 (51.1%) | 2963 (52.2%) | 607 (51.3%) | 588 (47.1%) | 0 (0.0%)
Female | 20,809 (35.1%) | 1793 (31.6%) | 325 (27.4%) | 412 (33.0%) | 0 (0.0%)
Unknown | 8200 (13.8%) | 923 (16.3%) | 252 (21.3%) | 249 (19.9%) | 925 (100.0%)
Patient Age
Mean [STD] | 62.3 [20.0] | 63.3 [16.5] | 62.0 [18.4] | 62.8 [17.3] | -
Unknown | 53 (0.1%) | 1029 (18.2%) | 247 (20.9%) | 219 (17.5%) | 925 (100.0%)
Total | 59,309 | 5679 | 1184 | 1249 | 925
§ General Electric. ICU: Intensive Care Unit. ER: Emergency Room.
Table A2. Characteristics of the ultrasound videos in the COVIDx-US dataset. The number of videos possessing known values for a variety of attributes is displayed. Percentages of the total in each split are provided as well, but some do not sum to 100 due to rounding.
Attribute | Train | Validation | Test
Probe Type
Phased Array | 55 (32.5%) | 18 (42.9%) | 11 (35.5%)
Curved Linear | 83 (49.1%) | 13 (31.0%) | 10 (32.3%)
Linear | 31 (18.3%) | 11 (26.2%) | 10 (32.3%)
Patient Sex
Male | 33 (19.5%) | 15 (35.7%) | 7 (22.6%)
Female | 18 (10.7%) | 3 (7.1%) | 3 (9.7%)
Unknown | 118 (69.8%) | 24 (57.1%) | 21 (67.7%)
Patient Age
Mean [STD] | 36.6 [14.6] | 47.6 [18.9] | 52.9 [21.1]
Unknown | 127 (75.1%) | 22 (52.4%) | 20 (64.5%)
Total | 169 | 42 | 31

Appendix B. Transformation Runtime Estimates

We aimed to examine relative runtime differences between the transformations used in this study. Runtime estimates were obtained for each transformation in the StandardAug and AugUS-O pipelines. Estimates were calculated by conducting the transformation 1000 times using the same image. The experiments were run on a system with an Intel i9-10900K CPU at 3.7 GHz. Python 3.11 was utilized, and the transforms were written using PyTorch version 2.2.1 and TorchVision 0.17.1. Note that runtime may vary considerably depending on the software environment and underlying hardware.

Appendix C. StandardAug Transformations

We investigated a standard data augmentation pipeline that has been used extensively in the SSL literature [14,15,16,17]. To standardize experiments, we adopted the symmetric version from the VICReg paper [17], which uses the same transformations and probabilities for each of the two branches in the joint embedding architecture. The transformations and their parameter settings are widely adopted. Below, we detail their operation and parameter assignments.

Appendix C.1. Crop and Resize (A00)

A rectangular crop of the input image is designated at a random location. The area of the cropped region is sampled from the uniform distribution U ( 0.08 , 1 ) . The cropped region’s aspect ratio is sampled from the uniform distribution U ( 0.75 , 1.33 ) . Its width and height are calculated accordingly. The cropped region is then resized to the original image dimension.

Appendix C.2. Horizontal Reflection (A01)

The image is reflected about the central vertical axis.

Appendix C.3. Colour Jitter (A02)

The brightness, contrast, saturation, and hue of the image are modified. The brightness change factor, contrast change factor, saturation change factor, and hue change factor are sampled from U(0.6, 1.4), U(0.6, 1.4), U(0.8, 1.2), and U(−0.1, 0.1), respectively.

Appendix C.4. Conversion to Grayscale (A03)

Images are converted to grayscale. The output images have three channels, such that each channel has the same pixel intensity.

Appendix C.5. Gaussian Blur (A04)

The image is denoised using a Gaussian blur with kernel size 13 and standard deviation sampled uniformly at random from U ( 0.1 , 2 ) . Note that in the original pipeline, the kernel size was set to 23 and 224 × 224 images were used. We used 128 × 128 images; as such, we selected a kernel size that covers a similar fraction of the image.

Appendix C.6. Solarization (A05)

All pixels with an intensity of 128 or greater are inverted. Note that the inputs are unsigned 8-bit images.

Appendix D. Ultrasound-Specific Transformations

In this section, we provide details on the set of transformations that comprise AugUS-O.
Several of the transformations operate on the pixels contained within the ultrasound field of view (FOV). As such, the geometrical form of the FOV was required to perform some transformations. We adopted the same naming convention for the vertices of the ultrasound FOV as Kim et al. [44]. Let p1, p2, p3, and p4 represent the respective locations of the top left, top right, bottom left, and bottom right vertices, and let (xi, yi) be the x- and y-coordinates of pi in image space. For convex FOV shapes, we denote the intersection of lines p1p3 and p2p4 as p0. Figure A1 depicts the arrangement of these points for each of the three main ultrasound FOV shapes: linear, curvilinear, and phased array. A software tool was used to estimate the FOV shape and probe type for all videos in each dataset (UltraMask, Deep Breathe Inc., Canada).
Figure A1. Locations of the named FOV vertices for each of the three main field of view shapes in ultrasound imaging.

Appendix D.1. Probe Type Change (B00)

To produce a transformed ultrasound image with a different FOV shape, a mapping that gives the location of pixels in the original image for each coordinate in the new image is calculated. Concretely, the function f : R² → [−1, 1]² returns the coordinates of the point in the original image that corresponds to a point in the transformed image. Note that (−1, −1) corresponds to the top left of the original image. Pixel intensities in the transformed image are interpolated according to their corresponding location in the original image.
Algorithm A1 details the calculation of f_c for converting linear FOVs to convex FOVs with a random radius factor ρ ∼ U(1, 2), along with new FOV vertices. Similarly, curvilinear and phased array FOV shapes are converted to linear FOV shapes. Algorithm A2 details the calculation of the mapping f_c that transforms convex FOV shapes to linear shapes, along with calculations for the updated named coordinates. To ensure that no aspects of the old FOV remain on the image, a bitmask M ∈ {0, 1}^(h×w) is produced using the new named coordinates.
Since the private dataset was resized to square images that exactly encapsulated the FOV, images were first resized to match their original aspect ratios to ensure that the sectors were circular. They were then resized back to their original dimensions following the transformation.
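Once a coordinate mapping has been computed, it can be applied by interpolation; the sketch below uses OpenCV's remap and assumes a hypothetical (h, w, 2) array of normalized source coordinates in [−1, 1], matching the definition of f above.

import cv2
import numpy as np

def apply_fov_mapping(image: np.ndarray, map_xy: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    # Convert normalized [-1, 1] source coordinates to pixel coordinates for cv2.remap.
    map_x = ((map_xy[..., 0] + 1.0) * 0.5 * (w - 1)).astype(np.float32)
    map_y = ((map_xy[..., 1] + 1.0) * 0.5 * (h - 1)).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT, borderValue=0)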
Algorithm A1 Compute a point mapping for linear to curvilinear FOV shape, along with new FOV vertices
Require: FOV vertices p1, p2, p3, p4; radius factor ρ; coordinate maps x = 1_(h×1)[0, 1, …, w−1] and y = [0, 1, …, h−1]^T 1_(1×w)
1: r_b ← ρ(y3 − y1) ▷ Bottom sector radius
2: x0 ← max(x3, 0) + (x4 − x3)/2
3: y0 ← y3 − r_b ▷ Lateral bounds will intersect at (x0, y0)
4: y1′ = y2′ ← y1
5: y3′ = y4′ ← y0 + √(r_b² − (x0 − x1)²)
6: x1′ ← x0 − (y1 − y0)(x0 − x3)/(y3 − y0)
7: x2′ ← x0 + (y1 − y0)(x0 − x3)/(y3 − y0)
8: r_t ← √((x0 − x1)² + (y1 − y0)²) ▷ Top sector radius
9: ϕ ← atan2(x − x0, y − y0) ▷ Angle with the central vertical
10: f_c ← [ (ϕ + (x − w/2)/w) / |ϕ[y3, 0]| ; 2(√((x0 − x)² + (y0 − y)²) − r_t)/(r_b − r_t) − 1 ] ▷ Final coordinate mapping
11: return f_c, p1′, p2′, p3′, p4′ ▷ Coordinate mapping, new FOV vertices
Algorithm A2 Compute a point mapping for convex to linear FOV shape, along with new FOV vertices
Require: FOV vertices p1, p2, p3, p4; point of intersection p0; angle of intersection θ0; width fraction ω ∈ [0, 1]; coordinate maps x = 1_(h×1)[0, 1, …, w−1] and y = [0, 1, …, h−1]^T 1_(1×w)
1: r_b ← √((x0 − x3)² + (y0 − y3)²) ▷ Bottom sector radius
2: x1′ = x3′ ← x0 − ωw/2
3: x2′ = x4′ ← x0 + ωw/2
4: y1′ = y2′ ← y1
5: y3′ = y4′ ← y0 + r_b
6: ϕ ← θ0((x − x3)/(x4 − x3) − 1/2) ▷ Angle with the central vertical
7: y_n ← (y − y1)/(y0 + r_b − y1) ▷ Normalized y-coordinates
8: if probe type is curvilinear then
9:     r_t ← √((x0 − x1)² + (y0 − y1)²) ▷ Curvilinear; top sector radius
10: else
11:     r_t ← (y1 − y0)/cos(ϕ/w) ▷ Phased array; distance to top bound
12: end if
13: f_c ← [ x0 + sin(ϕ/w)(r_t + y_n(r_b − r_t)) ; y0 + cos(ϕ/w)(r_t + y_n(r_b − r_t)) ]
14: return f_c, p1′, p2′, p3′, p4′ ▷ Coordinate mapping, new FOV vertices

Appendix D.2. Convexity Change (B01)

To mimic an alternative convex FOV shape with a different θ0, a mapping is calculated that results in a new FOV shape wherein p0 is translated vertically. A new value for the width of the top of the FOV is randomly calculated, facilitating the specification of a new p0. Given the new p0, a pixel map f_cc is computed according to Algorithm A3. Similar to the probe type change transformation, pixel intensities at each coordinate in the transformed image are interpolated according to the corresponding coordinate in the original image returned by f_cc.
Algorithm A3 Compute a point mapping from an original to a modified convex FOV shape
Require: FOV vertices p1, p2, p3, p4; point of intersection p0; angle of intersection θ0; new top width w′; coordinate maps x = 1_(h×1)[0, 1, …, w−1] and y = [0, 1, …, h−1]^T 1_(1×w)
1: s ← w′(x4 − x3)/(x2 − x1) ▷ Scale change for top bound
2: x1′ ← x0 − s(x0 − x1)
3: x2′ ← x0 + s(x2 − x0)
4: y1′ = y2′ ← y1
5: y3′ = y4′ ← y3
6: θ0′, p0′ ← angle and point of intersection of p1′p3′ and p2′p4′
7: r_b ← √((x0 − x3)² + (y0 − y3)²) ▷ Current bottom radius
8: r_b′ ← √((x0′ − x3)² + (y0′ − y3)²) ▷ New bottom radius
9: r_t ← √((x0 − x1)² + (y0 − y1)²) ▷ Current top radius
10: r_t′ ← √((x0′ − x1′)² + (y0′ − y1′)²) ▷ New top radius
11: ϕ ← atan2(x − x0′, y − y0′)
12: r ← √((x0′ − x)² + (y0′ − y)²)
13: r′ ← (r − r_t′)(r_b − r_t)/(r_b′ − r_t′) + r_t
14: f_cc(x, y) ← [ x0 + r′ sin(ϕθ0/θ0′) ; y0 + r′ cos(ϕθ0/θ0′) ]
15: return f_cc, p1′, p2′, p3′, p4′ ▷ Coordinate mapping, new FOV vertices

Appendix D.3. Wavelet Denoising (B02)

Following the soft thresholding method of Birgé and Massart [29], we apply a wavelet transform, conduct thresholding, and then apply the inverse wavelet transform. The mother wavelet is randomly chosen from a set, and the sparsity parameter α is sampled from a uniform distribution. We used J = 3 levels of wavelet decomposition and set the decomposition level J0 = 2. We designated the Daubechies wavelets {db2, db5} as the set of mother wavelets, which is a subset of those identified by Vilimek et al.’s assessment [31] as most suitable for denoising ultrasound images.

Appendix D.4. Contrast-Limited Adaptive Histogram Equalization (B03)

Contrast-limited adaptive histogram equalization is applied to the input image. The transformation enhances low-contrast regions of ultrasound images while avoiding excessive noise amplification. We found that CLAHE enhanced the visibility of artifacts of interest in LU images. The tiles are 8 × 8 regions of pixels. The clip limit is sampled from the uniform distribution U(30, 50).

Appendix D.5. Gamma Correction (B04)

The pixel intensities of the image are nonlinearly modified. Pixel intensity I is transformed as follows:
I ← 255 (I / 255)^γ,
where γ ∼ U(0.5, 1.75). The gain is fixed at 1.
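A minimal sketch of this transformation on an 8-bit image is given below; the random number generator argument is an illustrative detail.

import numpy as np

def random_gamma(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    gamma = rng.uniform(0.5, 1.75)                               # exponent drawn from U(0.5, 1.75)
    return (255.0 * (image / 255.0) ** gamma).astype(np.uint8)  # gain fixed at 1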

Appendix D.6. Brightness and Contrast Change (B05)

The brightness and contrast of the image are modified. The brightness and contrast change factors are each sampled from U(0.6, 1.4). The image is then multiplied element-wise by its FOV mask to ensure that regions external to the FOV remain black.
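The sketch below illustrates one plausible form of this transformation; the exact linear formulation and mask handling in the authors' implementation may differ.

import numpy as np

# `image` and `fov_mask` are assumed single-channel arrays of the same shape,
# with `fov_mask` containing 1 inside the FOV and 0 outside.
def brightness_contrast_fov(image: np.ndarray, fov_mask: np.ndarray,
                            rng: np.random.Generator) -> np.ndarray:
    brightness = rng.uniform(0.6, 1.4)
    contrast = rng.uniform(0.6, 1.4)
    out = image.astype(np.float32) * brightness   # scale intensities (brightness)
    mean = out.mean()
    out = (out - mean) * contrast + mean          # scale about the mean (contrast)
    out = np.clip(out, 0, 255).astype(np.uint8)
    return out * fov_mask                         # keep regions outside the FOV black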

Appendix D.7. Depth Change Simulation (B06)

The image is zoomed about a point that differs according to FOV type, simulating a change in imaging depth. The transformation preserves the centre for linear FOV shapes and preserves p0 for convex FOV shapes. The magnitude of the zoom transformation, d, is randomly sampled from a uniform distribution. Increasing the depth corresponds to zooming out (d > 1), while decreasing the depth corresponds to zooming in (d < 1).

Appendix D.8. Speckle Noise Simulation (B07)

Following Singh et al.’s method [32], this transformation calculates synthetic speckle noise and applies it to the ultrasound beam. Various parameters of the algorithm are randomly sampled upon each invocation. The lateral and axial resolutions for interpolation are random integers drawn from the ranges [ 35 , 45 ] and [ 75 , 85 ] , respectively. The number of synthetic phasors is randomly drawn from the integer range [ 5 , 10 ] . Sample points on the image are evenly spaced in Cartesian coordinates for linear beam shapes. For convex beams, the sample points are evenly spaced in polar coordinates.

Appendix D.9. Gaussian Noise Simulation (B08)

Multiplicative Gaussian noise is applied to the pixel intensities across the image. First, the standard deviation of the Gaussian noise, σ , is randomly drawn from the uniform distribution U ( 0.5 , 2.5 ) . Multiplicative Gaussian noise with mean 1 and standard deviation σ is then applied independently to each pixel in the image.

Appendix D.10. Salt and Pepper Noise Simulation (B09)

A random assortment of pixels in the image is set to 255 (salt) or 0 (pepper). The fractions of pixels set to salt and pepper values are each sampled randomly from U ( 0.001 , 0.005 ) .

Appendix D.11. Horizontal Reflection (B10)

The image is reflected about the central vertical axis. This transformation is identical to A01.

Appendix D.12. Rotation and Shift (B11)

A non-scaling affine transformation is applied to the image. More specifically, the image is translated and rotated. The horizontal component of the translation is sampled from U ( −0.2 , 0.2 ) , as a fraction of the image’s width. Similarly, the vertical component is sampled from U ( −0.2 , 0.2 ) , as a fraction of the image’s height. The rotation angle, in degrees, is sampled from U ( −22.5 , 22.5 ) .
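A sketch of this transformation with OpenCV is given below. The translation and rotation ranges match those stated above; taking the rotation about the image centre is an assumption of this illustration.

```python
import numpy as np
import cv2

def random_rotate_shift(img):
    """Translate by up to 20% of the width/height and rotate by up to 22.5 degrees,
    with no scaling. Rotation about the image centre is an assumption here."""
    h, w = img.shape[:2]
    tx = np.random.uniform(-0.2, 0.2) * w
    ty = np.random.uniform(-0.2, 0.2) * h
    angle = np.random.uniform(-22.5, 22.5)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)  # rotation only, scale = 1
    m[:, 2] += (tx, ty)                                      # append the translation
    return cv2.warpAffine(img, m, (w, h))
```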

Appendix E. Pleural Line Object Detection Training

The Single Shot Detector (SSD) method [37] was employed to perform a cursory evaluation of the pretrained models on an object detection task. The PL task involved localizing instances of the pleural line in LU images. The pleural line can be described as a bright horizontal line typically situated slightly below the vertical level of the ribs; it is only visible between the rib spaces, since bone blocks the ultrasound scan lines. The artifact represents the interface between the parietal and visceral pleura in a properly acquired LU view.
As in the classification experiments, we used the MobileNetV3Small architecture as the backbone of the network. There is precedent for using the SSD object detection method, as it has been applied to assess the object detection capabilities of MobileNet architectures [33,45]. The feature maps outputted from a designated set of layers were passed to the SSD regression and classification heads. We selected a range of layers whose feature maps had varying spatial resolution. Table A3 provides the identities and dimensions of the feature maps from the backbone that were fed to the SSD model head. The head contained 43,796 trainable parameters, which was light compared to the backbone (927,008).
The set of default anchor box aspect ratios was manually specified after examining the distribution of bounding box aspect ratios in the training set. The 25th percentile was 2.894 , and the 75th percentile was 4.989 . The pleural line typically has a much greater width than height. Accordingly, we designated the set of default anchor box aspect ratios ( w / h ) as { 1 , 2 , 3 , 4 , 5 } . Six anchor box scales were used. The first five were spaced out evenly over the range [ 0.023 , 0.170 ] , which corresponds to the square roots of the minimum and maximum areas of the bounding box labels present in the training set, in normalized image coordinates. The final scale is 1.000 , which is included by default. The box confidence threshold was 0.01 , and the intersection over union threshold to match anchors to box labels was 0.3 . The non-maximum suppression (NMS) threshold was 0.45 , and the number of detected boxes to keep after NMS was 50.
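The anchor configuration described above can be summarized numerically as follows. This is a restatement of the chosen hyperparameters for clarity, not the SSD implementation itself.

```python
import numpy as np

# Default anchor box aspect ratios (w / h), reflecting the wide, short pleural line.
aspect_ratios = [1, 2, 3, 4, 5]

# Five scales spaced evenly over the square roots of the minimum and maximum label
# areas (normalized image coordinates), plus the default final scale of 1.0.
scales = np.concatenate([np.linspace(0.023, 0.170, 5), [1.0]])
# scales is approximately [0.023, 0.060, 0.097, 0.133, 0.170, 1.0]

# Matching and inference thresholds used with the SSD head.
box_score_threshold = 0.01
matching_iou_threshold = 0.3
nms_iou_threshold = 0.45
max_detections_after_nms = 50
```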
The backbone and head were assigned initial learning rates of 0.002 and 0.02 , respectively. Learning rates were annealed according to a cosine decay schedule. The model was trained for 30 epochs to minimize the loss function from the original SSD paper, which has a regression component for bounding box offsets and a classification component for distinguishing objects and background [37]. The weights corresponding to the epoch with the lowest validation loss were retained for test set evaluation.
Table A3. MobileNetV3Small block indices and the corresponding dimensions of the feature maps that they output, given an input of size 128 × 128 × 3 .
| Block Index | Feature Map Dimensions (w × h × c) |
|---|---|
| 1 | 32 × 32 × 16 |
| 3 | 16 × 16 × 24 |
| 6 | 8 × 8 × 40 |
| 9 | 4 × 4 × 96 |
| 12 | 4 × 4 × 576 |

Appendix F. Leave-One-Out Analysis Statistical Testing

As outlined in Section 4.1, statistical testing was performed to detect differences between models pretrained using ablated versions of the StandardAug and AugUS-O pipelines and baseline models pretrained with the original pipelines. Each ablated pipeline was missing exactly one transformation. Ten-fold cross-validation conducted on the training set provided 10 samples of test AUC for both the A-line versus B-line (AB) and pleural effusion (PE) binary classification tasks. These samples served as a proxy for test-time performance of linear classifiers trained on each of the above downstream tasks.
To determine whether the mean test AUC for each ablated model was different from the baseline model, hypothesis testing was conducted. The model pretrained using the original pipeline was the control group, while the models pretrained using ablated versions of the pipeline were the experimental groups. First, Friedman’s test [38] was conducted to determine if there was any difference in the mean test AUC among the baseline and ablated models. We selected a nonparametric multiple comparisons test because of the lack of assumptions regarding normality or homogeneity of variances. Each collection had 10 samples. Table A4 details the results of Friedman’s test for each pipeline and classification task. Friedman’s test detected differences among the collection of test AUC for both classification tasks with the StandardAug pipeline. Only the AB task exhibited significant differences for the AugUS-O pipeline.
Table A4. Friedman test statistics and p-values for mean cross-validation test AUC attained by models pretrained using an entire data augmentation pipeline and ablated versions of it.
| Pipeline | AB: F_r Statistic | AB: p-Value | PE: F_r Statistic | PE: p-Value |
|---|---|---|---|---|
| StandardAug | 30.73 | 0.000 * | 36.51 | 0.000 * |
| AugUS-O | 76.43 | 0.000 * | 18.91 | 0.091 |
* Statistically significant at α = 0.05 .
When the null hypothesis of the Friedman test was rejected, post hoc tests were conducted to determine whether any of the test AUC means in the experimental groups differed significantly from the control group. The Wilcoxon Signed-Rank Test [39] was designated as the post hoc test because it makes no normality assumptions. Note that for each pipeline, n comparisons were performed, where n is the number of transformations within the pipeline. The Holm–Bonferroni correction [40] was applied to keep the family-wise error rate at α = 0.05 for each pipeline/task combination. Results of the post hoc tests are given in Table A5. No post hoc tests were performed for the AugUS-O pipeline on the PE task because the Friedman test revealed no significant differences.
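The overall testing procedure can be sketched with SciPy and statsmodels as below. The array shapes are assumptions for illustration (10 cross-validation folds per condition), and the helper name is hypothetical.

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon
from statsmodels.stats.multitest import multipletests

def leave_one_out_tests(baseline_auc, ablated_aucs, alpha=0.05):
    """baseline_auc: (10,) test AUCs from 10-fold CV for the full pipeline.
    ablated_aucs: dict mapping left-out transformation id -> (10,) test AUCs.
    Returns Holm-adjusted Wilcoxon p-values if the Friedman test is significant."""
    groups = [baseline_auc] + list(ablated_aucs.values())
    stat, p = friedmanchisquare(*groups)
    if p >= alpha:
        return stat, p, None                       # no post hoc testing required
    raw_p = [wilcoxon(baseline_auc, auc).pvalue for auc in ablated_aucs.values()]
    reject, adj_p, _, _ = multipletests(raw_p, alpha=alpha, method="holm")
    return stat, p, dict(zip(ablated_aucs.keys(), zip(adj_p, reject)))
```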
Table A5. Test statistics (T) and p-values obtained from the Wilcoxon Signed-Rank post hoc tests that compared linear classifiers trained with ablated models’ features to a control linear classifier trained on the baseline model. Experimental groups are identified according to the left-out transformation, as defined in Table 2 and Table 3.
| Pipeline | Comparison | AB: T | AB: p-Value | PE: T | PE: p-Value |
|---|---|---|---|---|---|
| StandardAug | A00 | 0 | 0.002 * | 0 | 0.002 * |
| | A01 | 6 | 0.027 | 21 | 0.557 |
| | A02 | 1 | 0.004 * | 3 | 0.010 * |
| | A03 | 19 | 0.432 | 10 | 0.084 |
| | A04 | 9 | 0.064 | 5 | 0.020 |
| | A05 | 15 | 0.232 | 10 | 0.084 |
| AugUS-O | B00 | 18 | 0.375 | - | - |
| | B01 | 8 | 0.049 | - | - |
| | B02 | 0 | 0.002 * | - | - |
| | B03 | 0 | 0.002 * | - | - |
| | B04 | 12 | 0.131 | - | - |
| | B05 | 9 | 0.064 | - | - |
| | B06 | 13 | 0.160 | - | - |
| | B07 | 13 | 0.160 | - | - |
| | B08 | 1 | 0.004 * | - | - |
| | B09 | 1 | 0.004 * | - | - |
| | B10 | 23 | 0.695 | - | - |
| | B11 | 0 | 0.002 * | - | - |
* Statistically significant at family-wise error rate of 0.05 .

Appendix G. Additional Positive Pair Examples

Figure A2, Figure A3 and Figure A4 provide several examples of positive pairs produced by the StandardAug, AugUS-O, and AugUS-D pipelines, respectively. Each figure shows original images from LUSData, along with two views of each image that were produced by applying stochastic data augmentation twice to the original images.
Figure A2. Examples of lung ultrasound images (left) and positive pairs produced using the StandardAug pipeline (right).
Figure A3. Examples of lung ultrasound images (left) and positive pairs produced using the AugUS-O pipeline (right).
Figure A4. Examples of lung ultrasound images (left) and positive pairs produced using the AugUS-D pipeline (right).

Appendix H. Results with ResNet18 Backbone

As outlined in Section 3.5, MobileNetV3Small [33] was selected as the feature extractor for all experiments in this study, primarily due to its suitability for lightweight inference in edge deployment scenarios. It has also been used in prior work for similar tasks [25,34]. However, we wanted to determine if similar trends in the results held for an alternate backbone architecture with greater capacity. Larger CNN backbones are frequently used for self-supervised pretraining.
We repeated the experiments in Section 4.2 and Section 4.3 using ResNet18 [42], a more common architecture, as the feature extractor. Note that ResNet18’s capacity is far greater than that of MobileNetV3Small: the former has approximately 11,200,000 trainable parameters, while the latter has about 927,000. We applied the same projector head architecture that was used for the MobileNetV3Small experiments, a 2-layer multilayer perceptron with 576 nodes in each layer.
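For clarity, a projector head of this shape can be expressed in PyTorch as follows. The input dimensionality is the backbone’s pooled feature size (e.g., 512 for ResNet18), and the choice of ReLU activation between the layers is an assumption of this sketch.

```python
import torch.nn as nn

def make_projector(in_dim: int, hidden_dim: int = 576) -> nn.Sequential:
    """Two-layer MLP projector with 576 nodes per layer (activation is assumed)."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, hidden_dim),
    )

# Example: projector for a ResNet18 backbone (512-dimensional pooled features).
projector = make_projector(in_dim=512)
```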
ResNet18 feature extractors were pretrained using virtual machines equipped with an Intel Silver 4216 Cascade Lake CPU at 2.1 GHz and an Nvidia Tesla V100 GPU with 12 GB of VRAM. Supervised learning was conducted using a virtual machine equipped with an Intel E5-2683 v4 Broadwell CPU at 2.1 GHz and an Nvidia Tesla P100 GPU.
Table A6 displays the performance of pretrained ResNet18 backbones when evaluated on the LUSData test set for the AB, PE, and COVID tasks. Most fine-tuned models exhibited overfitting, likely due to the architecture’s greater capacity: the SSL-pretrained models tended to overfit when fine-tuned, whereas the fully supervised ImageNet-pretrained model achieved strong performance on AB and PE. Linear classifiers trained on the frozen pretrained backbones’ output features generally achieved the strongest performance and avoided overfitting. Consistent with MobileNetV3Small, the use of cropping-based augmentation pipelines translated to greater performance on AB and PE in the linear evaluation setting. In the fine-tuning setting, the backbone pretrained using AugUS-O exhibited markedly poorer performance on AB but achieved the greatest test AUC on PE. On the COVID test set, the backbone pretrained using the AugUS-O pipeline led to the greatest average AUC in both the linear and fine-tuning settings.
Table A6. Test set performance for linear classification (LC) and fine-tuning (FT) experiments with the AB, PE, and COVID tasks using pretrained ResNet18 backbones. For COVID, metrics are averages across all four classes. The best observed metrics in each experimental setting are in boldface.
| Train Setting | Task | Weights | Pipeline | Accuracy | Precision | Recall | AUC |
|---|---|---|---|---|---|---|---|
| LC | AB | SimCLR | StandardAug | 0.932 | 0.929 | 0.845 | 0.968 |
| | | SimCLR | AugUS-O | 0.916 | 0.923 | 0.792 | 0.954 |
| | | SimCLR | AugUS-D | 0.931 | 0.951 | 0.819 | 0.966 |
| | | ImageNet | - | 0.878 | 0.838 | 0.749 | 0.933 |
| | PE | SimCLR | StandardAug | 0.781 | 0.772 | 0.777 | 0.879 |
| | | SimCLR | AugUS-O | 0.794 | 0.806 | 0.759 | 0.878 |
| | | SimCLR | AugUS-D | 0.779 | 0.739 | 0.840 | 0.873 |
| | | ImageNet | - | 0.732 | 0.693 | 0.799 | 0.823 |
| | COVID | SimCLR | StandardAug | 0.537 | 0.538 | 0.496 | 0.827 |
| | | SimCLR | AugUS-O | 0.567 | 0.438 | 0.515 | 0.782 |
| | | SimCLR | AugUS-D | 0.487 | 0.353 | 0.447 | 0.789 |
| | | ImageNet | - | 0.527 | 0.334 | 0.476 | 0.699 |
| FT | AB | SimCLR | StandardAug | 0.882 | 0.796 | 0.824 | 0.929 |
| | | SimCLR | AugUS-O | 0.375 | 0.328 | 0.990 | 0.867 |
| | | SimCLR | AugUS-D | 0.498 | 0.377 | 0.979 | 0.907 |
| | | Random | - | 0.919 | 0.898 | 0.830 | 0.960 |
| | | ImageNet | - | 0.892 | 0.778 | 0.906 | 0.961 |
| | PE | SimCLR | StandardAug | 0.692 | 0.643 | 0.817 | 0.795 |
| | | SimCLR | AugUS-O | 0.773 | 0.928 | 0.576 | 0.887 |
| | | SimCLR | AugUS-D | 0.654 | 0.601 | 0.852 | 0.755 |
| | | Random | - | 0.720 | 0.684 | 0.784 | 0.815 |
| | | ImageNet | - | 0.778 | 0.797 | 0.727 | 0.852 |
| | COVID | SimCLR | StandardAug | 0.418 | 0.451 | 0.447 | 0.719 |
| | | SimCLR | AugUS-O | 0.564 | 0.434 | 0.511 | 0.723 |
| | | SimCLR | AugUS-D | 0.431 | 0.346 | 0.376 | 0.631 |
| | | Random | - | 0.372 | 0.277 | 0.354 | 0.629 |
| | | ImageNet | - | 0.549 | 0.433 | 0.496 | 0.719 |
The fine-tuned backbones and linear classifiers were also evaluated on the external test set for the AB and PE tasks; Table A7 displays the performance of the pretrained ResNet18 backbones in this setting. Again, linear classifiers with pretrained backbones mostly achieved greater performance than fine-tuned classifiers. Backbones pretrained with the StandardAug pipeline achieved the greatest external test set AUC for the AB task. However, those pretrained with AugUS-O performed comparably with StandardAug for the PE task under the linear evaluation setting but achieved the greatest AUC in the fine-tuning setting.
Table A7. External test set performance for linear classifiers (LCs) and fine-tuned models (FTs). The best observed metrics in each experimental setting are in boldface.
| Train Setting | Task | Initial Weights | Pipeline | Accuracy | Precision | Recall | AUC |
|---|---|---|---|---|---|---|---|
| LC | AB | SimCLR | StandardAug | 0.762 | 0.956 | 0.581 | 0.877 |
| | | SimCLR | AugUS-O | 0.713 | 0.939 | 0.494 | 0.826 |
| | | SimCLR | AugUS-D | 0.739 | 0.945 | 0.542 | 0.862 |
| | | ImageNet | - | 0.659 | 0.904 | 0.402 | 0.785 |
| | PE | SimCLR | StandardAug | 0.797 | 0.884 | 0.784 | 0.887 |
| | | SimCLR | AugUS-O | 0.786 | 0.911 | 0.737 | 0.887 |
| | | SimCLR | AugUS-D | 0.772 | 0.818 | 0.826 | 0.851 |
| | | ImageNet | - | 0.701 | 0.790 | 0.723 | 0.776 |
| FT | AB | SimCLR | StandardAug | 0.767 | 0.951 | 0.594 | 0.880 |
| | | SimCLR | AugUS-O | 0.597 | 0.576 | 0.924 | 0.728 |
| | | SimCLR | AugUS-D | 0.697 | 0.666 | 0.867 | 0.816 |
| | | Random | - | 0.744 | 0.919 | 0.576 | 0.851 |
| | | ImageNet | - | 0.771 | 0.855 | 0.688 | 0.857 |
| | PE | SimCLR | StandardAug | 0.612 | 0.791 | 0.532 | 0.700 |
| | | SimCLR | AugUS-O | 0.678 | 0.972 | 0.509 | 0.897 |
| | | SimCLR | AugUS-D | 0.703 | 0.752 | 0.797 | 0.755 |
| | | Random | - | 0.743 | 0.845 | 0.731 | 0.809 |
| | | ImageNet | - | 0.764 | 0.897 | 0.712 | 0.854 |
Two key observations were drawn from these experiments. First, the low-capacity MobileNetV3Small backbones achieved similar performance to the high-capacity ResNet18 backbones for these LU tasks when their weights were frozen (i.e., linear classifiers). Second, with the higher-capacity backbone, linear classifiers trained on the features outputted by SSL-pretrained backbones often achieved greater performance than their fine-tuned counterparts, whereas the opposite trend was observed for backbones initialized with ImageNet-pretrained weights.

Appendix I. Label Efficiency Statistical Testing

For each of the AB and PE tasks, there were five experimental conditions: SimCLR pretraining with the StandardAug pipeline, SimCLR pretraining with the AugUS-O pipeline, SimCLR pretraining with the AugUS-D pipeline, ImageNet weight initializations, and random weight initialization. The population consisted of 20 subsets of the training set that were split randomly by patient. The same splits were used across all conditions, reflecting a repeated measures design. The experiment was repeated separately for the AB and the PE task.
The Friedman Test Statistic ( F_r ) was 75.44 for AB, with a p-value of 0. For PE, F_r was 45.44, and the p-value was also 0. As such, the null hypothesis was rejected in both cases, indicating the existence of differences in the mean test AUC across conditions. The Wilcoxon Signed-Rank Test was performed as a post hoc test between each pair of conditions. Table A8 and Table A9 provide all Wilcoxon Test Statistics, along with p-values and differences in the medians between conditions. The Bonferroni correction was applied to the p-values to keep the family-wise error rate at α = 0.05. Statistically significant comparisons are indicated.
Table A8. Test statistics (T) and p-values obtained from the Wilcoxon Signed-Rank post hoc tests comparing LUSData test AUC on AB for classifiers trained on subsets of the training set. For comparison a / b , Δ : = median ( b ) median ( a ) . The displayed p-values have been adjusted according to the Bonferroni correction to control the family-wise error rate.
| Comparison | T | p-Value | Δ |
|---|---|---|---|
| Random/ImageNet | 3 | 9.4 × 10⁻⁵ * | 0.058 |
| Random/StandardAug | 0 | 1.9 × 10⁻⁵ * | 0.044 |
| Random/AugUS-O | 12 | 1.3 × 10⁻³ * | 0.019 |
| Random/AugUS-D | 0 | 1.9 × 10⁻⁵ * | 0.042 |
| ImageNet/StandardAug | 0 | 1.9 × 10⁻⁵ * | 0.100 |
| ImageNet/AugUS-O | 0 | 1.9 × 10⁻⁵ * | 0.078 |
| ImageNet/AugUS-D | 0 | 1.9 × 10⁻⁵ * | 0.100 |
| StandardAug/AugUS-O | 0 | 1.9 × 10⁻⁵ * | 0.024 |
| StandardAug/AugUS-D | 61 | 1.0 | 0.002 |
| AugUS-O/AugUS-D | 0 | 1.9 × 10⁻⁵ * | 0.022 |
* Statistically significant at family-wise error rate of 0.05 .
Table A9. Test statistics (T) and p-values obtained from the Wilcoxon Signed-Rank post hoc tests comparing LUSData test AUC on PE for classifiers trained on subsets of the training set. For comparison a / b , Δ : = median ( b ) median ( a ) . The displayed p-values have been adjusted according to the Bonferroni correction to control the family-wise error rate.
| Comparison | T | p-Value | Δ |
|---|---|---|---|
| Random/ImageNet | 1 | 3.8 × 10⁻⁵ * | 0.127 |
| Random/StandardAug | 31 | 4.2 × 10⁻² * | 0.025 |
| Random/AugUS-O | 16 | 3.2 × 10⁻³ * | 0.030 |
| Random/AugUS-D | 5 | 1.9 × 10⁻⁴ * | 0.036 |
| ImageNet/StandardAug | 1 | 3.8 × 10⁻⁵ * | 0.152 |
| ImageNet/AugUS-O | 0 | 1.9 × 10⁻⁵ * | 0.157 |
| ImageNet/AugUS-D | 1 | 3.8 × 10⁻⁵ * | 0.163 |
| StandardAug/AugUS-O | 57 | 7.6 × 10⁻¹ | 0.005 |
| StandardAug/AugUS-D | 53 | 5.3 × 10⁻¹ | 0.011 |
| AugUS-O/AugUS-D | 78 | 1 | 0.006 |
* Statistically significant at family-wise error rate of 0.05 .

Appendix J. Additional Random Crop and Resize Experiments

The C&R transform encourages pretrained representations to be invariant to scale. It is also believed that the C&R transform instills invariance between global and local views, or between disjoint views of the same object type [14]. While the minimum area of the crop determines the magnitude of the scaling transformations, the aspect ratio range dictates the difference in distortion between the two axes of the image. The default aspect ratio range is [ 0.75 , 1.33 ] . We pretrained with the AugUS-D pipeline using a fixed aspect ratio of 1 and c = 0.08 , which resulted in a test AUC of 0.971 for AB and 0.881 for PE in the LC training setting. Compared to the regular AugUS-D pipeline, which uses the default aspect ratio range (Table 5), AB test AUC remained unchanged, and PE test AUC decreased by 0.005 .
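The fixed-aspect-ratio variant of the C&R transform can be configured as in the following torchvision sketch; the output size of 128 × 128 matches the input resolution used elsewhere in this study.

```python
from torchvision import transforms

# Default C&R: aspect ratio sampled from [0.75, 1.33], minimum crop area c = 0.08.
default_cr = transforms.RandomResizedCrop(size=128, scale=(0.08, 1.0), ratio=(0.75, 1.33))

# Fixed-aspect-ratio variant used in this experiment: ratio pinned to 1.
fixed_ratio_cr = transforms.RandomResizedCrop(size=128, scale=(0.08, 1.0), ratio=(1.0, 1.0))
```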
Lastly, we conducted pretraining on LUSData using only the C&R transformation; that is, the data augmentation pipeline was [A00]. Recent work by Moutakanni et al. [46] suggests that, with sufficient quantities of training data, competitive performance in downstream computer vision tasks can be achieved using crop and resize as the sole transformation in joint embedding SSL. Linear evaluation of a feature extractor pretrained with only C&R yielded test AUC of 0.964 and 0.874 on AB and PE, respectively. Compared to the linear evaluations presented in Section 4, these metrics are greater than those achieved using AugUS-O, but less than those achieved with StandardAug or AugUS-D. It is evident that C&R is a powerful transformation for detecting local objects.

References

  1. Wang, Y.; Ge, X.; Ma, H.; Qi, S.; Zhang, G.; Yao, Y. Deep learning in medical ultrasound image analysis: A review. IEEE Access 2021, 9, 54310–54324. [Google Scholar] [CrossRef]
  2. Yang, Q.; Wei, J.; Hao, X.; Kong, D.; Yu, X.; Jiang, T.; Xi, J.; Cai, W.; Luo, Y.; Jing, X.; et al. Improving B-mode ultrasound diagnostic performance for focal liver lesions using deep learning: A multicentre study. EBioMedicine 2020, 56, 102777. [Google Scholar] [CrossRef]
  3. Ghorbani, A.; Ouyang, D.; Abid, A.; He, B.; Chen, J.H.; Harrington, R.A.; Liang, D.H.; Ashley, E.A.; Zou, J.Y. Deep learning interpretation of echocardiograms. npj Digit. Med. 2020, 3, 10. [Google Scholar] [CrossRef]
  4. VanBerlo, B.; Wu, D.; Li, B.; Rahman, M.A.; Hogg, G.; VanBerlo, B.; Tschirhart, J.; Ford, A.; Ho, J.; McCauley, J.; et al. Accurate assessment of the lung sliding artefact on lung ultrasonography using a deep learning approach. Comput. Biol. Med. 2022, 148, 105953. [Google Scholar] [CrossRef]
  5. Kim, J.; Maranna, S.; Watson, C.; Parange, N. A scoping review on the integration of artificial intelligence in point-of-care ultrasound: Current clinical applications. Am. J. Emerg. Med. 2025, 92, 172–181. [Google Scholar] [CrossRef]
  6. Liu, S.; Wang, Y.; Yang, X.; Lei, B.; Liu, L.; Li, S.X.; Ni, D.; Wang, T. Deep learning in medical ultrasound analysis: A review. Engineering 2019, 5, 261–275. [Google Scholar] [CrossRef]
  7. Ansari, M.Y.; Mangalote, I.A.C.; Meher, P.K.; Aboumarzouk, O.; Al-Ansari, A.; Halabi, O.; Dakua, S.P. Advancements in Deep Learning for B-Mode Ultrasound Segmentation: A Comprehensive Review. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2126–2149. [Google Scholar] [CrossRef]
  8. VanBerlo, B.; Hoey, J.; Wong, A. A survey of the impact of self-supervised pretraining for diagnostic tasks in medical X-ray, CT, MRI, and ultrasound. BMC Med. Imaging 2024, 24, 79. [Google Scholar] [CrossRef]
  9. Balestriero, R.; LeCun, Y. Contrastive and non-contrastive self-supervised learning recover global and local spectral embedding methods. Adv. Neural Inf. Process. Syst. 2022, 35, 26671–26685. [Google Scholar]
  10. Azizi, S.; Mustafa, B.; Ryan, F.; Beaver, Z.; Freyberg, J.; Deaton, J.; Loh, A.; Karthikesalingam, A.; Kornblith, S.; Chen, T.; et al. Big self-supervised models advance medical image classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 3478–3488. [Google Scholar]
  11. Zhao, Q.; Liu, Z.; Adeli, E.; Pohl, K.M. Longitudinal self-supervised learning. Med. Image Anal. 2021, 71, 102051. [Google Scholar] [CrossRef]
  12. Basu, S.; Singla, S.; Gupta, M.; Rana, P.; Gupta, P.; Arora, C. Unsupervised contrastive learning of image representations from ultrasound videos with hard negative mining. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 18–22 September 2022; pp. 423–433. [Google Scholar]
  13. Cabannes, V.; Kiani, B.; Balestriero, R.; LeCun, Y.; Bietti, A. The SSL interplay: Augmentations, inductive bias, and generalization. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 3252–3298. [Google Scholar]
  14. Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 1597–1607. [Google Scholar]
  15. Grill, J.B.; Strub, F.; Altché, F.; Tallec, C.; Richemond, P.; Buchatskaya, E.; Doersch, C.; Avila Pires, B.; Guo, Z.; Gheshlaghi Azar, M.; et al. Bootstrap your own latent-a new approach to self-supervised learning. Adv. Neural Inf. Process. Syst. 2020, 33, 21271–21284. [Google Scholar]
  16. Zbontar, J.; Jing, L.; Misra, I.; LeCun, Y.; Deny, S. Barlow Twins: Self-supervised Learning via Redundancy Reduction. In Proceedings of the International Conference on Machine Learning, Virtual, 18–24 July 2021; pp. 12310–12320. [Google Scholar]
  17. Bardes, A.; Ponce, J.; LeCun, Y. VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning. In Proceedings of the International Conference on Learning Representations, Virtual, 25–29 April 2022. [Google Scholar]
  18. Fernandez-Quilez, A.; Eftestøl, T.; Kjosavik, S.R.; Goodwin, M.; Oppedal, K. Contrasting axial T2W mri for prostate cancer triage: A self-supervised learning approach. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–5. [Google Scholar]
  19. Anand, D.; Annangi, P.; Sudhakar, P. Benchmarking Self-Supervised Representation Learning from a million Cardiac Ultrasound images. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Glasgow, UK, 11–15 July 2022; pp. 529–532. [Google Scholar]
  20. Saeed, M.; Muhtaseb, R.; Yaqub, M. Contrastive Pretraining for Echocardiography Segmentation with Limited Data. In Medical Image Understanding and Analysis, Proceedings of the 26th Annual Conference, MIUA 2022, Cambridge, UK, 27–29 July 2022; Springer International Publishing: Cham, Switzerland, 2022; pp. 680–691. ISBN 9783031120527. [Google Scholar]
  21. Nguyen, N.Q.; Le, T.S. A semi-supervised learning method to remedy the lack of labeled data. In Proceedings of the 2021 15th International Conference on Advanced Computing and Applications (ACOMP), Ho Chi Minh City, Vietnam, 24–26 November 2021; pp. 78–84. [Google Scholar]
  22. Chen, Y.; Zhang, C.; Liu, L.; Feng, C.; Dong, C.; Luo, Y.; Wan, X. USCL: Pretraining Deep Ultrasound Image Diagnosis Model Through Video Contrastive Representation Learning. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, France, 27 September–1 October 2021; Proceedings, Part VIII 24. Springer: Berlin/Heidelberg, Germany, 2021; pp. 627–637. [Google Scholar]
  23. Chen, Y.; Zhang, C.; Ding, C.H.; Liu, L. Generating and weighting semantically consistent sample pairs for ultrasound contrastive learning. IEEE Trans. Med. Imaging 2022, 42, 1388–1400. [Google Scholar] [CrossRef] [PubMed]
  24. Zhang, C.; Chen, Y.; Liu, L.; Liu, Q.; Zhou, X. HiCo: Hierarchical Contrastive Learning for Ultrasound Video Model Pretraining. In Proceedings of the Asian Conference on Computer Vision, Macao, China, 4–8 December 2022; pp. 229–246. [Google Scholar]
  25. VanBerlo, B.; Wong, A.; Hoey, J.; Arntfield, R. Intra-video positive pairs in self-supervised learning for ultrasound. Front. Imaging 2024, 3, 1416114. [Google Scholar] [CrossRef]
  26. Chen, L.; Rubin, J.; Ouyang, J.; Balaraju, N.; Patil, S.; Mehanian, C.; Kulhare, S.; Millin, R.; Gregory, K.W.; Gregory, C.R.; et al. Contrastive self-supervised learning for spatio-temporal analysis of lung ultrasound videos. In Proceedings of the 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI), Cartagena, Colombia, 18–21 April 2023; pp. 1–5. [Google Scholar]
  27. Ebadi, A.; Xi, P.; MacLean, A.; Florea, A.; Tremblay, S.; Kohli, S.; Wong, A. COVIDx-US: An open-access benchmark dataset of ultrasound imaging data for AI-driven COVID-19 analytics. Front. Biosci. 2022, 27, 198. [Google Scholar] [CrossRef] [PubMed]
  28. Zeng, E.Z.; Ebadi, A.; Florea, A.; Wong, A. COVID-Net L2C-ULTRA: An Explainable Linear-Convex Ultrasound Augmentation Learning Framework to Improve COVID-19 Assessment and Monitoring. Sensors 2024, 24, 1664. [Google Scholar] [CrossRef]
  29. Birgé, L.; Massart, P. From Model Selection to Adaptive Estimation. In Festschrift for Lucien Le Cam: Research Papers in Probability and Statistics; Pollard, D., Torgersen, E., Yang, G.L., Eds.; Springer: New York, NY, USA, 1997; pp. 55–87. [Google Scholar] [CrossRef]
  30. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vision, Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  31. Vilimek, D.; Kubicek, J.; Golian, M.; Jaros, R.; Kahankova, R.; Hanzlikova, P.; Barvik, D.; Krestanova, A.; Penhaker, M.; Cerny, M.; et al. Comparative analysis of wavelet transform filtering systems for noise reduction in ultrasound images. PLoS ONE 2022, 17, e0270745. [Google Scholar] [CrossRef]
  32. Singh, P.; Mukundan, R.; de Ryke, R. Synthetic models of ultrasound image formation for speckle noise simulation and analysis. In Proceedings of the 2017 International Conference on Signals and Systems (ICSigSys), Bali, Indonesia, 16–18 May 2017; pp. 278–284. [Google Scholar]
  33. Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1314–1324. [Google Scholar]
  34. VanBerlo, B.; Li, B.; Hoey, J.; Wong, A. Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks. IEEE Access 2023, 11, 135696–135707. [Google Scholar] [CrossRef]
  35. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
  36. You, Y.; Li, J.; Reddi, S.; Hseu, J.; Kumar, S.; Bhojanapalli, S.; Song, X.; Demmel, J.; Keutzer, K.; Hsieh, C.J. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv 2019, arXiv:1904.00962. [Google Scholar]
  37. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Proceedings, Part I 14. Springer: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  38. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  39. Wilcoxon, F. Individual Comparisons by Ranking Methods. Biom. Bull. 1945, 1, 80–83. [Google Scholar] [CrossRef]
  40. Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 1979, 6, 65–70. [Google Scholar]
  41. van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  42. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  43. Blazic, I.; Cogliati, C.; Flor, N.; Frija, G.; Kawooya, M.; Umbrello, M.; Ali, S.; Baranne, M.L.; Cho, Y.J.; Pitcher, R.; et al. The use of lung ultrasound in COVID-19. ERJ Open Res. 2023, 9. [Google Scholar] [CrossRef]
  44. Kim, K.; Macruz, F.; Wu, D.; Bridge, C.; McKinney, S.; Al Saud, A.A.; Sharaf, E.; Pely, A.; Danset, P.; Duffy, T.; et al. Point-of-care AI-assisted stepwise ultrasound pneumothorax diagnosis. Phys. Med. Biol. 2023, 68, 205013. [Google Scholar] [CrossRef] [PubMed]
  45. Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; Chen, L.C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4510–4520. [Google Scholar]
  46. Moutakanni, T.; Oquab, M.; Szafraniec, M.; Vakalopoulou, M.; Bojanowski, P. You Don’t Need Domain-Specific Data Augmentations When Scaling Self-Supervised Learning. Adv. Neural Inf. Process. Syst. 2024, 37, 116106–116125. [Google Scholar]
Figure 1. Examples of the preprocessing and data augmentation methods in this study. (a) Original images are from ultrasound exams. (b) Semantics-preserving preprocessing is applied to crop out areas external to the field of view. (c) The StandardAug pipeline is a commonly employed data augmentation pipeline in self-supervised learning. (d) The AugUS-O pipeline was designed to preserve semantic content in ultrasound images. (e) AugUS-D is a hybrid pipeline whose construction was informed by empirical investigations into the StandardAug and AugUS-O pipelines.
Figure 2. Raw ultrasound images are preprocessed by performing an element-wise multiplication (⊗) of the raw image with a binary mask that preserves only the field of view, then cropped according to the bounds of the field of view.
Figure 3. Examples of ultrasound-specific data augmentation transformations applied to the same ultrasound image.
Figure 4. Examples of positive pairs produced using each of the (a) StandardAug, (b) AugUS-O, and (c) AugUS-D data augmentation pipelines.
Figure 5. Two-dimensional t-distributed Stochastic Neighbour Embeddings (t-SNEs) for test set feature vectors produced by SimCLR-pretrained backbones, for all tasks and data augmentation pipelines.
Figure 6. Distribution of test AUC for classifiers trained on disjoint subsets of 5 % of the patients in the training partition of LUSData for (a) the AB task and (b) the PE task.
Figure 7. Examples of how the random crop and resize transformation (A00) can reduce semantic information. Original images are on the left, and two random crops of the image are on the right. Top: The original image contains a B-line (purple), which is visible in View 2 but not in View 1. The original image also contains instances of the pleural line (yellow) which are visible in View 1 but not in View 2. Bottom: The original image contains a pleural effusion (green), which is visible in View 1 but largely obscured in View 2.
Figure 8. Test set AUC for linear classifiers trained on the representations outputted by pretrained backbones, for (a) the AB task and (b) the PE task. Each backbone was pretrained using AugUS-D with different values for the minimum crop area, c. Results are provided for models pretrained with the original ultrasound FOV, along with images transformed to linear field of view (FOV) only. The dashed line indicates the default value of c = 0.08 .
Table 1. Breakdown of the unlabelled, training, validation, and test sets in the private dataset. For each split, we indicate the number of distinct patients, videos, and images. x / y indicates the number of labelled videos in the negative and positive class for each binary classification task. For the PL task, we indicate the number of videos with frame-level bounding box annotations.
| | Local: Unlabelled | Local: Train | Local: Validation | Local: Test | External: Test |
|---|---|---|---|---|---|
| Patients | 5571 | 1702 | 364 | 364 | 168 |
| Videos | 59,309 | 5679 | 1184 | 1249 | 925 |
| Images | 1.3 × 10⁷ | 1.2 × 10⁶ | 2.5 × 10⁵ | 2.6 × 10⁵ | 1.1 × 10⁵ |
| AB labels | N/A | 2067/999 | 459/178 | 458/221 | 286/327 |
| PE labels | N/A | 789/762 | 176/142 | 162/158 | 68/110 |
| PL labels | N/A | 200 | 39 | 45 | 0 |
Table 2. The sequence of transformations in the StandardAug data augmentation pipeline.
| Identifier | Probability | Transformation | Time [ms] |
|---|---|---|---|
| A00 | 1.0 | Crop and resize | 0.29 |
| A01 | 0.5 | Horizontal reflection | 0.08 |
| A02 | 0.8 | Colour jitter | 2.40 |
| A03 | 0.2 | Conversion to grayscale | 0.19 |
| A04 | 0.5 | Gaussian blur | 0.74 |
| A05 | 0.1 | Solarization | 0.15 |
Table 3. The sequence of transformations in the ultrasound-specific augmentation pipeline.
| Identifier | Probability | Transformation | Time [ms] |
|---|---|---|---|
| B00 | 0.3 | Probe type change | 2.25 |
| B01 | 0.75 | Convexity change | 1.92 |
| B02 | 0.5 | Wavelet denoising | 5.00 |
| B03 | 0.2 | CLAHE † | 4.64 |
| B04 | 0.5 | Gamma correction | 0.52 |
| B05 | 0.5 | Brightness and contrast change | 0.49 |
| B06 | 0.5 | Depth change simulation | 1.76 |
| B07 | 0.333 | Speckle noise simulation | 3.69 |
| B08 | 0.333 | Gaussian noise | 0.28 |
| B09 | 0.1 | Salt and pepper noise | 0.18 |
| B10 | 0.5 | Horizontal reflection | 0.19 |
| B11 | 0.5 | Rotation and shift | 1.42 |

† Contrast-limited adaptive histogram equalization.
Table 4. A comparison of ablated versions of the StandardAug and AugUS-O pipeline, with one excluded transformation versus the original pipelines. Models were pretrained on the LUSData unlabelled set and evaluated on two downstream classification tasks—AB and PE. Performance is expressed as mean and median test area under the receiver operating characteristic curve (AUC) from 10-fold cross-validation achieved by a linear classifier trained on the feature vectors of a frozen backbone.
| Pipeline | Omitted | AB: Mean (std) | AB: Median | PE: Mean (std) | PE: Median |
|---|---|---|---|---|---|
| StandardAug | None | 0.978 (0.007) | 0.978 | 0.852 (0.040) | 0.845 |
| | A00 | 0.864 (0.022) | 0.873 † | 0.695 (0.050) | 0.707 † |
| | A01 | 0.976 (0.006) | 0.974 | 0.848 (0.046) | 0.856 |
| | A02 | 0.975 (0.007) | 0.975 † | 0.840 (0.046) | 0.842 † |
| | A03 | 0.978 (0.007) | 0.978 | 0.849 (0.044) | 0.846 |
| | A04 | 0.976 (0.007) | 0.975 | 0.840 (0.041) | 0.842 |
| | A05 | 0.977 (0.007) | 0.977 | 0.851 (0.041) | 0.853 |
| AugUS-O | None | 0.956 (0.013) | 0.959 | 0.828 (0.030) | 0.837 |
| | B00 | 0.958 (0.011) | 0.957 | 0.831 (0.034) | 0.839 |
| | B01 | 0.952 (0.016) | 0.952 | 0.835 (0.027) | 0.838 |
| | B02 | 0.965 (0.011) | 0.967 § | 0.840 (0.032) | 0.851 |
| | B03 | 0.950 (0.011) | 0.951 † | 0.825 (0.028) | 0.827 |
| | B04 | 0.957 (0.013) | 0.958 | 0.831 (0.034) | 0.836 |
| | B05 | 0.953 (0.014) | 0.952 | 0.839 (0.024) | 0.845 |
| | B06 | 0.961 (0.009) | 0.959 | 0.829 (0.037) | 0.833 |
| | B07 | 0.959 (0.012) | 0.960 | 0.838 (0.035) | 0.856 |
| | B08 | 0.961 (0.013) | 0.966 § | 0.834 (0.027) | 0.849 |
| | B09 | 0.962 (0.012) | 0.967 § | 0.838 (0.030) | 0.845 |
| | B10 | 0.956 (0.011) | 0.959 | 0.826 (0.035) | 0.838 |
| | B11 | 0.937 (0.020) | 0.939 † | 0.825 (0.028) | 0.823 |
Median is significantly less than baseline, where no transformations were omitted. § Median is significantly greater than baseline, where no transformations were omitted.
Table 5. Test set performance for linear classification (LC) and fine-tuning (FT) experiments with the AB and PE tasks. Binary metrics are averages across classes. The best observed metrics in each experimental setting are in boldface.
| Train Setting | Task | Initial Weights | Pipeline | Accuracy | Precision | Recall | AUC |
|---|---|---|---|---|---|---|---|
| LC | AB | SimCLR | StandardAug | 0.932 | 0.951 | 0.819 | 0.970 |
| | | SimCLR | AugUS-O | 0.910 | 0.939 | 0.756 | 0.953 |
| | | SimCLR | AugUS-D | 0.931 | 0.947 | 0.820 | 0.971 |
| | | ImageNet | - | 0.898 | 0.894 | 0.758 | 0.949 |
| | PE | SimCLR | StandardAug | 0.782 | 0.769 | 0.787 | 0.881 |
| | | SimCLR | AugUS-O | 0.795 | 0.796 | 0.776 | 0.853 |
| | | SimCLR | AugUS-D | 0.800 | 0.798 | 0.782 | 0.886 |
| | | ImageNet | - | 0.779 | 0.756 | 0.804 | 0.864 |
| FT | AB | SimCLR | StandardAug | 0.941 | 0.951 | 0.850 | 0.970 |
| | | SimCLR | AugUS-O | 0.939 | 0.938 | 0.859 | 0.968 |
| | | SimCLR | AugUS-D | 0.931 | 0.960 | 0.809 | 0.962 |
| | | Random | - | 0.883 | 0.794 | 0.834 | 0.938 |
| | | ImageNet | - | 0.911 | 0.872 | 0.830 | 0.953 |
| | PE | SimCLR | StandardAug | 0.766 | 0.713 | 0.863 | 0.882 |
| | | SimCLR | AugUS-O | 0.487 | 0.479 | 0.685 | 0.557 |
| | | SimCLR | AugUS-D | 0.802 | 0.782 | 0.818 | 0.884 |
| | | Random | - | 0.703 | 0.733 | 0.607 | 0.767 |
| | | ImageNet | - | 0.708 | 0.640 | 0.907 | 0.845 |
Table 6. External test set metrics for linear classification (LC) and fine-tuning (FT) experiments with the AB and PE tasks. Binary metrics are averages across classes. The best observed metrics in each experimental setting are in boldface.
| Train Setting | Task | Initial Weights | Pipeline | Accuracy | Precision | Recall | AUC |
|---|---|---|---|---|---|---|---|
| LC | AB | SimCLR | StandardAug | 0.749 | 0.956 | 0.555 | 0.868 |
| | | SimCLR | AugUS-O | 0.689 | 0.927 | 0.453 | 0.810 |
| | | SimCLR | AugUS-D | 0.726 | 0.922 | 0.531 | 0.859 |
| | | ImageNet | - | 0.643 | 0.872 | 0.389 | 0.770 |
| | PE | SimCLR | StandardAug | 0.794 | 0.835 | 0.843 | 0.880 |
| | | SimCLR | AugUS-O | 0.784 | 0.916 | 0.728 | 0.870 |
| | | SimCLR | AugUS-D | 0.806 | 0.877 | 0.809 | 0.887 |
| | | ImageNet | - | 0.758 | 0.836 | 0.771 | 0.840 |
| FT | AB | SimCLR | StandardAug | 0.751 | 0.961 | 0.556 | 0.883 |
| | | SimCLR | AugUS-O | 0.734 | 0.960 | 0.523 | 0.850 |
| | | SimCLR | AugUS-D | 0.712 | 0.934 | 0.500 | 0.854 |
| | | Random | - | 0.748 | 0.885 | 0.606 | 0.853 |
| | | ImageNet | - | 0.718 | 0.903 | 0.527 | 0.814 |
| | PE | SimCLR | StandardAug | 0.805 | 0.838 | 0.861 | 0.898 |
| | | SimCLR | AugUS-O | 0.572 | 0.649 | 0.717 | 0.536 |
| | | SimCLR | AugUS-D | 0.800 | 0.904 | 0.768 | 0.879 |
| | | Random | - | 0.700 | 0.850 | 0.643 | 0.804 |
| | | ImageNet | - | 0.776 | 0.775 | 0.914 | 0.840 |
Table 7. Test set performance for linear classification (LC) and fine-tuning (FT) experiments with the COVID task. Binary metrics are averages across classes. The best observed metrics in each experimental setting are in boldface.
| Train Setting | Pretraining Dataset | Initial Weights | Pipeline | Accuracy | Precision | Recall | AUC |
|---|---|---|---|---|---|---|---|
| LC | LUSData | SimCLR | StandardAug | 0.454 | 0.371 | 0.413 | 0.784 |
| | | SimCLR | AugUS-O | 0.560 | 0.431 | 0.513 | 0.836 |
| | | SimCLR | AugUS-D | 0.487 | 0.348 | 0.447 | 0.713 |
| | COVIDx-US | SimCLR | StandardAug | 0.498 | 0.582 | 0.501 | 0.781 |
| | | SimCLR | AugUS-O | 0.557 | 0.506 | 0.543 | 0.820 |
| | | SimCLR | AugUS-D | 0.540 | 0.400 | 0.491 | 0.760 |
| | - | ImageNet | - | 0.503 | 0.304 | 0.451 | 0.699 |
| FT | LUSData | SimCLR | StandardAug | 0.381 | 0.259 | 0.365 | 0.753 |
| | | SimCLR | AugUS-O | 0.557 | 0.428 | 0.509 | 0.836 |
| | | SimCLR | AugUS-D | 0.465 | 0.321 | 0.430 | 0.744 |
| | COVIDx-US | SimCLR | StandardAug | 0.450 | 0.540 | 0.464 | 0.770 |
| | | SimCLR | AugUS-O | 0.517 | 0.483 | 0.510 | 0.814 |
| | | SimCLR | AugUS-D | 0.526 | 0.384 | 0.479 | 0.672 |
| | - | Random | - | 0.423 | 0.327 | 0.401 | 0.534 |
| | - | ImageNet | - | 0.502 | 0.305 | 0.457 | 0.698 |
Table 8. LUSData local test set AP@50 for the PL task observed for SSD models whose backbones were pretrained using different data augmentation pipelines. Boldface values indicate top performance.
| Backbone | Initial Weights | Pipeline | AP@50 |
|---|---|---|---|
| Frozen | SimCLR | StandardAug | 0.228 |
| | SimCLR | AugUS-O | 0.255 |
| | SimCLR | AugUS-D | 0.194 |
| | Random | - | 0.041 |
| | ImageNet | - | 0.127 |
| Trainable | SimCLR | StandardAug | 0.316 |
| | SimCLR | AugUS-O | 0.332 |
| | SimCLR | AugUS-D | 0.351 |
| | Random | - | 0.308 |
| | ImageNet | - | 0.310 |
Table 9. Test set AUC for SimCLR-pretrained models with (✓) and without (✗) semantics-preserving preprocessing. Results are reported for linear classifiers and fine-tuned models.
| Task | Pipeline | Linear Classifier (✗) | Linear Classifier (✓) | Fine-Tuned (✗) | Fine-Tuned (✓) |
|---|---|---|---|---|---|
| AB | StandardAug | 0.971 | 0.970 | 0.971 | 0.970 |
| | AugUS-O | 0.950 | 0.953 | 0.926 | 0.968 |
| | AugUS-D | 0.971 | 0.971 | 0.961 | 0.962 |
| PE | StandardAug | 0.873 | 0.893 | 0.869 | 0.882 |
| | AugUS-O | 0.846 | 0.865 | 0.522 | 0.557 |
| | AugUS-D | 0.867 | 0.897 | 0.864 | 0.884 |
| COVID | StandardAug | 0.742 | 0.784 | 0.724 | 0.753 |
| | AugUS-O | 0.793 | 0.836 | 0.805 | 0.836 |
| | AugUS-D | 0.585 | 0.713 | 0.737 | 0.744 |