Article

Deep Learning for Motion Artifact-Suppressed OCTA Image Generation from Both Repeated and Adjacent OCT Scans

1 Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528000, China
2 Innovation and Entrepreneurship Teams Project of Guangdong Provincial Pearl River Talents Program, Guangdong Weiren Meditech Co., Ltd., Foshan 528015, China
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(3), 446; https://doi.org/10.3390/math12030446
Submission received: 16 December 2023 / Revised: 18 January 2024 / Accepted: 29 January 2024 / Published: 30 January 2024

Abstract

Optical coherence tomography angiography (OCTA) is a popular technique for imaging microvascular networks, but OCTA image quality is commonly affected by motion artifacts. Deep learning (DL) has been used to generate OCTA images from structural OCT images, yet limitations persist, such as low label image quality caused by motion and insufficient use of neighborhood information. In this study, an attention-based U-Net incorporating both repeated and adjacent structural OCT images in network input and high-quality label OCTA images in training was proposed to generate high-quality OCTA images with motion artifact suppression. A sliding-window correlation-based adjacent position (SWCB-AP) image fusion method was proposed to generate high-quality OCTA label images with suppressed motion noise. Six different DL schemes with various configurations of network inputs and label images were compared to demonstrate the superiority of the proposed method. Motion artifact severity was evaluated by a motion noise index in B-scan (MNI-B) and in en-face (MNI-C) OCTA images, which were specifically defined in this study for the purpose of evaluating various DL models’ capability in motion noise suppression. Experimental results on a nailfold OCTA image dataset showed that the proposed DL method generated the best results with a peak signal-to-noise ratio (PSNR) of 32.666 ± 7.010 dB, structural similarity (SSIM) of 0.926 ± 0.051, mean absolute error (MAE) of 1.798 ± 1.575, and MNI-B of 0.528 ± 0.124 in B-scan OCTA images and a contrast-to-noise ratio (CNR) of 1.420 ± 0.291 and MNI-C of 0.156 ± 0.057 in en-face OCTA images. Our proposed DL approach generated OCTA images with improved blood flow contrast and reduced motion artifacts, which could be used as a fundamental signal processing module in generating high-quality OCTA images from structural OCT images.

1. Introduction

Optical coherence tomography (OCT) is a non-invasive depth-resolved imaging technique that uses low-coherence light to acquire 2D and 3D images with micrometer-level resolution from optical scattering media, exploring the structural features of biological tissues [1,2]. Optical coherence tomography angiography (OCTA) is a functional imaging technique based on OCT, which can rapidly generate high-resolution microvascular network images of tested tissues [3]. OCT and OCTA have been applied to disease detection and diagnosis in various fields such as ophthalmology [4], dermatology [5], neurology [6], and others [7]. OCTA can be used to perform qualitative and quantitative analysis in clinical applications, yielding critical physical or physiological indicators such as vascular density and vessel diameter, which reflect the functional status of tissues [8,9].
The fundamental imaging principle of OCTA is to detect the contrast of signal changes caused by the movement of particles, primarily red blood cells, which allows for the visualization of the microvascular network in biological tissues [10]. The specific imaging process involves scanning a single slow-axis position multiple times and processing the OCT signals among repeated B-scans to distinguish between signals of static tissues and flowing blood [11,12]. However, OCTA exhibits certain technical drawbacks, such as noise and artifacts, which hinder the extraction of the true vascular network and accurate quantification of vasculature information [13]. High-quality OCTA images are of great significance for the precise assessment of clinical diseases and for biomedical research. Improvement in OCTA image quality can be achieved by hardware- or software-based approaches. Hardware-based methods, such as eye-tracking technology, normally rely on additional sensors installed in the imaging system to detect abnormalities in scanning conditions and guide OCTA rescanning. However, this approach can significantly increase hardware costs and system complexity [14], so many researchers resort to software-based approaches instead, such as increasing the number of repeated B-scans or averaging repeated en-face scans; these approaches, however, increase acquisition time and data size [15,16]. Considering that normal OCTA data acquisition already takes several seconds and a longer acquisition would induce extra motion artifacts, simply increasing the data acquisition time is not favorable in clinical situations. Developing an optimal signal processing method that generates high-quality OCTA images from data acquired in the normal mode therefore deserves more research effort.
In recent years, rapid advancements in deep learning (DL) computer vision techniques, particularly the popular convolutional neural networks (CNNs), have found widespread applications in medical image analysis, including classification, segmentation, denoising, and super-resolution. DL has also gradually found applications in the fields of OCT, resulting in significant breakthroughs in various tasks such as retinopathy classification and retinal layer segmentation [17]. In order to enhance the quality of OCTA images, several studies have also used DL to directly process OCTA images. OCTA image enhancement can be achieved through the supervised learning of training a DL model using low-quality OCTA images as input and corresponding high-quality OCTA images as the target. A high-quality label OCTA image could be obtained using different approaches. In studies [18,19,20], a high-quality label en-face OCTA image was obtained using multiple volume-scan averaging. In a study by Liao et al. [21], a low-quality input en-face OCTA image was obtained from a small number of repeated B-scan OCT images at each slow-axis position, while a high-quality label en-face OCTA image was reconstructed from a large number of repeated B-scan OCT images at each slow-axis position. These methods significantly improve en-face OCTA image quality without the need for repeated volume data acquisitions or a large number of repeated B-scan OCT images at each slow-axis position. DL could also be used to enhance the quality of low-resolution OCTA images obtained from low-density spatial sampling with a similar supervised learning scheme using high-resolution OCTA images as the target. Gao et al. [22] proposed a DL network to reconstruct high-resolution OCTA images from low-density sampled OCTA images of the superficial vascular complex. Subsequently, they designed networks for reconstructing high-resolution OCTA images of the intermediate capillary plexus and deep vascular plexus [23]. Using this type of supervised super-resolution DL learning method, OCTA images were reconstructed with enhanced resolution, improved vascular connectivity, and reduced noise. Kim et al. [24] proposed an integrated DL framework with a two-stage adversarial training scheme to reconstruct both high-density sampled and high-quality OCTA images from their corresponding under-sampled and low-quality counterparts. The abovementioned approaches commonly relied on paired high-quality/resolution and low-quality/resolution images from the same subjects for DL model training, which was not easy to obtain in some practical situations. Zhang et al. [25] introduced a frequency-aware inverse-consistent generative adversarial network (GAN) to enhance OCTA image resolution without relying on paired low-resolution and high-resolution images. By leveraging frequency transformation, their model successfully refined high-frequency information while preserving low-frequency information, resulting in the reconstruction of high-quality OCTA images. Considering the essence of a traditional OCTA flow contrast computation algorithm is to produce OCTA images with blood flow information calculated from repeated-scan structural OCT images, deep learning can therefore be directly used for this purpose, transforming the task of OCTA image generation into an image-to-image translation process. DL has been extensively used in natural image translation tasks. Isola et al. [26] proposed conditional GAN (cGAN) to facilitate image translation. 
Their approach utilizes the input image as a condition to generate the corresponding output image. Similar endeavors have become common in the medical imaging field, including generating polarization-sensitive OCT images from conventional OCT images [27,28], translating between multimodal magnetic resonance (MR) images [29], and converting MR images into computed tomography (CT) images [30].
Using deep learning techniques to generate OCTA images is a promising approach to improving the quality of OCTA images. Lee et al. [31] first used a symmetric autoencoder network to produce a retinal OCTA image from a single B-scan OCT image. This study demonstrated the feasibility of using DL to generate OCTA images from single-scan structural OCT images, though this method was not designed for the purpose of OCTA image quality enhancement. Using a similar scheme, Zhang et al. [32] further proposed a texture-guided U-Net architecture with a customized loss function based on Euclidean distance and successfully enhanced the quality of the generated OCTA images. Li et al. [33] proposed a GAN-based method combining neighborhood OCT image information for the purpose of OCTA image enhancement. They utilized three OCT images at adjacent positions and their corresponding three-label OCTA images to explore neighborhood information for better OCTA image generation. However, this method only utilized a single OCT image at each position as the input and showed limited capability in extracting capillary details. Liu et al. [34,35] trained DL models with high-quality label OCTA images generated from 48 consecutive B-scans at each slow-axis position and used a small number of repeated OCT images at a single position as the input. Due to a higher signal-to-noise ratio of the target image, the quality of the generated OCTA images was significantly improved. Furthermore, their studies compared the results obtained from two to four repetitions of scans as input, and they found that a larger number of repetitions used as input resulted in higher-quality OCTA images. Jiang et al. [36] used repeated-scan OCT images from adjacent positions as input with a label image acquired at a single position. A pseudo-3D architecture was constructed to further integrate input information and improve the accuracy of OCTA reconstruction. This approach led to an enhancement in the quality of OCTA images compared with the approach using input from a single position. The performance of the OCTA image generation algorithm largely depends on the quality of the label images in the training. Recently, there have also been some studies that addressed the problem of imperfect labels in training. Jiang et al. [37] addressed this issue by using a weakly supervised training strategy. Weak label OCTA images were generated from only two or four repeated OCT images of B-scans at a slow-axis position using the traditional flow contrast computation method. Another two or four repeated OCT images scanned at the same slow-axis position but independent of those used for the weak label OCTA image generation were placed as the input for a DL model, which was trained with the Noise2Noise strategy. This method was found to produce better image quality compared with supervised learning using the weak label OCTA images directly as true labels in training. Li et al. [38] proposed a novel translation framework to generate 3D OCTA volumes from 3D OCT volumes using a 3D DL model. The framework primarily consists of a 3D GAN image translation baseline, featuring a 2D segmentation network referred to as the Vessel Promoted Guidance (VPG) module and a 2D GAN named the Heuristic Contextual Guidance (HCG) module. The VPG played a crucial role in enhancing the quality of blood flow within the generated OCTA images, while the HCG module contributed to improving the overall image translation completeness and enhancing blood flow continuity.
A motion artifact is a common type of artifact in OCTA imaging, typically appearing as significantly increased bright-pixel noise in B-scan OCTA images and as white lines in en-face OCTA images [13]. Although quite a number of deep learning approaches have been proposed for generating OCTA images from structural OCT images, as described in the abovementioned literature, the problem of motion artifacts does not appear to have been addressed in these studies, and it is the focus of this study. To handle motion noise, we proposed using both repeated and adjacent OCT images as the input and utilizing high-quality label OCTA images fused from adjacent OCTA images for the training of deep learning models. We also proposed two dedicated evaluation metrics, a motion noise index (MNI) in B-scan OCTA images (MNI-B) and an MNI in en-face OCTA images (MNI-C), to assess the effect of motion noise suppression and demonstrate the superior performance of our proposed approach. Details of this study are described as follows.

2. Materials and Methods

Figure 1 illustrates the overall flowchart of the method proposed in this paper, which uses a deep learning model to generate high-quality OCTA blood flow images from structural OCT images. We proposed using repeated and adjacent structural OCT images as the input and explored the advantages of high-quality label OCTA images in training to improve the performance of OCTA image generation, particularly its effect on motion artifact suppression. A comparison of methods with respect to other configurations of network input and label image used for training was conducted using an OCTA dataset obtained from a nailfold OCTA imaging experiment. Various evaluation metrics including those specifically proposed in this study were utilized to demonstrate the superior performance of the proposed method, which is described in detail as follows.

2.1. OCTA Data Preparation

The OCTA data used in this study were acquired by a custom-built linear-k spectral domain OCT (SD-OCT) system for imaging the nailfold microvasculature [39]. Figure 2a shows a schematic of the SD-OCT system. The system mainly included a superluminescent diode light source (IPSDS1307C-1311, Inphenix Inc., Livermore, CA, USA) with a central wavelength of 1290 nm and a 3 dB bandwidth of 80 nm (1250–1330 nm), a 50:50 fiber coupler (TW1300R5A2, Thorlabs, Inc., Newton, NJ, USA), a linear-wavenumber spectrometer (PSLKS1300-001-GL, Pharostek, Rochester, MN, USA), and a high-speed line scan camera (GL2048L-10A-ENC-STD-210, Sensors Unlimited Inc., Princeton, NJ, USA) running at an A-line acquisition rate of 76 kHz. The OCTA data acquisition mode repeated four B-scans at each slow-axis position, which could be used to extract the blood flow signal in conventional OCTA imaging. A single OCTA volume dataset covering an approximate area of 1 mm × 1 mm with a scan density of 400 × 400 was acquired from the central region of the right ring finger nailfold area of healthy subjects in approximately ten seconds. Sixteen subjects were recruited, and two volume datasets were collected from each subject by the same operators using the same imaging protocols. One volume dataset was not used because of poor image quality; finally, thirty-one volume datasets were used in this study. The schematic diagram of the OCTA scanning protocols is shown in Figure 2b.

2.2. OCTA Data Preprocessing

2.2.1. Raw Label OCTA Image Reconstruction

For the linear-k SD-OCT system, an inverse Fourier transform was performed on the spectral data, yielding the structural OCT signal. The blood flow signal was extracted from each A-line in the repeated four B-scans at each position using an eigendecomposition (ED)-based clutter filtering technique [40]. Final B-scan OCT/OCTA images were obtained as 400 × 400 grayscale images. Two-dimensional en-face images were constructed for each OCTA volume dataset using the maximum intensity projection method, enabling a comprehensive observation of the entire blood flow image. Figure 3 shows some typical nailfold B-scan OCT/OCTA images and a typical en-face OCTA image. Motion artifacts were quite commonly seen in the raw label OCTA images, appearing as ubiquitous white noisy points in B-scan images and white lines in en-face images (Figure 3c,d). Low-quality label OCTA images with obvious motion artifacts were expected to affect the training of OCTA image generation; therefore, a high-quality label OCTA image preparation process was developed, as presented in the following subsection.
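As a minimal illustration of this projection step, the following sketch computes an en-face image by maximum intensity projection along depth; the (slow axis, depth, fast axis) array layout is an assumption made for illustration, not the system's actual data format.

```python
# Maximum intensity projection sketch; assumes an OCTA volume stored as a
# NumPy array of shape (slow axis, depth, fast axis) -- an illustrative layout.
import numpy as np

def enface_mip(octa_volume: np.ndarray) -> np.ndarray:
    # Collapse the depth axis, keeping the brightest flow signal per A-line,
    # which yields a 400 x 400 en-face map for a 400 x 400 scan density.
    return octa_volume.max(axis=1)
```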

2.2.2. Label OCTA Image Preparation and Image Fusion Experiment

In order to train a deep learning network that can extract blood flow signals and suppress motion artifacts, it is necessary to prepare high-quality label OCTA images. In this study, these high-quality label OCTA images were prepared by fusing adjacent-position OCTA images. A sliding-window correlation-based adjacent-position (SWCB-AP) image fusion method was proposed for this purpose. For each B-scan position, three consecutive B-scan OCTA images were used for the fusion, considering that in most cases the motion artifact took place in a single B-scan. The image fusion process is shown in Figure 4, and the main steps of the fusion process are described as follows.
SWCB-AP was conducted with a sliding-window method. The window size and the step size were set as 15 × 15 pixels and 5 pixels, respectively. In each window, a correlation coefficient between each pair of images in the three consecutive images was calculated, denoted as C_ij:

$$C_{ij} = \frac{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)\left(B_{mn}-\bar{B}\right)}{\sqrt{\sum_{m}\sum_{n}\left(A_{mn}-\bar{A}\right)^{2}\,\sum_{m}\sum_{n}\left(B_{mn}-\bar{B}\right)^{2}}} \tag{1}$$

where A_mn and B_mn are the pixel intensities at position (m, n) of the two window images being compared, \bar{A} and \bar{B} are the mean values of the window images, and i, j = 1, 2, 3 index the window images A (Window i) and B (Window j). A fused window image was then constructed by averaging all window images except the ones with motion artifacts using the following fusion rule:
$$\mathrm{Win} = \begin{cases} 0.3\,\mathrm{Win}_1 + 0.4\,\mathrm{Win}_2 + 0.3\,\mathrm{Win}_3, & \text{all } C_{ij} \ge T,\; i, j = 1, 2, 3 \\ 0.5\,\mathrm{Win}_1 + 0.5\,\mathrm{Win}_2, & C_{12} \ge T,\; C_{23} < T,\; C_{13} < T \\ 0.5\,\mathrm{Win}_2 + 0.5\,\mathrm{Win}_3, & C_{12} < T,\; C_{23} \ge T,\; C_{13} < T \\ 0.5\,\mathrm{Win}_1 + 0.5\,\mathrm{Win}_3, & C_{12} < T,\; C_{23} < T,\; C_{13} \ge T \\ 0, & \text{else} \end{cases} \tag{2}$$

where Win represents the fused window image, and Win_1, Win_2, and Win_3 are three windows at the same depth and fast-axis positions in the three consecutive images, with Win_2 as the center window. T represents a threshold value of C_ij, ranging from 0 to 1, for the detection of motion frames. Different values of T were experimented with to perform the image fusion operation, and metrics were utilized to assess the OCTA image fusion results to determine the optimal T used in this study. Finally, all the fused window images were added and averaged in the overlapping areas to obtain a final full OCTA image of size 400 × 400.
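A simplified sketch of this fusion rule is given below, assuming the three consecutive B-scan OCTA frames are NumPy arrays; overlap averaging is handled with a weight accumulator, and edge windows that do not align with the step size are ignored for brevity, so this is an illustration of Equations (1) and (2) rather than the exact implementation.

```python
# SWCB-AP fusion sketch for three consecutive B-scan OCTA frames (f1, f2, f3).
# Window/step sizes and weights follow Equation (2); edge handling is simplified.
import numpy as np

def window_corr(a: np.ndarray, b: np.ndarray) -> float:
    # Pearson correlation of two window images, per Equation (1).
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def swcb_ap_fuse(f1, f2, f3, win=15, step=5, t=0.35):
    fused = np.zeros_like(f2, dtype=np.float64)
    weight = np.zeros_like(f2, dtype=np.float64)
    h, w = f2.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            w1, w2, w3 = (f[y:y + win, x:x + win] for f in (f1, f2, f3))
            c12, c23, c13 = (window_corr(*p) for p in ((w1, w2), (w2, w3), (w1, w3)))
            if min(c12, c23, c13) >= t:
                out = 0.3 * w1 + 0.4 * w2 + 0.3 * w3       # no motion detected
            elif c12 >= t and c23 < t and c13 < t:
                out = 0.5 * (w1 + w2)                      # frame 3 has motion
            elif c23 >= t and c12 < t and c13 < t:
                out = 0.5 * (w2 + w3)                      # frame 1 has motion
            elif c13 >= t and c12 < t and c23 < t:
                out = 0.5 * (w1 + w3)                      # frame 2 has motion
            else:
                out = 0.0                                  # zero out noisy window
            fused[y:y + win, x:x + win] += out
            weight[y:y + win, x:x + win] += 1.0
    # Average the overlapping window contributions.
    return fused / np.maximum(weight, 1.0)
```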
To verify the effectiveness of this method in improving the quality of OCTA images, fusion experiments were conducted using all 31 OCTA volume datasets. The SWCB-AP image fusion method was applied to the OCTA images obtained with the ED algorithm (SWCB-AP-ED). Raw OCTA images were also obtained using other flow contrast computation methods, including speckle variance (SV) [41] and intensity differentiation (ID) [42]. Two other fusion methods were implemented for comparison with the SWCB-AP-ED method. The first was the weighted average fusion of the three types of full-scale OCTA images (SV, ID, and ED) at a single position, called WA-SP-SV/ID/ED. The second was the weighted average fusion of adjacent-position full-scale OCTA images obtained with the ED algorithm (WA-AP-ED). The weight coefficients were set as 0.3, 0.4, and 0.3 in both weighted average fusion methods. The image quality of the raw and fused OCTA images was qualitatively assessed and quantitatively compared using three evaluation metrics (MNI-B, CNR, and MNI-C), as described in Section 2.4.

2.3. OCTA Image Generation Using Deep Learning

2.3.1. Deep Learning Network Architecture

An attention-based U-Net was utilized in this study as the primary deep learning network framework to perform the OCTA image generation task. It was used to extract and analyze the features of structural OCT images and learn the mapping relationship between the dynamic changes in the tissue structural signal in OCT images and the blood flow signal in OCTA images. U-Net [43] is mainly composed of multi-level feature extraction and concatenation with symmetrical skip connections, enabling accurate localization and context information sensing. The network architecture is mainly composed of blocks of convolutional layers, batch normalization (BN) layers, rectified linear unit (ReLU) layers, skip connections, max pooling layers, upsampling layers, and a final 1 × 1 convolutional layer. To further enhance network performance, this paper integrated the Convolutional Block Attention Module (CBAM) [44], a channel-spatial mixed attention module, within the backbone U-Net. Figure 5 illustrates the basic network architecture used in this paper, where the CBAM module is added before the first upsampling layer of the U-Net.
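For concreteness, a minimal PyTorch sketch of a CBAM block is shown below; the reduction ratio and kernel size are common defaults from the original CBAM paper [44] and are assumptions here, not necessarily the configuration used in this study.

```python
# Minimal CBAM sketch: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx = x.amax(dim=1, keepdim=True)     # channel-wise max map
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied in sequence."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```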

2.3.2. Loss Function

The L1 (MAE) loss function, which calculates the average absolute error between the predicted and the true values, can preserve the edge and detailed information of an image without over-smoothing. The structural similarity (SSIM) [45], based on human visual perception of image quality, can quantify the visual similarity between two images and better preserve the contrast and high-frequency details of an image. An L1-SSIM loss function [46], which combines the L1 and SSIM loss functions, was used in this paper as the optimization objective for the OCT-to-OCTA image translation task. Definitions of these loss functions are given as follows:
$$\mathrm{MAE} = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left| I_{ij} - J_{ij} \right| \tag{3}$$

$$\mathrm{SSIM} = \frac{\left(2\mu_I \mu_J + C_1\right)\left(2\sigma_{IJ} + C_2\right)}{\left(\mu_I^2 + \mu_J^2 + C_1\right)\left(\sigma_I^2 + \sigma_J^2 + C_2\right)} \tag{4}$$

$$\mathrm{Loss} = \alpha\left(1 - \mathrm{SSIM}\right) + \left(1 - \alpha\right)\mathrm{MAE} \tag{5}$$
where, in Equation (3), I and J represent the image generated by the network and the reference image, respectively, (i, j) represents the pixel position, and H and W represent the height and width of the image. In Equation (4), μ_I and μ_J represent the mean pixel intensities of the two images I and J, σ_I and σ_J represent their standard deviations, σ_IJ represents their covariance, and C_1 and C_2 are constants, with default values of 0.01 and 0.03 used in this study. In Equation (5), α is a weighting constant empirically set as 0.84, which was adopted from [46] for this study.
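A minimal sketch of this combined loss is given below, assuming images normalized to [0, 1]; for brevity, SSIM is computed here from global image statistics rather than the local windows commonly used in practice.

```python
# L1-SSIM loss sketch per Equations (3)-(5); global-statistics SSIM for brevity.
import torch

def l1_ssim_loss(pred: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.84, c1: float = 0.01, c2: float = 0.03):
    mae = (pred - target).abs().mean()                        # Equation (3)
    mu_i, mu_j = pred.mean(), target.mean()
    var_i, var_j = pred.var(), target.var()
    cov = ((pred - mu_i) * (target - mu_j)).mean()
    ssim = ((2 * mu_i * mu_j + c1) * (2 * cov + c2)) / (
        (mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2))  # Equation (4)
    return alpha * (1 - ssim) + (1 - alpha) * mae             # Equation (5)
```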

2.3.3. Models and Training Schemes

In this study, we proposed using both repeated and adjacent OCT scans in the network input and incorporating enhanced high-quality OCTA label images in the model training for the task of structural OCT image to OCTA image conversion with better suppression of motion artifacts. In order to demonstrate the superior performance of our proposed approach, six different experimental schemes were tested in this study according to the arrangements of the network input (structural OCT images) and the label image (OCTA images) used in training. The network input was divided into two types based on whether a single scan (SS) or repeated scans (RS) were used for the slow-axis position and whether OCT images of a single position (SP) or multiple adjacent positions (MP) were used. The label image was divided into two types based on whether fused (F) or non-fused (NF) OCTA images were used, representing high-quality and normal label images utilized in training, respectively. Based on these definitions, six schemes were tested, which were simply named SS-SP-NF, SS-MP-NF, RS-SP-NF, RS-MP-NF, RS-SP-F, and RS-MP-F, respectively. Specifically, “SS-SP-NF” means the input with single-scan OCT images acquired at a single position and training with non-fused conventional OCTA images, and the other five scheme names can be interpreted similarly. Figure 6 shows a schematic diagram of these training schemes with respect to the network input and the label OCTA image used in training.
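To make the naming concrete, the six schemes can be summarized as input/label configurations, as in the sketch below; the channel counts (1, 3, 4, or 12 input OCT images stacked as channels) follow the scan protocol of four repeats and three adjacent positions, and the exact tensor layout is an assumption.

```python
# Illustrative encoding of the six DL schemes; "repeats" and "positions"
# determine how many structural OCT images are stacked as input channels,
# and "fused_label" selects SWCB-AP-fused vs. non-fused label OCTA images.
SCHEMES = {
    "SS-SP-NF": dict(repeats=1, positions=1, fused_label=False),  # 1 input image
    "SS-MP-NF": dict(repeats=1, positions=3, fused_label=False),  # 3 input images
    "RS-SP-NF": dict(repeats=4, positions=1, fused_label=False),  # 4 input images
    "RS-MP-NF": dict(repeats=4, positions=3, fused_label=False),  # 12 input images
    "RS-SP-F":  dict(repeats=4, positions=1, fused_label=True),   # 4 input images
    "RS-MP-F":  dict(repeats=4, positions=3, fused_label=True),   # 12 input images (proposed)
}
```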

2.3.4. Implementation

An approximate six-fold cross-validation strategy was used to test the performance of the different OCTA generation schemes using the 31 acquired OCTA volume datasets. In each fold, 5 datasets were used for testing, and the remaining 26 datasets were used for training. Therefore, a total of 30 datasets were used for result evaluation. Data augmentation was performed on the training datasets by horizontally flipping each image, doubling the number of training datasets to 52. Each dataset contained 400 different slow-axis positions; thus, in total, there were 20,800 pairs of B-scan OCT and OCTA images (52 datasets × 400 slow-axis positions) in the training set.
All experiments were conducted using a computer equipped with 2 Intel E5-2678V3 CPUs, 64 GB of DRAM, and 4 NVIDIA GTX1080 8 GB GPUs, running an Ubuntu operating system with Python 3.6 and the PyTorch 1.10.2 deep learning framework installed. The number of training epochs was set as 40, and the batch size was set as 32 for all models. The Adam [47] optimizer, a stochastic optimization algorithm based on adaptive gradient estimation, was used to train the model parameters. The initial learning rate was set to 0.0001 for all models, and a cosine annealing learning rate decay strategy [48] was used, which smoothly reduced the learning rate during training and avoided premature convergence and local optima. The testing datasets were also used as validation datasets to prevent overfitting in DL model training, and all models were saved at the point of minimum validation loss for generating OCTA images in the testing datasets. Due to limitations in memory and computing resources, each input B-scan image was resized to 256 × 256 in training and then rescaled to its original size of 400 × 400 in testing using bilinear interpolation. The resizing process may slightly affect image generation performance, but since it was applied identically to all comparison schemes, fairness was maintained in the comparisons among the different DL schemes.
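A minimal training-loop sketch under these stated settings is shown below; the model, data loader, and l1_ssim_loss names tie together the sketches above and are assumptions, not the authors' actual code.

```python
# Training-loop sketch: Adam at lr 1e-4 with cosine annealing, 40 epochs.
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

def train(model, train_loader, epochs: int = 40, device: str = "cuda"):
    model = model.to(device)
    optimizer = Adam(model.parameters(), lr=1e-4)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs)
    for epoch in range(epochs):
        for oct_input, octa_label in train_loader:   # (B, C, 256, 256) tensors
            oct_input = oct_input.to(device)
            octa_label = octa_label.to(device)
            optimizer.zero_grad()
            loss = l1_ssim_loss(model(oct_input), octa_label)
            loss.backward()
            optimizer.step()
        scheduler.step()                             # decay once per epoch
```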

2.4. Evaluation Metrics

To quantitatively evaluate the quality of the generated B-scan OCTA images, this study used three reference-based evaluation metrics: the peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and mean absolute error (MAE). These metrics were calculated with reference to a label image Y and a predicted image Z. PSNR is defined as:
$$\mathrm{PSNR} = 10 \log_{10}\!\left( \frac{\mathrm{Max}(Y)^2}{\mathrm{MSE}(Y, Z)} \right) \tag{6}$$

where Max(Y) represents the maximum intensity of the label image Y and MSE(Y, Z) is the mean squared error between Y and Z, which is defined as follows:

$$\mathrm{MSE}(Y, Z) = \frac{1}{HW}\sum_{i=1}^{H}\sum_{j=1}^{W}\left( Y_{ij} - Z_{ij} \right)^2 \tag{7}$$

where (i, j) indicates the pixel position index and (H, W) are the height and width of the image. The definitions of MAE and SSIM have already been given in Equations (3) and (4), respectively.
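As a small worked example, the reference-based metrics of Equations (6) and (7) can be computed as follows for grayscale images stored as NumPy arrays:

```python
# PSNR/MSE sketch for grayscale images stored as NumPy arrays.
import numpy as np

def psnr(label: np.ndarray, pred: np.ndarray) -> float:
    label = label.astype(np.float64)
    pred = pred.astype(np.float64)
    mse = np.mean((label - pred) ** 2)               # Equation (7)
    return 10.0 * np.log10(label.max() ** 2 / mse)   # Equation (6)
```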
We also proposed a new no-reference metric, called the motion noise index (MNI) for B-scan (MNI-B), to measure the level of flow intensity compared to the background motion noise across the fast-axis direction in a B-scan OCTA image. MNI-B is calculated as follows: First, the gray level of each column of the B-scan image is averaged to obtain a one-dimensional intensity vector across the fast-axis direction. Subsequently, this intensity vector is divided into multiple consecutive segments based on its peaks and valleys, with each segment containing only one peak and two valleys. Finally, the MNI of each segment is calculated and averaged to obtain an overall mean MNI-B for each B-scan OCTA image, which is defined as follows:
$$\mathrm{MNI\text{-}B} = \frac{1}{N}\sum_{n=0}^{N-1} \frac{v_n + v_{n+1}}{2\,p_{n+1}} \tag{8}$$

where N is the total number of valley–peak–valley segments, n is a segment index, p_{n+1} represents the peak intensity of segment n + 1, and (v_n, v_{n+1}) represent the left and right valley intensities of segment n + 1. A schematic diagram of this calculation process is shown in Figure 7. MNI-B lies in the range of 0–1, with a higher value indicating a higher motion noise level. Typical OCTA images with low and high levels of motion noise, which are reflected by low and high values of MNI-B, are shown in Figure 8.
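A sketch of this computation is given below; SciPy's find_peaks is used as a stand-in peak/valley detector, since the exact detection procedure is not specified here, and peaks without two flanking valleys are skipped.

```python
# MNI-B sketch per Equation (8), for a 2D B-scan OCTA image (depth x fast axis).
import numpy as np
from scipy.signal import find_peaks

def mni_b(bscan: np.ndarray) -> float:
    profile = bscan.mean(axis=0)                 # average each column (fast axis)
    peaks, _ = find_peaks(profile)
    valleys, _ = find_peaks(-profile)
    ratios = []
    for p in peaks:
        left = valleys[valleys < p]
        right = valleys[valleys > p]
        if left.size and right.size:             # one peak flanked by two valleys
            v_l, v_r = profile[left[-1]], profile[right[0]]
            ratios.append((v_l + v_r) / (2 * profile[p]))
    return float(np.mean(ratios)) if ratios else 0.0
```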
To evaluate the quality of the generated en-face OCTA images, two no-reference metrics were used: one was the contrast-to-noise ratio (CNR) and the other was the motion noise index in the en-face mode (MNI-C), which was proposed in this study, similar to MNI-B. Among them, CNR is a metric that measures the contrast between the vasculature and non-vasculature regions in the image, where a higher CNR value indicates better image quality. CNR is calculated as follows:
$$\mathrm{CNR} = \frac{\mu_s - \mu_b}{\sqrt{\sigma_s^2 + \sigma_b^2}} \tag{9}$$

where μ_s and μ_b denote the mean intensities of the vasculature and non-vasculature regions, and σ_s^2 and σ_b^2 represent the intensity variances within the respective regions. The two regions were differentiated by a localized Bernsen thresholding-based segmentation algorithm with manual adjustment [49] for the vasculature, with the background defined as the region excluding the vasculature in an OCTA image.
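Given a binary vessel mask from the segmentation step, Equation (9) can be computed directly, as in this sketch:

```python
# CNR sketch per Equation (9); vessel_mask is a boolean vasculature mask
# assumed to come from the Bernsen thresholding-based segmentation.
import numpy as np

def cnr(enface: np.ndarray, vessel_mask: np.ndarray) -> float:
    signal = enface[vessel_mask]                 # vasculature pixels
    background = enface[~vessel_mask]            # everything else
    return (signal.mean() - background.mean()) / np.sqrt(
        signal.var() + background.var())
```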
MNI-C was the second metric proposed in this study to quantify the severity of motion artifacts, most commonly seen as white lines in an en-face OCTA image. A lower MNI-C indicates better suppression of the motion artifact-related white lines. To calculate MNI-C, a 1D intensity signal I(n) (n is the row index) was first calculated by averaging the pixel intensity along the horizontal direction in an en-face image. Then, a 1D gradient signal G_I(n) was calculated by a difference operation on the 1D intensity vector I(n). MNI-C is calculated as:

$$\mathrm{MNI\text{-}C} = \frac{\max(G_I)}{\mu_s} \tag{10}$$

where μ_s denotes the mean value of the vasculature region and max(G_I) is calculated as the mean of the top 20 peak values of the 1D intensity gradient signal G_I(n). Figure 9 shows a typical en-face OCTA image with significant motion-related white lines and a typical en-face OCTA image in which motion artifacts have been suppressed. These two images have high and low values of MNI-C, respectively, indicating the capability of MNI-C to reflect the severity of motion artifacts in en-face OCTA images.
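A sketch of the MNI-C computation follows; taking the top 20 values of the absolute gradient is used here as a stand-in for the top 20 detected peaks, and the vessel mask from the CNR step supplies μ_s.

```python
# MNI-C sketch per Equation (10); assumes motion lines run horizontally
# in the en-face image, so row averaging exposes them as gradient spikes.
import numpy as np

def mni_c(enface: np.ndarray, vessel_mask: np.ndarray, top_k: int = 20) -> float:
    profile = enface.mean(axis=1)                # average along each row
    gradient = np.abs(np.diff(profile))          # 1D intensity gradient signal
    top_peaks = np.sort(gradient)[-top_k:]       # top 20 gradient values
    return top_peaks.mean() / enface[vessel_mask].mean()
```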

3. Results

3.1. Results for Label OCTA Image Fusion

Figure 10 shows the CNR, MNI-B, and MNI-C of the image fusion results for different T values ranging from 0.05 to 0.6 with a step of 0.05. Varying thresholds yield different effects on the fused images, and based on the representative CNR curve, a threshold value of T = 0.35 was finally selected for subsequent use.
The SWCB-AP algorithm could effectively suppress motion noise in the OCTA images. Figure 11 shows the typical results for the SWCB-AP processing of OCTA images with different levels of motion noise, both in the B-scan and en-face modes. It was observed that some raw OCTA images had severe motion artifacts that reduced the blood flow contrast in the B-scan mode (Figure 11b) and caused the appearance of many white lines in the en-face mode (Figure 11d). By using the SWCB-AP algorithm, the problem of motion artifact was alleviated, and vasculature contrast was greatly improved, as seen in the B-scan mode (Figure 11g). Motion white lines were also greatly suppressed in the en-face mode (Figure 11i).
Quantitative results for the comparison of different raw OCTA processing methods and different image fusion algorithms are shown in Table 1. Typical OCTA images in the B-scan and en-face modes obtained by various raw OCTA methods (ID, SV, and ED) and various fusion methods (WA-SP-SV/ID/ED, WA-AP-ED, and SWCB-AP-ED) are also presented in Figure 12. For the 31 testing volume datasets, SWCB-AP-ED achieved an MNI-B of 0.484 ± 0.168 in the B-scan mode, which was the lowest among all three image fusion methods. At the position affected by motion, the three raw OCTA calculation methods displayed obvious motion noise with low vasculature contrast (Figure 12a–c). A simple fusion of these three raw OCTA images at a single scan position could not achieve the effect of motion artifact suppression (Figure 12d). The method of weighted average using multiple adjacent-position OCTA images (WA-AP-ED) could suppress the motion noise and improve the blood flow contrast (Figure 12e). However, its effect was still inferior to the SWCB-AP-ED fusion method proposed in this study. The evaluation of CNR and MNI-C in the en-face images showed that ED achieved the best results compared with the other two raw OCTA flow contrast computation methods (ID and SV), which was the reason why ED was used as the basic OCTA calculation method for image fusion in this study. Based on ED, the SWCB-AP-ED method also achieved the best results for CNR (2.334 ± 0.371) and MNI-C (0.150 ± 0.068) in the en-face mode among all three image fusion methods.

3.2. Results for Deep Learning OCTA Image Generation

The quantitative results for the DL-generated B-scan OCTA images and the corresponding volume-projected en-face OCTA images in the test datasets using various DL schemes are presented in Table 2. Figure 13 displays typical OCTA images generated by different DL schemes.
All DL schemes successfully achieved the task of generating OCTA images from structural OCT images with good convergence in model training. This demonstrated the effectiveness of using the UNet-CBAM architecture with the L1-SSIM loss function to accomplish the task. Among the results from the different DL schemes, the proposed RS-MP-F scheme performed the best, as evident from both the qualitative observation in Figure 13 and the quantitative results in terms of the reference and non-reference evaluation metrics shown in Table 2. As expected, the schemes using repeated-scan OCT images as input increased the vasculature contrast, as shown by CNR, in comparison with a single scan. Multiple-position OCT images together with fused high-quality label OCTA images further improved the vasculature contrast, as shown by CNR, and enhanced the capability of motion artifact suppression, as shown by MNI-B and MNI-C. The possible effects of different factors with respect to network input and label image configurations on the generated OCTA image quality are further discussed in detail in the Discussion Section. The DL model computational cost and speed achieved with our current implementation configurations are also reported here. For the six DL schemes implemented using a backbone of U-Net combined with a CBAM module, the mean model size was 34.66 million total parameters with 65.64 billion floating-point operations (FLOPs), a batch training time of 850.0 ms, and an inference time of 33.0 ms for generating an OCTA image. The differences in these metrics among the schemes were small because the main difference came only from the change in the number of input channels.

4. Discussion and Conclusions

In this study, a deep learning (DL) method was used to generate OCTA images from structural OCT images. We proposed using repeated and adjacent structural OCT images in the input and utilizing high-quality label OCTA images in training to produce an OCTA image generation model that could suppress motion noise and improve vasculature contrast. A sliding-window correlation-based adjacent-position (SWCB-AP) image fusion scheme was proposed to prepare the high-quality OCTA images used for DL model training. We also proposed new evaluation metrics, i.e., the motion noise indices (MNI-B and MNI-C), together with other conventional ones to quantitatively assess the quality of the DL-generated OCTA images. The results of six different DL construction schemes with respect to different configurations of inputs and label image arrangements showed that our proposed method achieved the best performance in generating OCTA images with high CNR and low motion noise.
Traditionally, it is technically challenging to obtain high-quality OCTA images as it is not only affected by flow contrast computation methods but also by a number of external factors including tissue motion and appropriate data acquisition operations [3,50]. In this study, we used the recently very popular DL method to generate OCTA images directly from structural OCT images with specific advantages such as denoising capability [37]. The specific focus of this study was related to motion noise, which is very commonly seen as white lines in en-face OCTA images and has not been attended to in previous studies. In our data, these motion lines appeared in such a periodic pattern (Figure 3d, Figure 9a and Figure 11d) that they might be associated with the involuntary cyclic and pulsatile movement in nailfold skin caused by the heartbeat. Motion causes obvious artifacts in OCTA images, so it is an important task to develop a DL method that can be used directly to suppress these motion artifacts. This issue was addressed from the point of view of DL network input configurations and label image preparation in this study.
This study proposed a sliding-window correlation-based adjacent-position (SWCB-AP) image fusion method to suppress motion noise and obtain high-quality B-scan label OCTA images for DL model training. Motion noise was obvious with different OCTA computation algorithms, including ID, SV, and ED (Figure 12a–c). Simply averaging OCTA images obtained by different flow computation methods at a single position could not suppress this motion noise (Figure 12d). Consequently, neighborhood OCTA images without motion noise were considered for this purpose. The correlation analysis in SWCB-AP was used to judge whether there was significant motion in one of the three consecutive OCTA frames, and the one with significant motion noise was omitted in averaging, thus achieving the effect of noise artifact suppression. In positions where no motion existed, this correlation-based weighted averaging scheme could also perform frame averaging or zeroing to reduce or even totally remove the baseline noise and improve the overall contrast-to-noise ratio of the OCTA image (Figure 12f vs. Figure 12e). The averaging of OCTA images acquired at multiple positions normally leads to a blurring of the vasculature contrast, but this effect was considered to be minimal in this study because most of the vessels were perpendicular to the imaging plane in the nailfold region owing to its anatomic characteristics. In general, our proposed SWCB-AP method outperformed the other two comparison fusion methods, as shown in Figure 12. The quantitative parameters in SWCB-AP, namely the window size and the correlation threshold used for detecting frames with significant motion noise, should be set properly. A too-large window would reduce the effect of background noise removal, while a too-small window would not be enough to cover the vessel region and would simultaneously increase the computation cost. A large threshold might miss the detection of some windows with significant motion, while a small one would falsely detect too many windows without significant motion. These two parameters were optimized with experiments in this study and fixed for processing all the OCTA images. However, for more complex scenarios involving different levels of motion amplitude, further exploration can focus on adaptive fusion algorithms, where the window size and correlation threshold can be adaptively determined for different scenarios. On the other hand, the application of the current fusion method is limited to the suppression of significant motion noise existing in one or two frames. When tissue motion is quite violent and causes significant motion artifacts at multiple (>2) consecutive positions, this fusion method may fail to work properly, and more advanced algorithms such as the recently proposed window-based blood flow computation method [51] or bulk motion correction [52] need to be developed to solve this problem.
Regarding deep learning methods, this study used U-Net combined with the attention mechanism as the main network architecture. U-Net is a popular DL model with a typical encoder–decoder structure, which has been successfully applied for a series of tasks including the generation of OCTA images using structural OCT images [35]. To demonstrate the efficacy of the added CBAM module in the network architecture, we conducted an ablation study based on the RS-MP-F scheme between the U-Net model with and without the CBAM module. Compared with U-Net without CBAM, the results indicate an increase of 0.186 dB, 0.002, and 0.053 in PSNR, SSIM, and CNR, respectively, and a decrease of 0.056, 0.040, and 0.003 in MAE, MNI-B, and MNI-C, respectively, for the model of U-Net with CBAM. These findings underscore the effectiveness of incorporating the CBAM module on the model performance. Nowadays, DL technology is developing quite fast, and U-Net may not be the best model for our task of OCTA image generation. In future studies, different variants of U-Net as well as more advanced architectures such as pix2pix and CycleGAN can be considered in our task to further improve OCTA image generation [53]. For model evaluation, traditional reference-based metrics including PSNR and SSIM were used for the comparison between the generated and labeled OCTA images. Furthermore, this study proposed novel evaluation metrics—the motion noise index (MNI) tailored to B-scan and en-face OCTA images, referred to as MNI-B and MNI-C, respectively—to assess the motion artifact level. MNI-B calculates the ratio between the valley and peak values of the intensity distribution in the horizontal direction to indicate the level of motion artifacts in a B-scan OCTA image. This index was defined based on the observation that peak and valley values of the averaged 1D intensity line were more likely to be similar in B-scans with significant motion noise, but they were quite different in B-scans without significant motion noise (Figure 8b,e). This generally led to a large versus small value of MNI-B, as observed in OCTA images with versus without significant motion noise (Figure 8a,d). It should be noted that this MNI-B is only applicable to B-scan OCTA images with a sparse distribution of blood flow points. For en-face OCTA images, this study also proposed MNI-C, defined as the averaged peak value of the 1D intensity gradient line versus the vasculature region intensity, for characterizing the severity of motion white lines. This index could well describe the strength of white lines in en-face OCTA images, as these lines were shown as very large peaks in the 1D intensity gradient line (Figure 9b). Wei et al. [54] proposed an instantaneous motion strength index (IMSI), where a normalized standard deviation of OCTA image intensity was defined for detecting a microsaccade and blink motion artifact in wide-field en-face OCTA images. MNI-C is similar to IMSI, but the denominator is changed to that of the vasculature region signal intensity to avoid the small mean in IMSI for some regions where no vasculature exists. The two metrics custom-designed for this study facilitated the assessment of the motion artifact level, providing valuable insights into the evaluation of the effectiveness of different DL motion artifact suppression schemes, as discussed in detail as follows.
The performance of the DL method using a non-repeated single scan (SS-SP-NF) was the poorest among all the comparison methods. Lee et al. [31] first proposed generating OCTA images from a single structural OCT image. Their study successfully demonstrated that DL was a viable tool for generating OCTA images with better flow contrast than that seen in structural OCT images. The reason for this capability was explained by the different speckle characteristics around the dynamic tissue induced by blood flow compared with the static tissue in a single structural OCT image [31]. However, it is easily understandable that pure speckle characteristics in a single OCT structural image are not sensitive to blood flow, particularly to that of small vessels, leading to a limited performance with weak and non-smooth capillaries, as qualitatively observed (Row 3, Column 4–Column 6 in Figure 13), and inferior evaluation metric values, as quantitatively calculated (Table 2), in SS-SP-NF. Increasing the network input from a single OCT image to several OCT images acquired at different time points, whether at a single position (RS-SP-NF), multiple positions (SS-MP-NF), or both (RS-MP-NF), would improve the quality of the generated OCTA image because the temporal speckle variation would be a better source for generating blood flow contrast [33,34]. The CNR results showed that the flow contrast increased gradually from SS-MP-NF to RS-SP-NF, and then to RS-MP-NF, with the increase in input OCT image number (from 3 to 4, and to 12 images). Jiang et al. demonstrated that the incorporation of neighborhood information could enhance DL-generated OCTA images [36]; therefore, the input involving repeated scans at multiple positions was used in this study. However, the motion artifact was still quite large in RS-MP-NF, and this might be induced by the non-fused OCTA label used in training. Among the three methods, the motion artifact was suppressed the best with RS-SP-NF, although with a relatively lower CNR. In practical situations where only a non-repeated scan was acquired from a single position such as a conventional OCT volume scan, SS-MP-NF would also be a viable choice for generating OCTA images with better flow contrast than pure structural OCT images. Finally, when a fused label OCTA image was used, RS-MP-F obtained the best performance in terms of all the evaluation metrics (Table 2), which further indicated the importance of incorporating high-quality labels in model training. High-quality label OCTA images could also be prepared by a large number of repeated scans, as used in a previous study [34]. This scheme was not used in the current study because the focus of this study was related to motion noise that could be suppressed from the perspective of combining adjacent-position image information, but not from repeated scans at a single position. Multiple position OCT images were used together with high-quality label OCTA images considering that the high-quality OCTA image was also fused from multiple adjacent OCTA images, which were generated from adjacent structural OCT images. Overall, the best performance of RS-MP-F showed the advantages of both combining neighborhood and repeated scan information and utilizing high-quality labels in training to perform the task of generating OCTA images from structural OCT images.
There are still some limitations in the current study. Firstly, this study was based on a nailfold OCTA imaging dataset, and the amount of data involved in training and testing was relatively small. The generalizability of the current method to other fields, such as ophthalmic OCTA datasets, needs to be further investigated. Secondly, the proposed SWCB-AP image fusion method can only be used at OCTA image acquisition positions where there is no significant bulk motion. When a large inter-frame bulk motion exists, the correlation-based algorithm may fail to work effectively, and a more advanced fusion method is needed to tackle this problem. Thirdly, the effect of an unbalanced sample image ratio (the number of OCTA images with severe motion noise might be small compared with those without severe motion noise) on different schemes, particularly those using the non-fused OCTA images as labels, was not quantitatively studied. Finally, the DL network architecture and model hyperparameters were not deliberately optimized in the current study. Future work may explore aspects including a neural architecture search [55,56], the design of a loss function incorporating the evaluation of vasculature characteristics such as vessel continuity, and the use of weakly supervised or unsupervised methods [37] to address the challenge of obtaining high-quality label OCTA images.
In conclusion, this study investigated a DL-based generation of OCTA images from structural OCT images with capability in motion artifact suppression. Various DL schemes with respect to different arrangements of input structural OCT images and label OCTA images were experimented with, and their capability in suppressing motion artifacts was evaluated using specific evaluation metrics including MNI-B and MNI-C, which were specifically proposed in this study. The results of the image fusion experiment showed that our proposed sliding-window correlation-based adjacent-position (SWCB-AP) method could successfully obtain high-quality OCTA images with good suppression of motion noise. The results of the deep learning OCTA image-generation experiment showed that our proposed DL model, with the input of repeated and adjacent structural OCT images together with training using SWCB-AP-fused high-quality label OCTA images, achieved the best performance in terms of qualitative visual observation and quantitative evaluation. The proposed deep learning method may serve as a potential tool in generating high-quality tissue microvasculature images from OCT structural images with good suppression of motion noise in OCTA imaging with broad applications in both clinics and research.

Author Contributions

Conceptualization, Z.L., Q.Z. and Y.H.; methodology, Z.L., Q.Z. and Y.H.; software, Z.L.; validation, Q.Z. and Y.H.; formal analysis, all authors; investigation, Z.L.; resources, Q.Z., G.L., J.X., J.Q., L.A. and Y.H.; data curation, Z.L., Q.Z. and Y.H.; writing—original draft preparation, Z.L., Q.Z. and Y.H.; writing—review and editing, all authors; visualization, Z.L., Q.Z. and Y.H.; supervision, Q.Z. and Y.H.; project administration, Q.Z. and Y.H.; funding acquisition, Q.Z., G.L., J.X., J.Q., L.A. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Natural Science Foundation of China (62001114, 61871130), the Guangdong Provincial Pearl River Talents Program (2019ZT08Y105), the Foshan HKUST Projects (FSUST21-HKUST10E), the Guangdong Eye Intelligent Medical Imaging Equipment Engineering Technology Research Center (2022E076), and the Guangdong-Hong Kong-Macao Intelligent Micro-Nano Optoelectronic Technology Joint Laboratory (2020B1212030010).

Data Availability Statement

The data that support the findings of this study are available upon request from the corresponding author.

Conflicts of Interest

G.L., J.X., and Y.H. are consultants at Weiren Meditech Co., Ltd. J.Q. and L.A. are currently working at Weiren Meditech Co., Ltd. The remaining authors declare no conflicts of interest. The funders had no role in the design of this study; in the collection, analyses, or interpretation of data; in the writing of this manuscript; or in the decision to publish the results.

References

  1. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A. Optical Coherence Tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef] [PubMed]
  2. Fercher, A.F. Optical Coherence Tomography—Development, Principles, Applications. Z. Med. Phys. 2010, 20, 251–276. [Google Scholar] [CrossRef] [PubMed]
  3. Chen, C.L.; Wang, R.K. Optical Coherence Tomography Based Angiography. Biomed. Opt. Express 2017, 8, 1056–1082. [Google Scholar] [CrossRef] [PubMed]
  4. Spaide, R.F.; Fujimoto, J.G.; Waheed, N.K.; Sadda, S.R.; Staurenghi, G. Optical Coherence Tomography Angiography. Prog. Retin. Eye Res. 2018, 64, 1–55. [Google Scholar] [CrossRef] [PubMed]
5. Ji, Y.; Zhou, K.; Ibbotson, S.H.; Wang, R.K.; Li, C.; Huang, Z. A Novel Automatic 3D Stitching Algorithm for Optical Coherence Tomography Angiography and Its Application in Dermatology. J. Biophotonics 2021, 14, e202100152.
6. Baran, U.; Wang, R.K. Review of Optical Coherence Tomography Based Angiography in Neuroscience. Neurophotonics 2016, 3, 010902.
7. Swanson, E.A.; Fujimoto, J.G. The Ecosystem That Powered the Translation of OCT from Fundamental Research to Clinical and Commercial Impact. Biomed. Opt. Express 2017, 8, 1638–1664.
8. Yao, X.; Alam, M.N.; Le, D.; Toslak, D. Quantitative Optical Coherence Tomography Angiography: A Review. Exp. Biol. Med. 2020, 245, 301–312.
9. Arya, M.; Rashad, R.; Sorour, O.; Moult, E.M.; Fujimoto, J.G.; Waheed, N.K. Optical Coherence Tomography Angiography (OCTA) Flow Speed Mapping Technology for Retinal Diseases. Expert Rev. Med. Devices 2018, 15, 875–882.
10. Jia, Y.; Bailey, S.T.; Hwang, T.S.; McClintic, S.M.; Gao, S.S.; Pennesi, M.E.; Flaxel, C.J.; Lauer, A.K.; Wilson, D.J.; Hornegger, J.; et al. Quantitative Optical Coherence Tomography Angiography of Vascular Abnormalities in the Living Human Eye. Proc. Natl. Acad. Sci. USA 2015, 112, E2395–E2402.
11. Kashani, A.H.; Chen, C.L.; Gahm, J.K.; Zheng, F.; Richter, G.M.; Rosenfeld, P.J.; Shi, Y.; Wang, R.K. Optical Coherence Tomography Angiography: A Comprehensive Review of Current Methods and Clinical Applications. Prog. Retin. Eye Res. 2017, 60, 66–100.
12. Gorczynska, I.; Migacz, J.V.; Zawadzki, R.J.; Capps, A.G.; Werner, J.S. Comparison of Amplitude-Decorrelation, Speckle-Variance and Phase-Variance OCT Angiography Methods for Imaging the Human Retina and Choroid. Biomed. Opt. Express 2016, 7, 911–942.
13. Hormel, T.T.; Huang, D.; Jia, Y. Artifacts and Artifact Removal in Optical Coherence Tomographic Angiography. Quant. Imaging Med. Surg. 2021, 11, 1120–1133.
14. Braaf, B.; Vienola, K.V.; Sheehy, C.K.; Yang, Q.; Vermeer, K.A.; Tiruveedhula, P.; Arathorn, D.W.; Roorda, A.; de Boer, J.F. Real-Time Eye Motion Correction in Phase-Resolved OCT Angiography with Tracking SLO. Biomed. Opt. Express 2013, 4, 51–65.
15. Tan, B.; Sim, R.; Chua, J.; Wong, D.W.K.; Yao, X.; Garhoefer, G.; Schmidl, D.; Werkmeister, R.M.; Schmetterer, L. Approaches to Quantify Optical Coherence Tomography Angiography Metrics. Ann. Transl. Med. 2020, 8, 1205.
16. Uji, A.; Balasubramanian, S.; Lei, J.; Baghdasaryan, E.; Al-Sheikh, M.; Sadda, S.R. Impact of Multiple En Face Image Averaging on Quantitative Assessment from Optical Coherence Tomography Angiography Images. Ophthalmology 2017, 124, 944–952.
17. Hormel, T.T.; Hwang, T.S.; Bailey, S.T.; Wilson, D.J.; Huang, D.; Jia, Y. Artificial Intelligence in OCT Angiography. Prog. Retin. Eye Res. 2021, 85, 100965.
18. Kadomoto, S.; Uji, A.; Muraoka, Y.; Akagi, T.; Tsujikawa, A. Enhanced Visualization of Retinal Microvasculature in Optical Coherence Tomography Angiography Imaging via Deep Learning. J. Clin. Med. 2020, 9, 1322.
19. Xu, J.; Yuan, X.; Huang, Y.; Qin, J.; Lan, G.; Qiu, H.; Yu, B.; Jia, H.; Tan, H.; Zhao, S. Deep-Learning Visualization Enhancement Method for Optical Coherence Tomography Angiography in Dermatology. J. Biophotonics 2023, 16, e202200366.
20. Xu, Y.; Su, Y.; Hua, D.; Heiduschka, P.; Zhang, W.; Cao, T.; Liu, J.; Ji, Z.; Eter, N. Enhanced Visualization of Retinal Microvasculature via Deep Learning on OCTA Image Quality. Dis. Markers 2021, 2021, 1373362.
21. Liao, J.; Yang, S.; Zhang, T.; Li, C.; Huang, Z. Fast Optical Coherence Tomography Angiography Image Acquisition and Reconstruction Pipeline for Skin Application. Biomed. Opt. Express 2023, 14, 3899–3913.
22. Gao, M.; Guo, Y.; Hormel, T.T.; Sun, J.; Hwang, T.S.; Jia, Y. Reconstruction of High-Resolution 6 × 6-mm OCT Angiograms Using Deep Learning. Biomed. Opt. Express 2020, 11, 3585–3600.
23. Gao, M.; Hormel, T.T.; Wang, J.; Guo, Y.; Bailey, S.T.; Hwang, T.S.; Jia, Y. An Open-Source Deep Learning Network for Reconstruction of High-Resolution OCT Angiograms of Retinal Intermediate and Deep Capillary Plexuses. Transl. Vis. Sci. Technol. 2021, 10, 13.
24. Kim, G.; Kim, J.; Choi, W.J.; Kim, C.; Lee, S. Integrated Deep Learning Framework for Accelerated Optical Coherence Tomography Angiography. Sci. Rep. 2022, 12, 1289.
25. Zhang, W.; Yang, D.; Cheung, C.Y.; Chen, H. Frequency-Aware Inverse-Consistent Deep Learning for OCT Angiogram Super-Resolution. In Proceedings of the Medical Image Computing and Computer Assisted Intervention (MICCAI), Singapore, 18–22 September 2022; pp. 645–655.
26. Isola, P.; Zhu, J.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
27. Makita, S.; Miura, M.; Azuma, S.; Mino, T.; Yasuno, Y. Synthesizing the Degree of Polarization Uniformity from Non-Polarization-Sensitive Optical Coherence Tomography Signals Using a Neural Network. Biomed. Opt. Express 2023, 14, 1522–1543.
28. Sun, Y.; Wang, J.; Shi, J.; Boppart, S.A. Synthetic Polarization-Sensitive Optical Coherence Tomography by Deep Learning. npj Digit. Med. 2021, 4, 105.
29. Yang, Q.; Li, N.; Zhao, Z.; Fan, X.; Chang, E.I.-C.; Xu, Y. MRI Cross-Modality Image-to-Image Translation. Sci. Rep. 2020, 10, 3753.
30. Kearney, V.; Ziemer, B.P.; Perry, A.; Wang, T.; Chan, J.W.; Ma, L.; Morin, O.; Yom, S.S.; Solberg, T.D. Attention-Aware Discrimination for MR-to-CT Image Translation Using Cycle-Consistent Generative Adversarial Networks. Radiol. Artif. Intell. 2020, 2, e190027.
31. Lee, C.S.; Tyring, A.J.; Wu, Y.; Xiao, S.; Rokem, A.S.; DeRuyter, N.P.; Zhang, Q.; Tufail, A.; Wang, R.K.; Lee, A.Y. Generating Retinal Flow Maps from Structural Optical Coherence Tomography with Artificial Intelligence. Sci. Rep. 2019, 9, 5694.
32. Zhang, Z.; Ji, Z.; Chen, Q.; Yuan, S.; Fan, W. Texture-Guided U-Net for OCT-to-OCTA Generation. In Proceedings of the Pattern Recognition and Computer Vision (PRCV): 4th Chinese Conference, Beijing, China, 29 October–1 November 2021; pp. 42–52.
33. Li, P.L.; O’Neil, C.; Saberi, S.; Sinder, K.; Wang, K.; Tan, B.; Hosseinaee, Z.; Bizhevat, K.; Lakshminarayanan, V. Deep Learning Algorithm for Generating Optical Coherence Tomography Angiography (OCTA) Maps of the Retinal Vasculature. In Applications of Machine Learning; SPIE: Bellingham, WA, USA, 2020; pp. 39–49.
34. Liu, X.; Huang, Z.; Wang, Z.; Wen, C.; Jiang, Z.; Yu, Z.; Liu, J.; Liu, G.; Huang, X.; Maier, A.; et al. A Deep Learning Based Pipeline for Optical Coherence Tomography Angiography. J. Biophotonics 2019, 12, e201900008.
35. Jiang, Z.; Huang, Z.; Qiu, B.; Meng, X.; You, Y.; Xi, L.; Liu, G.; Zhou, C.; Yang, K.; Maier, A.; et al. Comparative Study of Deep Learning Models for Optical Coherence Tomography Angiography. Biomed. Opt. Express 2020, 11, 1580–1597.
36. Jiang, Z.; Huang, Z.; You, Y.; Geng, M.; Meng, X.; Qiu, B.; Zhu, L.; Gao, M.; Wang, J.; Zhou, C. Rethinking the Neighborhood Information for Deep Learning-Based Optical Coherence Tomography Angiography. Med. Phys. 2022, 49, 3705–3716.
37. Jiang, Z.; Huang, Z.; Qiu, B.; Meng, X.; You, Y.; Liu, X.; Geng, M.; Liu, G.; Zhou, C.; Yang, K. Weakly Supervised Deep Learning-Based Optical Coherence Tomography Angiography. IEEE Trans. Med. Imaging 2020, 40, 688–698.
38. Li, S.; Zhang, D.; Li, X.; Ou, C.; An, L.; Xu, Y.; Cheng, K.-T. Vessel-Promoted OCT to OCTA Image Translation by Heuristic Contextual Constraints. arXiv 2023, arXiv:2303.06807.
39. Dong, L.; Wei, Y.; Lan, G.; Chen, J.; Xu, J.; Qin, J.; An, L.; Tan, H.; Huang, Y. High Resolution Imaging and Quantification of the Nailfold Microvasculature Using Optical Coherence Tomography Angiography (OCTA) and Capillaroscopy: A Preliminary Study in Healthy Subjects. Quant. Imaging Med. Surg. 2022, 12, 1844–1858.
40. Yousefi, S.; Zhi, Z.; Wang, R.K. Eigendecomposition-Based Clutter Filtering Technique for Optical Microangiography. IEEE Trans. Biomed. Eng. 2011, 58, 2316–2323.
41. Mariampillai, A.; Standish, B.A.; Moriyama, E.H.; Khurana, M.; Munce, N.R.; Leung, M.K.; Jiang, J.; Cable, A.; Wilson, B.C.; Vitkin, I.A.; et al. Speckle Variance Detection of Microvasculature Using Swept-Source Optical Coherence Tomography. Opt. Lett. 2008, 33, 1530–1532.
42. Huang, Y.; Zhang, Q.; Thorell, M.R.; An, L.; Durbin, M.K.; Laron, M.; Sharma, U.; Gregori, G.; Rosenfeld, P.J.; Wang, R.K. Swept-Source OCT Angiography of the Retinal Vasculature Using Intensity Differentiation-Based Optical Microangiography Algorithms. OSLI Retin. 2014, 45, 382–389.
43. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; pp. 234–241.
44. Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19.
45. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
46. Zhao, H.; Gallo, O.; Frosio, I.; Kautz, J. Loss Functions for Image Restoration with Neural Networks. IEEE Trans. Comput. Imaging 2016, 3, 47–57.
47. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
48. Loshchilov, I.; Hutter, F. SGDR: Stochastic Gradient Descent with Warm Restarts. arXiv 2016, arXiv:1608.03983.
49. Bernsen, J. Dynamic Thresholding of Grey-Level Images. In Proceedings of the 8th International Conference on Pattern Recognition (ICPR’86), Paris, France, 27–31 October 1986; pp. 1251–1255.
50. Spaide, R.F.; Fujimoto, J.G.; Waheed, N.K. Image Artifacts in Optical Coherence Tomography Angiography. Retina 2015, 35, 2163–2180.
51. Zhang, T.; Zhou, K.; Rocliffe, H.R.; Pellicoro, A.; Cash, J.L.; Wang, W.; Wang, Z.; Li, C.; Huang, Z. Windowed Eigen-Decomposition Algorithm for Motion Artifact Reduction in Optical Coherence Tomography-Based Angiography. Appl. Sci. 2023, 13, 378.
52. Fan, J.; He, Y.; Wang, P.; Liu, G.; Shi, G. Interplane Bulk Motion Analysis and Removal Based on Normalized Cross-Correlation in Optical Coherence Tomography Angiography. J. Biophotonics 2020, 13, e202000046.
53. Kaji, S.; Kida, S. Overview of Image-to-Image Translation by Use of Deep Neural Networks: Denoising, Super-Resolution, Modality Conversion, and Reconstruction in Medical Imaging. Radiol. Phys. Technol. 2019, 12, 235–248.
54. Wei, X.; Hormel, T.T.; Guo, Y.; Hwang, T.S.; Jia, Y. High-Resolution Wide-Field OCT Angiography with a Self-Navigation Method to Correct Microsaccades and Blinks. Biomed. Opt. Express 2020, 11, 3234–3245.
55. Kang, J.-S.; Kang, J.; Kim, J.-J.; Jeon, K.-W.; Chung, H.-J.; Park, B.-H. Neural Architecture Search Survey: A Computer Vision Perspective. Sensors 2023, 23, 1713.
56. Tian, Y.; Shen, L.; Su, G.; Li, Z.; Liu, W. AlphaGAN: Fully Differentiable Architecture Search for Generative Adversarial Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 6752–6766.
Figure 1. Overall flowchart of this study using deep learning models to generate high-quality OCTA blood flow images from repeated and adjacent OCT images. Fused high-quality label OCTA images are used in training to improve model performance, particularly its capability in motion artifact suppression. Green and red border colors in the left part indicate that the images are from the center and adjacent slow-axis positions, respectively. Images with the same border color correspond to repeated B-scans from the same slow-axis position.
Figure 2. OCTA data collection system and its scanning protocols. (a) Schematic diagram of the spectral domain optical coherence tomography (SD-OCT) system used for imaging the nailfold microvasculature. SLD, superluminescent diode. VLD, visible laser diode. (b) Schematic diagram of the OCTA scanning protocols. The frames with different border colors in the B-scan images represent different slow-axis positions.
Figure 3. Typical OCT/OCTA images of nailfold microcirculation. (a) Repeated B-scan OCT images. The four B-scan OCT images are labeled with an identical border color, indicating that they were acquired at the same position. (b) B-scan OCTA image without significant motion artifacts. (c) B-scan OCTA image with obvious motion artifacts. (d) En-face OCTA image, where the white lines represent typical motion-induced artifacts.
Figure 4. Schematic diagram of the sliding-window correlation-based adjacent-position (SWCB-AP) image fusion method. Bn−1, Bn, and Bn+1 are three consecutive OCTA images, where n is the center position for image fusion. Window fusion is shown for the three window images P1, P2, and P3. CC: correlation coefficient.
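The fusion step in Figure 4 can be prototyped directly. Below is a minimal NumPy sketch, assuming non-overlapping square windows, the Pearson correlation coefficient as the CC measure, and a simple CC-weighted average in which an adjacent-position window contributes only when its CC with the center window exceeds the threshold T; the window size, step, and exact weighting scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def swcb_ap_fuse(b_prev, b_center, b_next, win=64, T=0.5):
    """Sketch of sliding-window correlation-based adjacent-position
    (SWCB-AP) fusion of three consecutive B-scan OCTA images.
    Window size `win` and threshold `T` are illustrative assumptions."""
    fused = np.zeros_like(b_center, dtype=np.float64)
    H, W = b_center.shape
    for y0 in range(0, H, win):
        for x0 in range(0, W, win):
            sl = (slice(y0, min(y0 + win, H)), slice(x0, min(x0 + win, W)))
            pc = b_center[sl].astype(np.float64)
            acc, wsum = pc.copy(), 1.0  # center window always contributes
            for adj in (b_prev, b_next):
                pa = adj[sl].astype(np.float64)
                # Pearson CC between the center and adjacent windows
                cc = np.corrcoef(pc.ravel(), pa.ravel())[0, 1]
                if np.isfinite(cc) and cc > T:
                    acc += cc * pa   # adjacent window weighted by its CC
                    wsum += cc
            fused[sl] = acc / wsum
    return fused
```

Thresholding on the CC is what suppresses motion artifacts: a bulk-motion-corrupted neighbor decorrelates from the center frame and is excluded from the average instead of smearing noise into it.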
Figure 5. Schematic diagram of the network architecture. The top part displays the network architecture: a U-Net with the added Convolutional Block Attention Module (CBAM). The bottom part illustrates the CBAM module, consisting of both the Channel Attention Module (CAM) and the Spatial Attention Module (SAM). N is the total number of B-scan structural OCT images used in the network input for different DL schemes. BN: batch normalization; ReLU: rectified linear unit; MLP: multilayer perceptron.
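As a reference for the attention block in Figure 5, here is a minimal PyTorch sketch of CBAM following Ref. [44]; the reduction ratio and spatial kernel size are common defaults and not necessarily the values used in this study.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of the Convolutional Block Attention Module [44]:
    channel attention (CAM) followed by spatial attention (SAM)."""
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # CAM: shared MLP applied to average- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # SAM: convolution over concatenated channel-wise avg/max maps
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # Channel attention: reweight feature channels
        avg = self.mlp(x.mean(dim=(2, 3)))             # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))              # (B, C)
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: reweight spatial locations
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)  # (B, 2, H, W)
        return x * torch.sigmoid(self.conv(s))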
Figure 6. The different arrangements of input structural OCT images and label OCTA images in different DL schemes explored in this study.
Figure 7. Schematic diagram of the MNI-B calculation process for a representative OCTA image.
Figure 8. MNI-B calculation process for typical B-scan OCTA images with large and small motion noise. (a) B-scan OCTA image with large motion noise (MNI-B = 0.9347). (b) One-dimensional intensity distribution curve for the image in (a). (c) Bar plot of MNI values calculated for each segment of the intensity distribution curve shown in (b). (d) Normal B-scan OCTA image with small motion noise (MNI-B = 0.5559). (e) One-dimensional intensity distribution curve for the image in (d). (f) Bar plot of MNI values calculated for each segment of the intensity distribution curve shown in (e).
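For intuition about the pipeline in Figures 7 and 8 (1D intensity profile → segments → per-segment MNI → aggregate MNI-B), a sketch is given below. The per-segment statistic used here (a coefficient of variation) is a placeholder assumption for illustration only; the paper's exact MNI definition is given in its Methods section.

```python
import numpy as np

def mni_b(octa_bscan, n_segments=16):
    """Illustrative-only sketch of an MNI-B-style computation.
    Mirrors the pipeline of Figures 7 and 8; the per-segment
    statistic is a placeholder, not the paper's definition."""
    profile = octa_bscan.mean(axis=0)            # 1D column-wise intensity curve
    segments = np.array_split(profile, n_segments)
    # Placeholder per-segment index: coefficient of variation
    mni = [seg.std() / (seg.mean() + 1e-8) for seg in segments]
    return float(np.mean(mni))                   # aggregate over segments
```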
Figure 9. MNI-C values and 1D intensity gradient distribution curves for a typical en-face OCTA image with significant motion-related white lines, before and after motion noise suppression. (a) En-face OCTA image with significant white lines (MNI-C = 0.346). (b) One-dimensional intensity gradient distribution curve for the image in (a). (c) The same en-face OCTA image after motion noise suppression (MNI-C = 0.084). (d) One-dimensional intensity gradient distribution curve for the image in (c).
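Analogously, an MNI-C-style index for en-face images can be sketched as follows. Motion-induced white lines run along the fast axis, so the row-wise mean-intensity profile along the slow axis shows sharp jumps at the affected positions; the gradient-based normalization below is a placeholder assumption, not the paper's exact formula.

```python
import numpy as np

def mni_c(enface_octa):
    """Illustrative-only sketch of an MNI-C-style computation.
    White-line artifacts produce spikes in the 1D intensity
    gradient curve along the slow axis (cf. Figure 9b,d)."""
    profile = enface_octa.mean(axis=1)     # mean intensity per slow-axis row
    grad = np.abs(np.diff(profile))        # 1D intensity gradient curve
    return float(grad.mean() / (profile.mean() + 1e-8))
```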
Figure 10. Curves of CNR, MNI-B, and MNI-C for images obtained by the SWCB-AP algorithm as a function of the threshold T.
Figure 11. Comparison of OCTA images before and after SWCB-AP processing. The first row is the raw OCTA images, and the second row includes the OCTA images obtained using the SWCB-AP image fusion method. (a) Typical B-scan OCTA image without significant motion artifacts. (b) Typical B-scan OCTA image with significant motion artifacts. (c) An enlarged view of the red box area in (b). (d) Typical en-face OCTA image with some significant motion-related white lines. (e) An enlarged view of the red box area in (d), where a motion artifact can be seen clearly. (f) SWCB-AP fused B-scan OCTA image in correspondence to (a). (g) SWCB-AP fused B-scan OCTA image in correspondence to (b). (h) An enlarged view of the red box area in (g). (i) En-face OCTA image after SWCB-AP fusion in correspondence to (d), where motion artifacts are greatly suppressed. (j) An enlarged view of the red box area in (i).
Figure 12. Comparisons of OCTA images obtained by different raw OCTA flow contrast computation methods and different image fusion algorithms. The first row shows the results of six processing methods for a B-scan OCTA image acquired at a position affected by severe motion, and the second row shows the results of six processing methods for typical en-face OCTA images from an OCTA volume dataset affected by motion artifacts. Different columns represent OCTA images obtained by different raw OCTA algorithms (ac) and different fusion methods (df). (a) ID OCTA images; (b) SV OCTA images; (c) ED OCTA images; (d) weighted average fusion of single-position images from SV/ID/ED OCTA images (WA-SP-SV/ID/ED); (e) weighted average fusion of adjacent-position images from ED OCTA images (WA-AP-ED); and (f) sliding-window correlation-based adjacent-position image fusion from ED OCTA images (SWCB-AP-ED).
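For reference, the speckle-variance (SV) and intensity-differentiation (ID) flow contrasts compared in Figure 12 follow the standard definitions of Refs. [41,42]: given N repeated B-scan intensities I_i(x,z) at one slow-axis position,

```latex
\mathrm{SV}(x,z) = \frac{1}{N}\sum_{i=1}^{N}\bigl(I_i(x,z) - \bar{I}(x,z)\bigr)^2,
\qquad
\mathrm{ID}(x,z) = \frac{1}{N-1}\sum_{i=1}^{N-1}\bigl|I_{i+1}(x,z) - I_i(x,z)\bigr|,
```

where \bar{I}(x,z) is the mean intensity over the N repeats. The ED contrast is instead obtained by eigendecomposition-based clutter filtering of the repeated-scan ensemble [40], which suppresses static tissue components before the flow signal is computed.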
Figure 13. Typical fused and non-fused label OCTA images and those generated by different DL schemes. Columns 1 and 2 show typical B-scan OCTA images acquired at two positions with different levels of motion and processed by different methods. Columns 4 and 5 show typical en-face OCTA images acquired with different levels of motion and processed by different methods. Columns 3 and 6 provide enlarged images of the selected regions in Columns 1–2 and Columns 4–5, respectively. The blue horizontal lines in Column 4 and the green horizontal lines in Column 5 correspond to the acquisition positions of the B-scan images in Columns 1 and 2, respectively. The first row shows non-fused OCTA images, and the second row shows fused OCTA images obtained by the SWCB-AP image fusion method. Rows 3 to 8 present the results of six different DL schemes: SS-SP-NF, SS-MP-NF, RS-SP-NF, RS-MP-NF, RS-SP-F, and RS-MP-F. SS: single scan; RS: repeated scan; SP: single position; MP: multiple position; NF: non-fused OCTA label; F: fused OCTA label.
Table 1. Quantitative image quality evaluation results for the OCTA images obtained by different processing methods in the B-scan and en-face modes.

Processing Method | MNI-B ↓ (B-scan) | CNR ↑ (en-face) | MNI-C ↓ (en-face)
------------------|------------------|-----------------|------------------
ID                | 0.558 ± 0.189    | 1.052 ± 0.285   | 0.405 ± 0.065
SV                | 0.590 ± 0.191    | 0.830 ± 0.259   | 0.496 ± 0.078
ED                | 0.503 ± 0.185    | 1.697 ± 0.311   | 0.356 ± 0.155
WA-SP-SV/ID/ED    | 0.601 ± 0.168    | 1.425 ± 0.242   | 0.341 ± 0.063
WA-AP-ED          | 0.579 ± 0.155    | 1.985 ± 0.352   | 0.166 ± 0.079
SWCB-AP-ED        | 0.484 ± 0.168 *  | 2.334 ± 0.371 * | 0.150 ± 0.068 *

Values are given as mean ± standard deviation. ↓ means lower is better, and ↑ means higher is better for each column’s metric. * Best result in each column.
Table 2. Quantitative image quality evaluation results for OCTA image generation from structural OCT images using different DL schemes. PSNR, SSIM, MAE, and MNI-B are computed on B-scan OCTA images; CNR and MNI-C are computed on en-face OCTA images.

DL Scheme | PSNR (dB) ↑      | SSIM ↑          | MAE ↓           | MNI-B ↓         | CNR ↑           | MNI-C ↓
----------|------------------|-----------------|-----------------|-----------------|-----------------|----------------
SS-SP-NF  | 30.434 ± 7.896   | 0.910 ± 0.059   | 2.455 ± 2.012   | 0.622 ± 0.119   | 0.816 ± 0.261   | 0.356 ± 0.204
SS-MP-NF  | 30.324 ± 7.479   | 0.910 ± 0.057   | 2.563 ± 2.146   | 0.590 ± 0.122   | 1.020 ± 0.289   | 0.344 ± 0.161
RS-SP-NF  | 31.593 ± 7.599   | 0.918 ± 0.054   | 2.166 ± 1.808   | 0.585 ± 0.113   | 1.249 ± 0.303   | 0.196 ± 0.098
RS-MP-NF  | 30.974 ± 7.347   | 0.911 ± 0.059   | 2.609 ± 2.878   | 0.587 ± 0.113   | 1.307 ± 0.248   | 0.357 ± 0.198
RS-SP-F   | 32.118 ± 7.402   | 0.921 ± 0.054   | 1.940 ± 1.691   | 0.530 ± 0.128   | 1.294 ± 0.308   | 0.190 ± 0.119
RS-MP-F   | 32.666 ± 7.010 * | 0.926 ± 0.051 * | 1.798 ± 1.575 * | 0.528 ± 0.124 * | 1.420 ± 0.291 * | 0.156 ± 0.057 *

Values are given as mean ± standard deviation. ↓ means lower is better, and ↑ means higher is better for each column’s metric. * Best result in each column.