Article

Ischemic Stroke Lesion Segmentation Using Mutation Model and Generative Adversarial Network

Department of Computer Science/Cybersecurity, Princess Sumaya University for Technology, Amman 11941, Jordan
* Author to whom correspondence should be addressed.
Electronics 2023, 12(3), 590; https://doi.org/10.3390/electronics12030590
Submission received: 6 January 2023 / Revised: 22 January 2023 / Accepted: 24 January 2023 / Published: 25 January 2023

Abstract

Ischemic stroke lesion segmentation using different types of images, such as Computed Tomography Perfusion (CTP), is important for both the medical and artificial intelligence fields. These images are potential resources for enhancing machine learning and deep learning models. However, collecting such images is a considerable challenge. Therefore, new augmentation techniques are required to handle the lack of collected images presenting ischemic strokes. In this paper, a mutation model using a distance map is proposed and integrated into a generative adversarial network (GAN) to generate a synthetic dataset. The Euclidean distance is used to compute the distance of each pixel to its neighbors in the right and bottom directions. Then, a threshold is used to select the adjacent locations with similar intensities for the mutation process. Furthermore, a semi-supervised GAN is enhanced and transformed into a supervised GAN, in which the segmentation and discriminator modules share the same convolutional neural network to reduce computation. The mutation and GAN models are trained as an end-to-end model. The results show that the mutation model improves the dice coefficient of the proposed GAN model by 2.54%. Furthermore, it slightly improves the recall of the proposed GAN model compared to other GAN models.

1. Introduction

Medical imaging is a domain that uses methods to create images of the human body that can be used in medical decisions [1]. Deep learning is the most commonly used approach in computer vision [2]. Automated segmentation of medical images is considered an important task and research topic in both artificial intelligence (AI) and medicine [3,4,5,6,7]. AI models are used to determine the location of damaged tissues and structures in images of the human body, such as tumors and ischemic stroke (IS) in the brain [8]. At the same time, these types of images are considered potential resources for enhancing machine learning and deep learning models. On the other hand, these images suffer from heterogeneous voxel intensities; they are low contrast, low frequency, and highly variable because they originate from medical scans with different, low contrast levels [8,9,10,11,12,13,14]. Furthermore, medical image annotation requires expert doctors and is considered a time-consuming task. Therefore, the limited number of existing images for specific diseases is a significant challenge for deep learning models [4,8,11,15,16,17]. Another challenge in medical image segmentation, especially for 3D images, is the strong imbalance between damaged and normal tissues. The volume of damaged tissue is minimal compared to normal tissue, and some images do not contain damaged tissue at all [18,19,20,21,22,23,24,25].
Ischemic stroke lesion segmentation is one of the essential segmentation tasks, in which traditional machine learning and deep learning approaches are used to automatically determine the location of damaged tissues in the brain. Computed tomography perfusion (CTP) is the most popular computed tomography (CT) modality for ischemic stroke (IS). CTP is used to assess the extent of the infarct core through hypoperfusion in the damaged tissues, in order to decide whether the patient can be treated [3,8]. Table 1 provides brief descriptions of all the modality abbreviations used in the proposed model. In addition to the low, non-uniform contrast of voxel intensities in brain CTP images and the limited number of available images, the lesions vary in shape, size, and position within the brain [3,4,10,11,17,26].
One of the most popular datasets for segmenting IS lesions in the brain is the ischemic stroke lesion segmentation (ISLES-2018) dataset [3,27]. It consists of 94 and 62 cases for the training and test sets, respectively. Therefore, many researchers have used different machine learning models and different augmentation techniques to expand the training set of the ISLES-2018 dataset. For instance, rotation, flipping, and re-scaling of images are used in the models of [16,29,30,31,32,33]. Furthermore, the model in [12] uses mirroring and permutation of the input axes for augmentation, while the model in [31] shifts slices forward or backward and adds Gaussian noise. However, the performance on the test set of the ISLES-2018 dataset is not satisfactory using these techniques.
The model in [33] uses a generative adversarial network (GAN) to translate the CTP images into the DWI modality. In contrast, other models use the GAN approach to generate synthetic images in order to increase the size of the training set [10,16,32]. The models in [16,32] use Gaussian noise in the generator, and their performance is evaluated using the training set.
The model in [10] synthesizes pseudo-DWI images as a generation method to produce lesion regions based on the DWI label. Additionally, its input consists of six channels: low-level and high-level features of the trained CTP image, and the original images of the remaining modalities. However, it depends on the DWI label to generate lesion locations without taking the normal regions into account. It also ignores the slice level, since a single channel for each modality is used in the generator.
Thus, new techniques are required to handle the scarcity of medical images, particularly brain images in the ISLES-2018 dataset. In this paper, a mutation model inspired by the genetic algorithm is proposed to generate a synthetic dataset and increase the number of training samples. Consequently, the performance of identifying IS tissues increases by training on a variety of generated samples, called mutated images. The mutation model uses a distance map to determine the adjacent locations belonging to a region in the CTP image. These locations are used to mutate the corresponding regions of all modalities and slices of two inputs to generate synthetic inputs.
Furthermore, a semi-supervised GAN model, called the few-shot 3D multi-model [8], was enhanced and transformed into a supervised GAN model to exploit its generator while gaining more meaningful information from labels. The proposed model therefore makes four main contributions, as listed below.
  • The CTP image and the Euclidean distance method are used to generate a distance map by computing the distance horizontally and vertically between every two adjacent pixels.
  • The distance map and mutation model are used to produce new synthetic samples. A set of adjacent pixels, the locations of a region, is selected from one CTP image, and the values at these locations are transferred into the corresponding modalities of a different input while preserving the shape of these locations.
  • A semi-supervised GAN model is enhanced and modified into a supervised GAN model to exploit the entire knowledge and gain more meaningful information from labels.
  • A shared module between the segmentation and discriminator is used to reduce the complexity of the GAN model and to evaluate the proposed mutation method as an end-to-end model.
The rest of this paper is organized as follows. Section 2 presents the literature review. Section 3 illustrates the proposed end-to-end model. Section 4 presents the experiment setting. Section 5 presents applications and future directions. Finally, Section 6 concludes the paper.

2. Literature Review

Similar to detection tasks, such as ADHD detection [34] and the detection of schizophrenia patients [35], the segmentation task also plays a significant role in the medical field, for example in IS lesion segmentation. Although the ISLES-2018 dataset was created for IS lesion segmentation, the authors in [9] used it for a classification task. The lesion segmentation is utilized as a pre-processing step to classify the brain image as healthy or unhealthy. Only the DWI, CBV, and CBF modalities are used in that model, and the noisiest modalities are ignored. According to [10], the Tmax and MTT modalities are the noisiest modalities in the ISLES-2018 dataset and have a lower spatial resolution. In contrast, many researchers have utilized the ISLES-2018 dataset and machine learning for IS lesion segmentation. The authors in [30,31] considered the imbalance issue and used an encoder–decoder convolutional neural network to train on the data. The Clera2 model [30] uses balanced patches and a category-weighting loss function, while the model in [31] uses weighted binary cross-entropy to equalize the prior class probabilities during training.
State-of-the-art models are widely used in IS lesion segmentation. Both the [28] and [29] models use ResNet [36] for the segmentation task. The researchers in [28] transferred the pre-trained ResNet and integrated its output with the output of a random forest (RF) model. The researchers in [29] combined a fully convolutional neural network and a dilated convolutional network to introduce pyramid pooling. Furthermore, the dilated convolutional network is integrated with a different state-of-the-art model, the U-shape network [37], to encode features of the brain images [12], while the MS-DCNN model [38] integrates the U-shape network [37] with dense blocks to extract more robust features and mitigate overfitting.
Moreover, the GAN approach is used to expand the training set of the ISLES-2018 dataset [10,16,33]. The model in [10] uses the extracted features of CTP images to create new segments that are similar to lesions located in the determined set of the lesion DWI label. The mean square error (MSE) and L1 normalization are used as loss functions for training the generator. The model in [33] uses the generator module to generate the DWI modality from the CTP modality by exploiting the U-net model [37]. The model in [16] uses the generator module to enlarge the dataset by generating noisy images, and the discriminator module gives the GAN the ability to distinguish between original and generated samples. On the other hand, the model in [17] feeds the error of the discriminator into the segmentation module for a second back-propagation. However, it excludes the generator module and therefore does not provide a synthetic dataset.
In contrast to the previous frameworks, which either utilize the DWI modality or add noise to create a synthetic dataset of brain images, the proposed model uses a new technique to generate a synthetic dataset based on a mutation operation and a distance map. Moreover, the mutated images are integrated into the supervised GAN model to utilize its generative module. Table 2 summarizes the previous works that used the benchmark ISLES-2018 dataset [3,27] and notes their findings and limitations.

3. Materials and Methods

The proposed model is an end-to-end model that uses two stages to generate and exploit a synthetic dataset: a mutation model using a distance map and a supervised GAN model. The first stage is an augmentation technique that generates new images while preserving the spatial dimensions of the original images. In the second stage, a set of patches from each image is fed to the GAN model. The mutation model is only used during training; the test set is introduced directly to the second stage.
This section is organized as follows. Section 3.1 presents the pre-processing methods that are used to enhance and normalize data. Section 3.2 illustrates the integrated mutation model using a distance map. Section 3.3 presents the baseline semi-supervised GAN model and proposed supervised GAN model.

3.1. Data Pre-Processing

Two major pre-processing techniques are used to handle the low frequency and the variety of intensity values in the different modalities of the ISLES-2018 dataset: bias correction [12] and image enhancement [9]. The bias correction method corrects the non-uniform, low-frequency intensity in the input modalities: CTP, CBF, CBV, MTT, and Tmax [12]. The image enhancement method enhances the CTP images and normalizes their intensity values to provide better contrast and information [9]. The first step of the image enhancement is converting each image of dimension height (H) × width (W) × number of slices (D) into a single channel, as presented in Equation (1), where (i, j) is the location of the intensity value in the spatial dimension H × W and D is the number of slices. Then, each image is normalized into gray-scale as demonstrated in Equation (2), where the minimum and maximum values are the minimum and maximum intensity values of the H × W image. Consequently, the range of intensity values is between 0 and 255.
image_{i,j} = \frac{\sum_{d=1}^{D} image_{i,j,d}}{D}    (1)

image_{i,j} = \frac{\left(image_{i,j} - \mathrm{minimum\ value}\right) \times 255}{\mathrm{maximum\ value} - \mathrm{minimum\ value}}    (2)
Finally, linear transformation, log transformation, smoothing, and edge enhancement are applied for image enhancement [9]. Figure 1 shows an example of the image enhancement technique. Note that the image enhancement technique is used only as a pre-processing step of the mutation model to determine the mutated locations, where the CTP images are prepared to have a clear brain structure that can be observed by visualization, as presented in Figure 1. All 32 slices of each input modality are fed to the GAN model to mitigate information loss.
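The following NumPy sketch illustrates Equations (1) and (2). The function names are ours, and the later enhancement steps (linear transformation, log transformation, smoothing, and edge enhancement) are omitted.

    import numpy as np

    def to_single_channel(volume):
        """Equation (1): average the D slices of an (H, W, D) volume into a single (H, W) channel."""
        return volume.mean(axis=-1)

    def to_gray_scale(image, eps=1e-8):
        """Equation (2): min-max normalize an (H, W) image into the 0-255 gray-scale range."""
        min_value, max_value = image.min(), image.max()
        return (image - min_value) * 255.0 / (max_value - min_value + eps)  # eps guards against a constant image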

3.2. Mutation Model Using Distance Map

Two proposed modules use the CTP modality to generate synthetic images: the distance map module and the mutation module. The comprehensive model of this stage is illustrated in Algorithm 1, and Figure 2 shows an example of generating a new CTP image. Two generated distance maps are used to compute the average distance map: a horizontal and a vertical distance map. The horizontal map computes the distance between each pixel and its right neighbor, while the vertical map computes the distance between each pixel and its bottom neighbor. Then, a threshold is used to select all adjacent pixels belonging to a single region, where these pixels have similar intensity values.
Algorithm 1 Generate synthetic data using the mutation model
    procedure Mutation_Model(data1, label1, data2, label2, center_point)
    ▷ Input data1: the first input, consisting of five modalities with dimension H × W × D1 × N
    ▷ Input label1: the semantic label of data1 with dimension H × W × D1
    ▷ Input data2: the second input, consisting of five modalities with dimension H × W × D2 × N
    ▷ Input label2: the semantic label of data2 with dimension H × W × D2
    ▷ Input center_point: the determined center point (y, x) of the selected region
        ctp_image_1 ← the CTP modality of data1
        distance_map_1 ← DISTANCE_MAP_METHOD(ctp_image_1), as illustrated in Algorithm 2
        ctp_image_2 ← the CTP modality of data2
        distance_map_2 ← DISTANCE_MAP_METHOD(ctp_image_2), as illustrated in Algorithm 2
        angle ← ROTATION_METHOD(distance_map_1, distance_map_2), as illustrated in Algorithm 3
        rotated_image_2 ← rotate ctp_image_2 by angle
        locations ← SELECT_LOCATIONS_METHOD(rotated_image_2, center_point), as illustrated in Algorithm 4
        newData1, newLabel1 ← MUTATION_METHOD(data1, data2, label1, label2, locations, angle, center_point), as illustrated in Algorithm 5

        RETURN newData1, newLabel1
    end procedure
Algorithm 2 demonstrates the method for computing the distance map. The Euclidean distance is used to create the horizontal and vertical distance maps. First, a 2-dimensional (2D) array locates the intensity value of the right neighbor of each CTP image pixel. A second 2D array is used to adjust the spatial dimension of the CTP image to the same dimension as the first 2D array while preserving the location of each pixel. Then, the Euclidean distance is computed between every two corresponding pixels in these two 2D arrays. Second, the same scenario is used to compute the vertical distance map, where the first 2D array locates the intensity values of the bottom neighbor of each CTP image pixel. Finally, the distance map is the average of these two maps, normalized to gray-scale (a NumPy sketch follows Algorithm 2).
Algorithm 2 Distance map method
    procedure Distance_Map_Method(ctp_image)
    ▷ Input ctp_image: CTP image of dimension h × w × d
        ctp_image ← enhance the image as illustrated in Section 3.1

        im1 ← ctp_image[:, 1:]
        im2 ← ctp_image[:, :−1]
        horizontal_distance_map ← √((im1 − im2)²)

        im1 ← ctp_image[1:, :]
        im2 ← ctp_image[:−1, :]
        vertical_distance_map ← √((im1 − im2)²)

        distance_map ← (horizontal_distance_map + vertical_distance_map) / 2
        distance_map ← normalize distance_map to gray-scale as illustrated in Equation (2)

        RETURN distance_map of dimension (h − 1) × (w − 1)
    end procedure
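A minimal NumPy sketch of Algorithm 2 is given below. It assumes the input is an already-enhanced 2D CTP slice; cropping both maps to (h − 1) × (w − 1) before averaging and the function name are our assumptions.

    import numpy as np

    def distance_map(ctp_image):
        """Algorithm 2 sketch: per-pixel Euclidean distance to the right and bottom
        neighbours, averaged and normalized to gray-scale (Equation (2))."""
        ctp_image = np.asarray(ctp_image, dtype=float)                     # avoid uint8 wrap-around
        horizontal = np.sqrt((ctp_image[:, 1:] - ctp_image[:, :-1]) ** 2)  # distance to the right neighbour, (h, w-1)
        vertical = np.sqrt((ctp_image[1:, :] - ctp_image[:-1, :]) ** 2)    # distance to the bottom neighbour, (h-1, w)
        average = (horizontal[:-1, :] + vertical[:, :-1]) / 2.0            # crop both to (h-1, w-1) and average
        return to_gray_scale(average)                                      # reuse the Equation (2) helper above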
Algorithm 3 explains the image rotation method, which ensures that the two images have approximately the same structure rotation. The distance map has the shape of the brain structure, as presented in Figure 2. Therefore, the front and back regions of the brain structure are used to rotate the distance map of the second image so that it has a structure rotation similar to that of the first distance map. The Euclidean distance is used to find the best rotation angle, i.e., the angle that minimizes the distance between the two distance maps, using both the front and back parts. Then, the optimal angle is computed as the average of these two angles. Note that this angle is different for every pair of inputs (a sketch follows Algorithm 3).
Algorithm 4 illustrates the method of selecting a set of adjacent locations for the mutation process. A single point (y, x) is selected randomly from predetermined regions of the brain structure (center, top left, and top right) to prevent generating arbitrary regions. Then, a region is cropped from the distance map with the point (y, x) as its center. Algorithm 2 is used to compute the distance map of the cropped region, and a threshold is used to select a set of adjacent locations (pixels). These locations have approximately similar intensity values (a sketch follows Algorithm 4).
Finally, Algorithm 5 illustrates the mutation process. The selected adjacent locations are used to mutate the intensity values in all corresponding modalities of the two inputs. This is performed for each slice separately, and intensities equal to zero are ignored. Although the rotation method is used to achieve a similar structure rotation for the two images, the rotations are not exactly the same. Therefore, the Euclidean distance is used to determine the best new locations, where the intensity values of the first input have a minimum distance to the intensity values of the second one (a simplified sketch follows Algorithm 5).
Algorithm 3 Rotation method
    procedure Rotation_Method(img1, img2)
    ▷ Input img1: image of dimension h × w
    ▷ Input img2: image of dimension h × w
        best_front_angle ← 0
        best_back_angle ← 0
        mini_front_distance ← 10⁶
        mini_back_distance ← 10⁶
        img1_front ← img1[h/2 − 16 : h/2 + 16, 0 : w/10]
        img1_back ← img1[h/2 − 16 : h/2 + 16, w/10 :]
        FOR a ← 1 to 90
            FOR s ∈ {1, −1}
                rotated_img2 ← rotate img2 by an angle equal to a · s
                img2_front ← rotated_img2[h/2 − 16 : h/2 + 16, 0 : w/10]
                img2_back ← rotated_img2[h/2 − 16 : h/2 + 16, w/10 :]

                front_distance ← √( Σᵢ Σⱼ (img1_front[i, j] − img2_front[i, j])² )
                back_distance ← √( Σᵢ Σⱼ (img1_back[i, j] − img2_back[i, j])² )

                IF front_distance < mini_front_distance THEN mini_front_distance ← front_distance; best_front_angle ← a · s
                IF back_distance < mini_back_distance THEN mini_back_distance ← back_distance; best_back_angle ← a · s
        angle ← (best_front_angle + best_back_angle) / 2

        RETURN angle
    end procedure
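A NumPy/SciPy sketch of this angle search is shown below. The strip indexing follows the pseudocode; the function name and the use of scipy.ndimage.rotate are our assumptions.

    import numpy as np
    from scipy.ndimage import rotate

    def best_rotation_angle(map1, map2):
        """Algorithm 3 sketch: search angles of 1..90 degrees in both directions for the rotation
        of map2 whose front and back brain strips best match map1, then average the two best angles."""
        map1, map2 = np.asarray(map1, dtype=float), np.asarray(map2, dtype=float)
        h, w = map1.shape
        front = lambda img: img[h // 2 - 16:h // 2 + 16, 0:w // 10]
        back = lambda img: img[h // 2 - 16:h // 2 + 16, w // 10:]
        best_front, best_back = 0, 0
        min_front, min_back = np.inf, np.inf
        for a in range(1, 91):
            for s in (1, -1):
                rotated = rotate(map2, angle=a * s, reshape=False)
                d_front = np.sqrt(np.sum((front(map1) - front(rotated)) ** 2))
                d_back = np.sqrt(np.sum((back(map1) - back(rotated)) ** 2))
                if d_front < min_front:
                    min_front, best_front = d_front, a * s
                if d_back < min_back:
                    min_back, best_back = d_back, a * s
        return (best_front + best_back) / 2.0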
Algorithm 4 Select adjacent locations from the distance map for the mutation process
    procedure Select_Locations_Method(img, center_point)
    ▷ Input img: image of dimension h × w
    ▷ Input center_point: the determined center point (y, x) of the selected region
    ▷ L: length used for the cropped image
    ▷ T: threshold used to select adjacent locations
        cropped_img ← img[y − L : y + L, x − L : x + L]
        cropped_distance_map ← DISTANCE_MAP_METHOD(cropped_img), as illustrated in Algorithm 2
        locations ← list of all locations in cropped_distance_map whose distance values are less than T

        RETURN locations
    end procedure
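A short sketch of Algorithm 4 is shown below, reusing the distance_map helper sketched after Algorithm 2. L = 64 and T = 44 are the values reported in Section 4.1; the function name is ours.

    import numpy as np

    def select_locations(img, center_point, L=64, T=44):
        """Algorithm 4 sketch: crop a 2L x 2L window around center_point, compute its
        distance map, and keep the pixel locations whose distance value is below T."""
        y, x = center_point
        cropped = img[y - L:y + L, x - L:x + L]
        cropped_map = distance_map(cropped)       # Algorithm 2 sketch above
        return np.argwhere(cropped_map < T)       # (row, col) offsets of similar-intensity pixels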
Algorithm 5 Mutate regions to generate a new input
    procedure Mutation_Method(modalities_1, modalities_2, label1, label2, locations, angle, center_point)
    ▷ Input modalities_1: the first input, consisting of five modalities with dimension H × W × D1 × N
    ▷ Input modalities_2: the second input, consisting of five modalities with dimension H × W × D2 × N
    ▷ Input label1: the semantic label of modalities_1 with dimension H × W × D1
    ▷ Input label2: the semantic label of modalities_2 with dimension H × W × D2
    ▷ Input locations: the set of adjacent locations used for the mutation model
    ▷ Input angle: the rotation angle used to rotate the images of modalities_2
    ▷ Input center_point: the center point of the selected locations in modalities_2

        min_slices_num ← MIN(D1, D2)
        FOR i ← 1 to N
            M1 ← modalities_1[:, :, :, i]
            M2 ← modalities_2[:, :, :, i]
            M2 ← rotate M2 by angle

            IF i = 1 (the CTP modality)
                region ← M2[center_point − l : center_point + l, center_point − l : center_point + l, :][locations]
                region ← convert the CTP region to a single channel and normalize it using Equation (1) and Equation (2), respectively
                new_center ← the shifted center obtained by matching region against the CTP image of M1

                label1[new_center − l : new_center + l, new_center − l : new_center + l, :][locations] ←
                        label2[center_point − l : center_point + l, center_point − l : center_point + l, :][locations]
            END IF

            M1[new_center − l : new_center + l, new_center − l : new_center + l, :][locations] ←
                        M2[center_point − l : center_point + l, center_point − l : center_point + l, :][locations]

            newData1 ← append M1 along axis zero
        END FOR

        RETURN newData1, label1
    end procedure
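The sketch below is a simplified NumPy version of Algorithm 5. The per-slice search for the best-matching target locations is collapsed into a new_center argument, and the function and parameter names are our assumptions rather than the authors' implementation.

    import numpy as np
    from scipy.ndimage import rotate

    def mutate(modalities_1, modalities_2, label_1, label_2,
               locations, angle, center, new_center, L=64):
        """Simplified Algorithm 5: copy the selected region of every modality of the rotated
        second input into the first input, skip zero intensities, and update the label."""
        (cy, cx), (ny, nx) = center, new_center
        ys, xs = locations[:, 0], locations[:, 1]                # offsets inside the 2L x 2L window
        d = min(modalities_1.shape[2], modalities_2.shape[2])    # number of shared slices
        mutated, new_label = modalities_1.copy(), label_1.copy()
        for i in range(modalities_1.shape[3]):                   # loop over the N modalities
            m2 = rotate(modalities_2[..., i], angle, axes=(0, 1), reshape=False)
            src = m2[cy - L + ys, cx - L + xs, :d]                # selected region of the second input
            target = mutated[..., i]                              # view of modality i of the first input
            dst = target[ny - L + ys, nx - L + xs, :d]
            target[ny - L + ys, nx - L + xs, :d] = np.where(src != 0, src, dst)  # ignore zero intensities
        new_label[ny - L + ys, nx - L + xs, :d] = label_2[cy - L + ys, cx - L + xs, :d]
        return mutated, new_label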

3.3. Supervised GAN Model

The semi-supervised GAN model called few-shot 3D multi-model [8] was enhanced and transformed into a supervised GAN model to exploit its generator module to generate patches with similar distributions to the mutated patches. More details of the few-shot 3D multi-model [8] and its generator are illustrated in Section 3.3.1. Furthermore, the proposed supervised GAN model is presented in Section 3.3.2.

3.3.1. Semi-Supervised GAN Model

A set of labeled patches X and unlabeled patches U is generated from the original images and introduced into the few-shot 3D multi-model [8]. The model consists of two modules: a discriminator D and a generator G. The discriminator module differs from the standard GAN approach by using a single output layer that segments K + 1 classes. The first K probabilities are for segmentation, while the last probability (K + 1) predicts whether each voxel of the input is fake. The discriminator has three inputs: labeled patches, unlabeled patches, and fake patches X̄ generated by the generator module. The discriminator loss function consists of one loss function per input: the cross-entropy of the segmentation task uses the labeled patches, while both the unlabeled and fake patches serve the discriminator purpose through the (K + 1)-th output. In the generator module, vectors of random noise Z are fed to a deconvolution network to generate a set of fake patches X̄ = G(Z) with a distribution similar to that of the unlabeled patches. The feature matching (FM) loss is used to optimize the generator so that it produces features matching those of the unlabeled patches in the intermediate layers F of the discriminator:
\mathcal{L}_{FM} = \left\| \mathbb{E}_{U}\,\mathcal{F}(U) - \mathbb{E}_{Z}\,\mathcal{F}(G(Z)) \right\|_{2}^{2}

3.3.2. Proposed Supervised GAN Model

In contrast to the few-shot 3D multi-model [8], the proposed GAN model uses patches generated from the mutated images and ignores the unlabeled patches in order to exploit the entire knowledge and gain more meaningful information from labels. Furthermore, the few-shot 3D multi-model [8] was designed for semi-supervised datasets, while the proposed model uses the fully labeled ISLES-2018 dataset. The proposed model consists of two modules: a shared neural network SNN and a generator G, as illustrated in Figure 3. Inspired by the multi-channel segmentation (MCS) model [39], the segmentation S and discriminator D share the neural network architecture SNN to utilize a shared feature map and reduce the computation. Each of S and D has a separate output and loss function, and both loss functions optimize the weights of the SNN module. The SNN architecture is similar to the autoencoder module used in the few-shot 3D multi-model [8].
The SNN module has two inputs and three output layers. The inputs are the labeled patches X and the generated patches X̄ = G(Z). The outputs are the IS segmentation y_IS = SNN_D(X) and two binary classification outputs, y_X = SNN_E(X) and y_X̄ = SNN_E(G(Z)), where SNN_E and SNN_D denote the encoder and decoder, respectively. A fully connected layer with a soft-max activation function is applied to the latent vector Z̄ = SNN_E(·) to classify y_X and y_X̄ as fake or real patches. A cross-entropy loss function is used for each output, and the discriminator loss function is the sum of the two binary classification loss functions; it optimizes the SNN_E module. The proposed generator is similar to the generator in the few-shot 3D multi-model [8]; however, it uses the labeled patches X to compute the FM loss function:
\mathcal{L}_{FM} = \left\| \mathbb{E}_{X}\,\mathcal{F}(X) - \mathbb{E}_{Z}\,\mathcal{F}(G(Z)) \right\|_{2}^{2}
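To make the shared-network idea concrete, the following Keras sketch builds an encoder–decoder with a segmentation head and a real/fake head on the same backbone. The patch size, layer widths, and layer names are placeholders of ours, not the paper's exact architecture.

    import tensorflow as tf
    from tensorflow.keras import layers, Model

    def build_snn(patch_shape=(32, 32, 32, 5)):
        """Shared SNN sketch: one encoder (SNN_E) feeding a real/fake head (discriminator)
        and a decoder (SNN_D) producing the lesion segmentation."""
        inp = layers.Input(shape=patch_shape)
        x = layers.Conv3D(32, 3, strides=2, padding="same", activation="relu")(inp)        # encoder
        x = layers.Conv3D(64, 3, strides=2, padding="same", activation="relu")(x)
        z = layers.GlobalAveragePooling3D()(x)                                              # latent vector
        real_fake = layers.Dense(2, activation="softmax", name="discriminator")(z)          # real vs. fake patch
        y = layers.Conv3DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)  # decoder
        y = layers.Conv3DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
        seg = layers.Conv3D(2, 1, activation="softmax", name="segmentation")(y)             # lesion vs. background
        return Model(inp, [seg, real_fake])

    snn = build_snn()
    snn.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                loss={"segmentation": "categorical_crossentropy",
                      "discriminator": "categorical_crossentropy"})

In this sketch, both heads back-propagate into the shared encoder, mirroring how the segmentation and discriminator losses jointly optimize the SNN module.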

4. Experimental Results

The proposed supervised GAN model and the ISLES-2018 dataset are used to evaluate our main contribution, which is the mutation model. This section is organized as follows. Section 4.1 presents the model setting used for the experiment. Section 4.2 shows the performance results of the proposed model.

4.1. Experimental Settings

The experiments were conducted using Google Colab, with TensorFlow and Keras for the implementation. The initial values of the selected parameters are similar to those of the study in [8]. The Adam optimizer is used with an initial learning rate of 0.0001 and a batch size of 128. The number of epochs used to train the proposed model is 20, at which point the model has converged. The training images of the ISLES-2018 dataset are split into training and validation sets with a ratio of 80:10. The stride used to generate patches is (8, 8, 8), and the dimension of the random noise Z is 200. In the mutation model, the threshold is 44 and the maximum length used to crop images in all directions is 64. Therefore, the spatial dimension of the cropped region is 128 × 128, and the locations whose values in the distance map are less than the threshold are used for the mutation process.
In the training phase, each input image is mutated randomly for each batch, where each image has a probability of 10% of not being augmented. Therefore, the mutation model generates new synthetic training images at each epoch. In contrast, the original validation images are fed to the proposed GAN model without augmentation to evaluate the end-to-end model.
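A sketch of this per-batch augmentation policy is shown below. Here mutation_model is a placeholder name for the Algorithm 1 procedure (e.g., composed from the sketches above), and the random choice of the partner image and of the center-point bounds are our assumptions about details not spelled out in the text.

    import numpy as np

    rng = np.random.default_rng()

    def augment_batch(batch_data, batch_labels):
        """Mutate each image with probability 0.9 (10% are left unchanged), pairing it
        with a randomly chosen partner image from the same batch."""
        out_data, out_labels = [], []
        for data1, label1 in zip(batch_data, batch_labels):
            if rng.random() < 0.9:
                j = rng.integers(len(batch_data))                     # random partner input
                y, x = rng.integers(64, 192), rng.integers(64, 192)   # assumed bounds for the allowed regions
                data1, label1 = mutation_model(data1, label1,         # mutation_model: Algorithm 1 (placeholder)
                                               batch_data[j], batch_labels[j], (y, x))
            out_data.append(data1)
            out_labels.append(label1)
        return out_data, out_labels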

4.2. Results

Since the mutation and supervised GAN models are trained as an end-to-end model in order to generate and train on a variety of images, the performance increases by 2.54% on the validation set, as presented in Table 3. Because the validation set is not mutated, this shows that the mutation model succeeds in increasing and diversifying the training images by generating new images whose distribution does not differ from the original one. Consequently, it enhances the performance of the proposed model on the original validation set.
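The Dice coefficient reported in Table 3 follows its standard definition; the short NumPy sketch below computes it for binary lesion masks and is ours, not the authors' implementation.

    import numpy as np

    def dice_coefficient(pred_mask, true_mask, eps=1e-7):
        """Dice = 2 * |P intersection T| / (|P| + |T|) for binary lesion masks; eps avoids division by zero."""
        pred, true = pred_mask.astype(bool), true_mask.astype(bool)
        return (2.0 * np.logical_and(pred, true).sum() + eps) / (pred.sum() + true.sum() + eps)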
Table 4 presents the average loss value of training the SNN module over 20 epochs; it is slightly lower when training with the mutated images. The proposed model has an extremely high loss value in the first epoch, which then decreases significantly in the second epoch, as illustrated in Figure 4. However, the model suffers from overfitting in the early epochs, and the loss value is relatively high because the SNN loss function combines both the segmentation and discriminator loss functions. Nevertheless, the segmentation performance is satisfactory despite the many obstacles in the ISLES-2018 dataset. Table 5 shows that the proposed model outperforms the recall of other GAN models that use general augmentation techniques, such as Gaussian noise, by utilizing the Euclidean distance to select the adjacent locations with similar intensities for the mutation process. Recall has greater importance than precision here, as it represents the ratio of correctly identified positive samples to all actual positive samples [39].

5. Applications and Future Directions

CT scans are difficult to obtain, and the limited number of existing images for specific diseases is a significant challenge for deep learning models. At the same time, CT images are considered challenging datasets and potential resources for enhancing machine learning and deep learning models. Therefore, this section provides future directions by suggesting how the proposed augmentation technique could be exploited in various applications.

5.1. Augmentation Technique

In addition to increasing the number of images related to ischemic stroke lesion segmentation, the proposed mutation process could be used for a wide range of image processing applications and different types of images, such as breast lesion segmentation using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) [40].

5.2. Feature Extraction

The proposed distance map model, which finds adjacent regions, can be used and enhanced to extract further features that aid classification and segmentation tasks. For instance, the distance map could be used to extract features for detecting coronary artery lesions [41], diagnosing neurodegenerative diseases such as Alzheimer's disease [42], and segmenting retinal vessels [43]. Furthermore, it could be enhanced to extract features related to tissue volume, such as hepatic tumors [44] and the clinical evaluation of muscle volume [45]. Such features also aid the study of brain anatomy in neuroscience and neuroanatomy [46].
In addition, it could be integrated into different machine learning models for objectives unrelated to medical images, such as the Q-learning RL-based optimization algorithm (ROA) proposed for natural scene image classification [47].

6. Conclusions

In this paper, a mutation model based on a distance map has been presented and evaluated for generating a synthetic dataset, in order to handle the limited number of images in the ISLES-2018 dataset for semantic segmentation. The distance map preserves the structural shape of the human brain in the images and prevents duplication of intensity values in the selected locations used for the mutation process. Furthermore, a semi-supervised GAN model was modified into a supervised GAN and enhanced with a shared module for both segmentation and discriminator to reduce the complexity of the end-to-end model. The proposed mutation model improves the dice coefficient of the proposed supervised GAN model by 2.54% on the original validation set of the ISLES-2018 dataset. Moreover, it enhances the recall compared to existing GAN models, where recall is more important than precision here because it represents the ratio of correctly identified positive samples to all actual positives. However, the model suffers from overfitting, and the mutation model lacks adaptation. In future work, the self-organizing map (SOM) approach will be integrated into the proposed mutation model to achieve adaptation.

Author Contributions

Conceptualization, R.G. and A.K.; methodology, R.G. and A.K.; software, R.G. and A.K.; validation, R.G. and Q.A.A.-H.; formal analysis, R.G. and A.K.; investigation, R.G. and A.K.; resources, R.G., A.K. and Q.A.A.-H.; data curation, A.K.; writing—original draft preparation, R.G. and A.K.; writing—review and editing, R.G. and Q.A.A.-H.; visualization, R.G. and A.K.; supervision, R.G. and Q.A.A.-H.; funding acquisition, R.G. and Q.A.A.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the anonymous reviewers and the editor for their constructive comments and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used for the modalities mentioned in this paper:
CT      Computed tomography
CTP     Computed tomography perfusion
DWI     Diffusion-weighted imaging
CBF     Cerebral blood flow
CBV     Cerebral blood volume
Tmax    Time-to-maximum flow
MTT     Mean transit time
OT      Semantic segmentation label for IS lesions

References

  1. Biniaz, A.; Abbasi, A. Fast FCM with spatial neighborhood information for Brain Mr image segmentation. J. Artif. Intell. Soft Comput. Res. 2013, 3, 15–25. [Google Scholar] [CrossRef] [Green Version]
  2. Shi, L.; Copot, C.; Vanlanduit, S. Evaluating Dropout Placements in Bayesian Regression Resnet. J. Artif. Intell. Soft Comput. Res. 2022, 12, 61–73. [Google Scholar] [CrossRef]
  3. Hakim, A.; Christensen, S.; Winzeck, S.; Lansberg, M.G.; Parsons, M.W.; Lucas, C.; Robben, D.; Wiest, R.; Reyes, M.; Zaharchuk, G. Predicting Infarct Core From Computed Tomography Perfusion in Acute Ischemia With Machine Learning: Lessons From the ISLES Challenge. Stroke 2021, 52, 2328–2337. [Google Scholar] [CrossRef]
  4. Chlap, P.; Min, H.; Vandenberg, N.; Dowling, J.; Holloway, L.; Haworth, A. A review of medical image data augmentation techniques for deep learning applications. J. Med. Imaging Radiat. Oncol. 2021, 65, 545–563. [Google Scholar] [CrossRef]
  5. Sharma, N.; Aggarwal, L.M. Automated medical image segmentation techniques. J. Med. Phys. 2010, 35, 3. [Google Scholar] [CrossRef]
  6. Pröve, P.L.; Jopp-van Well, E.; Stanczus, B.; Morlock, M.M.; Herrmann, J.; Groth, M.; Säring, D.; Auf der Mauer, M. Automated segmentation of the knee for age assessment in 3D MR images using convolutional neural networks. Int. J. Leg. Med. 2019, 133, 1191–1205. [Google Scholar] [CrossRef] [PubMed]
  7. Al-Haija, Q.A.; Smadi, M.; Al-Bataineh, O.M. Early Stage Diabetes Risk Prediction via Machine Learning. In Proceedings of the 13th International Conference on Soft Computing and Pattern Recognition (SoCPaR 2021); Lecture Notes in Networks and Systems; Springer: Cham, Switzerland, 2022; Volume 417. [Google Scholar] [CrossRef]
  8. Mondal, A.; Dolz, J.; Desrosiers, C. Few-shot 3d multi-modal medical image segmentation using generative adversarial learning. arXiv 2018, arXiv:1810.12241. [Google Scholar]
  9. Amin, J.; Sharif, M.; Yasmin, M.; Saba, T.; Anjum, M.; Fernandes, S. A new approach for brain tumor segmentation and classification based on score level fusion using transfer learning. J. Med. Syst. 2019, 43, 326. [Google Scholar] [CrossRef]
  10. Wang, G.; Song, T.; Dong, Q.; Cui, M.; Huang, N.; Zhang, S. Automatic ischemic stroke lesion segmentation from computed tomography perfusion images by image synthesis and attention-based deep neural networks. Med. Image Anal. 2020, 65, 101787. [Google Scholar] [CrossRef]
  11. Platscher, M.; Zopes, J.; Federau, C. Image translation for medical image generation: Ischemic stroke lesion segmentation. Biomed. Signal Process. Control 2022, 72, 103283. [Google Scholar] [CrossRef]
  12. Tureckova, A.; Rodríguez-Sánchez, A. ISLES challenge: U-shaped convolution neural network with dilated convolution for 3D stroke lesion segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 319–327. [Google Scholar]
  13. Andersen, A.H.; Zhang, Z.; Avison, M.J.; Gash, D.M. Automated segmentation of multispectral brain MR images. J. Neurosci. Methods 2002, 122, 13–23. [Google Scholar] [CrossRef] [PubMed]
  14. Al-Haija, Q.A.; Smadi, M.; Al-Bataineh, O.M. Identifying Phasic dopamine releases using DarkNet-19 Convolutional Neural Network. In Proceedings of the 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 21–24 April 2021; pp. 1–5. [Google Scholar] [CrossRef]
  15. Diaz, O.; Kushibar, K.; Osuala, R.; Linardos, A.; Garrucho, L.; Igual, L.; Radeva, P.; Prior, F.; Gkontra, P.; Lekadir, K. Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools. Phys. Med. 2021, 83, 25–37. [Google Scholar] [CrossRef]
  16. Rezaei, M.; Yang, H.; Meinel, C. Learning imbalanced semantic segmentation through cross-domain relations of multi-agent generative adversarial networks. In Proceedings of the Medical Imaging 2019: Computer-Aided Diagnosis, San Diego, CA, USA, 16–21 February 2019; p. 1095027. [Google Scholar]
  17. Yang, H. Volumetric Adversarial Training for Ischemic Stroke Lesion Segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 343–351. [Google Scholar]
  18. Shen, C.; Roth, H.R.; Hayashi, Y.; Oda, M.; Miyamoto, T.; Sato, G.; Mori, K. Cascaded fully convolutional network framework for dilated pancreatic duct segmentation. Int. J. Comput. Assist. Radiol. Surg. 2022, 17, 343–354. [Google Scholar] [CrossRef] [PubMed]
  19. Roy, S.; Meena, T.; Lim, S.J. Attention UW-Net: A fully connected model for automatic segmentation and annotation of chest X-ray. Comput. Biol. Med. 2022, 150, 106083. [Google Scholar]
  20. Zhang, C.; Lu, J.; Hua, Q.; Li, C.; Wang, P. SAA-Net: U-shaped network with Scale-Axis-Attention for liver tumor segmentation. Biomed. Signal Process. Control 2022, 73, 103460. [Google Scholar] [CrossRef]
  21. Indraswari, R.; Kurita, T.; Arifin, A.Z.; Suciati, N.; Astuti, E.R. Multi-projection deep learning network for segmentation of 3D medical images. Pattern Recognit. Lett. 2019, 125, 791–797. [Google Scholar] [CrossRef]
  22. Hashemi, S.R.; Salehi, S.S.M.; Erdogmus, D.; Prabhu, S.P.; Warfield, S.K.; Gholipour, A. Asymmetric loss functions and deep densely-connected networks for highly-imbalanced medical image segmentation: Application to multiple sclerosis lesion detection. IEEE Access 2018, 7, 1721–1735. [Google Scholar] [CrossRef]
  23. Zhu, W.; Huang, Y.; Tang, H.; Qian, Z.; Du, N.; Fan, W.; Xie, X. Anatomynet: Deep 3d squeeze-and-excitation u-nets for fast and fully automated whole-volume anatomical segmentation. BioRxiv 2018, 392969. [Google Scholar] [CrossRef] [Green Version]
  24. Al-Haija, Q.A.; Adebanjo, A. Breast Cancer Diagnosis in Histopathological Images Using ResNet-50 Convolutional Neural Network. In Proceedings of the 2020 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Vancouver, BC, Canada, 9–12 September 2020; pp. 1–7. [Google Scholar] [CrossRef]
  25. Rezaei, M.; Yang, H.; Meinel, C. Recurrent generative adversarial network for learning imbalanced medical image semantic segmentation. Multimed. Tools Appl. 2020, 79, 15329–15348. [Google Scholar] [CrossRef]
  26. Wouters, A.; Robben, D.; Christensen, S.; Marquering, H.A.; Roos, Y.B.; van Oostenbrugge, R.J.; van Zwam, W.H.; Dippel, D.W.; Majoie, C.B.; Schonewille, W.J.; et al. Prediction of stroke infarct growth rates by baseline perfusion imaging. Stroke 2022, 53, 569–577. [Google Scholar] [CrossRef]
  27. Ischemic Stroke Lesion Segmentation (ISLES-2018). Available online: www.isles-challenge.org (accessed on 21 June 2022).
  28. Böhme, L.; Madesta, F.; Sentker, T.; Werner, R. Combining good old random forest and DeepLabv3+ for ISLES 2018 CT-based stroke segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 335–342. [Google Scholar]
  29. Abulnaga, S.; Rubin, J. Ischemic stroke lesion segmentation in CT perfusion scans using pyramid pooling and focal loss. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 352–363. [Google Scholar]
  30. Clerigues, A.; Valverde, S.; Bernal, J.; Freixenet, J.; Oliver, A.; Lladó, X. Acute ischemic stroke lesion core segmentation in CT perfusion images using fully convolutional neural networks. Comput. Biol. Med. 2019, 115, 103487. [Google Scholar] [CrossRef]
  31. Bertels, J.; Robben, D.; Vandermeulen, D.; Suetens, P. Contra-lateral information CNN for core lesion segmentation based on native CTP in acute stroke. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 263–270. [Google Scholar]
  32. Rezaei, M.; Yang, H.; Meinel, C. voxel-GAN: Adversarial framework for learning imbalanced brain tumor segmentation. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 321–333. [Google Scholar]
  33. Liu, P. Stroke lesion segmentation with 2D novel CNN pipeline and novel loss function. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 253–262. [Google Scholar]
  34. Khare, S.K.; Gaikwad, N.B.; Bajaj, V. VHERS: A Novel Variational Mode Decomposition and Hilbert Transform-Based EEG Rhythm Separation for Automatic ADHD Detection. IEEE Trans. Instrum. Meast. 2022, 71, 4008310. [Google Scholar] [CrossRef]
  35. Khare, S.K.; Bajaj, V.; Acharya, U.R. SPWVD-CNN for automated detection of schizophrenia patients using EEG signals. IEEE Trans. Instrum. Meas. 2021, 70, 2507409. [Google Scholar] [CrossRef]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  37. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  38. Liu, L.; Yang, S.; Meng, L.; Li, M.; Wang, J. Multi-scale deep convolutional neural network for stroke lesions segmentation on CT images. In Proceedings of the International MICCAI Brainlesion Workshop, Granada, Spain, 16 September 2018; pp. 283–291. [Google Scholar]
  39. Khalil, A.; Jarrah, M.; Al-Ayyoub, M.; Jararweh, Y. Text detection and script identification in natural scene images using deep learning. Comput. Electr. Eng. 2021, 91, 107043. [Google Scholar] [CrossRef]
  40. Chen, M.; Zheng, H.; Lu, C.; Tu, E.; Yang, J.; Kasabov, N. Accurate breast lesion segmentation by exploiting spatio-temporal information with deep recurrent and convolutional network. J. Ambient Intell. Humaniz. Comput. 2019, 1–9. [Google Scholar] [CrossRef]
  41. Zuluaga, M.A.; Hoyos, M.H.; Orkisz, M. Feature selection based on empirical-risk function to detect lesions in vascular computed tomography. IRBM. 2014, 35, 244–254. [Google Scholar] [CrossRef]
  42. Garali, I.; Adel, M.; Bourennane, S.; Ceccaldi, M.; Guedj, E. Brain region of interest selection for 18FDG positrons emission tomography computer-aided image classification. IRBM 2016, 37, 23–30. [Google Scholar] [CrossRef]
  43. Balasubramanian, K.; Ananthamoorthy, N.P. Robust retinal blood vessel segmentation using convolutional neural network and support vector machine. J. Ambient Intell. Humaniz. Comput. 2021, 12, 3559–3569. [Google Scholar] [CrossRef]
  44. Cohen, M.E.; Pellot-Barakat, C.; Tacchella, J.M.; Lefort, M.; De Cesare, A.; Lebenberg, J.; Souedet, N.; Lucidarme, O.; Delzescaux, T.; Frouin, F. Quantitative evaluation of rigid and elastic registrations for abdominal perfusion imaging with X-ray computed tomography. IRBM 2013, 34, 283–286. [Google Scholar] [CrossRef]
  45. Jolivet, E.; Daguet, E.; Bousson, V.; Bergot, C.; Skalli, W.; Laredo, J.D. Variability of hip muscle volume determined by computed tomography. IRBM 2009, 30, 14–19. [Google Scholar] [CrossRef]
  46. Mohtasebi, M.; Bayat, M.; Ghadimi, S.; Moghaddam, H.A.; Wallois, F. Modeling of neonatal skull development using computed tomography images. IRBM 2021, 42, 19–27. [Google Scholar] [CrossRef]
  47. Talaat, F.M.; Gamel, S.A. RL based hyper-parameters optimization algorithm (ROA) for convolutional neural network. J. Ambient Intell. Humaniz. Comput. 2022, 1–11. [Google Scholar] [CrossRef]
Figure 1. Example of image enhancement technique output for each step using CTP image. (a) Single channel, (b) gray-scale, (c) linear transformation, (d) log transformation, (e) smooth, and (f) edge enhancement.
Figure 2. Example of each step output of the mutation model using CTP image. (a) Distance map, (b) rotation, (c) mutation, and (d) show the adjacent location where the white regions are mutated.
Figure 3. The proposed end-to-end model.
Figure 4. Loss values of training the SNN module at each epoch, with and without the mutation model. The training set of the ISLES-2018 dataset is trained for 20 epochs.
Table 1. Brief descriptions of all modalities and their abbreviations used in the proposed model [3,27,28].
Abbreviation | Description
CT | Computed tomography
CTP | Computed tomography perfusion
DWI | Diffusion-weighted imaging
CBF | Cerebral blood flow
CBV | Cerebral blood volume
Tmax | Time-to-maximum flow
MTT | Mean transit time
OT | Semantic segmentation label for IS lesions
Table 2. Summary of the existing GAN and proposed models that use the ISLES-2018 dataset for evaluation.
References | Augmentation Method | Finding | Limitation
Wang et al., 2020 [10] | Synthesized pseudo-DWI module | The synthesized pseudo-DWI module uses the ISLES-2018 dataset to generate a synthetic dataset | The DWI label is used to switch lesion regions with normal regions without considering other possible choices for switching regions
Rezaei et al., 2019 [16] | General augmentation methods | General augmentation techniques are used to increase the training set | Only traditional augmentation processes, such as flipping and adding Gaussian noise, are used
Yang, 2018 [17] | - | The error of the discriminator is fed into the segmentation module for a second back-propagation | The generator module is excluded, and a synthetic dataset is not provided
Rezaei et al., 2018 [32] | Gaussian noise method | The Gaussian noise method is used to increase the training set | Gaussian noise is the only augmentation used
Liu, 2018 [33] | General augmentation methods and a translation method | General augmentation methods are used to increase the dataset, and the generator is used to translate the CTP modality into the DWI modality | Only traditional augmentation processes, such as flipping and scaling images, are used
Proposed model | Mutation model based on a distance map, integrated into the proposed GAN model | Presents a mutation model based on a distance map that randomly selects normal or damaged regions to generate a synthetic dataset and integrates it into the GAN model. Furthermore, a supervised GAN model is proposed to exploit the generator and gain information from labels. Finally, a shared network is used for the segmentation and discriminator to reduce GAN complexity | The proposed mutation model is not adaptive, and the proposed end-to-end model suffers from overfitting
Table 3. Performance enhancement obtained by integrating the proposed mutation model into the proposed supervised GAN model. The validation set of the ISLES-2018 dataset is used for evaluation.
Model | Dice
Supervised GAN without the integrated mutation model | 40.68%
Supervised GAN with the integrated mutation model | 43.22%
Table 4. Average loss value of training the SNN module for 20 epochs with and without the integrated mutation model, using the training set of the ISLES-2018 dataset.
Model | Loss
Supervised GAN without mutated images | 42.99%
Supervised GAN with mutated images | 42.86%
Table 5. Recall of the proposed model and existing GAN models using the training set of the ISLES-2018 dataset.
Model | Recall
Proposed model | 79.46%
3DJoinGANs [16] | 79.00%
Voxel-GAN [32] | 78.00%
