Indirect Volume Estimation for Acute Ischemic Stroke from Diffusion Weighted Image Using Slice Image Segmentation

The accurate estimation of acute ischemic stroke (AIS) volume using diffusion-weighted imaging (DWI) is crucial for assessing patients and guiding treatment options. This study aimed to propose a method that estimates AIS volume in DWI objectively, quickly, and accurately. We used a dataset of DWI with AIS, including 2159 participants (1179 for internal validation and 980 for external validation) with various types of AIS. We constructed algorithms using 3D segmentation (direct estimation) and 2D segmentation (indirect estimation) and compared their performances against annotations by neurologists. The proposed pretrained indirect model demonstrated higher segmentation performance than the direct model, with a sensitivity, specificity, F1-score, and Jaccard index of 75.0%, 77.9%, 76.0%, and 62.1%, respectively, for internal validation, and 72.8%, 84.3%, 77.2%, and 63.8%, respectively, for external validation. Volume estimation was also more reliable for the indirect model, with a volume similarity (VS) of 93.3% and a mean absolute error (MAE) of 0.797 cc for internal validation, and a VS of 89.2% and an MAE of 2.5 cc for external validation. These results suggest that the indirect model using 2D segmentation developed in this study can provide an accurate estimation of volume from DWI of AIS and may serve as a supporting tool to help physicians make crucial clinical decisions.


Introduction
Acute ischemic stroke (AIS) is a major cause of disability, and an accurate and rapid decision on treatment strategies is critical to improving patient outcomes [1,2]. Time is regarded as the most important factor in acute stroke triage, and reperfusion by thrombectomy with or without intravenous tissue plasminogen activator is the only way to successfully reverse ischemic changes [3]. Recent clinical trials, such as Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) [4], Diffusion and Perfusion Imaging Evaluations for Understanding Stroke Evolution Study (DEFUSE) [5], and Diffusion-weighted imaging (DWI) or Computerized Tomography Perfusion Assessment with Clinical Mismatch in the Triage of Wake-Up and Late Presenting Strokes Undergoing Neurointervention with Trevo (DAWN) [6], have demonstrated the value of evaluating the infarct volume and viable brain tissue on brain imaging. DWI is a commonly used magnetic resonance imaging (MRI) sequence with high sensitivity and a short acquisition time [7]. The initial signal change in DWI of AIS can indicate the infarct core that correlates with the final infarct volume; accurate segmentation of acute lesions on DWI is therefore essential for guiding treatment options and evaluating patients [7,8].
Various high-performance automatic segmentation methods have been developed for DWI of AIS [9][10][11], but their high dependence on handcrafted features limits their modeling capabilities. Additionally, AIS segmentation remains challenging due to various subtypes, many artifacts, multifocal distribution, and ill-defined stroke boundaries [12]. Detecting AIS lesions on DWI is crucial for diagnosis and treatment planning, but this task is laborious, time-consuming, and can involve physicians' subjectivity.
This study aimed to propose a method that estimates AIS volume in DWI objectively, quickly, and accurately. The proposed method performs segmentation on DWI and estimates the volume of the segmented AIS.

Study Population
Consecutive patients who underwent DWI for AIS were retrospectively recruited from two hospitals (Hallym University Chuncheon Sacred Heart Hospital (HUCSHH) and Kangwon National University Hospital (KNUH)). The AIS cases were collected from January 2011 to December 2019 at HUCSHH and from July 2014 to October 2019 at KNUH. Eventually, 2159 DWI images of AIS (for HUCSHH, 694 men and 485 women with a mean ± standard deviation age of 69.8 ± 12.7 years, and for KNUH, 555 men and 425 women with a mean ± standard deviation age of 72.4 ± 12.4 years) were included. Additionally, 121 DWI images of control subjects (for HUCSHH, 50 men and 52 women with a mean ± standard deviation age of 59.2 ± 16.4 years, and for KNUH, 10 men and 11 women with a mean ± standard deviation age of 44.0 ± 17.1 years) were included in the evaluation of the developed segmentation algorithm. These DWIs were from healthy participants or from subjects whose b = 1000 s/mm² images showed no lesions, although their b = 0 s/mm² images might show some old lesions. This retrospective study was approved by the Institutional Review Boards of HUCSHH and KNUH, which waived the requirement for informed consent (approval no. HUCSHH 2021-06-013 and KNUH-A-2021-021-001).

Annotation
All lesions on DWI were manually segmented by one neurologist (C.K.) at HUCSHH and by two neurologists (S.H.K. and J.-W.J.) at KNUH using ITK-SNAP, an open-source software application used to segment structures in medical images (http://www.itksnap.org, accessed on 30 December 2020). All images with masks were saved in Digital Imaging and Communications in Medicine (DICOM) format. Figure 1 shows examples of segmented images for large-sized and small-sized AIS. Figure 1. Example of (a) large-sized AIS on DWI and (b) small-sized AIS on DWI.

Method
The method we propose to execute the volume estimation for AIS is shown in Figure 2. It follows the process of image augmentation, training a segmentation model, and volume estimation using pixels/voxels.

Augmentation
Training segmentation models requires a large amount of data; however, it is difficult to train segmentation models sufficiently because our dataset is limited. To address the lack of data, we applied data augmentation: a random horizontal/vertical flip and random 90° rotation [13,14]. Figure 3 shows the results of the augmentation. The data were resized to 256 × 256 to shorten the training time and reduce the computational cost.
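The flip-and-rotate augmentation described above can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation; the function name and the use of a seeded generator are assumptions. The key point is that the same random transform must be applied to the image and its mask.

```python
import numpy as np

def augment(image, mask, rng):
    """Apply an identical random horizontal/vertical flip and random
    90-degree rotation to an image/mask pair (a sketch of the
    augmentations described in the text)."""
    if rng.random() < 0.5:                      # random horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                      # random vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    k = int(rng.integers(0, 4))                 # rotate by k * 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()

rng = np.random.default_rng(0)
img = np.arange(16).reshape(4, 4)
msk = (img > 7).astype(np.uint8)
aug_img, aug_msk = augment(img, msk, rng)
```

Because flips and 90° rotations only permute pixels, the augmented image contains exactly the original pixel values, and the mask stays binary and aligned with the image.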


Direct Volume Estimation (Using 3D Segmentation)
Direct volume estimation was executed by a 3D U-Net [15], a convolutional neural network (CNN) architecture for processing volumes. The 3D U-Net has a U-shaped structure and employs a symmetric encoder-decoder network with skip connections. The encoder path captures context by downsampling the spatial dimensions; it consists of three levels, each comprising two 3D convolutions with ReLU (rectified linear unit) activation and batch normalization, followed by a 2 × 2 max-pooling layer. The decoder path upsamples the encoder output to restore the spatial dimensions; it likewise consists of three levels, each comprising two 3D convolutions with ReLU activation and batch normalization, followed by a 2 × 2 up-convolution. Skip connections from the corresponding encoder blocks replenish the lost context: by adding a layer's input to its output, they compensate for information lost during convolution and help the network perform semantic segmentation well. Training 3D segmentation models requires substantial computational time and memory. Therefore, we performed transfer learning using models pretrained on auto implant and brain tumor tasks, respectively, and performed patch-based 3D segmentation to alleviate this problem. In our data, the AIS lesions were relatively small compared to the brain and sparsely located; extracting small patches from such data means many patches contain no AIS at all, resulting in class imbalance. We propose indirect volume estimation using 2D segmentation, described in the next section, to mitigate this imbalance problem.
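The class-imbalance problem of patch-based 3D segmentation can be made concrete with a small sketch. The volume size, patch size, and function below are illustrative assumptions, not the authors' code; the point is that when a lesion is small and sparse, almost all extracted patches contain no foreground at all.

```python
import numpy as np

def extract_patches(volume, patch=32, stride=32):
    """Split a 3D volume into non-overlapping cubic patches
    (a sketch of patch-based 3D segmentation)."""
    d, h, w = volume.shape
    out = []
    for z in range(0, d - patch + 1, stride):
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                out.append(volume[z:z + patch, y:y + patch, x:x + patch])
    return out

# A small, sparse "lesion" inside a larger volume: most patches end up
# with no foreground voxels, which is the class imbalance noted above.
vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[10:14, 10:14, 10:14] = 1                   # small AIS-like lesion
patches = extract_patches(vol)
with_lesion = sum(p.any() for p in patches)
print(len(patches), with_lesion)               # prints: 8 1
```

Here only one of eight patches contains any lesion voxels, so a loss computed per patch is dominated by background.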

Indirect Volume Estimation (Using 2D Segmentation)
Indirect volume estimation measures the volume from slice images after converting the volumetric image into slices. By dividing the volumetric image into cross-sectional images and using them as model inputs, the amount of training data increases and whole-brain cross-sectional information can be viewed, mitigating class imbalance and data shortage. Indirect volume estimation is executed by a 2D U-Net, which has the same architecture as the 3D U-Net but uses 2D convolutions instead of 3D convolutions. In a previous method [16], one channel was used as the input; however, Buda et al. [17] used the two adjacent slices together with each slice as a three-channel input to provide additional information, such as context and volumetric information. Similarly, our model also uses three input channels. Furthermore, we performed transfer learning using a model pretrained on glioma to improve the model's performance and reduce the time needed to train on AIS.
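The three-channel input built from adjacent slices can be sketched as below. This is a numpy illustration under an assumption the text does not state: the first and last slices are padded by repeating themselves so every slice gets two neighbors.

```python
import numpy as np

def three_channel_slices(volume):
    """Stack each slice with its two neighbors as a 3-channel input,
    following the adjacent-slice idea of Buda et al. [17].
    Edge slices are padded by repetition (an assumption)."""
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    # channels: previous slice, current slice, next slice
    return np.stack([padded[:-2], padded[1:-1], padded[2:]], axis=1)

vol = np.random.rand(20, 256, 256).astype(np.float32)   # N = 20 slices
inputs = three_channel_slices(vol)
print(inputs.shape)   # prints: (20, 3, 256, 256)
```

Each of the N slices becomes one training sample whose middle channel is the slice itself, so the slice-level dataset is N times larger than the volume-level one.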
To estimate the volume indirectly, we used the pixels of the mask predicted as AIS. The detailed algorithm for estimating the volume using the predicted mask is shown in Figure 4.
To obtain the actual volume, the scale factors sf_w and sf_h, for the width and height of the original image, respectively, are determined from the resized mask size. The spacing information, si, was obtained from the space between slices and the slice thickness in the DICOM header file. The volume of a pixel, V, is obtained by multiplying the scale factors by the spacing information as follows:

V = sf_w × sf_h × si. (1)

Letting H, W, and N be the height, width, and number of slices, respectively, we obtained the total number of predicted pixels, P, by accumulating all the predicted binary pixels, p, over all slices:

P = ∑_{n=1..N} ∑_{h=1..H} ∑_{w=1..W} p(n, h, w). (2)

The patient's AIS volume was then obtained by multiplying the volume, V, of one pixel by the total number of pixels, P.
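The volume computation of Equations (1) and (2) can be sketched as follows. The in-plane pixel spacing of the original image is assumed to be 1 mm here for simplicity; in practice it, like the slice spacing, would come from the DICOM header. Function names and the example numbers are illustrative.

```python
import numpy as np

def estimate_volume_cc(pred_mask, orig_size, spacing_mm):
    """Estimate lesion volume from a stack of predicted binary masks.

    pred_mask  : (N, H, W) binary masks at the resized resolution
    orig_size  : (orig_h, orig_w) pixel size of the original image
    spacing_mm : slice spacing (space between slices / thickness), mm
    """
    n, h, w = pred_mask.shape
    sf_h = orig_size[0] / h                    # scale factor, height
    sf_w = orig_size[1] / w                    # scale factor, width
    voxel_mm3 = sf_w * sf_h * spacing_mm       # V = sf_w * sf_h * si
    total_pixels = int(pred_mask.sum())        # P, cumulated over slices
    return voxel_mm3 * total_pixels / 1000.0   # mm^3 -> cc

mask = np.zeros((10, 256, 256), dtype=np.uint8)
mask[4, 100:110, 100:110] = 1                  # 100 predicted pixels
vol_cc = estimate_volume_cc(mask, (512, 512), spacing_mm=5.0)
print(vol_cc)   # 100 px * (2 * 2 * 5) mm^3 = 2000 mm^3 = 2.0 cc
```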

Loss Function
The Dice coefficient [18] compares the areas of the prediction mask and the label, and is adopted as the loss function, known as the Dice loss:

Dice loss = 1 − (2 ∑ y_true · y_pred) / (∑ y_true + ∑ y_pred), (3)

where y_true is the label for the pixel/voxel value and y_pred is the predicted probability for the pixel/voxel value. We trained the models with the Dice loss on DWI of AIS.
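A minimal numpy sketch of Equation (3) follows (the actual training used a PyTorch implementation; the small epsilon is an assumption to keep the loss defined on empty masks):

```python
import numpy as np

def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss: 1 - 2|A∩B| / (|A| + |B|), computed on
    label values and predicted probabilities (Equation (3))."""
    y_true = y_true.astype(np.float64).ravel()
    y_pred = y_pred.astype(np.float64).ravel()
    intersection = (y_true * y_pred).sum()
    return 1.0 - (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

perfect = dice_loss(np.ones(10), np.ones(10))            # ~0.0
disjoint = dice_loss(np.array([1, 1, 0, 0]),
                     np.array([0, 0, 1, 1]))             # ~1.0
```

A perfect overlap gives a loss near 0, and fully disjoint masks give a loss near 1, which is what makes the Dice loss suitable for sparse lesions where plain pixel-wise accuracy is dominated by background.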

Evaluation Metrics
To evaluate the segmentation models, we adopted the sensitivity, specificity, F1-score, and Jaccard index, defined as follows:

Sensitivity = TP / (TP + FN), (4)

Specificity = TN / (TN + FP), (5)

F1-score = 2TP / (2TP + FP + FN), (6)

Jaccard index = |prediction ∩ label| / |prediction ∪ label|. (7)
Because our study focused on the importance of accurate AIS segmentation, our metrics center on true positives (TP). In Equations (4)-(7), TP is the number of pixels correctly predicted as AIS, true-negative (TN) is the number of pixels correctly predicted as non-AIS, false-positive (FP) is the number of pixels mispredicted as AIS, and false-negative (FN) is the number of pixels mispredicted as non-AIS.
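Equations (4)-(7) can be computed directly from binary masks; a numpy sketch (function name illustrative):

```python
import numpy as np

def segmentation_metrics(pred, label):
    """Pixel-wise sensitivity, specificity, F1-score, and Jaccard
    index from binary prediction/label masks (Equations (4)-(7))."""
    pred, label = pred.astype(bool).ravel(), label.astype(bool).ravel()
    tp = int(np.sum(pred & label))    # correctly predicted AIS
    tn = int(np.sum(~pred & ~label))  # correctly predicted non-AIS
    fp = int(np.sum(pred & ~label))   # mispredicted as AIS
    fn = int(np.sum(~pred & label))   # mispredicted as non-AIS
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "f1": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
    }

pred = np.array([1, 1, 0, 0, 1, 0])
label = np.array([1, 0, 0, 0, 1, 1])
m = segmentation_metrics(pred, label)
# tp=2, tn=2, fp=1, fn=1 -> jaccard = 2/4 = 0.5, f1 = 4/6 ≈ 0.667
```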
To evaluate the volumes, we adopted the volume similarity (VS) [19] and the mean absolute error (MAE):

VS = 1 − |V_measurement − V_ground truth| / (V_measurement + V_ground truth), (8)

MAE = (1/n) ∑_{i=1..n} |V_measurement,i − V_ground truth,i|. (9)

VS estimates how similar the predicted AIS volume is to the labeled AIS volume. MAE is the average of the absolute errors, i.e., the differences between the actual and predicted values; it estimates the error between the predicted AIS and the labeled AIS. In Equations (8) and (9), the measurement is the volume of the predicted AIS and the ground truth is the volume of the labeled AIS.
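Equations (8) and (9) are simple to compute once per-patient volumes are available. The sketch below assumes VS is reported as a percentage, matching the tables; function names and example numbers are illustrative.

```python
def volume_similarity(v_pred, v_true):
    """VS = 1 - |Vp - Vt| / (Vp + Vt), reported as a percentage
    (Equation (8), following the definition in [19])."""
    return 100.0 * (1.0 - abs(v_pred - v_true) / (v_pred + v_true))

def mae(pred_volumes, true_volumes):
    """Mean absolute error between predicted and labeled volumes
    in cc (Equation (9))."""
    errors = [abs(p - t) for p, t in zip(pred_volumes, true_volumes)]
    return sum(errors) / len(errors)

vs = volume_similarity(9.0, 11.0)        # 1 - 2/20 = 0.9 -> 90.0
err = mae([10.0, 5.0], [9.0, 7.0])       # (1 + 2) / 2 = 1.5
```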

Implementation Details
All methods were implemented using the PyTorch framework (version 1.10) in Python 3.8.10 on Ubuntu, using four NVIDIA RTX 3090 GPUs, and have been included as core algorithms in AI-based medical solutions (ZioMed; ZIOVISION, Chuncheon, Korea). Internal validation was performed using data acquired from HUCSHH; we randomly split that dataset into training, validation, and testing sets at a ratio of 8:1:1. Data acquired from KNUH were used for external validation. Because loading the image and spacing information directly from DICOM files is time-consuming, the image and spacing information extracted from the DICOM files were saved in HDF5 format.

Segmentation Performance
We conducted an experiment to evaluate the segmentation performance of the methods on DWI of AIS. Tables 1 and 2 show the performance of the models that segment DWI of AIS for internal and external validation. On the internal validation data of patients with AIS, the F1-score of indirect segmentation with a model pretrained on brain glioma [17] was 76.02%. This was higher than those of the scratch-based indirect volume estimation model (73.09%), the scratch-based direct volume estimation model (54.76%), and direct volume estimation with models pretrained on auto implants and brain tumors (55.71% and 52.48%, respectively) [20]. On the external validation data of patients with AIS, the F1-score of indirect segmentation with a model pretrained on brain glioma [17] was 77.23%, higher than those of the scratch-based indirect segmentation model (73.93%), the scratch-based direct segmentation model (63.94%), and direct segmentation with models pretrained on auto implants and brain tumors (56.73% and 60.33%, respectively) [20]. Similarly, the Jaccard indices of the pretrained indirect model for internal and external validation were 62.12% and 63.82%, respectively, higher than those of the other models.

Volume Estimation
Using VS and MAE, we assessed whether the AIS volumes estimated by the proposed models are reliable. Table 3 shows the results of estimating the volume of the segmented AIS for internal and external validation. For direct volume estimation with a model pretrained on brain tumors, the VS and MAE were 67.68% and 1.159 cc in internal validation and 62.59% and 5.706 cc in external validation, respectively. For indirect volume estimation with a model pretrained on brain glioma, the VS and MAE were 93.25% and 0.797 cc in internal validation and 89.17% and 2.468 cc in external validation, respectively. In addition, to check for FP errors, our best model (indirect, pretrained) was validated on the control group, which had no AIS volume at all; the MAE values for internal and external validation were 0.028 cc and 0.009 cc, respectively.

Discussion
We proposed a novel method to estimate AIS volume and help doctors make quick, accurate decisions. The proposed method is an indirect volume estimation method that estimates AIS volume by converting a volumetric image into slice images; the volume is estimated using the pixels predicted as AIS in each slice. This approach is effective and attractive, with the advantage of low computational cost. We conducted internal and external validations to confirm this performance. Tables 1 and 2 show the performance of AIS segmentation. The internal validation F1-scores of the direct and indirect segmentation models were 55.71% and 76.02%, respectively, while the external validation F1-scores were 63.94% and 77.23%, respectively. Although direct segmentation on volumetric data was expected to perform well, this study surprisingly recorded better performance for indirect segmentation, which converts volumetric data into slice data (Figure 6A). This result can be explained as follows. First, the image information differed between images. Heterogeneity exists because of the devices used in each hospital and their settings (e.g., 1.5T, 3T), noise, etc., which can make it difficult to train models for both indirect and direct volume estimation. Additionally, the volume shape for each patient depends on the number of slices, pathological lesions, and slice width [21]. To resolve inconsistent information and image shapes, the input images are made consistent by resizing, padding, or cropping, or by matching the shape. However, in this process, small AIS lesions with few pixels along an axis, such as those in Figures 5 and 6B,C, are deformed, making them difficult to segment.

As mentioned in the Method section, patch-based 3D segmentation was performed for direct volume estimation. This patch-based 3D segmentation trains a model by transforming an input 3D image into 3D subsamples of arbitrary size.
However, patch-based 3D segmentation may fail to extract global features that reflect the actual image volume; because the AIS to be segmented is, as in our data, relatively small compared to the brain and sparsely located, it may show poor results. In addition, class imbalance may occur if an inappropriate patch size is used. Given these limitations of patch-based 3D segmentation, it can be inferred that direct volume estimation trains worse than indirect volume estimation.
In the case of direct segmentation, there was no significant difference in F1-score between the scratch model and the pretrained model. In the case of indirect segmentation, the pretrained model had a higher F1-score than the scratch model. The reasons for these results are as follows. First, what the pretrained models predict differs. In direct volume estimation, the model pretrained on auto implants predicts the shape and size of an implant from an image of a defective skull, unlike ours, which segments lesions in brain images. Moreover, the number of modalities and classes used by the pretrained models differs from ours. In direct volume estimation, the pretrained brain tumor segmentation model segments tumors composed of multiple labels, such as active tumor and edema, from multimodal MRI inputs such as FLAIR (fluid-attenuated inversion recovery), T1, and T2; our study, in contrast, segments AIS lesions from DWI alone. Therefore, its F1-score appears poor. In indirect volume estimation, however, the pretrained model, which segments gliomas from FLAIR alone, is similar to our study, which segments a single lesion type; hence, the pretrained model outperformed the scratch model through transfer-learning effects. Table 3 shows the results of estimating the actual volume with our proposed model pretrained on glioma, which showed the best performance. These results indicate that indirect volume estimation is more accurate than direct volume estimation and is reasonably stable. Considering only the F1-score of AIS segmentation, external validation seems to have performed better than internal validation; however, internal validation performed better in volume estimation. The reason is presumed to be as follows.
According to Tables 1 and 2, focusing on sensitivity (correctly identifying AIS pixels) and specificity (correctly identifying non-AIS pixels), external validation showed high specificity but low sensitivity, whereas internal validation showed no large gap between the two. In other words, in external validation, non-AIS pixels were identified well, but actual AIS pixels were not; hence, the error appeared larger when the volume was measured. These results are likely related to annotation: the internal validation data were segmented by one neurologist, whereas two neurologists segmented the external validation data, so the segmentations were expected to be less consistent due to inter-rater subjectivity. Mitigating such subjectivity is precisely why the proposed model is needed.
In AIS, estimating infarction volume is crucial for properly managing patients regarding clinical decisions and predicting outcomes [22,23]. Delayed treatment is well known to be associated with worse cerebral injury, hence the saying "time is brain" [1,24]. According to recent clinical trials, endovascular treatment has become the standard treatment for patients with AIS due to large artery occlusion [4][5][6]. In the DAWN trial, which extended the thrombectomy window from 6 h to 24 h, patient selection was based on the mismatch between clinical deficit and the specific infarct volume [6]. The DEFUSE 3 trial, which extended the thrombectomy window from 6 h to 16 h, also adopted an initial infarct volume of <70 mL and a ratio of the volume of ischemic tissue on perfusion imaging to infarct volume of ≥1.8 for endovascular therapy candidates [5]. In the EXTEND trial, which extended the thrombolysis window up to 9 h after stroke onset, perfusion lesion-ischemic core mismatch was defined as a ratio of 1.2, an absolute volume difference of >10 mL, and an ischemic core volume of <70 mL [4]. Therefore, fast decision-making in selecting patients for endovascular therapy might be supported by the precise and rapid estimation of infarct volume to produce maximal clinical outcomes [25]. Our model, which uses indirect volume estimation to provide a precise estimation of the volume of AIS lesions, could serve as a marker for selecting patients for endovascular treatment and predicting prognosis. Additionally, one of the most important future goals might be the implementation of such a model in clinical practice, for example in the critical pathway of AIS. In line with this, our model has a limitation in that it is based on DWI of MRI, which is available only at relatively larger centers. Thus, this study could serve as a starting point for reducing the door-to-needle time, based on future studies to validate the method with other imaging modalities, such as angiography or computed tomography.
In conclusion, indirect volume estimation showed better results than direct volume estimation. However, indirect volume estimation may encounter difficulty in segmenting sparse and small-sized AIS, because a small-sized AIS does not provide many axial slices useful for learning segmentation models [21]. In future studies, we will focus on improving the performance on small AIS through direct volume estimation. In addition, we will research and develop a system that can predict the patient's modified Rankin Score, enabling complex evaluations that combine image information with the patient's clinical information.

Informed Consent Statement:
Patient consent was waived because this was a retrospective study with de-identified data.

Data Availability Statement:
The data presented in this study are available upon request from the corresponding author.

Conflicts of Interest:
The authors declare no conflict of interest.