Article

Automated Endocardial Border Detection and Left Ventricular Functional Assessment in Echocardiography Using Deep Learning

1 Department of Cardiovascular Medicine, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
2 Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
3 Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
4 Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan
5 RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
6 Department of NCC Cancer Science, Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Biomedicines 2022, 10(5), 1082; https://doi.org/10.3390/biomedicines10051082
Submission received: 4 April 2022 / Revised: 2 May 2022 / Accepted: 4 May 2022 / Published: 6 May 2022
(This article belongs to the Special Issue Artificial Intelligence in Biological and Biomedical Imaging)

Abstract

Endocardial border detection is a key step in assessing left ventricular systolic function in echocardiography. However, this process is still not sufficiently accurate, and manual retracing is often required, making the assessment time-consuming and introducing intra-/inter-observer variability in clinical practice. To address these clinical issues, more accurate and standardized automatic endocardial border detection would be valuable. Here, we develop a deep learning-based method for automated endocardial border detection and left ventricular functional assessment in two-dimensional echocardiographic videos. First, segmentation of the left ventricular cavity was performed in the six representative projections for a cardiac cycle. We employed four segmentation methods: U-Net, UNet++, UNet3+, and Deep Residual U-Net. UNet++ and UNet3+ achieved sufficiently high mean intersection-over-union and Dice coefficient values. The accuracy of the four segmentation methods was then evaluated by calculating the mean estimation error of the echocardiographic indexes. UNet++ was superior to the other segmentation methods, with acceptable mean estimation errors of 10.8% for the left ventricular ejection fraction, 8.5% for the global longitudinal strain, and 5.8% for the global circumferential strain. Our method using UNet++ demonstrated the best performance. This method may potentially support examiners and improve the workflow in echocardiography.

1. Introduction

Two-dimensional (2D) echocardiography is extensively utilized in cardiovascular examination owing to its real-time and non-invasive nature. This imaging modality allows us to assess not only cardiovascular morphology but also its function through several quantitative or qualitative dynamic analyses, including Doppler imaging and regional wall motion analysis. However, echocardiographic images are acquired through manual sweep scanning, which means that image quality and diagnostic accuracy depend on the skill level of the examiner. Echocardiographic technologies, such as three-dimensional (3D) echocardiography and myocardial deformation imaging, have gradually evolved to improve probe capability, image quality, and the accuracy of functional analyses [1]. However, these latest technologies still demand expertise to acquire images of acceptable quality in 2D echocardiography.
Concerning echocardiographic functional analysis, the assessment of left ventricular systolic function is fundamental in diagnosing and managing cardiovascular diseases. The left ventricular ejection fraction (LVEF) is one of the major established echocardiographic indexes. LVEF is calculated from the end-diastolic and end-systolic volumes estimated by the biplane disk summation method (modified Simpson’s rule), which is based on left ventricular endocardial border detection [2]. Myocardial strain assessment has also developed along with recent technological progress in myocardial deformation imaging. Because several types of heart failure present with preserved ejection fraction, myocardial strain assessment is expected to be useful for the early detection of these cardiovascular diseases. Strain can be analyzed in three directions: longitudinal, circumferential, and radial. Among these strain indexes, the highest level of clinical evidence has been accumulated for the global longitudinal strain (GLS) [3]. Prior studies have reported that the reproducibility of GLS is superior to that of LVEF [4]. GLS is defined as the relative change in left ventricular myocardial length between end-diastole and end-systole [5]. This index is usually derived from the peak value of 2D longitudinal speckle tracking, which also requires endocardial border detection [6].
As mentioned above, endocardial border detection is a key step in assessing left ventricular systolic function. Currently, several commercially available ultrasound machines are equipped with semi-automatic techniques to detect the endocardial border [7]. However, their endocardial border detection lacks accuracy, so examiners often have to correct the initial endocardial contour manually in clinical practice. This subjective process is time-consuming and causes differences among examiners and devices. Therefore, more accurate and standardized automatic endocardial border detection would be valuable.
Artificial intelligence (AI), including machine learning and deep learning, has developed remarkably and has been applied to a wide range of medical research topics [8,9,10,11,12,13]. AI has the potential to perform tasks more rapidly and accurately than humans, especially in the field of medical imaging [14,15,16]. However, data acquisition with manual sweep scanning and acoustic shadows makes AI-based ultrasound imaging analysis more difficult than that of other medical imaging modalities. This deterioration in practical performance needs to be addressed by utilizing specialized algorithms and preprocessing [17,18,19]. The clinical application of AI may support examiners and improve the workflow in ultrasound imaging [20]. Prominent efforts have been made in medical AI research on echocardiography. An automated machine learning algorithm has been used to quickly measure dynamic left ventricular and atrial volumes in 3D echocardiography [21]. A method to detect cardiac events in echocardiography using 3D convolutional recurrent neural networks has also been developed [22]. Salte et al. proposed a fully automated pipeline to measure GLS using a deep learning-based motion estimation technology [23].
In this study, we introduce state-of-the-art segmentation methods of the left ventricular cavity in six representative projections in 2D echocardiographic videos. We compare their performance in endocardial border detection and left ventricular functional assessment.

2. Materials and Methods

2.1. Data Preparation

A total of 3938 ultrasound images from 154 echocardiographic videos of 29 subjects were used in this study. All subjects underwent echocardiography after providing written informed consent at the Tokyo Medical and Dental University Hospital (Tokyo, Japan) according to the guidelines of the American Society of Echocardiography (ASE) and the European Association of Cardiovascular Imaging (EACVI) [2]. All participants were men, and their mean age was 37 years (range, 20–60). The cohort comprised 27 healthy volunteers and 2 patients with cardiac diseases (bicuspid aortic valve and hypertensive heart disease). All subjects were enrolled in research protocols approved by the Institutional Review Boards of RIKEN, Fujitsu Ltd., Tokyo Medical and Dental University, and the National Cancer Center Japan (approval ID: Wako3 2019-36). Echocardiographic videos were acquired by board-certified specialists in echocardiography using Vivid E95® (GE Healthcare, Chicago, IL, USA) or EPIQ CVx® (Philips Healthcare, Amsterdam, The Netherlands). There was no bias in image quality attributable to the examiner who acquired the echocardiographic videos. The dataset of each subject involved several videos of the six representative projections: apical two-chamber view (2CV), apical three-chamber view (3CV), apical four-chamber view (4CV), and parasternal short-axis views at the apex (SA), mitral valve (SM), and papillary muscle (SP) levels. All methods were performed in accordance with the Ethical Guidelines for Medical and Health Research Involving Human Subjects. With regard to data handling, we followed the Data Handling Guidelines for the Medical AI Project at the National Cancer Center Japan (ver. 3.6 (2021)).

2.2. Data Preprocessing and Augmentation

The actual sections of the left ventricular endocardium were annotated pixel-by-pixel under the supervision of two cardiologists specializing in echocardiography to create the ground-truth labels. Twenty-three healthy volunteers were randomly assigned to the training dataset, and the remaining four healthy volunteers and two patients were assigned to the test dataset. The training dataset included 2798 images from 118 videos, and the images of each projection were split 4:1 into training and validation data. No subject appeared in both the training and validation datasets. The test dataset comprised 1140 images from 36 videos, consisting of 6 videos per projection (Table S1).
Since the amount of our data was limited, data augmentation was performed. Rotation, brightness, and contrast were changed for the training data (Figure S1). The image was rotated in the range of ±15 degrees. For brightness and contrast, the following conversion was performed using function src as the original image and function dst as the output image:
$$\mathrm{dst}(I) = \mathrm{saturate\_cast}(\mathrm{src}(I) \times \alpha + \beta).$$
The α ranged from 0.7 to 1.3 and β from −30 to 30. As a result of the data augmentation, the training data for each projection increased 21-fold.
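The brightness/contrast conversion above has the same form as OpenCV’s convertScaleAbs. Below is a minimal sketch of the augmentation step, assuming OpenCV and NumPy; the function name and file path are illustrative, and the parameter ranges follow the text:

```python
import cv2
import numpy as np

def augment(image, angle, alpha, beta):
    """Rotate by `angle` degrees, then apply
    dst(I) = saturate_cast(src(I) * alpha + beta)."""
    h, w = image.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(image, m, (w, h))
    return cv2.convertScaleAbs(rotated, alpha=alpha, beta=beta)

rng = np.random.default_rng(0)
image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
augmented = augment(
    image,
    angle=rng.uniform(-15, 15),   # rotation within +/- 15 degrees
    alpha=rng.uniform(0.7, 1.3),  # contrast gain in [0.7, 1.3]
    beta=rng.uniform(-30, 30),    # brightness offset in [-30, 30]
)
```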

2.3. Endocardial Border Detection and Left Ventricular Functional Assessment

Figure 1 shows a flow chart of the method used for endocardial border detection and left ventricular functional assessment. First, segmentation of the left ventricular cavity was performed in the six representative projections for a cardiac cycle. We employed four segmentation methods: U-Net [24], UNet++ [25], UNet3+ [26], and Deep Residual U-Net (ResUNet) [27]. The input and output images were resized to 256 × 256 pixels. Hyperparameters for each method were retrieved from the literature.
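The paper does not specify the implementation framework. As one plausible sketch, the segmentation-models-pytorch library (an assumption, not named in the paper) provides ready-made U-Net and UNet++ architectures for binary segmentation of the 256 × 256 inputs; the encoder choice here is illustrative:

```python
import torch
import segmentation_models_pytorch as smp  # assumed library, not named in the paper

model = smp.UnetPlusPlus(
    encoder_name="resnet34",  # illustrative backbone choice
    encoder_weights=None,
    in_channels=1,            # grayscale B-mode frame
    classes=1,                # left ventricular cavity vs. background
)
model.eval()

frame = torch.rand(1, 1, 256, 256)  # one resized echocardiographic frame
with torch.no_grad():
    mask = torch.sigmoid(model(frame)) > 0.5  # binary LV cavity mask
```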
Next, an end-diastolic frame and an end-systolic frame were detected from each echocardiographic video to measure LVEF and myocardial strain. According to the guidelines [2], end-diastole is preferably defined as the first frame after mitral valve closure or the frame in the cardiac cycle in which the respective left ventricular dimension or volume measurement is the largest. End-systole is best defined as the frame after aortic valve closure or the frame in which the cardiac dimension or volume is smallest. In this study, we defined the peak of the QRS complex as end-diastole. Therefore, the end-diastolic frame could be detected as the frame at which the red ECG marker on the echocardiographic video (Vivid E95®) reached its highest point. Because aortic valve closure could not be detected in this study, end-systole was defined as the frame at which the left ventricular segmentation area was smallest within a single cardiac cycle. Thus, we developed an automatic detection method for end-diastolic and end-systolic frames (Figure S2).
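A minimal sketch of the end-systolic frame selection described above: given per-frame binary masks of the left ventricular cavity over a single cardiac cycle, the end-systolic frame is simply the frame with the smallest segmented area (NumPy assumed; names illustrative):

```python
import numpy as np

def detect_end_systole(masks):
    """masks: binary LV cavity masks of shape (n_frames, H, W)
    covering one cardiac cycle starting at end-diastole.
    Returns the index of the frame with the smallest segmented area."""
    areas = masks.reshape(len(masks), -1).sum(axis=1)
    return int(np.argmin(areas))
```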
Additionally, the mitral valve annulus was detected to measure LVEF and GLS using the apical chamber views. Since the contour of the segmentation image is uneven, it was smoothed by morphological processing. Subsequently, we detected a straight line on the contour using the Hough transform and took its endpoints as the mitral valve annulus. Furthermore, to measure LVEF, the apex of the heart was detected as the contour point with the largest Euclidean distance from the midpoint of the mitral valve annulus (Figure 1b). For the measurement of global circumferential strain (GCS) using the parasternal short-axis views, we extracted the contour of the segmentation image at the end-diastolic and end-systolic frames (Figure 1c).
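A sketch of this landmark detection with OpenCV is shown below; the kernel size and Hough transform thresholds are illustrative assumptions, not values from the paper:

```python
import cv2
import numpy as np

def annulus_and_apex(mask):
    """mask: uint8 binary segmentation image of the LV cavity."""
    # Smooth the uneven segmentation contour by morphological opening and closing.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    smooth = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    smooth = cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, kernel)

    # Extract the contour and detect a straight segment on it with the
    # probabilistic Hough transform; its endpoints approximate the annulus.
    contours, _ = cv2.findContours(smooth, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    edges = np.zeros_like(smooth)
    cv2.drawContours(edges, [contour], -1, 255, 1)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))

    # The apex is the contour point farthest from the annulus midpoint.
    mid = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
    pts = contour.reshape(-1, 2)
    apex = pts[np.argmax(np.linalg.norm(pts - mid, axis=1))]
    return (x1, y1), (x2, y2), tuple(apex)
```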

2.4. Metrics

2.4.1. Segmentation Performance

Intersection over Union (IoU) and the Dice coefficient (Dice) are generally used to quantify the performance of segmentation methods. When true-positive pixels are defined as TP, false-positive pixels as FP, and false-negative pixels as FN, these indexes are calculated as follows:
$$\mathrm{IoU} = \frac{TP}{TP + FP + FN}, \qquad \mathrm{Dice} = \frac{2\,TP}{2\,TP + FP + FN}.$$
These metrics take values between 0 and 1, with values closer to 1 corresponding to better predictions. For each of the four segmentation methods, both IoU and Dice were calculated for all frames of each projection. For the inference results and ground-truth labels, the mean IoU (mIoU), the mean Dice (mDice), and the standard deviation were calculated for each projection. The performance of the four segmentation methods was evaluated using mIoU and mDice.
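For reference, both metrics can be computed directly from binary masks; a minimal NumPy sketch:

```python
import numpy as np

def iou_and_dice(pred, truth):
    """pred, truth: boolean arrays of identical shape."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fp + fn), 2 * tp / (2 * tp + fp + fn)
```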

2.4.2. LVEF

The biplane disk summation method (modified Simpson’s rule) is currently recommended for assessing LVEF by the consensus of the ASE and EACVI committee [2]. Following this method, we divided the long axis (L) of the apical two-chamber view and the apical four-chamber view into 20 disks, determined the inner diameters (a_i, b_i) of the short axis orthogonal to the long axis, and then approximated each disk as an elliptical column. The volume (V) was calculated using the following formula:
$$V = \frac{\pi}{4} \sum_{i=1}^{20} a_i b_i \frac{L}{20}.$$
LVEF is defined as the ratio of left ventricular stroke volume to left ventricular end-diastolic volume. The stroke volume of the left ventricle was calculated by subtracting the end-systolic volume (ESV) from the end-diastolic volume (EDV). Therefore, LVEF was calculated as follows:
$$\mathrm{LVEF}\;(\%) = \frac{EDV - ESV}{EDV} \times 100.$$
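A worked sketch of the two formulas above, assuming the per-disk inner diameters from the two apical views and the long axis have already been measured (NumPy assumed; names illustrative):

```python
import numpy as np

def biplane_volume(a, b, length, n_disks=20):
    """Modified Simpson's rule: V = (pi / 4) * sum_i(a_i * b_i) * (L / n).
    a, b: orthogonal inner diameters of each disk from 2CV and 4CV;
    length: long axis L."""
    return np.pi / 4 * np.sum(np.asarray(a) * np.asarray(b)) * length / n_disks

def lvef(edv, esv):
    """LVEF (%) = (EDV - ESV) / EDV * 100."""
    return (edv - esv) / edv * 100

# Usage: edv = biplane_volume(a_ed, b_ed, l_ed); esv = biplane_volume(a_es, b_es, l_es)
# ef = lvef(edv, esv)
```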

2.4.3. GLS and GCS

Myocardial strain assessment is used to evaluate aspects of left ventricular systolic function that cannot be stratified by LVEF. Regarding GLS, clinical evidence has accumulated, and it is expected to be useful for the early detection of heart failure with preserved ejection fraction and of myocardial disorders related to anticancer drug treatment [3]. Based on the Lagrangian analysis, the global strain was defined as the relative shortening of the whole endocardial contour length [5]. Both GLS and GCS define the relative change in the endocardial border length of the left ventricle between end-systole (L_ES) and end-diastole (L_ED). GLS and GCS are calculated as follows:
$$\mathrm{GLS}\;(\%) = \frac{L_{ES} - L_{ED}}{L_{ED}} \times 100, \qquad \mathrm{GCS}\;(\%) = \frac{L_{ES} - L_{ED}}{L_{ED}} \times 100.$$
According to the guidelines, GLS measurements should be made in the three standard apical views and averaged [2]. We further performed GCS measurements in the three standard parasternal short-axis views and calculated the average.
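A minimal sketch of the strain computation: the endocardial border length is approximated by summing distances between consecutive contour points, and the strain is the relative change between end-diastole and end-systole (NumPy assumed):

```python
import numpy as np

def contour_length(points, closed=False):
    """points: endocardial contour of shape (n, 2);
    closed=True for the ring-like short-axis contours."""
    pts = np.vstack([points, points[:1]]) if closed else points
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def global_strain(l_es, l_ed):
    """GLS or GCS (%) = (L_ES - L_ED) / L_ED * 100 (negative in systole)."""
    return (l_es - l_ed) / l_ed * 100
```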
Estimation errors of LVEF, GLS, and GCS were evaluated between the correct value derived from the ground-truth label and the value estimated from the segmentation image of each method. Since the relative error can take both positive and negative values, we averaged the absolute values of the relative error. The accuracy of the four segmentation methods was evaluated by calculating the mean and median values of the absolute relative error of each index.
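The accuracy evaluation then reduces to the mean and median of the absolute relative errors over the test videos; an illustrative sketch:

```python
import numpy as np

def error_summary(estimated, truth):
    """Mean and median of the absolute relative error in percent."""
    est, ref = np.asarray(estimated, float), np.asarray(truth, float)
    err = np.abs((est - ref) / ref) * 100
    return err.mean(), np.median(err)
```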

3. Results

3.1. Performance Comparison of the Segmentation Methods

Figure 2 shows representative segmentation images of the left ventricular cavity in the six projections for U-Net, UNet++, UNet3+, and ResUNet, respectively. The upper three rows represent the apical chamber views, including 2CV, 3CV, and 4CV. The lower three rows represent the parasternal short-axis views, including SA, SM, and SP. The red region represents the ground-truth label, and the green region represents the segmented left ventricular cavity.
Table 1 shows the quantitative evaluation of the segmentation results in the six projections for each method using mIoU and mDice. UNet++ yielded the highest values in 4CV, SM, and SP; the mIoU/mDice values were 0.871/0.929, 0.887/0.939, and 0.888/0.939, respectively. In contrast, UNet3+ yielded the highest values in 2CV, 3CV, and SA; the mIoU/mDice values were 0.891/0.942, 0.901/0.948, and 0.817/0.893, respectively. UNet++ and UNet3+ tended to demonstrate higher performance than U-Net and ResUNet in the left ventricular cavity segmentation experiments.

3.2. Left Ventricular Functional Assessment

Based on the segmentation images of the left ventricular cavity at end-diastole and end-systole by each method, the contour of each segmented area was extracted as an endocardial border, and left ventricular functional assessment was conducted by measuring LVEF, GLS, and GCS. Representative estimated images and a video of the endocardial border in the six projections using UNet++ are shown in Figure 3 and Video S1. The red line represents the ground-truth label, and the blue line represents the estimated endocardial border. The test dataset comprised 1140 images from 36 videos of 6 cases, consisting of 6 videos per projection. The accuracy of the four segmentation methods was evaluated by calculating the mean and median values of the estimation error of each index in the test data (Table 2). The estimation error with UNet++ was the smallest for LVEF, GLS, and GCS; the mean (median) errors were 10.8 (7.8)%, 8.5 (8.7)%, and 5.8 (5.2)%, respectively.

4. Discussion

To our knowledge, various AI-based analysis methods for ultrasound imaging have been reported, and the Food and Drug Administration in the United States has approved several AI-powered medical devices for ultrasound imaging [20]. However, difficulties in image quality control and acoustic shadows have slowed the progress of medical AI research and development in ultrasound imaging compared to other medical imaging modalities. To address these characteristic problems of ultrasound imaging, we previously proposed segmentation methods using time-series information in ultrasound videos [17,18] and a shadow estimation method using auto-encoders and synthetic shadows [19]. Furthermore, the clinical application of AI-powered medical devices remains challenging because of the black box problem; therefore, explainable AI needs to be considered in ultrasound imaging [28,29].
In this study, we focused on endocardial border detection and left ventricular functional assessment based on state-of-the-art segmentation methods of the left ventricular cavity in the six representative projections in 2D echocardiographic videos. As mentioned above, endocardial border detection is an important process, and several ultrasound machines are equipped with semi-automatic techniques to detect the endocardial border. However, these methods are still not sufficiently accurate, and manual retracing is often required, which is time-consuming and causes intra-/inter-observer variability in clinical practice. Moreover, there is a statistically significant variation in GLS measurement among vendors [7]. To address these clinical issues, developing accurate and standardized automatic endocardial border detection methods is important. Zyuzin et al. used U-Net to segment the left ventricular cavity in 4CV and identify the endocardial border on 2D echocardiographic images [30]. Their reported accuracy (mDice) of left ventricular segmentation was 0.923. EchoNet-Dynamic, a video-based deep learning algorithm, segmented the left ventricle in 4CV on 2D echocardiographic videos with an mDice of 0.920 [31]. Wu et al. evaluated their semi-supervised model on two public echocardiographic video datasets, achieving mDice values of 0.929 and 0.938 for left ventricular endocardium segmentation, respectively [32]. These reports demonstrated high segmentation performance for the left ventricular cavity, mainly in 4CV.
According to the ASE and EACVI guidelines, LVEF measurement by the biplane disk summation method (modified Simpson’s rule) is recommended, with reference to 2CV and 4CV. Furthermore, GLS measurements should be made in 2CV, 3CV, and 4CV and then averaged [2]. Although the clinical evidence for GCS remains limited, its clinical value may increase in the future. Therefore, segmentation of the left ventricular cavity was investigated in these six representative projections in this study. Regarding the segmentation of the left ventricle in the six projections, Kim et al. reported automatic segmentation of the left ventricle in echocardiographic images of pigs using convolutional neural networks; the mDice for left ventricular cavity segmentation was 0.903 and 0.912 for U-Net and segAN, respectively [33]. We employed four segmentation methods: U-Net, UNet++, UNet3+, and ResUNet. In these analyses, UNet++ yielded the highest values in 4CV, SM, and SP, whereas UNet3+ yielded the highest values in 2CV, 3CV, and SA. Compared to the abovementioned research in terms of mDice, UNet++ and UNet3+ demonstrated sufficiently high performance in the left ventricular cavity segmentation experiments.
Subsequently, the accuracy of the four segmentation methods was evaluated by calculating the mean and median values of the estimation error of the echocardiographic indexes. Our results demonstrated that the estimation error with UNet++ was the smallest for LVEF, GLS, and GCS. To assess these estimation errors, we should consider the clinical intra- and inter-observer reproducibility of these indexes. Chuang et al. reported an intra-observer variability of 13.4% and an inter-observer variability of 17.8% for LVEF in 2D echocardiography [34]. According to other reports, the inter-observer variation of LVEF can be as high as 13.9% [31,35]. Farsalinos et al. reported that intra-observer variability for GLS ranged from 4.9% to 7.3%, and inter-observer variability ranged from 5.3% to 8.6% [7]. In this study, UNet++ was superior to the other segmentation methods, with estimation accuracy of the echocardiographic indexes within clinical intra-/inter-observer variability. EchoNet-Dynamic predicted LVEF with mean absolute errors of 4.1% and 6.0% for two different datasets [31]. A prospective evaluation of the estimation accuracy of LVEF, GLS, and GCS using UNet++ on other datasets with repeated human measurements should be conducted in the future.
This study has several limitations. First, the number of test data from healthy volunteers and patients was limited. To prove the clinical value of our method, a prospective accuracy evaluation using large datasets would be needed, with k-fold cross-validation and subject stratification to avoid bias in clinical background, including age, sex, and type of cardiovascular disease. Second, we did not evaluate the influence of acoustic shadows in 2D echocardiographic videos. Because acoustic shadows affect image quality control, shadow detection and other preprocessing may need to be considered in future studies. Finally, all data were acquired by board-certified specialists in echocardiography using the same types of ultrasound equipment; we did not experiment with examiners of all experience levels or with other equipment. This is important because statistically significant differences in image quality and echocardiographic index measurements can occur among examiners and vendors. The generalization of our method to examiners of all experience levels and to other equipment in a clinical scenario is a subject for future studies.

5. Conclusions

We developed a deep learning-based method for automated endocardial border detection and left ventricular functional assessment in 2D echocardiographic videos. Our method using UNet++ demonstrated the best performance and has the potential to support examiners and improve the workflow in echocardiography. For future work, to improve the accuracy of our method for clinical application, we should continue to acquire further echocardiographic videos and perform a prospective evaluation using large datasets. From another perspective, it may be necessary to develop an image quality evaluation technique that determines in advance whether an acquired echocardiographic video is a suitable input for our method.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/biomedicines10051082/s1, Figure S1: data augmentation, Figure S2: extraction of the end-diastolic frame and end-systolic frame, Table S1: data preparation for deep learning, Video S1: endocardial border detection in the six projections using UNet++.

Author Contributions

Conceptualization, S.O. and M.K.; methodology, S.O., M.K., A.S. and S.Y.; software, S.O. and A.S.; validation, S.O., M.K. and A.S.; investigation, S.O., M.K., A.S. and R.A.; resources, H.A., M.O. and T.S.; data curation, S.O. and M.K.; writing—original draft preparation, S.O. and M.K.; writing—review and editing, A.S., H.A., M.O., R.A., S.Y., K.A., S.K., T.S. and R.H.; supervision, M.K. and R.H.; project administration, S.O. and M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the subsidy for the Advanced Integrated Intelligence Platform (MEXT) and the commissioned projects’ income of the RIKEN AIP-FUJITSU Collaboration Center.

Institutional Review Board Statement

The study was conducted in accordance with the guidelines of the Declaration of Helsinki. The study was approved by the IRB of RIKEN, Fujitsu Ltd., Tokyo Medical and Dental University, and the National Cancer Center Japan (approval ID: Wako3 2019-36, approval date: 4 February 2020).

Informed Consent Statement

The research protocol for each study was approved by the medical ethics committees of the collaborating research facilities. Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to Hisayuki Sano, Hiroyuki Yoshida, and all members of the Hamamoto laboratory for their helpful discussion and support.

Conflicts of Interest

R.H. received a joint research grant from Fujitsu Ltd. The other authors declare no conflict of interest.

References

  1. Lang, R.M.; Addetia, K.; Narang, A.; Mor-Avi, V. 3-dimensional echocardiography: Latest developments and future directions. JACC Cardiovasc. Imaging 2018, 11, 1854–1878. [Google Scholar] [CrossRef] [PubMed]
  2. Lang, R.M.; Badano, L.P.; Mor-Avi, V.; Afilalo, J.; Armstrong, A.; Ernande, L.; Flachskampf, F.A.; Foster, E.; Goldstein, S.A.; Kuznetsova, T.; et al. Recommendations for cardiac chamber quantification by echocardiography in adults: An update from the American Society of Echocardiography and the European Association of Cardiovascular Imaging. J. Am. Soc. Echocardiogr. 2015, 28, 1–39.e14. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Armstrong, A.C.; Ricketts, E.P.; Cox, C.; Adler, P.; Arynchyn, A.; Liu, K.; Stengel, E.; Sidney, S.; Lewis, C.E.; Schreiner, P.J.; et al. Quality control and reproducibility in M-mode, two-dimensional, and speckle tracking echocardiography acquisition and analysis: The CARDIA study, year 25 examination experience. Echocardiography 2015, 32, 1233–1240. [Google Scholar] [CrossRef] [PubMed]
  4. Karlsen, S.; Dahlslett, T.; Grenne, B.; Sjøli, B.; Smiseth, O.; Edvardsen, T.; Brunvand, H. Global longitudinal strain is a more reproducible measure of left ventricular function than ejection fraction regardless of echocardiographic training. Cardiovasc. Ultrasound 2019, 17, 1–12. [Google Scholar] [CrossRef] [Green Version]
  5. Riffel, J.H.; Keller, M.G.; Aurich, M.; Sander, Y.; Andre, F.; Giusca, S.; Aus dem Siepen, F.; Seitz, S.; Galuschky, C.; Korosoglou, G.; et al. Assessment of global longitudinal strain using standardized myocardial deformation imaging: A modality independent software approach. Clin. Res. Cardiol. 2015, 104, 591–602. [Google Scholar] [CrossRef]
  6. Voigt, J.U.; Pedrizzetti, G.; Lysyansky, P.; Marwick, T.H.; Houle, H.; Baumann, R.; Pedri, S.; Ito, Y.; Abe, Y.; Metz, S.; et al. Definitions for a common standard for 2D speckle tracking echocardiography: Consensus document of the EACVI/ASE/Industry Task Force to standardize deformation imaging. Eur. Heart J. Cardiovasc. Imaging 2015, 16, 1–11. [Google Scholar] [CrossRef]
  7. Farsalinos, K.E.; Daraban, A.M.; Ünlü, S.; Thomas, J.D.; Badano, L.P.; Voigt, J.-U. Head-to-head comparison of global longitudinal strain measurements among nine different vendors: The EACVI/ASE inter-vendor comparison study. J. Am. Soc. Echocardiogr. 2015, 28, 1171–1181.e1172. [Google Scholar] [CrossRef]
  8. Hamamoto, R.; Suvarna, K.; Yamada, M.; Kobayashi, K.; Shinkai, N.; Miyake, M.; Takahashi, M.; Jinnai, S.; Shimoyama, R.; Sakai, A.; et al. Application of artificial intelligence technology in oncology: Towards the establishment of precision medicine. Cancers 2020, 12, 3532. [Google Scholar] [CrossRef]
  9. Asada, K.; Takasawa, K.; Machino, H.; Takahashi, S.; Shinkai, N.; Bolatkan, A.; Kobayashi, K.; Komatsu, M.; Kaneko, S.; Okamoto, K.; et al. Single-cell analysis using machine learning techniques and its application to medical research. Biomedicines 2021, 9, 1513. [Google Scholar] [CrossRef]
  10. Hamamoto, R.; Komatsu, M.; Takasawa, K.; Asada, K.; Kaneko, S. Epigenetics analysis and integrated analysis of multiomics data, including epigenetic data, using artificial intelligence in the era of precision medicine. Biomolecules 2019, 10, 62. [Google Scholar] [CrossRef] [Green Version]
  11. Asada, K.; Kaneko, S.; Takasawa, K.; Machino, H.; Takahashi, S.; Shinkai, N.; Shimoyama, R.; Komatsu, M.; Hamamoto, R. Integrated Analysis of Whole Genome and Epigenome Data Using Machine Learning Technology: Toward the Establishment of Precision Oncology. Front. Oncol. 2021, 11, 666937. [Google Scholar] [CrossRef]
  12. Takahashi, S.; Asada, K.; Takasawa, K.; Shimoyama, R.; Sakai, A.; Bolatkan, A.; Shinkai, N.; Kobayashi, K.; Komatsu, M.; Kaneko, S.; et al. Predicting deep learning based multi-omics parallel integration survival subtypes in lung cancer using reverse phase protein array data. Biomolecules 2020, 10, 1460. [Google Scholar] [CrossRef]
  13. Asada, K.; Komatsu, M.; Shimoyama, R.; Takasawa, K.; Shinkai, N.; Sakai, A.; Bolatkan, A.; Yamada, M.; Takahashi, S.; Machino, H.; et al. Application of artificial intelligence in COVID-19 diagnosis and therapeutics. J. Pers. Med. 2021, 11, 886. [Google Scholar] [CrossRef]
  14. De Fauw, J.; Ledsam, J.R.; Romera-Paredes, B.; Nikolov, S.; Tomasev, N.; Blackwell, S.; Askham, H.; Glorot, X.; O’Donoghue, B.; Visentin, D.; et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat. Med. 2018, 24, 1342–1350. [Google Scholar] [CrossRef]
  15. Yamada, M.; Saito, Y.; Imaoka, H.; Saiko, M.; Yamada, S.; Kondo, H.; Takamaru, H.; Sakamoto, T.; Sese, J.; Kuchiba, A.; et al. Development of a real-time endoscopic image diagnosis support system using deep learning technology in colonoscopy. Sci. Rep. 2019, 9, 14465. [Google Scholar] [CrossRef] [Green Version]
  16. Takahashi, S.; Takahashi, M.; Kinoshita, M.; Miyake, M.; Kawaguchi, R.; Shinojima, N.; Mukasa, A.; Saito, K.; Nagane, M.; Otani, R.; et al. Fine-tuning approach for segmentation of gliomas in brain magnetic resonance images with a machine learning method to normalize image differences among facilities. Cancers 2021, 13, 1415. [Google Scholar] [CrossRef]
  17. Dozen, A.; Komatsu, M.; Sakai, A.; Komatsu, R.; Shozu, K.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Image segmentation of the ventricular septum in fetal cardiac ultrasound videos based on deep learning using time-series information. Biomolecules 2020, 10, 1526. [Google Scholar] [CrossRef]
  18. Shozu, K.; Komatsu, M.; Sakai, A.; Komatsu, R.; Dozen, A.; Machino, H.; Yasutomi, S.; Arakaki, T.; Asada, K.; Kaneko, S.; et al. Model-agnostic method for thoracic wall segmentation in fetal ultrasound videos. Biomolecules 2020, 10, 1691. [Google Scholar] [CrossRef]
  19. Yasutomi, S.; Arakaki, T.; Matsuoka, R.; Sakai, A.; Komatsu, R.; Shozu, K.; Dozen, A.; Machino, H.; Asada, K.; Kaneko, S.; et al. Shadow estimation for ultrasound images using auto-encoding structures and synthetic shadows. Appl. Sci. 2021, 11, 1127. [Google Scholar] [CrossRef]
  20. Komatsu, M.; Sakai, A.; Dozen, A.; Shozu, K.; Yasutomi, S.; Machino, H.; Asada, K.; Kaneko, S.; Hamamoto, R. Towards clinical application of artificial intelligence in ultrasound imaging. Biomedicines 2021, 9, 720. [Google Scholar] [CrossRef]
  21. Narang, A.; Mor-Avi, V.; Prado, A.; Volpato, V.; Prater, D.; Tamborini, G.; Fusini, L.; Pepi, M.; Goyal, N.; Addetia, K.; et al. Machine learning based automated dynamic quantification of left heart chamber volumes. Eur. Heart J. Cardiovasc. Imaging 2019, 20, 541–549. [Google Scholar] [CrossRef]
  22. Fiorito, A.M.; Østvik, A.; Smistad, E.; Leclerc, S.; Bernard, O.; Lovstakken, L. Detection of cardiac events in echocardiography using 3D convolutional recurrent neural networks. In Proceedings of the 2018 IEEE International Ultrasonics Symposium (IUS), Kobe, Japan, 22–25 October 2018; pp. 1–4. [Google Scholar]
  23. Salte, I.M.; Østvik, A.; Smistad, E.; Melichova, D.; Nguyen, T.M.; Karlsen, S.; Brunvand, H.; Haugaa, K.H.; Edvardsen, T.; Lovstakken, L.; et al. Artificial intelligence for automatic measurement of left ventricular strain in echocardiography. JACC Cardiovasc. Imaging 2021, 14, 1918–1928. [Google Scholar] [CrossRef]
  24. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; pp. 234–241. [Google Scholar]
  25. Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. UNet++: A nested U-Net architecture for medical image segmentation. In Proceedings of the Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Granada, Spain, 20 September 2018; pp. 3–11. [Google Scholar]
  26. Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; pp. 1055–1059. [Google Scholar]
  27. Zhang, Z.; Liu, Q.; Wang, Y. Road extraction by deep residual U-Net. IEEE Geosci. Remote Sens. Lett. 2018, 15, 749–753. [Google Scholar] [CrossRef] [Green Version]
  28. Komatsu, M.; Sakai, A.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Shozu, K.; Dozen, A.; Machino, H.; Hidaka, H.; Arakaki, T.; et al. Detection of cardiac structural abnormalities in fetal ultrasound videos using deep learning. Appl. Sci. 2021, 11, 371. [Google Scholar] [CrossRef]
  29. Sakai, A.; Komatsu, M.; Komatsu, R.; Matsuoka, R.; Yasutomi, S.; Dozen, A.; Shozu, K.; Arakaki, T.; Machino, H.; Asada, K.; et al. Medical professional enhancement using explainable artificial intelligence in fetal cardiac ultrasound screening. Biomedicines 2022, 10, 551. [Google Scholar] [CrossRef]
  30. Zyuzin, V.; Sergey, P.; Mukhtarov, A.; Chumarnaya, T.; Solovyova, O.; Bobkova, A.; Myasnikov, V. Identification of the left ventricle endocardial border on two-dimensional ultrasound images using the convolutional neural network Unet. In Proceedings of the 2018 Ural Symposium on Biomedical Engineering, Radioelectronics and Information Technology (USBEREIT), Yekaterinburg, Russia, 7–8 May 2018; pp. 76–78. [Google Scholar]
  31. Ouyang, D.; He, B.; Ghorbani, A.; Yuan, N.; Ebinger, J.; Langlotz, C.P.; Heidenreich, P.A.; Harrington, R.A.; Liang, D.H.; Ashley, E.A.; et al. Video-based AI for beat-to-beat assessment of cardiac function. Nature 2020, 580, 252–256. [Google Scholar] [CrossRef]
  32. Wu, H.; Liu, J.; Xiao, F.; Wen, Z.; Cheng, L.; Qin, J. Semi-supervised segmentation of echocardiography videos via noise-resilient spatiotemporal semantic calibration and fusion. Med. Image Anal. 2022, 78, 102397. [Google Scholar] [CrossRef]
  33. Kim, T.; Hedayat, M.; Vaitkus, V.V.; Belohlavek, M.; Krishnamurthy, V.; Borazjani, I. Automatic segmentation of the left ventricle in echocardiographic images using convolutional neural networks. Quant. Imaging Med. Surg. 2021, 11, 1763–1781. [Google Scholar] [CrossRef]
  34. Chuang, M.L.; Hibberd, M.G.; Salton, C.J.; Beaudin, R.A.; Riley, M.F.; Parker, R.A.; Douglas, P.S.; Manning, W.J. Importance of imaging method over imaging modality in noninvasive determination of left ventricular volumes and ejection fraction: Assessment by two- and three-dimensional echocardiography and magnetic resonance imaging. J. Am. Coll. Cardiol. 2000, 35, 477–484. [Google Scholar] [CrossRef]
  35. Cole, G.D.; Dhutia, N.M.; Shun-Shin, M.J.; Willson, K.; Harrison, J.; Raphael, C.E.; Zolgharni, M.; Mayet, J.; Francis, D.P. Defining the real-world reproducibility of visual grading of left ventricular function and visual estimation of left ventricular ejection fraction: Impact of image quality, experience and accreditation. Int. J. Cardiovasc. Imaging 2015, 31, 1303–1314. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Flow chart of endocardial border detection and left ventricular functional assessment. (a) Four segmentation methods of the left ventricular cavity were evaluated in the six projections. After automatic detection of end-diastolic and end-systolic frames and extraction of the contour as an endocardial border, the echocardiographic indexes were measured using the apical chamber views (b) and the parasternal short-axis views (c). ED, end-diastolic; ES, end-systolic; LVEF, left ventricular ejection fraction; GLS, global longitudinal strain; GCS, global circumferential strain.
Figure 2. Representative segmentation images of the left ventricular cavity in the six projections for the 4 methods. The red region represents the ground-truth label, and the green region represents the segmented left ventricular cavity. GT, ground truth; 2CV, apical two-chamber view; 3CV, apical three-chamber view; 4CV, apical four-chamber view; SA, parasternal short-axis view (apex level); SM, parasternal short-axis view (mitral valve level); SP, parasternal short-axis view (papillary muscle level).
Figure 3. Representative estimated images of the endocardial border in the six projections using UNet++. The red line represents the ground-truth label, and the blue line represents the estimated endocardial border.
Table 1. Evaluation of segmentation results in the six projections for each method using mIoU and mDice.
Method    Projection   mIoU            mDice
U-Net     2CV          0.855 ± 0.068   0.920 ± 0.041
          3CV          0.752 ± 0.137   0.851 ± 0.097
          4CV          0.816 ± 0.100   0.895 ± 0.063
          SA           0.670 ± 0.153   0.791 ± 0.125
          SM           0.841 ± 0.090   0.911 ± 0.057
          SP           0.813 ± 0.093   0.893 ± 0.062
UNet++    2CV          0.890 ± 0.042   0.941 ± 0.024
          3CV          0.886 ± 0.034   0.939 ± 0.019
          4CV          0.871 ± 0.067   0.929 ± 0.040
          SA           0.808 ± 0.125   0.887 ± 0.099
          SM           0.887 ± 0.066   0.939 ± 0.039
          SP           0.888 ± 0.064   0.939 ± 0.040
UNet3+    2CV          0.891 ± 0.039   0.942 ± 0.022
          3CV          0.901 ± 0.028   0.948 ± 0.016
          4CV          0.864 ± 0.063   0.926 ± 0.039
          SA           0.817 ± 0.116   0.893 ± 0.095
          SM           0.887 ± 0.079   0.938 ± 0.047
          SP           0.873 ± 0.084   0.930 ± 0.056
ResUNet   2CV          0.851 ± 0.056   0.919 ± 0.034
          3CV          0.837 ± 0.063   0.910 ± 0.038
          4CV          0.822 ± 0.088   0.900 ± 0.057
          SA           0.732 ± 0.155   0.834 ± 0.130
          SM           0.834 ± 0.090   0.907 ± 0.057
          SP           0.814 ± 0.082   0.895 ± 0.056
The values are mean ± standard deviation. mIoU, the mean value of Intersection over Union; mDice, the mean value of Dice.
Table 2. Accuracy evaluation of echocardiographic indexes for each method using the mean and median values for the estimation error.
Method    LVEF Mean   LVEF Median   GLS Mean   GLS Median   GCS Mean   GCS Median
U-Net     24.3        23.3          36.4       37.7         17.7       14.7
UNet++    10.8        7.8           8.5        8.7          5.8        5.2
UNet3+    11.7        10.7          14.6       16.0         6.4        5.2
ResUNet   12.5        13.9          13.0       15.7         16.2       22.3
The values are estimation errors [%].

