Review

Current Status of Artificial Intelligence-Based Computer-Assisted Diagnosis Systems for Gastric Cancer in Endoscopy

1 Department of Surgical Oncology, Faculty of Medicine, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033, Japan
2 Tomohiro Tada the Institute of Gastroenterology and Proctology, Musashi-Urawa, Saitama 336-0021, Japan
3 AI Medical Service Inc., Toshima-ku, Tokyo 104-0061, Japan
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(12), 3153; https://doi.org/10.3390/diagnostics12123153
Submission received: 1 November 2022 / Revised: 7 December 2022 / Accepted: 10 December 2022 / Published: 13 December 2022
(This article belongs to the Special Issue Advanced Endoscopic Imaging in Gastrointestinal Diseases)

Abstract:
Artificial intelligence (AI) is gradually being utilized in various fields as its performance has improved with the development of deep learning methods, the availability of big data, and advances in computer processing power. In the field of medicine, AI is mainly applied to image recognition, such as radiographic and pathologic diagnoses. In gastrointestinal endoscopy, AI-based computer-assisted detection/diagnosis (CAD) systems have been applied in some areas, such as colorectal polyp detection and diagnosis, but their implementation in real-world clinical settings remains limited. The accurate detection and diagnosis of gastric cancer (GC) is one of the tasks in which performance varies greatly depending on the endoscopist’s skill. Diagnosing early GC is especially challenging, partly because early GC mimics the atrophic gastritis of the background mucosa. Several CAD systems for GC are therefore being actively developed. Developing a CAD system for GC is considered difficult because it requires a large number of GC images; in particular, early-stage GC images are rarely available, partly because GC is hard to diagnose at an early stage. Additionally, the training images must be of sufficiently high quality for proper CAD training. Recently, several AI systems for GC that exhibit robust performance, owing to training on large numbers of high-quality images, have been reported. This review outlines the current status and prospects of AI use in esophagogastroduodenoscopy (EGDS), focusing on the diagnosis of GC.

1. Background

Artificial intelligence (AI) is a program that changes its behavior (output) in response to surrounding circumstances (input) and is constructed by simulating human intelligence. In recent years, the image recognition capabilities of AI have been improved by the emergence of machine learning methods known as deep learning, improvement in computer performance and affordability, and accumulation of large amounts of digital data. In certain areas, the performance of AI has been suggested to surpass that of humans [1,2]. In the field of medicine, AI is expected to be utilized, especially for diagnosing medical images, such as radiologic, pathologic, and endoscopic images.
Gastrointestinal endoscopy plays an essential role in the diagnosis and treatment of gastrointestinal disease. One of the challenges of endoscopy is that examination quality varies greatly among examiners, as it requires not only expert technical skills but also appropriate diagnostic skills for various types of gastrointestinal diseases. To address this issue, AI-based computer-assisted detection/diagnosis (CAD) systems have been actively studied. In the field of colonoscopy, there have been many publications, including randomized controlled trials (RCTs) and meta-analyses, because several endoscopic AI systems for colonoscopy have been commercialized and are clinically available [3]. In the field of esophagogastroduodenoscopy (EGDS), however, the performance of AI is still insufficient for use in real-world clinical settings, and CAD systems for EGDS remain in the developmental stage.
CAD systems can be classified into computer-aided detection (CADe), which assists endoscopists in detecting abnormal lesions during the procedure, and computer-aided diagnosis (CADx), which assists endoscopists in diagnosing the features of the lesions, such as differentiating between benign and malignant lesions, or determining the degree of progression of a lesion.
In this article, we mainly review CADe and CADx systems for gastric neoplasms developed using convolutional neural networks (CNNs), which are considered the de facto standard for deep learning-based image recognition.
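As a toy illustration of the convolution operation at the core of CNN-based image recognition (a hand-rolled sketch for intuition only, not part of any system reviewed here), the following applies a single fixed 3×3 kernel to a tiny grayscale "image"; a real CADe/CADx model stacks many such kernels whose weights are learned from training images:

```python
# Toy 2D convolution: the basic operation a CNN feature extractor repeats
# many times with learned kernels. Here one fixed vertical-edge kernel is
# applied to a 4x4 synthetic image with a bright right half.

def conv2d(image, kernel):
    """Valid (no-padding) 2D convolution of a 2D list by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# This kernel responds strongly where intensity increases left-to-right,
# i.e., at vertical edges.
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]

image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]

feature_map = conv2d(image, kernel)  # high values along the edge
```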

2. Method

We conducted an electronic search using PubMed to identify original publications relevant to the review topic. The following search terms were used: “artificial intelligence,” “endoscopy,” “esophagogastroduodenoscopy,” “deep learning,” “neural network,” “helicobacter pylori,” “gastric cancer,” “computer-aided,” “CAD,” “CADe,” and “CADx.” Additional articles were identified through manual searches and by consulting the reference sections of the included articles. Case reports, comments, and non-English publications were excluded from the review. Articles without details pertaining to methods or results were also excluded.

2.1. Endoscopic CAD for Gastric Cancers

Gastric cancer (GC) is the fifth most common malignant disease worldwide and the fourth most common cause of cancer mortality [4]. The 5-year survival rate of GC has been reported to exceed 95% in patients with stage I disease, whereas it drops to 66.5% in those with stage II and to 46.9% in those with stage III disease [5]. Thus, detecting GC at an earlier stage is crucial for improving patient survival. However, early-stage GC resembles the background atrophic gastric mucosa, making its early diagnosis and detection challenging. In fact, false-negative rates for GC in screening endoscopy have been reported to be as high as 25.8%, and inexperienced endoscopists tend to have even higher false-negative rates [6,7,8]. To address these issues, several CADe and CADx systems for GC are being developed (Table 1 and Table 2).

2.2. Endoscopic CADe for Gastric Cancers

The first AI-based endoscopic CADe system for GC was reported by Hirasawa et al. in 2018 [9] (Figure 1). The AI was trained on 13,584 endoscopic still images of GC and validated on 2296 independent still images, showing a sensitivity of 92.2% for detecting GC in a per-image analysis. The performance of this CADe system was then validated by Ishioka et al. on 68 endoscopy videos, including those of GCs. The sensitivity of the CADe system for videos was 94.1%, comparable to that for still images [10].
Several retrospective studies comparing the performance of endoscopists and CAD systems in detecting gastric neoplasms have been reported. In the report by Ikenoyama et al., the AI showed a higher sensitivity than endoscopists (58.4% vs. 31.9%) but a lower specificity (87.3% vs. 97.2%) and positive predictive value (PPV) (26.0% vs. 46.2%) [11]. In cancer screening endoscopy, sensitivity is the most important parameter, and the CAD system demonstrated favorable performance in this respect. However, the specificities of these CAD systems were comparatively low; the specificity reported by Hirasawa et al. in a similar study was as low as 30.6%, suggesting that low specificity is one of the issues associated with CADe systems for GC. One reason the CADe system reported by Hirasawa et al. had a low specificity was the high rate of false-positive detection of gastric ulcers (GU) as GC [11]. Namikawa et al. therefore trained their CAD system on 13,584 GC images and 4453 GU images to reduce the false positives caused by the misidentification of GU as GC [12]. The performance of this CAD system was validated on 739 GC and 720 GU images; the specificity improved to 99.0%, while the sensitivity remained at 93.3%.
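The per-image metrics quoted throughout this review all derive from the same confusion-matrix arithmetic. The sketch below shows the calculation; the counts are hypothetical and are not taken from any of the cited studies:

```python
# Diagnostic metrics from a 2x2 confusion matrix.
# tp: cancer images flagged as cancer, fp: non-cancer flagged as cancer,
# tn: non-cancer correctly rejected, fn: cancer images missed.
# The counts below are hypothetical, for illustration only.

def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),                # fraction of cancers detected
        "specificity": tn / (tn + fp),                # fraction of non-cancers rejected
        "ppv": tp / (tp + fp),                        # positive predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # overall correct fraction
    }

m = diagnostic_metrics(tp=90, fp=40, tn=160, fn=10)
```

A system tuned for screening, as discussed above, pushes `fn` down (raising sensitivity) even at the cost of more `fp` (lowering specificity and PPV).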
A few prospective studies comparing the GC detection performance of endoscopists and CADe systems have also been reported. In 2019, Luo et al. first reported a multicenter case-control study evaluating the performance of a CADe system for upper gastrointestinal tract cancers, named GRAIDS. They reported an accuracy of 92.7% in detecting both gastric and esophageal cancers. They also compared the diagnostic performance of the CADe system with that of endoscopists and reported that the performance of the CADe system was comparable to that of endoscopists with >10 years of experience (94.2% vs. 94.5% sensitivity) and superior to those of competent (85.8%) and trainee (72.2%) endoscopists [13]. Niikura et al. compared the GC detection performance of expert endoscopists and CAD using retrospective data from 500 patients (including 100 GC cases). In this study, background information (e.g., prevalence of early/advanced GC, patients’ H. pylori infection status) was matched between the two conditions using a computer-based system. The reported sensitivity of the CAD system was 100% (49/49 cases), comparable to the expert endoscopists’ sensitivity of 94.1% (48/51 cases) [14]. These reports suggest that AI may have the potential to close the gap in diagnostic performance between expert and non-expert endoscopists.
Wu et al. reported a series of studies using their CAD system for gastric neoplasms, named ENDOANGEL, which was created based on deep reinforcement learning (DRL) and CNNs and has multiple functions, including the detection of GC and pre-cancerous lesions, differentiation of cancerous and non-cancerous lesions, prediction of tumor invasion depth, and anatomical recognition of the upper gastrointestinal tract. In a tandem RCT comparing the miss rate of gastric neoplasms between routine EGDS performed first (routine-first, n = 905) and CAD-assisted EGDS performed first (AI-first, n = 907), the miss rate in the AI-first group was significantly lower than that in the routine-first group (6.1% vs. 27.3%, p = 0.015), suggesting that the use of AI could reduce the number of overlooked gastric neoplasms [15]. In another prospective study comparing the detection of early gastric cancer (EGC) using white light imaging (WLI) between the CADe system and endoscopists, the AI outperformed the endoscopists in specificity (93.2% vs. 72.3%, p < 0.01), accuracy (91.0% vs. 76.9%, p < 0.01), and PPV (90.0% vs. 76.9%, p = 0.013) [16]. They conducted a further trial in a setting that anticipated the clinical use of AI: in this prospective trial of 2010 patients, endoscopists performed EGDS using ENDOANGEL, and its performance as a CADe and CADx system for gastric neoplasms was evaluated. They reported a sensitivity of 91.8% and a specificity of 92.4% for the detection of gastric neoplasms, indicating that the AI may be useful in real-world clinical settings [17].
These reports suggest that CADe for GC may have a comparable performance to that of expert endoscopists and may even surpass that of inexperienced endoscopists in detecting GCs. EGDS is performed by various endoscopists at different levels; therefore, it is expected that AI will equalize their performance to that of specialists regardless of the skill of the endoscopist.

2.3. Endoscopic CADx for Gastric Cancers

Once a gastric lesion is detected, it must be differentiated into neoplastic or non-neoplastic. Image-enhanced technologies, such as narrow band imaging (NBI) and blue laser imaging (BLI), and magnifying endoscopy have been developed to help differentiate gastric lesions. NBI and BLI were incorporated into commercial endoscopy systems in 2006 and 2012, respectively. The combination of magnifying endoscopy and these image-enhancing technologies has been widely used in clinical practice to visualize microvascular and mucosal surface microstructures with higher contrast. The usefulness of the vessel plus surface classification system (VSCS) and of a diagnostic algorithm for early gastric cancer using magnified NBI (M-NBI) in differentiating gastric neoplasms has been reported [18,19,20,21]. CADx systems for differentiating cancerous/non-cancerous lesions have also been developed, and most are based on enhanced images.
Li et al. and Ueyama et al. reported a high diagnostic accuracy of their CADx systems for GC based on M-NBI images, with sensitivities of 91.18% and 98%, and specificities of 90.64% and 100%, respectively [22,23]. Horiuchi et al. reported a sensitivity of 95.4% and a specificity of 71.0% for their CADx system for GC on M-NBI still images [24]. They also validated their CADx system on 174 M-NBI videos and reported a sensitivity of 87.4% and a specificity of 82.8%, indicating that it may be feasible for real-time differentiation during EGDS [25]. Furthermore, they compared the performance of the CADx system with that of endoscopists and reported that it was comparable to that of expert endoscopists.
Hu et al. investigated whether endoscopists’ diagnostic performance could be improved by a CADx system. While the diagnostic accuracy of the CADx system was similar to that of expert endoscopists, expert endoscopists using the CADx system demonstrated improved sensitivity (76.7% vs. 87.4%) and negative predictive value (74.5% vs. 83.6%) [26]. This report suggests that a CADx system for GC could benefit not only non-experts but also experts.
Wu et al. prospectively evaluated the performance of ENDOANGEL, which had been developed using M-NBI images, and reported a sensitivity of 100% and a specificity of 82.54%, which were equivalent to those of endoscopists [15].
While all of the aforementioned reports were based on enhanced images, Wu et al. reported a CADx system based on WLI [16]. They prospectively attempted to differentiate neoplastic/non-neoplastic lesions using their CADx system and reported a sensitivity of 92.9% and a specificity of 91.7%, suggesting that AI can be used to differentiate with a high accuracy, even with WLI. Recently, Ishioka et al. developed a CADx system for EGCs, called Tango. They compared the performance of the CADx system with that of endoscopists, using a dataset comprising only EGCs and benign lesions. Tango demonstrated a superior sensitivity over even specialists (84.7% vs. 65.8%) [27].
AI has also been reported to differentiate lesions other than GC. Yuan et al. reported an AI capable of multiclass classification beyond GC. The AI was trained on 29,809 images containing various lesions and tested on 1579 images, classifying lesions into EGC, advanced GC, submucosal tumor, polyp, ulcer, erosion, and normal mucosa [28]. The overall accuracy was 85.7%, equivalent to that of senior endoscopists (85.1%) and higher than that of junior endoscopists (78.8%).

2.4. Endoscopic CADx for Diagnosing Various Features of Gastric Cancers

Endoscopic submucosal dissection (ESD) is the preferred treatment option for EGCs because, compared to surgery, it is less invasive, achieves superior postoperative quality of life, and is more cost-effective. However, its indication is limited to lesions with a low risk of lymph node metastasis (LNM). In the Japanese guidelines, the indication of endoscopic resection for GCs is based on multiple factors, including tumor diameter, depth, differentiation status, and the presence of ulcers [11].
Regarding tumor depth, GCs with invasion depths of M and SM1 (<500 μm) are considered to have a low risk of LNM, while those with depths of SM2 and deeper have a high risk of LNM, and surgical intervention is recommended for such tumors in the Japanese GC treatment guidelines [11]. Endoscopic resection is also recommended for early stage localized disease (cTis-cT1a) in the National Comprehensive Cancer Network guidelines [29].
In current clinical practice, the tumor invasion depth is predicted based on macroscopic features using conventional endoscopy or endoscopic ultrasonography (EUS). With conventional endoscopy, various morphological features have been reported as predictors of the tumor invasion depth [30]. Reported indicators of EGC deeper than SM1 observed with WLI include pathomorphological changes at the tips of converging folds, a tumor diameter greater than 30 mm, marked erythema, surface irregularities, marginal elevation with and without submucosal tumor-like features, and trapezoid elevation [31,32,33,34,35]. However, owing to its subjective nature, an accurate prediction remains challenging.
Yoon et al. developed an AI system for assessing the tumor invasion depth (T1a or T1b) with non-magnified WLI and reported a sensitivity of 79.2% and a specificity of 77.8% [36]. Similarly, Cho et al. developed a CADx system to diagnose the tumor depth (Tis/T1 or T2) in non-magnified WLI and reported a sensitivity of 80.4% and a specificity of 80.7% [37].
While these CADx systems were for non-magnified WLI, Nagao et al. evaluated the performance of CADx to diagnose the tumor depth using enhanced images, including NBI and indigo-staining chromoendoscopy [38]. They reported a sensitivity, specificity, and accuracy of 75.0%, 100.0%, and 94.3%, respectively, with NBI; and 87.5%, 100.0%, and 95.5%, respectively, with indigo-stained images, which were comparable to those with WLI.
Zhu et al. and Tang et al. compared the performance of AI and endoscopists in predicting the tumor invasion depth. Zhu et al. used 790 images to develop a CADx system to differentiate the invasion depth of GCs (M or SM1/deeper than SM1) and compared its performance with that of endoscopists using 203 independent validation images. They reported that the performance of the CADx system outperformed those of both junior and experienced endoscopists in accuracy (AI, 89.16%; junior endoscopists, 66.17%; and experienced endoscopists, 77.46%) and specificity (AI, 95.56%; junior endoscopists, 56.71%; and experienced endoscopists 70.74%) [39]. Tang et al. reported that the CADx system showed an accuracy of 88.2%, a sensitivity of 90.5%, and a specificity of 85.3% for differentiating mucosal/submucosal invasion in WLI images. They also reported that the diagnostic performance of endoscopists improved by using a CADx system not only in novice endoscopists (accuracy 74.0% vs. 84.6%, p < 0.001; sensitivity 81.1% vs. 85.7%, p = 0.018; specificity 65.2% vs. 83.3%, p < 0.001) but also in expert endoscopists (accuracy 79.8% vs. 85.5%, p < 0.001; sensitivity 84.3% vs. 87.4%, p = 0.018; specificity 74.2% vs. 83.0%, p < 0.001) [40].
The AI reported by Nam et al. was a multistep model that first detected gastric lesions from endoscopic images, then classified them as GU, early GC, or advanced GC, and finally predicted the tumor invasion depth (T1a/T1b) for lesions classified as early GC [41]. In this study, the performance of the AI was compared with EUS for invasion depth prediction; the AI had a better area under the receiver operating characteristic curve (AUC) than EUS performed by experts (0.73 vs. 0.56) on the external validation dataset.
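The AUC values reported in these studies summarize ranking quality: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch of this equivalence (the Mann-Whitney formulation), using illustrative scores only:

```python
# AUC computed as the normalized Mann-Whitney U statistic: the fraction
# of (positive, negative) pairs the model ranks correctly, with ties
# counted as half. Scores below are illustrative, not from any study.

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0        # correctly ranked pair
            elif p == n:
                wins += 0.5        # tie counts half
    return wins / (len(pos_scores) * len(neg_scores))

# Model scores for 3 invasive (positive) and 3 non-invasive (negative) cases.
score = auc(pos_scores=[0.9, 0.8, 0.6], neg_scores=[0.7, 0.4, 0.3])
```

An AUC of 0.5 corresponds to random ranking, which is why the EUS figure of 0.56 above indicates near-chance discrimination.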
Tumor differentiation status is also an important factor in determining indications for endoscopic resection. According to Japanese guidelines, endoscopic resection for undifferentiated-type GC is prescribed as an “expanded indication” because of the lack of sufficient evidence of long-term outcomes [42]. Regarding the differentiation status, undifferentiated types are associated with a high risk of LNM. Undifferentiated EGCs tend to have flat or depressed macroscopic features and are less likely to show raised features, whereas differentiated EGCs have both depressed and raised types. In addition to these macroscopic findings, microstructures with M-NBI findings have been used to predict the differentiation status [19,43].
Ling et al. reported a CADx system that predicts the differentiation status of lesions using M-NBI images. The CADx system showed a better accuracy than those of endoscopists (86.2% vs. 69.7%) [44].
A prospective study by Wu et al. using ENDOANGEL evaluated tumor invasion depth diagnosis and differentiation status prediction. The study was conducted in a practical setting in which the AI detected gastric lesions using non-magnified WLI, then differentiated them into cancerous/non-cancerous lesions using M-NBI images, and finally predicted the invasion depth (M or SM) and differentiation status (differentiated or undifferentiated). They reported that the performance of ENDOANGEL was comparable to that of expert endoscopists in predicting the invasion depth (sensitivity 70.0% vs. 56.7%, specificity 83.33% vs. 76.2%, accuracy 78.57% vs. 83.02%) and differentiation status (sensitivity 50.0% vs. 46.83%, specificity 80.00% vs. 71.89%, accuracy 71.43% vs. 71.89%) [16].

2.5. Endoscopic CADx for Helicobacter pylori Infection

The first CNN-based CADx system for gastric lesions in EGDS images was a system for determining the presence of Helicobacter pylori (HP) infection.
HP infection is one of the most important risk factors for GC [3]. HP infection causes atrophic gastritis, which progresses as the exposure period increases. It has also been shown that patients with severe atrophic gastritis have a higher risk of GC than those with mild gastritis. Meanwhile, the eradication of HP has been suggested to decrease the incidence of GC, as it halts the progression of gastritis. Thus, during routine or screening EGDS, it is important to diagnose atrophic gastritis at an early stage [45]. However, diagnosing the presence or absence of HP-related atrophic gastritis requires extensive training for endoscopists. Several AI-based CADx systems for determining HP-related atrophic gastritis have been developed (Table 3). Shichijo et al. first reported a CADx system for the presence of HP infection in 2017, which was trained on 32,208 endoscopic still images with or without HP infection and validated on 11,481 independent still images. They reported an accuracy, sensitivity, and specificity for diagnosing current or past HP infection of 87.7%, 88.9%, and 87.4%, respectively [46], outperforming novice endoscopists. Although HP-eradicated cases are less likely to develop GC, it is much more challenging to endoscopically differentiate HP-eradicated cases from HP-positive cases, even for skilled endoscopists [47]. The authors updated their CADx system using 98,564 endoscopic images, including images from 845 HP-eradication cases. It was validated on 847 independent cases (23,699 images), and the accuracies in a per-image analysis were 80% for HP-negative images, 48% for HP-positive images, and 84% for HP-eradicated images [48].
Nakashima et al. developed an AI diagnosis for HP infection using white light imaging (WLI) and linked color imaging (LCI) [49]. In per-patient analyses, the accuracy in diagnosing uninfected cases was 75.0% with WLI and 84.2% with LCI; in diagnosing infected cases, 77.5% with WLI and 82.5% with LCI; and in diagnosing post-eradication cases, 74.2% with WLI and 79.2% with LCI.
While all these reports were retrospective and validated with still images, Nakashima et al. and Xu et al. evaluated the performance of AI using video images. Nakashima et al. trained their AI using WLI and LCI still images from 395 patients and validated it using endoscopic videos of the gastric lesser curvature. They reported an accuracy of 84.2% for uninfected cases, 82.5% for currently infected cases, and 79.2% for posteradication cases [50]. Xu et al. prospectively tested the performance of ENDOANGEL in diagnosing gastric atrophy, and its accuracy was 87.8% in video images [51]. These results suggest that the clinical application of AI for detecting HP infection may be feasible.

2.6. Endoscopic CAD for Quality Assurance

The AI systems described above were designed to detect and differentiate lesions in endoscopic images. However, their performance cannot be fully realized unless all parts of the stomach are observed appropriately. In this regard, AI-based endoscopy support goes beyond detecting/diagnosing abnormal lesions: systems have been developed to reduce blind spots during the examination and to monitor the quality of endoscopy (Table 4). Wu et al. developed an AI system named WISENSE, the predecessor of ENDOANGEL, which can recognize the anatomical part under observation, having learned from 24,549 normal images of different parts of the stomach, with the aim of reducing blind spots during examination [52]. They evaluated the performance of this AI in an RCT and reported that the WISENSE group showed a significantly lower blind spot rate than the non-user group (5.86% vs. 22.46%, p < 0.001) [53]. In a multicenter RCT evaluating blind spot rates with or without ENDOANGEL, the ENDOANGEL-assisted group had a significantly lower blind spot rate than the non-assisted group (5.35% vs. 9.82%, p < 0.001), although a longer inspection time was observed in the former group (5.40 min vs. 4.38 min, p < 0.001) [54]. Furthermore, ENDOANGEL correctly predicted all five GCs, with a per-lesion accuracy of 84.7%, sensitivity of 100%, and specificity of 84.3%.
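The blind-spot bookkeeping behind such quality-assurance systems can be sketched simply: each video frame is classified into an anatomical station, and the blind spot rate is the fraction of required stations never observed. The station names below are illustrative placeholders, not the actual site lists used by WISENSE or ENDOANGEL:

```python
# Sketch of blind-spot monitoring: classify frames into anatomical
# stations (done by a CNN in the real systems) and report which required
# stations were never seen. Station names here are hypothetical.

REQUIRED_SITES = {
    "antrum", "angulus", "lower_body", "middle_body",
    "upper_body", "fundus", "cardia", "duodenal_bulb",
}

def blind_spot_rate(observed_frames):
    """observed_frames: iterable of per-frame anatomical labels."""
    seen = set(observed_frames) & REQUIRED_SITES
    missed = REQUIRED_SITES - seen
    return len(missed) / len(REQUIRED_SITES)

# An examination whose frames covered 6 of the 8 required stations.
rate = blind_spot_rate(["antrum", "angulus", "lower_body", "cardia",
                        "antrum", "fundus", "upper_body"])
```

In the trials above this per-examination rate, averaged over patients, is the endpoint compared between the AI-assisted and control arms.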
Similarly, the endoscopic AI system named IDEA, developed by Li et al., is capable of monitoring blind spots and provides an operation score during EGDS. The operation score is graded according to the portions of the stomach observed, with a higher score implying a lower blind spot rate. In a multicenter prospective study using IDEA, the operation score output significantly correlated with higher GC detection rates, indicating that AI assistance may improve examination quality [55]. These results suggest that AI may also be useful for the quality control of upper endoscopy.

3. Discussion

In recent years, CNN-based AI systems have been mainly used for image recognition, and their capabilities have partly surpassed those of humans. In the medical field, AI systems have been applied to radiologic image recognition, such as X-rays, computed tomography (CT), and magnetic resonance imaging (MRI), and more recently to endoscopic image recognition. Endoscopic image recognition is considered more challenging than radiographic image recognition because endoscopic images are affected by multiple conditions, including the distance, angle, and clarity of the region of interest. Therefore, developing CAD systems for endoscopy may be more difficult than developing them for radiologic images. Recent progress in computer technology, in which copious digitized, high-resolution images are available as big data, has helped improve AI performance. State-of-the-art CNN technologies that can analyze endoscopic imagery with high accuracy have also been reported [56]. These technological advances have resulted in the development of high-performance CAD systems for endoscopy. This paper presents a literature review of recent advances in CAD systems for EGDS, offering an up-to-date discussion of endoscopic AI for GC diagnosis in the upper gastrointestinal tract; it is also the first review to cover the specific AI modalities of CADe, CADx, and CAD for quality assurance. As shown in this review, the past few years have seen a rapidly increasing number of reports of AI for GC with robust performance achieved by learning from large numbers of high-quality images. For CADe, a number of systems with sensitivities above those of novice endoscopists and equivalent to those of experts have been reported, and RCTs have shown that the use of CADe systems lowers the miss rate of GC. Regarding CADx, although RCTs have not yet been reported, many systems have shown expert-level performance.
These results suggest that a CADx system could improve the diagnostic performance of non-expert endoscopists and bring them to the expert level.
However, reports of CAD systems that perform better than experts are fairly limited; thus, it remains to be seen whether AI can also be useful to experts. There have also been only a few reports of CAD systems that focus exclusively on EGC, which is far more difficult to detect and diagnose than advanced GC. Furthermore, it should be noted that the previously reported specificity of CAD systems tends to be lower than that of endoscopists. In promoting the clinical implementation of AI, low specificity is a drawback that may lead to overdiagnosis and, in turn, to unnecessary additional tests, such as biopsies, increasing medical costs and morbidity. The reimbursement and cost-effectiveness of AI are also important concerns. In the field of colonoscopy, a few reports suggest that the use of CADe systems may be cost-effective by reducing the incidence of colorectal cancer through improved adenoma detection and by reducing unnecessary polypectomy through accurate polyp differentiation [57,58]. The cost-effectiveness of CAD systems for EGDS has not yet been fully verified; further research is expected in the near future. In addition, most studies on endoscopic AI for GC are from Asian countries, such as Japan and China, which may be due to the high prevalence of HP infection in these areas [59]. Thus, it is also necessary to verify whether these endoscopic AI systems achieve similar outcomes in Western populations.
AI for GC is being developed not only for cancer detection and differentiation but also for multifaceted approaches, such as tumor invasion depth prediction, prediction of HP infection, and monitoring examination quality. It is expected that these technologies will greatly contribute to the early detection and diagnosis of GCs for appropriate treatment selection.

4. Conclusions

The latest research and development trends of CNN-based endoscopic AI for GCs have been outlined. Most reports on AIs have shown that AIs have a better diagnostic performance than non-expert endoscopists and their performance is equivalent to those of the experts. Most AI systems presented in this review are based on training data annotated by expert endoscopists, indicating that endoscopic AI is the culmination of endoscopists’ wisdom. Future studies are required to evaluate the usefulness of endoscopic AI in clinical settings.
Table 1. Summary of CADe for gastric cancer.
| Study Design | Reference, Year | Modality | Training Dataset | Validation/Test Dataset | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Retrospective | Hirasawa, 2018 [9] | WLI, CE, NBI | 51,558 images (13,584 GC images) | 2296 GC images | n/a | n/a | 92.2 | n/a |
| Retrospective | Yoon, 2019 [36] | WLI | 11,539 images (1705 GC images) | 11,539 images (1705 GC images) | 0.981 | n/a | 91 | 97.6 |
| Retrospective | Ishioka, 2019 [10] | WLI, CE, NBI | 51,558 images (13,584 GC images) | 68 videos with GC | n/a | n/a | 94.1 | n/a |
| Retrospective | Ikenoyama, 2021 [11] | WLI, CE, NBI | 51,558 images (13,584 GC images) | 2940 GC images of 140 patients | 0.757 | n/a | 58.4 | 87.3 |
| Retrospective | Nam, 2022 [41] | WLI | 1009 images (110 GU, 620 EGC, 279 AGC) | 112 images (internal test), 245 images (external test) | 0.78 (internal test), 0.73 (external test) | n/a | n/a | n/a |
| Retrospective | Niikura, 2022 [14] | WLI | 51,558 images (13,584 GC images) | 500 patients (51 AGC, 49 EGC patients) | n/a | n/a | 100 | n/a |
| Prospective | Luo, 2019 [13] | WLI | 141,570 images (26,172 GC/EC images) | 66,750 images (4317 GC/EC images) | 0.974 | 92.7 | 94.6 | 92.6 |
| Prospective (ENDOANGEL) | Wu, 2022 [16] | WLI | 24,704 images (15,341 GC); ENDOANGEL-CNN1a (detection module) | 100 lesions from 96 patients | n/a | 91 | 87.81 | 93.22 |
| Prospective (ENDOANGEL) | Wu, 2022 [17] | WLI | 21,000 images (15,341 GC); ENDOANGEL-LD CNN1 | internal test 1: 1198 images (1000 GC); internal test 2: 5488 images (338 neoplastic); external test: 15,886 images (774 neoplastic) | n/a | n/a | 98.3 (internal test 1), 96.9 (internal test 2), 95.6 (external test), 100 (videos) | 98.4 (internal test 1), 90.6 (internal test 2), 90.8 (external test) |
| RCT (ENDOANGEL) | Wu, 2021 [15] | WLI | 18,579 images (12,447 GC) | 1012 patients (93 patients with GC) | Findings: The gastric neoplasm miss rate was significantly lower in the AI-first group than in the routine-first group (6.1% vs. 27.3%, p = 0.015). | | | |
CADe: computer-assisted detection; GC: Gastric cancer; RCT: Randomized controlled trial; AUC: Area under the curve; WLI: White light imaging; CE: Chromoendoscopy; NBI: Narrow band imaging; GU: Gastric ulcer; EGC: Early gastric cancer; AGC: Advanced gastric cancer; EC: Esophageal cancer.
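For orientation, the accuracy, sensitivity, and specificity values reported in these tables derive from a per-image (or per-lesion) confusion matrix. The sketch below shows the standard definitions; the counts are invented for illustration and are not taken from any cited study.

```python
# Illustrative sketch (not from the cited studies): deriving the diagnostic
# metrics reported in Table 1 from confusion-matrix counts.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return (accuracy, sensitivity, specificity) as percentages.

    tp/fp/tn/fn: true-positive, false-positive, true-negative,
    false-negative counts of a binary cancer/non-cancer classifier.
    """
    accuracy = 100 * (tp + tn) / (tp + fp + tn + fn)
    sensitivity = 100 * tp / (tp + fn)   # true-positive rate (recall)
    specificity = 100 * tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Toy example: 300 test images, 100 of which contain cancer.
acc, sen, spe = diagnostic_metrics(tp=90, fp=20, tn=180, fn=10)
print(f"accuracy={acc:.1f}% sensitivity={sen:.1f}% specificity={spe:.1f}%")
```

A low specificity in these terms means a high false-positive count, which is exactly the overdiagnosis concern discussed above.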
Table 2. Summary of CADx in gastric cancer.

(a) CADx for neoplastic/non-neoplastic differentiation.

| Study Design | Reference, Year | Modality | Training Dataset | Validation/Test Dataset | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|
| Retrospective | Li, 2020 [22] | M-NBI | 2088 images (1702 EGC) | 341 images (170 EGC) | n/a | 90.91 | 91.18 | 90.64 |
| Retrospective | Horiuchi, 2020 [24] | M-NBI | 2570 images (1492 EGC) | 258 images (151 EGC) | 0.85 | 85.3 | 95.4 | 71 |
| Retrospective | Horiuchi, 2020 [25] | M-NBI | 2570 images (1492 EGC) | 174 videos (87 with EGC) | 0.8684 | 85.1 | 87.4 | 82.8 |
| Retrospective | Namikawa, 2020 [12] | WLI, NBI | 18,410 images (2649 GC, 4826 GU) | 739 EGC and 720 GU images | n/a | 99.0 (GC), 93.3 (GU) | 99.0 (GC), 93.3 (GU) | 93.3 (GC), 99.0 (GU) |
| Retrospective | Ueyama, 2021 [23] | M-NBI | 5574 images (3797 EGC) | 2300 images (1430 EGC) | n/a | 98.7 | 98 | 100 |
| Retrospective | Hu, 2021 [26] | M-NBI | Images from 170 patients with EGC | 73 patients (internal test), 52 patients (external test) | 0.808 (internal test), 0.813 (external test) | 77.0 (internal test), 76.3 (external test) | 79.2 (internal test), 78.2 (external test) | 74.5 (internal test), 74.1 (external test) |
| Retrospective | Nam, 2022 [41] | WLI | 1009 images (110 GU, 620 EGC, 279 AGC) | 112 images (internal test), 245 images (external test) | Internal test: 0.89; external test: 0.82 | Internal test: GU 95, EGC 89, AGC 93; external test: GU 86, EGC 79, AGC 79 | Internal test: GU 63, EGC 94, AGC 90; external test: GU 68, EGC 77, AGC 56 | Internal test: GU 98, EGC 82, AGC 94; external test: GU 50, EGC 89, AGC 47 |
| Retrospective | Yuan, 2022 [28] | WLI | 29,809 images | 1579 images | n/a | 85.7 | n/a | n/a |
| Retrospective | Ishioka, 2022 [27] | WLI | 40,162 images (18,027 EGC) | 315 images (150 EGC) | n/a | 70.8 | 84.7 | 58.2 |
| Prospective (ENDOANGEL) | Wu, 2022 [16] | M-NBI | 8301 WLI images (4442 neoplastic); ENDOANGEL-CNN1b (WLI); 4667 M-NBI images (1950 EGC); ENDOANGEL-CNN2 (M-NBI) | 100 lesions from 96 patients | n/a | 89 | 100 | 82.54 |
| Prospective (ENDOANGEL) | Wu, 2022 [17] | WLI | 9824 images (5359 neoplastic); ENDOANGEL-LD CNN2 | Internal test 1: 1198 images (1000 abnormal); internal test 2: 5488 images (338 neoplastic); external test: 15,886 images (774 neoplastic); 100 videos (38 neoplastic) | 0.960 (internal test 1) | Internal test 1: 86.0; internal test 2: 88.8; external test: 88.6; videos: 72.0 | Internal test 1: 94.0; internal test 2: 92.9; external test: 91.7; videos: 100 | Internal test 1: 84.0; internal test 2: 88.8; external test: 88.2; videos: 53.2 |

(b) CADx for invasion depth and pathological status.

| Study Design | Reference, Year | Modality | Training Dataset | Validation/Test Dataset | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|
| Retrospective | Kubota, 2012 [60] | WLI | 902 GC images | 902 GC images | n/a | 64.7 | n/a | n/a |
| Retrospective | Yoon, 2019 [36] | WLI | 1750 GC images | 1705 GC images | 0.851 | n/a | 79.2 | 77.8 |
| Retrospective | Zhu, 2019 [39] | WLI | 790 GC images | 203 GC images | 0.94 | 89.16 | 76.47 | 95.56 |
| Retrospective | Nagao, 2020 [38] | WLI, CE, NBI | 13,628 GC images | 2929 GC images | 0.959 (WLI), 0.9048 (NBI), 0.9491 (CE) | 94.49 (WLI), 94.30 (NBI), 95.50 (CE) | 84.42 (WLI), 75.00 (NBI), 87.50 (CE) | 99.37 (WLI), 100 (NBI), 100 (CE) |
| Retrospective | Cho, 2020 [37] | WLI | 2899 images | 206 images | 0.887 | 77.3 | 80.4 | 80.7 |
| Retrospective | Tang, 2021 [40] | WLI | 3407 images from 666 GC patients | 228 images | 0.942 | 88.2 | 90.5 | 85.3 |
| Retrospective | Ling, 2021 [44] | M-NBI | 2217 GC images | 1870 GC images | n/a | 86.2 | Differentiated: 88.6; undifferentiated: 78.6 | Differentiated: 78.6; undifferentiated: 88.6 |
| Retrospective | Nam, 2022 [41] | WLI | 1009 images (110 GU, 620 EGC, 279 AGC) | 112 images (internal test), 245 images (external test) | Internal test: 0.78; external test: 0.73 | Internal test: 77; external test: 72 | Internal test: 86; external test: 73 | Internal test: 66; external test: 94 |
| Prospective | Wu, 2022 [16] | WLI, M-NBI | 3407 WLI images; ENDOANGEL-CNN3 (invasion depth); 2217 M-NBI images (1131 differentiated and 1086 undifferentiated); ENDOANGEL-CNN4 (differentiation status) | 28 lesions from 28 patients | n/a | 78.6 (submucosal invasion), 71.4 (undifferentiated EGC) | 70.0 (submucosal invasion), 50.0 (undifferentiated EGC) | 83.3 (submucosal invasion), 80.0 (undifferentiated EGC) |
CADe: Computer-assisted detection; CADx: Computer-assisted diagnosis; GC: Gastric cancer; AUC: Area under the curve; WLI: White light imaging; CE: Chromoendoscopy; NBI: Narrow band imaging; M-NBI: Magnified narrow band imaging; BLI: Blue laser imaging; GU: Gastric ulcer; EGC: Early gastric cancer; AGC: Advanced gastric cancer.
Table 3. Summary of CADx for Helicobacter pylori infection.

| Study Design | Reference, Year | Modality | Training Dataset | Validation/Test Dataset | AUC | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|
| Retrospective | Shichijo, 2017 [47] | WLI, CE, NBI | 32,208 images from 1750 patients (753 HP positive) | 11,481 images from 397 patients (72 HP positive) | 0.93 | 87.7 | 88.9 | 87.4 |
| Retrospective | Shichijo, 2019 [48] | WLI | 98,564 images from 5236 patients (742 HP positive, 3649 HP negative, and 845 HP eradicated) | 23,699 images from 847 patients (70 positive, 493 negative, and 284 eradicated) | n/a | 80 (HP negative), 84 (HP eradicated), 48 (HP positive) | n/a | n/a |
| Retrospective | Zheng, 2019 [61] | WLI | 11,729 images from 1959 patients (847 HP positive) | 3755 images from 452 patients (310 HP positive) | 0.97 | 93.8 | 91.6 | 98.6 |
| Retrospective | Guimarães, 2020 [62] | WLI | 200 images (100 HP positive) | 70 images (30 HP positive) | 0.981 | 92.9 | 100 | 87.5 |
| Retrospective | Zhang, 2020 [63] | WLI | 70% of 5470 images in total (3042 with atrophic gastritis) | 30% of the same 5470 images | 0.99 | 94.2 | 94.5 | 94 |
| Prospective | Itoh, 2018 [64] | WLI | 149 images (70 HP positive) | 30 images (15 HP positive) | 0.956 | n/a | 86.7 | 86.7 |
| Prospective | Nakashima, 2018 [49] | WLI, BLI, LCI | 2592 images from 162 patients (75 HP positive) | 60 patients (30 HP positive) | 0.66 (WLI), 0.96 (BLI), 0.95 (LCI) | n/a | 66.7 (WLI), 96.7 (BLI), 96.7 (LCI) | 60.0 (WLI), 86.7 (BLI), 83.3 (LCI) |
| Prospective | Nakashima, 2020 [50] | WLI, LCI | 12,887 images from 395 patients (138 HP positive, 141 HP negative, 116 HP eradicated) | 120 videos (40 HP positive, 40 HP negative, 40 HP eradicated) | 0.82 (LCI, HP positive), 0.90 (LCI, HP negative), 0.77 (LCI, HP eradicated) | HP positive: 77.5 (WLI), 82.5 (LCI); HP negative: 75.0 (WLI), 84.2 (LCI); HP eradicated: 74.2 (WLI), 79.2 (LCI) | HP positive: 60.0 (WLI), 62.5 (LCI); HP negative: 95.0 (WLI), 92.5 (LCI); HP eradicated: 35.0 (WLI), 65.0 (LCI) | HP positive: 86.2 (WLI), 92.5 (LCI); HP negative: 65.0 (WLI), 80.0 (LCI); HP eradicated: 93.8 (WLI), 86.2 (LCI) |
| Prospective | Xu, 2021 [51] | M-NBI, M-BLI | 354 patients | 77 videos | 0.878 | 87.8 | 96.7 | 73 |
CADx: computer-assisted diagnosis; HP: Helicobacter pylori; AUC: Area under the curve; WLI: White light imaging; CE: Chromoendoscopy; NBI: Narrow band imaging; M-NBI: Magnified narrow band imaging; BLI: Blue laser imaging; M-BLI: Magnified blue laser imaging; LCI: Linked color imaging.
Table 4. Summary of CAD for examination quality assurance.

| Reference, Year | Study Design | Application | Modality | Training Dataset | Validation/Test Dataset | Findings |
|---|---|---|---|---|---|---|
| Wu, 2019 [52] | Retrospective | Classification of observed location | WLI | 24,549 images | 170 images | Accuracy: 90 (into 10 parts), 65.9 (into 26 parts) |
| Wu, 2019 [53] | RCT | Monitoring blind spots | WLI | 34,513 images | 107 videos | Accuracy: 90.0%; sensitivity: 87.5%; specificity: 95.0% |
| Li, 2022 [55] | Prospective | Monitoring EGD quality | WLI | 170,297 images and 149 videos | 17,787 patients | The AI outputs EGD quality-monitoring scores. The cancer detection rate (r = 0.775) and early cancer detection rate (r = 0.756) were positively correlated with the total score. |
CAD: computer-assisted detection/diagnosis; RCT: Randomized controlled trial; WLI: White light imaging; EGD: Esophagogastroduodenoscopy.
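The r values reported by Li et al. [55] (e.g., r = 0.775 between total quality score and cancer detection rate) are correlation coefficients. As a minimal sketch, Pearson's r can be computed as below; the function and the toy score/rate data are illustrative assumptions, not the study's actual data or analysis code.

```python
# Minimal sketch of Pearson's correlation coefficient r, as used to relate
# an examination quality score to a detection rate. Toy data only.
from statistics import mean

def pearson_r(xs, ys):
    """Sample Pearson correlation of two equal-length numeric sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

scores = [60, 70, 80, 90]      # hypothetical per-center quality scores
rates = [0.2, 0.3, 0.5, 0.6]   # hypothetical cancer detection rates
print(round(pearson_r(scores, rates), 3))
```

An r close to 1 indicates that centers with higher quality scores also tend to have higher detection rates, which is the relationship the study reports.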

Author Contributions

Conceptualization, K.O. and T.O.; methodology, K.O. and T.O.; writing—original draft preparation, K.O. and T.O.; writing—review and editing, J.S., T.T. and S.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it is a review article that did not directly involve human participants.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank Toshiaki Hirasawa for providing endoscopic images. We would like to thank Sean Huff for English language editing.

Conflicts of Interest

TT is the CEO of AI Medical Service Inc. TO and JS are advisory members of AI Medical Service Inc.

References

1. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez, C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88.
2. Chen, X.; Wang, X.; Zhang, K.; Fung, K.-M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444.
3. Correa, P.; Houghton, J. Carcinogenesis of Helicobacter pylori. Gastroenterology 2007, 133, 659–672.
4. Sung, H.; Ferlay, J.; Siegel, R.L.; Laversanne, M.; Soerjomataram, I.; Jemal, A.; Bray, F. Global Cancer Statistics 2020: GLOBOCAN Estimates of Incidence and Mortality Worldwide for 36 Cancers in 185 Countries. CA Cancer J. Clin. 2021, 71, 209–249.
5. Katai, H.; Ishikawa, T.; Akazawa, K.; Isobe, Y.; Miyashiro, I.; Oda, I.; Tsujitani, S.; Ono, H.; Tanabe, S.; Fukagawa, T.; et al. Five-year survival analysis of surgically resected gastric cancer cases in Japan: A retrospective analysis of more than 100,000 patients from the nationwide registry of the Japanese Gastric Cancer Association (2001–2007). Gastric Cancer 2018, 21, 144–154.
6. Januszewicz, W.; Witczak, K.; Wieszczy, P.; Socha, M.; Turkot, M.H.; Wojciechowska, U.; Didkowska, J.; Kaminski, M.F.; Regula, J. Prevalence and risk factors of upper gastrointestinal cancers missed during endoscopy: A nationwide registry-based study. Endoscopy 2022, 54, 653–660.
7. Hosokawa, O.; Hattori, M.; Douden, K.; Hayashi, H.; Ohta, K.; Kaizaki, Y. Difference in accuracy between gastroscopy and colonoscopy for detection of cancer. Hepatogastroenterology 2007, 54, 442–444.
8. Menon, S.; Trudgill, N. How commonly is upper gastrointestinal cancer missed at endoscopy? A meta-analysis. Endosc. Int. Open 2014, 2, E46–E50.
9. Hirasawa, T.; Aoyama, K.; Tanimoto, T.; Ishihara, S.; Shichijo, S.; Ozawa, T.; Ohnishi, T.; Fujishiro, M.; Matsuo, K.; Fujisaki, J.; et al. Application of artificial intelligence using a convolutional neural network for detecting gastric cancer in endoscopic images. Gastric Cancer 2018, 21, 653–660.
10. Ishioka, M.; Hirasawa, T.; Tada, T. Detecting gastric cancer from video images using convolutional neural networks. Dig. Endosc. 2019, 31, e34–e35.
11. Ikenoyama, Y.; Hirasawa, T.; Ishioka, M.; Namikawa, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Takeuchi, Y.; et al. Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists. Dig. Endosc. 2021, 33, 141–150.
12. Namikawa, K.; Hirasawa, T.; Nakano, K.; Ikenoyama, Y.; Ishioka, M.; Shiroma, S.; Tokai, Y.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; et al. Artificial intelligence-based diagnostic system classifying gastric cancers and ulcers: Comparison between the original and newly developed systems. Endoscopy 2020, 52, 1077–1083.
13. Luo, H.; Xu, G.; Li, C.; He, L.; Luo, L.; Wang, Z.; Jing, B.; Deng, Y.; Jin, Y.; Li, Y.; et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: A multicentre, case-control, diagnostic study. Lancet Oncol. 2019, 20, 1645–1654.
14. Niikura, R.; Aoki, T.; Shichijo, S.; Yamada, A.; Kawahara, T.; Kato, Y.; Hirata, Y.; Hayakawa, Y.; Suzuki, N.; Ochi, M.; et al. Artificial intelligence versus expert endoscopists for diagnosis of gastric cancer in patients who have undergone upper gastrointestinal endoscopy. Endoscopy 2022, 54, 780–784.
15. Wu, L.; Shang, R.; Sharma, P.; Zhou, W.; Liu, J.; Yao, L.; Dong, Z.; Yuan, J.; Zeng, Z.; Yu, Y.; et al. Effect of a deep learning-based system on the miss rate of gastric neoplasms during upper gastrointestinal endoscopy: A single-centre, tandem, randomised controlled trial. Lancet Gastroenterol. Hepatol. 2021, 6, 700–708.
16. Wu, L.; Wang, J.; He, X.; Zhu, Y.; Jiang, X.; Chen, Y.; Wang, Y.; Huang, L.; Shang, R.; Dong, Z.; et al. Deep learning system compared with expert endoscopists in predicting early gastric cancer and its invasion depth and differentiation status (with videos). Gastrointest. Endosc. 2022, 95, 92–104.e103.
17. Wu, L.; Xu, M.; Jiang, X.; He, X.; Zhang, H.; Ai, Y.; Tong, Q.; Lv, P.; Lu, B.; Guo, M.; et al. Real-time artificial intelligence for detecting focal lesions and diagnosing neoplasms of the stomach by white-light endoscopy (with videos). Gastrointest. Endosc. 2022, 95, 269–280.e266.
18. Yao, K.; Anagnostopoulos, G.K.; Ragunath, K. Magnifying endoscopy for diagnosing and delineating early gastric cancer. Endoscopy 2009, 41, 462–467.
19. Miyaoka, M.; Yao, K.; Tanabe, H.; Kanemitsu, T.; Otsu, K.; Imamura, K.; Ono, Y.; Ishikawa, S.; Yasaka, T.; Ueki, T.; et al. Diagnosis of early gastric cancer using image enhanced endoscopy: A systematic approach. Transl. Gastroenterol. Hepatol. 2020, 5, 50.
20. Doyama, H.; Nakanishi, H.; Yao, K. Image-Enhanced Endoscopy and Its Corresponding Histopathology in the Stomach. Gut Liver 2021, 15, 329–337.
21. Dohi, O.; Yagi, N.; Majima, A.; Horii, Y.; Kitaichi, T.; Onozawa, Y.; Suzuki, K.; Tomie, A.; Kimura-Tsuchiya, R.; Tsuji, T.; et al. Diagnostic ability of magnifying endoscopy with blue laser imaging for early gastric cancer: A prospective study. Gastric Cancer 2017, 20, 297–303.
22. Li, L.; Chen, Y.; Shen, Z.; Zhang, X.; Sang, J.; Ding, Y.; Yang, X.; Li, J.; Chen, M.; Jin, C.; et al. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer 2020, 23, 126–132.
23. Ueyama, H.; Kato, Y.; Akazawa, Y.; Yatagai, N.; Komori, H.; Takeda, T.; Matsumoto, K.; Ueda, K.; Matsumoto, K.; Hojo, M.; et al. Application of artificial intelligence using a convolutional neural network for diagnosis of early gastric cancer based on magnifying endoscopy with narrow-band imaging. J. Gastroenterol. Hepatol. 2021, 36, 482–489.
24. Horiuchi, Y.; Aoyama, K.; Tokai, Y.; Hirasawa, T.; Yoshimizu, S.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Fujisaki, J.; Tada, T. Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging. Dig. Dis. Sci. 2020, 65, 1355–1363.
25. Horiuchi, Y.; Hirasawa, T.; Ishizuka, N.; Tokai, Y.; Namikawa, K.; Yoshimizu, S.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Fujisaki, J.; et al. Performance of a computer-aided diagnosis system in diagnosing early gastric cancer using magnifying endoscopy videos with narrow-band imaging (with videos). Gastrointest. Endosc. 2020, 92, 856–865.e851.
26. Hu, H.; Gong, L.; Dong, D.; Zhu, L.; Wang, M.; He, J.; Shu, L.; Cai, Y.; Cai, S.; Su, W.; et al. Identifying early gastric cancer under magnifying narrow-band images with deep learning: A multicenter study. Gastrointest. Endosc. 2021, 93, 1333–1341.e1333.
27. Ishioka, M.; Osawa, H.; Hirasawa, T.; Kawachi, H.; Nakano, K.; Fukushima, N.; Sakaguchi, M.; Tada, T.; Kato, Y.; Shibata, J.; et al. Performance of an artificial intelligence-based diagnostic support tool for early gastric cancers: Retrospective study. Dig. Endosc. 2022.
28. Yuan, X.L.; Zhou, Y.; Liu, W.; Luo, Q.; Zeng, X.H.; Yi, Z.; Hu, B. Artificial intelligence for diagnosing gastric lesions under white-light endoscopy. Surg. Endosc. 2022, 36, 9444–9453.
29. Barreto, S.G.; Windsor, J.A. Redefining early gastric cancer. Surg. Endosc. 2016, 30, 24–37.
30. Zhou, Y.; Li, X.B. Endoscopic prediction of tumor margin and invasive depth in early gastric cancer. J. Dig. Dis. 2015, 16, 303–310.
31. Sano, T.; Okuyama, Y.; Kobori, O.; Shimizu, T.; Morioka, Y. Early gastric cancer. Endoscopic diagnosis of depth of invasion. Dig. Dis. Sci. 1990, 35, 1340–1344.
32. Choi, J.; Kim, S.G.; Im, J.P.; Kim, J.S.; Jung, H.C.; Song, I.S. Comparison of endoscopic ultrasonography and conventional endoscopy for prediction of depth of tumor invasion in early gastric cancer. Endoscopy 2010, 42, 705–713.
33. Abe, S.; Oda, I.; Shimazu, T.; Kinjo, T.; Tada, K.; Sakamoto, T.; Kusano, C.; Gotoda, T. Depth-predicting score for differentiated early gastric cancer. Gastric Cancer 2011, 14, 35–40.
34. Tsujii, Y.; Kato, M.; Inoue, T.; Yoshii, S.; Nagai, K.; Fujinaga, T.; Maekawa, A.; Hayashi, Y.; Akasaka, T.; Shinzaki, S.; et al. Integrated diagnostic strategy for the invasion depth of early gastric cancer by conventional endoscopy and EUS. Gastrointest. Endosc. 2015, 82, 452–459.
35. Nagahama, T.; Yao, K.; Imamura, K.; Kojima, T.; Ohtsu, K.; Chuman, K.; Tanabe, H.; Yamaoka, R.; Iwashita, A. Diagnostic performance of conventional endoscopy in the identification of submucosal invasion by early gastric cancer: The “non-extension sign” as a simple diagnostic marker. Gastric Cancer 2017, 20, 304–313.
36. Yoon, H.J.; Kim, S.; Kim, J.H.; Keum, J.S.; Oh, S.I.; Jo, J.; Chun, J.; Youn, Y.H.; Park, H.; Kwon, I.G.; et al. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J. Clin. Med. 2019, 8, 1310.
37. Cho, B.J.; Bang, C.S.; Lee, J.J.; Seo, C.W.; Kim, J.H. Prediction of Submucosal Invasion for Gastric Neoplasms in Endoscopic Images Using Deep-Learning. J. Clin. Med. 2020, 9, 1858.
38. Nagao, S.; Tsuji, Y.; Sakaguchi, Y.; Takahashi, Y.; Minatsuki, C.; Niimi, K.; Yamashita, H.; Yamamichi, N.; Seto, Y.; Tada, T.; et al. Highly accurate artificial intelligence systems to predict the invasion depth of gastric cancer: Efficacy of conventional white-light imaging, nonmagnifying narrow-band imaging, and indigo-carmine dye contrast imaging. Gastrointest. Endosc. 2020, 92, 866–873.e861.
39. Zhu, Y.; Wang, Q.C.; Xu, M.D.; Zhang, Z.; Cheng, J.; Zhong, Y.S.; Zhang, Y.Q.; Chen, W.F.; Yao, L.Q.; Zhou, P.H.; et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest. Endosc. 2019, 89, 806–815.e801.
40. Tang, D.; Zhou, J.; Wang, L.; Ni, M.; Chen, M.; Hassan, S.; Luo, R.; Chen, X.; He, X.; Zhang, L.; et al. A Novel Model Based on Deep Convolutional Neural Network Improves Diagnostic Accuracy of Intramucosal Gastric Cancer (With Video). Front. Oncol. 2021, 11, 622827.
41. Nam, J.Y.; Chung, H.J.; Choi, K.S.; Lee, H.; Kim, T.J.; Soh, H.; Kang, E.A.; Cho, S.J.; Ye, J.C.; Im, J.P.; et al. Deep learning model for diagnosing gastric mucosal lesions using endoscopic images: Development, validation, and method comparison. Gastrointest. Endosc. 2022, 95, 258–268.e210.
42. Japanese Gastric Cancer Association. Japanese gastric cancer treatment guidelines 2018 (5th edition). Gastric Cancer 2021, 24, 1–21.
43. Nakayoshi, T.; Tajiri, H.; Matsuda, K.; Kaise, M.; Ikegami, M.; Sasaki, H. Magnifying endoscopy combined with narrow band imaging system for early gastric cancer: Correlation of vascular pattern with histopathology (including video). Endoscopy 2004, 36, 1080–1084.
44. Ling, T.; Wu, L.; Fu, Y.; Xu, Q.; An, P.; Zhang, J.; Hu, S.; Chen, Y.; He, X.; Wang, J.; et al. A deep learning-based system for identifying differentiation status and delineating the margins of early gastric cancer in magnifying narrow-band imaging endoscopy. Endoscopy 2021, 53, 469–477.
45. Sugano, K.; Tack, J.; Kuipers, E.J.; Graham, D.Y.; El-Omar, E.M.; Miura, S.; Haruma, K.; Asaka, M.; Uemura, N.; Malfertheiner, P.; et al. Kyoto global consensus report on Helicobacter pylori gastritis. Gut 2015, 64, 1353–1367.
46. Shichijo, S.; Nomura, S.; Aoyama, K.; Nishikawa, Y.; Miura, M.; Shinagawa, T.; Takiyama, H.; Tanimoto, T.; Ishihara, S.; Matsuo, K.; et al. Application of Convolutional Neural Networks in the Diagnosis of Helicobacter pylori Infection Based on Endoscopic Images. EBioMedicine 2017, 25, 106–111.
47. Watanabe, K.; Nagata, N.; Shimbo, T.; Nakashima, R.; Furuhata, E.; Sakurai, T.; Akazawa, N.; Yokoi, C.; Kobayakawa, M.; Akiyama, J.; et al. Accuracy of endoscopic diagnosis of Helicobacter pylori infection according to level of endoscopic experience and the effect of training. BMC Gastroenterol. 2013, 13, 128.
48. Shichijo, S.; Endo, Y.; Aoyama, K.; Takeuchi, Y.; Ozawa, T.; Takiyama, H.; Matsuo, K.; Fujishiro, M.; Ishihara, S.; Ishihara, R.; et al. Application of convolutional neural networks for evaluating Helicobacter pylori infection status on the basis of endoscopic images. Scand. J. Gastroenterol. 2019, 54, 158–163.
49. Nakashima, H.; Kawahira, H.; Kawachi, H.; Sakaki, N. Artificial intelligence diagnosis of Helicobacter pylori infection using blue laser imaging-bright and linked color imaging: A single-center prospective study. Ann. Gastroenterol. 2018, 31, 462–468.
50. Nakashima, H.; Kawahira, H.; Kawachi, H.; Sakaki, N. Endoscopic three-categorical diagnosis of Helicobacter pylori infection using linked color imaging and deep learning: A single-center prospective study (with video). Gastric Cancer 2020, 23, 1033–1040.
51. Xu, M.; Zhou, W.; Wu, L.; Zhang, J.; Wang, J.; Mu, G.; Huang, X.; Li, Y.; Yuan, J.; Zeng, Z.; et al. Artificial intelligence in the diagnosis of gastric precancerous conditions by image-enhanced endoscopy: A multicenter, diagnostic study (with video). Gastrointest. Endosc. 2021, 94, 540–548.e544.
52. Wu, L.; Zhou, W.; Wan, X.; Zhang, J.; Shen, L.; Hu, S.; Ding, Q.; Mu, G.; Yin, A.; Huang, X.; et al. A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy 2019, 51, 522–531.
53. Wu, L.; Zhang, J.; Zhou, W.; An, P.; Shen, L.; Liu, J.; Jiang, X.; Huang, X.; Mu, G.; Wan, X.; et al. Randomised controlled trial of WISENSE, a real-time quality improving system for monitoring blind spots during esophagogastroduodenoscopy. Gut 2019, 68, 2161–2169.
54. Wu, L.; He, X.; Liu, M.; Xie, H.; An, P.; Zhang, J.; Zhang, H.; Ai, Y.; Tong, Q.; Guo, M.; et al. Evaluation of the effects of an artificial intelligence system on endoscopy quality and preliminary testing of its performance in detecting early gastric cancer: A randomized controlled trial. Endoscopy 2021, 53, 1199–1207.
55. Li, Y.D.; Li, H.Z.; Chen, S.S.; Jin, C.H.; Chen, M.; Cheng, M.; Ma, M.J.; Zhang, X.P.; Wang, X.; Zhou, J.B.; et al. Correlation of the detection rate of upper GI cancer with artificial intelligence score: Results from a multicenter trial (with video). Gastrointest. Endosc. 2022, 95, 1138–1146.e1132.
56. Ramamurthy, K.; George, T.T.; Shah, Y.; Sasidhar, P. A Novel Multi-Feature Fusion Method for Classification of Gastrointestinal Diseases Using Endoscopy Images. Diagnostics 2022, 12, 2316.
57. Mori, Y.; Kudo, S.E.; East, J.E.; Rastogi, A.; Bretthauer, M.; Misawa, M.; Sekiguchi, M.; Matsuda, T.; Saito, Y.; Ikematsu, H.; et al. Cost savings in colonoscopy with artificial intelligence-aided polyp diagnosis: An add-on analysis of a clinical trial (with video). Gastrointest. Endosc. 2020, 92, 905–911.e901.
58. Areia, M.; Mori, Y.; Correale, L.; Repici, A.; Bretthauer, M.; Sharma, P.; Taveira, F.; Spadaccini, M.; Antonelli, G.; Ebigbo, A.; et al. Cost-effectiveness of artificial intelligence for screening colonoscopy: A modelling study. Lancet Digit. Health 2022, 4, e436–e444.
59. Kumar, S.; Mantero, A.; Delgado, C.; Dominguez, B.; Nuchovich, N.; Goldberg, D.S. Eastern European and Asian-born populations are prone to gastric cancer: An epidemiologic analysis of foreign-born populations and gastric cancer. Ann. Gastroenterol. 2021, 34, 669–674.
60. Kubota, K.; Kuroda, J.; Yoshida, M.; Ohta, K.; Kitajima, M. Medical image analysis: Computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg. Endosc. 2012, 26, 1485–1489.
61. Zheng, W.; Zhang, X.; Kim, J.J.; Zhu, X.; Ye, G.; Ye, B.; Wang, J.; Luo, S.; Li, J.; Yu, T.; et al. High Accuracy of Convolutional Neural Network for Evaluation of Helicobacter pylori Infection Based on Endoscopic Images: Preliminary Experience. Clin. Transl. Gastroenterol. 2019, 10, e00109.
62. Guimarães, P.; Keller, A.; Fehlmann, T.; Lammert, F.; Casper, M. Deep-learning based detection of gastric precancerous conditions. Gut 2020, 69, 4–6.
63. Zhang, Y.; Li, F.; Yuan, F.; Zhang, K.; Huo, L.; Dong, Z.; Lang, Y.; Zhang, Y.; Wang, M.; Gao, Z.; et al. Diagnosing chronic atrophic gastritis by gastroscopy using artificial intelligence. Dig. Liver Dis. 2020, 52, 566–572.
64. Itoh, T.; Kawahira, H.; Nakashima, H.; Yata, N. Deep learning analyzes Helicobacter pylori infection by upper gastrointestinal endoscopy images. Endosc. Int. Open 2018, 6, E139–E144.
Figure 1. An example of CADe system for gastric cancer. The system supports the detection of gastric cancers by displaying a rectangle. CADe: computer-assisted detection.

Share and Cite

MDPI and ACS Style

Ochiai, K.; Ozawa, T.; Shibata, J.; Ishihara, S.; Tada, T. Current Status of Artificial Intelligence-Based Computer-Assisted Diagnosis Systems for Gastric Cancer in Endoscopy. Diagnostics 2022, 12, 3153. https://doi.org/10.3390/diagnostics12123153
