Review

The Importance of Artificial Intelligence in Upper Gastrointestinal Endoscopy

by Dusan Popovic 1,2,*, Tijana Glisic 1,3, Tomica Milosavljevic 4, Natasa Panic 2, Marija Marjanovic-Haljilji 2, Dragana Mijac 1,3, Milica Stojkovic Lalosevic 1,3, Jelena Nestorov 1,3, Sanja Dragasevic 1,3, Predrag Savic 1,5 and Branka Filipovic 1,2

1 Faculty of Medicine Belgrade, University of Belgrade, 11000 Belgrade, Serbia
2 Department of Gastroenterology, Clinical Hospital Center “Dr Dragisa Misovic-Dedinje”, 11000 Belgrade, Serbia
3 Clinic for Gastroenterohepatology, University Clinical Centre of Serbia, 11000 Belgrade, Serbia
4 General Hospital “Euromedic”, 11000 Belgrade, Serbia
5 Clinic for Surgery, Clinical Hospital Center “Dr Dragisa Misovic-Dedinje”, 11000 Belgrade, Serbia
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(18), 2862; https://doi.org/10.3390/diagnostics13182862
Submission received: 30 July 2023 / Revised: 28 August 2023 / Accepted: 1 September 2023 / Published: 5 September 2023

Abstract:
Recently, there has been growing interest in the application of artificial intelligence (AI) in medicine, especially in specialties that rely on visualization methods. AI is defined as a computer’s ability to achieve human cognitive performance, which is accomplished by enabling the computer to “learn”. This can be achieved in two ways: machine learning and deep learning. Deep learning is a complex learning system involving the application of artificial neural networks, whose algorithms imitate the human form of learning. Upper gastrointestinal endoscopy allows examination of the esophagus, stomach and duodenum. In addition to the quality of endoscopic equipment and patient preparation, the performance of upper endoscopy depends on the experience and knowledge of the endoscopist. The application of artificial intelligence in endoscopy refers to computer-aided detection and the more complex computer-aided diagnosis. The application of AI in upper endoscopy is aimed at improving the detection of premalignant and malignant lesions, with particular attention to the early detection of dysplasia in Barrett’s esophagus, the early detection of esophageal and stomach cancer and the detection of H. pylori infection. Artificial intelligence reduces the workload of endoscopists, is not influenced by human factors and increases the diagnostic accuracy and quality of endoscopic methods.

1. Introduction

In recent years, there has been a growing interest in the application of artificial intelligence (AI) in medicine, especially in specialties where visualization methods are applied. The most significant development of AI is taking place in the fields of radiology, gastroenterology (endoscopy), surgery and dermatology, but also in other specialties. The beginning of AI dates back to 1950 [1].
Artificial intelligence is defined as the ability of a computer to achieve human cognitive performance, primarily learning and decision making [2,3]. To achieve this, it is necessary to enable computers to “learn”. There are two ways to enable a computer to perform specific operations. One is classic programming, in which the computer derives output data from input data according to predefined algorithms (programs). The other, much more complex, approach is machine learning (ML), which is the basis of AI.
In classic machine learning, mathematical descriptions of patterns (e.g., color, texture, edge, size, etc.) are defined during programming, and the computer then classifies existing data according to these hand-crafted features [4,5]. The program itself is insufficient to differentiate output variables solely on the basis of input data. Further ML therefore requires “training”, attained by processing a large number of different input data from which the computer “learns”. In endoscopy, these inputs are images or videos. After the training phase, the computer is able to recognize certain features even in images or videos that are unknown to it. ML models can be supervised or unsupervised [3]. The following concepts can be used: support vector machines (SVMs), decision trees and artificial neural networks [3].
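As a toy illustration of the supervised “training” described above, the sketch below fits a simple perceptron to hypothetical hand-crafted feature vectors (e.g., color and texture scores per image region). This is a minimal stand-in for the SVMs used in the cited studies, not their actual implementation; the features and labels are invented.

```python
# Minimal sketch of supervised learning on hand-crafted features.
# A perceptron stands in for the SVMs used in the cited studies;
# the (color, texture) feature vectors are hypothetical.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a linear decision boundary from labeled feature vectors (+1/-1)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: nudge the boundary
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# "Training phase": hypothetical feature vectors per image region.
features = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.1, 0.3)]
labels = [1, 1, -1, -1]  # +1 = lesion, -1 = normal mucosa
w, b = train_perceptron(features, labels)

# After training, the model classifies a feature vector it has never seen.
print(predict(w, b, (0.85, 0.75)))  # -> 1 (lesion-like)
```

The key point is the one the text makes: the program alone cannot separate the classes; the decision rule emerges only from processing labeled training data.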
A more complex machine learning system is deep learning (DL). The most commonly used system for DL is the convolutional neural network (CNN) [5]. The schematic structure of the CNN system is shown in Figure 1.
This system mimics the human neural network. It consists of a large number of artificial neurons organized into an artificial neural network. Namely, the neurons are arranged in layers, forming a so-called multilayered system. The neurons of one layer are connected to the neurons of the next layer, and the output of each neuron serves as input for the neurons of the next layer [4,6]. During DL, the computer itself extracts the features and thus forms its own recognition patterns, without the influence of the programmer [5]. However, the disadvantage of deep learning is that it remains unknown how the computer makes individual decisions (the “black box” problem) [5]. Unlike conventional ML, which requires human intervention to correct errors, a DL system has the ability to learn from its own errors [2].
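The layered structure described above can be sketched in plain Python: a tiny two-layer forward pass in which the output of a 3×3 convolution feeds, after a ReLU nonlinearity, into a second layer. The filter weights here are arbitrary placeholders; a real CNN, such as those outlined in Figure 1, learns its filters from large sets of labeled endoscopic images.

```python
# Toy forward pass through two "layers" of a convolutional network.
# Weights are arbitrary placeholders; real CNNs learn them during training.

def conv2d(image, kernel):
    """Valid 2D convolution of a 2D list of numbers by a 3x3 kernel."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(3) for dj in range(3))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Nonlinearity applied between layers."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A 5x5 "image" with a bright vertical edge in the middle columns.
image = [[0, 0, 1, 1, 1]] * 5

edge_kernel = [[-1, 0, 1]] * 3      # layer 1: responds to vertical edges
blur_kernel = [[1 / 9.0] * 3] * 3   # layer 2: smooths the layer-1 output

layer1 = relu(conv2d(image, edge_kernel))   # 3x3 feature map
layer2 = relu(conv2d(layer1, blur_kernel))  # 1x1 feature map
print(layer1, layer2)
```

The output of every neuron in layer 1 becomes the input of layer 2, exactly as in the multilayered arrangement described in the text; stacking many such layers, with learned rather than fixed kernels, yields a deep network.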
The application of artificial intelligence in endoscopy refers to computer-aided detection (CAD) and the somewhat more complex computer-aided diagnosis (CADx).
The development of AI methods in upper gastrointestinal endoscopy is focused in three segments [7]:
  • Quality assessment;
  • Detection of lesions;
  • Characterizations of lesions.
These three segments follow the endoscopist’s cognitive process: first a quality examination must be performed, followed by the detection of lesions and their characterization. By integrating AI algorithms, the diagnostic process is significantly improved. Quality assessment refers to the adequate visualization of all anatomical landmarks, with the assistance of AI methods (e.g., multi-frame classification) [7]. The application of AI in upper endoscopy is primarily aimed at improving the detection of premalignant and malignant lesions. This is especially important given that a significant proportion of cancers can go undiagnosed during endoscopy. Namely, the frequency of missed malignancies was evaluated in a study that included 4,105,399 patients [8]. Carcinomas diagnosed <6 months after upper endoscopy were classified as prevalent, and those diagnosed 6 to 36 months afterwards as missed [8]. The highest percentage of missed esophageal cancers was for adenocarcinoma (6.1%), while for squamous cell carcinoma it was 4.2% [8]. The majority of missed gastric cancers were adenocarcinomas, at 5.7% [8]. Similar results were obtained in other studies [9,10].
Current research in the field of AI application in upper gastrointestinal endoscopy is focused on the detection, demarcation and characterization of esophageal and stomach cancer, including premalignant conditions (Barrett’s esophagus, H. pylori infection, etc.) [2]. The emphasis is placed on early diagnosis of these diseases.

2. Barrett’s Esophagus and Esophageal Adenocarcinoma

Barrett’s esophagus (BE) represents the replacement of the squamous epithelium of the esophagus by metaplastic columnar epithelium [11]. Since metaplasia is present, Barrett’s esophagus is a premalignant condition and can lead to esophageal adenocarcinoma (EAC). The rate of progression of BE to EAC is below 1% per patient-year [2,12,13].
During endoscopic exploration of the esophagus, it is necessary to determine the level of the esophagogastric junction and the Z line. Under normal conditions, the esophagogastric junction and the Z line are at the same level. If there is an extension of the columnar epithelium by more than 1 cm from the proximal end of the gastric folds, BE is suspected [14]. The diagnosis is confirmed by the histopathological finding of specialized intestinal metaplasia [14]. When taking biopsies of suspected Barrett’s esophagus, the Seattle protocol is applied. Namely, in patients with non-dysplastic BE, it is recommended to take biopsies from four quadrants of the esophagus, every 2 cm, starting from the esophagogastric junction [15]. In patients with BE and low-grade dysplasia, biopsies of all four quadrants, every 1–2 cm, starting from the esophagogastric junction are recommended, while in the case of BE with high-grade dysplasia, biopsies of all four quadrants, every 1 cm, from the esophagogastric junction are recommended [15]. Biopsies of all observed changes (nodules, depressions, irregularities, etc.) are mandatory. Barrett’s esophagus is further classified based on the Prague classification [16]. For its application, it is necessary to determine the level of the esophagogastric junction, the extent of circumferential Barrett’s esophagus and the maximum extension (tongue) of Barrett’s esophagus. The distances (in centimeters) from the esophagogastric junction to the circumferential extent (C) and to the maximal extent, i.e., the tongue (M), of Barrett’s esophagus are recorded as C (centimeters) M (centimeters).
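The Prague C&M notation described above is a simple arithmetic rule. The hypothetical helper below sketches it: given endoscope positions (in centimeters from the incisors, an assumed measurement convention) of the esophagogastric junction, the proximal circumferential extent and the proximal tip of the longest tongue, it returns the C/M label.

```python
# Hypothetical helper for the Prague C&M classification.
# Inputs are endoscope positions in cm from the incisors; the Barrett's
# segment lies proximal to the esophagogastric junction (EGJ).

def prague_cm(egj_cm, circumferential_cm, max_extent_cm):
    """Return the Prague classification string, e.g. 'C2M5'."""
    c = egj_cm - circumferential_cm  # circumferential extent above the EGJ
    m = egj_cm - max_extent_cm       # maximal (tongue) extent above the EGJ
    if not 0 <= c <= m:
        raise ValueError("the tongue must reach at least as far proximally "
                         "as the circumferential segment")
    return f"C{c:g}M{m:g}"

# EGJ at 38 cm, circumferential BE up to 36 cm, longest tongue up to 33 cm:
print(prague_cm(38, 36, 33))  # -> C2M5
```

The sanity check encodes the definition itself: the maximal extent M can never be shorter than the circumferential extent C.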
In order to make a correct diagnosis of BE, it is necessary to conduct a careful and detailed examination of the esophagus. The method of choice is high-definition white-light endoscopy (HD-WLE), but classical (dye) and virtual chromoendoscopy methods also play a significant role [12,17]. Among dye chromoendoscopy methods, 1.5–3.0% acetic acid is useful because dysplastic tissue shows an accelerated loss of aceto-whitening, which enables easier differentiation from normal tissue [18]. In addition, better visualization is enabled by virtual chromoendoscopy techniques: NBI (narrow-band imaging), BLI (blue laser imaging), LCI (linked color imaging) and others [17].
Given that a detailed examination is necessary and discrete mucosal changes need to be observed, significant assistance could be provided by AI. Namely, by applying AI techniques, the detection, diagnosis and endoscopic treatment of BE may be improved [6]. The importance of early detection of dysplasia and carcinoma in BE is in the outcome and different therapeutic modalities. Specifically, advanced EAC has a poor prognosis and requires invasive treatment (most often surgical treatment), while earlier stages of the disease (stage T1) allow endoscopic resection [13,19,20].
The application of AI in the diagnosis of BE is aimed at the detection of lesions, their characterization and the assessment of the depth of invasion (if cancer is present) [6].
The first study on the application of AI in the detection of early BE neoplasia was published by van der Sommen et al. in 2016 [21]. In their research, 100 high-definition (HD) endoscopic images (44 patients) were used and analyzed by a total of five expert endoscopists. One endoscopist had access to the results of the histopathological analysis (nonblinded) and his findings served as the “gold standard”, while the others did not know the pathohistological findings [21]. The computer system was constructed using the SVM model, based on the color and texture of the region of interest [21]. This system showed a sensitivity and specificity of 86% and 87%, respectively [21]. De Groof et al. developed and validated a computer-aided detection system for early neoplasia in BE [12]. The authors used a CAD system based on a ResNet/UNet hybrid model, in which the ResNet model performs image classification and the UNet model performs intra-image prediction segmentation [12]. The study indicated that this CAD system enables the classification of endoscopic images into non-dysplastic and dysplastic BE with an accuracy of 89%, a sensitivity of 90% and a specificity of 88% [12]. In addition to the detection of neoplasia, the system identified the optimal biopsy site location in 97% of cases [12]. This is very significant because the histopathological analysis of the altered region is crucial for establishing the diagnosis. The application of AI in BE neoplasia detection and delineation shows performance similar to that of expert endoscopists, which is higher than that of non-expert endoscopists [12,22]. Fockens et al. likewise showed that a CAD system with high sensitivity (88% and 100%, depending on the dataset) outperformed general endoscopists, but with somewhat lower specificity (64–66%) [23].
Most of the developed AI models for the detection of neoplastic BE are image-based, that is, they are based on the analysis of static images. With the further development of AI, systems enabling the analysis of real-time video sequences have been developed [9,10,24]. These systems are much closer to everyday clinical practice. Abdelrahim et al. developed and validated a CAD system that enables the real-time detection of neoplastic BE with an accuracy of 92.0%, a sensitivity of 93.8% and a specificity of 90.7% [24]. Furthermore, the CAD system showed significantly better accuracy, sensitivity and specificity compared to endoscopists [24].
Narrow-band imaging, as a method of virtual chromoendoscopy, enables clearer visualization of the mucosal and vascular pattern. This technique improves the visualization of early neoplastic lesions compared to classic WLE (white-light endoscopy), especially when used in combination with magnification (zoom endoscopy). Struyvenberg et al. conducted a study in which they evaluated the performance of a CAD system with NBI zoom endoscopy in the detection of neoplastic BE [25]. The results showed that a video-based CAD system, combined with NBI zoom endoscopy, had an accuracy of 83%, a sensitivity of 85% and a specificity of 83% [25]. As the main limitation of the study, the authors cited the small number of NBI zoom images available for training the CAD system, which is understandable given that the majority of datasets consist of images obtained by WLE. Swager et al. developed an algorithm for the CAD of early BE neoplasia using volumetric laser endomicroscopy [26]. This algorithm showed good performance, and its value as an aid to endoscopists in clinical work was pointed out [26].
Lui et al. conducted a meta-analysis in which the pooled sensitivity of AI techniques in the diagnosis of neoplastic BE was 88.0% (95% CI, 82.0–92.1%), the specificity was 90.4% (95% CI, 85.6–94.5%) and the area under the curve (AUC) was 0.96 (95% CI, 0.93–0.99) [22]. Additionally, there were no significant differences between the different endoscopy modalities (WLE vs. volumetric laser endomicroscopy), nor between AI methods (CNN vs. non-CNN) [22]. Similar results were obtained in the meta-analysis conducted by Visaggi et al. [27]. This meta-analysis found that AI methods in the diagnosis of Barrett’s neoplasia have a sensitivity of 89%, a specificity of 86%, a positive likelihood ratio (PLR) of 6.50, a negative likelihood ratio (NLR) of 0.13, a diagnostic odds ratio (DOR) of 50.53 and an area under the summary receiver operating characteristic curve (AUROC) of 90% [27]. In the aforementioned study, no significant difference in performance between AI and endoscopists was recorded when WLE methods were used [27].
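The likelihood ratios and diagnostic odds ratio quoted for Barrett’s neoplasia follow directly from sensitivity and specificity. The sketch below derives them from the pooled 89%/86% figures; the results agree with the reported PLR 6.50, NLR 0.13 and DOR 50.53 up to the rounding of the pooled inputs.

```python
# Derived diagnostic ratios from pooled sensitivity and specificity.
# PLR = sens / (1 - spec); NLR = (1 - sens) / spec; DOR = PLR / NLR.

def likelihood_ratios(sensitivity, specificity):
    plr = sensitivity / (1 - specificity)
    nlr = (1 - sensitivity) / specificity
    dor = plr / nlr
    return plr, nlr, dor

# Pooled Barrett's neoplasia estimates from the meta-analysis: 89% / 86%.
plr, nlr, dor = likelihood_ratios(0.89, 0.86)
print(round(plr, 2), round(nlr, 2), round(dor, 1))
# -> 6.36 0.13 49.7 (the reported 6.50 / 0.13 / 50.53 use unrounded inputs)
```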
A summary of the results of selected studies in the application of AI for the detection of neoplastic BE is shown in Table 1 [9,10,12,21,23,24,25,26,28].

3. Esophageal Squamous Cell Carcinoma (ESCC)

Squamous cell carcinoma is the most common esophageal carcinoma [13,29]. Overall 5-year survival in this type of cancer is 15–25%, with a better prognosis if the disease is detected at an earlier stage [13]. Detecting ESCC at earlier stages also allows less aggressive treatment modalities (endoscopic therapy) [20]. The method of choice in the diagnosis of ESCC is upper gastrointestinal endoscopy. Dye and virtual chromoendoscopy methods also contribute to early diagnosis [20,30,31]. Dye chromoendoscopy with Lugol’s solution is particularly significant, as it enables easier detection of esophageal squamous dysplasia [32]. Staining with Lugol’s solution enables the demarcation of altered esophageal mucosa, because mucosa containing early ESCC is not stained by this solution due to glycogen depletion [19]. Although this method of dye chromoendoscopy represents the “gold standard” in the diagnosis of early squamous cell neoplasia, with a high sensitivity (over 90%), its specificity is only about 70% [5]. The reason for the lower specificity is that some benign diseases also cause glycogen depletion and therefore do not stain with Lugol’s solution. Dysplastic epithelium in the esophagus is quite difficult to detect with WLE, since the macroscopic changes (nodules, plaques and ulcerations) seen in advanced ESCC are generally absent. AI techniques are therefore being developed to improve the detection rate of esophageal dysplasia and early ESCC.
The beginning of the application of computer assistance in the diagnosis of ESCC occurred in 2007, when Kodashima et al. developed a computer analysis system based on endocytoscopy [33]. It enabled easier differentiation of malignant from non-malignant esophageal tissue. However, this method was based on the computer analysis of images of the nuclear region of cells but did not involve AI.
One of the more significant studies in the field of AI application in the early diagnosis of esophageal cancer was published in 2018 by Horie et al. [34]. They used WLE and NBI as the endoscopic methods and a CNN as the deep learning method. The sensitivity of the developed method was 98%, with the detection of all lesions smaller than 10 mm [34]. In the aforementioned study, NBI showed higher sensitivity compared to WLE (89% vs. 81%) [34]. The diagnostic accuracy of this AI system was 99% for the detection of superficial esophageal cancer and 92% for advanced cancer [34]. The authors noted that the CNN could not adequately register some cases of esophageal cancer if the surrounding mucosa was inflamed [34]. In the study by Feng et al., which used the YOLOv5l model as the deep learning method, the application of a CNN to images obtained by WLI (white-light imaging) showed a sensitivity of 90.1%, a specificity of 94.3%, an accuracy of 88.3%, a PPV of 88.3% and an NPV of 94.7% [35]. Similar results were obtained by Wang et al. [36]. Regarding the endoscopy method, Feng et al. concluded that NBI has better performance than WLE but is inferior to dye chromoendoscopy with Lugol’s solution [35]. Overall, the obtained performance is comparable to that of expert endoscopists, and significantly higher than that of less experienced (junior and mid-level) endoscopists [35].
In addition to the detection of early ESCC, it is necessary to determine the depth of invasion in order to select the adequate therapeutic modality (surgery vs. endoscopy). Shimamoto et al. developed an AI model using a CNN to estimate the depth of ESCC invasion in real time [37]. This was the first study in which data extraction from video images was used. Their method showed a sensitivity of 71%, a specificity of 95% and an accuracy of 89% when magnifying endoscopy (ME) was used along with WLE [37]. This performance is comparable to or better than that of expert endoscopists, depending on whether ME is used in addition to WLI [37].
Yuan et al. incorporated WLE, ME-NBI and Lugol’s solution staining into their AI model for the early detection of superficial ESCC [38]. In this multicenter study, they concluded that the application of AI enables the detection of this type of cancer with an accuracy of 91.1%, a sensitivity of 96.9% and a specificity of 83.9% across all investigated endoscopic imaging modalities [38]. This study included the widest range of endoscopic modalities and, in addition to endoscopic images, also included video analysis.
The pooled performance of AI methods in the detection of neoplastic lesions of the esophageal squamous epithelium is a sensitivity of 75.6% (95% CI, 48.3–92.5%), a specificity of 92.5% (95% CI, 66.8–99.5%) and an AUC of 0.88 (95% CI, 0.82–0.96) [32]. The results of this study favor the use of NBI over WLE [22]. In the meta-analysis by Visaggi et al., the application of AI techniques in the diagnosis of squamous cell carcinoma of the esophagus had a sensitivity of 95%, a specificity of 92%, a PLR of 12.65, an NLR of 0.05, a DOR of 258.36 and an AUROC of 97% [27]. The performance of the AI methods was slightly better than that of the endoscopists, but without a significant difference [27].
The results of selected studies on the application of AI in the diagnosis of ESCC are shown in Table 2 [34,35,36,37,38,39,40,41].

4. Early Gastric Carcinoma

Gastric cancer is the fifth most common cancer worldwide [42]. Five-year survival depends on the stage at which the disease is detected: for advanced gastric cancer, it is 5–25%, while for early gastric cancer it is over 90% [43]. Early gastric cancer does not penetrate the gastric wall deeper than the submucosa, regardless of lymph node metastases [44]. There are two types of gastric cancer, intestinal and diffuse [45]. The therapeutic approach and prognosis of these two types differ.
In the early stages, gastric cancer is usually asymptomatic. Later in the evolution of the disease, the following may occur: dyspeptic symptoms, abdominal pain, nausea, vomiting, aversion to meat, weight loss, anemia, bleeding, etc. [45]. The diagnosis is made on the basis of upper gastrointestinal endoscopy and a pathohistological analysis of gastric biopsies. Early gastric cancer can present endoscopically as red discoloration of part of the gastric mucosa, gastric ulceration or a depressed gastric lesion [46]. Since these lesions are often discrete and difficult to visualize, advanced endoscopic methods can help in diagnosis. The methods of dye and virtual chromoendoscopy, as well as the application of ME, have better performance in detecting early gastric cancer compared to classic WLE [46,47,48].
Miyaki et al. published one of the first studies on the application of AI in the detection of early gastric cancer in 2013 [49]. The constructed system was trained on a total of 493 endoscopic images, of which 235 were images without neoplastic tissue and 258 had gastric cancer present. In the training sample with cancer, 67% of the samples were with differentiated cancer and 33% with undifferentiated cancer [49]. This system showed an accuracy of 85.9%, a sensitivity of 84.8%, a specificity of 87.0%, a PPV of 86.7% and an NPV of 85.1% [49].
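Accuracy, sensitivity, specificity, PPV and NPV such as those reported above are all derived from a single confusion matrix. The sketch below shows the standard formulas; the counts are invented for illustration (sized to a hypothetical 258-cancer/235-normal test set) and are not those of the cited study.

```python
# Standard diagnostic metrics from a confusion matrix.
# The counts used below are invented for illustration only.

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # recall among cancer images
        "specificity": tn / (tn + fp),  # recall among normal images
        "ppv":         tp / (tp + fp),  # precision of "cancer" calls
        "npv":         tn / (tn + fn),
    }

# Hypothetical test set: 258 cancer images, 235 normal images.
m = diagnostic_metrics(tp=219, fp=31, fn=39, tn=204)
for name, value in m.items():
    print(f"{name}: {value:.1%}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the proportion of diseased cases in the test set, which is why they are harder to compare across studies with different case mixes.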
The detection of early gastric cancer with the assistance of AI can be performed on previously obtained endoscopic images but also in real time. Luo et al. developed and validated GRAIDS (the Gastrointestinal Artificial Intelligence Diagnostic System) for the diagnosis of upper gastrointestinal cancers [50]. In this multicenter study, the authors built the system on the DeepLabV3+ architecture. The system was trained and validated on a total of 1,036,496 endoscopic images (84,424 patients), making this the largest study on the use of AI in the diagnosis of cancer of the upper gastrointestinal tract [50]. The results indicated the excellent performance of this model, with the possibility of real-time use.
Early detection of gastric cancer, in addition to enabling better survival, also enables the application of less invasive but still curative methods compared to advanced cancer. The main criterion used when evaluating the possibility of curative endoscopic resection is the depth of invasion. The first study on the application of CAD for assessing the depth of gastric cancer invasion based on endoscopic images was published by Kubota et al. [51]. This system showed an overall accuracy of 64.7%, with the highest accuracy for the T1 stage (77.2%) and the lowest for the T2 stage (49.1%) [51].
In a study by Niikura et al., the effectiveness of AI and expert endoscopists in the detection of gastric cancer was compared [52]. In a sample of 500 patients (100 with gastric cancer and 400 without cancer), AI detected cancer in 100% of patients and expert endoscopists in 94.1% of cases [52]. Early cancer was diagnosed in 100% in the AI group and in 88.4% in the expert group, while success in detecting invasive cancer (T1b stage and higher) was 100% in both groups [52]. Although there is a difference in the detection of early cancer, it is not statistically significant.
A meta-analysis that assessed the performance of AI in the detection of neoplastic gastric lesions indicated a pooled sensitivity of these techniques of 92.1% (95% CI, 87.7–95.4%) and a specificity of 88.0% (95% CI, 78.0–95.0%) with an AUC of 0.96 (95% CI, 0.94–0.99) [22]. There was no significant difference in the different modalities of endoscopy (WLE vs. NBI) or in the AI method (CNN vs. support vector model) [22].
The excellent performance of AI methods in the diagnosis of gastric cancer can also be explained by the characteristic morphological features of these tumors. Namely, according to the Paris classification, gastric carcinomas are most often type IIa (elevated lesion), alone or in combination with type IIc (depressed lesion), as IIa+IIc or IIc+IIa [53]. Changes in the mucosal and vascular pattern are certainly important as well.
In addition to endoscopy, AI methods can be applied in the pathohistological and CT diagnosis of gastric cancer, in surgical treatment and in predicting the outcome of this disease [43,54].
The results of selected studies on the application of AI in the diagnosis of early gastric cancer are shown in Table 3 [49,50,55,56,57,58,59].

5. H. pylori Gastritis

Helicobacter pylori is a microaerophilic Gram-negative bacterium. It is estimated that the infection is present in half of the world’s population [60]. H. pylori can lead to chronic gastritis, intestinal metaplasia, MALT (mucosa-associated lymphoid tissue) lymphoma and gastric cancer [60]. In patients with H. pylori-induced gastritis, endoscopic findings include mucosal edema, diffuse hyperemia, thickening of gastric folds, mucosal nodularity and atrophy [61,62]. The regular arrangement of collecting venules and fundic gland polyps are characteristic of H. pylori-negative mucosa [62]. However, the endoscopic findings are not specific, and the diagnosis of H. pylori infection can be confirmed by a histopathological analysis of gastric biopsies or by non-invasive tests (urea breath test, stool antigen test, serology and molecular methods). For the histopathological diagnosis of H. pylori-positive gastritis, biopsies are taken according to the updated Sydney protocol [15]. It involves taking two biopsies from the antrum, two from the gastric corpus and one from the angulus. If biopsies are taken only for the diagnosis of H. pylori infection, 1–2 biopsies from the antrum are sufficient [15].
AI systems can improve the optical diagnosis of H. pylori infection based on pattern recognition applied to endoscopic images [62]. Further refinement and development of the AI system would help in much faster and more accurate diagnosis, but also in avoiding unnecessary gastric biopsies in order to detect H. pylori infection.
Shichijo et al. developed an AI system, which they applied to a total of 397 patients, of whom 72% were H. pylori-positive [63]. To develop this detection system, they used GoogLeNet, a 22-layer CNN [63]. This system showed an accuracy of 87.7%, a sensitivity of 88.9% and a specificity of 87.4% [63]. Comparing the performance of the AI system and the endoscopists, the authors concluded that the accuracy of the AI system was higher, with a shorter detection time, while the sensitivity and specificity were comparable [63].
An improvement in the performance of AI in the detection of H. pylori infection can be achieved by using multiple endoscopic images, as well as advanced endoscopy techniques. In a pilot study by Zheng et al., the use of multiple stomach images improved the accuracy (93.8% vs. 84.5%), sensitivity (91.6% vs. 81.4%) and specificity (98.6% vs. 90.1%) of endoscopy compared to the analysis of a single image [64].
The use of advanced endoscopy techniques, primarily NBI, BLI, LCI and ME, leads to the improved detection of H. pylori infection [60,65]. Nakashima et al. showed that the use of advanced endoscopic methods is superior to classical WLI in the detection of H. pylori-positive gastritis [66]. In this study, which was conducted on a sample of 222 patients, it was concluded that the AUC was significantly higher for BLI-bright (0.96) and LCI (0.95) compared to WLI (0.66) [66]. The advantage of LCI endoscopy in the detection of H. pylori infection was confirmed by the same author in a subsequent study [67].
Based on the results of the meta-analysis, the pooled sensitivity of the AI method in the detection of H. pylori infection was 83.9% (95% CI, 70.8–92.9%), while the specificity was 89.7% (95% CI, 79.4–95.9%) and the AUC was 0.92 (95% CI, 0.88–0.97) [22].
A summary of the results of selected studies on the use of AI for the detection of H. pylori infection is shown in Table 4 [63,64,66,67,68,69,70,71].

6. Conclusions

Artificial intelligence reduces the workload of endoscopists, is not influenced by human factors (e.g., fatigue, stress, etc.) and contributes to increasing the diagnostic accuracy and quality of endoscopic methods. These systems are very effective in the diagnosis of neoplastic Barrett’s esophagus, esophageal squamous cell carcinoma, early gastric cancer and H. pylori infection. AI methods are effective across different endoscopy modalities. Given that lack of time is a significant constraint in endoscopy, computerized data processing can shorten detection time, since it processes more data at a higher speed than a human being.
However, these systems also have their weaknesses. Given that AI systems, after development, do not include constant human supervision and correction, purely technical errors are possible. Namely, during the application of AI methods, false-negative and false-positive findings may occur. More important are false-negative results, which most often occur due to visualization and technical errors.
The question often arises whether AI will replace doctors. At this stage of the development of science and technology, it is unlikely. Namely, an expert endoscopist requires, in addition to endoscopy knowledge, significant clinical experience. This has been demonstrated in studies where the performance of expert endoscopists was comparable to the performance of AI. Moreover, expert opinion served as the “gold standard” in studies on the application of AI. Further improvement of these techniques will provide significant help in diagnosis and facilitate the learning of endoscopy for all endoscopists, especially less experienced ones. Therefore, endoscopists should not rely solely on such systems; rather, these systems should serve as their assistants.

Future Directions

Most of the previous research in this area has been conducted using WLE, while studies using advanced endoscopic techniques are sporadic. Therefore, future directions should be focused on the use of advanced endoscopic techniques, primarily virtual chromoendoscopy. Unfortunately, the main limiting factor is the unavailability of this technique and trained endoscopists in a large number of hospitals. For an adequate assessment of the clinical application of AI methods, a larger number of prospective studies are needed, as well as a larger number of studies that include the analysis of real-time images and videos. Also, we believe that it would be interesting to develop models that include certain clinical, laboratory and radiological data.

Author Contributions

Conceptualization, D.P., T.G. and T.M.; methodology, D.P. and D.M.; formal analysis, D.P., D.M., M.M.-H., J.N. and P.S.; investigation, D.P., T.G., M.S.L., N.P., S.D., P.S. and B.F.; writing—original draft preparation, D.P., N.P. and B.F.; writing—review and editing, D.P., M.S.L., J.N. and S.D.; supervision, T.M. and B.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were generated.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kaul, V.; Enslin, S.; Gross, S.A. History of artificial intelligence in medicine. Gastrointest. Endosc. 2020, 92, 807–812. [Google Scholar] [CrossRef] [PubMed]
  2. Tokat, M.; van Tilburg, L.; Koch, A.D.; Spaander, M.C. Artificial Intelligence in Upper Gastrointestinal Endoscopy. Dig. Dis. 2022, 40, 395–408. [Google Scholar] [CrossRef] [PubMed]
  3. Okagawa, Y.; Abe, S.; Yamada, M.; Oda, I.; Saito, Y. Artificial Intelligence in Endoscopy. Dig. Dis. Sci. 2022, 67, 1553–1572. [Google Scholar] [CrossRef]
  4. Ebigbo, A.; Palm, C.; Probst, A.; Mendel, R.; Manzeneder, J.; Prinz, F.; de Souza, L.A.; Papa, J.P.; Siersema, P.; Messmann, H. A technical review of artificial intelligence as applied to gastrointestinal endoscopy: Clarifying the terminology. Endosc. Int. Open 2019, 7, E1616–E1623. [Google Scholar] [CrossRef]
  5. Mori, Y.; Kudo, S.; Mohmed, H.E.N.; Misawa, M.; Ogata, N.; Itoh, H.; Oda, M.; Mori, K. Artificial intelligence and upper gastrointestinal endoscopy: Current status and future perspective. Dig. Endosc. 2019, 31, 378–388. [Google Scholar] [CrossRef] [PubMed]
  6. Hamade, N.; Sharma, P. Artificial intelligence in Barrett’s Esophagus. Ther. Adv. Gastrointest. Endosc. 2021, 14, 26317745211049964. [Google Scholar] [CrossRef] [PubMed]
  7. Renna, F.; Martins, M.; Neto, A.; Cunha, A.; Libânio, D.; Dinis-Ribeiro, M.; Coimbra, M. Artificial Intelligence for Upper Gastrointestinal Endoscopy: A Roadmap from Technology Development to Clinical Practice. Diagnostics 2022, 12, 1278. [Google Scholar] [CrossRef] [PubMed]
  8. Januszewicz, W.; Witczak, K.; Wieszczy, P.; Socha, M.; Turkot, M.H.; Wojciechowska, U.; Didkowska, J.; Kaminski, M.F.; Regula, J. Prevalence and risk factors of upper gastrointestinal cancers missed during endoscopy: A nationwide registry-based study. Endoscopy 2022, 54, 653–660. [Google Scholar] [CrossRef]
  9. De Groof, A.J.; Struyvenberg, M.R.; Fockens, K.N.; van der Putten, J.; van der Sommen, F.; Boers, T.G.; Zinger, S.; Bisschops, R.; de With, P.H.; Pouw, R.E.; et al. Deep learning algorithm detection of Barrett’s neoplasia with high accuracy during live endoscopic procedures: A pilot study (with video). Gastrointest. Endosc. 2020, 91, 1242–1250. [Google Scholar] [CrossRef]
  10. Ebigbo, A.; Mendel, R.; Probst, A.; Manzeneder, J.; Prinz, F.; de Souza, L.A., Jr.; Papa, J.; Palm, C.; Messmann, H. Real-time use of artificial intelligence in the evaluation of cancer in Barrett’s oesophagus. Gut 2020, 69, 615–616. [Google Scholar] [CrossRef]
  11. Peters, Y.; Al-Kaabi, A.; Shaheen, N.J.; Chak, A.; Blum, A.; Souza, R.F.; Di Pietro, M.; Iyer, P.G.; Pech, O.; Fitzgerald, R.C.; et al. Barrett oesophagus. Nat. Rev. Dis. Prim. 2019, 5, 35. [Google Scholar] [CrossRef] [PubMed]
  12. De Groof, A.J.; Struyvenberg, M.R.; van der Putten, J.; van der Sommen, F.; Fockens, K.N.; Curvers, W.L.; Zinger, S.; Pouw, R.E.; Coron, E.; Baldaque-Silva, F.; et al. Deep-Learning System Detects Neoplasia in Patients with Barrett’s Esophagus with Higher Accuracy Than Endoscopists in a Multistep Training and Validation Study with Benchmarking. Gastroenterology 2020, 158, 915–929.e4. [Google Scholar] [CrossRef] [PubMed]
  13. Pennathur, A.; Gibson, M.K.; Jobe, B.A.; Luketich, J.D. Oesophageal carcinoma. Lancet 2013, 381, 400–412. [Google Scholar] [CrossRef]
  14. Weusten, B.; Bisschops, R.; Coron, E.; Dinis-Ribeiro, M.; Dumonceau, J.-M.; Esteban, J.-M.; Hassan, C.; Pech, O.; Repici, A.; Bergman, J.; et al. Endoscopic management of Barrett’s esophagus: European Society of Gastrointestinal Endoscopy (ESGE) Position Statement. Endoscopy 2017, 49, 191–198. [Google Scholar] [CrossRef]
  15. Sharaf, R.N.; Shergill, A.K.; Odze, R.D.; Krinsky, M.L.; Fukami, N.; Jain, R.; Appalaneni, V.; Anderson, M.A.; Ben-Menachem, T.; Chandrasekhara, V.; et al. Endoscopic mucosal tissue sampling. Gastrointest. Endosc. 2013, 78, 216–224. [Google Scholar] [CrossRef] [PubMed]
  16. Sharma, P.; Dent, J.; Armstrong, D.; Bergman, J.J.; Gossner, L.; Hoshihara, Y.; Jankowski, J.A.; Junghard, O.; Lundell, L.; Tytgat, G.N.; et al. The Development and Validation of an Endoscopic Grading System for Barrett’s Esophagus: The Prague C & M Criteria. Gastroenterology 2006, 131, 1392–1399. [Google Scholar] [CrossRef]
  17. Kusano, C.; Singh, R.; Lee, Y.Y.; Soh, Y.S.A.; Sharma, P.; Ho, K.; Gotoda, T. Global variations in diagnostic guidelines for Barrett’s esophagus. Dig. Endosc. 2022, 34, 1320–1328. [Google Scholar] [CrossRef]
  18. Milosavljevic, T.; Popovic, D.; Zec, S.; Krstic, M.; Mijac, D. Accuracy and Pitfalls in the Assessment of Early Gastrointestinal Lesions. Dig. Dis. 2019, 37, 364–373. [Google Scholar] [CrossRef]
  19. Nagao, S.; Tani, Y.; Shibata, J.; Tsuji, Y.; Tada, T.; Ishihara, R.; Fujishiro, M. Implementation of artificial intelligence in upper gastrointestinal endoscopy. DEN Open 2022, 2, e72. [Google Scholar] [CrossRef]
  20. Smyth, E.C.; Lagergren, J.; Fitzgerald, R.C.; Lordick, F.; Shah, M.A.; Lagergren, P.; Cunningham, D. Oesophageal cancer. Nat. Rev. Dis. Primers 2017, 3, 17048. [Google Scholar] [CrossRef]
  21. van der Sommen, F.; Zinger, S.; Curvers, W.L.; Bisschops, R.; Pech, O.; Weusten, B.L.A.M.; Bergman, J.J.G.H.M.; de With, P.H.N.; Schoon, E.J. Computer-aided detection of early neoplastic lesions in Barrett’s esophagus. Endoscopy 2016, 48, 617–624. [Google Scholar] [CrossRef] [PubMed]
  22. Lui, T.K.; Tsui, V.W.; Leung, W.K. Accuracy of artificial intelligence–assisted detection of upper GI lesions: A systematic review and meta-analysis. Gastrointest. Endosc. 2020, 92, 821–830.e9. [Google Scholar] [CrossRef]
  23. Fockens, K.N.; Jukema, J.B.; Boers, T.; Jong, M.R.; van der Putten, J.A.; Pouw, R.E.; Weusten, B.L.A.M.; Herrero, L.A.; Houben, M.H.M.G.; Nagengast, W.B.; et al. Towards a robust and compact deep learning system for primary detection of early Barrett’s neoplasia: Initial image-based results of training on a multi-center retrospectively collected data set. United Eur. Gastroenterol. J. 2023, 11, 324–336. [Google Scholar] [CrossRef] [PubMed]
  24. Abdelrahim, M.; Saiko, M.; Maeda, N.; Hossain, E.; Alkandari, A.; Subramaniam, S.; Parra-Blanco, A.; Sanchez-Yague, A.; Coron, E.; Repici, A.; et al. Development and validation of artificial neural networks model for detection of Barrett’s neoplasia: A multicenter pragmatic nonrandomized trial (with video). Gastrointest. Endosc. 2023, 97, 422–434. [Google Scholar] [CrossRef] [PubMed]
  25. Struyvenberg, M.R.; de Groof, A.J.; van der Putten, J.; van der Sommen, F.; Baldaque-Silva, F.; Omae, M.; Pouw, R.; Bisschops, R.; Vieth, M.; Schoon, E.J.; et al. A computer-assisted algorithm for narrow-band imaging-based tissue characterization in Barrett’s esophagus. Gastrointest. Endosc. 2021, 93, 89–98. [Google Scholar] [CrossRef] [PubMed]
  26. Swager, A.-F.; van der Sommen, F.; Klomp, S.R.; Zinger, S.; Meijer, S.L.; Schoon, E.J.; Bergman, J.J.; de With, P.H.; Curvers, W.L. Computer-aided detection of early Barrett’s neoplasia using volumetric laser endomicroscopy. Gastrointest. Endosc. 2017, 86, 839–846. [Google Scholar] [CrossRef] [PubMed]
  27. Visaggi, P.; Barberio, B.; Gregori, D.; Azzolina, D.; Martinato, M.; Hassan, C.; Sharma, P.; Savarino, E.; de Bortoli, N. Systematic review with meta-analysis: Artificial intelligence in the diagnosis of oesophageal diseases. Aliment. Pharmacol. Ther. 2022, 55, 528–540. [Google Scholar] [CrossRef]
  28. Hashimoto, R.; Requa, J.; Dao, T.; Ninh, A.; Tran, E.; Mai, D.; Lugo, M.; Chehade, N.E.-H.; Chang, K.J.; Karnes, W.E.; et al. Artificial intelligence using convolutional neural networks for real-time detection of early esophageal neoplasia in Barrett’s esophagus (with video). Gastrointest. Endosc. 2020, 91, 1264–1271.e1. [Google Scholar] [CrossRef]
  29. Abnet, C.C.; Arnold, M.; Wei, W.-Q. Epidemiology of Esophageal Squamous Cell Carcinoma. Gastroenterology 2018, 154, 360–373. [Google Scholar] [CrossRef]
  30. Meves, V.; Behrens, A.; Pohl, J. Diagnostics and Early Diagnosis of Esophageal Cancer. Visc. Med. 2015, 31, 315–318. [Google Scholar] [CrossRef]
  31. Morita, F.H.A.; Bernardo, W.M.; Ide, E.; Rocha, R.S.P.; Aquino, J.C.M.; Minata, M.K.; Yamazaki, K.; Marques, S.B.; Sakai, P.; de Moura, E.G.H. Narrow band imaging versus lugol chromoendoscopy to diagnose squamous cell carcinoma of the esophagus: A systematic review and meta-analysis. BMC Cancer 2017, 17, 54. [Google Scholar] [CrossRef] [PubMed]
  32. Codipilly, D.C.; Qin, Y.; Dawsey, S.M.; Kisiel, J.; Topazian, M.; Ahlquist, D.; Iyer, P.G. Screening for esophageal squamous cell carcinoma: Recent advances. Gastrointest. Endosc. 2018, 88, 413–426. [Google Scholar] [CrossRef] [PubMed]
  33. Kodashima, S.; Fujishiro, M.; Takubo, K.; Kammori, M.; Nomura, S.; Kakushima, N.; Muraki, Y.; Goto, O.; Ono, S.; Kaminishi, M.; et al. Ex vivo pilot study using computed analysis of endo-cytoscopic images to differentiate normal and malignant squamous cell epithelia in the oesophagus. Dig. Liver Dis. 2007, 39, 762–766. [Google Scholar] [CrossRef] [PubMed]
  34. Horie, Y.; Yoshio, T.; Aoyama, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Hirasawa, T.; Tsuchida, T.; Ozawa, T.; Ishihara, S.; et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest. Endosc. 2019, 89, 25–32. [Google Scholar] [CrossRef]
  35. Feng, Y.; Liang, Y.; Li, P.; Long, Q.; Song, J.; Li, M.; Wang, X.; Cheng, C.-E.; Zhao, K.; Ma, J.; et al. Artificial intelligence assisted detection of superficial esophageal squamous cell carcinoma in white-light endoscopic images by using a generalized system. Discov. Oncol. 2023, 14, 73. [Google Scholar] [CrossRef]
  36. Wang, S.X.; Ke, Y.; Liu, Y.M.; Liu, S.Y.; Song, S.B.; He, S.; Zhang, Y.M.; Dou, L.Z.; Liu, Y.; Liu, X.D.; et al. Establishment and clinical validation of an artificial intelligence YOLOv51 model for the detection of precancerous lesions and superficial esophageal cancer in endoscopic procedure. Chin. J. Oncol. 2022, 44, 395–401. [Google Scholar] [CrossRef]
  37. Shimamoto, Y.; Ishihara, R.; Kato, Y.; Shoji, A.; Inoue, T.; Matsueda, K.; Miyake, M.; Waki, K.; Kono, M.; Fukuda, H.; et al. Real-time assessment of video images for esophageal squamous cell carcinoma invasion depth using artificial intelligence. J. Gastroenterol. 2020, 55, 1037–1045. [Google Scholar] [CrossRef]
  38. Yuan, X.; Guo, L.; Liu, W.; Zeng, X.; Mou, Y.; Bai, S.; Pan, Z.; Zhang, T.; Pu, W.; Wen, C.; et al. Artificial intelligence for detecting superficial esophageal squamous cell carcinoma under multiple endoscopic imaging modalities: A multicenter study. J. Gastroenterol. Hepatol. 2022, 37, 169–178. [Google Scholar] [CrossRef]
  39. Ohmori, M.; Ishihara, R.; Aoyama, K.; Nakagawa, K.; Iwagami, H.; Matsuura, N.; Shichijo, S.; Yamamoto, K.; Nagaike, K.; Nakahara, M.; et al. Endoscopic detection and differentiation of esophageal lesions using a deep neural network. Gastrointest. Endosc. 2020, 91, 301–309.e1. [Google Scholar] [CrossRef]
  40. Guo, L.; Xiao, X.; Wu, C.; Zeng, X.; Zhang, Y.; Du, J.; Bai, S.; Xie, J.; Zhang, Z.; Li, Y.; et al. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest. Endosc. 2020, 91, 41–51. [Google Scholar] [CrossRef]
  41. Nakagawa, K.; Ishihara, R.; Aoyama, K.; Ohmori, M.; Nakahira, H.; Matsuura, N.; Shichijo, S.; Nishida, T.; Yamada, T.; Yamaguchi, S.; et al. Classification for invasion depth of esophageal squamous cell carcinoma using a deep neural network compared with experienced endoscopists. Gastrointest. Endosc. 2019, 90, 407–414. [Google Scholar] [CrossRef] [PubMed]
  42. Milano, A.F. 20-Year Comparative Survival and Mortality of Cancer of the Stomach by Age, Sex, Race, Stage, Grade, Cohort Entry Time-Period, Disease Duration & Selected ICD-O-3 Oncologic Phenotypes: A Systematic Review of 157,258 Cases for Diagnosis Years 1973–2014: (SEER*Stat 8.3.4). J. Insur. Med. 2019, 48, 5–23. [Google Scholar] [CrossRef] [PubMed]
  43. Jin, P.; Ji, X.; Kang, W.; Li, Y.; Liu, H.; Ma, F.; Ma, S.; Hu, H.; Li, W.; Tian, Y. Artificial intelligence in gastric cancer: A systematic review. J. Cancer Res. Clin. Oncol. 2020, 146, 2339–2350. [Google Scholar] [CrossRef] [PubMed]
  44. Yang, K.; Lu, L.; Liu, H.; Wang, X.; Gao, Y.; Yang, L.; Li, Y.; Su, M.; Jin, M.; Khan, S. A comprehensive update on early gastric cancer: Defining terms, etiology, and alarming risk factors. Expert Rev. Gastroenterol. Hepatol. 2021, 15, 255–273. [Google Scholar] [CrossRef] [PubMed]
  45. Hartgrink, H.H.; Jansen, E.P.; van Grieken, N.C.; van de Velde, C.J. Gastric cancer. Lancet 2009, 374, 477–490. [Google Scholar] [CrossRef]
  46. Young, E.; Philpott, H.; Singh, R. Endoscopic diagnosis and treatment of gastric dysplasia and early cancer: Current evidence and what the future may hold. World J. Gastroenterol. 2021, 27, 5126–5151. [Google Scholar] [CrossRef]
  47. Yao, K.; Uedo, N.; Kamada, T.; Hirasawa, T.; Nagahama, T.; Yoshinaga, S.; Oka, M.; Inoue, K.; Mabe, K.; Yao, T.; et al. Guidelines for endoscopic diagnosis of early gastric cancer. Dig. Endosc. 2020, 32, 663–698. [Google Scholar] [CrossRef]
  48. Waddingham, W.; Nieuwenburg, S.A.V.; Carlson, S.; Rodriguez-Justo, M.; Spaander, M.; Kuipers, E.J.; Jansen, M.; Graham, D.G.; Banks, M. Recent advances in the detection and management of early gastric cancer and its precursors. Front. Gastroenterol. 2020, 12, 322–331. [Google Scholar] [CrossRef]
  49. Miyaki, R.; Yoshida, S.; Tanaka, S.; Kominami, Y.; Sanomura, Y.; Matsuo, T.; Oka, S.; Raytchev, B.; Tamaki, T.; Koide, T.; et al. Quantitative identification of mucosal gastric cancer under magnifying endoscopy with flexible spectral imaging color enhancement. J. Gastroenterol. Hepatol. 2013, 28, 841–847. [Google Scholar] [CrossRef]
  50. Luo, H.; Xu, G.; Li, C.; He, L.; Luo, L.; Wang, Z.; Jing, B.; Deng, Y.; Jin, Y.; Li, Y.; et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: A multicentre, case-control, diagnostic study. Lancet Oncol. 2019, 20, 1645–1654. [Google Scholar] [CrossRef]
  51. Kubota, K.; Kuroda, J.; Yoshida, M.; Ohta, K.; Kitajima, M. Medical image analysis: Computer-aided diagnosis of gastric cancer invasion on endoscopic images. Surg. Endosc. 2021, 26, 1485–1489. [Google Scholar] [CrossRef] [PubMed]
  52. Niikura, R.; Aoki, T.; Shichijo, S.; Yamada, A.; Kawahara, T.; Kato, Y.; Hirata, Y.; Hayakawa, Y.; Suzuki, N.; Ochi, M.; et al. Artificial intelligence versus expert endoscopists for diagnosis of gastric cancer in patients who have undergone upper gastrointestinal endoscopy. Endoscopy 2022, 54, 780–784. [Google Scholar] [CrossRef] [PubMed]
  53. Costa, L.C.d.S.; Santos, J.O.M.; Miyajima, N.T.; Montes, C.G.; Andreollo, N.A.; Lopes, L.R. Efficacy analysis of endoscopic submucosal dissection for the early gastric cancer and precancerous lesions. Arq. Gastroenterol. 2022, 59, 421–427. [Google Scholar] [CrossRef] [PubMed]
  54. Niu, P.-H.; Zhao, L.-L.; Wu, H.-L.; Zhao, D.-B.; Chen, Y.-T. Artificial intelligence in gastric cancer: Application and future perspectives. World J. Gastroenterol. 2020, 26, 5408–5419. [Google Scholar] [CrossRef] [PubMed]
  55. Tang, D.; Wang, L.; Ling, T.; Lv, Y.; Ni, M.; Zhan, Q.; Fu, Y.; Zhuang, D.; Guo, H.; Dou, X.; et al. Development and validation of a real-time artificial intelligence-assisted system for detecting early gastric cancer: A multicentre retrospective diagnostic study. EBioMedicine 2020, 62, 103146. [Google Scholar] [CrossRef] [PubMed]
  56. Ikenoyama, Y.; Hirasawa, T.; Ishioka, M.; Namikawa, K.; Yoshimizu, S.; Horiuchi, Y.; Ishiyama, A.; Yoshio, T.; Tsuchida, T.; Takeuchi, Y.; et al. Detecting early gastric cancer: Comparison between the diagnostic ability of convolutional neural networks and endoscopists. Dig. Endosc. 2021, 33, 141–150. [Google Scholar] [CrossRef]
  57. Nagao, S.; Tsuji, Y.; Sakaguchi, Y.; Takahashi, Y.; Minatsuki, C.; Niimi, K.; Yamashita, H.; Yamamichi, N.; Seto, Y.; Tada, T.; et al. Highly accurate artificial intelligence systems to predict the invasion depth of gastric cancer: Efficacy of conventional white-light imaging, nonmagnifying narrow-band imaging, and indigo-carmine dye contrast imaging. Gastrointest. Endosc. 2020, 92, 866–873.e1. [Google Scholar] [CrossRef]
  58. Kanesaka, T.; Lee, T.-C.; Uedo, N.; Lin, K.-P.; Chen, H.-Z.; Lee, J.-Y.; Wang, H.-P.; Chang, H.-T. Computer-aided diagnosis for identifying and delineating early gastric cancers in magnifying narrow-band imaging. Gastrointest. Endosc. 2018, 87, 1339–1344. [Google Scholar] [CrossRef]
  59. Zhu, Y.; Wang, Q.-C.; Xu, M.-D.; Zhang, Z.; Cheng, J.; Zhong, Y.-S.; Zhang, Y.-Q.; Chen, W.-F.; Yao, L.-Q.; Zhou, P.-H.; et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest. Endosc. 2019, 89, 806–815.e1. [Google Scholar] [CrossRef]
  60. Yang, H.; Hu, B. Diagnosis of Helicobacter pylori Infection and Recent Advances. Diagnostics 2021, 11, 1305. [Google Scholar] [CrossRef]
  61. Bang, C.S.; Lee, J.J.; Baik, G.H. Artificial Intelligence for the Prediction of Helicobacter Pylori Infection in Endoscopic Images: Systematic Review and Meta-Analysis of Diagnostic Test Accuracy. J. Med. Internet Res. 2020, 22, e21983. [Google Scholar] [CrossRef] [PubMed]
  62. Pannala, R.; Krishnan, K.; Melson, J.; Parsi, M.A.; Schulman, A.R.; Sullivan, S.; Trikudanathan, G.; Trindade, A.J.; Watson, R.R.; Maple, J.T.; et al. Artificial intelligence in gastrointestinal endoscopy. VideoGIE 2020, 5, 598–613. [Google Scholar] [CrossRef] [PubMed]
  63. Shichijo, S.; Nomura, S.; Aoyama, K.; Nishikawa, Y.; Miura, M.; Shinagawa, T.; Takiyama, H.; Tanimoto, T.; Ishihara, S.; Matsuo, K.; et al. Application of Convolutional Neural Networks in the Diagnosis of Helicobacter pylori Infection Based on Endoscopic Images. EBioMedicine 2017, 25, 106–111. [Google Scholar] [CrossRef] [PubMed]
  64. Zheng, W.; Zhang, X.; Kim, J.J.; Zhu, X.; Ye, G.; Ye, B.; Wang, J.; Luo, S.; Li, J.; Yu, T.; et al. High Accuracy of Convolutional Neural Network for Evaluation of Helicobacter pylori Infection Based on Endoscopic Images: Preliminary Experience. Clin. Transl. Gastroenterol. 2019, 10, e00109. [Google Scholar] [CrossRef] [PubMed]
  65. Bordin, D.S.; Voynovan, I.N.; Andreev, D.N.; Maev, I.V. Current Helicobacter pylori Diagnostics. Diagnostics 2021, 11, 1458. [Google Scholar] [CrossRef]
  66. Nakashima, H.; Kawahira, H.; Kawachi, H.; Sakaki, N. Artificial intelligence diagnosis of Helicobacter pylori infection using blue laser imaging-bright and linked color imaging: A single-center prospective study. Ann. Gastroenterol. 2018, 31, 462–468. [Google Scholar] [CrossRef]
  67. Nakashima, H.; Kawahira, H.; Sakaki, N. Endoscopic three-categorical diagnosis of Helicobacter pylori infection using linked color imaging and deep learning: A single-center prospective study (with video). Gastric Cancer 2020, 23, 1033–1040. [Google Scholar] [CrossRef]
  68. Seo, J.Y.; Hong, H.; Ryu, W.-S.; Kim, D.; Chun, J.; Kwak, M.-S. Development and validation of a convolutional neural network model for diagnosing Helicobacter pylori infections with endoscopic images: A multicenter study. Gastrointest. Endosc. 2023, 97, 880–888.e2. [Google Scholar] [CrossRef]
  69. Li, Y.-D.; Wang, H.-G.; Chen, S.-S.; Yu, J.-P.; Ruan, R.-W.; Jin, C.-H.; Chen, M.; Jin, J.-Y.; Wang, S. Assessment of Helicobacter pylori infection by deep learning based on endoscopic videos in real time. Dig. Liver Dis. 2023, 55, 649–654. [Google Scholar] [CrossRef]
  70. Yasuda, T.; Hiroyasu, T.; Hiwa, S.; Okada, Y.; Hayashi, S.; Nakahata, Y.; Yasuda, Y.; Omatsu, T.; Obora, A.; Kojima, T.; et al. Potential of automatic diagnosis system with linked color imaging for diagnosis of Helicobacter pylori infection. Dig. Endosc. 2020, 32, 373–381. [Google Scholar] [CrossRef]
  71. Itoh, T.; Kawahira, H.; Nakashima, H.; Yata, N. Deep learning analyzes Helicobacter pylori infection by upper gastrointestinal endoscopy images. Endosc. Int. Open 2018, 6, E139–E144. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The schematic structure of the CNN system.
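As a rough intuition for the convolutional layers sketched in Figure 1: each layer slides small kernels over the image and responds to local patterns such as edges or texture, and during training the kernel weights are learned from labeled endoscopic images. A minimal pure-Python sketch of the single elementary operation involved (the image and kernel values here are illustrative, not learned):

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution: the elementary operation repeated,
    with learned kernels, inside each convolutional layer of a CNN."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)]
            for i in range(oh)]

# A hand-written vertical-edge kernel applied to a toy 4x4 "image"
# whose right half is brighter than its left half.
img = [[0, 0, 1, 1]] * 4
edge_kernel = [[-1, 1]]  # responds where intensity jumps left-to-right
feature_map = conv2d(img, edge_kernel)
print(feature_map)  # non-zero only at the brightness edge
```

A real CNN stacks many such layers (with nonlinearities and pooling between them) and ends in a classifier that maps the resulting feature maps to a diagnostic label, e.g., neoplastic versus non-neoplastic.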
Table 1. Summary of selected studies on the application of artificial intelligence in the diagnosis of neoplastic Barrett’s esophagus.

| Authors (Reference) | Year | Country | Design | Dichotomous Variable | Endoscopic Method | AI Method | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|---|
| van der Sommen [21] | 2016 | Netherlands | Retrospective | neoplastic/non-neoplastic BE | WLI | SVM | N/A | 86.0 | 87.0 |
| de Groof [12] | 2020 | Netherlands | Retrospective | neoplastic/non-neoplastic BE | WLI | ResNet/UNet (CNN) | 89.0 | 90.0 | 88.0 |
| Fockens [23] | 2023 | Netherlands | Prospective (multicentric) | neoplastic/non-neoplastic BE | WLI | CNN | N/A | 100.0 | 66.0 |
| Abdelrahim [24] | 2023 | UK | Retrospective (multicentric) | neoplastic/non-neoplastic BE | WLI | CNN | 92.0 | 93.8 | 90.7 |
| Struyvenberg [25] | 2021 | Netherlands | Retrospective (multicentric) | neoplastic/non-neoplastic BE | NBI | ResNet/UNet (CNN) | 84.0 | 88.0 | 78.0 |
| de Groof [9] | 2020 | Netherlands | Prospective | neoplastic/non-neoplastic BE | WLI | ResNet/UNet (CNN) | 90.0 | 91.0 | 89.0 |
| Ebigbo [10] | 2020 | Germany | Prospective | neoplastic/non-neoplastic BE | WLI | CNN | 89.9 | 83.7 | 100.0 |
| Hashimoto [28] | 2019 | USA | Retrospective | neoplastic/non-neoplastic BE | WLI | CNN | N/A | 98.6 | 88.8 |
| | | | | | NBI | | N/A | 92.4 | 99.2 |
| Swager [26] | 2017 | Netherlands | Retrospective | neoplastic/non-neoplastic BE | VLE | CNN | N/A | | |

N/A—not available; AI—artificial intelligence; BE—Barrett’s esophagus; WLI—white-light imaging; SVM—support vector machine; CNN—convolutional neural network; UK—United Kingdom; NBI—narrow-band imaging; USA—United States of America; VLE—volumetric laser endomicroscopy.
Table 2. Summary of selected studies on the application of artificial intelligence in the diagnosis of esophageal squamous cell carcinoma.

| Authors (Reference) | Year | Country | Design | Dichotomous Variable | Endoscopic Method | AI Method | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|---|
| Horie [34] | 2019 | Japan | Retrospective | cancer/non-cancer | WLI | CNN | N/A | 98.0 * | 79.0 ** |
| Feng [35] | 2023 | China | Retrospective | cancer/non-cancer | WLI | CNN | 88.3 | 90.1 | 94.3 |
| Wang [36] | 2023 | China | Retrospective | cancer/non-cancer | WLI | YOLOv5l | 96.9 | 87.9 | 98.3 |
| | | | | | NBI | | 98.6 | 89.3 | 99.5 |
| | | | | | LCE | | 93.0 | 77.5 | 98.0 |
| Shimamoto [37] | 2020 | Japan | Retrospective | depth of invasion | WLI | CNN | 87.3 | 50.0 | 98.7 |
| | | | | | ME | | 89.2 | 70.8 | 94.9 |
| Yuan [38] | 2022 | China | Retrospective (multicentric) | superficial carcinoma/non-carcinoma | WLI | CNN | 86.6 | 93.3 | 78.5 |
| | | | | | non-ME NBI | | 91.7 | 98.0 | 85.1 |
| | | | | | ME-NBI | | 96.5 | 99.4 | 89.0 |
| | | | | | Iodine staining | | 92.2 | 96.7 | 86.9 |
| Ohmori [39] | 2020 | Japan | Retrospective | cancer/non-cancer | WLI | CNN | 81.0 | 90.0 | 76.0 |
| | | | | | NBI/BLI | | 77.0 | 100.0 | 63.0 |
| | | | | | ME | | 77.0 | 98.0 | 56.0 |
| Guo [40] | 2019 | India | Retrospective | cancer/non-cancer | NBI | SegNet | N/A | 98.0 | 95.0 |
| Nakagawa [41] | 2019 | Japan | Retrospective | cancer/non-cancer | WLI | Single Shot MultiBox | 91.0 | 90.1 | 95.8 |

* For each case; ** for each image. N/A—not available; AI—artificial intelligence; WLI—white-light imaging; CNN—convolutional neural network; YOLOv5l—model “You Only Look Once”, large extension; NBI—narrow-band imaging; LCE—Lugol chromoendoscopy; ME—magnifying endoscopy; BLI—blue laser imaging.
Table 3. Summary of selected studies on the application of artificial intelligence in the diagnosis of early gastric carcinoma.

| Authors (Reference) | Year | Country | Design | Dichotomous Variable | Endoscopic Method | AI Method | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|---|
| Miyaki [49] | 2013 | Japan | Retrospective | cancer/non-cancer | ME-FICE | SVM | 85.9 | 84.8 | 87.0 |
| Luo [50] | 2019 | China | Retrospective (multicentric) | cancer/non-cancer | WLI | CNN | 92.7 | 94.6 | 91.3 |
| Tang [55] | 2020 | China | Retrospective | cancer/non-cancer | WLI | CNN | 85.1–91.2 | 85.9–95.5 | 81.7–90.3 |
| Ikenoyama [56] | 2021 | Japan | Retrospective | cancer/non-cancer | WLI, ICE, NBI | CNN | N/A | 58.4 | 87.3 |
| Nagao [57] | 2020 | Japan | Retrospective | depth of invasion | WLI | CNN | 94.4 | 84.4 | 99.3 |
| | | | | | NBI | | 94.3 | 75.0 | 100.0 |
| | | | | | ICE | | 95.5 | 87.5 | 100.0 |
| Kanesaka [58] | 2018 | Taiwan | Retrospective | cancer/non-cancer | ME-NBI | SVM | 96.3 | 96.7 | 95.0 |
| Zhu [59] | 2018 | USA | Retrospective | depth of invasion | WLI | CNN | 89.1 | 76.4 | 95.5 |

N/A—not available; AI—artificial intelligence; ME—magnifying endoscopy; FICE—flexible spectral imaging color enhancement; WLI—white-light imaging; CNN—convolutional neural network; ICE—indigo-carmine chromoendoscopy; SVM—support vector machine; USA—United States of America.
Table 4. Summary of selected studies on the application of artificial intelligence in the diagnosis of H. pylori.

| Authors (Reference) | Year | Country | Design | Dichotomous Variable | Endoscopic Method | AI Method | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|---|---|---|---|---|
| Shichijo [63] | 2017 | Japan | Retrospective | presence/absence of H. pylori | WLI | CNN | 87.7 | 88.9 | 87.4 |
| Zheng [64] | 2019 | China | Retrospective | presence/absence of H. pylori | WLI | CNN (ResNet-50) | 93.8 * | 91.6 * | 98.6 * |
| Seo [68] | 2023 | Korea | Retrospective (multicentric) | presence/absence of H. pylori | WLI | CNN | 94.0 ** | 96.0 ** | 90.0 ** |
| | | | | | | | 88.0 *** | 92.0 *** | 79.0 *** |
| Nakashima [67] | 2020 | Japan | Prospective | presence/absence of H. pylori | WLI | CNN | 77.5 | 60.0 | 86.2 |
| | | | | | LCI | | 82.5 | 62.5 | 92.5 |
| Li [69] | 2023 | China | Retrospective | presence/absence of H. pylori | WLI | CNN | 84.0 | 82.0 | 86.0 |
| Yasuda [70] | 2019 | Japan | Retrospective | presence/absence of H. pylori | LCI | SVM | 87.6 | 90.5 | 85.7 |
| Itoh [71] | 2019 | Japan | Prospective | presence/absence of H. pylori | WLI | CNN | N/A | 86.7 | 86.7 |
| Nakashima [66] | 2018 | Japan | Prospective | presence/absence of H. pylori | WLI | CNN | N/A | 66.7 | 60.0 |
| | | | | | BLI-bright | | N/A | 96.7 | 86.7 |
| | | | | | LCI | | N/A | 96.7 | 83.3 |

* For multiple images; ** for Korean; *** for non-Korean. N/A—not available; AI—artificial intelligence; WLI—white-light imaging; CNN—convolutional neural network; LCI—linked color imaging; SVM—support vector machine; BLI—blue laser imaging.