Article

Artificial Intelligence-Based Algorithm for Stent Coverage Assessments

by Joanna Fluder-Wlodarczyk 1,*,†, Mikhail Darakhovich 2,†, Zofia Schneider 3, Magda Roleder-Dylewska 1, Magdalena Dobrolińska 1, Tomasz Pawłowski 1, Wojciech Wojakowski 1, Pawel Gasior 1 and Elżbieta Pociask 2

1 Division of Cardiology and Structural Heart Diseases, Medical University of Silesia in Katowice, 40-635 Katowice, Poland
2 Department of Biocybernetics and Biomedical Engineering, AGH University of Kraków, 30-059 Kraków, Poland
3 Faculty of Geology, Geophysics and Environmental Protection, AGH University of Kraków, 30-059 Kraków, Poland
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
J. Pers. Med. 2025, 15(4), 151; https://doi.org/10.3390/jpm15040151
Submission received: 17 February 2025 / Revised: 31 March 2025 / Accepted: 8 April 2025 / Published: 11 April 2025
(This article belongs to the Special Issue New Perspectives and Current Challenges in Myocardial Infarction)

Abstract

Background: Neointimal formation after stent implantation is an important prognostic factor since delayed healing may lead to stent thrombosis. In vivo, optical coherence tomography (OCT) can most precisely assess stent strut coverage. Since analyzing neointimal coverage is time-consuming, artificial intelligence (AI) may offer valuable assistance. This study presents the preliminary results of the AI-based tool’s performance in detecting and categorizing struts as covered and uncovered. Methods: The algorithm was developed using the YOLO11 (You Only Look Once) neural networks. The first step was preprocessing, then data augmentation techniques were implemented, and the model was trained. Twenty OCT pullbacks were used during model training, and two OCT pullbacks were used in the final validation. Results: The presented tool’s performance was validated against two analysts’ consensus. Both analysts showed moderate intraobserver agreement (κ = 0.57 for analyst 1 and κ = 0.533 for analyst 2) and fair agreement with each other (κ = 0.389). The algorithm’s detection of struts was satisfactory (a 92% positive predictive value (PPV) and a 90% true positive rate (TPR)) and was more accurate in recognizing covered struts (an 81% PPV and an 85% TPR) than uncovered struts (a 73% PPV and a 60% TPR). The agreement was κ = 0.444. Conclusions: The initial results demonstrated a good detection of struts with a more challenging uncovered strut classification. Further clinical studies with a larger sample size are needed to improve the proposed tool.

1. Introduction

The standard treatment for patients with symptomatic chronic coronary syndrome (CCS) and acute coronary syndrome (ACS) is percutaneous coronary intervention (PCI) with drug-eluting stent (DES) implantation. Neointimal formation following stent deployment is an important prognostic factor for these procedures. Excessive endothelial growth can lead to restenosis, requiring subsequent interventions [1], while delayed healing may lead to stent thrombosis (ST) [2,3], which is associated with significant morbidity and mortality [4]. The imaging modality best suited for in vivo assessment of stent strut neointimal coverage is optical coherence tomography (OCT). OCT provides high-resolution cross-sectional images of both native coronary arteries and implanted stents, enabling visualization of microscopic structures [5,6,7]. However, analysis of neointimal coverage is time-consuming and requires significant interpretation skills preceded by extensive education and training, making it impractical in daily clinical practice and challenging and expensive in experimental settings. Currently, most OCT analyses are conducted by CoreLabs. These limitations have created an urgent need for automated strut-level classification algorithms [8].
Artificial intelligence (AI) has made remarkable progress in recent years, including applications in healthcare. Its use for repetitive and tedious tasks is particularly appealing. Deep learning, one of the AI domains, uses artificial neural networks (ANNs) to imitate the human brain in processing data and recognizing patterns. Different types of ANNs are available depending on the specific problem. Convolutional neural networks (CNNs), such as the YOLO (You Only Look Once) family, are especially effective at finding patterns in images and recognizing objects, classes, and categories [9].
Previously, we presented an algorithm developed by engineers from the AGH University of Kraków for the automated quantitative analysis of vessel lumen and stent struts at the early stages of vessel healing in intravascular OCT imaging. The algorithm demonstrated excellent agreement with a manual estimation of the lumen and stent area. However, the quantification of strut coverage was more challenging [10]. We have aimed to improve the algorithm using AI to enhance its performance in classifying struts. This study presents the preliminary results of the algorithm’s performance in detecting and categorizing struts as covered and uncovered.

2. Materials and Methods

2.1. Study Description

We selected 22 suitable OCT examinations. We included only OCTs with visible stents in the early stages of healing. Of these, 19 pullbacks were from patients who underwent an OCT imaging follow-up one month (on average 32 ± 3 days) after an OCT-guided stent deployment. The remaining three OCT pullbacks were performed for clinical indications at an average of 97 days (75, 149, and 66 days) after PCI. Poor-quality pullbacks were excluded, especially those with a significant amount of residual blood. All procedures were performed in the Division of Cardiology and Structural Heart Diseases, Medical University of Silesia in Katowice. All patients provided written informed consent for the OCT examination. Due to the study’s retrospective nature and lack of interference in the diagnostic and therapeutic decision-making processes, no permission was required from the Institutional Review Board and Bioethics Committee, but the Local Bioethical Committee approved OCT imaging one month following DES implantation. The data were anonymized prior to analysis. OCT was performed using the ILUMIEN OPTIS system (Abbott Vascular, Santa Clara, CA, USA). The examination included the entire stented segment as well as 5 mm proximally and distally to the stent. OCT acquisition was preceded by a contrast injection, which triggered an automatic pullback. Patients received unfractionated heparin prior to the procedure to obtain an activated clotting time (ACT) of >250 s. The analyses were performed at intervals of 0.2 mm. Patients’ characteristics are summarized in Table 1.
The dataset was divided into three subsets: the training set (17 pullbacks), the validation set (3 pullbacks), and the testing set (2 pullbacks—one 75 days and the second 28 days after stent implantation). Each pullback represents a sequence of OCT images along a coronary artery segment. Dividing the data in this manner ensures that the model is trained and validated on distinct anatomical regions, reducing the risk of overfitting and providing a robust performance evaluation. The final validation and performance metrics were assessed on the testing dataset, which was not used during training or validation.
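The pullback-level split described above can be sketched as follows. This is a minimal illustration, not the authors' actual code; the function name and pullback identifiers are hypothetical. The key point it demonstrates is that whole pullbacks, never individual frames, are assigned to a subset, which prevents frames from one artery segment leaking between training and evaluation.

```python
import random

def split_by_pullback(pullback_ids, n_train=17, n_val=3, n_test=2, seed=42):
    """Assign whole pullbacks (not individual frames) to train/validation/test
    subsets, so frames from the same artery segment never leak across subsets."""
    ids = list(pullback_ids)
    assert len(ids) == n_train + n_val + n_test, "unexpected number of pullbacks"
    rng = random.Random(seed)
    rng.shuffle(ids)
    return (set(ids[:n_train]),
            set(ids[n_train:n_train + n_val]),
            set(ids[n_train + n_val:]))

# Hypothetical identifiers for the 22 pullbacks used in this study:
train, val, test = split_by_pullback([f"pb_{i:02d}" for i in range(22)])
```

Because the split is by pullback, every frame of a given coronary segment lands in exactly one subset, which is what makes the reported test metrics a measure of generalization rather than memorization.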
The training set consisted of 2312 frames. Only 10% of the frames had no visible struts. The validation (1017 frames) and testing (644 frames) sets included whole pullbacks. During training, a validation set was used to monitor the model’s performance and adjust the hyperparameters accordingly. We employed the Ultralytics YOLO framework, which relies on a comprehensive fitness metric derived from the mean Average Precision (mAP). This metric, known for its robustness in evaluating object detection performance across varying confidence thresholds, was monitored after each epoch.
Validation metrics guided decisions on learning rate adjustments and potential early stopping to prevent overfitting. The last two pullbacks were used in the testing set. These examinations were reserved exclusively for the final evaluation to assess the model’s generalization to unseen data. No frames were excluded from the testing, reflecting the varying degrees of image quality encountered in daily clinical practice. In each image, all struts were categorized as either covered or uncovered (defined as a strut not covered by tissue at any side or covered only at one side). The annotation process was performed using the Computer Vision Annotation Tool (CVAT.ai) [11]. The training set was divided among four analysts (three cardiologists experienced in OCT analysis and one CoreLab analyst). The validation set was assessed by two analysts (one cardiologist and a CoreLab analyst) and reviewed by them to obtain their consensus. The testing set was initially analyzed by the AI model. Next, two analysts were asked to review and edit the annotations produced by the tool. The assessments were performed twice, with at least a week’s break between analyses. Then, the intraobserver agreement was calculated, and the analysts could again categorize false negative and false positive struts, obtaining a ground truth (GT) for each analyst. The GT refers to the definitive classification of each stent strut as either “covered” or “uncovered”. The next step was a comparison between analysts, achieving interobserver agreement. Both analysts reviewed the false negative and false positive struts, establishing the final label for differing struts. This consensus of two analysts was used as the GT for the algorithm evaluation.

2.2. Data Preprocessing, Model Architecture, and Training

In this study, we explored the application of neural networks in medical imaging to detect metallic struts in coronary arteries using OCT images. We employed the YOLO (You Only Look Once) family of object detection algorithms, specifically testing versions YOLO8 through YOLO11. Among these, YOLO11 demonstrated superior performance in accurately identifying metallic struts.
The first step was preprocessing. The OCT images were originally acquired and processed in Cartesian coordinates with dimensions of 704 × 704 pixels. Pixel intensity values were normalized to the [0, 1] range to facilitate faster convergence during training. Then, data augmentation techniques were implemented to enhance model generalization. These included geometric transformations (rotation, scaling, horizontal and vertical flips) and color transformations.
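The normalization and flip augmentations above can be illustrated with a minimal sketch. This is not the authors' pipeline: real OCT frames would be arrays handled by the training framework, and YOLO-style frameworks transform the bounding-box labels together with the image; the sketch below shows only the pixel-level operations on plain nested lists.

```python
def normalize(frame, max_intensity=255.0):
    """Scale raw pixel intensities to the [0, 1] range for faster convergence."""
    return [[px / max_intensity for px in row] for row in frame]

def hflip(frame):
    """Horizontal flip: reverse each row. Bounding-box annotations must be
    mirrored consistently (object-detection frameworks do this internally)."""
    return [list(reversed(row)) for row in frame]

def vflip(frame):
    """Vertical flip: reverse the row order."""
    return list(reversed(frame))
```

Flips and rotations are attractive for OCT frames because a stent cross-section has no preferred orientation, so the transformed images remain anatomically plausible training examples.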
Next, we utilized the pre-trained YOLO11x model as the foundation for our object detection task. The YOLO11 architecture was developed and provided by Ultralytics and is distributed under the GNU Affero General Public License v3.0 (AGPL-3.0) [12]. This model was initially trained on the COCO2017 dataset, which contains over 200,000 labeled images across 80 object categories; training on COCO2017 provides a robust starting point due to its diverse feature representations and extensive object instances. Leveraging transfer learning, we fine-tuned the YOLO11x model to adapt it to our specific application. The model was customized to detect two classes of metallic struts: covered and uncovered. The final layers of the network were modified to output predictions for the two targeted classes instead of the original 80 COCO2017 classes. The architecture incorporates advanced convolutional layers, residual connections, and attention mechanisms to enhance feature representation and localization precision. Training was conducted on a single NVIDIA A100 GPU with 40 GB of memory.

2.3. Statistical Analysis

In object detection, evaluation metrics such as the PPV (positive predictive value, equivalent to precision), TPR (true positive rate, equivalent to recall), and F1-score are widely used to assess performance:
PPV = TP/(TP + FP)
TPR = TP/(TP + FN)
F1-Score = 2 × (PPV × TPR)/(PPV + TPR)
where TP (True Positive) represents the number of correctly identified stent struts, FP (False Positive) refers to detections incorrectly classified as stent struts, and FN (False Negative) corresponds to actual stent struts that the model failed to detect.
PPV reflects the proportion of correctly detected stent struts among all predicted stent struts, while TPR quantifies the fraction of actual stent struts that were successfully identified. The F1-score serves as a harmonic mean of the PPV and TPR, offering a comprehensive performance measure. This metric is particularly useful when the classes are imbalanced because it accounts for false positives and false negatives, providing a balanced measure of the algorithm’s accuracy.
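The three formulas above translate directly into code. The sketch below is illustrative (the counts in the usage comment are hypothetical, not taken from this study's results):

```python
def detection_metrics(tp, fp, fn):
    """PPV (precision), TPR (recall), and F1-score from strut-level counts:
    PPV = TP/(TP+FP), TPR = TP/(TP+FN), F1 = harmonic mean of PPV and TPR."""
    ppv = tp / (tp + fp)
    tpr = tp / (tp + fn)
    f1 = 2 * ppv * tpr / (ppv + tpr)
    return ppv, tpr, f1

# Hypothetical counts: 90 struts found correctly, 10 spurious, 10 missed.
ppv, tpr, f1 = detection_metrics(tp=90, fp=10, fn=10)
```

Because the F1-score is a harmonic mean, it is dragged down by whichever of PPV or TPR is lower, which is why it is the preferred single summary when covered and uncovered classes are imbalanced.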
The kappa statistic was used to evaluate inter-rater reliability. Specifically, Cohen’s kappa was applied to assess the level of agreement between two analysts. This statistic accounts for agreement occurring beyond chance, providing a more accurate measure of reliability in categorical assessments.

3. Results

A comparison of analyst 1’s assessments demonstrates high consistency in detecting struts (96% PPV and 96% TPR) and recognizing covered struts (91% PPV and 94% TPR) but was lower for uncovered struts (81% PPV and 71% TPR). The intraobserver agreement for analyst 1 was κ = 0.57 (a 95% confidence interval from 0.538 to 0.602). Analyst 2 was slightly less repeatable (86% and 68% PPV and 76% and 86% TPR for detecting covered and uncovered struts, respectively). The overall detection of struts was excellent (94% PPV and 96% TPR). The intraobserver reliability for analyst 2 was κ = 0.533 (a 95% confidence interval from 0.509 to 0.558). The next step was to compare the results of analyst 1 and analyst 2. The interobserver agreement was κ = 0.389 (a 95% confidence interval from 0.364 to 0.414). Compared to analyst 1, analyst 2 demonstrated 68% PPV and 96% TPR in detecting covered struts and 93% PPV and 41% TPR for uncovered struts. A high consistency between both analysts was observed in total strut detection (96% PPV and 97% TPR). These findings are summarized in Figure 1. Analyst 2 agreed better with the GT (κ = 0.693, with a 95% confidence interval from 0.670 to 0.715, 95% and 77% PPV, and 83% and 91% TPR for the detection of covered and uncovered struts, respectively) than analyst 1 (κ = 0.558, with a 95% confidence interval from 0.531 to 0.585, 80% and 98% PPV, and 98% and 51% TPR for covered and uncovered struts).
Figure 2 shows sample frames analyzed by the presented tool, presenting correctly and incorrectly labeled struts. Analyses were rapid and efficient; the average time needed for one pullback evaluation was 30 s. The performance of the presented tool was validated against the GT. The algorithm identified 3439 struts, of which 2440 were classified as covered and 999 as uncovered, while the GT contained 3539 struts (2324 covered and 1215 uncovered). Detection of the struts was very good (92% PPV, 90% TPR, and 91% F1-score). The algorithm was more accurate in recognizing covered struts (81% PPV, 85% TPR, and 83% F1-score) than uncovered struts (73% PPV, 60% TPR, and 66% F1-score). The agreement was κ = 0.444 (a 95% confidence interval from 0.420 to 0.468). Table 2 and Figure 1 present the described data.

4. Discussion

This study aimed to present the preliminary results of the AI-based tool’s performance. The detection of struts was very good (92% PPV, 90% TPR, and 91% F1-score). The classification of covered struts was also satisfactory (81% PPV, 85% TPR, and 83% F1-score). However, the recognition of uncovered struts proved to be more challenging (73% PPV, 60% TPR, and 66% F1-score). Three main reasons are responsible for this outcome. Firstly, more data are required to enhance the classification of uncovered struts. We are collecting new OCT images since we have already used all suitable examinations. Secondly, the classes (covered and uncovered) were unbalanced in the OCT pullbacks used in this study. Most of the struts were covered, which decreased the opportunity for the AI to learn to recognize uncovered struts properly. Lastly, most OCT examinations were from one-month follow-ups, which resulted in many thinly or partly covered struts. Sometimes, the difference between the covered and uncovered labels was very subtle, making correct classification more challenging, as presented in Figure 3.
Naturally, it is difficult to avoid comparing the proposed tool with others published previously. Wang et al. presented a tool that detected malapposed, apposed, and covered struts with sensitivities of 91.0%, 93.0%, and 94.0%, respectively [13]. Ughi et al. reported an algorithm characterized by high Pearson’s correlation coefficients (R = 0.96–0.97) between automated and manual measurements of stent strut apposition and strut coverage [14]. An algorithm presented by Nam et al. used ANNs and was efficient in detecting struts and classifying between covered and uncovered struts (TPR and PPV above 90%) [15]. Fully automated machine learning-based software analysis, shown by Lu et al., provided objective, repeatable, and comprehensive stent analyses. The tool was very effective in detecting and classifying struts (sensitivity/specificity of 94%/90% in detecting uncovered struts [16], and in another study with more challenging cases, 82%/99% [17]). Comparing the currently presented method with the algorithm previously published by us, we see similar effectiveness in strut detection (the previous algorithm achieved a PPV of 89.7% and a TPR of 91.4%, while the new method shows a PPV of 92% and a TPR of 90%) and the classification of covered struts (87% PPV and 80% TPR vs. 81% PPV and 85% TPR for the presented method). The detection of uncovered struts remains challenging (77.3% PPV and 99% TPR for the previous algorithm vs. 73% PPV and 60% TPR for the current one) [10]. However, comparing the current tool with our previous algorithm, it should be emphasized that the validation process in this study was more complex and reliable. Moreover, several factors might be responsible for the differences between various algorithms. Firstly, each algorithm was developed using different techniques and has its own limitations.
Validation was performed on different OCT images, the quality and difficulty of which can vary significantly, impacting the algorithms’ performance. Lastly, analyst errors are possible, as discussed in more detail below. What distinguishes the proposed method is its use of the latest YOLO11 algorithm with an enhanced architecture and other improvements, e.g., better detection of small objects such as stent struts. The tool may facilitate coverage analysis even in its current form, but since the performance is not fully satisfactory, we plan to improve it further. For this, more OCT studies will be needed, preferably from different periods of coronary stent healing. Of course, agreement between analysts in categorizing struts as covered or uncovered also affects the algorithm’s performance.
Analyzing the results from automated tools should also account for the possibility of analyst errors. Some fluctuation will always be present, partly because coverage assessment is not fully standardized yet, and analysts might have slightly different perceptions of the strut coverage status. Considering this, we involved four analysts in the training process in order to expose the tool to some variability. Additionally, validation was based on a two-analyst consensus to minimize further errors resulting from analyst misjudgment. Both analysts showed moderate intraobserver agreement (κ = 0.57 for analyst 1 and κ = 0.533 for analyst 2) and fair agreement with each other (κ = 0.389). These differences likely reflect different levels of expertise. Furthermore, the analysts tended to use different zoom settings, which might affect the results substantially [18]. Analyst 1 zoomed in more, leading to better detection of thinly covered struts, but also sometimes falsely perceived partly covered struts as covered. Several studies have assessed intra- and interobserver variability. Antonsen et al. showed excellent intra- and interobserver agreement (κ = 0.91 and κ = 0.88, respectively) [19], and comparably, Matsumoto et al. (κ = 0.82 and κ = 0.75 for intra- and interobserver variability) [20]. In contrast, Brugaletta et al. reported wide inter- (κ = 0.07–0.69) and intraobserver (κ = 0.37–0.86) agreement for qualitative strut coverage assessment. It is worth emphasizing that two CoreLab analysts and two interventional cardiologists with wide experience in OCT evaluations were involved in the mentioned study [18]. Lu et al. demonstrated interobserver variability with an 80–95% agreement on covered struts and a 60–80% agreement on uncovered struts [16]. These studies demonstrate that OCT is a reliable tool for coverage analysis. However, some mistakes are inevitable, especially for less experienced analysts. We can assume that some of these errors are due to fatigue.
Automatic tools for coverage assessments can address this issue because a comparison of fully manual analysis with software and manual editing showed that software assistance greatly improves interobserver strut classification agreement [17].
Automated algorithms accelerate analysis at the strut level, providing many benefits and creating new possibilities. The most innovative concept is DAPT discontinuation based on arterial healing after stent deployment. This is only feasible for specific patients for whom the benefits of undergoing an invasive procedure outweigh the risks. Data regarding OCT-based DAPT cessation are limited. The PROTECT-OCT study tested a population of cancer patients with recently implanted stents who required premature DAPT cessation due to cancer-related procedures. After diagnostic coronarography and OCT, low-risk individuals were identified (the criteria included >90% of struts covered, >90% apposition of struts, adequate expansion, and absence of in-stent restenosis or intraluminal masses during the OCT examination). These patients safely discontinued DAPT during the cancer-related procedure. However, the study included a limited number of patients (40), which limits its power [21]. Early DAPT discontinuation based on the percentage of uncovered struts at a three-month OCT follow-up in a relatively low-risk population was evaluated in the DETECT-OCT study. Patients were assigned to either three months of DAPT (less than 6% of uncovered struts) or 12 months of DAPT (more than 6% of uncovered struts). Composite events were rare in both groups [22]. A fundamental challenge in planning such studies is the absence of a precise threshold of uncovered struts associated with adverse clinical outcomes. One study determined that 5.9% of uncovered struts was the best cut-off value in predicting major adverse events. However, the cut-off value was derived from patients who experienced MACE, and their number was limited (six patients) [23]. Furthermore, the prognostic factor for ST is not only the overall percentage but also the spatial accumulation of uncovered struts [3,24].
It must also be emphasized that a covered strut in OCT is not synonymous with optimal endothelialization, but qualitative neointimal characterization may provide additional information. A homogeneous high-intensity pattern generally represents maturing neointimal tissue [25]. An OCT study comparing vascular healing after implantation of durable- or biodegradable-polymer DES showed, at the 3- and 18-month follow-ups, a high percentage of neointimal frames with homogeneous high-intensity signal patterns in both platforms. This suggests favorable vascular responses after PCI with biocompatible polymers, regardless of whether they are durable or biodegradable [26]. Finally, delayed arterial healing is just one of the ST determinants. ST is associated with many risk factors, such as age, a history of prior MI, congestive heart failure, low hemoglobin, or diabetes mellitus [27,28].
Another benefit of automated algorithms is the ability to analyze at finer intervals, a crucial factor that influences the results. Large intervals (0.5–1 mm) are suitable for assessing the lumen and stent area, while smaller intervals are required for strut coverage assessments, since larger intervals may lead to higher variability [29]. Automated algorithms enable the analysis of all available frames in a short time. Proper assessment of stent coverage is especially crucial for patients presenting with myocardial infarction, who might experience delayed endothelialization at culprit sites compared with patients with stable angina [30,31].
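The workload difference between interval choices is easy to quantify. The sketch below assumes a hypothetical 30 mm stented segment; the function name is illustrative:

```python
def frames_to_analyze(segment_length_mm, interval_mm):
    """Number of cross-sections assessed along a segment at a given
    analysis interval, counting both endpoints."""
    return round(segment_length_mm / interval_mm) + 1

# For a hypothetical 30 mm stented segment:
dense = frames_to_analyze(30, 0.2)   # 0.2 mm interval, as used in this study
sparse = frames_to_analyze(30, 1.0)  # a coarse interval common in manual work
```

At 0.2 mm the analyst faces roughly five times as many frames as at 1 mm, which is precisely the workload that makes dense manual strut-level analysis impractical and automated tools attractive.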

5. Limitations

This study has several limitations. First, the sample size was limited. Adding more OCT cases to the training set may increase the algorithm’s efficiency. Second, the algorithm was not trained and tested for multi-layer stent strut classification, which makes the proposed tool unsuitable for some patients. The proposed tool does not recognize strut apposition. Additionally, the spatial distribution of uncovered struts is not examined by the algorithm, which is also important information in terms of thrombotic complications [3]. Finally, the algorithm was not directly compared with other similar software, and the proposed method has not been validated histologically.

6. Conclusions

This paper introduces an AI-based method for quantitative stent strut coverage assessment. The initial results demonstrated a good detection of struts, with more challenging uncovered strut classification. Further clinical studies with a larger sample size are needed to improve the proposed tool. Automatic methods might be a promising alternative that enhances and facilitates OCT analysis.

Author Contributions

Conceptualization, J.F.-W., M.D. (Mikhail Darakhovich), Z.S., P.G. and E.P.; methodology, J.F.-W., M.D. (Mikhail Darakhovich), Z.S., P.G. and E.P.; software, M.D. (Mikhail Darakhovich) and E.P.; analysis, J.F.-W., M.R.-D., M.D. (Magdalena Dobrolińska) and E.P.; validation, J.F.-W. and E.P.; writing—original draft preparation, J.F.-W., M.D. (Mikhail Darakhovich) and E.P.; writing—review and editing, P.G., T.P., W.W. and E.P. All authors have read and agreed to the published version of the manuscript.

Funding

This study received no external funding.

Institutional Review Board Statement

OCT imaging one month following DES implantation was approved by the Ethics Committee of The Medical University of Silesia (Approval Code: KNW/0022/KB1/121/15, Approval Date: 17 November 2015).

Informed Consent Statement

Informed consent was obtained from all patients for the OCT procedure. Written informed consent was obtained from all patients who underwent a one-month OCT follow-up.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Acknowledgments

We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2024/017078.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Alfonso, F.; Coughlan, J.C.; Giacoppo, D.; Kastrati, A.; Byrne, R.B. Management of In-Stent Restenosis. EuroIntervention 2022, 18, e103–e123. [Google Scholar] [CrossRef] [PubMed]
  2. Joner, M.; Finn, A.V.; Farb, A.; Mont, E.K.; Kolodgie, F.D.; Ladich, E.; Kutys, R.; Skorija, K.; Gold, H.K.; Virmani, R. Pathology of Drug-Eluting Stents in Humans. J. Am. Coll. Cardiol. 2006, 48, 193–202. [Google Scholar] [CrossRef]
  3. Finn, A.V.; Joner, M.; Nakazawa, G.; Kolodgie, F.; Newell, J.; John, M.C.; Gold, H.K.; Virmani, R. Pathological Correlates of Late Drug-Eluting Stent Thrombosis: Strut Coverage as a Marker of Endothelialization. Circulation 2007, 115, 2435–2441. [Google Scholar] [CrossRef]
  4. Wenaweser, P.; Daemen, J.; Zwahlen, M.; Van Domburg, R.; Jüni, P.; Vaina, S.; Hellige, G.; Tsuchida, K.; Morger, C.; Boersma, E.; et al. Incidence and Correlates of Drug-Eluting Stent Thrombosis in Routine Clinical Practice. J. Am. Coll. Cardiol. 2008, 52, 1134–1140. [Google Scholar] [CrossRef]
  5. Tearney, G.J.; Waxman, S.; Shishkov, M.; Vakoc, B.J.; Suter, M.J.; Freilich, M.I.; Desjardins, A.E.; Oh, W.-Y.; Bartlett, L.A.; Rosenberg, M.; et al. Three-Dimensional Coronary Artery Microscopy by Intracoronary Optical Frequency Domain Imaging. JACC Cardiovasc. Imaging 2008, 1, 752–761. [Google Scholar] [CrossRef] [PubMed]
  6. Kim, J.Y.; Lee, M.W.; Yoo, H. Diagnostic Fiber-Based Optical Imaging Catheters. Biomed. Eng. Lett. 2014, 4, 239–249. [Google Scholar] [CrossRef]
  7. Tearney, G.J.; Brezinski, M.E.; Bouma, B.E.; Boppart, S.A.; Pitris, C.; Southern, J.F.; Fujimoto, J.G. In Vivo Endoscopic Optical Biopsy with Optical Coherence Tomography. Science 1997, 276, 2037–2039. [Google Scholar] [CrossRef]
  8. Attizzani, G.F.; Bezerra, H.G. Contemporary Assessment of Stent Strut Coverage by OCT. Int. J. Cardiovasc. Imaging 2013, 29, 23–27. [Google Scholar] [CrossRef]
  9. Cheng, R. A Survey: Comparison between Convolutional Neural Network and YOLO in Image Identification. J. Phys. Conf. Ser. 2020, 1453, 012139. [Google Scholar] [CrossRef]
  10. Fluder-Wlodarczyk, J.; Schneider, Z.; Pawłowski, T.; Wojakowski, W.; Gasior, P.; Pociask, E. Assessment of Effectiveness of the Algorithm for Automated Quantitative Analysis of Metallic Strut Tissue Short-Term Coverage with Intravascular Optical Coherence Tomography. J. Clin. Med. 2024, 13, 4336. [Google Scholar] [CrossRef]
  11. CVAT.ai: Computer Vision Annotation Tool. Available online: https://cvat.ai (accessed on 20 May 2024).
  12. Ultralytics. YOLO11—State-of-the-Art Object Detection Model. Available online: https://ultralytics.com (accessed on 25 April 2024).
  13. Wang, A.; Eggermont, J.; Dekker, N.; Garcia-Garcia, H.M.; Pawar, R.; Reiber, J.H.C.; Dijkstra, J. Automatic Stent Strut Detection in Intravascular Optical Coherence Tomographic Pullback Runs. Int. J. Cardiovasc. Imaging 2013, 29, 29–38. [Google Scholar] [CrossRef] [PubMed]
  14. Ughi, G.J.; Adriaenssens, T.; Onsea, K.; Kayaert, P.; Dubois, C.; Sinnaeve, P.; Coosemans, M.; Desmet, W.; D’hooge, J. Automatic Segmentation of In-Vivo Intra-Coronary Optical Coherence Tomography Images to Assess Stent Strut Apposition and Coverage. Int. J. Cardiovasc. Imaging 2012, 28, 229–241. [Google Scholar] [CrossRef] [PubMed]
  15. Nam, H.S.; Kim, C.; Lee, J.J.; Song, J.W.; Kim, J.W.; Yoo, H. Automated Detection of Vessel Lumen and Stent Struts in Intravascular Optical Coherence Tomography to Evaluate Stent Apposition and Neointimal Coverage. Med. Phys. 2016, 43, 1662–1675. [Google Scholar] [CrossRef]
  16. Lu, H.; Lee, J.; Ray, S.; Tanaka, K.; Bezerra, H.G.; Rollins, A.M.; Wilson, D.L. Automated Stent Coverage Analysis in Intravascular OCT (IVOCT) Image Volumes Using a Support Vector Machine and Mesh Growing. Biomed. Opt. Express 2019, 10, 2809. [Google Scholar] [CrossRef]
  17. Lu, H.; Lee, J.; Jakl, M.; Wang, Z.; Cervinka, P.; Bezerra, H.G.; Wilson, D.L. Application and Evaluation of Highly Automated Software for Comprehensive Stent Analysis in Intravascular Optical Coherence Tomography. Sci. Rep. 2020, 10, 2150. [Google Scholar] [CrossRef]
  18. Brugaletta, S.; Garcia-Garcia, H.M.; Gomez-Lara, J.; Radu, M.D.; Pawar, R.; Khachabi, J.; Bruining, N.; Sabaté, M.; Serruys, P.W. Reproducibility of Qualitative Assessment of Stent Struts Coverage by Optical Coherence Tomography. Int. J. Cardiovasc. Imaging 2013, 29, 5–11. [Google Scholar] [CrossRef]
  19. Antonsen, L.; Thayssen, P.; Junker, A.; Veien, K.T.; Hansen, H.S.; Hansen, K.N.; Hougaard, M.; Jensen, L.O. Intra- and Interobserver Reliability and Intra-Catheter Reproducibility Using Frequency Domain Optical Coherence Tomography for the Evaluation of Morphometric Stent Parameters and Qualitative Assessment of Stent Strut Coverage. Cardiovasc. Revascularization Med. 2015, 16, 469–477. [Google Scholar] [CrossRef]
  20. Matsumoto, D.; Shite, J.; Shinke, T.; Otake, H.; Tanino, Y.; Ogasawara, D.; Sawada, T.; Paredes, O.L.; Hirata, K.-i.; Yokoyama, M. Neointimal Coverage of Sirolimus-Eluting Stents at 6-Month Follow-up: Evaluated by Optical Coherence Tomography. Eur. Heart J. 2007, 28, 961–967. [Google Scholar] [CrossRef]
  21. Iliescu, C.A.; Cilingiroglu, M.; Giza, D.E.; Rosales, O.; Lebeau, J.; Guerrero-Mantilla, I.; Lopez-Mattei, J.; Song, J.; Silva, G.; Loyalka, P.; et al. “Bringing on the Light” in a Complex Clinical Scenario: Optical Coherence Tomography–Guided Discontinuation of Antiplatelet Therapy in Cancer Patients with Coronary Artery Disease (PROTECT-OCT Registry). Am. Heart J. 2017, 194, 83–91. [Google Scholar] [CrossRef]
  22. Lee, S.-Y.; Kim, J.-S.; Yoon, H.-J.; Hur, S.-H.; Lee, S.-G.; Kim, J.W.; Hong, Y.J.; Kim, K.-S.; Choi, S.-Y.; Shin, D.-H.; et al. Early Strut Coverage in Patients Receiving Drug-Eluting Stents and Its Implications for Dual Antiplatelet Therapy. JACC Cardiovasc. Imaging 2018, 11, 1810–1819. [Google Scholar] [CrossRef]
  23. Won, H.; Shin, D.-H.; Kim, B.-K.; Mintz, G.S.; Kim, J.-S.; Ko, Y.-G.; Choi, D.; Jang, Y.; Hong, M.-K. Optical Coherence Tomography Derived Cut-off Value of Uncovered Stent Struts to Predict Adverse Clinical Outcomes after Drug-Eluting Stent Implantation. Int. J. Cardiovasc. Imaging 2013, 29, 1255–1263. [Google Scholar] [CrossRef] [PubMed]
  24. Guagliumi, G.; Sirbu, V.; Musumeci, G.; Gerber, R.; Biondi-Zoccai, G.; Ikejima, H.; Ladich, E.; Lortkipanidze, N.; Matiashvili, A.; Valsecchi, O.; et al. Examination of the In Vivo Mechanisms of Late Drug-Eluting Stent Thrombosis. JACC Cardiovasc. Interv. 2012, 5, 12–20. [Google Scholar] [CrossRef] [PubMed]
  25. Lutter, C.; Mori, H.; Yahagi, K.; Ladich, E.; Joner, M.; Kutys, R.; Fowler, D.; Romero, M.; Narula, J.; Virmani, R.; et al. Histopathological Differential Diagnosis of Optical Coherence Tomographic Image Interpretation After Stenting. JACC Cardiovasc. Interv. 2016, 9, 2511–2523. [Google Scholar] [CrossRef] [PubMed]
  26. Guagliumi, G.; Shimamura, K.; Sirbu, V.; Garbo, R.; Boccuzzi, G.; Vassileva, A.; Valsecchi, O.; Fiocca, L.; Canova, P.; Colombo, F.; et al. Temporal Course of Vascular Healing and Neoatherosclerosis after Implantation of Durable- or Biodegradable-Polymer Drug-Eluting Stents. Eur. Heart J. 2018, 39, 2448–2456. [Google Scholar] [CrossRef]
  27. Chi, G.; AlKhalfan, F.; Lee, J.J.; Montazerin, S.M.; Fitzgerald, C.; Korjian, S.; Omar, W.; Barnathan, E.; Plotnikov, A.; Gibson, C.M. Factors Associated with Early, Late, and Very Late Stent Thrombosis among Patients with Acute Coronary Syndrome Undergoing Coronary Stent Placement: Analysis from the ATLAS ACS 2-TIMI 51 Trial. Front. Cardiovasc. Med. 2024, 10, 1269011. [Google Scholar] [CrossRef]
  28. Iakovou, I.; Schmidt, T.; Bonizzoni, E.; Ge, L.; Sangiorgi, G.M.; Stankovic, G.; Airoldi, F.; Chieffo, A.; Montorfano, M.; Carlino, M.; et al. Incidence, Predictors, and Outcome of Thrombosis After Successful Implantation of Drug-Eluting Stents. JAMA 2005, 293, 2126–2130. [Google Scholar]
  29. Mehanna, E.A.; Attizzani, G.F.; Kyono, H.; Hake, M.; Bezerra, H.G. Assessment of Coronary Stent by Optical Coherence Tomography, Methodology and Definitions. Int. J. Cardiovasc. Imaging 2011, 27, 259–269. [Google Scholar] [CrossRef]
  30. Nakazawa, G.; Finn, A.V.; Joner, M.; Ladich, E.; Kutys, R.; Mont, E.K.; Gold, H.K.; Burke, A.P.; Kolodgie, F.D.; Virmani, R. Delayed Arterial Healing and Increased Late Stent Thrombosis at Culprit Sites After Drug-Eluting Stent Placement for Acute Myocardial Infarction Patients: An Autopsy Study. Circulation 2008, 118, 1138–1145. [Google Scholar] [CrossRef]
  31. Aihara, K.; Torii, S.; Nakamura, N.; Hozumi, H.; Shiozaki, M.; Sato, Y.; Yoshikawa, M.; Kamioka, N.; Ijichi, T.; Natsumeda, M.; et al. Pathological Evaluation of Predictors for Delayed Endothelial Coverage after Currently Available Drug-Eluting Stent Implantation in Coronary Arteries: Impact of Lesions with Acute and Chronic Coronary Syndromes. Am. Heart J. 2024, 277, 114–124. [Google Scholar] [CrossRef]
Figure 1. Summary of intraobserver variability for analyst 1 (A), analyst 2 (B), interobserver variability (C), and algorithm performance versus ground truth (GT)—consensus for analysts 1 and 2 (D).
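The intra- and interobserver κ values summarized in Figure 1 follow the standard two-category Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch under that assumption (the helper name and the example labels below are illustrative, not data from the study):

```python
def cohens_kappa(a, b, labels=("covered", "uncovered")):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    # Observed agreement: fraction of struts labeled identically.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if each rater labeled independently
    # according to their own marginal label frequencies.
    pe = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in labels)
    return (po - pe) / (1 - pe)

# Illustrative labels for eight struts rated twice by the same analyst.
first = ["covered"] * 6 + ["uncovered"] * 2
second = ["covered"] * 5 + ["uncovered"] * 3
print(round(cohens_kappa(first, second), 2))  # → 0.71
```

Note how a single disagreement on eight struts already pulls κ well below the raw 87.5% agreement, which helps explain why the moderate κ values reported here (0.53–0.57 intraobserver) coexist with visually plausible annotations.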
Figure 2. Example frames analyzed by the presented tool: (A–D) correct detection and classification of struts; (E) a thinly covered strut incorrectly classified as uncovered; (F) calcification confused with a covered strut; (G) ghost strut artifacts (multiplied struts in the shadow area) confused with an uncovered strut; and (H) the catheter incorrectly identified as an uncovered strut.
Figure 3. (A) OCT frame from a 1-month follow-up examination. Most visible struts are covered (an example is marked with an asterisk); however, for several struts it is questionable whether they are thinly covered or uncovered (arrows). (B–D) Zoomed views of the questionable struts.
Table 1. Patient and procedural characteristics.

                                All Pullbacks (n = 22)    Testing Set (n = 2)
Age (average)                   68                        63
Male                            14                        1
Indication for PCI
  ACS                           3                         1
  UA                            10                        0
  CCS                           9                         1
Risk factors
  Hypertension                  18                        2
  Diabetes mellitus             4                         0
  Dyslipidemia                  6                         0
  Smoking                       5                         2
Coronary artery
  LAD                           8                         2
  Cx                            8                         0
  IM                            1                         0
  RCA                           5                         0
Stent type (strut thickness)
  Alex Plus (71 µm)             9                         0
  Resolute Onyx (81 µm)         8                         0
  Supraflex Cruz (60 µm)        2                         1
  Resolute Integrity (90 µm)    1                         0
  Orsiro (60 µm)                2                         1

ACS—acute coronary syndrome, UA—unstable angina, CCS—chronic coronary syndrome, PCI—percutaneous coronary intervention, LAD—left anterior descending artery, Cx—circumflex artery, IM—intermediate artery, RCA—right coronary artery.
Table 2. Comparison of GT and AI model's performance.

                GT      Algorithm    PPV (%)    TPR (%)
Total struts    3539    3439         92         90
Covered         2324    2440         81         85
Uncovered       1215    999          73         60

GT—ground truth; PPV—positive predictive value; TPR—true positive rate.
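The PPV and TPR columns follow the usual definitions from true-positive (TP), false-positive (FP), and false-negative (FN) counts: PPV measures how many detections are real struts, TPR how many real struts are found. A minimal sketch; the TP/FP/FN counts below are illustrative values chosen only to be consistent with the table's strut totals (the article reports the aggregated percentages, not the raw confusion counts):

```python
def ppv_tpr(tp, fp, fn):
    """PPV = TP / (TP + FP); TPR (sensitivity) = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: 3439 detections (tp + fp) against
# 3539 ground-truth struts (tp + fn), matching the table's totals.
ppv, tpr = ppv_tpr(tp=3175, fp=264, fn=364)
print(f"PPV = {ppv:.0%}, TPR = {tpr:.0%}")  # → PPV = 92%, TPR = 90%
```

The asymmetry in Table 2 then has a simple reading: for uncovered struts the algorithm both over-calls (PPV 73%) and under-finds (TPR 60%), consistent with the confusions shown in Figure 2.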