Search Results (484)

Search Parameters:
Keywords = computers assisted diagnosis

19 pages, 2135 KiB  
Article
Development of an Automotive Electronics Internship Assistance System Using a Fine-Tuned Llama 3 Large Language Model
by Ying-Chia Huang, Hsin-Jung Tsai, Hui-Ting Liang, Bo-Siang Chen, Tzu-Hsin Chu, Wei-Sho Ho, Wei-Lun Huang and Ying-Ju Tseng
Systems 2025, 13(8), 668; https://doi.org/10.3390/systems13080668 - 6 Aug 2025
Abstract
This study develops and validates an artificial intelligence (AI)-assisted internship learning platform for automotive electronics based on the Llama 3 large language model, aiming to enhance pedagogical effectiveness within vocational training contexts. Addressing critical issues such as the persistent theory–practice gap and limited innovation capability prevalent in existing curricula, we leverage the natural language processing (NLP) capabilities of Llama 3 through fine-tuning based on transfer learning to establish a specialized knowledge base encompassing fundamental circuit principles and fault diagnosis protocols. The implementation employs the Hugging Face Transformers library with optimized hyperparameters, including a learning rate of 5 × 10⁻⁵ across five training epochs. Post-training evaluations revealed an accuracy of 89.7% on validation tasks (representing a 12.4% improvement over the baseline model), a semantic comprehension precision of 92.3% in technical question-and-answer assessments, a mathematical computation accuracy of 78.4% (highlighting this as a current limitation), and a latency of 6.3 s under peak operational workloads (indicating a system bottleneck). Although direct trials involving students were deliberately avoided, the platform's technical feasibility was validated through multidimensional benchmarking against established models (BERT-base and GPT-2), confirming superior domain adaptability (F1 = 0.87) and enhanced error tolerance (σ² = 1.2). Notable limitations emerged in numerical reasoning tasks (Cohen's d = 1.15 compared to human experts) and in real-time responsiveness deterioration when exceeding 50 concurrent users. The study concludes that Llama 3 demonstrates considerable promise for automotive electronics skills development. Proposed future enhancements include integrating symbolic AI modules to improve computational reliability, implementing Kubernetes-based load balancing to ensure latency below 2 s at scale, and conducting longitudinal pedagogical validation studies with trainees. This research provides a robust technical foundation for AI-driven vocational education, especially suited to mechatronics fields that require close integration between theoretical knowledge and practical troubleshooting skills.
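
The quoted setup (Hugging Face Transformers, learning rate 5 × 10⁻⁵, five epochs) corresponds to a routine causal-LM fine-tuning loop. A minimal sketch follows; the checkpoint name, dataset file, and JSON field are placeholder assumptions rather than details from the paper.

```python
# Minimal causal-LM fine-tuning sketch using the hyperparameters quoted in the
# abstract (learning rate 5e-5, five epochs). Checkpoint and dataset names are
# placeholders: the JSONL file and its "text" field are assumed, not reported.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed base checkpoint (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama ships without a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: circuit principles and fault-diagnosis protocols.
corpus = load_dataset("json", data_files="automotive_electronics.jsonl")["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=corpus.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="llama3-automotive",
        learning_rate=5e-5,        # as reported in the abstract
        num_train_epochs=5,        # as reported in the abstract
        per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False))
trainer.train()
```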

19 pages, 1555 KiB  
Article
MedLangViT: A Language–Vision Network for Medical Image Segmentation
by Yiyi Wang, Jia Su, Xinxiao Li and Eisei Nakahara
Electronics 2025, 14(15), 3020; https://doi.org/10.3390/electronics14153020 - 29 Jul 2025
Abstract
Precise medical image segmentation is crucial for advancing computer-aided diagnosis. Although deep learning-based medical image segmentation is now widely applied in this field, the complexity of human anatomy and the diversity of pathological manifestations often necessitate the use of image annotations to enhance segmentation accuracy. In this process, the scarcity of annotations and the lightweight design requirements of associated text encoders collectively present key challenges for improving segmentation model performance. To address these challenges, we propose MedLangViT, a novel language–vision multimodal model for medical image segmentation that incorporates medical descriptive information through lightweight text embedding rather than text encoders. MedLangViT innovatively leverages medical textual information to assist the segmentation process, thereby reducing reliance on extensive high-precision image annotations. Furthermore, we design an Enhanced Channel-Spatial Attention Module (ECSAM) to effectively fuse textual and visual features, strengthening textual guidance for segmentation decisions. Extensive experiments conducted on two publicly available text–image-paired medical datasets demonstrated that MedLangViT significantly outperforms existing state-of-the-art methods, validating the effectiveness of both the proposed model and the ECSAM.
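
The abstract names an Enhanced Channel-Spatial Attention Module (ECSAM) that fuses a lightweight text embedding with visual features but does not specify its internals, so the sketch below is only a generic channel-then-spatial attention gate conditioned on a text vector; every shape, layer, and name here is an assumption for illustration.

```python
# Generic sketch of a channel-then-spatial attention gate conditioned on a text
# embedding; ECSAM's actual design is not given in the abstract.
import torch
import torch.nn as nn

class TextConditionedChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int = 64, text_dim: int = 32):
        super().__init__()
        # Channel gate: pooled visual features concatenated with the text vector.
        self.channel_gate = nn.Sequential(
            nn.Linear(channels + text_dim, channels), nn.Sigmoid())
        # Spatial gate: a single conv collapsing channels to one attention map.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, feat: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        pooled = feat.mean(dim=(2, 3))                        # global average pool
        ch = self.channel_gate(torch.cat([pooled, text_emb], dim=1))
        feat = feat * ch[:, :, None, None]                    # re-weight channels
        return feat * self.spatial_gate(feat)                 # re-weight locations

fused = TextConditionedChannelSpatialAttention()(
    torch.randn(2, 64, 32, 32), torch.randn(2, 32))
print(fused.shape)  # torch.Size([2, 64, 32, 32])
```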

11 pages, 556 KiB  
Article
Added Value of SPECT/CT in Radio-Guided Occult Localization (ROLL) of Non-Palpable Pulmonary Nodules Treated with Uniportal Video-Assisted Thoracoscopy
by Demetrio Aricò, Lucia Motta, Giulia Giacoppo, Michelangelo Bambaci, Paolo Macrì, Stefania Maria, Francesco Barbagallo, Nicola Ricottone, Lorenza Marino, Gianmarco Motta, Giorgia Leone, Carlo Carnaghi, Vittorio Gebbia, Domenica Caponnetto and Laura Evangelista
J. Clin. Med. 2025, 14(15), 5337; https://doi.org/10.3390/jcm14155337 - 29 Jul 2025
Abstract
Background/Objectives: The extensive use of computed tomography (CT) has led to a significant increase in the detection of small and non-palpable pulmonary nodules, necessitating the use of invasive methods for definitive diagnosis. Video-assisted thoracoscopic surgery (VATS) has become the preferred procedure for nodule resections; however, intraoperative localization remains challenging, especially for deep or subsolid lesions. This study explores whether SPECT/CT improves the technical and clinical outcomes of radio-guided occult lesion localization (ROLL) before uniportal video-assisted thoracoscopic surgery (u-VATS). Methods: This is a retrospective study involving consecutive patients referred for the resection of pulmonary nodules who underwent CT-guided ROLL followed by u-VATS between September 2017 and December 2024. From January 2023, SPECT/CT was systematically added after planar imaging. The cohort was divided into a planar group and a planar + SPECT/CT group. The inclusion criteria involved nodules sized ≤ 2 cm, with ground-glass or solid characteristics, located at a depth of <6 cm from the pleural surface. 99mTc-MAA injected activity, timing, the classification of planar and SPECT/CT image findings (focal uptake, multisite with focal uptake, multisite without focal uptake), spillage, and post-procedure complications were evaluated. Statistical analysis was performed, with continuous data expressed as medians and categorical data as counts. Comparisons were made using chi-square tests for categorical variables and the Mann–Whitney U test for procedural duration. Cohen's kappa coefficient was calculated to assess agreement between imaging modalities. Results: In total, 125 patients were selected for CT-guided radiotracer injection followed by u-VATS. The planar group and planar + SPECT/CT group comprised 60 and 65 patients, respectively. Focal uptake was detected in 68 patients (54%), multisite with focal uptake in 46 (36.8%), and multisite without focal uptake in 11 (8.8%). In comparative analyses between planar and SPECT/CT imaging in 65 patients, 91% exhibited focal uptake, revealing significant differences in classification for 40% of the patients. SPECT/CT corrected the classification of 23 patients initially categorized as multisite with focal uptake to focal uptake, improving localization accuracy. The mean procedure duration was 39 min with SPECT/CT. Pneumothorax was more frequently detected with SPECT/CT (43% vs. 1.6%). The intraoperative localization success rate was 96%. Conclusions: SPECT/CT imaging in the ROLL procedure for detecting pulmonary nodules before u-VATS demonstrates a significant advantage in reclassifying radiotracer positioning compared to planar imaging. Considering its limited impact on surgical success rates and additional procedural time, SPECT/CT should be reserved for technically challenging cases. Larger sample sizes, multicentric and prospective randomized studies, and formal cost–utility analyses are warranted.
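
The statistical toolkit named in the Methods maps directly onto standard library calls; the sketch below runs a chi-square test, a Mann–Whitney U test, and Cohen's kappa on made-up toy arrays standing in for the study data.

```python
# Toy re-creation of the Methods' statistical tests; all arrays are invented.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu
from sklearn.metrics import cohen_kappa_score

# Chi-square on a 2x2 contingency table (e.g., a complication by imaging group).
table = np.array([[12, 48],   # hypothetical counts, planar group
                  [28, 37]])  # hypothetical counts, planar + SPECT/CT group
chi2, p_categorical, dof, _ = chi2_contingency(table)

# Mann-Whitney U test on procedure duration (minutes) between the two groups.
planar_minutes = [31, 35, 29, 40, 33, 36]
spect_minutes = [39, 42, 37, 41, 38, 44]
u_stat, p_duration = mannwhitneyu(planar_minutes, spect_minutes)

# Cohen's kappa for agreement between planar and SPECT/CT classifications
# (0 = focal, 1 = multisite with focal, 2 = multisite without focal uptake).
planar_labels = [0, 1, 1, 0, 2, 1, 0, 1]
spect_labels = [0, 0, 1, 0, 2, 0, 0, 1]
kappa = cohen_kappa_score(planar_labels, spect_labels)
print(p_categorical, p_duration, kappa)
```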
(This article belongs to the Section Nuclear Medicine & Radiology)

17 pages, 6870 KiB  
Article
Edge- and Color–Texture-Aware Bag-of-Local-Features Model for Accurate and Interpretable Skin Lesion Diagnosis
by Dichao Liu and Kenji Suzuki
Diagnostics 2025, 15(15), 1883; https://doi.org/10.3390/diagnostics15151883 - 27 Jul 2025
Abstract
Background/Objectives: Deep models have achieved remarkable progress in the diagnosis of skin lesions but face two significant drawbacks. First, they cannot effectively explain the basis of their predictions. Although attention visualization tools like Grad-CAM can create heatmaps using deep features, these features often have large receptive fields, resulting in poor spatial alignment with the input image. Second, the design of most deep models neglects interpretable traditional visual features inspired by clinical experience, such as color–texture and edge features. This study aims to propose a novel approach integrating deep learning with traditional visual features to handle these limitations. Methods: We introduce the edge- and color–texture-aware bag-of-local-features model (ECT-BoFM), which limits the receptive field of deep features to a small size and incorporates edge and color–texture information from traditional features. A non-rigid reconstruction strategy ensures that traditional features enhance rather than constrain the model's performance. Results: Experiments on the ISIC 2018 and 2019 datasets demonstrated that ECT-BoFM yields precise heatmaps and achieves high diagnostic performance, outperforming state-of-the-art methods. Furthermore, training models using only a small number of the most predictive patches identified by ECT-BoFM achieved diagnostic performance comparable to that obtained using full images, demonstrating its efficiency in exploring key clues. Conclusions: ECT-BoFM successfully combines deep learning and traditional visual features, addressing the interpretability and diagnostic accuracy challenges of existing methods. ECT-BoFM provides an interpretable and accurate framework for skin lesion diagnosis, advancing the integration of AI in dermatological research and clinical applications.

22 pages, 4406 KiB  
Article
Colorectal Cancer Detection Tool Developed with Neural Networks
by Alex Ede Danku, Eva Henrietta Dulf, Alexandru George Berciu, Noemi Lorenzovici and Teodora Mocan
Appl. Sci. 2025, 15(15), 8144; https://doi.org/10.3390/app15158144 - 22 Jul 2025
Abstract
In the last two decades, there has been a considerable surge in the development of artificial intelligence. Imaging is most frequently employed for the diagnostic evaluation of patients, as it is regarded as one of the most precise methods for identifying the presence of a disease. However, a study indicates that approximately 800,000 individuals in the USA die or incur permanent disability because of misdiagnosis. The present study is based on the use of computer-aided diagnosis of colorectal cancer. The objective of this study is to develop a practical, low-cost, AI-based decision-support tool that integrates clinical test data (blood/stool) and, if needed, colonoscopy images to help reduce misdiagnosis and improve early detection of colorectal cancer for clinicians. Convolutional neural networks (CNNs) and artificial neural networks (ANNs) are utilized in conjunction with a graphical user interface (GUI), which caters to individuals lacking programming expertise. The performance of the artificial neural network (ANN) is measured using the mean squared error (MSE) metric, and the obtained value is 7.38. For the CNN, two distinct cases are under consideration: one with two outputs and one with three outputs. In the first case, the precision of the models is 97.2% for RGB images and 96.7% for grayscale images; in the second, it is 83% for RGB and 82% for grayscale. However, using a pretrained network yielded superior performance, with 99.5% for 2-output models and 93% for 3-output models. The GUI is composed of two panels, using the best ANN model in one and the best CNN model in the other. The primary function of the tool is to assist medical personnel in reducing the time required to make decisions and the probability of misdiagnosis.
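
The best-performing variant used a pretrained network, which in practice is standard transfer learning: freeze a pretrained backbone and retrain only a new classification head sized to the number of outputs. The abstract does not name the backbone, so the ResNet-18 below is an assumption chosen purely for illustration.

```python
# Transfer-learning sketch for the "pretrained network" variant: freeze a
# pretrained backbone and retrain only a new 2-output head (cancer / no cancer).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                # keep pretrained features fixed
model.fc = nn.Linear(model.fc.in_features, 2)  # new trainable 2-output head

x = torch.randn(1, 3, 224, 224)                # dummy RGB colonoscopy-sized input
print(model(x).shape)                          # torch.Size([1, 2])
```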

16 pages, 2557 KiB  
Article
Explainable AI for Oral Cancer Diagnosis: Multiclass Classification of Histopathology Images and Grad-CAM Visualization
by Jelena Štifanić, Daniel Štifanić, Nikola Anđelić and Zlatan Car
Biology 2025, 14(8), 909; https://doi.org/10.3390/biology14080909 - 22 Jul 2025
Abstract
Oral cancer is typically diagnosed through histological examination; however, the primary issue with this type of procedure is tumor heterogeneity, where a subjective aspect of the examination may have a direct effect on the treatment plan for a patient. To reduce inter- and intra-observer variability, artificial intelligence algorithms are often used as computational aids in tumor classification and diagnosis. This research proposes a two-step approach for automatic multiclass grading using oral histopathology images (the first step) and Grad-CAM visualization (the second step) to assist clinicians in diagnosing oral squamous cell carcinoma. The Xception architecture achieved the highest classification performance, with a macro-averaged AUC of 0.929 (σ = 0.087) and a micro-averaged AUC of 0.942 (σ = 0.074). Additionally, Grad-CAM provided visual explanations of the model's predictions by highlighting the precise areas of histopathology images that influenced the model's decision. These results emphasize the potential of integrated AI algorithms in medical diagnostics, offering a more precise, dependable, and effective method for disease analysis.
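
Grad-CAM itself has a well-defined recipe: global-average-pool the gradients of the class score over a late convolutional layer to get channel weights, take the weighted sum of that layer's activations, apply ReLU, and upsample. A minimal PyTorch sketch follows; ResNet-18 stands in for Xception (which is not in torchvision), and the input is a dummy tensor rather than a histopathology tile.

```python
# Minimal Grad-CAM sketch: pool class-score gradients over a late conv layer
# into channel weights, weight the activations, ReLU, and upsample.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
acts, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in image
model(x)[0].max().backward()               # backprop the top class score

weights = grads["a"].mean(dim=(2, 3), keepdim=True)         # GAP of gradients
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # heatmap in [0, 1]
```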

24 pages, 746 KiB  
Review
Artificial Intelligence in Advancing Inflammatory Bowel Disease Management: Setting New Standards
by Nunzia Labarile, Alessandro Vitello, Emanuele Sinagra, Olga Maria Nardone, Giulio Calabrese, Federico Bonomo, Marcello Maida and Marietta Iacucci
Cancers 2025, 17(14), 2337; https://doi.org/10.3390/cancers17142337 - 14 Jul 2025
Abstract
Introduction: Artificial intelligence (AI) is increasingly being applied to improve the diagnosis and management of inflammatory bowel disease (IBD). Aims and Methods: We conducted a narrative review of the literature on AI applications in IBD endoscopy, focusing on diagnosis, disease activity assessment, therapy prediction, and detection of dysplasia. Results: AI systems have demonstrated high accuracy in assessing endoscopic and histological disease activity in ulcerative colitis and Crohn's disease, with performance comparable to expert clinicians. Machine learning models can predict response to biologics and risk of complications. AI-assisted technologies like confocal laser endomicroscopy enable real-time histological assessment. Computer-aided detection systems improve identification of dysplastic lesions during surveillance. Challenges remain, including the need for larger datasets and external validation, and the mitigation of potential biases. Conclusions: AI has significant potential to enhance IBD care by providing rapid, objective assessments of disease activity, predicting outcomes, and assisting in dysplasia surveillance. However, further validation in diverse populations and prospective studies are needed before widespread clinical implementation. With ongoing advances, AI is poised to become a valuable tool to support clinical decision-making and improve patient outcomes in IBD. Addressing methodological, regulatory, and cost barriers will be crucial for the successful integration of AI into routine IBD management.
(This article belongs to the Section Cancer Therapy)

18 pages, 1667 KiB  
Article
Multi-Task Deep Learning for Simultaneous Classification and Segmentation of Cancer Pathologies in Diverse Medical Imaging Modalities
by Maryem Rhanoui, Khaoula Alaoui Belghiti and Mounia Mikram
Onco 2025, 5(3), 34; https://doi.org/10.3390/onco5030034 - 11 Jul 2025
Abstract
Background: Clinical imaging is an important part of health care, providing physicians with great assistance in patient treatment. In fact, segmentation and grading of tumors can help doctors assess the severity of the cancer at an early stage and increase the chances of cure. Although deep learning for cancer diagnosis has achieved clinically acceptable accuracy, challenging tasks remain, especially in the context of insufficient labeled data and the subsequent need for expensive computational resources. Objective: This paper presents a lightweight classification and segmentation deep learning model to assist in the identification of cancerous tumors with high accuracy despite the scarcity of medical data. Methods: We propose a multi-task architecture for classification and segmentation of cancerous tumors in the brain, skin, prostate, and lungs. The model is based on the UNet architecture with different pre-trained deep learning models (VGG16 and MobileNetV2) as a backbone. The multi-task model is validated on relatively small datasets (slightly exceeding 1200 images) that are diverse in terms of modalities (MRI, X-ray, dermoscopic, and digital histopathology), number of classes, and the shapes and sizes of the cancer pathologies, using accuracy and the Dice coefficient as statistical metrics. Results: Experiments show that the multi-task approach improves learning efficiency and prediction accuracy for the segmentation and classification tasks, compared to training the individual models separately. The multi-task architecture reached classification accuracies of 86%, 90%, 88%, and 87% for skin lesions, brain tumors, prostate cancer, and pneumothorax, respectively. For the segmentation tasks, it achieved high precisions of 95% and 98% for skin lesion and brain tumor segmentation, respectively, and 99% for both prostate cancer and pneumothorax segmentation, showing that the multi-task solution is more efficient than single-task networks.
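
The core design, one shared encoder feeding both a segmentation decoder and a classification head so the two losses train jointly, can be sketched in a few lines. The toy encoder below stands in for the VGG16/MobileNetV2 backbones named in the abstract; all shapes and heads are illustrative assumptions.

```python
# Shared-encoder multi-task sketch: one backbone, two heads, one joint loss.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(                  # toy stand-in backbone
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.seg_head = nn.Sequential(                 # upsampling decoder
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2))    # binary mask logits
        self.cls_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        z = self.encoder(x)                            # shared features
        return self.seg_head(z), self.cls_head(z)

net = MultiTaskNet()
mask_logits, class_logits = net(torch.randn(2, 3, 128, 128))
# Joint loss: both tasks push gradients through the shared encoder.
loss = (nn.functional.binary_cross_entropy_with_logits(
            mask_logits, torch.rand(2, 1, 128, 128))
        + nn.functional.cross_entropy(class_logits, torch.tensor([0, 1])))
loss.backward()
```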

12 pages, 2431 KiB  
Article
Unsupervised Clustering Successfully Predicts Prognosis in NSCLC Brain Metastasis Cohorts
by Emre Uysal, Gorkem Durak, Ayse Kotek Sedef, Ulas Bagci, Tanju Berber, Necla Gurdal and Berna Akkus Yildirim
Diagnostics 2025, 15(14), 1747; https://doi.org/10.3390/diagnostics15141747 - 10 Jul 2025
Abstract
Background/Objectives: Current developments in computer-aided systems rely heavily on complex and computationally intensive algorithms. However, even a simple approach can offer a promising solution to reduce the burden on clinicians. Addressing this, we aim to employ unsupervised cluster analysis to identify prognostic subgroups of non-small-cell lung cancer (NSCLC) patients with brain metastasis (BM). Simple-yet-effective algorithms designed to identify similar group characteristics will assist clinicians in categorizing patients effectively. Methods: We retrospectively collected data from 95 NSCLC patients with BM treated at two oncology centers. To identify clinically distinct subgroups, two types of unsupervised clustering methods—two-step clustering (TSC) and hierarchical cluster analysis (HCA)—were applied to the baseline clinical data. Patients were categorized into prognostic classes according to the Diagnosis-Specific Graded Prognostic Assessment (DS-GPA). Survival curves for the clusters and DS-GPA classes were generated using Kaplan–Meier analysis, and the differences were assessed with the log-rank test. The discriminative ability of three categorical variables on survival was compared using the concordance index (C-index). Results: The mean age of the patients was 61.8 ± 0.9 years, and the majority (77.9%) were men. Extracranial metastasis was present in 71.6% of the patients, with most (63.2%) having a single BM. The DS-GPA classification significantly divided the patients into prognostic classes (p < 0.001). Furthermore, statistical significance was observed between clusters created by TSC (p < 0.001) and HCA (p < 0.001). HCA showed the highest discriminatory power (C-index = 0.721), followed by the DS-GPA (C-index = 0.709) and TSC (C-index = 0.650). Conclusions: Our findings demonstrated that the TSC and HCA models were comparable in prognostic performance to the DS-GPA index in NSCLC patients with BM. These results suggest that unsupervised clustering may offer a data-driven perspective on patient stratification, though further validation is needed to clarify its role in prognostic modeling.
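
As a rough sketch of the described workflow, the snippet below clusters synthetic baseline data hierarchically (Ward linkage, three clusters), then compares the clusters against survival with a log-rank test and a concordance index via the lifelines library. All data, and the assumption that cluster labels order by risk, are placeholders.

```python
# Hierarchical clustering of baseline features, then survival comparison.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from lifelines.statistics import multivariate_logrank_test
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
X = rng.normal(size=(95, 5))                  # 95 patients, 5 baseline variables
clusters = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

months = rng.exponential(12, size=95)         # synthetic overall survival times
event = rng.integers(0, 2, size=95)           # 1 = death observed
result = multivariate_logrank_test(months, clusters, event)

# C-index treats the (negated) cluster label as a risk score: this assumes the
# higher-numbered cluster has shorter survival, purely for illustration.
cidx = concordance_index(months, -clusters, event)
print(result.p_value, cidx)
```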
(This article belongs to the Special Issue Artificial Intelligence Approaches for Medical Diagnostics in the USA)

23 pages, 8395 KiB  
Review
Revisiting Fat Content in Bone Lesions: Paradigms in Bone Lesion Detection
by Ali Shah, Neel R. Raja, Hasaam Uldin, Sonal Saran and Rajesh Botchu
Diseases 2025, 13(7), 197; https://doi.org/10.3390/diseases13070197 - 27 Jun 2025
Abstract
Bone lesions encountered in radiology practice can pose diagnostic challenges, whether found incidentally, suspected as a primary bone lesion, or assessed in patients at risk of metastases or marrow-based malignancies. Differentiating benign from malignant bone marrow lesions is critical, yet can be challenging due to overlapping imaging characteristics. One key imaging feature that can assist with diagnosis is the presence of fat within the lesion. Fat can be present either macroscopically (i.e., visible on radiographs, computed tomography (CT), and conventional magnetic resonance imaging (MRI)) or microscopically, detected through specialised MRI techniques such as chemical shift imaging (CSI). This comprehensive review explores the diagnostic significance of both macroscopic and microscopic fat in bone lesions and discusses how its presence can point towards benignity. We illustrate the spectrum of fat-containing bone lesions, encompassing both typical and atypical presentations, and provide practical imaging strategies to increase diagnostic accuracy by utilising radiographs, CT, and MRI in characterising these lesions. Specifically, CSI is highlighted as a non-invasive method for evaluating intralesional fat content, distinguishing benign marrow entities from malignant marrow-replacing conditions based on quantifiable signal drop-off. Furthermore, we detail imaging pitfalls, with a focus on conditions that can mimic malignancy (such as aggressive haemangiomas) and collision lesions. Through a detailed discussion and illustrative examples, we aim to guide radiologists and clinicians in recognising reassuring imaging features while also identifying scenarios where further investigation may be warranted.
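
The chemical-shift criterion the review describes reduces to a simple quantity: the fractional signal drop between in-phase and opposed-phase images. The helper below computes it; the example intensities are made up, and the ≥20% cut-off is a commonly quoted rule of thumb assumed here, not a figure taken from this review.

```python
# Fractional signal loss on opposed-phase vs. in-phase imaging (0 = no drop).
def csi_signal_drop(si_in_phase: float, si_opposed_phase: float) -> float:
    """Quantifiable signal drop-off used to infer intralesional fat."""
    return (si_in_phase - si_opposed_phase) / si_in_phase

drop = csi_signal_drop(si_in_phase=420.0, si_opposed_phase=260.0)  # toy values
verdict = "suggests intralesional fat" if drop >= 0.20 else "indeterminate"
print(f"{drop:.0%} drop -> {verdict}")
```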

29 pages, 4405 KiB  
Article
Pupil Detection Algorithm Based on ViM
by Yu Zhang, Changyuan Wang, Pengbo Wang and Pengxiang Xue
Sensors 2025, 25(13), 3978; https://doi.org/10.3390/s25133978 - 26 Jun 2025
Abstract
Pupil detection is a key technology in fields such as human–computer interaction, fatigue driving detection, and medical diagnosis. Existing pupil detection algorithms still face challenges in maintaining robustness under variable lighting conditions and occlusion scenarios. In this paper, we propose a novel pupil detection algorithm, ViMSA, based on the ViM model. This algorithm introduces weighted feature fusion, aiming to enable the model to adaptively learn the contribution of different feature patches to the pupil detection results; combines ViM with the MSA (multi-head self-attention) mechanism, aiming to integrate global features and improve the accuracy and robustness of pupil detection; and uses the FFT (fast Fourier transform) to convert the time-domain vector outer product in MSA into a frequency-domain dot product, in order to reduce the computational complexity of the model and improve its detection efficiency. ViMSA was trained and tested on nearly 135,000 pupil images from 30 different datasets, demonstrating exceptional generalization capability. The experimental results demonstrate that the proposed ViMSA achieves 99.6% detection accuracy at five pixels with an RMSE of 1.67 pixels and a processing speed exceeding 100 FPS, meeting real-time monitoring requirements for various applications, including operation under variable and uneven lighting conditions, assistive technology (enabling communication with neuro-motor disorder patients through pupil recognition), computer gaming, and automotive applications (enhancing traffic safety by monitoring drivers' cognitive states).
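
The claimed complexity reduction rests on the standard FFT identity behind such reformulations: an O(n²) circular convolution in the time domain becomes an O(n log n) element-wise product in the frequency domain. The snippet below numerically checks that identity; it illustrates the general principle only, not the authors' exact ViMSA formulation.

```python
# Convolution theorem check: circular convolution == IFFT(FFT(a) * FFT(b)).
import numpy as np

rng = np.random.default_rng(1)
n = 256
a, b = rng.normal(size=n), rng.normal(size=n)

# Direct circular convolution: O(n^2).
direct = np.array([sum(a[k] * b[(i - k) % n] for k in range(n))
                   for i in range(n)])

# FFT route: O(n log n).
via_fft = np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

assert np.allclose(direct, via_fft)  # identical up to floating-point error
```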
(This article belongs to the Section Intelligent Sensors)

13 pages, 371 KiB  
Article
Real-Life Performance of a Commercially Available AI Tool for Post-Traumatic Intracranial Hemorrhage Detection on CT Scans: A Supportive Tool
by Léo Mabit, Maryne Lepoittevin, Martin Valls, Clément Thomas, Rémy Guillevin and Guillaume Herpe
J. Clin. Med. 2025, 14(13), 4403; https://doi.org/10.3390/jcm14134403 - 20 Jun 2025
Abstract
Background: Traumatic brain injury (TBI) is a major cause of morbidity and mortality worldwide, and it can cause intracranial hemorrhage (ICH), a life-threatening condition that requires rapid diagnosis with computed tomography (CT). Artificial intelligence tools for ICH detection are now commercially available. Objectives: To investigate the real-world performance of qER.ai, an artificial intelligence-based CT hemorrhage detection tool, in a post-traumatic population. Methods: Retrospective monocentric observational study of a dataset of head CT scans consecutively acquired at the emergency radiology unit to explore brain trauma. AI performance was compared to ground truth determined by expert consensus. For a subset of night-shift cases, the radiological report of a junior resident was compared to the AI results and ground truth. Results: A total of 682 head CT scans were analyzed. The AI demonstrated an overall sensitivity of 88.8% and a specificity of 92.1%, with a positive predictive value of 65.4% and a negative predictive value of 98%. The AI's performance was comparable to that of junior residents in detecting ICH, with the latter showing a sensitivity of 85.7% and a high specificity of 99.3%. Interestingly, the AI detected two out of three ICH cases missed by the junior residents. When AI assistance was integrated, the combined sensitivity improved to 95.2%, and the overall accuracy reached 98.8%. Conclusions: This study shows better performance from AI and radiology residents working together than from either alone. These results are encouraging for rethinking the radiological workflow and the future triage of the large population of brain-traumatized patients in the emergency unit.
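
The reported figures are the four standard quantities derived from a 2 × 2 confusion matrix. The helper below recomputes them from illustrative counts; the numbers are invented for the example, not the study's actual tabulation.

```python
# Screening metrics as functions of a 2x2 confusion matrix (toy counts).
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # ICH scans the tool flags
        "specificity": tn / (tn + fp),  # negative scans correctly cleared
        "ppv": tp / (tp + fp),          # flagged scans that are true ICH
        "npv": tn / (tn + fn),          # cleared scans that are truly negative
    }

print(screening_metrics(tp=111, fp=59, fn=14, tn=498))
```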

26 pages, 1024 KiB  
Review
Changes Connected to Early Chronic Pancreatitis and Early Pancreatic Cancer in Endoscopic Ultrasonography (EUS): Clinical Implications
by Natalia Pawelec, Łukasz Durko and Ewa Małecka-Wojciesko
Cancers 2025, 17(11), 1891; https://doi.org/10.3390/cancers17111891 - 5 Jun 2025
Abstract
Chronic pancreatitis (CP) is a progressive condition that is associated with severe complications. Diagnosis of late CP is easy due to its characteristic clinical presentation and pathognomonic imaging findings, such as pancreatic calcifications. Early changes, such as lobularity and a dilated main pancreatic duct, are very subtle and challenging to detect with ultrasonography (US) or even computed tomography (CT). Data have been accumulating on the usefulness of EUS in the early diagnosis of CP. The sensitivity values for detecting early CP (ECP) by US, MRI, and EUS were 67–69%, 77–78%, and 81–84%, respectively. The specificity values for detecting ECP by US, MRI, and EUS were 90–98%, 83–96%, and 90–100%, respectively. Pancreatic cancer (PDAC) is one of the leading cancers worldwide, with increasing morbidity. Due to its poor prognosis and survival, early diagnosis is crucial. For this indication, EUS also shows better outcomes compared to other imaging methods, especially in tumors < 2 cm: the reported sensitivity and specificity for diagnosing PDAC were 52.3–93% and 77.1–89% with MRI, versus 72–100% and 90% with EUS. In addition, EUS can detect precancerous conditions that are associated with a higher risk of PDAC. New EUS-assisted techniques, like elastography and contrast enhancement, facilitate the diagnosis of pancreatic lesions and make it even more accurate. Early PDAC changes, such as main pancreatic duct dilatation or irregular margins of pancreatic solid masses, may be detected with EUS. This review describes the efficacy of different imaging techniques in the early detection of CP and PDAC. In addition, we describe the useful interventions made possible by early diagnosis of PDAC and CP.
(This article belongs to the Collection Targeting Solid Tumors)

18 pages, 3798 KiB  
Article
Assessment of the Diagnostic Accuracy of Artificial Intelligence Software in Identifying Common Periodontal and Restorative Dental Conditions (Marginal Bone Loss, Periapical Lesion, Crown, Restoration, Dental Caries) in Intraoral Periapical Radiographs
by Wael I. Ibraheem, Saurabh Jain, Mohammed Naji Ayoub, Mohammed Ahmed Namazi, Amjad Ismail Alfaqih, Aparna Aggarwal, Abdullah A. Meshni, Ammar Almarghlani and Abdulkareem Abdullah Alhumaidan
Diagnostics 2025, 15(11), 1432; https://doi.org/10.3390/diagnostics15111432 - 4 Jun 2025
Abstract
Objectives: The purpose of the study is to evaluate the diagnostic accuracy of artificial intelligence (AI) software in detecting a common set of periodontal and restorative conditions, including marginal bone loss, dental caries, periapical lesions, calculus, endodontic treatment, crowns, restorations, and open crown margins, using intraoral periapical radiographs. Additionally, the study assessed how this AI software influences the diagnostic accuracy of dentists with varying levels of experience in identifying these conditions. Methods: A total of three hundred digital IOPARs representing 1030 teeth were selected based on predetermined selection criteria. The parameters assessed included (a) calculus, (b) periapical radiolucency, (c) caries, (d) marginal bone loss, (e) type of restorative (filling) material, (f) type of crown retainer material, and (g) detection of open crown margins. Two oral radiologists performed the initial diagnosis of the selected radiographs and independently labeled all the predefined parameters for the provided IOPARs under standardized conditions. These data served as the reference standard. A pre-trained AI-based computer-aided detection ("CADe") software (Second Opinion®, version 1.1) was used for the detection of the predefined features. The reports generated by the AI software were compared with the reference data to evaluate its diagnostic accuracy. In the second phase of the study, thirty dental interns and thirty dental specialists were randomly selected. Each participant was randomly assigned five IOPARs and was asked to detect and diagnose the predefined conditions. Subsequently, all the participants were requested to reassess the IOPARs, this time with the assistance of the AI software. All the data were recorded using a self-designed proforma. Results: The sensitivity of the AI software in detecting caries, periapical lesions, crowns, open crown margins, restorations, endodontic treatment, calculus, and marginal bone loss was 91.0%, 86.6%, 97.1%, 82.6%, 89.3%, 93.4%, 80.2%, and 91.1%, respectively. The specificity of the AI software in detecting the same conditions was 87%, 98.3%, 99.6%, 91.9%, 96.4%, 99.3%, 97.8%, and 93.1%, respectively. The differences between the AI software and radiologist diagnoses of caries, periapical lesions, crowns, open crown margins, restorations, endodontic treatment, calculus, and marginal bone loss were statistically significant (all p values < 0.0001). Operators (interns and specialists) using the AI software showed higher accuracy, sensitivity, and specificity in detecting caries, PA lesions, restorations, endodontic treatment, calculus, and marginal bone loss than without it, with variations in the improvements between interns and dental specialists. Conclusions: Within the limitations of the study, it can be concluded that the tested AI software has high accuracy in detecting the tested dental conditions in IOPARs. The use of AI software enhanced the diagnostic capabilities of dental operators. The present study used AI software to detect a clinically useful set of periodontal and restorative conditions, which can help dental operators reach a fast and accurate diagnosis and provide high-quality treatment to their patients.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

16 pages, 1343 KiB  
Review
The Integration of Cone Beam Computed Tomography, Artificial Intelligence, Augmented Reality, and Virtual Reality in Dental Diagnostics, Surgical Planning, and Education: A Narrative Review
by Aida Meto and Gerta Halilaj
Appl. Sci. 2025, 15(11), 6308; https://doi.org/10.3390/app15116308 - 4 Jun 2025
Abstract
(1) Background: Advancements in dental imaging technologies have significantly transformed diagnostic and surgical practices. The integration of cone beam computed tomography (CBCT), artificial intelligence (AI), augmented reality (AR), and virtual reality (VR) is enhancing clinical precision, streamlining workflows, and redefining dental education. This review examines the evolution, applications, and future potential of these technologies in modern dental practice. (2) Methods: A narrative literature review was conducted, synthesizing findings from recent studies on digital radiography, CBCT, AI-assisted diagnostics, 3D imaging, and immersive simulation tools (AR/VR). Peer-reviewed journal articles, systematic reviews, and clinical studies were analyzed to explore their impact on diagnosis, treatment planning, surgical execution, and training. (3) Results: Digital and 3D imaging modalities have improved diagnostic accuracy and reduced radiation exposure. AI applications enhance image interpretation, automate clinical tasks, and support treatment simulations. AR and VR technologies provide immersive, competency-based surgical training and real-time intraoperative guidance. Integrating 3D printing and portable imaging expands accessibility and personalization in care delivery. (4) Conclusions: The integration of CBCT, AI, AR, and VR represents a paradigm shift in dentistry, elevating precision, efficiency, and patient outcomes. Continued research, standardization, and ethical practice will be essential for widespread adoption and maximizing clinical benefits.
(This article belongs to the Special Issue Advanced Technologies in Oral Surgery)
