Search Results (124)

Search Parameters:
Keywords = medical image analysis and medical decision-making

40 pages, 3463 KiB  
Review
Machine Learning-Powered Smart Healthcare Systems in the Era of Big Data: Applications, Diagnostic Insights, Challenges, and Ethical Implications
by Sita Rani, Raman Kumar, B. S. Panda, Rajender Kumar, Nafaa Farhan Muften, Mayada Ahmed Abass and Jasmina Lozanović
Diagnostics 2025, 15(15), 1914; https://doi.org/10.3390/diagnostics15151914 - 30 Jul 2025
Abstract
Healthcare data is growing rapidly, and patients increasingly seek customized, effective healthcare services. Smart healthcare systems enabled by big data and machine learning (ML) hold revolutionary potential. Unlike previous reviews that address AI or big data separately, this work synthesizes their convergence through real-world case studies, cross-domain ML applications, and a critical discussion of ethical integration in smart diagnostics. The review focuses on the role of big data analysis and ML in improving diagnosis, operational efficiency, and individualized patient care. It explores the principal challenges of data heterogeneity, privacy, and computational complexity, as well as advanced methods such as federated learning (FL) and edge computing. Real-world applications, such as disease prediction, medical imaging, drug discovery, and remote monitoring, illustrate how ML methods such as deep learning (DL) and natural language processing (NLP) enhance clinical decision-making. A comparison of ML models highlights their value in dealing with large and heterogeneous healthcare datasets. In addition, the use of nascent technologies such as wearables and the Internet of Medical Things (IoMT) is examined for their role in supporting real-time, data-driven delivery of healthcare. The paper emphasizes the pragmatic application of intelligent systems by highlighting case studies that report up to 95% diagnostic accuracy alongside cost savings. The review ends with future directions for developing scalable, ethical, and interpretable AI-powered healthcare systems. It bridges the gap between ML algorithms and smart diagnostics, offering critical perspectives for clinicians, data scientists, and policymakers.
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
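The abstract mentions federated learning (FL) as a way to train models across institutions without centralizing patient data. As an illustrative sketch only (not the authors' implementation), the core of the canonical FedAvg aggregation step can be written in a few lines of NumPy; the toy models and client sizes below are hypothetical:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding a one-layer model (weight matrix + bias).
w_a = [np.array([[1.0, 2.0]]), np.array([0.0])]
w_b = [np.array([[3.0, 4.0]]), np.array([1.0])]
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # weighted toward the larger client: [[2.5 3.5]]
```

Only the aggregated weights leave each site; the raw patient data never does, which is the privacy property the review highlights.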

26 pages, 14606 KiB  
Review
Attribution-Based Explainability in Medical Imaging: A Critical Review on Explainable Computer Vision (X-CV) Techniques and Their Applications in Medical AI
by Kazi Nabiul Alam, Pooneh Bagheri Zadeh and Akbar Sheikh-Akbari
Electronics 2025, 14(15), 3024; https://doi.org/10.3390/electronics14153024 - 29 Jul 2025
Abstract
One of the largest future applications of computer vision is in the healthcare industry. Computer vision tasks are implemented in diverse medical imaging scenarios, including detecting or classifying diseases, predicting potential disease progression, analyzing cancer data to advance future research, and conducting genetic analysis for personalized medicine. However, a critical drawback of Computer Vision (CV) approaches is their limited reliability and transparency. Clinicians and patients must comprehend the rationale behind predictions or results to ensure trust and ethical deployment in clinical settings. This motivates the adoption of Explainable Computer Vision (X-CV), which enhances the interpretability of vision models. Among various methodologies, attribution-based approaches are widely employed by researchers to explain medical imaging outputs by identifying influential features. This article aims to explore how attribution-based X-CV methods work in medical imaging, what they are good for in real-world use, and what their main limitations are. This study evaluates X-CV techniques by conducting a thorough review of relevant reports, peer-reviewed journals, and methodological approaches to obtain an adequate understanding of attribution-based approaches. It explores how these techniques tackle computational complexity issues, improve diagnostic accuracy, and aid clinical decision-making processes. This article intends to present a path toward generalizing trustworthiness in AI-based healthcare solutions.
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
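Attribution-based explainability assigns each input feature a share of the model's output. As a hedged illustration of the family of methods the review surveys (this specific example is not taken from the article), integrated gradients averages the model's gradients along a straight path from a baseline to the input; on a linear toy model the attributions can be checked by hand:

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Attribution = (x - baseline) * average gradient along the baseline->x path."""
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.array([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

# Toy "model": f(x) = 3*x0 + 5*x1, whose gradient is constant.
grad_fn = lambda x: np.array([3.0, 5.0])
attr = integrated_gradients(grad_fn, x=np.array([1.0, 1.0]), baseline=np.zeros(2))
print(attr)  # [3. 5.] -- attributions sum to f(x) - f(baseline)
```

For a real imaging model, `grad_fn` would backpropagate through the network, and the per-pixel attributions would be rendered as a saliency map.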

35 pages, 7934 KiB  
Article
Analyzing Diagnostic Reasoning of Vision–Language Models via Zero-Shot Chain-of-Thought Prompting in Medical Visual Question Answering
by Fatema Tuj Johora Faria, Laith H. Baniata, Ahyoung Choi and Sangwoo Kang
Mathematics 2025, 13(14), 2322; https://doi.org/10.3390/math13142322 - 21 Jul 2025
Abstract
Medical Visual Question Answering (MedVQA) lies at the intersection of computer vision, natural language processing, and clinical decision-making, aiming to generate accurate responses from medical images paired with complex inquiries. Despite recent advances in vision–language models (VLMs), their use in healthcare remains limited by a lack of interpretability and a tendency to produce direct, unexplainable outputs. This opacity undermines their reliability in medical settings, where transparency and justification are critically important. To address this limitation, we propose a zero-shot chain-of-thought prompting framework that guides VLMs to perform multi-step reasoning before arriving at an answer. By encouraging the model to break down the problem, analyze both visual and contextual cues, and construct a stepwise explanation, the approach makes the reasoning process explicit and clinically meaningful. We evaluate the framework on the PMC-VQA benchmark, which includes authentic radiological images and expert-level prompts. In a comparative analysis of three leading VLMs, Gemini 2.5 Pro achieved the highest accuracy (72.48%), followed by Claude 3.5 Sonnet (69.00%) and GPT-4o Mini (67.33%). The results demonstrate that chain-of-thought prompting significantly improves both reasoning transparency and performance in MedVQA tasks.
(This article belongs to the Special Issue Mathematical Foundations in NLP: Applications and Challenges)
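Zero-shot chain-of-thought prompting works by prepending a reasoning trigger so the model explains before it answers. A minimal sketch of how such a MedVQA prompt might be assembled (the exact wording the paper uses is not reproduced here; this phrasing is an assumption):

```python
def build_cot_prompt(question: str, options: list[str]) -> str:
    """Assemble a zero-shot chain-of-thought prompt for a multiple-choice VQA item."""
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return (
        "You are assisting with a medical visual question answering task.\n"
        f"Question: {question}\n{opts}\n"
        "Let's think step by step: first describe the relevant visual findings, "
        "then relate them to each option, and finally state the answer letter."
    )

prompt = build_cot_prompt("What imaging modality is shown?", ["CT", "MRI", "Ultrasound"])
print(prompt)
```

The prompt text would then be sent to the VLM together with the image; the "Let's think step by step" trigger is what elicits the stepwise explanation the authors evaluate.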

19 pages, 3923 KiB  
Article
Automated Aneurysm Boundary Detection and Volume Estimation Using Deep Learning
by Alireza Bagheri Rajeoni, Breanna Pederson, Susan M. Lessner and Homayoun Valafar
Diagnostics 2025, 15(14), 1804; https://doi.org/10.3390/diagnostics15141804 - 17 Jul 2025
Abstract
Background/Objective: Precise aneurysm volume measurement offers a transformative edge for risk assessment and treatment planning in clinical settings. Currently, clinical assessments rely heavily on manual review of medical imaging, a process that is time-consuming and prone to inter-observer variability. The widely accepted standard of care primarily focuses on measuring aneurysm diameter at its widest point, providing a limited perspective on aneurysm morphology and lacking efficient methods to measure aneurysm volumes. Yet, volume measurement can offer deeper insight into aneurysm progression and severity. In this study, we propose an automated approach that leverages the strengths of pre-trained neural networks and expert systems to delineate aneurysm boundaries and compute volumes on an unannotated dataset from 60 patients. The dataset includes slice-level start/end annotations for aneurysms but no pixel-wise aorta segmentations. Method: Our method utilizes a pre-trained UNet to automatically locate the aorta, employs SAM2 to track the aorta through vascular irregularities such as aneurysms down to the iliac bifurcation, and finally uses a Long Short-Term Memory (LSTM) network or expert system to identify the beginning and end points of the aneurysm within the aorta. Results: Despite no manual aorta segmentation, our approach achieves promising accuracy, predicting the aneurysm start point with an R² score of 71%, the end point with an R² score of 76%, and the volume with an R² score of 92%. Conclusions: This technique has the potential to facilitate large-scale aneurysm analysis and improve clinical decision-making by reducing dependence on annotated datasets.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
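Once per-slice segmentation masks and the aneurysm's start/end slices are available, volume estimation reduces to summing segmented cross-sectional areas times slice thickness. A minimal sketch of that final step under assumed isotropic pixel spacing (hypothetical data and names, not the authors' pipeline):

```python
import numpy as np

def volume_from_masks(masks, pixel_spacing_mm, slice_thickness_mm, start, end):
    """Sum segmented cross-sectional areas over the aneurysm slice range [start, end)."""
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]   # mm^2 per pixel
    areas = masks[start:end].sum(axis=(1, 2)) * pixel_area   # segmented mm^2 per slice
    return areas.sum() * slice_thickness_mm / 1000.0         # mm^3 -> cm^3 (mL)

# Hypothetical stack: 10 slices of 4x4, with a 2x2 region segmented on slices 3..6.
masks = np.zeros((10, 4, 4))
masks[3:7, 1:3, 1:3] = 1
vol = volume_from_masks(masks, pixel_spacing_mm=(1.0, 1.0),
                        slice_thickness_mm=2.5, start=3, end=7)
print(vol)  # 4 slices * 4 mm^2 * 2.5 mm = 40 mm^3 = 0.04 mL
```

In the paper's setting, `masks` would come from the UNet/SAM2 tracking and `start`/`end` from the LSTM or expert system.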

14 pages, 1106 KiB  
Article
Metastatic Melanoma Prognosis Prediction Using a TC Radiomic-Based Machine Learning Model: A Preliminary Study
by Antonino Guerrisi, Maria Teresa Maccallini, Italia Falcone, Alessandro Valenti, Ludovica Miseo, Sara Ungania, Vincenzo Dolcetti, Fabio Valenti, Marianna Cerro, Flora Desiderio, Fabio Calabrò, Virginia Ferraresi and Michelangelo Russillo
Cancers 2025, 17(14), 2304; https://doi.org/10.3390/cancers17142304 - 10 Jul 2025
Abstract
Background/Objective: The approach to the clinical management of metastatic melanoma patients is undergoing a significant transformation. The availability of large amounts of data from medical images has made Artificial Intelligence (AI) applications an innovative and cutting-edge solution that could revolutionize the surveillance and management of these patients. In this study, we develop and validate a machine-learning model based on radiomic data extracted from computed tomography (CT) analysis of patients with metastatic melanoma (MM). This approach was designed to accurately predict prognosis and identify potential key factors associated with it. Methods: To achieve this goal, we used radiomic pipelines to extract quantitative features related to lesion texture, morphology, and intensity from high-quality CT images. We retrospectively collected a cohort of 58 patients with metastatic melanoma, from which a total of 60 CT series were used for model training, and 70 independent CT series were employed for external testing. Model performance was evaluated using metrics such as sensitivity, specificity, and AUC (area under the curve), demonstrating particularly favorable results compared to traditional methods. Results: The model achieved an ROC-AUC of 82% on the internal test and, in combination with AI, showed good predictive ability regarding lesion outcome. Conclusions: Although the cohort size was limited and the data were collected retrospectively from a single institution, the findings provide a promising basis for further validation in larger and more diverse patient populations. This approach could directly support clinical decision-making by providing accurate and personalized prognostic information.
(This article belongs to the Special Issue Radiomics and Imaging in Cancer Analysis)

22 pages, 4079 KiB  
Article
Breast Cancer Classification with Various Optimized Deep Learning Methods
by Mustafa Güler, Gamze Sart, Ömer Algorabi, Ayse Nur Adıguzel Tuylu and Yusuf Sait Türkan
Diagnostics 2025, 15(14), 1751; https://doi.org/10.3390/diagnostics15141751 - 10 Jul 2025
Abstract
Background/Objectives: In recent years, there has been a significant increase in the number of women with breast cancer. Breast cancer prediction is defined as a medical data analysis and image processing problem. Experts may need artificial intelligence technologies to distinguish between benign and malignant tumors in order to make decisions. When the studies in the literature are examined, it can be seen that applications of deep learning algorithms in the field of medicine have achieved very successful results. Methods: In this study, 11 different deep learning algorithms (Vanilla, ResNet50, ResNet152, VGG16, DenseNet152, MobileNetV2, EfficientNetB1, NASNet, DenseNet201, ensemble, and Tuned Model) were used. Images of pathological specimens from breast biopsies consisting of two classes, benign and malignant, were used for classification analysis. To limit the computational time and speed up the analysis process, 10,000 images (6172 IDC-negative and 3828 IDC-positive) were selected. Of the images, 80% were used for training, 10% for validation, and 10% for testing the trained model. Results: The results demonstrate that DenseNet201 achieved the highest classification accuracy of 89.4%, with a precision of 88.2%, a recall of 84.1%, an F1 score of 86.1%, and an AUC score of 95.8%. Conclusions: This study highlights the potential of deep learning algorithms in breast cancer classification. Future research should focus on integrating multi-modal imaging data, refining ensemble learning methodologies, and expanding dataset diversity to further improve classification accuracy and real-world clinical applicability.
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)
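The reported precision, recall, and F1 score follow directly from confusion-matrix counts on the held-out test split. A small self-contained sketch of how these metrics are computed from hard binary predictions (toy labels, not the paper's data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 from hard binary predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0])
print(p, r, f1)  # each 2/3 for this toy example
```

AUC, by contrast, is computed from the model's continuous scores rather than hard labels, which is why it can exceed the accuracy figures.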

34 pages, 947 KiB  
Review
Multimodal Artificial Intelligence in Medical Diagnostics
by Bassem Jandoubi and Moulay A. Akhloufi
Information 2025, 16(7), 591; https://doi.org/10.3390/info16070591 - 9 Jul 2025
Abstract
The integration of artificial intelligence into healthcare has advanced rapidly in recent years, with multimodal approaches emerging as promising tools for improving diagnostic accuracy and clinical decision making. These approaches combine heterogeneous data sources such as medical images, electronic health records, physiological signals, and clinical notes to better capture the complexity of disease processes. Despite this progress, only a limited number of studies offer a unified view of multimodal AI applications in medicine. In this review, we provide a comprehensive and up-to-date analysis of machine learning- and deep learning-based multimodal architectures, fusion strategies, and their performance across a range of diagnostic tasks. We begin by summarizing publicly available datasets and examining the preprocessing pipelines required for harmonizing heterogeneous medical data. We then categorize key fusion strategies used to integrate information from multiple modalities and overview representative model architectures, from hybrid designs and transformer-based vision-language models to optimization-driven and EHR-centric frameworks. Finally, we highlight the challenges present in existing works. Our analysis shows that multimodal approaches tend to outperform unimodal systems in diagnostic performance, robustness, and generalization. This review provides a unified view of the field and opens up future research directions aimed at building clinically usable, interpretable, and scalable multimodal diagnostic systems.
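Of the fusion strategies such reviews categorize, early (feature-level) fusion is the simplest: per-modality feature vectors are normalized and concatenated before a joint classifier sees them. A minimal sketch with hypothetical feature dimensions (late fusion would instead average per-modality predictions):

```python
import numpy as np

def early_fuse(image_feat, ehr_feat, text_feat):
    """Early fusion: z-normalize each modality's features, then concatenate."""
    def znorm(v):
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.concatenate([znorm(image_feat), znorm(ehr_feat), znorm(text_feat)])

# Hypothetical per-modality embeddings: image (512), EHR (32), clinical text (768).
fused = early_fuse(np.random.rand(512), np.random.rand(32), np.random.rand(768))
print(fused.shape)  # (1312,)
```

The per-modality normalization matters because raw feature scales differ wildly across imaging, tabular EHR, and text encoders; without it, one modality can dominate the joint representation.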

32 pages, 1126 KiB  
Review
Exploring the Role of Artificial Intelligence in Smart Healthcare: A Capability and Function-Oriented Review
by Syed Raza Abbas, Huiseung Seol, Zeeshan Abbas and Seung Won Lee
Healthcare 2025, 13(14), 1642; https://doi.org/10.3390/healthcare13141642 - 8 Jul 2025
Abstract
Artificial Intelligence (AI) is transforming smart healthcare by enhancing diagnostic precision, automating clinical workflows, and enabling personalized treatment strategies. This review explores the current landscape of AI in healthcare from two key perspectives: capability types (e.g., Narrow AI and AGI) and functional architectures (e.g., Limited Memory and Theory of Mind). Based on capabilities, most AI systems today are categorized as Narrow AI, performing specific tasks such as medical image analysis and risk prediction with high accuracy. More advanced forms like Artificial General Intelligence (AGI) and Superintelligent AI remain theoretical but hold transformative potential. From a functional standpoint, Limited Memory AI dominates clinical applications by learning from historical patient data to inform decision-making. Reactive systems are used in rule-based alerts, while Theory of Mind (ToM) and Self-Aware AI remain conceptual stages for future development. This dual perspective provides a comprehensive framework to assess the maturity, impact, and future direction of AI in healthcare. It also highlights the need for ethical design, transparency, and regulation as AI systems grow more complex and autonomous, drawing on cross-domain AI insights. Moreover, we evaluate the viability of developing AGI within regionally specific legal and regulatory frameworks, using South Korea as a case study to emphasize the limitations imposed by infrastructural preparedness and medical data governance regulations.
(This article belongs to the Special Issue The Role of AI in Predictive and Prescriptive Healthcare)

24 pages, 974 KiB  
Review
Artificial Intelligence in Primary Malignant Bone Tumor Imaging: A Narrative Review
by Platon S. Papageorgiou, Rafail Christodoulou, Panagiotis Korfiatis, Dimitra P. Papagelopoulos, Olympia Papakonstantinou, Nancy Pham, Amanda Woodward and Panayiotis J. Papagelopoulos
Diagnostics 2025, 15(13), 1714; https://doi.org/10.3390/diagnostics15131714 - 4 Jul 2025
Abstract
Artificial Intelligence (AI) has emerged as a transformative force in orthopedic oncology, offering significant advances in the diagnosis, classification, and prediction of treatment response for primary malignant bone tumors (PBTs). Through machine learning and deep learning techniques, AI leverages computational algorithms and large datasets to enhance medical imaging interpretation and support clinical decision-making. The integration of radiomics with AI enables the extraction of quantitative features from medical images, allowing for precise tumor characterization and the development of personalized therapeutic strategies. Notably, convolutional neural networks have demonstrated exceptional capabilities in pattern recognition, significantly improving tumor detection, segmentation, and differentiation. This narrative review synthesizes the evolving applications of AI in PBTs, focusing on early tumor detection, imaging analysis, therapy response prediction, and histological classification. AI-driven radiomics and predictive models have yielded promising results in assessing chemotherapy efficacy, optimizing preoperative imaging, and predicting treatment outcomes, thereby advancing the field of precision medicine. Innovative segmentation techniques and multimodal imaging models have further enhanced healthcare efficiency by reducing physician workload and improving diagnostic accuracy. Despite these advancements, challenges remain. The rarity of PBTs limits the availability of robust, high-quality datasets for model development and validation, while the lack of standardized imaging protocols complicates reproducibility. Ethical considerations, including data privacy and the interpretability of complex AI algorithms, also warrant careful attention. Future research should prioritize multicenter collaborations, external validation of AI models, and the integration of explainable AI systems into clinical practice. Addressing these challenges will unlock AI's full potential to revolutionize PBT management, ultimately improving patient outcomes and advancing personalized care.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

27 pages, 2478 KiB  
Article
Early Diabetic Retinopathy Detection from OCT Images Using Multifractal Analysis and Multi-Layer Perceptron Classification
by Ahlem Aziz, Necmi Serkan Tezel, Seydi Kaçmaz and Youcef Attallah
Diagnostics 2025, 15(13), 1616; https://doi.org/10.3390/diagnostics15131616 - 25 Jun 2025
Abstract
Background/Objectives: Diabetic retinopathy (DR) remains one of the primary causes of preventable vision impairment worldwide, particularly among individuals with long-standing diabetes. The progressive damage of retinal microvasculature can lead to irreversible blindness if not detected and managed at an early stage. Therefore, the development of reliable, non-invasive, and automated screening tools has become increasingly vital in modern ophthalmology. With the evolution of medical imaging technologies, Optical Coherence Tomography (OCT) has emerged as a valuable modality for capturing high-resolution cross-sectional images of retinal structures. In parallel, machine learning has shown considerable promise in supporting early disease recognition by uncovering complex and often imperceptible patterns in image data. Methods: This study introduces a novel framework for the early detection of DR through multifractal analysis of OCT images. Multifractal features, extracted using a box-counting approach, provide quantitative descriptors that reflect the structural irregularities of retinal tissue associated with pathological changes. Results: A comparative evaluation of several machine learning algorithms was conducted to assess classification performance. Among them, the Multi-Layer Perceptron (MLP) achieved the highest predictive accuracy, with a score of 98.02%, along with precision, recall, and F1-score values of 98.24%, 97.80%, and 98.01%, respectively. Conclusions: These results highlight the strength of combining OCT imaging with multifractal geometry and deep learning methods to build robust and scalable systems for DR screening. The proposed approach could contribute significantly to improving early diagnosis, clinical decision-making, and patient outcomes in diabetic eye care.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
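The box-counting approach measures how the number of boxes needed to cover a structure scales with box size; the slope of that log-log relation estimates the fractal dimension. A minimal monofractal sketch of the idea (the paper's full multifractal spectrum computation generalizes this over moment orders):

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image via box counting."""
    counts = []
    for s in sizes:
        h, w = img.shape[0] // s * s, img.shape[1] // s * s   # crop to multiples of s
        boxes = img[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())                            # occupied boxes at scale s
    # Slope of log(count) vs log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled square should have dimension 2.
img = np.ones((64, 64), dtype=bool)
print(round(box_counting_dimension(img), 2))  # 2.0
```

Retinal layer boundaries in OCT occupy a non-integer dimension between 1 and 2, and deviations in that value are the kind of structural-irregularity descriptor fed to the MLP classifier.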

24 pages, 691 KiB  
Review
Multimodal Preoperative Management of Rectal Cancer: A Review of the Existing Guidelines
by Ionut Negoi
Medicina 2025, 61(7), 1132; https://doi.org/10.3390/medicina61071132 - 24 Jun 2025
Abstract
Rectal cancer management necessitates a rigorous multidisciplinary strategy, emphasizing precise staging and detailed risk stratification to inform optimal therapeutic decision-making. Obtaining an accurate histological diagnosis before initiating treatment is essential. Comprehensive staging integrates clinical evaluation, thorough medical history analysis, assessment of carcinoembryonic antigen (CEA) levels, and computed tomography (CT) imaging of the abdomen and thorax. High-resolution pelvic magnetic resonance imaging (MRI), utilizing dedicated rectal protocols, is critical for identifying recurrence risks and delineating precise anatomical relationships. Endoscopic ultrasound further refines staging accuracy by determining the tumor infiltration depth in early-stage cancers, while preoperative colonoscopy effectively identifies synchronous colorectal lesions. In early-stage rectal cancers (T1–T2, N0, and M0), radical surgical resection remains the standard of care, although transanal local excision can be selectively indicated for certain T1N0 tumors. In contrast, locally advanced rectal cancers (T3, T4, and N+) characterized by microsatellite stability or proficient mismatch repair are optimally managed with total neoadjuvant therapy (TNT), which combines chemoradiotherapy with oxaliplatin-based systemic chemotherapy. Additionally, tumors exhibiting high microsatellite instability or mismatch repair deficiency respond favorably to immune checkpoint inhibitors (ICIs). The evaluation of tumor response following neoadjuvant therapy, utilizing MRI and endoscopic assessments, facilitates individualized treatment planning, including non-operative approaches for patients with confirmed complete clinical responses who comply with rigorous follow-up. Recent advancements in molecular characterization, targeted therapies, and immunotherapy highlight a significant evolution towards personalized medicine. The effective integration of these innovations requires enhanced interdisciplinary collaboration to improve patient prognosis and quality of life.
(This article belongs to the Special Issue Recent Advances and Future Challenges in Colorectal Surgery)

26 pages, 12177 KiB  
Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandible and maxilla teeth. In this research, a computerized system is developed to automate orthodontic evaluation for both 2D and Cone-Beam Computed Tomography (CBCT, or 3D) measurements. This work was tested on a dataset that contains images of males and females obtained from dental hospitals with patient-informed consent. The dataset consists of 2D lateral cephalometric, panorama, and CBCT radiographs. Many scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, this study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system's outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. This research evaluates and differentiates between the two-dimensional and three-dimensional tracing analyses, with measurements based on the practices of the European Board of Orthodontics. The results demonstrate the proposed methodology's robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis provides a clear understanding of the significance of integrated deep-learning techniques in orthodontics.
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
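Grad-CAM produces its heat map by weighting each convolutional feature map by the spatial average of its gradient and keeping only the positive part. A framework-agnostic sketch of that weighting step, given activations and gradients assumed to be already extracted from a (hypothetical) model:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM: weight each feature map by its mean gradient, sum, then ReLU."""
    weights = gradients.mean(axis=(1, 2))              # one importance weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                           # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam   # normalize to [0, 1]

# Hypothetical activations/gradients: 8 channels of 7x7 from a conv layer.
rng = np.random.default_rng(0)
heatmap = grad_cam(rng.random((8, 7, 7)), rng.random((8, 7, 7)))
print(heatmap.shape)  # (7, 7)
```

In practice the activations and gradients come from backpropagating the class score through the last convolutional layer, and the small map is upsampled onto the radiograph.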

34 pages, 20058 KiB  
Article
Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
by Grant Wardle and Teo Sušnjak
Big Data Cogn. Comput. 2025, 9(6), 149; https://doi.org/10.3390/bdcc9060149 - 3 Jun 2025
Abstract
Our study investigates how the sequencing of text and image inputs within multi-modal prompts affects the reasoning performance of Large Language Models (LLMs). Through empirical evaluations of three major commercial LLM vendors—OpenAI, Google, and Anthropic—alongside a user study on interaction strategies, we develop and validate practical heuristics for optimising multi-modal prompt design. Our findings reveal that modality sequencing is a critical factor influencing reasoning performance, particularly in tasks with varying cognitive load and structural complexity. For simpler tasks involving a single image, positioning the modalities directly impacts model accuracy, whereas in complex, multi-step reasoning scenarios, the sequence must align with the logical structure of inference, often outweighing the specific placement of individual modalities. Furthermore, we identify systematic challenges in multi-hop reasoning within transformer-based architectures, where models demonstrate strong early-stage inference but struggle with integrating prior contextual information in later reasoning steps. Building on these insights, we propose a set of validated, user-centred heuristics for designing effective multi-modal prompts, enhancing both reasoning accuracy and user interaction with AI systems. Our contributions inform the design and usability of interactive intelligent systems, with implications for applications in education, medical imaging, legal document analysis, and customer support. By bridging the gap between intelligent system behaviour and user interaction strategies, this study provides actionable guidance on how users can effectively structure prompts to optimise multi-modal LLM reasoning within real-world, high-stakes decision-making contexts. Full article
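The modality-sequencing variable studied above amounts to the order of content parts in a multi-modal request. A minimal sketch of the two orderings, using an OpenAI-style multi-part "content" array; the field names here are illustrative assumptions, not a specific vendor's verified schema:

```python
def build_prompt(question, image_url, image_first=True):
    """Build a single user message whose content parts are ordered
    image-then-text or text-then-image."""
    image_part = {"type": "image_url", "image_url": {"url": image_url}}
    text_part = {"type": "text", "text": question}
    parts = [image_part, text_part] if image_first else [text_part, image_part]
    return [{"role": "user", "content": parts}]

msg = build_prompt("Describe the key finding.", "https://example.com/scan.png",
                   image_first=True)
print([p["type"] for p in msg[0]["content"]])  # → ['image_url', 'text']
```

Comparing model accuracy across the two orderings, while holding the task fixed, is the kind of controlled evaluation the study describes.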

28 pages, 3279 KiB  
Review
Overdiagnosis and Overtreatment in Prostate Cancer
by Zaure Dushimova, Yerbolat Iztleuov, Gulnar Chingayeva, Abay Shepetov, Nagima Mustapayeva, Oxana Shatkovskaya, Marat Pashimov and Timur Saliev
Diseases 2025, 13(6), 167; https://doi.org/10.3390/diseases13060167 - 24 May 2025
Cited by 1 | Viewed by 1402
Abstract
Prostate cancer (PCa) is one of the most common malignancies among men worldwide. While prostate-specific antigen (PSA) screening has improved early detection, it has also led to significant challenges regarding overdiagnosis and overtreatment. Overdiagnosis involves identifying indolent tumors unlikely to affect a patient’s lifespan, while overtreatment refers to unnecessary interventions that can cause adverse effects such as urinary incontinence, erectile dysfunction, and a reduced quality of life. This review highlights contributing factors, including the limitations of PSA testing, advanced imaging techniques like multi-parametric MRI (mpMRI), medical culture, and patient expectations. The analysis emphasizes the need for refining screening protocols, integrating novel biomarkers (e.g., PCA3, TMPRSS2-ERG), and adopting conservative management strategies such as active surveillance to minimize harm. Risk-based screening and shared decision-making are critical to balancing the benefits of early detection with the risks of unnecessary treatment. Additionally, systemic healthcare factors like financial incentives and malpractice concerns exacerbate overuse. This review advocates for updated clinical guidelines and personalized approaches to optimizing patient outcomes while reducing the strain on healthcare resources. Addressing overdiagnosis and overtreatment through targeted interventions will improve the quality of life for PCa patients and enhance the efficiency of healthcare systems. Full article

18 pages, 6136 KiB  
Article
Parallel VMamba and Attention-Based Pneumonia Severity Prediction from CXRs: A Robust Model with Segmented Lung Replacement Augmentation
by Bouthaina Slika, Fadi Dornaika and Karim Hammoudi
Diagnostics 2025, 15(11), 1301; https://doi.org/10.3390/diagnostics15111301 - 22 May 2025
Viewed by 682
Abstract
Background/Objectives: Rapid and accurate assessment of lung diseases, like pneumonia, is critical for effective clinical decision-making, particularly during pandemics when disease progression can be severe. Early diagnosis plays a crucial role in preventing complications, necessitating the development of fast and efficient AI-based models for automated severity assessment. Methods: In this study, we introduce a novel approach that leverages VMamba, a state-of-the-art vision model based on the VisualStateSpace (VSS) framework and 2D-Selective-Scan (SS2D) spatial scanning, to enhance lung severity prediction. Integrated into a parallel multi-image-region approach, VMamba effectively captures global and local contextual features through structured state-space modeling, improving feature representation and robustness in medical image analysis. Additionally, we integrate a segmented lung replacement augmentation strategy to enhance data diversity and improve model generalization. The proposed method is trained on the RALO and COVID-19 datasets and compared against state-of-the-art models. Results: Experimental results demonstrate that our approach achieves superior performance, outperforming existing techniques in prediction accuracy and robustness. Key evaluation metrics, including Mean Absolute Error (MAE) and Pearson Correlation (PC), confirm the model's effectiveness, while the incorporation of segmented lung replacement augmentation further enhances adaptability to diverse lung conditions. Conclusions: These findings highlight the potential of our method for reliable and immediate clinical applications in lung infection assessment. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
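The two evaluation metrics named in this abstract, MAE and Pearson correlation, are straightforward to compute. A short NumPy sketch with made-up toy severity scores (the values below are illustrative, not from the RALO or COVID-19 datasets):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of prediction errors."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def pearson_corr(y_true, y_pred):
    """Pearson correlation: linear agreement between scores, in [-1, 1]."""
    return float(np.corrcoef(y_true, y_pred)[0, 1])

# Toy severity scores; each prediction is off by 0.5.
truth = [2.0, 4.0, 6.0, 8.0]
pred = [2.5, 3.5, 6.5, 7.5]
print(mae(truth, pred))  # → 0.5
```

MAE reports absolute error in the units of the severity scale, while PC captures how well the ranking of severities is preserved; reporting both, as the paper does, guards against a model that is well-correlated but systematically biased.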
