Search Results (256)

Search Parameters: Keywords = AI in ultrasound

11 pages, 2637 KB  
Article
AI Enhances Lung Ultrasound Interpretation Across Clinicians with Varying Expertise Levels
by Seyed Ehsan Seyed Bolouri, Masood Dehghan, Mahdiar Nekoui, Brian Buchanan, Jacob L. Jaremko, Dornoosh Zonoobi, Arun Nagdev and Jeevesh Kapur
Diagnostics 2025, 15(17), 2145; https://doi.org/10.3390/diagnostics15172145 - 25 Aug 2025
Abstract
Background/Objective: Lung ultrasound (LUS) is a valuable tool for detecting pulmonary conditions, but its accuracy depends on user expertise. This study evaluated whether an artificial intelligence (AI) tool could improve clinician performance in detecting pleural effusion and consolidation/atelectasis on LUS scans. Methods: In this multi-reader, multi-case study, 14 clinicians of varying experience reviewed 374 retrospectively selected LUS scans (cine clips from the PLAPS point, obtained using three different probes) from 359 patients across six centers in the U.S. and Canada. In phase one, readers scored the likelihood (0–100) of pleural effusion and consolidation/atelectasis without AI. After a 4-week washout, they re-evaluated all scans with AI-generated bounding boxes. Performance metrics included area under the curve (AUC), sensitivity, specificity, and Fleiss’ Kappa. Subgroup analyses examined effects by reader experience. Results: For pleural effusion, AUC improved from 0.917 to 0.960, sensitivity from 77.3% to 89.1%, and specificity from 91.7% to 92.9%. Fleiss’ Kappa increased from 0.612 to 0.774. For consolidation/atelectasis, AUC rose from 0.870 to 0.941, sensitivity from 70.7% to 89.2%, and specificity from 85.8% to 89.5%. Kappa improved from 0.427 to 0.756. Conclusions: AI assistance enhanced clinician detection of pleural effusion and consolidation/atelectasis in LUS scans, particularly benefiting less experienced users. Full article
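Fleiss' Kappa, used in this study to quantify agreement among the 14 readers, is computed from a cases-by-raters table of categorical ratings. Below is a minimal illustrative sketch with made-up ratings (not the study's data), assuming statsmodels is available:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical example: 6 LUS scans rated by 5 readers as
# 0 = no pleural effusion, 1 = pleural effusion present.
# Rows are cases, columns are readers (toy data, not the study's).
ratings = np.array([
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
])

# aggregate_raters converts per-reader labels into per-case category counts.
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(counts):.3f}")
```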

13 pages, 629 KB  
Article
Estrus Detection and Optimal Insemination Timing in Holstein Cattle Using a Neck-Mounted Accelerometer Sensor System
by Jacobo Álvarez, Antía Acción, Elio López, Carlota Antelo, Renato Barrionuevo, Juan José Becerra, Ana Isabel Peña, Pedro García Herradón, Luis Ángel Quintela and Uxía Yáñez
Sensors 2025, 25(17), 5245; https://doi.org/10.3390/s25175245 - 23 Aug 2025
Viewed by 147
Abstract
This study aimed to evaluate the accuracy of the accelerometer-equipped collar RUMI to detect estrus in dairy cows, establish a recommendation for the optimal timing for artificial insemination (AI) when using this device, and characterize the blood flow of the dominant follicle (F) and the corpus luteum (CL) as ovulation approaches. Forty-seven cycling cows were monitored following synchronization with a modified G6G protocol, allowing for spontaneous ovulation. Ultrasound examinations were conducted every 12 h, starting 48 h after the second PGF2α dose, to monitor uterine and ovarian changes. Blood samples were also collected to determine serum progesterone (P4) levels. Each cow was fitted with a RUMI collar, which continuously monitored behavioral changes to identify the onset, offset, and peak of activity of estrus. One-way ANOVA assessed the relationship between physiological parameters and time before ovulation. Results showed that the RUMI collar demonstrated high specificity (100%), sensitivity (90.90%), and accuracy (93.62%) for estrus detection. The optimal AI window was identified as between 11.4 and 15.5 h after heat onset. Increased blood flow to the F and reduced luteal activity were observed in the 48 h prior to ovulation. Further research is needed to assess the influence of this AI window on conception rates, and if it should be modified considering external factors. Full article
(This article belongs to the Section Intelligent Sensors)
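The reported specificity, sensitivity, and accuracy follow directly from a 2x2 confusion matrix of collar alerts against observed estrus. A toy worked example of the arithmetic (hypothetical counts, not the study's data):

```python
# Hypothetical confusion-matrix counts (not the study's raw data):
# collar alert vs. confirmed estrus event.
tp, fn = 20, 2    # estrus events detected / missed by the collar
tn, fp = 24, 0    # non-estrus periods correctly / incorrectly flagged

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"sensitivity={sensitivity:.2%}, "
      f"specificity={specificity:.2%}, accuracy={accuracy:.2%}")
```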

15 pages, 622 KB  
Review
Artificial Intelligence in the Diagnosis and Imaging-Based Assessment of Pelvic Organ Prolapse: A Scoping Review
by Marian Botoncea, Călin Molnar, Vlad Olimpiu Butiurca, Cosmin Lucian Nicolescu and Claudiu Molnar-Varlam
Medicina 2025, 61(8), 1497; https://doi.org/10.3390/medicina61081497 - 21 Aug 2025
Viewed by 172
Abstract
Background and Objectives: Pelvic organ prolapse (POP) is a complex condition affecting the pelvic floor, often requiring imaging for accurate diagnosis and treatment planning. Artificial intelligence (AI), particularly deep learning (DL), is emerging as a powerful tool in medical imaging. This scoping review aims to synthesize current evidence on the use of AI in the imaging-based diagnosis and anatomical evaluation of POP. Materials and Methods: Following the PRISMA-ScR guidelines, a comprehensive search was conducted in PubMed, Scopus, and Web of Science for studies published between January 2020 and April 2025. Studies were included if they applied AI methodologies, such as convolutional neural networks (CNNs), vision transformers (ViTs), or hybrid models, to diagnostic imaging modalities such as ultrasound and magnetic resonance imaging (MRI) to women with POP. Results: Eight studies met the inclusion criteria. In these studies, AI technologies were applied to 2D/3D ultrasound and static or stress MRI for segmentation, anatomical landmark localization, and prolapse classification. CNNs were the most commonly used models, often combined with transfer learning. Some studies used hybrid models of ViTs, demonstrating high diagnostic accuracy. However, all studies relied on internal datasets, with limited model interpretability and no external validation. Moreover, clinical deployment and outcome assessments remain underexplored. Conclusions: AI shows promise in enhancing POP diagnosis through improved image analysis, but current applications are largely exploratory. Future work should prioritize external validation, standardization, explainable AI, and real-world implementation to bridge the gap between experimental models and clinical utility. Full article
(This article belongs to the Section Obstetrics and Gynecology)

70 pages, 4767 KB  
Review
Advancements in Breast Cancer Detection: A Review of Global Trends, Risk Factors, Imaging Modalities, Machine Learning, and Deep Learning Approaches
by Md. Atiqur Rahman, M. Saddam Hossain Khan, Yutaka Watanobe, Jarin Tasnim Prioty, Tasfia Tahsin Annita, Samura Rahman, Md. Shakil Hossain, Saddit Ahmed Aitijjo, Rafsun Islam Taskin, Victor Dhrubo, Abubokor Hanip and Touhid Bhuiyan
BioMedInformatics 2025, 5(3), 46; https://doi.org/10.3390/biomedinformatics5030046 - 20 Aug 2025
Viewed by 632
Abstract
Breast cancer remains a critical global health challenge, with over 2.1 million new cases annually. This review systematically evaluates recent advancements (2022–2024) in machine and deep learning approaches for breast cancer detection and risk management. Our analysis demonstrates that deep learning models achieve 90–99% accuracy across imaging modalities, with convolutional neural networks showing particular promise in mammography (99.96% accuracy) and ultrasound (100% accuracy) applications. Tabular data models using XGBoost achieve comparable performance (99.12% accuracy) for risk prediction. The study confirms that lifestyle modifications (dietary changes, BMI management, and alcohol reduction) significantly mitigate breast cancer risk. Key findings include the following: (1) hybrid models combining imaging and clinical data enhance early detection, (2) thermal imaging achieves high diagnostic accuracy (97–100% in optimized models) while offering a cost-effective, less hazardous screening option, (3) challenges persist in data variability and model interpretability. These results highlight the need for integrated diagnostic systems combining technological innovations with preventive strategies. The review underscores AI’s transformative potential in breast cancer diagnosis while emphasizing the continued importance of risk factor management. Future research should prioritize multi-modal data integration and clinically interpretable models. Full article
(This article belongs to the Section Imaging Informatics)
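As an illustration of the tabular-data pipeline mentioned in this review, the sketch below trains a gradient-boosted classifier on scikit-learn's bundled breast-cancer dataset as a stand-in for the reviewed cohorts; it assumes the xgboost package is installed and is not any reviewed study's model:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# A small gradient-boosted tree ensemble; hyperparameters are illustrative.
model = XGBClassifier(n_estimators=200, max_depth=3,
                      learning_rate=0.1, eval_metric="logloss")
model.fit(X_train, y_train)

pred = model.predict(X_test)
prob = model.predict_proba(X_test)[:, 1]
print(f"accuracy={accuracy_score(y_test, pred):.3f}, "
      f"AUC={roc_auc_score(y_test, prob):.3f}")
```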

25 pages, 9913 KB  
Article
Video-Based CSwin Transformer Using Selective Filtering Technique for Interstitial Syndrome Detection
by Khalid Moafa, Maria Antico, Christopher Edwards, Marian Steffens, Jason Dowling, David Canty and Davide Fontanarosa
Appl. Sci. 2025, 15(16), 9126; https://doi.org/10.3390/app15169126 - 19 Aug 2025
Viewed by 131
Abstract
Interstitial lung diseases (ILD) significantly impact health and mortality, affecting millions of individuals worldwide. During the COVID-19 pandemic, lung ultrasonography (LUS) became an indispensable diagnostic and management tool for lung disorders. However, utilising LUS to diagnose ILD requires significant expertise. This research aims to develop an automated and efficient approach for diagnosing ILD from LUS videos using AI to support clinicians in their diagnostic procedures. We developed a binary classifier based on a state-of-the-art CSwin Transformer to discriminate between LUS videos from healthy and non-healthy patients. We used a multi-centric dataset from the Royal Melbourne Hospital (Australia) and the ULTRa Lab at the University of Trento (Italy), comprising 60 LUS videos. Each video corresponds to a single patient, comprising 30 healthy individuals and 30 patients with ILD, with frame counts ranging from 96 to 300 per video. Each video is annotated using the corresponding medical report as ground truth. The datasets used for training the model underwent selective frame filtering, including reduction in frame numbers to eliminate potentially misleading frames in non-healthy videos. This step was crucial because some ILD videos included segments of normal frames, which could be mixed with the pathological features and mislead the model. To address this, we eliminated frames with a healthy appearance, such as frames without B-lines, thereby ensuring that training focused on diagnostically relevant features. The trained model was assessed on an unseen, separate dataset of 12 videos (3 healthy and 9 ILD) with frame counts ranging from 96 to 300 per video. The model achieved an average classification accuracy of 91%, calculated as the mean of three testing methods: Random Sampling (92%), Key Featuring (92%), and Chunk Averaging (89%). In RS, 32 frames were randomly selected from each of the 12 videos, resulting in a classification with 92% accuracy, with specificity, precision, recall, and F1-score of 100%, 100%, 90%, and 95%, respectively. Similarly, KF, which involved manually selecting 32 key frames based on representative frames from each of the 12 videos, achieved 92% accuracy with a specificity, precision, recall, and F1-score of 100%, 100%, 90%, and 95%, respectively. In contrast, the CA method, where the 12 videos were divided into video segments (chunks) of 32 consecutive frames, with 82 video segments, achieved an 89% classification accuracy (73 out of 82 video segments). Among the 9 misclassified segments in the CA method, 6 were false positives and 3 were false negatives, corresponding to an 11% misclassification rate. The accuracy differences observed between the three training scenarios were confirmed to be statistically significant via inferential analysis. A one-way ANOVA conducted on the 10-fold cross-validation accuracies yielded a large F-statistic of 2135.67 and a small p-value of 6.7 × 10−26, indicating highly significant differences in model performance. The proposed approach is a valid solution for fully automating LUS disease detection, aligning with clinical diagnostic practices that integrate dynamic LUS videos. In conclusion, introducing the selective frame filtering technique to refine the dataset training reduced the effort required for labelling. Full article
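The Random Sampling and Chunk Averaging testing schemes differ only in how 32-frame clips are drawn from each video before classification. A minimal sketch with a placeholder scorer standing in for the trained CSwin Transformer (names and shapes are hypothetical):

```python
import numpy as np

CLIP_LEN = 32  # frames per clip, matching the study's 32-frame inputs

def classify_clip(clip: np.ndarray) -> float:
    """Placeholder for the video classifier; returns P(ILD) for one clip.
    In the real pipeline this would be the trained CSwin Transformer."""
    return float(np.random.rand())

def random_sampling(video: np.ndarray, rng=np.random) -> float:
    """Randomly pick CLIP_LEN frames (kept in temporal order) and classify once."""
    idx = np.sort(rng.choice(len(video), size=CLIP_LEN, replace=False))
    return classify_clip(video[idx])

def chunk_averaging(video: np.ndarray) -> float:
    """Split the video into consecutive CLIP_LEN-frame chunks and
    average the per-chunk probabilities."""
    n_chunks = len(video) // CLIP_LEN
    scores = [classify_clip(video[i * CLIP_LEN:(i + 1) * CLIP_LEN])
              for i in range(n_chunks)]
    return float(np.mean(scores))

# Toy video: 200 frames of 224x224 grayscale ultrasound.
video = np.zeros((200, 224, 224), dtype=np.float32)
print(random_sampling(video), chunk_averaging(video))
```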

38 pages, 751 KB  
Article
Machine Learning and Feature Selection in Pediatric Appendicitis
by John Kendall, Gabriel Gaspar, Derek Berger and Jacob Levman
Tomography 2025, 11(8), 90; https://doi.org/10.3390/tomography11080090 - 13 Aug 2025
Viewed by 669
Abstract
Background/Objectives: Accurate prediction of pediatric appendicitis diagnosis, management, and severity is critical for clinical decision-making. We aimed to evaluate the predictive performance of a wide range of machine learning models, combined with various feature selection techniques, on a pediatric appendicitis dataset. A particular focus was placed on the role of ultrasound (US) image-descriptive features in model performance and explainability. Methods: We conducted a retrospective cohort study on a dataset of 781 pediatric patients aged 0–18 presenting to Children’s Hospital St. Hedwig in Regensburg, Germany, between January 2016 and February 2023. We developed and validated predictive models; machine learning algorithms included the random forest, logistic regression, stochastic gradient descent, and the light gradient boosting machine (LGBM). These were paired exhaustively with feature selection methods spanning filter-based (association and prediction), embedded (LGBM and linear), and a novel redundancy-aware step-up wrapper approach. We employed a machine learning benchmarking study design where AI models were trained to predict diagnosis, management, and severity outcomes, both with and without US image-descriptive features, and evaluated on held-out testing samples. Model performance was assessed using overall accuracy and area under the receiver operating characteristic curve (AUROC). A deep learner optimized for tabular data, GANDALF, was also evaluated in these applications. Results: US features significantly improved diagnostic accuracy, supporting their use in reducing model bias. However, they were not essential for maximizing accuracy in predicting management or severity. In summary, our best-performing models were, for diagnosis, the random forest with embedded LGBM feature selection (98.1% accuracy, AUROC: 0.993), for management, the random forest without feature selection (93.9% accuracy, AUROC: 0.980), and for severity, the LGBM with filter-based association feature selection (90.1% accuracy, AUROC: 0.931). Conclusions: Our results demonstrate that high-performing, interpretable machine learning models can predict key clinical outcomes in pediatric appendicitis. US image features improve diagnostic accuracy but are not critical for predicting management or severity. Full article
(This article belongs to the Special Issue Celebrate the 10th Anniversary of Tomography)
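The benchmarking design pairs feature-selection methods with classifiers and scores each pairing on held-out data. A compact sketch of one such pairing (filter-based selection feeding a random forest), run on synthetic stand-in data rather than the Regensburg cohort:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic tabular data standing in for clinical + US image-descriptive features.
X, y = make_classification(n_samples=781, n_features=40, n_informative=12,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Filter-based (association) feature selection followed by a random forest.
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=15),
    RandomForestClassifier(n_estimators=300, random_state=0),
)
model.fit(X_train, y_train)
auroc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUROC = {auroc:.3f}")
```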

24 pages, 580 KB  
Review
Overcoming the Blood–Brain Barrier: Advanced Strategies in Targeted Drug Delivery for Neurodegenerative Diseases
by Han-Mo Yang
Pharmaceutics 2025, 17(8), 1041; https://doi.org/10.3390/pharmaceutics17081041 - 11 Aug 2025
Viewed by 1016
Abstract
The increasing global health crisis of neurodegenerative diseases such as Alzheimer’s, Parkinson’s, amyotrophic lateral sclerosis, and Huntington’s disease is worsening because of a rapidly increasing aging population. Disease-modifying therapies continue to face development challenges due to the blood–brain barrier (BBB), which prevents more than 98% of small molecules and all biologics from entering the central nervous system. The therapeutic landscape for neurodegenerative diseases has recently undergone transformation through advances in targeted drug delivery that include ligand-decorated nanoparticles, bispecific antibody shuttles, focused ultrasound-mediated BBB modulation, intranasal exosomes, and mRNA lipid nanoparticles. This review provides an analysis of the molecular pathways that cause major neurodegenerative diseases, discusses the physiological and physicochemical barriers to drug delivery to the brain, and reviews the most recent drug targeting strategies including receptor-mediated transcytosis, cell-based “Trojan horse” approaches, gene-editing vectors, and spatiotemporally controlled physical methods. The review also critically evaluates the limitations such as immunogenicity, scalability, and clinical translation challenges, proposing potential solutions to enhance therapeutic efficacy. The recent clinical trials are assessed in detail, and current and future trends are discussed, including artificial intelligence (AI)-based carrier engineering, combination therapy, and precision neuro-nanomedicine. The successful translation of these innovations into effective treatments for patients with neurodegenerative diseases will require essential interdisciplinary collaboration between neuroscientists, pharmaceutics experts, clinicians, and regulators. Full article
(This article belongs to the Special Issue Targeted Therapies and Drug Delivery for Neurodegenerative Diseases)

19 pages, 7650 KB  
Article
Lightweight Mamba Model for 3D Tumor Segmentation in Automated Breast Ultrasounds
by JongNam Kim, Jun Kim, Fayaz Ali Dharejo, Zeeshan Abbas and Seung Won Lee
Mathematics 2025, 13(16), 2553; https://doi.org/10.3390/math13162553 - 9 Aug 2025
Viewed by 351
Abstract
Background: Recently, the adoption of AI-based technologies has been accelerating in the field of medical image analysis. For the early diagnosis and treatment planning of breast cancer, Automated Breast Ultrasound (ABUS) has emerged as a safe and non-invasive imaging method, especially for women with dense breasts. However, the increasing computational cost due to the minute size and complexity of 3D ABUS data remains a major challenge. Methods: In this study, we propose a novel model based on the Mamba state–space model architecture for 3D tumor segmentation in ABUS images. The model uses Mamba blocks to effectively capture the volumetric spatial features of tumors, and integrates a deep spatial pyramid pooling (DASPP) module to extract multiscale contextual information from lesions of different sizes. Results: On the TDSC-2023 ABUS dataset, the proposed model achieved a Dice Similarity Coefficient (DSC) of 0.8062, and Intersection over Union (IoU) of 0.6831, using only 3.08 million parameters. Conclusions: These results show that the proposed model improves the performance of tumor segmentation in ABUS, offering both diagnostic precision and computational efficiency. The reduced computational space suggests a strong potential for real-world medical applications, where accurate early diagnosis can reduce costs and improve patient survival. Full article
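The Dice Similarity Coefficient and IoU reported above are overlap measures between predicted and ground-truth tumor masks. A short, illustrative implementation on toy 3D volumes:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Compute Dice = 2|P∩G| / (|P|+|G|) and IoU = |P∩G| / |P∪G|
    for binary segmentation masks of any dimensionality."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

# Toy 3D volumes standing in for ABUS tumor masks.
gt = np.zeros((32, 64, 64), dtype=np.uint8)
gt[10:20, 20:40, 20:40] = 1
pred = np.zeros_like(gt)
pred[12:22, 22:42, 22:42] = 1          # slightly shifted prediction

print("Dice=%.3f, IoU=%.3f" % dice_and_iou(pred, gt))
```

For a single prediction-ground-truth pair the two metrics are linked by Dice = 2·IoU / (1 + IoU); case-averaged scores such as those reported above need not satisfy the identity exactly.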

27 pages, 1326 KB  
Systematic Review
Application of Artificial Intelligence in Pancreatic Cyst Management: A Systematic Review
by Donghyun Lee, Fadel Jesry, John J. Maliekkal, Lewis Goulder, Benjamin Huntly, Andrew M. Smith and Yazan S. Khaled
Cancers 2025, 17(15), 2558; https://doi.org/10.3390/cancers17152558 - 2 Aug 2025
Viewed by 597
Abstract
Background: Pancreatic cystic lesions (PCLs), including intraductal papillary mucinous neoplasms (IPMNs) and mucinous cystic neoplasms (MCNs), pose a diagnostic challenge due to their variable malignant potential. Current guidelines, such as Fukuoka and American Gastroenterological Association (AGA), have moderate predictive accuracy and may lead to overtreatment or missed malignancies. Artificial intelligence (AI), incorporating machine learning (ML) and deep learning (DL), offers the potential to improve risk stratification, diagnosis, and management of PCLs by integrating clinical, radiological, and molecular data. This is the first systematic review to evaluate the application, performance, and clinical utility of AI models in the diagnosis, classification, prognosis, and management of pancreatic cysts. Methods: A systematic review was conducted in accordance with PRISMA guidelines and registered on PROSPERO (CRD420251008593). Databases searched included PubMed, EMBASE, Scopus, and Cochrane Library up to March 2025. The inclusion criteria encompassed original studies employing AI, ML, or DL in human subjects with pancreatic cysts, evaluating diagnostic, classification, or prognostic outcomes. Data were extracted on the study design, imaging modality, model type, sample size, performance metrics (accuracy, sensitivity, specificity, and area under the curve (AUC)), and validation methods. Study quality and bias were assessed using the PROBAST and adherence to TRIPOD reporting guidelines. Results: From 847 records, 31 studies met the inclusion criteria. Most were retrospective observational (n = 27, 87%) and focused on preoperative diagnostic applications (n = 30, 97%), with only one addressing prognosis. Imaging modalities included Computed Tomography (CT) (48%), endoscopic ultrasound (EUS) (26%), and Magnetic Resonance Imaging (MRI) (9.7%). Neural networks, particularly convolutional neural networks (CNNs), were the most common AI models (n = 16), followed by logistic regression (n = 4) and support vector machines (n = 3). The median reported AUC across studies was 0.912, with 55% of models achieving AUC ≥ 0.80. The models outperformed clinicians or existing guidelines in 11 studies. IPMN stratification and subtype classification were common focuses, with CNN-based EUS models achieving accuracies of up to 99.6%. Only 10 studies (32%) performed external validation. The risk of bias was high in 93.5% of studies, and TRIPOD adherence averaged 48%. Conclusions: AI demonstrates strong potential in improving the diagnosis and risk stratification of pancreatic cysts, with several models outperforming current clinical guidelines and human readers. However, widespread clinical adoption is hindered by high risk of bias, lack of external validation, and limited interpretability of complex models. Future work should prioritise multicentre prospective studies, standardised model reporting, and development of interpretable, externally validated tools to support clinical integration. Full article
(This article belongs to the Section Methods and Technologies Development)

25 pages, 5899 KB  
Review
Non-Invasive Medical Imaging in the Evaluation of Composite Scaffolds in Tissue Engineering: Methods, Challenges, and Future Directions
by Samira Farjaminejad, Rosana Farjaminejad, Pedram Sotoudehbagha and Mehdi Razavi
J. Compos. Sci. 2025, 9(8), 400; https://doi.org/10.3390/jcs9080400 - 1 Aug 2025
Viewed by 552
Abstract
Tissue-engineered scaffolds, particularly composite scaffolds composed of polymers combined with ceramics, bioactive glasses, or nanomaterials, play a vital role in regenerative medicine by providing structural and biological support for tissue repair. As scaffold designs grow increasingly complex, the need for non-invasive imaging modalities capable of monitoring scaffold integration, degradation, and tissue regeneration in real-time has become critical. This review summarizes current non-invasive imaging techniques used to evaluate tissue-engineered constructs, including optical methods such as near-infrared fluorescence imaging (NIR), optical coherence tomography (OCT), and photoacoustic imaging (PAI); magnetic resonance imaging (MRI); X-ray-based approaches like computed tomography (CT); and ultrasound-based modalities. It discusses the unique advantages and limitations of each modality. Finally, the review identifies major challenges—including limited imaging depth, resolution trade-offs, and regulatory hurdles—and proposes future directions to enhance translational readiness and clinical adoption of imaging-guided tissue engineering (TE). Emerging prospects such as multimodal platforms and artificial intelligence (AI) assisted image analysis hold promise for improving precision, scalability, and clinical relevance in scaffold monitoring. Full article
(This article belongs to the Special Issue Sustainable Biocomposites, 3rd Edition)

24 pages, 624 KB  
Review
Integrating Artificial Intelligence into Perinatal Care Pathways: A Scoping Review of Reviews of Applications, Outcomes, and Equity
by Rabie Adel El Arab, Omayma Abdulaziz Al Moosa, Zahraa Albahrani, Israa Alkhalil, Joel Somerville and Fuad Abuadas
Nurs. Rep. 2025, 15(8), 281; https://doi.org/10.3390/nursrep15080281 - 31 Jul 2025
Viewed by 504
Abstract
Background: Artificial intelligence (AI) and machine learning (ML) have been reshaping maternal, fetal, neonatal, and reproductive healthcare by enhancing risk prediction, diagnostic accuracy, and operational efficiency across the perinatal continuum. However, no comprehensive synthesis has yet been published. Objective: To conduct a scoping review of reviews of AI/ML applications spanning reproductive, prenatal, postpartum, neonatal, and early child-development care. Methods: We searched PubMed, Embase, the Cochrane Library, Web of Science, and Scopus through April 2025. Two reviewers independently screened records, extracted data, and assessed methodological quality using AMSTAR 2 for systematic reviews, ROBIS for bias assessment, SANRA for narrative reviews, and JBI guidance for scoping reviews. Results: Thirty-nine reviews met our inclusion criteria. In preconception and fertility treatment, convolutional neural network-based platforms can identify viable embryos and key sperm parameters with over 90 percent accuracy, and machine-learning models can personalize follicle-stimulating hormone regimens to boost mature oocyte yield while reducing overall medication use. Digital sexual-health chatbots have enhanced patient education, pre-exposure prophylaxis adherence, and safer sexual behaviors, although data-privacy safeguards and bias mitigation remain priorities. During pregnancy, advanced deep-learning models can segment fetal anatomy on ultrasound images with more than 90 percent overlap compared to expert annotations and can detect anomalies with sensitivity exceeding 93 percent. Predictive biometric tools can estimate gestational age within one week with accuracy and fetal weight within approximately 190 g. In the postpartum period, AI-driven decision-support systems and conversational agents can facilitate early screening for depression and can guide follow-up care. Wearable sensors enable remote monitoring of maternal blood pressure and heart rate to support timely clinical intervention. Within neonatal care, the Heart Rate Observation (HeRO) system has reduced mortality among very low-birth-weight infants by roughly 20 percent, and additional AI models can predict neonatal sepsis, retinopathy of prematurity, and necrotizing enterocolitis with area-under-the-curve values above 0.80. From an operational standpoint, automated ultrasound workflows deliver biometric measurements at about 14 milliseconds per frame, and dynamic scheduling in IVF laboratories lowers staff workload and per-cycle costs. Home-monitoring platforms for pregnant women are associated with 7–11 percent reductions in maternal mortality and preeclampsia incidence. Despite these advances, most evidence derives from retrospective, single-center studies with limited external validation. Low-resource settings, especially in Sub-Saharan Africa, remain under-represented, and few AI solutions are fully embedded in electronic health records. Conclusions: AI holds transformative promise for perinatal care but will require prospective multicenter validation, equity-centered design, robust governance, transparent fairness audits, and seamless electronic health record integration to translate these innovations into routine practice and improve maternal and neonatal outcomes. Full article

16 pages, 1194 KB  
Systematic Review
Artificial Intelligence in the Diagnosis of Tongue Cancer: A Systematic Review with Meta-Analysis
by Seorin Jeong, Hae-In Choi, Keon-Il Yang, Jin Soo Kim, Ji-Won Ryu and Hyun-Jeong Park
Biomedicines 2025, 13(8), 1849; https://doi.org/10.3390/biomedicines13081849 - 30 Jul 2025
Viewed by 440
Abstract
Background: Tongue squamous cell carcinoma (TSCC) is an aggressive oral malignancy characterized by early submucosal invasion and a high risk of cervical lymph node metastasis. Accurate and timely diagnosis is essential, but it remains challenging when relying solely on conventional imaging and histopathology. This systematic review aimed to evaluate studies applying artificial intelligence (AI) in the diagnostic imaging of TSCC. Methods: This review was conducted under PRISMA 2020 guidelines and included studies from January 2020 to December 2024 that utilized AI in TSCC imaging. A total of 13 studies were included, employing AI models such as Convolutional Neural Networks (CNNs), Support Vector Machines (SVMs), and Random Forest (RF). Imaging modalities analyzed included MRI, CT, PET, ultrasound, histopathological whole-slide images (WSI), and endoscopic photographs. Results: Diagnostic performance was generally high, with area under the curve (AUC) values ranging from 0.717 to 0.991, sensitivity from 63.3% to 100%, and specificity from 70.0% to 96.7%. Several models demonstrated superior performance compared to expert clinicians, particularly in delineating tumor margins and estimating the depth of invasion (DOI). However, only one study conducted external validation, and most exhibited moderate risk of bias in patient selection or index test interpretation. Conclusions: AI-based diagnostic tools hold strong potential for enhancing TSCC detection, but future research must address external validation, standardization, and clinical integration to ensure their reliable and widespread adoption. Full article
(This article belongs to the Special Issue Recent Advances in Oral Medicine—2nd Edition)

11 pages, 1298 KB  
Technical Note
Ultrasound Imaging: Advancing the Diagnosis of Periodontal Disease
by Gaël Y. Rochefort, Frédéric Denis and Matthieu Renaud
Dent. J. 2025, 13(8), 349; https://doi.org/10.3390/dj13080349 - 29 Jul 2025
Viewed by 368
Abstract
Objectives: This pilot study evaluates the correlation between periodontal pocket depth (PPD) measurements obtained by manual probing and those derived from an AI-coupled ultrasound imaging device in periodontitis patients. Materials and Methods: Thirteen patients with periodontitis underwent ultrasonic probing with an AI engine for automated PPD measurements, followed by routine manual probing. Results: A total of 2088 manual and 1987 AI-based PPD measurements were collected. The mean PPD was 4.2 mm (range: 2–8 mm) for manual probing and 4.5 mm (range: 2–9 mm) for AI-based ultrasound, with a Pearson correlation coefficient of 0.68 (95% CI: 0.62–0.73). Discrepancies were noted in cases with inflammation or calculus. AI struggled to differentiate pocket depths in complex clinical scenarios. Discussion: Ultrasound imaging offers non-invasive, real-time visualization of periodontal structures, but AI accuracy requires further training to address image artifacts and clinical variability. Conclusions: The ultrasound device shows promise for non-invasive periodontal diagnostics but is not yet a direct alternative to manual probing. Further AI optimization and validation are needed. Clinical Relevance: This technology could enhance patient comfort and enable frequent monitoring, pending improvements in AI reliability. Full article
(This article belongs to the Special Issue Feature Papers in Digital Dentistry)
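The agreement statistic quoted above (Pearson r with a 95% CI) compares paired manual and AI-based pocket-depth measurements. A minimal sketch on made-up pairs, assuming SciPy >= 1.9 for the confidence-interval method:

```python
import numpy as np
from scipy import stats

# Hypothetical paired pocket-depth measurements in mm (not the study's data):
manual = np.array([3.0, 4.0, 5.0, 4.5, 6.0, 3.5, 7.0, 4.0, 5.5, 8.0])
ai_us  = np.array([3.5, 4.5, 4.5, 5.0, 6.5, 4.0, 7.5, 4.5, 5.0, 8.5])

res = stats.pearsonr(manual, ai_us)
ci = res.confidence_interval(confidence_level=0.95)  # Fisher z-based CI
print(f"r = {res.statistic:.2f}, 95% CI [{ci.low:.2f}, {ci.high:.2f}]")
```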

14 pages, 2191 KB  
Article
AI-Based Ultrasound Nomogram for Differentiating Invasive from Non-Invasive Breast Cancer Masses
by Meng-Yuan Tsai, Zi-Han Yu and Chen-Pin Chou
Cancers 2025, 17(15), 2497; https://doi.org/10.3390/cancers17152497 - 29 Jul 2025
Viewed by 352
Abstract
Purpose: This study aimed to develop a predictive nomogram integrating AI-based BI-RADS lexicons and lesion-to-nipple distance (LND) ultrasound features to differentiate mass-type ductal carcinoma in situ (DCIS) from invasive ductal carcinoma (IDC) visible on ultrasound. Methods: The final study cohort consisted of 170 women with 175 pathologically confirmed malignant breast lesions, including 26 cases of DCIS and 149 cases of IDC. LND and AI-based features from the S-Detect system (BI-RADS lexicons) were analyzed. Rare features were consolidated into broader categories to enhance model stability. Data were split into training (70%) and validation (30%) sets. Logistic regression identified key predictors for an LND nomogram. Model performance was evaluated using receiver operating characteristic (ROC) curves, 1000 bootstrap resamples, and calibration curves to assess discrimination and calibration. Results: Multivariate logistic regression identified smaller lesion size, irregular shape, LND ≤ 3 cm, and non-hypoechoic echogenicity as independent predictors of DCIS. These variables were integrated into the LND nomogram, which demonstrated strong discriminative performance (AUC = 0.851 training; AUC = 0.842 validation). Calibration was excellent, with non-significant Hosmer-Lemeshow tests (p = 0.127 training, p = 0.972 validation) and low mean absolute errors (MAE = 0.016 and 0.034, respectively), supporting the model’s accuracy and reliability. Conclusions: The AI-based comprehensive nomogram demonstrates strong reliability in distinguishing mass-type DCIS from IDC, offering a practical tool to enhance non-invasive breast cancer diagnosis and inform preoperative planning. Full article
(This article belongs to the Section Methods and Technologies Development)
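The nomogram rests on a multivariate logistic regression whose discrimination is summarized by an ROC AUC with 1000 bootstrap resamples. A hedged sketch of that evaluation loop on synthetic stand-in data (predictors and coefficients are invented for illustration, not the study's model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in predictors: lesion size (mm), irregular shape (0/1),
# lesion-to-nipple distance <= 3 cm (0/1), non-hypoechoic echogenicity (0/1).
n = 175
X = np.column_stack([
    rng.normal(18, 6, n),          # lesion size
    rng.integers(0, 2, n),         # irregular shape
    rng.integers(0, 2, n),         # LND <= 3 cm
    rng.integers(0, 2, n),         # non-hypoechoic
])
# Synthetic outcome (1 = DCIS, 0 = IDC), loosely tied to the predictors.
logit = -2.0 - 0.05 * X[:, 0] + 0.8 * X[:, 1] + 1.0 * X[:, 2] + 0.9 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]

# 1000 bootstrap resamples of the apparent AUC, as in the abstract.
aucs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue                    # skip degenerate resamples
    aucs.append(roc_auc_score(y[idx], prob[idx]))
print(f"AUC = {roc_auc_score(y, prob):.3f} "
      f"(bootstrap 95% CI {np.percentile(aucs, 2.5):.3f}-"
      f"{np.percentile(aucs, 97.5):.3f})")
```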

16 pages, 1162 KB  
Review
Ultrasound for the Early Detection and Diagnosis of Necrotizing Enterocolitis: A Scoping Review of Emerging Evidence
by Indrani Bhattacharjee, Michael Todd Dolinger, Rachana Singh and Yogen Singh
Diagnostics 2025, 15(15), 1852; https://doi.org/10.3390/diagnostics15151852 - 23 Jul 2025
Viewed by 613
Abstract
Background: Necrotizing enterocolitis (NEC) is a severe gastrointestinal disease and a major cause of morbidity and mortality among preterm infants. Traditional diagnostic methods such as abdominal radiography have limited sensitivity in early disease stages, prompting interest in bowel ultrasound (BUS) as a complementary imaging modality. Objective: This scoping review aims to synthesize existing literature on the role of ultrasound in the early detection, diagnosis, and management of NEC, with emphasis on its diagnostic performance, integration into clinical care, and technological innovations. Methods: Following PRISMA-ScR guidelines, a systematic search was conducted across PubMed, Embase, Cochrane Library, and Google Scholar for studies published between January 2000 and December 2025. Inclusion criteria encompassed original research, reviews, and clinical studies evaluating the use of bowel, intestinal, or Doppler ultrasound in neonates with suspected or confirmed NEC. Data were extracted, categorized by study design, population characteristics, ultrasound features, and diagnostic outcomes, and qualitatively synthesized. Results: A total of 101 studies were included. BUS demonstrated superior sensitivity over radiography in detecting early features of NEC, including bowel wall thickening, portal venous gas, and altered peristalsis. Doppler ultrasound, both antenatal and postnatal, was effective in identifying perfusion deficits predictive of NEC onset. Neonatologist-performed ultrasound (NEOBUS) showed high interobserver agreement when standardized protocols were used. Emerging tools such as ultra-high-frequency ultrasound (UHFUS) and artificial intelligence (AI)-enhanced analysis hold potential to improve diagnostic precision. Point-of-care ultrasound (POCUS) appears feasible in resource-limited settings, though implementation barriers remain. Conclusions: Bowel ultrasound is a valuable adjunct to conventional imaging in NEC diagnosis. Standardized protocols, validation of advanced technologies, and outcome-based studies are essential to guide its broader clinical adoption. Full article
(This article belongs to the Special Issue Diagnosis and Management in Digestive Surgery: 2nd Edition)
