Sign in to use this feature.

Years

Between: -

Subjects

remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline

Journals

remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline

Article Types

Countries / Regions

remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline
remove_circle_outline

Search Results (19,449)

Search Parameters:
Keywords = AIS

Order results
Result details
Results per page
Select all
Export citation of selected articles as:
19 pages, 753 KiB  
Article
In-Context Learning for Low-Resource Machine Translation: A Study on Tarifit with Large Language Models
by Oussama Akallouch and Khalid Fardousse
Algorithms 2025, 18(8), 489; https://doi.org/10.3390/a18080489 (registering DOI) - 6 Aug 2025
Abstract
This study presents the first systematic evaluation of in-context learning for Tarifit machine translation, a low-resource Amazigh language spoken by 5 million people in Morocco and Europe. We assess three large language models (GPT-4, Claude-3.5, PaLM-2) across Tarifit–Arabic, Tarifit–French, and Tarifit–English translation using [...] Read more.
This study presents the first systematic evaluation of in-context learning for Tarifit machine translation, a low-resource Amazigh language spoken by 5 million people in Morocco and Europe. We assess three large language models (GPT-4, Claude-3.5, PaLM-2) across Tarifit–Arabic, Tarifit–French, and Tarifit–English translation using 1000 sentence pairs and 5-fold cross-validation. Results show that 8-shot similarity-based demonstration selection achieves optimal performance. GPT-4 achieved 20.2 BLEU for Tarifit–Arabic, 14.8 for Tarifit–French, and 10.9 for Tarifit–English. Linguistic proximity significantly impacts translation quality, with Tarifit–Arabic substantially outperforming other language pairs by 8.4 BLEU points due to shared vocabulary and morphological patterns. Error analysis reveals systematic issues with morphological complexity (42% of errors) and cultural terminology preservation (18% of errors). This work establishes baseline benchmarks for Tarifit translation and demonstrates the viability of in-context learning for morphologically complex low-resource languages, contributing to linguistic equity in AI systems. Full article
(This article belongs to the Section Evolutionary Algorithms and Machine Learning)
Show Figures

Figure 1

16 pages, 824 KiB  
Article
ChatGPT and Microsoft Copilot for Cochlear Implant Side Selection: A Preliminary Study
by Daniele Portelli, Sabrina Loteta, Mariangela D’Angelo, Cosimo Galletti, Leonard Freni, Rocco Bruno, Francesco Ciodaro, Angela Alibrandi and Giuseppe Alberti
Audiol. Res. 2025, 15(4), 100; https://doi.org/10.3390/audiolres15040100 (registering DOI) - 6 Aug 2025
Abstract
Background/Objectives: Artificial Intelligence (AI) is increasingly being applied in otolaryngology, including cochlear implants (CIs). This study evaluates the accuracy and completeness of ChatGPT-4 and Microsoft Copilot in determining the appropriate implantation side based on audiological and radiological data, as well as the [...] Read more.
Background/Objectives: Artificial Intelligence (AI) is increasingly being applied in otolaryngology, including cochlear implants (CIs). This study evaluates the accuracy and completeness of ChatGPT-4 and Microsoft Copilot in determining the appropriate implantation side based on audiological and radiological data, as well as the presence of tinnitus. Methods: Data from 22 CI patients (11 males, 11 females; 12 right-sided, 10 left-sided implants) were used to query both AI models. Each patient’s audiometric thresholds, hearing aid benefit, tinnitus presence, and radiological findings were provided. The AI-generated responses were compared to the clinician-chosen sides. Accuracy and completeness were scored by two independent reviewers. Results: ChatGPT had a 50% concordance rate for right-side implantation and a 70% concordance rate for left-side implantation, while Microsoft Copilot achieved 75% and 90%, respectively. Chi-square tests showed significant associations between AI-suggested and clinician-chosen sides for both AI (p < 0.05). ChatGPT outperformed Microsoft Copilot in identifying radiological alterations (60% vs. 40%) and tinnitus presence (77.8% vs. 66.7%). Cronbach’s alpha was >0.70 only for ChatGPT accuracy, indicating better agreement between reviewers. Conclusions: Both AI models showed significant alignment with clinician decisions. Microsoft Copilot was more accurate in implantation side selection, while ChatGPT better recognized radiological alterations and tinnitus. These results highlight AI’s potential as a clinical decision support tool in CI candidacy, although further research is needed to refine its application in complex cases. Full article
Show Figures

Figure 1

13 pages, 1424 KiB  
Article
Comparison of Artificial Intelligence–Derived Heart Age with Chronological Age Using Normal Sinus Electrocardiograms in Patients with No Evidence of Cardiac Disease
by Myoung Jung Kim, Sung-Hee Song, Young Jun Park, Young-Hyun Lee, Jongwoo Kim, JaeHu Jeon, KyungChang Woo, Juwon Kim, Ju Youn Kim, Seung-Jung Park, Young Keun On and Kyoung-Min Park
J. Clin. Med. 2025, 14(15), 5548; https://doi.org/10.3390/jcm14155548 (registering DOI) - 6 Aug 2025
Abstract
Background/Objectives: Chronological age (CA) is commonly used in clinical decision-making, yet it may not accurately reflect biological aging. Recent advances in artificial intelligence (AI) allow estimation of electrocardiogram (ECG)-derived heart age, which may serve as a non-invasive biomarker for physiological aging. This [...] Read more.
Background/Objectives: Chronological age (CA) is commonly used in clinical decision-making, yet it may not accurately reflect biological aging. Recent advances in artificial intelligence (AI) allow estimation of electrocardiogram (ECG)-derived heart age, which may serve as a non-invasive biomarker for physiological aging. This study aimed to develop and validate a deep learning model to predict ECG-heart age in individuals with no structural heart disease. Methods: We trained a convolutional neural network (DenseNet-121) using 12-lead ECGs from 292,484 individuals (mean age: 51.4 ± 13.8 years; 42.3% male) without significant cardiac disease. Exclusion criteria included missing age data, age <18 or >90 years, and structural abnormalities. CA was used as the target variable. Model performance was evaluated using the coefficient of determination (R2), Pearson correlation coefficient (PCC), mean absolute error (MAE), and root mean square error (RMSE). External validation was conducted using 1191 independent ECGs. Results: The model demonstrated strong predictive performance (R2 = 0.783, PCC = 0.885, MAE = 5.023 years, RMSE = 6.389 years). ECG-heart age tended to be overestimated in younger adults (≤30 years) and underestimated in older adults (≥70 years). External validation showed consistent performance (R2 = 0.703, PCC = 0.846, MAE = 5.582 years, RMSE = 7.316 years). Conclusions: The proposed AI-based model accurately estimates ECG-heart age in individuals with structurally normal hearts. ECG-derived heart age may serve as a reliable biomarker of biological aging and support future risk stratification strategies. Full article
(This article belongs to the Section Cardiology)
Show Figures

Figure 1

19 pages, 1185 KiB  
Article
PredictMed-CDSS: Artificial Intelligence-Based Decision Support System Predicting the Probability to Develop Neuromuscular Hip Dysplasia
by Carlo M. Bertoncelli, Federico Solla, Michal Latalski, Sikha Bagui, Subhash C. Bagui, Stefania Costantini and Domenico Bertoncelli
Bioengineering 2025, 12(8), 846; https://doi.org/10.3390/bioengineering12080846 (registering DOI) - 6 Aug 2025
Abstract
Neuromuscular hip dysplasia (NHD) is a common deformity in children with cerebral palsy (CP). Although some predictive factors of NHD are known, the prediction of NHD is in its infancy. We present a Clinical Decision Support System (CDSS) designed to calculate the probability [...] Read more.
Neuromuscular hip dysplasia (NHD) is a common deformity in children with cerebral palsy (CP). Although some predictive factors of NHD are known, the prediction of NHD is in its infancy. We present a Clinical Decision Support System (CDSS) designed to calculate the probability of developing NHD in children with CP. The system utilizes an ensemble of three machine learning (ML) algorithms: Neural Network (NN), Support Vector Machine (SVM), and Logistic Regression (LR). The development and evaluation of the CDSS followed the DECIDE-AI guidelines for AI-driven clinical decision support tools. The ensemble was trained on a data series from 182 subjects. Inclusion criteria were age between 12 and 18 years and diagnosis of CP from two specialized units. Clinical and functional data were collected prospectively between 2005 and 2023, and then analyzed in a cross-sectional study. Accuracy and area under the receiver operating characteristic (AUROC) were calculated for each method. Best logistic regression scores highlighted history of previous orthopedic surgery (p = 0.001), poor motor function (p = 0.004), truncal tone disorder (p = 0.008), scoliosis (p = 0.031), number of affected limbs (p = 0.05), and epilepsy (p = 0.05) as predictors of NHD. Both accuracy and AUROC were highest for NN, 83.7% and 0.92, respectively. The novelty of this study lies in the development of an efficient Clinical Decision Support System (CDSS) prototype, specifically designed to predict future outcomes of neuromuscular hip dysplasia (NHD) in patients with cerebral palsy (CP) using clinical data. The proposed system, PredictMed-CDSS, demonstrated strong predictive performance for estimating the probability of NHD development in children with CP, with the highest accuracy achieved using neural networks (NN). PredictMed-CDSS has the potential to assist clinicians in anticipating the need for early interventions and preventive strategies in the management of NHD among CP patients. Full article
Show Figures

Figure 1

19 pages, 1584 KiB  
Article
The Development of a Predictive Maintenance System for Gearboxes Through a Statistical Diagnostic Analysis of Lubricating Oil and Artificial Intelligence
by Diego Rigolli, Lorenzo Pompei, Massimo Manfredini, Massimiliano Vignoli, Vincenzo La Battaglia and Alessandro Giorgetti
Machines 2025, 13(8), 693; https://doi.org/10.3390/machines13080693 (registering DOI) - 6 Aug 2025
Abstract
This paper addressed the problem of oil diagnostics lubricants applied to the predictive maintenance of industrial gearboxes, proposing the integration of an artificial intelligence (AI) system into the process analysis. The main objective was to overcome the critical issues of the traditional method, [...] Read more.
This paper addressed the problem of oil diagnostics lubricants applied to the predictive maintenance of industrial gearboxes, proposing the integration of an artificial intelligence (AI) system into the process analysis. The main objective was to overcome the critical issues of the traditional method, characterized by long analysis times and a marked dependence on the subjective interpretation of operators. The method includes a detailed statistical analysis of the common ways to assess the condition of lubricants, such as optical emission spectroscopy, particle counting, measuring viscosity and density, and Fourier-transform infrared spectroscopy (FT-IR). These methods are then combined with an artificial intelligence model. Tested on commercial gearbox data, the proposed approach demonstrates agreement between IA and expert evaluation. The application has shown that it can effectively support diagnoses, reduce processing time by 60%, and minimize human errors. It also improves knowledge sharing through an increase in the stability and repetitiveness of diagnoses and promotes consistency and clarity in reporting. Full article
Show Figures

Figure 1

13 pages, 286 KiB  
Review
Drug Repurposing and Artificial Intelligence in Multiple Sclerosis: Emerging Strategies for Precision Therapy
by Pedro Henrique Villar-Delfino, Paulo Pereira Christo and Caroline Maria Oliveira Volpe
Sclerosis 2025, 3(3), 28; https://doi.org/10.3390/sclerosis3030028 (registering DOI) - 6 Aug 2025
Abstract
Multiple sclerosis (MS) is a chronic, immune-mediated disorder of the central nervous system (CNS) characterized by inflammation, demyelination, axonal degeneration, and gliosis. Its pathophysiology involves a complex interplay of genetic susceptibility, environmental triggers, and immune dysregulation, ultimately leading to progressive neurodegeneration and functional [...] Read more.
Multiple sclerosis (MS) is a chronic, immune-mediated disorder of the central nervous system (CNS) characterized by inflammation, demyelination, axonal degeneration, and gliosis. Its pathophysiology involves a complex interplay of genetic susceptibility, environmental triggers, and immune dysregulation, ultimately leading to progressive neurodegeneration and functional decline. Although significant advances have been made in disease-modifying therapies (DMTs), many patients continue to experience disease progression and unmet therapeutic needs. Drug repurposing—the identification of new indications for existing drugs—has emerged as a promising strategy in MS research, offering a cost-effective and time-efficient alternative to traditional drug development. Several compounds originally developed for other diseases, including immunomodulatory, anti-inflammatory, and neuroprotective agents, are currently under investigation for their efficacy in MS. Repurposed agents, such as selective sphingosine-1-phosphate (S1P) receptor modulators, kinase inhibitors, and metabolic regulators, have demonstrated potential in promoting neuroprotection, modulating immune responses, and supporting remyelination in both preclinical and clinical settings. Simultaneously, artificial intelligence (AI) is transforming drug discovery and precision medicine in MS. Machine learning and deep learning models are being employed to analyze high-dimensional biomedical data, predict drug–target interactions, streamline drug repurposing workflows, and enhance therapeutic candidate selection. By integrating multiomics and neuroimaging data, AI tools facilitate the identification of novel targets and support patient stratification for individualized treatment. This review highlights recent advances in drug repurposing and discovery for MS, with a particular emphasis on the emerging role of AI in accelerating therapeutic innovation and optimizing treatment strategies. Full article
Show Figures

Graphical abstract

19 pages, 253 KiB  
Article
The Application of Artificial Intelligence in Acute Prescribing in Homeopathy: A Comparative Retrospective Study
by Rachael Doherty, Parker Pracjek, Christine D. Luketic, Denise Straiges and Alastair C. Gray
Healthcare 2025, 13(15), 1923; https://doi.org/10.3390/healthcare13151923 (registering DOI) - 6 Aug 2025
Abstract
Background/Objective: The use of artificial intelligence to assist in medical applications is an emerging area of investigation and discussion. The researchers studied whether there was a difference between homeopathy guidance provided by artificial intelligence (AI) (automated) and live professional practitioners (live) for acute [...] Read more.
Background/Objective: The use of artificial intelligence to assist in medical applications is an emerging area of investigation and discussion. The researchers studied whether there was a difference between homeopathy guidance provided by artificial intelligence (AI) (automated) and live professional practitioners (live) for acute illnesses. Additionally, the study explored the practical challenges associated with validating AI tools used for homeopathy and sought to generate insights on the potential value and limitations of these tools in the management of acute health complaints. Method: Randomly selected cases at a homeopathy teaching clinic (n = 100) were entered into a commercially available homeopathic remedy finder to investigate the consistency between automated and live recommendations. Client symptoms, medical disclaimers, remedies, and posology were compared. The findings of this study show that the purpose-built homeopathic remedy finder is not a one-to-one replacement for a live practitioner. Result: In the 100 cases compared, the automated online remedy finder provided between 1 and 20 prioritized remedy recommendations for each complaint, leaving the user to make the final remedy decision based on how well their characteristic symptoms were covered by each potential remedy. The live practitioner-recommended remedy was included somewhere among the auto-mated results in 59% of the cases, appeared in the top three results in 37% of the cases, and was a top remedy match in 17% of the cases. There was no guidance for managing remedy responses found in live clinical settings. Conclusion: This study also highlights the challenge and importance of validating AI remedy recommendations against real cases. The automated remedy finder used covered 74 acute complaints. The live cases from the teaching clinic included 22 of the 74 complaints. Full article
(This article belongs to the Special Issue The Role of AI in Predictive and Prescriptive Healthcare)
28 pages, 48169 KiB  
Article
Advancing Self-Supervised Learning for Building Change Detection and Damage Assessment: Unified Denoising Autoencoder and Contrastive Learning Framework
by Songxi Yang, Bo Peng, Tang Sui, Meiliu Wu and Qunying Huang
Remote Sens. 2025, 17(15), 2717; https://doi.org/10.3390/rs17152717 (registering DOI) - 6 Aug 2025
Abstract
Building change detection and building damage assessment are two essential tasks in post-disaster analysis. Building change detection focuses on identifying changed building areas between bi-temporal images, while building damage assessment involves segmenting all buildings and classifying their damage severity. These tasks play a [...] Read more.
Building change detection and building damage assessment are two essential tasks in post-disaster analysis. Building change detection focuses on identifying changed building areas between bi-temporal images, while building damage assessment involves segmenting all buildings and classifying their damage severity. These tasks play a critical role in disaster response and urban development monitoring. Although supervised learning has significantly advanced building change detection and damage assessment, its reliance on large labeled datasets remains a major limitation. In contrast, self-supervised learning enables the extraction of meaningful data representations without explicit training labels. To address this challenge, we propose a self-supervised learning approach that unifies denoising autoencoders and contrastive learning, enabling effective data representation for building change detection and damage assessment. The proposed architecture integrates a dual denoising autoencoder with a Vision Transformer backbone and contrastive learning strategy, complemented by a Feature Pyramid Network-ResNet dual decoder and an Edge Guidance Module. This design enhances multi-scale feature extraction and enables edge-aware segmentation for accurate predictions. Extensive experiments were conducted on five public datasets, including xBD, LEVIR, LEVIR+, SYSU, and WHU, to evaluate the performance and generalization capabilities of the model. The results demonstrate that the proposed Denoising AutoEncoder-enhanced Dual-Fusion Network (DAEDFN) approach achieves competitive performance compared with fully supervised methods. On the xBD dataset, the largest dataset for building damage assessment, our proposed method achieves an F1 score of 0.892 for building segmentation, outperforming state-of-the-art methods. For building damage severity classification, the model achieves an F1 score of 0.632. On the building change detection datasets, the proposed method achieves F1 scores of 0.837 (LEVIR), 0.817 (LEVIR+), 0.768 (SYSU), and 0.876 (WHU), demonstrating model generalization across diverse scenarios. Despite these promising results, challenges remain in complex urban environments, small-scale changes, and fine-grained boundary detection. These findings highlight the potential of self-supervised learning in building change detection and damage assessment tasks. Full article
Show Figures

Figure 1

24 pages, 1684 KiB  
Article
Beyond Assistance: Embracing AI as a Collaborative Co-Agent in Education
by Rena Katsenou, Konstantinos Kotsidis, Agnes Papadopoulou, Panagiotis Anastasiadis and Ioannis Deliyannis
Educ. Sci. 2025, 15(8), 1006; https://doi.org/10.3390/educsci15081006 (registering DOI) - 6 Aug 2025
Abstract
The integration of artificial intelligence (AI) in education offers novel opportunities to enhance critical thinking while also posing challenges to independent cognitive development. In particular, Human-Centered Artificial Intelligence (HCAI) in education aims to enhance human experience by providing a supportive and collaborative learning [...] Read more.
The integration of artificial intelligence (AI) in education offers novel opportunities to enhance critical thinking while also posing challenges to independent cognitive development. In particular, Human-Centered Artificial Intelligence (HCAI) in education aims to enhance human experience by providing a supportive and collaborative learning environment. Rather than replacing the educator, HCAI serves as a tool that empowers both students and teachers, fostering critical thinking and autonomy in learning. This study investigates the potential for AI to become a collaborative partner that assists learning and enriches academic engagement. The research was conducted during the 2024–2025 winter semester within the Pedagogical and Teaching Sufficiency Program offered by the Audio and Visual Arts Department, Ionian University, Corfu, Greece. The research employs a hybrid ethnographic methodology that blends digital interactions—where students use AI tools to create artistic representations—with physical classroom engagement. Data was collected through student projects, reflective journals, and questionnaires, revealing that structured dialog with AI not only facilitates deeper critical inquiry and analytical reasoning but also induces a state of flow, characterized by intense focus and heightened creativity. The findings highlight a dialectic between individual agency and collaborative co-agency, demonstrating that while automated AI responses may diminish active cognitive engagement, meaningful interactions can transform AI into an intellectual partner that enriches the learning experience. These insights suggest promising directions for future pedagogical strategies that balance digital innovation with traditional teaching methods, ultimately enhancing the overall quality of education. Furthermore, the study underscores the importance of integrating reflective practices and adaptive frameworks to support evolving student needs, ensuring a sustainable model. Full article
(This article belongs to the Special Issue Unleashing the Potential of E-learning in Higher Education)
Show Figures

Figure 1

26 pages, 2638 KiB  
Article
How Explainable Really Is AI? Benchmarking Explainable AI
by Giacomo Bergami and Oliver Robert Fox
Logics 2025, 3(3), 9; https://doi.org/10.3390/logics3030009 (registering DOI) - 6 Aug 2025
Abstract
This work contextualizes the possibility of deriving a unifying artificial intelligence framework by walking in the footsteps of General, Explainable, and Verified Artificial Intelligence (GEVAI): by considering explainability not only at the level of the results produced by a specification but also considering [...] Read more.
This work contextualizes the possibility of deriving a unifying artificial intelligence framework by walking in the footsteps of General, Explainable, and Verified Artificial Intelligence (GEVAI): by considering explainability not only at the level of the results produced by a specification but also considering the explicability of the inference process as well as the one related to the data processing step, we can not only ensure human explainability of the process leading to the ultimate results but also mitigate and minimize machine faults leading to incorrect results. This, on the other hand, requires the adoption of automated verification processes beyond system fine-tuning, which are essentially relevant in a more interconnected world. The challenges related to full automation of a data processing pipeline, mostly requiring human-in-the-loop approaches, forces us to tackle the framework from a different perspective: while proposing a preliminary implementation of GEVAI mainly used as an AI test-bed having different state-of-the-art AI algorithms interconnected, we propose two other data processing pipelines, LaSSI and EMeriTAte+DF, being a specific instantiation of GEVAI for solving specific problems (Natural Language Processing, and Multivariate Time Series Classifications). Preliminary results from our ongoing work strengthen the position of the proposed framework by showcasing it as a viable path to improve current state-of-the-art AI algorithms. Full article
Show Figures

Figure 1

26 pages, 1178 KiB  
Article
Towards Dynamic Learner State: Orchestrating AI Agents and Workplace Performance via the Model Context Protocol
by Mohan Yang, Nolan Lovett, Belle Li and Zhen Hou
Educ. Sci. 2025, 15(8), 1004; https://doi.org/10.3390/educsci15081004 - 6 Aug 2025
Abstract
Current learning and development approaches often struggle to capture dynamic individual capabilities, particularly the skills they acquire informally every day on the job. This dynamic creates a significant gap between what traditional models think people know and their actual performance, leading to an [...] Read more.
Current learning and development approaches often struggle to capture dynamic individual capabilities, particularly the skills they acquire informally every day on the job. This dynamic creates a significant gap between what traditional models think people know and their actual performance, leading to an incomplete and often outdated understanding of how ready the workforce truly is, which can hinder organizational adaptability in rapidly evolving environments. This paper proposes a novel dynamic learner-state ecosystem—an AI-driven solution designed to bridge this gap. Our approach leverages specialized AI agents, orchestrated via the Model Context Protocol (MCP), to continuously track and evolve an individual’s multi-dimensional state (e.g., mastery, confidence, context, and decay). The seamless integration of in-workflow performance data will transform daily work activities into granular and actionable data points through AI-powered dynamic xAPI generation into Learning Record Stores (LRSs). This system enables continuous, authentic performance-based assessment, precise skill gap identification, and highly personalized interventions. The significance of this ecosystem lies in its ability to provide a real-time understanding of everyone’s capabilities, enabling more accurate workforce planning for the future and cultivating a workforce that is continuously learning and adapting. It ultimately helps to transform learning from a disconnected, occasional event into an integrated and responsive part of everyday work. Full article
Show Figures

Figure 1

9 pages, 838 KiB  
Review
Merging Neuroscience and Engineering Through Regenerative Peripheral Nerve Interfaces
by Melanie J. Wang, Theodore A. Kung, Alison K. Snyder-Warwick and Paul S. Cederna
Prosthesis 2025, 7(4), 97; https://doi.org/10.3390/prosthesis7040097 (registering DOI) - 6 Aug 2025
Abstract
Approximately 185,000 people in the United states experience limb loss each year. There is a need for an intuitive neural interface that can offer high-fidelity control signals to optimize the advanced functionality of prosthetic devices. Regenerative peripheral nerve interface (RPNI) is a pioneering [...] Read more.
Approximately 185,000 people in the United states experience limb loss each year. There is a need for an intuitive neural interface that can offer high-fidelity control signals to optimize the advanced functionality of prosthetic devices. Regenerative peripheral nerve interface (RPNI) is a pioneering advancement in neuroengineering that combines surgical techniques with biocompatible materials to create an interface for individuals with limb loss. RPNIs are surgically constructed from autologous muscle grafts that are neurotized by the residual peripheral nerves of an individual with limb loss. RPNIs amplify neural signals and demonstrate long term stability. In this narrative review, the terms “Regenerative Peripheral Nerve Interface (RPNI)” and “RPNI surgery” are used interchangeably to refer to the same surgical and biological construct. This narrative review specifically focuses on RPNIs as a targeted approach to enhance prosthetic control through surgically created nerve–muscle interfaces. This area of research offers a promising solution to overcome the limitations of existing prosthetic control systems and could help improve the quality of life for people suffering from limb loss. It allows for multi-channel control and bidirectional communication, while enhancing the functionality of prosthetics through improved sensory feedback. RPNI surgery holds significant promise for improving the quality of life for individuals with limb loss by providing a more intuitive and responsive prosthetic experience. Full article
Show Figures

Figure 1

21 pages, 365 KiB  
Article
The Effect of Data Leakage and Feature Selection on Machine Learning Performance for Early Parkinson’s Disease Detection
by Jonathan Starcke, James Spadafora, Jonathan Spadafora, Phillip Spadafora and Milan Toma
Bioengineering 2025, 12(8), 845; https://doi.org/10.3390/bioengineering12080845 (registering DOI) - 6 Aug 2025
Abstract
If we do not urgently educate current and future medical professionals to critically evaluate and distinguish credible AI-assisted diagnostic tools from those whose performance is artificially inflated by data leakage or improper validation, we risk undermining clinician trust in all AI diagnostics and [...] Read more.
If we do not urgently educate current and future medical professionals to critically evaluate and distinguish credible AI-assisted diagnostic tools from those whose performance is artificially inflated by data leakage or improper validation, we risk undermining clinician trust in all AI diagnostics and jeopardizing future advances in patient care. For instance, machine learning models have shown high accuracy in diagnosing Parkinson’s Disease when trained on clinical features that are themselves diagnostic, such as tremor and rigidity. This study systematically investigates the impact of data leakage and feature selection on the true clinical utility of machine learning models for early Parkinson’s Disease detection. We constructed two experimental pipelines: one excluding all overt motor symptoms to simulate a subclinical scenario and a control including these features. Nine machine learning algorithms were evaluated using a robust three-way data split and comprehensive metric analysis. Results reveal that, without overt features, all models exhibited superficially acceptable F1 scores but failed catastrophically in specificity, misclassifying most healthy controls as Parkinson’s Disease. The inclusion of overt features dramatically improved performance, confirming that high accuracy was due to data leakage rather than genuine predictive power. These findings underscore the necessity of rigorous experimental design, transparent reporting, and critical evaluation of machine learning models in clinically realistic settings. Our work highlights the risks of overestimating model utility due to data leakage and provides guidance for developing robust, clinically meaningful machine learning tools for early disease detection. Full article
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)
Show Figures

Figure 1

24 pages, 1993 KiB  
Article
Evaluating Prompt Injection Attacks with LSTM-Based Generative Adversarial Networks: A Lightweight Alternative to Large Language Models
by Sharaf Rashid, Edson Bollis, Lucas Pellicer, Darian Rabbani, Rafael Palacios, Aneesh Gupta and Amar Gupta
Mach. Learn. Knowl. Extr. 2025, 7(3), 77; https://doi.org/10.3390/make7030077 - 6 Aug 2025
Abstract
Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach for text generation compared to large language models (LLMs). The low hardware barrier of training GANs poses a threat because it means more bad actors may use them to [...] Read more.
Generative Adversarial Networks (GANs) using Long Short-Term Memory (LSTM) provide a computationally cheaper approach for text generation compared to large language models (LLMs). The low hardware barrier of training GANs poses a threat because it means more bad actors may use them to mass-produce prompt attack messages against LLM systems. Thus, to better understand the threat of GANs being used for prompt attack generation, we train two well-known GAN architectures, SeqGAN and RelGAN, on prompt attack messages. For each architecture, we evaluate generated prompt attack messages, comparing results with each other, with generated attacks from another computationally cheap approach, a 1-billion-parameter Llama 3.2 small language model (SLM), and with messages from the original dataset. This evaluation suggests that GAN architectures like SeqGAN and RelGAN have the potential to be used in conjunction with SLMs to readily generate malicious prompts that impose new threats against LLM-based systems such as chatbots. Analyzing the effectiveness of state-of-the-art defenses against prompt attacks, we also find that GAN-generated attacks can deceive most of these defenses with varying levels of success with the exception of Meta’s PromptGuard. Further, we suggest an improvement of prompt attack defenses based on the analysis of the language quality of the prompts, which we found to be the weakest point of GAN-generated messages. Full article
Show Figures

Figure 1

16 pages, 2750 KiB  
Article
Combining Object Detection, Super-Resolution GANs and Transformers to Facilitate Tick Identification Workflow from Crowdsourced Images on the eTick Platform
by Étienne Clabaut, Jérémie Bouffard and Jade Savage
Insects 2025, 16(8), 813; https://doi.org/10.3390/insects16080813 (registering DOI) - 6 Aug 2025
Abstract
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance [...] Read more.
Ongoing changes in the distribution and abundance of several tick species of medical relevance in Canada have prompted the development of the eTick platform—an image-based crowd-sourcing public surveillance tool for Canada enabling rapid tick species identification by trained personnel, and public health guidance based on tick species and province of residence of the submitter. Considering that more than 100,000 images from over 73,500 identified records representing 25 tick species have been submitted to eTick since the public launch in 2018, a partial automation of the image processing workflow could save substantial human resources, especially as submission numbers have been steadily increasing since 2021. In this study, we evaluate an end-to-end artificial intelligence (AI) pipeline to support tick identification from eTick user-submitted images, characterized by heterogeneous quality and uncontrolled acquisition conditions. Our framework integrates (i) tick localization using a fine-tuned YOLOv7 object detection model, (ii) resolution enhancement of cropped images via super-resolution Generative Adversarial Networks (RealESRGAN and SwinIR), and (iii) image classification using deep convolutional (ResNet-50) and transformer-based (ViT) architectures across three datasets (12, 6, and 3 classes) of decreasing granularities in terms of taxonomic resolution, tick life stage, and specimen viewing angle. ViT consistently outperformed ResNet-50, especially in complex classification settings. The configuration yielding the best performance—relying on object detection without incorporating super-resolution—achieved a macro-averaged F1-score exceeding 86% in the 3-class model (Dermacentor sp., other species, bad images), with minimal critical misclassifications (0.7% of “other species” misclassified as Dermacentor). Given that Dermacentor ticks represent more than 60% of tick volume submitted on the eTick platform, the integration of a low granularity model in the processing workflow could save significant time while maintaining very high standards of identification accuracy. Our findings highlight the potential of combining modern AI methods to facilitate efficient and accurate tick image processing in community science platforms, while emphasizing the need to adapt model complexity and class resolution to task-specific constraints. Full article
(This article belongs to the Section Medical and Livestock Entomology)
Show Figures

Graphical abstract

Back to TopTop