Search Results (954)

Search Parameters:
Keywords = ChatGPT

13 pages, 643 KiB  
Article
Using Artificial Intelligence for Detecting Diabetic Foot Osteomyelitis: Validation of Deep Learning Model for Plain Radiograph Interpretation
by Francisco Javier Álvaro-Afonso, Aroa Tardáguila-García, Mateo López-Moral, Irene Sanz-Corbalán, Esther García-Morales and José Luis Lázaro-Martínez
Appl. Sci. 2025, 15(15), 8583; https://doi.org/10.3390/app15158583 (registering DOI) - 1 Aug 2025
Abstract
Objective: To develop and validate a ResNet-50-based deep learning model for the automatic detection of diabetic foot osteomyelitis (DFO) in plain radiographs of patients with diabetic foot ulcers (DFUs). Research Design and Methods: This retrospective study included 168 patients with type 1 or type 2 diabetes and clinical suspicion of DFO confirmed via surgical bone biopsy. An experienced clinician and a pretrained ResNet-50 model independently interpreted the radiographs. The model was developed using Python-based frameworks with ChatGPT assistance for coding. Diagnostic performance was assessed against the histopathological findings, calculating sensitivity, specificity, the positive predictive value (PPV), the negative predictive value (NPV), and the likelihood ratios. Agreement between the AI model and the clinician was evaluated using Cohen’s kappa coefficient. Results: The AI model demonstrated high sensitivity (92.8%) and PPV (0.97), but low specificity (4.4%). The clinician showed 90.2% sensitivity and 37.8% specificity. Cohen’s kappa coefficient between the AI model and the clinician was −0.105 (p = 0.117), indicating weak agreement. Both methods tended to classify many cases as DFO-positive, with 81.5% agreement in the positive cases. Conclusions: This study demonstrates the potential of AI to support the radiographic diagnosis of DFO using a ResNet-50-based deep learning model. AI-assisted radiographic interpretation could enhance early DFO detection, particularly in high-prevalence settings. However, further validation is necessary to improve its specificity and assess its utility in primary care. Full article
(This article belongs to the Special Issue Applications of Sensors in Biomechanics and Biomedicine)
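The statistics reported in the abstract above (sensitivity, specificity, PPV, NPV, and Cohen’s kappa) all derive from 2×2 tables. A minimal Python sketch of those standard formulas, using hypothetical counts rather than the study’s data:

```python
# Diagnostic metrics from a 2x2 confusion table (counts are hypothetical).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)   # sensitivity (recall for the positive class)
    spec = tn / (tn + fp)   # specificity
    ppv = tp / (tp + fp)    # positive predictive value
    npv = tn / (tn + fn)    # negative predictive value
    return sens, spec, ppv, npv

# Cohen's kappa for two raters' binary calls (e.g., AI model vs. clinician).
# Arguments are the four cell counts of the 2x2 agreement table.
def cohens_kappa(both_pos, a_pos_b_neg, a_neg_b_pos, both_neg):
    n = both_pos + a_pos_b_neg + a_neg_b_pos + both_neg
    po = (both_pos + both_neg) / n                    # observed agreement
    a_pos = (both_pos + a_pos_b_neg) / n              # rater A positive rate
    b_pos = (both_pos + a_neg_b_pos) / n              # rater B positive rate
    pe = a_pos * b_pos + (1 - a_pos) * (1 - b_pos)    # chance agreement
    return (po - pe) / (1 - pe)
```

A kappa near zero or slightly negative, as in the study, means agreement is no better than chance even when raw percent agreement looks high.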
15 pages, 317 KiB  
Review
The Contribution of Artificial Intelligence in Nursing Education: A Scoping Review of the Literature
by Federico Cucci, Dario Marasciulo, Mattia Romani, Giovanni Soldano, Donato Cascio, Giorgio De Nunzio, Cosimo Caldararo, Ivan Rubbi, Elsa Vitale, Roberto Lupo and Luana Conte
Nurs. Rep. 2025, 15(8), 283; https://doi.org/10.3390/nursrep15080283 (registering DOI) - 1 Aug 2025
Abstract
Background and Aim: Artificial intelligence (AI) is among the most promising innovations for transforming nursing education, making it more interactive, personalized, and competency-based. However, its integration also raises significant ethical and practical concerns. This scoping review aims to analyze and summarize key studies on the application of AI in university-level nursing education, focusing on its benefits, challenges, and future prospects. Methods: A scoping review was conducted using the Population, Concept, and Context (PCC) framework, targeting nursing students and educators in academic settings. A comprehensive search was carried out across the PubMed, Scopus, and Web of Science databases. Only peer-reviewed original studies published in English were included. Two researchers independently screened the studies, resolving any disagreements through team discussion. Data were synthesized narratively. Results: Of the 569 articles initially identified, 11 original studies met the inclusion criteria. The findings indicate that AI-based tools—such as virtual simulators and ChatGPT—can enhance students’ learning experiences, communication skills, and clinical preparedness. Nonetheless, several challenges were identified, including increased simulation-related anxiety, potential misuse, and ethical concerns related to data quality, privacy, and academic integrity. Conclusions: AI offers significant opportunities to enhance nursing education; however, its implementation must be approached with critical awareness and responsibility. It is essential that students develop both digital competencies and ethical sensitivity to fully leverage AI’s potential while ensuring high-quality education and responsible nursing practice. Full article
13 pages, 1003 KiB  
Article
Evaluation of an Artificial Intelligence-Generated Health Communication Material on Bird Flu Precautions
by Ayokunle A. Olagoke, Comfort Tosin Adebayo, Joseph Ayotunde Aderonmu, Emmanuel A. Adeaga and Kimberly J. Johnson
Zoonotic Dis. 2025, 5(3), 22; https://doi.org/10.3390/zoonoticdis5030022 - 1 Aug 2025
Abstract
The 2025 avian influenza A(H5N1) outbreak has highlighted the urgent need for rapidly generated health communication materials during public health emergencies. Artificial intelligence (AI) systems offer transformative potential to accelerate content development pipelines while maintaining scientific accuracy and impact. We evaluated an AI-generated health communication material on bird flu precautions among 100 U.S. adults. The material was developed using ChatGPT for text generation based on CDC guidelines and Leonardo.AI for illustrations. Participants rated perceived message effectiveness, quality, realism, relevance, attractiveness, and visual informativeness. The AI-generated health communication material received favorable ratings across all dimensions: perceived message effectiveness (3.83/5, 77%), perceived message quality (3.84/5, 77%), realism (3.72/5, 74%), relevance (3.68/5, 74%), attractiveness (3.62/5, 74%), and visual informativeness (3.35/5, 67%). Linear regression analysis revealed that all features significantly predicted perceived message effectiveness in unadjusted and adjusted models (p < 0.0001), e.g., multivariate analysis of outcome on perceived visual informativeness showed β = 0.51, 95% CI: 0.37–0.66, p < 0.0001. Also, mediation analysis revealed that visual informativeness accounted for 23.8% of the relationship between material attractiveness and perceived effectiveness. AI tools can enable real-time adaptation of prevention guidance during epidemiological emergencies while maintaining effective risk communication. Full article
9 pages, 299 KiB  
Article
Assessing the Accuracy and Readability of Large Language Model Guidance for Patients on Breast Cancer Surgery Preparation and Recovery
by Elena Palmarin, Stefania Lando, Alberto Marchet, Tania Saibene, Silvia Michieletto, Matteo Cagol, Francesco Milardi, Dario Gregori and Giulia Lorenzoni
J. Clin. Med. 2025, 14(15), 5411; https://doi.org/10.3390/jcm14155411 (registering DOI) - 1 Aug 2025
Abstract
Background/Objectives: Accurate and accessible perioperative health information empowers patients and enhances recovery outcomes. Artificial intelligence tools, such as ChatGPT, have garnered attention for their potential in health communication. This study evaluates the accuracy and readability of responses generated by ChatGPT to questions commonly asked about breast cancer. Methods: Fifteen simulated patient queries about breast cancer surgery preparation and recovery were prepared. Responses generated by ChatGPT (4o version) were evaluated for accuracy by a pool of breast surgeons using a 4-point Likert scale. Readability was assessed with the Flesch–Kincaid Grade Level (FKGL). Descriptive statistics were used to summarize the findings. Results: Of the 15 responses evaluated, 11 were rated as “accurate and comprehensive”, while the remaining 4 were deemed “correct but incomplete”. No responses were classified as “partially incorrect” or “completely incorrect”. The median FKGL score was 11.2, indicating a high school reading level. While most responses were technically accurate, the complexity of language exceeded the recommended readability levels for patient-directed materials. Conclusions: The model shows potential as a complementary resource for patient education in breast cancer surgery, but should not replace direct interaction with healthcare providers. Future research should focus on enhancing language models’ ability to generate accessible and patient-friendly content. Full article
(This article belongs to the Section Oncology)
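The FKGL readability metric used in the study above is a fixed linear formula over word, sentence, and syllable counts. A minimal sketch with the standard published coefficients (how syllables are counted varies between implementations, so scores can differ slightly from tool to tool):

```python
# Flesch-Kincaid Grade Level: maps text statistics to a U.S. school grade.
# A score of ~11, as reported above, corresponds to high-school reading level.
def fkgl(total_words, total_sentences, total_syllables):
    return (0.39 * (total_words / total_sentences)
            + 11.8 * (total_syllables / total_words)
            - 15.59)
```

Patient-facing materials are often recommended to target roughly grade 6–8, well below the median of 11.2 reported here.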
26 pages, 1768 KiB  
Article
Generative AI in Education: Mapping the Research Landscape Through Bibliometric Analysis
by Sai-Leung Ng and Chih-Chung Ho
Information 2025, 16(8), 657; https://doi.org/10.3390/info16080657 (registering DOI) - 31 Jul 2025
Abstract
The rapid emergence of generative AI (GAI) technologies has sparked significant transformation across educational landscapes worldwide. This study presents a comprehensive bibliometric analysis of GAI in education, mapping scholarly trends from 2022 to 2025. Drawing on 3808 peer-reviewed journal articles indexed in Scopus, the analysis reveals exponential growth in publications, with dominant contributions from the United States, China, and Hong Kong. Using VOSviewer, the study identifies six major thematic clusters, including GAI in higher education, ethics, technological foundations, writing support, and assessment. Prominent tools, especially ChatGPT, are shown to influence pedagogical design, academic integrity, and learner engagement. The study highlights interdisciplinary integration, regional research ecosystems, and evolving keyword patterns reflecting the field’s transition from tool-based inquiry to learner-centered concerns. This review offers strategic insights for educators, researchers, and policymakers navigating AI’s transformative role in education. Full article
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)
14 pages, 283 KiB  
Article
Teens, Tech, and Talk: Adolescents’ Use of and Emotional Reactions to Snapchat’s My AI Chatbot
by Gaëlle Vanhoffelen, Laura Vandenbosch and Lara Schreurs
Behav. Sci. 2025, 15(8), 1037; https://doi.org/10.3390/bs15081037 - 30 Jul 2025
Abstract
Due to technological advancements such as generative artificial intelligence (AI) and large language models, chatbots enable increasingly human-like, real-time conversations through text (e.g., OpenAI’s ChatGPT) and voice (e.g., Amazon’s Alexa). One AI chatbot that is specifically designed to meet the social-supportive needs of youth is Snapchat’s My AI. Given its increasing popularity among adolescents, the present study investigated whether adolescents’ likelihood of using My AI, as well as their positive or negative emotional experiences from interacting with the chatbot, is related to socio-demographic factors (i.e., gender, age, and socioeconomic status (SES)). A cross-sectional study was conducted among 303 adolescents (64.1% girls, 35.9% boys, 1.0% other, 0.7% preferred not to say their gender; Mage = 15.89, SDage = 1.69). The findings revealed that younger adolescents were more likely to use My AI and experienced more positive emotions from these interactions than older adolescents. No significant relationships were found for gender or SES. These results highlight the potential for age to play a critical role in shaping adolescents’ engagement with AI chatbots on social media and their emotional outcomes from such interactions, underscoring the need to consider developmental factors in AI design and policy. Full article
14 pages, 3600 KiB  
Article
Performance of Large Language Models in Recognizing Brain MRI Sequences: A Comparative Analysis of ChatGPT-4o, Claude 4 Opus, and Gemini 2.5 Pro
by Ali Salbas and Rasit Eren Buyuktoka
Diagnostics 2025, 15(15), 1919; https://doi.org/10.3390/diagnostics15151919 - 30 Jul 2025
Abstract
Background/Objectives: Multimodal large language models (LLMs) are increasingly used in radiology. However, their ability to recognize fundamental imaging features, including modality, anatomical region, imaging plane, contrast-enhancement status, and particularly specific magnetic resonance imaging (MRI) sequences, remains underexplored. This study aims to evaluate and compare the performance of three advanced multimodal LLMs (ChatGPT-4o, Claude 4 Opus, and Gemini 2.5 Pro) in classifying brain MRI sequences. Methods: A total of 130 brain MRI images from adult patients without pathological findings were used, representing 13 standard MRI series. Models were tested using zero-shot prompts for identifying modality, anatomical region, imaging plane, contrast-enhancement status, and MRI sequence. Accuracy was calculated, and differences among models were analyzed using Cochran’s Q test and McNemar test with Bonferroni correction. Results: ChatGPT-4o and Gemini 2.5 Pro achieved 100% accuracy in identifying the imaging plane and 98.46% in identifying contrast-enhancement status. MRI sequence classification accuracy was 97.7% for ChatGPT-4o, 93.1% for Gemini 2.5 Pro, and 73.1% for Claude 4 Opus (p < 0.001). The most frequent misclassifications involved fluid-attenuated inversion recovery (FLAIR) sequences, often misclassified as T1-weighted or diffusion-weighted sequences. Claude 4 Opus showed lower accuracy in susceptibility-weighted imaging (SWI) and apparent diffusion coefficient (ADC) sequences. Gemini 2.5 Pro exhibited occasional hallucinations, including irrelevant clinical details such as “hypoglycemia” and “Susac syndrome.” Conclusions: Multimodal LLMs demonstrate high accuracy in basic MRI recognition tasks but vary significantly in specific sequence classification tasks. Hallucinations emphasize caution in clinical use, underlining the need for validation, transparency, and expert oversight. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
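The pairwise model comparisons in the study above rely on McNemar’s test, which looks only at the discordant pairs (cases where exactly one model is correct). A minimal sketch of the chi-square variant with continuity correction; the authors may have used an exact or software-library implementation, and the Bonferroni step simply divides the significance threshold by the number of pairwise comparisons:

```python
import math

# McNemar's test for two classifiers evaluated on the same cases.
# b = cases only model A got right, c = cases only model B got right.
def mcnemar(b, c):
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)        # continuity-corrected statistic
    p = math.erfc(math.sqrt(chi2 / 2))            # chi-square (df=1) tail probability
    return chi2, p
```

With three models, a Bonferroni-corrected threshold would be 0.05 / 3 ≈ 0.0167 per pairwise test.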
15 pages, 747 KiB  
Article
Comparative Analysis of LLMs in Dry Eye Syndrome Healthcare Information
by Gloria Wu, Hrishi Paliath-Pathiyal, Obaid Khan and Margaret C. Wang
Diagnostics 2025, 15(15), 1913; https://doi.org/10.3390/diagnostics15151913 - 30 Jul 2025
Abstract
Background/Objective: Dry eye syndrome affects 16 million Americans with USD 52 billion in annual healthcare costs. With large language models (LLMs) increasingly used for healthcare information, understanding their performance in delivering equitable dry eye guidance across diverse populations is critical. This study aims to evaluate and compare five major LLMs (Grok, ChatGPT, Gemini, Claude.ai, and Meta AI) regarding dry eye syndrome information delivery across different demographic groups. Methods: LLMs were queried using standardized prompts simulating a 62-year-old patient with dry eye symptoms across four demographic categories (White, Black, East Asian, and Hispanic males and females). Responses were analyzed for word count, readability, cultural sensitivity scores (0–3 scale), keyword coverage, and response times. Results: Significant variations existed across LLMs. Word counts ranged from 32 to 346 words, with Gemini being the most comprehensive (653.8 ± 96.2 words) and Claude.ai being the most concise (207.6 ± 10.8 words). Cultural sensitivity scores revealed Grok demonstrated the highest awareness for minority populations (scoring 3 for Black and Hispanic demographics), while Meta AI showed minimal cultural tailoring (0.5 ± 0.5). All models recommended specialist consultation, but medical term coverage varied significantly. Response times ranged from 7.41 s (Meta AI) to 25.32 s (Gemini). Conclusions: While all LLMs provided appropriate referral recommendations, substantial disparities exist in cultural sensitivity, content depth, and information delivery across demographic groups. No LLM consistently addressed the full spectrum of dry eye causes across all demographics. These findings underscore the importance of physician oversight and standardization in AI-generated healthcare information to ensure equitable access and prevent care delays. Full article
(This article belongs to the Special Issue Artificial Intelligence Application in Cornea and External Diseases)
13 pages, 311 KiB  
Article
Diagnostic Performance of ChatGPT-4o in Analyzing Oral Mucosal Lesions: A Comparative Study with Experts
by Luigi Angelo Vaira, Jerome R. Lechien, Antonino Maniaci, Andrea De Vito, Miguel Mayo-Yáñez, Stefania Troise, Giuseppe Consorti, Carlos M. Chiesa-Estomba, Giovanni Cammaroto, Thomas Radulesco, Arianna di Stadio, Alessandro Tel, Andrea Frosolini, Guido Gabriele, Giannicola Iannella, Alberto Maria Saibene, Paolo Boscolo-Rizzo, Giovanni Maria Soro, Giovanni Salzano and Giacomo De Riu
Medicina 2025, 61(8), 1379; https://doi.org/10.3390/medicina61081379 - 30 Jul 2025
Abstract
Background and Objectives: This pilot study aimed to evaluate the diagnostic accuracy of ChatGPT-4o in analyzing oral mucosal lesions from clinical images. Materials and Methods: A total of 110 clinical images, including 100 pathological lesions and 10 healthy mucosal images, were retrieved from Google Images and analyzed by ChatGPT-4o using a standardized prompt. An expert panel of five clinicians established a reference diagnosis, categorizing lesions as benign or malignant. The AI-generated diagnoses were classified as correct or incorrect and further categorized as plausible or not plausible. The accuracy, sensitivity, specificity, and agreement with the expert panel were analyzed. The Artificial Intelligence Performance Instrument (AIPI) was used to assess the quality of AI-generated recommendations. Results: ChatGPT-4o correctly diagnosed 85% of cases. Among the 15 incorrect diagnoses, 10 were deemed plausible by the expert panel. The AI misclassified three malignant lesions as benign but did not categorize any benign lesions as malignant. Sensitivity and specificity were 91.7% and 100%, respectively. The AIPI score averaged 17.6 ± 1.73, indicating strong diagnostic reasoning. The McNemar test showed no significant differences between AI and expert diagnoses (p = 0.084). Conclusions: In this proof-of-concept pilot study, ChatGPT-4o demonstrated high diagnostic accuracy and strong descriptive capabilities in oral mucosal lesion analysis. A residual 8.3% false-negative rate for malignant lesions underscores the need for specialist oversight; however, the model shows promise as an AI-powered triage aid in settings with limited access to specialized care. Full article
(This article belongs to the Section Dentistry and Oral Health)
8 pages, 192 KiB  
Brief Report
Accuracy and Safety of ChatGPT-3.5 in Assessing Over-the-Counter Medication Use During Pregnancy: A Descriptive Comparative Study
by Bernadette Cornelison, David R. Axon, Bryan Abbott, Carter Bishop, Cindy Jebara, Anjali Kumar and Kristen A. Root
Pharmacy 2025, 13(4), 104; https://doi.org/10.3390/pharmacy13040104 - 30 Jul 2025
Abstract
As artificial intelligence (AI) becomes increasingly utilized to perform tasks requiring human intelligence, patients who are pregnant may turn to AI for advice on over-the-counter (OTC) medications. However, medications used in pregnancy may pose profound safety concerns limited by data availability. This study focuses on a chatbot’s ability to accurately provide information regarding OTC medications as it relates to patients who are pregnant. A prospective, descriptive design was used to compare the responses generated by the Chat Generative Pre-Trained Transformer 3.5 (ChatGPT-3.5) to the information provided by UpToDate®. Eighty-seven of the top pharmacist-recommended OTC drugs in the United States (U.S.) as identified by Pharmacy Times were assessed for safe use in pregnancy using ChatGPT-3.5. A piloted, standard prompt was input into ChatGPT-3.5, and the responses were recorded. Two groups independently rated the responses compared to UpToDate on their correctness, completeness, and safety using a 5-point Likert scale. After independent evaluations, the groups discussed the findings to reach a consensus, with a third independent investigator giving final ratings. For correctness, the median score was 5 (interquartile range [IQR]: 5–5). For completeness, the median score was 4 (IQR: 4–5). For safety, the median score was 5 (IQR: 5–5). Despite high overall scores, the safety errors in 9% of the evaluations (n = 8), including omissions that pose a risk of serious complications, currently render the chatbot an unsafe standalone resource for this purpose. Full article
(This article belongs to the Special Issue AI Use in Pharmacy and Pharmacy Education)
17 pages, 1540 KiB  
Article
Evaluating a Nationally Localized AI Chatbot for Personalized Primary Care Guidance: Insights from the HomeDOCtor Deployment in Slovenia
by Matjaž Gams, Tadej Horvat, Žiga Kolar, Primož Kocuvan, Kostadin Mishev and Monika Simjanoska Misheva
Healthcare 2025, 13(15), 1843; https://doi.org/10.3390/healthcare13151843 - 29 Jul 2025
Abstract
Background/Objectives: The demand for accessible and reliable digital health services has increased significantly in recent years, particularly in regions facing physician shortages. HomeDOCtor, a conversational AI platform developed in Slovenia, addresses this need with a nationally adapted architecture that combines retrieval-augmented generation (RAG) and a Redis-based vector database of curated medical guidelines. The objective of this study was to assess the performance and impact of HomeDOCtor in providing AI-powered healthcare assistance. Methods: HomeDOCtor is designed for human-centered communication and clinical relevance, supporting multilingual and multimedia citizen inputs while being available 24/7. It was tested using a set of 100 international clinical vignettes and 150 internal medicine exam questions from the University of Ljubljana to validate its clinical performance. Results: During its six-month nationwide deployment, HomeDOCtor received overwhelmingly positive user feedback with minimal criticism, and exceeded initial expectations, especially in light of widespread media narratives warning about the risks of AI. HomeDOCtor autonomously delivered localized, evidence-based guidance, including self-care instructions and referral suggestions, with average response times under three seconds. On international benchmarks, the system achieved ≥95% Top-1 diagnostic accuracy, comparable to leading medical AI platforms, and significantly outperformed stand-alone ChatGPT-4o in the national context (90.7% vs. 80.7%, p = 0.0135). Conclusions: Practically, HomeDOCtor eases the burden on healthcare professionals by providing citizens with 24/7 autonomous, personalized triage and self-care guidance for less complex medical issues, ensuring that these cases are self-managed efficiently. The system also identifies more serious cases that might otherwise be neglected, directing them to professionals for appropriate care. Theoretically, HomeDOCtor demonstrates that domain-specific, nationally adapted large language models can outperform general-purpose models. Methodologically, it offers a framework for integrating GDPR-compliant AI solutions in healthcare. These findings emphasize the value of localization in conversational AI and telemedicine solutions across diverse national contexts. Full article
(This article belongs to the Special Issue Application of Digital Services to Improve Patient-Centered Care)
42 pages, 1300 KiB  
Article
A Hybrid Human-AI Model for Enhanced Automated Vulnerability Scoring in Modern Vehicle Sensor Systems
by Mohamed Sayed Farghaly, Heba Kamal Aslan and Islam Tharwat Abdel Halim
Future Internet 2025, 17(8), 339; https://doi.org/10.3390/fi17080339 - 28 Jul 2025
Abstract
Modern vehicles are rapidly transforming into interconnected cyber–physical systems that rely on advanced sensor technologies and pervasive connectivity to support autonomous functionality. Yet, despite this evolution, standardized methods for quantifying cybersecurity vulnerabilities across critical automotive components remain scarce. This paper introduces a novel hybrid model that integrates expert-driven insights with generative AI tools to adapt and extend the Common Vulnerability Scoring System (CVSS) specifically for autonomous vehicle sensor systems. Following a three-phase methodology, the study conducted a systematic review of 16 peer-reviewed sources (2018–2024), applied CVSS version 4.0 scoring to 15 representative attack types, and evaluated four freely available generative AI models—ChatGPT, DeepSeek, Gemini, and Copilot—on a dataset of 117 annotated automotive-related vulnerabilities. Expert validation from 10 domain professionals reveals that Light Detection and Ranging (LiDAR) sensors are the most vulnerable (9 distinct attack types), followed by Radio Detection And Ranging (radar) (8) and ultrasonic (6). Network-based attacks dominate (104 of 117 cases), with 92.3% of the dataset exhibiting low attack complexity and 82.9% requiring no user interaction. The most severe attack vectors, as scored by experts using CVSS, include eavesdropping (7.19), Sybil attacks (6.76), and replay attacks (6.35). Evaluation of large language models (LLMs) showed that DeepSeek achieved an F1 score of 99.07% on network-based attacks, while all models struggled with minority classes such as high complexity (e.g., ChatGPT F1 = 0%, Gemini F1 = 15.38%). The findings highlight the potential of integrating expert insight with AI efficiency to deliver more scalable and accurate vulnerability assessments for modern vehicular systems. This study offers actionable insights for vehicle manufacturers and cybersecurity practitioners, aiming to inform strategic efforts to fortify sensor integrity, optimize network resilience, and ultimately enhance the cybersecurity posture of next-generation autonomous vehicles. Full article
18 pages, 271 KiB  
Article
AI Pioneers and Stragglers in Greece: Challenges, Gaps, and Opportunities for Journalists and Media
by Sotirios Triantafyllou, Andreas M. Panagopoulos and Panagiotis Kapos
Societies 2025, 15(8), 209; https://doi.org/10.3390/soc15080209 - 28 Jul 2025
Abstract
Media organizations are experiencing ongoing transformation, increasingly driven by the advancement of AI technologies. This development has begun to link journalists with generative systems and synthetic technologies. Although newsrooms worldwide are exploring AI adoption to improve information sourcing, news production, and distribution, a gap exists between resource-rich organizations and those with limited means. Since ChatGPT 3.5 was released on 30 November 2022, Greek media and journalists have been able to use and explore AI technology. In this study, we examine the use of AI in Greek newsrooms, as well as journalists’ reflections and concerns. Through qualitative analysis, our findings indicate that the adoption and integration of these tools in Greek newsrooms is marked by the lack of formal institutional policies, leading to a predominantly self-directed and individualized use of these technologies by journalists. Greek journalists engage with AI tools both professionally and personally, often without organizational guidance or formal training. This issue may compromise the quality of journalism due to the absence of established guidelines. Consequently, individuals may produce content that is inconsistent with the media outlet’s identity or that disseminates misinformation. Age, gender, and newsroom roles do not constitute limiting factors for this “experimentation”, as survey participants showed familiarity with this technology. In addition, in some cases the weaknesses of specific tools in producing quality output in Greek inhibit further exploration and use. All of this points to the need for immediate training, literacy, and ethical frameworks. Full article
17 pages, 1035 KiB  
Article
Whether and When Could Generative AI Improve College Student Learning Engagement?
by Fei Guo, Lanwen Zhang, Tianle Shi and Hamish Coates
Behav. Sci. 2025, 15(8), 1011; https://doi.org/10.3390/bs15081011 - 25 Jul 2025
Viewed by 281
Abstract
Generative AI (GenAI) technologies have been widely adopted by college students since the launch of ChatGPT in late 2022. While the debate about GenAI’s role in higher education continues, there is a lack of empirical evidence regarding whether and when these technologies can improve the learning experience for college students. This study uses data from a survey of 72,615 undergraduate students across 25 universities and colleges in China to explore the relationships between GenAI use and student learning engagement in different learning environments. The findings reveal that over sixty percent of Chinese college students used GenAI technologies in the 2023–2024 academic year, with academic use exceeding everyday use. GenAI use in academic tasks is associated with greater cognitive and emotional engagement, though it may also reduce active learning activities and learning motivation. Furthermore, this study highlights that the role of GenAI varies across learning environments. The positive associations between GenAI use and student engagement are most prominent for students in “high-challenge, high-support” learning contexts, while GenAI use is mostly negatively associated with student engagement in “low-challenge, high-support” courses. These findings suggest that while GenAI plays a valuable role in the learning process for college students, its effectiveness is fundamentally conditioned by the instructional design of human teachers. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)
21 pages, 1745 KiB  
Article
AI and Q Methodology in the Context of Using Online Escape Games in Chemistry Classes
by Markéta Dobečková, Ladislav Simon, Lucia Boldišová and Zita Jenisová
Educ. Sci. 2025, 15(8), 962; https://doi.org/10.3390/educsci15080962 - 25 Jul 2025
Viewed by 181
Abstract
The contemporary digital era has fundamentally reshaped pupil education. It has transformed learning into a dynamic environment with enhanced access to information. The focus shifts to the educator, who must employ teaching strategies, practices, and methods that engage and motivate pupils. New possibilities are emerging for adopting active pedagogical approaches; one example is the use of educational online escape games. In the theoretical part of this paper, we present online escape games as a tool that broadens pedagogical opportunities in primary school chemistry education. These activities are known to foster pupils’ transversal, or soft, skills. We investigate the practical dimension of implementing escape games in education. This pilot study aims to analyse primary school teachers’ perceptions of online escape games. We collected data using Q methodology and conducted the Q-sort through digital technology. Data analysis utilised both the PQMethod programme and ChatGPT (GPT-4o), with a subsequent comparison of their respective outputs. Although some numerical differences appeared between the ChatGPT and PQMethod analyses, both methods yielded the same factor saturation and overall results. Full article
(This article belongs to the Special Issue Innovation in Teacher Education Practices)