Search Results (70)

Search Parameters:
Keywords = AI Chatbot adoption

15 pages, 497 KB  
Article
An Assessment of GPT-3.5 and GPT-4.0 Responses to Scoliosis FAQs
by Tu-Lan Vu-Han, Enikö Regényi, Vikram Sunkara, Paul Köhli, Friederike Schömig, Alexander P. Hughes, Michael Putzier, Matthias Pumberger and Thilo Khakzad
J. Pers. Med. 2026, 16(4), 206; https://doi.org/10.3390/jpm16040206 - 7 Apr 2026
Viewed by 221
Abstract
Background: ChatGPT is a large language model (LLM) online chatbot developed by OpenAI and launched in November 2022. Early adoption studies have shown high readiness to use this technology for health-related questions and self-diagnosis. However, the quality and clinical adequacy of health-related responses remain incompletely characterized. This study aimed to explore responses generated by ChatGPT-3.5 and ChatGPT-4.0 to common patient questions regarding scoliosis. Methods: Ten scoliosis-related frequently asked questions (FAQs) were selected from a larger pool of over 250 patient-facing questions compiled from 17 publicly available FAQ webpages and informed by a Google Trends analysis. Questions were harmonized, grouped by theme, and then reduced by rule-based expert review to a final set intended to represent common patient concerns. Results: The median ratings of ChatGPT-3.5 and ChatGPT-4.0 responses ranged from satisfactory, requiring minimal (2) to moderate clarification (3). Across the ten matched questions, no statistically detectable difference was found between models in this study setting (W = 8.0, p = 0.59; Cliff’s δ = −0.12 [95% CI −0.58, 0.40]); however, given the small question set, unblinded rating process, and poor inter-rater reliability, this should not be interpreted as evidence of equivalence, non-inferiority, or comparable model performance. The results apply only to the 10–15 April 2024, online snapshots of ChatGPT-3.5 and ChatGPT-4.0 and should not be generalized to later model iterations. Conclusions: This study should be interpreted as a clinically oriented observational report, intended to inform physician awareness and patient-physician communication rather than validate chatbot accuracy or safety. In this 10–15 April 2024, sample, both model outputs frequently required clinician clarification. 
Given the small FAQ set, low inter-rater reliability, unblinded design, and single-sample outputs, the findings do not establish equivalence or superiority and apply only to the specific 10–15 April 2024 model snapshots and evaluated questions.
(This article belongs to the Special Issue AI and Precision Medicine: Innovations and Applications)
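For context on the effect size quoted in this abstract: Cliff's δ compares two samples of ordinal ratings by counting how often a value from one sample exceeds a value from the other. A minimal stdlib Python sketch; the ratings below are invented for illustration and are not the study's data:

```python
from itertools import product

def cliffs_delta(xs, ys):
    """Cliff's delta: (#{x > y} - #{x < y}) / (n_x * n_y).
    +1 means every x exceeds every y, -1 the reverse, 0 complete overlap."""
    gt = sum(1 for x, y in product(xs, ys) if x > y)
    lt = sum(1 for x, y in product(xs, ys) if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical clarification ratings for ten questions per model
# (1 = none needed ... 4 = major clarification); NOT the paper's data.
gpt35 = [2, 3, 2, 3, 3, 2, 4, 3, 2, 3]
gpt40 = [2, 2, 3, 3, 2, 2, 3, 3, 2, 3]
print(cliffs_delta(gpt35, gpt40))  # 0.15 for these toy ratings
```

Values near zero indicate heavily overlapping rating distributions, which is how an estimate like the reported δ = −0.12 with a wide confidence interval should be read.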
31 pages, 1345 KB  
Article
Navigating the Dual-View Phenomenon: Social Ambivalence, Ambivalence Literacy, and Lecturer Role Transformation in AI-Integrated Transnational STEM Education
by Kamalanathan Kajan, Wenyuan Shi, Dariusz Wanatowski and Matt Ryan
Educ. Sci. 2026, 16(4), 554; https://doi.org/10.3390/educsci16040554 - 1 Apr 2026
Viewed by 335
Abstract
Generative AI chatbots are becoming routine study companions in STEM, which raises a pedagogical question: what do students expect human lecturers to do differently when AI support is ubiquitous? This study examines STEM undergraduates’ expectations for a transformation of the lecturer role and their social ambivalence toward AI chatbots in Sino-foreign transnational education (TNE) programmes in China. We administered an online survey to 467 consenting undergraduates across four partnership institutions (three with sufficient subgroup sizes for institutional comparison). The survey instrument captured adoption readiness, perceived AI-enabled learning enhancement, expected changes to the lecturer role (multi-select), perceived social enhancement and social reduction mechanisms, and perceived support needs; it also asked an open-ended question, collecting 454 usable comments. We report descriptive statistics, χ2 tests, Spearman correlations, and exploratory content analysis results. Students expected lecturers to shift from content delivery to facilitation: 52.7% anticipated that chatbots would handle routine questions, enabling more discussion and practical activities, and 49.7% expected greater emphasis on guiding deep thinking and problem solving. Perceived social impacts were strongly ambivalent: 92.2% endorsed at least one social enhancement and at least one social reduction mechanism, and enhancement and reduction indices were positively associated (ρ = 0.547, p < 0.001), a pattern that remained stable under alternative scoring and response-style trimming (ρ range = 0.526–0.590). Importantly, higher social ambivalence was linked to stronger expectations of lecturer governance and orchestration, including the curation of chatbot resources (42.5% vs. 9.7% in high vs. low ambivalence; χ2(1) = 44.12, p < 0.001) and accuracy checking (27.6% vs. 13.4%; χ2(1) = 8.82, p = 0.003). 
We therefore propose ambivalence literacy as a conceptual framework for responsible AI integration: a teachable capability to recognise and navigate simultaneous social benefits and risks of AI use, and to translate that recognition into concrete expectations for lecturer governance, orchestration, and facilitative teaching design in AI-integrated transnational STEM programmes.
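The ρ values in this abstract are Spearman rank correlations, which need nothing beyond a rank transformation followed by Pearson's formula. A stdlib Python sketch using midranks for ties; the two index vectors below are invented, not the survey data:

```python
def ranks(values):
    """Midrank positions (1-based); tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1  # extend over a run of ties
        midrank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = midrank
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Pearson correlation applied to the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical enhancement/reduction index scores (NOT the study's data):
enh = [3, 5, 4, 6, 2, 5, 4]
red = [2, 4, 4, 5, 1, 5, 3]
rho = spearman_rho(enh, red)  # strongly positive, mirroring the reported pattern
```

A positive ρ here means respondents who endorse more social-enhancement mechanisms also endorse more social-reduction mechanisms, which is precisely the ambivalence the authors describe.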
20 pages, 502 KB  
Article
Design and Evaluation of a Retrieval-Augmented Generation LLM Chatbot with Structured Database Access
by Juan Burbano, Pablo Landeta-López, Cathy Guevara-Vega and Antonio Quiña-Mera
Appl. Sci. 2026, 16(7), 3147; https://doi.org/10.3390/app16073147 - 25 Mar 2026
Viewed by 578
Abstract
Context. The grocery sector is undergoing a massive shift in consumer behavior, with global chatbot usage projected to reach 8.4 billion units by 2024—surpassing the total human population—and online grocery revenue per shopper expected to hit USD 449.00 by 2023. In this competitive landscape, small grocery stores must adopt AI-driven tools to modernize their operations. However, these businesses often face significant inefficiencies in manual inventory management, resulting in errors and reduced competitiveness. Objective. This research aims to develop and validate a chatbot application using Large Language Models and Retrieval-Augmented Generation (RAG) for operational management of grocery stores. Method. The method employed a quantitative experimental approach with a five-component system architecture: a web interface, a FastAPI API, a Mistral-7B-Instruct-v0.2 model, a dynamic SQL generator, and a custom RAG application with an FAISS vector database, all integrated through SQLAlchemy 2.0.40. Results. The results demonstrate that a chatbot achieves an average response time of 0.08 s with 80% overall accuracy, showing a 96.2% improvement in information query time and a 92.9% reduction in operational errors. Conclusions. Major conclusions suggest that the chatbot system is effective for retail environments and has the potential to enhance the operational efficiency of grocery stores, serving as a foundation for future research in applied conversational assistance. Full article
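The routing idea behind the five-component architecture above (structured questions go to generated SQL, everything else to retrieval) can be caricatured in a few dozen lines. The sketch below is a stdlib-only stand-in: sqlite3 instead of the production database, word overlap instead of FAISS embeddings, keyword rules instead of Mistral-7B-Instruct; every name in it is invented for illustration, not taken from the paper's code:

```python
import sqlite3

# Tiny "knowledge base" standing in for the RAG document store.
DOCS = {
    "returns": "Returns are accepted within 30 days with a receipt.",
    "hours": "Opening hours are 8:00 to 21:00, Monday to Saturday.",
}

def setup_inventory():
    """In-memory stand-in for the store's structured database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE stock (item TEXT, qty INTEGER)")
    db.executemany("INSERT INTO stock VALUES (?, ?)",
                   [("rice", 40), ("milk", 12), ("sugar", 0)])
    return db

def answer(db, question):
    words = question.lower().split()
    # Structured path: issue a parameterised SQL query for stock questions.
    if "stock" in words or "many" in words:
        for (item,) in db.execute("SELECT item FROM stock"):
            if item in words:
                qty = db.execute("SELECT qty FROM stock WHERE item = ?",
                                 (item,)).fetchone()[0]
                return f"{item}: {qty} units in stock"
    # Retrieval path: return the document with the most word overlap.
    return max(DOCS.values(),
               key=lambda d: len(set(d.lower().split()) & set(words)))

db = setup_inventory()
print(answer(db, "How many rice do we have in stock?"))  # rice: 40 units in stock
```

The design point the paper makes survives even in this toy: numeric inventory answers come from the database, not from generated text, which is why the reported error reduction is plausible.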
27 pages, 813 KB  
Article
Towards a Sustainable and Ethical Integration of AI Chatbots in Higher Education
by Mirela-Catrinel Voicu, Nicoleta Sîrghi, Gabriela Mircea and Daniela Maria-Magdalena Toth
Sustainability 2026, 18(5), 2534; https://doi.org/10.3390/su18052534 - 5 Mar 2026
Viewed by 461
Abstract
This paper examines students’ perceptions of factors influencing normative support for the integration of AI Chatbots in universities, providing an empirical basis for developing institutional policies and implementation strategies in higher education. Framed within the sustainability perspective, the study examines how ethical, cognitive, and perceptual factors shape the long-term adoption of AI technologies in academic environments. Our study employs a structural model comprising 10 constructs, 46 items, and 9 hypotheses, tested on a sample of 408 economics students from Timisoara. The research identifies AI literacy as the most influential factor in the formal integration of these technologies in universities. The following factors have a direct impact: teacher perception, student perception, and cognitive risks (reliance on AI Chatbots and avoidance of intellectual effort). Use for personalized learning is a factor with a significant direct effect on positive perceptions and intentions to use AI Chatbots among students. Academic integrity risks, as well as limitations on accuracy and reliability, have no significant impact. AI Chatbots represent an essential opportunity to transform higher education. However, their positive impact is realized only through responsible formal integration, grounded in ethical policies, adequate digital education, and the adaptation of pedagogical practices. Universities must regard AI as a strategic ally for teachers and students, while keeping human interaction, critical thinking, and academic integrity at the centre of the educational process. The study argues that students’ perceptions are that universities must approach AI Integration as a strategic component of sustainable educational ecosystems, aligning innovation with long-term academic integrity and the objectives of sustainable development, particularly Sustainable Development Goal 4 (Quality Education). Full article
20 pages, 1100 KB  
Review
Educational Applications of AI-Based Chatbots in Nursing: A Scoping Review
by Francisco Fernandes, Rúben Encarnação, José Alves, Carla Pais-Vieira, Suzinara Beatriz Soares de Lima and Paulo Alves
Nurs. Rep. 2026, 16(3), 87; https://doi.org/10.3390/nursrep16030087 - 3 Mar 2026
Viewed by 850
Abstract
Background/Objectives: The rapid expansion of generative artificial intelligence (AI) and large language model-based chatbots has accelerated their adoption in higher education, including nursing. This scoping review mapped the use of AI-based chatbots in nursing education, including curricular domains, pedagogical approaches, educational outcomes, and implementation challenges. Methods: A scoping review was conducted following the Joanna Briggs Institute methodology and reported in accordance with the PRISMA-ScR guideline. Searches were performed across major bibliographic databases and grey literature sources. Quantitative, qualitative, and mixed-methods studies addressing the use of AI chatbots in nursing education or professional training were included. Data were extracted using a standardized instrument and synthesized through descriptive statistics and qualitative content analysis. Results: Sixty-six studies (2019–2025) were included, with significant growth observed after 2023. Most studies employed quasi-experimental designs (37.9%) and were implemented in academic settings (83.3%). Application formats varied across online, hybrid, simulation-based, and classroom models. Reported benefits included improved learning performance, clinical reasoning, and student engagement. Key challenges involved the reliability of AI outputs, academic integrity, data protection, and limited institutional governance. Conclusions: AI-based chatbots represent promising tools to enhance nursing education, particularly when integrated into structured pedagogical strategies with active faculty supervision. Their use can support the development of clinical reasoning, student engagement, and personalized learning. However, methodological heterogeneity, ethical concerns, and governance gaps highlight the need for careful implementation and further rigorous research to ensure safe, effective, and pedagogically sound integration. Full article
19 pages, 2213 KB  
Article
The Development of a Large Language Model-Powered Chatbot to Advance Fairness in Machine Learning
by Pedro Henrique Ribeiro Santiago, Xiangqun Ju, Xavier Vasquez, Heidi Shen, Lisa Jamieson and Hawazin W. Elani
AI 2026, 7(3), 90; https://doi.org/10.3390/ai7030090 - 2 Mar 2026
Viewed by 1070
Abstract
Background: Machine learning (ML) has been widely adopted in decision-making, making fairness a central ethical and scientific priority. We developed the Themis chatbot, a Large Language Model (LLM) system designed to explain concepts of ML fairness in an accessible, conversational format. Methods: The development followed four stages: (1) curating a document corpus of 286 peer-reviewed publications on ML fairness; (2) development of Themis by combining a modern LLM (OpenAI’s GPT-4o) with Retrieval Augmented Generation (RAG); (3) creation of a 340-item benchmark dataset, the FairnessQA; and (4) evaluating performance against state-of-the-art non-augmented LLMs (DeepSeek R1, GPT-4o, GPT-5, and Grok 3). Results: For the multiple-choice questions, Themis achieved an accuracy of 96.7%, outperforming DeepSeek R1 (90.0%), GPT-4o (89.3%), GPT-5 (92.0%), and Grok 3 (86.7%), and the overall difference was statistically significant (χ2(4) = 10.1, p = 0.038). In the closed-ended questions, Themis achieved the highest accuracy (96.7%), while competing models ranged from 78.0% to 84.0%, and the overall difference was significant (χ2(4) = 23.9, p < 0.001). In the open-ended questions, Themis achieved the highest mean scores for correctness (M = 4.62), completeness (M = 4.59), and usefulness (M = 4.56), and differences were statistically significant (correctness: F(4, 195) = 20.91, p < 0.001; completeness: F(4, 195) = 7.76, p < 0.001; usefulness: F(4, 195) = 2.90, p < 0.001). By consolidating scattered research into an interactive assistant, Themis makes fairness concepts more accessible to educators, researchers, and policymakers. This work demonstrates that retrieval-augmented systems can enhance the public understanding of machine learning fairness at scale. Full article
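The χ² comparisons in this abstract test whether several models' correct/incorrect counts are consistent with a single shared accuracy. The statistic is easy to compute by hand; the counts in the example below are invented, not the FairnessQA results:

```python
def chi_square(correct, total):
    """Chi-square statistic for a models x (correct, incorrect) table.
    Compare against the chi-square distribution with k - 1 degrees of
    freedom, where k is the number of models."""
    incorrect = [t - c for c, t in zip(correct, total)]
    p = sum(correct) / sum(total)  # pooled accuracy under the null
    stat = 0.0
    for c, i, t in zip(correct, incorrect, total):
        exp_c, exp_i = t * p, t * (1 - p)
        stat += (c - exp_c) ** 2 / exp_c + (i - exp_i) ** 2 / exp_i
    return stat

# Hypothetical: two models answering 100 questions each.
stat = chi_square(correct=[90, 60], total=[100, 100])  # 24.0, df = 1
```

With the toy counts the statistic far exceeds the 3.84 critical value at df = 1, so the accuracy gap would be judged significant, the same style of inference as the paper's χ²(4) tests over five models.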
19 pages, 475 KB  
Article
Can AI Chatbot Adoption Bridge the Gap Between Intention and Behavior in Tourism Service E-Booking: A Moderated Mediation Model Analysis
by Nguyen Thi Ngoc Anh, Dinh Hoang Minh, Tran Cuong and Tran Thi Quy Chinh
Tour. Hosp. 2026, 7(3), 68; https://doi.org/10.3390/tourhosp7030068 - 2 Mar 2026
Viewed by 865
Abstract
Drawing on the Theory of Planned Behavior (TPB), this research examines how attitudes influence intentions and behaviors, and whether AI Chatbot serves as a contextual moderator that strengthens this linkage. Data were collected from 607 tourists at major destinations in Vietnam using systematic sampling. The hypotheses were tested with SPSS 26, AMOS 20, and the PROCESS macro to examine mediation and moderated mediation effects. The results show that e-booking intention partially mediates the relationship between e-booking attitudes and behavior. More importantly, AI Chatbot Adoption significantly enhances the intention–behavior linkage, thereby reducing the well-documented intention–behavior gap in e-booking. This result implies that tourism businesses and hotel managers can integrate AI Chatbot to provide real-time support, reduce customer hesitation, and improve booking conversion rates. Policymakers and AI developers are also encouraged to promote responsible adoption of AI in tourism to enhance service quality and customer trust. Full article
28 pages, 829 KB  
Article
Unveiling the Determinants of Tourists’ Behavioural Intention to Adopt AI-Powered Chatbots for the Hospitality and Tourism Industry: Revising the UTAUT2 Model
by Sitaram Sukthankar, Relita Fernandes, Sadanand Gaonkar and Arya Shetye
Tour. Hosp. 2026, 7(3), 65; https://doi.org/10.3390/tourhosp7030065 - 2 Mar 2026
Viewed by 840
Abstract
Emerging technologies, such as artificial intelligence (AI), including chatbots, are now transforming the hospitality and tourism industry. Chatbot technology is an excellent tool for enhancing communication, boosting service delivery efficiency, reducing costs, and improving the tourist experience. Despite their potential benefits, the adoption of AI-powered chatbots in Goa’s hospitality and tourism industry remains low, underscoring the need to identify the determinants influencing tourists’ behavioural intention to adopt this technology and use behaviour. Therefore, this study examines the key determinants influencing tourists’ behavioural intentions to adopt AI-powered chatbots in the hospitality and tourism industry. In addition, the study also examines the impact of tourists’ behavioural intentions to adopt AI-powered chatbots on use behaviour. For this purpose, a revised UTAUT2 model is assessed by leveraging a quantitative research approach. Structured questionnaires were distributed to a total of 400 inbound and outbound tourists, of which 227 respondents who were aware of AI-powered chatbots were chosen as the respondents for this study based on purposive sampling. The collected data were analysed using Partial Least Squares–Structural Equation Modelling (PLS-SEM) in SmartPLS 4.0. The findings revealed that attitude, performance expectancy, effort expectancy, social influence, facilitating conditions, and perceived enjoyment significantly influence tourists’ behavioural intention to adopt AI-powered chatbots, whereas automation and habit do not significantly influence their behavioural intention to adopt AI-powered chatbots. This study has implications for tourism managers and policymakers in the tourism and hospitality industry, who can gain insights into the factors that can encourage tourists to adopt AI-based facilities. Full article
29 pages, 719 KB  
Article
Graduate Employability in Tourism: Recruitment Practices, Skills, and the Role of Digitalisation and AI in Marrakech
by Aomar Ibourk and Sokaina El Alami
Societies 2026, 16(2), 58; https://doi.org/10.3390/soc16020058 - 11 Feb 2026
Viewed by 1136
Abstract
This article examines graduate employability challenges in the tourism and hospitality sector of Marrakech, a major tourism destination and strategic regional labour market in Morocco, characterised by strong seasonality, high labour turnover, and persistent education–employment mismatches. Rather than focusing exclusively on technology, the study analyses employability as a multidimensional and context-dependent process, in which digitalisation and artificial intelligence (AI) constitute one influencing factor among others. The research adopts a qualitative, purposive design based on semi-structured interviews conducted between August and October 2025 with 20 stakeholders directly involved in recruitment, training, or early career integration. These include five-star hotel general managers and HR officers, riad managers, travel agencies, recruitment intermediaries, representatives of Morocco’s public employment service (ANAPEC—National Agency for the Promotion of Employment and Skills) and private, regional tourism authorities, academics and young tourism graduates. Interview transcripts were thematically analysed using NVivo to identify recurrent patterns in recruitment practices, skill expectations, and the impact of AI in employability. 
The results, reflecting stakeholders’ perceptions within this local labour market, show that employability is shaped by six interrelated dimensions: (1) the structure and functioning of the tourism labour market (segmentation, turnover, mobility); (2) partial misalignment between training provision and operational service realities; (3) recruitment standards that prioritise behavioural and relational competences alongside formal qualifications, particularly for frontline positions; (4) language proficiency, especially English and French, as a baseline employability condition; (5) growing expectations regarding digital literacy linked to tourism operations (property management systems, reservation platforms, online reputation management); and (6) the perceived impact of AI-enabled tools (automation of routine tasks, decision-support systems, chatbots), which is seen less as a source of job destruction than as a driver of task reconfiguration and skill upgrading. By situating employer and graduate perceptions within the broader Moroccan employment and training context, the study contributes a place-based understanding of employability in tourism. It highlights the shared responsibility of individuals, employers, and education and training institutions in supporting skill development. The article concludes by discussing policy and practice-oriented levers to strengthen graduate employability, including co-designed curricula, structured internships and mentoring schemes, employer-supported upskilling in tourism-specific digital and AI-related competences, and reinforced labour-market intermediation through ANAPEC and regional governance actors. Full article
(This article belongs to the Special Issue Employment Relations in the Era of Industry 4.0)
20 pages, 988 KB  
Article
Hedonic Beats Utilitarian: Differential Effects of AI Chatbots and AR/VR on Consumer Engagement in E-Commerce
by Qin Zhang and Firdaus Abdullah
J. Theor. Appl. Electron. Commer. Res. 2026, 21(2), 60; https://doi.org/10.3390/jtaer21020060 - 7 Feb 2026
Cited by 1 | Viewed by 830
Abstract
This research investigates the impact of augmented and virtual reality (AR/VR) and AI-enabled chatbots, both individually and collectively, on consumer engagement of e-commerce platforms. Moreover, this research examines the mediating effects of perceived utility, ease of use, and enjoyment and the moderating effects of product type and technology readiness, respectively. By applying the theories of Technology Acceptance Model (TAM) and Stimulus–Organism–Response (S-O-R), this research proposed this theoretical framework and adopted a mixed-method research method. This research collected its empirical findings from 486 respondents who had utilized chatbots and AR/VR technology on three of China’s most popular e-commerce platforms, including Taobao, JD.com, and Pinduoduo. Structural equation modeling was utilized for hypothesis testing, and semi-structured interviews on 30 participants were used for validation of empirical findings. Results reveal that both AI chatbot features (β = 0.35, p < 0.001) and AR/VR technologies (β = 0.42, p < 0.001) significantly enhance consumer engagement, with AR/VR demonstrating stronger effects. Perceived enjoyment emerged as the strongest mediator (AI: β = 0.14; AR/VR: β = 0.18), surpassing traditional utilitarian factors. Technology readiness significantly moderated these relationships, with high-readiness consumers showing substantially stronger responses (AI: β = 0.45; AR/VR: β = 0.52). Experience goods amplified technology effects compared to search goods. Multi-group analysis revealed platform-specific variations, while robustness checks identified diminishing returns for AI chatbots but not AR/VR technologies. This research contributes to digital marketing and information systems literature by providing empirical evidence of differential technology impacts on engagement, highlighting the dominance of hedonic over utilitarian pathways in consumer technology adoption. 
The findings offer practical guidance for e-commerce platforms in optimizing technology investments and designing engagement strategies.
(This article belongs to the Section Data Science, AI, and e-Commerce Analytics)
14 pages, 268 KB  
Article
Trust in Financial Technology: The Role of Financial Literacy, Digital Financial Literacy, Technological Literacy, and Trust in Artificial Intelligence
by Thomas A. Hanson and Caleb Ott
J. Risk Financial Manag. 2026, 19(2), 97; https://doi.org/10.3390/jrfm19020097 - 2 Feb 2026
Viewed by 1431
Abstract
This study examines the relationships among financial literacy, digital financial literacy, technological literacy, and trust in artificial intelligence (AI) as predictors of consumer trust in fintech applications involving robo-advisors or chatbots. A sample of 117 college students responded to an online survey with scales designed to measure these constructs. Results confirmed that the three literacy measures were significantly correlated, reflecting their overlapping knowledge and cognitive perspective. However, trust in AI showed no significant correlation with any literacy measure, and regression analysis revealed that trust in AI was the sole statistically significant predictor of trust in consumer fintech. These findings suggest that fintech adoption is driven largely by trust rather than financial or technological competence, creating potential vulnerabilities when consumers lack the literacy to evaluate AI-generated financial advice. The results highlight the need for financial education programs to integrate fintech alongside traditional literacy topics and suggest a possible role for regulatory reform to support users of fintech. Full article
(This article belongs to the Special Issue The Role of Financial Literacy in Modern Finance)
24 pages, 1628 KB  
Article
A Neuro-Symbolic Framework for Ensuring Deterministic Reliability in AI-Assisted Structural Engineering: The SYNAPSE Architecture
by Adriano Castagnone and Giuseppe Nitti
Buildings 2026, 16(3), 534; https://doi.org/10.3390/buildings16030534 - 28 Jan 2026
Viewed by 1010
Abstract
This paper addresses the opportunities and risks of integrating Large Language Models (LLMs) into structural engineering. Exclusive reliance on LLMs is inadequate in this field, because their probabilistic nature can lead to hallucinations and inaccuracies that are unacceptable in safety-critical domains which require rigorous calculations. To resolve this dilemma, we propose adopting Neuro-Symbolic Artificial Intelligence (NSAI), a hybrid approach that balances neural intuition with symbolic rigor. The NSAI architecture employs an intelligent query system to enrich user requests and delegate critical operations to deterministic external algorithms. This system is designed to enhance reliability and support regulatory compliance, as exemplified by the 3Muri chatbot case study, an NSAI (gemini-2.5-flash)-based intelligent assistant for structural analysis software. We developed 3Muri chatbot implementing AI processes. Our experimental results, based on over 200 questions submitted to the chatbot, show that this hybrid approach achieves 94% accuracy while keeping response times below 2 s. These results validate the feasibility of deploying AI systems in safety-critical engineering domains. Full article
(This article belongs to the Special Issue Applying Artificial Intelligence in Construction Management)
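The delegation pattern this neuro-symbolic architecture relies on, where the language model handles conversation but is never trusted with the arithmetic, can be sketched with a trivial router. Everything below (the regex intent check, the mock LLM) is an illustrative stand-in, not the 3Muri chatbot implementation:

```python
import re

NUM = r"-?\d+(?:\.\d+)?"

def deterministic_solver(expr):
    """Symbolic path: exact arithmetic on 'a op b', no sampling involved."""
    m = re.fullmatch(rf"\s*({NUM})\s*([+\-*/])\s*({NUM})\s*", expr)
    if not m:
        raise ValueError("unsupported expression")
    a, op, b = float(m.group(1)), m.group(2), float(m.group(3))
    return {"+": a + b, "-": a - b, "*": a * b,
            "/": a / b if b else float("nan")}[op]

def mock_llm(prompt):
    # Stand-in for the neural component (the paper uses gemini-2.5-flash);
    # here it only produces narrative text and never sees the numbers.
    return f"[narrative answer about: {prompt}]"

def answer(query):
    """Router: critical calculations go to the symbolic engine,
    open-ended questions go to the language model."""
    m = re.search(rf"({NUM}\s*[+\-*/]\s*{NUM})", query)
    if m:
        return f"computed: {deterministic_solver(m.group(1))}"
    return mock_llm(query)
```

A real system needs far more robust intent detection, but the safety argument is the same: any answer containing a calculated quantity is produced by a deterministic routine whose output can be audited.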
18 pages, 1623 KB  
Review
AI Chatbots and Remote Sensing Archaeology: Current Landscape, Technical Barriers, and Future Directions
by Nicolas Melillos and Athos Agapiou
Heritage 2026, 9(1), 32; https://doi.org/10.3390/heritage9010032 - 16 Jan 2026
Viewed by 1161
Abstract
Chatbots have emerged as a promising interface for facilitating access to complex datasets, allowing users to pose questions in natural language rather than relying on specialized technical workflows. At the same time, remote sensing has transformed archaeological practice by producing vast amounts of imagery from LiDAR, drones, and satellites. While these advances have created unprecedented opportunities for discovery, they also pose significant challenges due to the scale, heterogeneity, and interpretative demands of the data. In related scientific domains, multimodal conversational systems capable of integrating natural language interaction with image-based analysis have advanced rapidly, supported by a growing body of survey and review literature documenting their architectures, datasets, and applications across multiple fields. By contrast, archaeological applications of chatbots remain limited to text-based prototypes, primarily focused on education, cultural heritage mediation or archival search. This review synthesizes the historical development of chatbots, examines their current use in remote sensing, and evaluates the barriers to adapting such systems for archaeology. Four major challenges are identified: data scale and heterogeneity, scarcity of training datasets, computational costs, and uncertainties around usability and adoption. By comparing experiences across domains, this review highlights both the opportunities and the limitations of integrating conversational AI into archaeological workflows. The central conclusion is that domain-specific adaptation is essential if multimodal chatbots are to become effective analytical partners in archaeology. Full article
(This article belongs to the Section Digital Heritage)

14 pages, 2197 KB  
Article
Innovative Application of Chatbots in Clinical Nutrition Education: The E+DIEting_Lab Experience in University Students
by Iñaki Elío, Kilian Tutusaus, Imanol Eguren-García, Álvaro Lasarte-García, Arturo Ortega-Mansilla, Thomas A. Prola and Sandra Sumalla-Cano
Nutrients 2026, 18(2), 257; https://doi.org/10.3390/nu18020257 - 14 Jan 2026
Viewed by 972
Abstract
Background/Objectives: The growing integration of Artificial Intelligence (AI) and chatbots in health professional education offers innovative methods to enhance learning and clinical preparedness. This study aimed to evaluate the educational impact of the E+DIEting_Lab chatbot platform, and the perceptions of university students of Human Nutrition and Dietetics regarding its utility, usability, and design, when implemented in clinical nutrition training. Methods: The platform was piloted from December 2023 to April 2025 with 475 students from multiple European universities. While all 475 students completed the initial survey, 305 finished the follow-up evaluation, representing a 36% attrition rate. Participants completed surveys before and after interacting with the chatbots, assessing prior experience, knowledge, skills, and attitudes. Data were analyzed using descriptive statistics and independent-samples t-tests to compare pre- and post-intervention perceptions. Results: A total of 475 university students completed the initial survey and 305 the final evaluation. Most were female (75.4%), with representation from six languages and diverse institutions. Students reported clear perceived learning gains: 79.7% reported that their practical skills in clinical dietetics and communication had improved, 90% felt that new digital tools improved classroom practice, and 73.9% reported enhanced interpersonal skills. Self-rated competence in using chatbots as learning tools increased significantly, with mean knowledge scores rising from 2.32 to 2.66 and skills scores from 2.39 to 2.79 on a 0–5 Likert scale (p < 0.001 for both). Perceived effectiveness and usefulness of chatbots as self-learning tools remained positive but showed a small decline after use (effectiveness from 3.63 to 3.42; usefulness from 3.63 to 3.45), suggesting that hands-on experience refined, but did not diminish, students' overall favorable views of the platform. 
Conclusions: The implementation and pilot evaluation of the E+DIEting_Lab self-learning virtual patient chatbot platform demonstrate that structured digital simulation tools can significantly improve perceived clinical nutrition competences. These findings support chatbot adoption in dietetics curricula and inform future digital education innovations. Full article
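The pre/post comparison of mean Likert ratings described above can be sketched in a few lines. This is a minimal illustration using hypothetical ratings and a hand-rolled Welch t statistic; the data, sample size, and exact test procedure are assumptions, not the study's.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(b) - mean(a)) / sqrt(va / na + vb / nb)

# Hypothetical 0-5 Likert self-ratings, NOT the study's data.
pre = [2, 3, 2, 2, 3, 2, 3, 2]
post = [3, 3, 2, 3, 3, 3, 3, 2]

print(round(mean(pre), 2), round(mean(post), 2), round(welch_t(pre, post), 2))
```

In practice a library routine such as `scipy.stats.ttest_ind` would also return the p-value; the sketch only shows how a mean shift on a bounded rating scale is quantified.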

23 pages, 3985 KB  
Article
Enabling Humans and AI Systems to Retrieve Information from System Architectures in Model-Based Systems Engineering
by Vincent Quast, Georg Jacobs, Simon Dehn and Gregor Höpfner
Systems 2026, 14(1), 83; https://doi.org/10.3390/systems14010083 - 12 Jan 2026
Viewed by 1263
Abstract
The complexity of modern cyber–physical systems is steadily increasing as their functional scope expands and as regulations become more demanding. To cope with this complexity, organizations are adopting methodologies such as model-based systems engineering (MBSE). By creating system models, MBSE promises significant advantages such as improved traceability, consistency, and collaboration. On the other hand, the adoption of MBSE faces challenges in both its introduction and its operational use. In the introduction phase, challenges include high initial effort and steep learning curves. In the operational use phase, challenges arise from the difficulty of retrieving and reusing information stored in system models. Research on supporting MBSE through artificial intelligence (AI), especially generative AI, has so far focused mainly on easing the introduction phase, for example by using large language models (LLMs) to assist in creating system models. However, generative AI could also support the operational use phase by helping stakeholders access the information embedded in existing system models. This study introduces an LLM-based multi-agent system that applies a Graph Retrieval-Augmented Generation (GraphRAG) strategy to access and utilize information stored in MBSE system models. The system's capabilities are demonstrated through a chatbot that answers questions about the underlying system model. This solution reduces the complexity and effort involved in retrieving system model information and improves accessibility for stakeholders who lack advanced knowledge of MBSE methodologies. The chatbot was evaluated using the architecture of a battery electric vehicle as a reference model and a set of 100 curated questions and answers. When tested across four large language models, the best-performing model achieved an accuracy of 93% in providing correct answers. Full article
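The GraphRAG retrieval step described above can be illustrated with a toy example: match question terms to nodes of a graph-encoded system model, expand to neighboring elements, and assemble the retrieved facts into an LLM prompt. The graph, node names, and keyword matching below are illustrative assumptions, not the paper's implementation, and the actual LLM call is omitted.

```python
# Toy system model as an adjacency map: element -> related elements.
# Names are hypothetical, loosely inspired by a battery electric vehicle.
MODEL = {
    "Battery": ["Inverter", "BMS"],
    "Inverter": ["Motor"],
    "BMS": ["Battery"],
    "Motor": [],
}

def retrieve_subgraph(question, hops=1):
    """Return model elements mentioned in the question plus their
    graph neighbors up to `hops` edges away."""
    seeds = [n for n in MODEL if n.lower() in question.lower()]
    context = set(seeds)
    frontier = list(seeds)
    for _ in range(hops):
        nxt = []
        for node in frontier:
            for neighbor in MODEL.get(node, []):
                if neighbor not in context:
                    context.add(neighbor)
                    nxt.append(neighbor)
        frontier = nxt
    return sorted(context)

def build_prompt(question):
    """Serialize the retrieved subgraph into context for a downstream LLM."""
    facts = [f"{n} -> {', '.join(MODEL[n]) or '(none)'}"
             for n in retrieve_subgraph(question)]
    return "Context:\n" + "\n".join(facts) + f"\nQuestion: {question}"
```

A production system would query the system model via its modeling-tool API and use embedding-based rather than keyword matching, but the retrieve-then-prompt structure is the same.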
