Search Results (179)

Search Parameters:
Keywords = conversational fine-tuning

27 pages, 5166 KB  
Article
Divergence Shepherd Feature Optimization-Based Stochastic-Tuned Deep Multilayer Perceptron for Emotional Footprint Identification
by Karthikeyan Jagadeesan and Annapurani Kumarappan
Algorithms 2025, 18(12), 801; https://doi.org/10.3390/a18120801 - 17 Dec 2025
Abstract
Emotional Footprint Identification refers to the process of recognizing or understanding the emotional impact that a person, experience, or interaction leaves on others. Emotion recognition plays an important role in human–computer interaction for identifying emotions such as fear, sadness, anger, happiness, and surprise on the human face during conversation. Accurate emotional footprint identification remains difficult, however, because expressions change dynamically. Conventional deep learning techniques integrate advanced technologies for emotional footprint identification but struggle to detect emotions both accurately and quickly. To address these challenges, a novel Divergence Shepherd Feature Optimization-based Stochastic-Tuned Deep Multilayer Perceptron (DSFO-STDMP) is proposed. The proposed DSFO-STDMP model consists of three distinct processes: data acquisition, feature selection or reduction, and classification. First, the data acquisition phase collects conversation data samples from a dataset to train the model. These conversation samples are given to the Sokal–Sneath divergence shuffling shepherd optimization, which selects the most important features and removes the others; this feature reduction minimizes the emotional footprint identification time. Once the features are selected, classification is carried out using the Rosenthal correlative stochastic-tuned deep multilayer perceptron classifier, which analyzes the correlation score between data samples. Based on this analysis, the system classifies different emotional footprints during conversations. In the fine-tuning phase, the stochastic gradient method is applied to adjust the weights between layers of the deep learning architecture, minimizing errors and improving the model’s accuracy.
Experimental evaluations are conducted using various performance metrics, including accuracy, precision, recall, F1 score, and emotional footprint identification time. The quantitative results show 95% accuracy, 93% precision, 97% recall, and a 97% F1 score. Additionally, the DSFO-STDMP reduced training time by 35% compared to traditional techniques. Full article
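The fine-tuning step described above, stochastic gradient adjustment of the weights between layers of a multilayer perceptron, can be sketched in miniature. This toy numpy example is illustrative only and is not the authors' DSFO-STDMP model; the data, layer sizes, and learning rate are all made up:

```python
import numpy as np

# Toy sketch: one stochastic-gradient update for a single-hidden-layer MLP
# on a tiny binary-classification batch (not the paper's architecture).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))             # 8 samples, 4 features
y = (X[:, 0] > 0).astype(float)         # toy labels

W1, b1 = rng.normal(size=(4, 5)) * 0.1, np.zeros(5)
W2, b2 = rng.normal(size=(5, 1)) * 0.1, np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                  # hidden layer
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    return h, p.ravel()

def loss(p):                                  # binary cross-entropy
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

h, p = forward(X)
before = loss(p)

# Backpropagate and take one SGD step
grad_out = (p - y)[:, None] / len(y)          # dL/d(logit) for sigmoid + BCE
gW2, gb2 = h.T @ grad_out, grad_out.sum(0)
grad_h = grad_out @ W2.T * (1 - h ** 2)       # gradient through tanh
gW1, gb1 = X.T @ grad_h, grad_h.sum(0)
lr = 0.1
W2 -= lr * gW2; b2 -= lr * gb2
W1 -= lr * gW1; b1 -= lr * gb1

_, p = forward(X)
after = loss(p)                               # loss drops after the update
```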

15 pages, 497 KB  
Article
Learning Analytics with Scalable Bloom’s Taxonomy Labeling of Socratic Chatbot Dialogues
by Kok Wai Lee, Yee Sin Ang and Joel Weijia Lai
Computers 2025, 14(12), 555; https://doi.org/10.3390/computers14120555 - 15 Dec 2025
Abstract
Educational chatbots are increasingly deployed to scaffold student learning, yet educators lack scalable ways to assess the cognitive depth of these dialogues in situ. Bloom’s taxonomy provides a principled lens for characterizing reasoning, but manual tagging of conversational turns is costly and difficult to scale for learning analytics. We present a reproducible, high-confidence pseudo-labeling pipeline for multi-label Bloom classification of Socratic student–chatbot exchanges. The dataset comprises 6716 utterances collected from conversations between a Socratic chatbot and 34 undergraduate statistics students at Nanyang Technological University. From three chronologically selected workbooks with expert Bloom annotations, we trained and compared two labeling tracks: (i) a calibrated classical approach using SentenceTransformer (all-MiniLM-L6-v2) embeddings with one-vs-rest Logistic Regression, Linear SVM, XGBoost, and MLP, followed by per-class precision–recall threshold tuning; and (ii) a lightweight LLM track using GPT-4o-mini after supervised fine-tuning. Class-specific thresholds tuned on 5-fold cross-validation were then applied in a single pass to assign high-confidence pseudo-labels to the remaining unlabeled exchanges, avoiding feedback-loop confirmation bias. Fine-tuned GPT-4o-mini achieved the highest prevalence-weighted performance (micro-F1 = 0.814), whereas calibrated classical models yielded stronger balance across Bloom levels (best macro-F1 = 0.630 with Linear SVM; best classical micro-F1 = 0.759 with Logistic Regression). Both model families reflect the corpus skew toward lower-order cognition, with LLMs excelling on common patterns and linear models better preserving rarer higher-order labels. While the results should be interpreted as a proof of concept given the limited gold labeling, the approach substantially reduces annotation burden and provides a practical pathway for Bloom-aware learning analytics and future real-time adaptive chatbot support.
Full article
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning (2nd Edition))
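The per-class precision threshold tuning this pipeline relies on can be sketched as follows: for each class, pick the smallest probability cutoff that reaches a target precision on validation data, then apply it once to the unlabeled pool. The scores, labels, and 0.9 precision target below are hypothetical, not the paper's values:

```python
import numpy as np

# Illustrative sketch (not the paper's exact pipeline): choose a per-class
# probability threshold that meets a target precision on validation data,
# then assign high-confidence pseudo-labels in a single pass.
def tune_threshold(probs, gold, target_precision=0.9):
    """Smallest threshold whose validation precision meets the target."""
    for t in np.unique(probs):                 # candidate cutoffs, ascending
        pred = probs >= t
        if pred.sum() and gold[pred].mean() >= target_precision:
            return float(t)
    return 1.01                                # unreachable: label nothing

# Validation scores for one Bloom class (hypothetical numbers)
val_probs = np.array([0.2, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95])
val_gold  = np.array([0,   0,   1,    0,   1,   1,   1,   1])

t = tune_threshold(val_probs, val_gold, target_precision=0.9)

# Apply once to the unlabeled pool; only confident items get pseudo-labels
pool_probs = np.array([0.3, 0.65, 0.85, 0.97])
pseudo = pool_probs >= t
```

Tuning thresholds on held-out folds and applying them in a single pass, rather than iterating, is what avoids the feedback-loop confirmation bias the abstract mentions.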

28 pages, 2324 KB  
Article
ARGUS: A Neuro-Symbolic System Integrating GNNs and LLMs for Actionable Feedback on English Argumentative Writing
by Lei Yang and Shuo Zhao
Systems 2025, 13(12), 1079; https://doi.org/10.3390/systems13121079 - 1 Dec 2025
Abstract
English argumentative writing is a cornerstone of academic and professional communication, yet it remains a significant challenge for second-language (L2) learners. While Large Language Models (LLMs) show promise as components in automated feedback systems, their responses are often generic and lack the structural insight necessary for meaningful improvement. Existing Automated Essay Scoring (AES) systems, conversely, typically provide holistic scores without the kind of actionable, fine-grained advice that can guide concrete revisions. To bridge this systemic gap, we introduce ARGUS (Argument Understanding and Structured-feedback), a novel neuro-symbolic system that synergizes the semantic understanding of LLMs with the structured reasoning of Graph Neural Networks (GNNs). The ARGUS system architecture comprises three integrated modules: (1) an LLM-based parser transforms an essay into a structured argument graph; (2) a Relational Graph Convolutional Network (R-GCN) analyzes this symbolic structure to identify specific logical and structural flaws; and (3) this flaw analysis directly guides a conditional LLM to generate feedback that is not only contextually relevant but also pinpoints precise weaknesses in the student’s reasoning. We evaluate ARGUS on the Argument Annotated Essays corpus and on an additional set of 150 L2 persuasive essays collected from the same population to augment training of both the parser and the structural flaw detector. Our argument parsing module achieves a component identification F1-score of 90.4% and a relation identification F1-score of 86.1%. The R-GCN-based structural flaw detector attains a macro-averaged F1-score of 0.83 across the seven flaw categories, indicating that the enriched training data substantially improves its generalization. 
Most importantly, in a human evaluation study, feedback generated by the ARGUS system was rated as consistently and significantly more specific, accurate, actionable, and helpful than that from strong baselines, including a fine-tuned LLM and a zero-shot GPT-4. Our work demonstrates a robust systems engineering approach, grounding LLM-based feedback in GNN-driven structural analysis to create an intelligent teaching system that provides targeted, pedagogically valuable guidance for L2 student writers engaging with persuasive essays. Full article

12 pages, 439 KB  
Article
Advancing Conversational Text-to-SQL: Context Strategies and Model Integration with Large Language Models
by Benjamin G. Ascoli and Jinho D. Choi
Future Internet 2025, 17(11), 527; https://doi.org/10.3390/fi17110527 - 18 Nov 2025
Abstract
Conversational text-to-SQL extends the traditional single-turn SQL generation paradigm to multi-turn, dialogue-based scenarios, enabling users to pose and refine database queries interactively, and requiring models to track dialogue context over multiple user queries and system responses. Despite extensive progress in single-turn benchmarks such as Spider and BIRD, and the recent rise of large language models, conversational datasets continue to pose challenges. In this paper, we spotlight model merging as a key strategy for boosting ESM performance on CoSQL and SParC. We present a new state-of-the-art system on the CoSQL benchmark, achieved by fine-tuning CodeS-7b under two paradigms for handling conversational history: (1) full history concatenation, and (2) question rewriting via GPT-based summarization. While each paradigm alone obtains competitive results, we observe that averaging the weights of these fine-tuned models can outperform both individual variants. Our findings highlight the promise of LLM-driven multi-turn SQL generation, offering a lightweight yet powerful avenue for improving conversational text-to-SQL. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))
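The model-merging step highlighted above, averaging the weights of two models fine-tuned from the same base checkpoint, is conceptually simple. A minimal sketch over toy parameter dictionaries (real merging would operate on full CodeS-7b state dicts; the "checkpoints" here are made up):

```python
import numpy as np

# Minimal sketch of weight-space model merging: uniformly average the
# parameters of two models fine-tuned from one base checkpoint.
def average_weights(state_a, state_b):
    """Element-wise mean of two same-shaped parameter dictionaries."""
    assert state_a.keys() == state_b.keys()
    return {k: (state_a[k] + state_b[k]) / 2.0 for k in state_a}

# Hypothetical toy "checkpoints" from the two fine-tuning paradigms
full_history = {"w": np.array([1.0, 3.0]), "b": np.array([0.0])}
rewritten    = {"w": np.array([3.0, 1.0]), "b": np.array([2.0])}

merged = average_weights(full_history, rewritten)
```

Averaging only makes sense when both models share an architecture and start from the same initialization, which is the setting the paper describes (two fine-tunes of CodeS-7b).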

20 pages, 1978 KB  
Article
StressSpeak: A Speech-Driven Framework for Real-Time Personalized Stress Detection and Adaptive Psychological Support
by Laraib Umer, Javaid Iqbal, Yasar Ayaz, Hassan Imam, Adil Ahmad and Umer Asgher
Diagnostics 2025, 15(22), 2871; https://doi.org/10.3390/diagnostics15222871 - 12 Nov 2025
Abstract
Background: Stress is a critical determinant of mental health, yet conventional monitoring approaches often rely on subjective self-reports or physiological signals that lack real-time responsiveness. Recent advances in large language models (LLMs) offer opportunities for speech-driven, adaptive stress detection, but existing systems are limited to retrospective text analysis, monolingual settings, or detection-only outputs. Methods: We developed a real-time, speech-driven stress detection framework that integrates audio recording, speech-to-text conversion, and linguistic analysis using transformer-based LLMs. The system provides multimodal outputs, delivering recommendations in both text and synthesized speech. Nine LLM variants were evaluated on five benchmark datasets under zero-shot and few-shot learning conditions. Performance was assessed using accuracy, precision, recall, F1-score, and misclassification trends (false negatives and false positives). Real-time feasibility was analyzed through latency modeling, and user-centered validation was conducted across domains. Results: Few-shot fine-tuning improved model performance across all datasets, with Large Language Model Meta AI (LLaMA) and Robustly Optimized BERT Pretraining Approach (RoBERTa) achieving the highest F1-scores and fewer false negatives, particularly for suicide risk detection. Latency analysis revealed a trade-off between responsiveness and accuracy, with delays ranging from ~2 s for smaller models to ~7.6 s for LLaMA-7B on 30 s audio inputs. Multilingual input support and multimodal output enhanced inclusivity. User feedback confirmed strong usability, accessibility, and adoption potential in real-world settings. Conclusions: This study demonstrates that real-time, LLM-powered stress detection is both technically robust and practically feasible.
By combining speech-based input, multimodal feedback, and user-centered validation, the framework advances beyond traditional detection-only models toward scalable, inclusive, and deployment-ready digital mental health solutions. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

15 pages, 5416 KB  
Article
Acoustic Metamaterial Nanogenerator for Multi-Band Sound Insulation and Acoustic–Electric Conversion
by Xinwu Liang and Ming Yuan
Sensors 2025, 25(21), 6693; https://doi.org/10.3390/s25216693 - 2 Nov 2025
Abstract
Controlling low-frequency noise and achieving multi-band sound insulation remain significant challenges and have long been hot topics in industrial research. This study introduces a novel multifunctional device based on the principles of acoustic metamaterials, which not only offers high-performance sound insulation but also converts low-frequency acoustic energy into electrical energy. Through an innovative design featuring multiple local resonances, the proposed device effectively mitigates the impact of pre-tension on the membrane, while enabling efficient multi-band sound insulation that can be finely tuned by adjusting structural parameters. Experimental results demonstrate that the device achieves a maximum sound insulation of 40 dB and an average sound insulation exceeding 25 dB within the 1000 Hz frequency range. Moreover, by utilizing its local resonance property, a triboelectric nanogenerator (TENG) is specifically designed for low-frequency acoustic–electric conversion, maintaining high-performance low-frequency sound insulation while simultaneously powering small-scale electronic devices. This work provides a promising approach for multi-band sound insulation and low-frequency acoustic–electric conversion, offering broad potential for industrial applications. Full article
(This article belongs to the Special Issue Advanced Nanogenerators for Micro-Energy and Self-Powered Sensors)

23 pages, 1461 KB  
Review
RNA Degradation in Pluripotent Stem Cells: Mechanisms, Crosstalk, and Fate Regulation
by Seunghwa Jeong, Myunggeun Oh, Jaeil Han and Seung-Kyoon Kim
Cells 2025, 14(20), 1634; https://doi.org/10.3390/cells14201634 - 20 Oct 2025
Abstract
Pluripotent stem cells (PSCs) exhibit remarkable self-renewal capacity and differentiation potential, necessitating tight regulation of gene expression at both transcriptional and post-transcriptional levels. Among post-transcriptional mechanisms, RNA turnover and degradation together play pivotal roles in maintaining transcriptome homeostasis and controlling RNA stability. RNA degradation plays a pivotal role in determining transcript stability for both messenger RNAs (mRNAs) and non-coding RNAs (ncRNAs), thereby influencing cell identity and fate transitions. The core RNA decay machinery, which includes exonucleases, decapping complexes, RNA helicases, and the RNA exosome, ensures timely and selective decay of transcripts. In addition, RNA modifications such as 5′ capping and N6-methyladenosine (m6A) further modulate RNA stability, contributing to the fine-tuning of gene regulatory networks essential for maintaining PSC states. Recent single-cell and multi-omics studies have revealed that RNA degradation exhibits heterogeneous and dynamic kinetics during cell fate transitions, highlighting its role in preserving transcriptome homeostasis. Conversely, disruption of RNA decay pathways has been implicated in developmental defects and disease, underscoring their potential as therapeutic targets. Collectively, RNA degradation emerges as a central regulator of PSC biology, integrating the decay of both mRNAs and ncRNAs to orchestrate pluripotency maintenance, lineage commitment, and disease susceptibility. Full article
(This article belongs to the Special Issue Advances and Breakthroughs in Stem Cell Research)

29 pages, 966 KB  
Article
You Got Phished! Analyzing How to Provide Useful Feedback in Anti-Phishing Training with LLM Teacher Models
by Tailia Malloy, Laura Bernardy, Omar El Bachyr, Fred Philippy, Jordan Samhi, Jacques Klein and Tegawendé F. Bissyandé
Electronics 2025, 14(19), 3872; https://doi.org/10.3390/electronics14193872 - 29 Sep 2025
Abstract
Training users to correctly identify potential security threats, such as phishing emails and other social engineering attacks, is a crucial aspect of cybersecurity. One challenge in this training is providing useful educational feedback to maximize student learning outcomes. Large Language Models (LLMs) have recently been applied to an ever-wider range of applications, including domain-specific education and training. These applications of LLMs have many benefits, such as low cost and ease of access, but LLMs also carry important potential biases and constraints. These may make LLMs worse teachers for important and vulnerable subpopulations, including the elderly and those with less technical knowledge. In this work we present a dataset of LLM embeddings of conversations between human students and LLM teachers in an anti-phishing setting. We apply these embeddings to an analysis of human–LLM educational conversations to develop specific and actionable targets for LLM training, fine-tuning, and evaluation that can potentially improve the educational quality of LLM teachers and ameliorate potential biases that may disproportionately impact specific subpopulations. Specifically, we suggest that LLM teaching platforms either speak generally or quote specific passages from emails depending on user demographics and behaviors, and steer conversations away from an excessive focus on the current example. Full article
(This article belongs to the Special Issue Human-Centric AI for Cyber Security in Critical Infrastructures)

20 pages, 4621 KB  
Article
Innovative Application of High-Precision Seismic Interpretation Technology in Coalbed Methane Exploration
by Chunlei Li, Lijiang Duan, Xidong Wang, Xiuqin Lu, Ze Deng and Liyong Fan
Processes 2025, 13(9), 2971; https://doi.org/10.3390/pr13092971 - 18 Sep 2025
Abstract
Exploration of coalbed methane (CBM) has long been plagued by critical technical challenges, including a low signal-to-noise (S/N) ratio in seismic data, difficulty identifying thin coal seams, and inadequate accuracy in interpreting complex structures. This study presents an innovative methodological framework that integrates artificial intelligence (AI) with advanced seismic processing and interpretation techniques. Its effectiveness is verified through a case study in the North Bowen Basin, Australia. A multi-scale seismic data enhancement approach combining dynamic balancing and blue filtering significantly improved data quality, increasing the S/N ratio by 53%. Using deep learning-driven, multi-attribute fusion analysis, we achieved a prediction error of less than ±1 m for the thickness of thin coal seams (4–7 m thick). Integrating 3D coherence and ant-tracking techniques improved the accuracy of fault identification, increasing the fault recognition rate by 30% and reducing the spatial localization error to below 3%. Additionally, a finely tuned, spatially variable velocity model limited the depth conversion error to 0.5%. Validation using horizontal well trajectories revealed that the rate of reservoir encounters exceeded 95%, with initial gas production in the predicted sweet-spot zones being 25–30% higher than with traditional methods. Notably, this study established a quantitative model linking structural curvature to fracture intensity, providing a robust scientific basis for accurately predicting CBM sweet spots. Full article
(This article belongs to the Special Issue Coalbed Methane Development Process)

31 pages, 2854 KB  
Article
ForestGPT and Beyond: A Trustworthy Domain-Specific Large Language Model Paving the Way to Forestry 5.0
by Florian Ehrlich-Sommer, Benno Eberhard and Andreas Holzinger
Electronics 2025, 14(18), 3583; https://doi.org/10.3390/electronics14183583 - 10 Sep 2025
Abstract
Large language models (LLMs) such as Chat Generative Pre-Trained Transformer (ChatGPT) are increasingly used across domains, yet their generic training data and propensity for hallucination limit reliability in safety-critical fields like forestry. This paper outlines the conception and prototype of ForestGPT, a domain-specialised assistant designed to support forest professionals while preserving expert oversight. It addresses two looming risks: unverified adoption of generic outputs and professional mistrust of opaque algorithms. We propose a four-level development path: (1) pre-training a transformer on curated forestry literature to create a baseline conversational tool; (2) augmenting it with Retrieval-Augmented Generation to ground answers in local and time-sensitive documents; (3) coupling growth simulators for scenario modeling; and (4) integrating continuous streams from sensors, drones and machinery for real-time decision support. A Level-1 prototype, deployed at Futa Expo 2025 via a mobile app, successfully guided multilingual visitors and demonstrated the feasibility of lightweight fine-tuning on open-weight checkpoints. We analyse technical challenges, multimodal grounding, continual learning, safety certification, and social barriers including data sovereignty, bias and change management. Results indicate that trustworthy, explainable, and accessible LLMs can accelerate the transition to Forestry 5.0, provided that human-in-the-loop guardrails remain central. Future work will extend ForestGPT with full RAG pipelines, simulator coupling and autonomous data ingestion. Whilst exemplified in forestry, a complex, safety-critical, and ecologically vital domain, the proposed architecture and development path are broadly transferable to other sectors that demand trustworthy, domain-specific language models under expert oversight. Full article
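The Level-2 idea, grounding answers with Retrieval-Augmented Generation, can be illustrated with a toy retriever: embed documents, find the one closest to the query, and prepend it to the prompt. The bag-of-words "embedding" and the example documents below are stand-ins for a real encoder and a real forestry corpus, not ForestGPT's actual pipeline:

```python
import numpy as np

# Toy RAG sketch: retrieve the most similar document and ground the
# prompt in it. A real system would use a learned text encoder.
docs = [
    "bark beetle outbreaks require rapid salvage logging",
    "thinning young stands improves long-term growth",
]

def embed(text, vocab):
    """Naive bag-of-words vector: word counts over a fixed vocabulary."""
    return np.array([text.split().count(w) for w in vocab], dtype=float)

vocab = sorted({w for d in docs for w in d.split()})
doc_vecs = np.array([embed(d, vocab) for d in docs])

def retrieve(query):
    q = embed(query, vocab)
    sims = doc_vecs @ q / (
        np.linalg.norm(doc_vecs, axis=1) * (np.linalg.norm(q) + 1e-9))
    return docs[int(np.argmax(sims))]          # cosine-most-similar doc

query = "how should I respond to a bark beetle outbreak"
context = retrieve(query)
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"
```

Grounding the generation in retrieved local documents, rather than relying on the model's generic training data, is what the paper argues reduces hallucination risk in a safety-critical domain.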

18 pages, 1609 KB  
Article
Using Large Language Models to Extract Structured Data from Health Coaching Dialogues: A Comparative Study of Code Generation Versus Direct Information Extraction
by Sai Sangameswara Aadithya Kanduri, Apoorv Prasad and Susan McRoy
BioMedInformatics 2025, 5(3), 50; https://doi.org/10.3390/biomedinformatics5030050 - 4 Sep 2025
Abstract
Background: Virtual coaching can help people adopt new healthful behaviors by encouraging them to set specific goals and helping them review their progress. One challenge in creating such systems is analyzing clients’ statements about their activities. Limiting people to selecting among predefined answers detracts from the naturalness of conversations and user engagement. Large Language Models (LLMs) offer the promise of covering a wide range of expressions. However, using an LLM for simple entity extraction would not necessarily perform better than functions coded in a programming language, while creating higher long-term costs. Methods: This study uses a real data set of annotated human coaching dialogs to develop LLM-based models for two training scenarios: one that generates pattern-matching functions and one that performs direct extraction. We use models of different sizes and complexity, including Meta-Llama, Gemma, and ChatGPT, and calculate their speed and accuracy. Results: LLM-generated pattern-matching functions took an average of 10 milliseconds (ms) per item, compared to 900 ms (ChatGPT 3.5 Turbo) to 5 s (Llama 2 70B). The accuracy for pattern matching was 99% on real data, while LLM accuracy ranged from 90% (Llama 2 70B) to 100% (ChatGPT 3.5 Turbo) on both real and synthetically generated examples created for fine-tuning. Conclusions: These findings suggest promising directions for future research that combines both methods (reserving the LLM for cases that cannot be matched directly) or that uses LLMs to generate synthetic training data with more expressive variety, which can be used to improve the coverage of either generated code or fine-tuned models. Full article
(This article belongs to the Section Methods in Biomedical Informatics)
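The hybrid strategy suggested in the conclusions, trying a cheap pattern-matching function first and reserving the LLM for unmatched utterances, might look like the sketch below. The step-count pattern and the `ask_llm` stub are hypothetical stand-ins, not the study's code:

```python
import re

# Sketch of the hybrid extraction idea: fast regex first, slow LLM only
# as a fallback for utterances the pattern cannot handle.
STEPS_PATTERN = re.compile(r"\b(?:walked|did)\s+(\d[\d,]*)\s+steps\b", re.I)

def extract_steps_fast(utterance):
    """Pattern-matching extractor: ~10 ms class of cost per item."""
    m = STEPS_PATTERN.search(utterance)
    return int(m.group(1).replace(",", "")) if m else None

def ask_llm(utterance):
    # Hypothetical fallback (~1-5 s of cost): call a hosted model here.
    raise NotImplementedError("LLM call not implemented in this sketch")

def extract_steps(utterance):
    value = extract_steps_fast(utterance)
    if value is not None:
        return value                  # matched: no LLM latency paid
    return ask_llm(utterance)         # only unmatched cases reach the LLM

result = extract_steps("Yesterday I walked 7,500 steps before lunch")
```

This mirrors the paper's cost argument: the regex path handles the common cases in milliseconds, so the expensive model is invoked only for the residue.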

17 pages, 335 KB  
Article
Intelligent Virtual Assistant for Mobile Workers: Towards Hybrid, Frugal and Contextualized Solutions
by Karl Alwyn Sop Djonkam, Gaëtan Rey and Jean-Yves Tigli
Appl. Sci. 2025, 15(17), 9638; https://doi.org/10.3390/app15179638 - 2 Sep 2025
Abstract
Field workers require expeditious and pertinent access to information to execute their duties, frequently in arduous environments. Conventional document search interfaces are ill-suited to these contexts, while fully automated approaches often lack the capacity to adapt to the variability of situations. This article explores a hybrid approach based on the use of specialized small language models (SLMs), combining natural language interaction, context awareness (static and dynamic), and structured command generation. The objective of this study is to demonstrate the feasibility of providing contextualized assistance for mobile agents using an intelligent conversational agent, while ensuring that reasonable resource consumption is maintained. The present case study pertains to the supervision of illumination systems on a university campus by technical agents. The static and the dynamic contexts are integrated into the user command to generate a prompt that queries a previously fine-tuned SLM. The methodology employed, the construction of five datasets for the purposes of evaluation, and the refinement of selected SLMs are presented herein. The findings indicate that models of smaller scale demonstrate the capacity to comprehend natural language queries and generate responses that can be effectively utilized by a tangible system. This work opens prospects for intelligent, resource-efficient, and contextualized assistance in industrial or constrained environments. Full article
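The prompt-construction step described above, merging static and dynamic context with the user's command before querying the fine-tuned SLM, can be sketched as plain string assembly. The field names and section markers are illustrative, not the paper's schema:

```python
# Sketch of context-aware prompt assembly for a small language model.
# Static context: fixed site/equipment facts; dynamic context: live state.
static_ctx = {"site": "campus-A", "device": "luminaire-12"}
dynamic_ctx = {"status": "off", "ambient_lux": 3}

def build_prompt(command, static_ctx, dynamic_ctx):
    """Merge both context layers with the natural-language command."""
    ctx_lines = [f"{k}: {v}" for k, v in {**static_ctx, **dynamic_ctx}.items()]
    return ("### Context\n" + "\n".join(ctx_lines)
            + f"\n### Command\n{command}\n### Structured action\n")

prompt = build_prompt("turn on the light near the library entrance",
                      static_ctx, dynamic_ctx)
```

The SLM would then complete the final section with a structured command, which is what lets a tangible system act on the response.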

15 pages, 7305 KB  
Article
Electrochemical Anodization-Induced {001} Facet Exposure in A-TiO2 for Improved DSSC Efficiency
by Jolly Mathew, Shyju Thankaraj Salammal, Anandhi Sivaramalingam and Paulraj Manidurai
J. Compos. Sci. 2025, 9(9), 462; https://doi.org/10.3390/jcs9090462 - 1 Sep 2025
Abstract
We developed dye-sensitized solar cells based on anatase–titanium dioxide (A-TiO2) nanotubes (TiNTs) and nanocubes (TiNcs) with {001} crystal facets generated using simple and facile electrochemical anodization. We also demonstrated a simple way of developing one-dimensional, two-dimensional, and three-dimensional self-assembled TiO2 nanostructures via electrochemical anodization, using them as an electron-transporting layer in DSSCs. TiNTs maintain tubular arrays for a limited time before becoming nanocrystals with {001} facets. Using FESEM and TEM, we observed that the TiO2 nanobundles were transformed into nanocubes with {001} facets and lower fluorine concentrations. Optimizing the reaction approach resulted in better-ordered, crystalline anatase TiNTs/Ncs being formed on the Ti metal foil. The anatase phase of as-grown TiO2 was confirmed by XRD, with (101) being the predominant intensity and preferred orientation. The nanostructured TiO2 had lattice values of a = 3.77–3.82 Å and c = 9.42–9.58 Å. The structure and morphology of these as-grown materials were studied to understand the growth process. The photoconversion efficiency and impedance spectra were explored to analyze the performance of the designed DSSCs, employing N719 dye as a sensitizer and the I−/I3− redox pair as electrolytes, sandwiched with a Pt counter-electrode. As a result, we found that self-assembled TiNTs/Ncs presented a more effective photoanode in DSSCs than standard TiO2 (P25). TiNcs (0.5 and 0.25 NH4F) and P25 achieved the highest power conversion efficiencies of 3.47, 3.41, and 3.25%, respectively. TiNcs photoanodes have lower charge recombination capability and longer electron lifetimes, leading to higher voltage, photocurrent, and photovoltaic performance. These findings show that electrochemical anodization is an effective method for preparing TiNTs/Ncs and developing low-cost, highly efficient DSSCs by fine-tuning photoanode structures and components. Full article
19 pages, 2631 KB  
Article
AI-HOPE-TP53: A Conversational Artificial Intelligence Agent for Pathway-Centric Analysis of TP53-Driven Molecular Alterations in Early-Onset Colorectal Cancer
by Ei-Wen Yang, Brigette Waldrup and Enrique Velazquez-Villarreal
Cancers 2025, 17(17), 2865; https://doi.org/10.3390/cancers17172865 - 31 Aug 2025
Abstract
Background/Objectives: The incidence of early-onset colorectal cancer (EOCRC) is increasing globally, particularly among underrepresented populations such as Hispanic/Latino individuals. TP53 is among the most frequently mutated pathways in CRC; however, its role in EOCRC, especially in relation to disparities and treatment outcomes, remains poorly defined. We developed AI-HOPE-TP53, a novel conversational AI agent, to enable real-time, disparity-aware analysis of TP53 pathway alterations in EOCRC. Methods: AI-HOPE-TP53 integrates a fine-tuned biomedical large language model (LLaMA 3) with harmonized datasets from cBioPortal (TCGA, MSK-IMPACT, AACR Project GENIE). Natural language queries are translated into workflows for mutation profiling, Kaplan–Meier survival analysis, and odds ratio estimation across clinical and demographic subgroups. Results: The platform replicated known genotype–phenotype associations, including elevated TP53 mutation frequency in EOCRC and poorer prognosis in TP53-mutated tumors. Significant findings included a survival benefit for patients with early-onset TP53-mutant CRC treated with FOLFOX (p = 0.0149). Additional exploratory analyses showed a trend toward higher prevalence of TP53 pathway alterations in Hispanic/Latino EOCRC patients (OR = 2.13, p = 0.084) and identified sex-based disparities in treatment, with women being less likely than men to receive FOLFOX (OR = 0.845, p = 0.0138). Conclusions: AI-HOPE-TP53, developed in this study and made publicly available, is the first conversational AI platform tailored for pathway-specific and disparity-aware EOCRC research. By integrating clinical, genomic, and demographic data through natural language interaction, the platform enables hypothesis generation and equity-focused analyses, with significant potential to advance precision oncology.
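The odds ratios cited above (e.g., OR = 2.13 for TP53 pathway alterations) come from 2×2 contingency tables over patient subgroups. A minimal sketch, not the AI-HOPE-TP53 implementation, with hypothetical counts:

```python
def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 contingency table:

                  altered   not altered
    group 1          a           b
    group 2          c           d
    """
    if b == 0 or c == 0:
        raise ValueError("zero cell: consider a Haldane-Anscombe correction")
    return (a * d) / (b * c)

# Hypothetical counts (NOT from the study): 64/100 tumors altered in one
# subgroup vs. 92/200 in the comparison group.
print(round(odds_ratio(64, 36, 92, 108), 2))  # 2.09
```

In practice a significance test (e.g., Fisher's exact test) and confidence interval would accompany the point estimate, as the reported p-values suggest the platform does.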
33 pages, 4233 KB  
Article
A Comparative Study of PEGASUS, BART, and T5 for Text Summarization Across Diverse Datasets
by Eman Daraghmi, Lour Atwe and Areej Jaber
Future Internet 2025, 17(9), 389; https://doi.org/10.3390/fi17090389 - 28 Aug 2025
Abstract
This study conducts a comprehensive comparative evaluation of three transformer-based models, PEGASUS, BART, and T5 variants (SMALL and BASE), for the task of abstractive text summarization. The evaluation spans three benchmark datasets: CNN/DailyMail (long-form news articles), Xsum (extreme single-sentence summaries of BBC articles), and Samsum (conversational dialogues). Each dataset presents unique challenges in terms of length, style, and domain, enabling a robust assessment of the models' capabilities. All models were fine-tuned under controlled experimental settings using filtered and preprocessed subsets, with token length limits applied to maintain consistency and prevent truncation. The evaluation leveraged ROUGE-1, ROUGE-2, and ROUGE-L scores to measure summary quality, while efficiency metrics such as training time were also considered. An additional qualitative assessment was conducted through expert human evaluation of fluency, relevance, and conciseness. Results indicate that PEGASUS achieved the highest ROUGE scores on CNN/DailyMail, BART excelled on Xsum and Samsum, and the T5 models, particularly T5-Base, narrowed the performance gap with the larger models while still offering efficiency advantages over PEGASUS and BART. These findings highlight the trade-offs between model performance and computational efficiency, offering practical insights into model scaling: T5-Small favors lightweight efficiency, while T5-Base provides stronger accuracy without excessive resource demands.
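The ROUGE-1 metric used throughout the study measures unigram overlap between a candidate summary and a reference. A minimal whitespace-tokenized sketch (real evaluations typically use the `rouge-score` package with stemming):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: clipped unigram overlap between candidate and reference."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each match clipped to the min count
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat",
                      "the cat is on the mat"), 3))  # 0.833
```

ROUGE-2 follows the same scheme over bigrams, and ROUGE-L scores the longest common subsequence instead of n-gram counts.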
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))