Informatics, Volume 12, Issue 4 (December 2025) – 41 articles

Cover Story: Artificial intelligence (AI) now shapes decisions in domains where errors carry profound consequences for safety, welfare, and long-term societal well-being. As AI capabilities grow, the central challenge shifts from smarter algorithms to responsible Human–AI Collaboration. This work reveals a pivotal shift in decision support: from replacing human judgment to amplifying the intuitive reasoning behind complex choices. It identifies four pillars of successful collaboration: complementary human–AI roles, adaptive user-centered systems, context-aware task allocation, and calibrated reliance on automation. Surprisingly, our findings expose a performance paradox—human–AI teams do not always outperform the best individual decision-makers. These insights redefine the sociotechnical blueprint for AI systems that empower, rather than override, human expertise in critical environments.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 741 KB  
Article
Combining Fuzzy Cognitive Maps and Metaheuristic Algorithms to Predict Preeclampsia and Intrauterine Growth Restriction
by María Paula García, Jesús David Díaz-Meza, Kenia Hoyos, Bethia Pacheco, Rodrigo García and William Hoyos
Informatics 2025, 12(4), 141; https://doi.org/10.3390/informatics12040141 - 15 Dec 2025
Abstract
Preeclampsia (PE) and intrauterine growth restriction (IUGR) are obstetric complications associated with placental dysfunction that represent a public health problem due to high maternal and fetal morbidity and mortality. Early detection is crucial for timely intervention. Therefore, this study proposes models based on fuzzy cognitive maps (FCMs) optimized with metaheuristic algorithms (particle swarm optimization (PSO) and genetic algorithms (GA)) to predict PE and IUGR. The results showed that FCM-PSO applied to the PE dataset achieved excellent performance (accuracy, precision, recall, and F1-score = 1.0). The FCM-GA model excelled at predicting IUGR, with an accuracy and F1-score of 0.97. Our proposed models outperformed those reported in the literature for predicting PE and IUGR. Analysis of the relationships between nodes identified influential variables such as sFlt-1, sFlt-1/PlGF, and uterine Doppler parameters, in accordance with the pathophysiology of placental disorders. FCMs optimized with PSO and GA offer a viable alternative for clinical decision support because they capture nonlinear relationships while remaining interpretable at the variable level. In addition, they are suitable for scenarios where low computational resource consumption is required. Full article
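The FCM inference at the heart of such models can be sketched as follows. Each concept repeatedly aggregates the weighted influences of the other concepts and is squashed through a sigmoid until the state vector stabilizes. The weight matrix, concept ordering, and sigmoid steepness below are illustrative assumptions, not the authors' trained parameters (which the paper learns with PSO/GA).

```python
import math

def fcm_step(activations, weights, lam=1.0):
    """One synchronous FCM update: each concept adds the weighted
    influences of the others to its own state, then a sigmoid squashes
    the result back into (0, 1)."""
    n = len(activations)
    new = []
    for i in range(n):
        total = activations[i] + sum(
            weights[j][i] * activations[j] for j in range(n) if j != i
        )
        new.append(1.0 / (1.0 + math.exp(-lam * total)))
    return new

def fcm_infer(activations, weights, lam=1.0, tol=1e-5, max_iter=100):
    """Iterate updates until the concept state vector stabilizes."""
    for _ in range(max_iter):
        nxt = fcm_step(activations, weights, lam)
        if max(abs(a - b) for a, b in zip(nxt, activations)) < tol:
            return nxt
        activations = nxt
    return activations

# Toy 3-concept map (say: sFlt-1, uterine Doppler, PE risk); W[j][i] is
# the influence of concept j on concept i. These weights are made up.
W = [[0.0, 0.3, 0.7],
     [0.3, 0.0, 0.6],
     [0.0, 0.0, 0.0]]
state = fcm_infer([0.8, 0.6, 0.0], W, lam=2.0)
```

In the full method, PSO or GA would search over the entries of `W` to minimize prediction error on the labeled obstetric dataset.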

32 pages, 7383 KB  
Article
Vertebra Segmentation and Cobb Angle Calculation Platform for Scoliosis Diagnosis Using Deep Learning: SpineCheck
by İrfan Harun İlkhan, Halûk Gümüşkaya and Firdevs Turgut
Informatics 2025, 12(4), 140; https://doi.org/10.3390/informatics12040140 - 11 Dec 2025
Abstract
This study presents SpineCheck, a fully integrated deep-learning-based clinical decision support platform for automatic vertebra segmentation and Cobb angle (CA) measurement from scoliosis X-ray images. The system unifies end-to-end preprocessing, U-Net-based segmentation, geometry-driven angle computation, and a web-based clinical interface within a single deployable architecture. For secure clinical use, SpineCheck adopts a stateless “process-and-delete” design, ensuring that no radiographic data or Protected Health Information (PHI) are permanently stored. Five U-Net family models (U-Net, optimized U-Net-2, Attention U-Net, nnU-Net, and UNet3++) are systematically evaluated under identical conditions using Dice similarity, inference speed, GPU memory usage, and deployment stability, enabling deployment-oriented model selection. A robust CA estimation pipeline is developed by combining minimum-area rectangle analysis with Theil–Sen regression and spline-based anatomical modeling to suppress outliers and improve numerical stability. The system is validated on a large-scale dataset of 20,000 scoliosis X-ray images, demonstrating strong agreement with expert measurements based on Mean Absolute Error, Pearson correlation, and Intraclass Correlation Coefficient metrics. These findings confirm the reliability and clinical robustness of SpineCheck. By integrating large-scale validation, robust geometric modeling, secure stateless processing, and real-time deployment capabilities, SpineCheck provides a scalable and clinically reliable framework for automated scoliosis assessment. Full article
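The outlier-robust angle step can be sketched in a few lines: Theil–Sen estimates each endplate slope as the median of all pairwise slopes, and the Cobb angle is the absolute difference of the two slope angles. The landmark coordinates below are illustrative, not output from the SpineCheck pipeline.

```python
import math
from itertools import combinations
from statistics import median

def theil_sen_slope(points):
    """Median of the slopes over all point pairs: a single outlier
    landmark barely moves the estimate."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    return median(slopes)

def cobb_angle(upper_pts, lower_pts):
    """Cobb angle (degrees) between the two endplate lines."""
    m1 = theil_sen_slope(upper_pts)
    m2 = theil_sen_slope(lower_pts)
    return abs(math.degrees(math.atan(m1) - math.atan(m2)))

# Illustrative endplate landmarks; the last upper point is a gross
# outlier, yet the median-of-slopes estimate stays at 0.2.
upper = [(0, 0), (10, 2), (20, 4), (30, 6), (40, 100)]
lower = [(0, 0), (10, -1), (20, -2), (30, -3), (40, -4)]
angle = cobb_angle(upper, lower)   # roughly 17 degrees
```

A least-squares fit through the same `upper` points would be dragged far off by the outlier, which is exactly the failure mode the robust estimator suppresses.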

22 pages, 6144 KB  
Article
Multimodal Large Language Models vs. Human Authors: A Comparative Study of Chinese Fairy Tales for Young Children
by Jing Du, Wenhao Liu, Dibin Zhou, Seongku Hong and Fuchang Liu
Informatics 2025, 12(4), 139; https://doi.org/10.3390/informatics12040139 - 9 Dec 2025
Abstract
In the realm of children’s education, multimodal large language models (MLLMs) are already being used to create educational materials for young learners. But how significant are the differences between image-based fairy tales generated by MLLMs and those crafted by human authors? This paper addresses this question through a multi-dimensional human evaluation and questionnaire surveys. Specifically, we conducted studies on evaluating MLLM-generated stories and distinguishing them from human-written stories, involving 50 undergraduate students in education-related majors, 30 first-grade students, 81 second-grade students, and 103 parents. The findings reveal that most undergraduate students with an educational background, elementary school students, and parents perceive stories generated by MLLMs as highly similar to those written by humans. The evaluations by primary school students and a vocabulary analysis further show that, unlike human-authored stories, which tend to exceed the vocabulary level of young students, MLLM-generated stories control vocabulary complexity while remaining engaging for young readers. Based on these results, we further discuss the following question: can MLLMs assist or even replace humans in writing Chinese children’s fairy tales based on pictures for young children? We approach this question from both a technical perspective and a user perspective. Full article

24 pages, 8512 KB  
Article
AI-Enabled Intelligent System for Automatic Detection and Classification of Plant Diseases Towards Precision Agriculture
by Gujju Siva Krishna, Zameer Gulzar, Arpita Baronia, Jagirdar Srinivas, Padmavathy Paramanandam and Kasharaju Balakrishna
Informatics 2025, 12(4), 138; https://doi.org/10.3390/informatics12040138 - 8 Dec 2025
Abstract
Technology-driven agriculture, or precision agriculture (PA), is indispensable in the contemporary world due to its advantages and the availability of technological innovations. In particular, early disease detection in agricultural crops helps the farming community ensure crop health, reduce expenditure, and increase crop yield. Governments have mainly used current systems for agricultural statistics and strategic decision-making, but farmers still critically need cost-effective, user-friendly solutions they can use regardless of their educational level. In this study, we used four apple leaf diseases (leaf spot, mosaic, rust, and brown spot) from the PlantVillage dataset to develop an Automated Agricultural Crop Disease Identification System (AACDIS), a deep learning framework for identifying and categorizing crop diseases. The framework uses deep convolutional neural networks (CNNs) and includes three CNN models created specifically for this application. AACDIS achieves significant performance improvements by combining cascade inception modules with design elements inspired by the well-known AlexNet architecture, making it a potent tool for managing agricultural diseases. AACDIS also provides Region of Interest (ROI) awareness, a crucial component that improves the efficiency and precision of disease identification by ensuring the system can quickly and accurately locate disease-related regions within images. Experimental findings show a test accuracy of 99.491%, which surpasses many state-of-the-art deep learning models. This empirical study reveals the potential benefits of the proposed system for early identification of diseases and motivates further investigation toward full-fledged precision and smart agriculture. Full article
(This article belongs to the Section Machine Learning)

23 pages, 2519 KB  
Review
Mapping the AI Surge in Higher Education: A Bibliometric Study Spanning a Decade (2015–2025)
by Mousin Omarsaib, Sara Bibi Mitha, Anisa Vahed and Ghulam Masudh Mohamed
Informatics 2025, 12(4), 137; https://doi.org/10.3390/informatics12040137 - 8 Dec 2025
Abstract
There has recently been a pronounced global escalation in scholarly output concerning Artificial Intelligence (AI) within the context of higher education (HE). However, the precise locus of this growth remains ambiguous, thereby hindering the systematic integration of critical AI trends into HE practices. To address this opacity, the present study adopts a rigorous and impartial analytical approach by synthesizing datasets from the Web of Science (WoS) and Scopus through the Biblioshiny platform. In addition, independent examinations of WoS and Scopus data were conducted using co-occurrence network analyses in VOSviewer, which revealed comparable patterns of cluster strength across both datasets. Complementing these methods, Latent Dirichlet Allocation (LDA) was employed to extract and interpret thematic structures within locally cited references, thereby providing deeper insights into the extant research discourse. Findings revealed significant acceleration from 2023 in publication trends, annual growth, cited references, top authors, leading journals, and leading countries. Cluster-strength patterns from the VOSviewer co-occurrence networks revealed growing interest in generative AI tools, AI ethics, and concerns about AI integration into the HE curriculum. The LDA analysis identified two dominant themes, the pedagogical integration of generative AI tools and broader academic discourse on AI ethics, which correlated with the VOSviewer findings and enhanced the credibility, reliability, and validity of the bibliometric techniques applied in the study. Recommendations and future directions offer valuable insights for policymakers and stakeholders to address the pedagogical integration of generative AI tools in HE. The development of frameworks and ethical guidelines is important to ensure the fair and transparent adoption of AI in HE. Further, addressing global inequalities in adoption, in alignment with UNESCO’s Sustainable Development Goals, is crucial to ensuring equitable and responsible AI integration in HE. Full article

18 pages, 1606 KB  
Article
CLFF-NER: A Cross-Lingual Feature Fusion Model for Named Entity Recognition in the Traditional Chinese Festival Culture Domain
by Shenghe Yang, Kun He, Wei Li and Yingying He
Informatics 2025, 12(4), 136; https://doi.org/10.3390/informatics12040136 - 5 Dec 2025
Abstract
With the rapid development of information technology, there is an increasing demand for the digital preservation of traditional festival culture and the extraction of relevant knowledge. However, existing research on Named Entity Recognition (NER) for Chinese traditional festival culture lacks support from high-quality corpora and dedicated model methods. To address this gap, this study proposes a Named Entity Recognition model, CLFF-NER, which integrates multi-source heterogeneous information. The model operates as follows: first, Multilingual BERT is employed to obtain the contextual semantic representations of Chinese and English sentences. Subsequently, a Multiconvolutional Kernel Network (MKN) is used to extract the local structural features of entities. Then, a Transformer module is introduced to achieve cross-lingual, cross-attention fusion of Chinese and English semantics. Furthermore, a Graph Neural Network (GNN) is utilized to selectively supplement useful English information, thereby alleviating the interference caused by redundant information. Finally, a gating mechanism and Conditional Random Field (CRF) are combined to jointly optimize the recognition results. Experiments were conducted on the public Chinese Festival Culture Dataset (CTFCDataSet), and the model achieved 89.45%, 90.01%, and 89.73% in precision, recall, and F1 score, respectively—significantly outperforming a range of mainstream baseline models. Meanwhile, the model also demonstrated competitive performance on two other public datasets, Resume and Weibo, which verifies its strong cross-domain generalization ability. Full article
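The gating step at the end of the pipeline can be sketched as a per-dimension convex mixture: a sigmoid gate, conditioned on the primary (Chinese) features, decides how much of the supplementary English evidence to admit. The feature vectors and gate weights below are illustrative assumptions, not CLFF-NER's trained parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_fusion(chinese_feat, english_feat, gate_weights, bias=0.0):
    """Element-wise gate: g in (0, 1) decides, per dimension, how much
    of the Chinese feature to keep versus the English feature, so
    redundant cross-lingual signal can be suppressed."""
    fused = []
    for c, e, w in zip(chinese_feat, english_feat, gate_weights):
        g = sigmoid(w * c + bias)      # gate conditioned on the Chinese feature
        fused.append(g * c + (1.0 - g) * e)
    return fused

# Illustrative 4-dimensional features and gate weights (not trained values).
zh = [0.9, -0.2, 0.5, 0.1]
en = [0.3, 0.4, -0.1, 0.8]
out = gated_fusion(zh, en, gate_weights=[2.0, 2.0, 2.0, 2.0])
```

Because the gate output is a convex weight, each fused dimension always lies between the two source values, which keeps the supplementary signal from overwhelming the primary one.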

36 pages, 1888 KB  
Review
Enhancing Intuitive Decision-Making and Reliance Through Human–AI Collaboration: A Review
by Gerui Xu, Shruthi Venkatesha Murthy and Bochen Jia
Informatics 2025, 12(4), 135; https://doi.org/10.3390/informatics12040135 - 5 Dec 2025
Abstract
As AI decision support systems play a growing role in high-stakes decision making, ensuring effective integration of human intuition with AI recommendations is essential. Despite advances in AI explainability, challenges persist in fostering appropriate reliance. This review explores AI decision support systems that enhance human intuition through the analysis of 84 studies addressing three questions: (1) What design strategies enable AI systems to support humans’ intuitive capabilities while maintaining decision-making autonomy? (2) How do AI presentation and interaction approaches influence trust calibration and reliance behaviors in human–AI collaboration? (3) What ethical and practical implications arise from integrating AI decision support systems into high-risk human decision making, particularly regarding trust calibration, skill degradation, and accountability across different domains? Our findings reveal four key design strategies: complementary role architectures that amplify rather than replace human judgment, adaptive user-centered designs tailoring AI support to individual decision-making styles, context-aware task allocation dynamically assigning responsibilities based on situational factors, and autonomous reliance calibration mechanisms empowering users’ control over AI dependence. We identified that visual presentations, interactive features, and uncertainty communication significantly influence trust calibration, with simple visual highlights proving more effective than complex presentations and interactive methods in preventing over-reliance. However, a concerning performance paradox emerges in which human–AI combinations often underperform the best individual agent while surpassing human-only performance. The research demonstrates that successful AI integration in high-risk contexts requires domain-specific calibration, integrated sociotechnical design addressing trust calibration and skill preservation simultaneously, and proactive measures to maintain the human agency and competencies essential for safety, accountability, and ethical responsibility. Full article

27 pages, 1028 KB  
Article
MCD-Temporal: Constructing a New Time-Entropy Enhanced Dynamic Weighted Heterogeneous Ensemble for Cognitive Level Classification
by Yuhan Wu, Long Zhang, Bin Li and Wendong Zhang
Informatics 2025, 12(4), 134; https://doi.org/10.3390/informatics12040134 - 2 Dec 2025
Abstract
Accurate classification of cognitive levels in instructional dialogues is essential for personalized education and intelligent teaching systems. However, most existing methods predominantly rely on static textual features and a shallow semantic analysis. They often overlook dynamic temporal interactions and struggle with class imbalance. To address these limitations, this study proposes a novel framework for cognitive-level classification. This framework integrates time entropy-enhanced dynamics with a dynamically weighted, heterogeneous ensemble strategy. Specifically, we reconstruct the original Multi-turn Classroom Dialogue (MCD) dataset by introducing time entropy to quantify teacher–student speaking balance and semantic richness features based on Term Frequency-Inverse Document Frequency (TF-IDF), resulting in an enhanced MCD-temporal dataset. We then design a Dynamic Weighted Heterogeneous Ensemble (DWHE), which adjusts weights based on the class distribution. Our framework achieves a state-of-the-art macro-F1 score of 0.6236. This study validates the effectiveness of incorporating temporal dynamics and adaptive ensemble learning for robust cognitive level assessment, offering a more powerful tool for educational AI applications. Full article
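The time-entropy idea can be sketched by treating per-speaker talk-time shares as a probability distribution: Shannon entropy peaks when teacher and student speak equally and drops as one party dominates. The exact feature definition in the paper is not reproduced here; the durations below are illustrative.

```python
import math

def time_entropy(durations):
    """Shannon entropy (bits) of talk-time shares; higher values mean a
    more balanced teacher-student exchange."""
    total = sum(durations)
    probs = [d / total for d in durations if d > 0]
    return -sum(p * math.log2(p) for p in probs)

balanced = time_entropy([30.0, 30.0])   # teacher 30 s, student 30 s -> 1 bit
lopsided = time_entropy([55.0, 5.0])    # teacher dominates -> much lower
```

A dialogue turn with high time entropy thus carries a quantitative signal of interactive balance that static text features alone would miss.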

35 pages, 5859 KB  
Article
Fuzzy Ontology Embeddings and Visual Query Building for Ontology Exploration
by Vladimir Zhurov, John Kausch, Kamran Sedig and Mostafa Milani
Informatics 2025, 12(4), 133; https://doi.org/10.3390/informatics12040133 - 1 Dec 2025
Abstract
Ontologies play a central role in structuring knowledge across domains, supporting tasks such as reasoning, data integration, and semantic search. However, their large size and complexity—particularly in fields such as biomedicine, computational biology, law, and engineering—make them difficult for non-experts to navigate. Formal query languages such as SPARQL offer expressive access but require users to understand the ontology’s structure and syntax. In contrast, visual exploration tools and basic keyword-based search interfaces are easier to use but often lack flexibility and expressiveness. We introduce FuzzyVis, a proof-of-concept system that enables intuitive and expressive exploration of complex ontologies. FuzzyVis integrates two key components: a fuzzy logic-based querying model built on fuzzy ontology embeddings, and an interactive visual interface for building and interpreting queries. Users can construct new composite concepts by selecting and combining existing ontology concepts using logical operators such as conjunction, disjunction, and negation. These composite concepts are matched against the ontology using fuzzy membership-based embeddings, which capture degrees of membership and support approximate, concept-level similarity search. The visual interface supports browsing, query composition, and partial search without requiring formal syntax. By combining fuzzy semantics with embedding-based reasoning, FuzzyVis enables flexible interpretation, efficient computation, and exploratory learning. A usage scenario demonstrates how FuzzyVis supports subtle information needs and helps users uncover relevant concepts in large, complex ontologies. Full article
(This article belongs to the Section Human-Computer Interaction)
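The fuzzy combination of composite concepts can be sketched with the standard min/max/complement operators: conjunction takes the minimum membership, disjunction the maximum, and negation the complement. The concept names and membership scores below are illustrative, not FuzzyVis's actual embeddings.

```python
def f_and(a, b):   # conjunction: minimum t-norm
    return min(a, b)

def f_or(a, b):    # disjunction: maximum t-conorm
    return max(a, b)

def f_not(a):      # negation: standard complement
    return 1.0 - a

# Membership degrees of one ontology entity in three hypothetical concepts.
mu = {"inflammatory": 0.8, "chronic": 0.3, "infectious": 0.6}

# Composite query: inflammatory AND (chronic OR NOT infectious)
score = f_and(mu["inflammatory"], f_or(mu["chronic"], f_not(mu["infectious"])))
```

Ranking entities by such composite scores is what lets a visual query builder answer graded, approximate questions that a crisp Boolean query would reduce to all-or-nothing matches.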

27 pages, 610 KB  
Article
Reducing AI-Generated Misinformation in Australian Higher Education: A Qualitative Analysis of Institutional Responses to AI-Generated Misinformation and Implications for Cybercrime Prevention
by Leo S. F. Lin, Geberew Tulu Mekonnen, Mladen Zecevic, Immaculate Motsi-Omoijiade, Duane Aslett and Douglas M. C. Allan
Informatics 2025, 12(4), 132; https://doi.org/10.3390/informatics12040132 - 28 Nov 2025
Abstract
Generative Artificial Intelligence (GenAI) has transformed Australian higher education, amplifying online harms such as misinformation, fraud, and image-based abuse, with significant implications for cybercrime prevention. Combining a PRISMA-guided systematic review with MAXQDA-driven analysis of Australian university policies, this research evaluates institutional strategies against national frameworks, such as the Cybersecurity Strategy 2023–2030. Analyzing data from the academic literature, we identify three key themes: educational strategies, alignment with national frameworks, and policy gaps and development. As the first qualitative analysis of 40 Australian university policies, this study uncovers systemic fragmentation in governance frameworks, with only 12 institutions addressing data privacy risks and none directly targeting AI-driven disinformation threats like deepfake harassment—a critical gap in the global AI governance literature. This study provides actionable recommendations: a National GenAI Governance Framework co-developed by TEQSA/UA and the DoE, enhanced cyberbullying policies, behavior-focused training to enhance digital safety and prevent cybercrime in Australian higher education, and a mandatory annual CyberAI Literacy Module for all students and staff to ensure awareness of cybersecurity risks, responsible use of artificial intelligence, and digital safety practices within the university community. Full article

19 pages, 6349 KB  
Article
Hierarchical Fake News Detection Model Based on Multi-Task Learning and Adversarial Training
by Yi Sun and Dunhui Yu
Informatics 2025, 12(4), 131; https://doi.org/10.3390/informatics12040131 - 27 Nov 2025
Abstract
The harm caused by online fake news has drawn widespread research attention to fake news detection. Most existing methods focus on improving the accuracy and earliness of detection while ignoring the cross-topic shifts that fake news frequently undergoes in online environments. We propose a hierarchical fake news detection method (HAMFD) based on multi-task learning and adversarial training. Through event-level multi-task learning, subjective and objective information is introduced, and a subjectivity classifier captures sentiment shift within events, improving the in-domain performance and generalization ability of fake news detection. On this basis, textual features and sentiment-shift features are fused to perform event-level fake news detection and enhance detection accuracy. The post-level loss and event-level loss are weighted and summed for backpropagation. Adversarial perturbations are added to the embedding layer of the post-level module to deceive the detector, enabling the model to better resist adversarial attacks and enhancing its robustness and topic adaptability. Experiments on three real-world social media datasets show that the proposed method improves performance in both in-domain and cross-topic fake news detection. Specifically, the model attains accuracies of 91.3% on Twitter15, 90.4% on Twitter16, and 95.7% on Weibo, surpassing advanced baseline methods by 1.6%, 1.5%, and 1.1%, respectively. Full article
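Embedding-layer adversarial training is commonly implemented as an L2-normalized gradient step of fixed size added to the embeddings (FGM-style); whether the paper uses exactly this scheme is an assumption, and the vectors below are illustrative.

```python
import math

def adversarial_perturbation(grad, epsilon=0.5):
    """L2-normalized gradient perturbation: a step of fixed norm epsilon
    in the direction that most increases the loss."""
    norm = math.sqrt(sum(g * g for g in grad)) or 1.0
    return [epsilon * g / norm for g in grad]

# Illustrative embedding vector and loss gradient (not trained values).
embedding = [0.12, -0.40, 0.33]
grad = [0.6, -0.8, 0.0]
delta = adversarial_perturbation(grad, epsilon=0.5)
perturbed = [e + d for e, d in zip(embedding, delta)]
```

Training the detector on both `embedding` and `perturbed` inputs forces it to hold its prediction under the worst small shift of the input, which is what confers the robustness the abstract describes.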

28 pages, 3223 KB  
Article
Explainable Artificial Intelligence for Workplace Mental Health Prediction
by Tsholofelo Mokheleli, Tebogo Bokaba and Elliot Mbunge
Informatics 2025, 12(4), 130; https://doi.org/10.3390/informatics12040130 - 26 Nov 2025
Abstract
The increased prevalence of mental health issues in the workplace affects employees’ well-being and organisational success, necessitating proactive interventions such as employee assistance programmes, stress management workshops, and tailored wellness initiatives. Artificial intelligence (AI) techniques are transforming mental health risk prediction using behavioural, environmental, and workplace data. However, the “black-box” nature of many AI models hinders trust, transparency, and adoption in sensitive domains such as mental health. This study used the Open Sourcing Mental Illness (OSMI) secondary dataset (2016–2023) and applied four machine learning (ML) classifiers, Random Forest (RF), XGBoost, Support Vector Machine (SVM), and AdaBoost, to predict workplace mental health outcomes. Explainable AI (XAI) techniques, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), were integrated to provide both global (SHAP) and instance-level (LIME) interpretability. The Synthetic Minority Oversampling Technique (SMOTE) was applied to address class imbalance. The results show that XGBoost and RF achieved the highest cross-validation accuracy (94%), with XGBoost performing best overall (accuracy = 91%, ROC AUC = 90%), followed by RF (accuracy = 91%). SHAP revealed that sought_treatment, past_mh_disorder, and current_mh_disorder had the most significant positive impact on predictions, while LIME provided case-level explanations to support individualised interpretation. These findings show the importance of explainable ML models in informing timely, targeted interventions, such as improving access to mental health resources, promoting stigma-free workplaces, and supporting treatment-seeking behaviour, while ensuring the ethical and transparent integration of AI into workplace mental health management. Full article
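SMOTE's core step can be sketched in a few lines: a synthetic minority sample is a random point on the segment between a minority sample and one of its nearest minority-class neighbors. This toy version (fixed k, plain lists, fixed seed) is illustrative, not the imbalanced-learn implementation typically used in such studies.

```python
import random

def nearest_neighbors(x, samples, k):
    """Indices of the k closest minority samples to x (excluding x itself)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, s)), i)
        for i, s in enumerate(samples) if s != x
    )
    return [i for _, i in dists[:k]]

def smote(minority, n_synthetic, k=3, rng=None):
    """Generate n_synthetic samples by interpolating toward neighbors."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        nb = minority[rng.choice(nearest_neighbors(x, minority, k))]
        gap = rng.random()                  # position along the segment
        out.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return out

minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote(minority, n_synthetic=5, k=2)
```

Because every synthetic point is a convex combination of two real minority samples, oversampling densifies the minority region instead of merely duplicating records.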

23 pages, 2616 KB  
Article
ETICD-Net: A Multimodal Fake News Detection Network via Emotion-Topic Injection and Consistency Modeling
by Wenqian Shang, Jinru Yang, Linlin Zhang, Tong Yi and Peng Liu
Informatics 2025, 12(4), 129; https://doi.org/10.3390/informatics12040129 - 25 Nov 2025
Abstract
The widespread dissemination of multimodal disinformation, which combines inflammatory text with manipulated images, poses a severe threat to society. Existing detection methods typically process textual and visual features in isolation or perform simple fusion, failing to capture the sophisticated semantic inconsistencies commonly found in false information. To address this, we propose a novel framework: Emotion-Topic Injection and Consistency Detection Network (ETICD-Net). First, a large language model (LLM) extracts structured sentiment and topic-guided signals from news texts to provide rich semantic clues. Second, unlike previous approaches, this guided signal is injected into the feature extraction processes of both modalities: it enhances text features from BERT and modulates image features from ResNet, thereby generating sentiment-topic-aware feature representations. Additionally, this paper introduces a hierarchical consistency fusion module that explicitly evaluates semantic coherence among these enhanced features. It employs cross-modal attention mechanisms, enabling text to query image regions relevant to its statements, and calculates explicit dissimilarity metrics to quantify inconsistencies. Extensive experiments on the Weibo and Twitter benchmark datasets demonstrate that ETICD-Net outperforms or matches state-of-the-art methods, achieving accuracy and F1 scores of 90.6% and 91.5%, respectively. Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
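The consistency idea at the heart of ETICD-Net can be illustrated with a minimal numpy sketch (not the authors' implementation): a text vector queries image-region features through scaled dot-product attention, and an explicit dissimilarity score quantifies cross-modal inconsistency. The shapes, the cosine-based metric, and all variable names are assumptions for illustration only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def consistency_score(text_vec, region_feats):
    """Text queries image regions via attention; an explicit dissimilarity
    metric then quantifies cross-modal inconsistency.
    text_vec: (d,), region_feats: (n_regions, d). Hypothetical shapes."""
    # attention weights: how relevant each image region is to the text
    attn = softmax(region_feats @ text_vec / np.sqrt(len(text_vec)))
    attended = attn @ region_feats  # (d,) attention-pooled image vector
    cos = (text_vec @ attended) / (
        np.linalg.norm(text_vec) * np.linalg.norm(attended) + 1e-9)
    return 1.0 - cos  # 0 = consistent, up to 2 = maximally inconsistent

rng = np.random.default_rng(0)
text = rng.normal(size=8)
# image regions that nearly restate the text should score near zero
consistent = np.tile(text, (4, 1)) + 0.01 * rng.normal(size=(4, 8))
assert consistency_score(text, consistent) < 0.1
```

A real system would obtain `text_vec` from BERT and `region_feats` from ResNet, as the abstract describes; the dissimilarity output would then feed the final fake/real classifier.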
24 pages, 490 KB  
Article
Learning Dynamics Analysis: Assessing Generalization of Machine Learning Models for Optical Coherence Tomography Multiclass Classification
by Michael Sher, David Remyes, Riah Sharma and Milan Toma
Informatics 2025, 12(4), 128; https://doi.org/10.3390/informatics12040128 - 22 Nov 2025
Viewed by 954
Abstract
This study evaluated the generalization and reliability of machine learning models for multiclass classification of retinal pathologies using a diverse set of images representing eight disease categories. Images were aggregated from two public datasets and divided into training, validation, and test sets, with an additional independent dataset used for external validation. Multiple modeling approaches were compared, including classical machine learning algorithms, convolutional neural networks with and without data augmentation, and a deep neural network using pre-trained feature extraction. Analysis of learning dynamics revealed that classical models and unaugmented convolutional neural networks exhibited overfitting and poor generalization, while models with data augmentation and the deep neural network showed healthy, parallel convergence of training and validation performance. Only the deep neural network demonstrated a consistent, monotonic decrease in accuracy, F1-score, and recall from training through external validation, indicating robust generalization. These results underscore the necessity of evaluating learning dynamics (not just summary metrics) to ensure model reliability and patient safety. Typically, model performance is expected to decrease gradually as data becomes less familiar. Therefore, models that do not exhibit these healthy learning dynamics, or that show unexpected improvements in performance on subsequent datasets, should not be considered for clinical application, as such patterns may indicate methodological flaws or data leakage rather than true generalization. Full article
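The abstract's core recommendation, checking learning dynamics rather than summary metrics alone, reduces to a simple monotonicity test across increasingly unfamiliar data splits. A hedged sketch (the helper name and tolerance are hypothetical):

```python
def healthy_dynamics(metric_by_split, tol=0.0):
    """metric_by_split: one metric (e.g. accuracy) ordered from most to
    least familiar data: train -> validation -> test -> external.
    Returns True only if each later split scores no higher than the one
    before it, within tolerance. Hypothetical helper, not the paper's code."""
    return all(later <= earlier + tol
               for earlier, later in zip(metric_by_split, metric_by_split[1:]))

# Expected gradual decline as data becomes less familiar
assert healthy_dynamics([0.98, 0.95, 0.93, 0.90])
# An unexpected rebound on later splits may signal leakage or flawed methodology
assert not healthy_dynamics([0.98, 0.91, 0.95, 0.97])
```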
17 pages, 1783 KB  
Article
MOOC Dropout Prediction via a Dilated Convolutional Attention Network with Lie Group Features
by Yinxu Liu, Chengjun Xu, Desheng Yang and Yuncheng Shen
Informatics 2025, 12(4), 127; https://doi.org/10.3390/informatics12040127 - 21 Nov 2025
Cited by 1 | Viewed by 1024
Abstract
Massive open online courses (MOOCs) represent an innovative online learning paradigm that has garnered considerable popularity in recent years, attracting a multitude of learners to MOOC platforms due to their accessible and adaptable instructional structure. However, the elevated dropout rate in current MOOCs limits their advancement. Current dropout prediction models predominantly employ fixed-size convolutional kernels for feature extraction, which insufficiently address temporal dependencies and consequently demonstrate specific limitations. We propose a Lie Group-based feature context-local fusion attention model for predicting dropout in MOOCs. This model initially extracts shallow features using Lie Group machine learning techniques and subsequently integrates multiple parallel dilated convolutional modules to acquire high-level semantic representations. We design an attention mechanism that integrates contextual and local features, effectively capturing the temporal dependencies in the study behaviors of learners. We performed multiple experiments on the XuetangX dataset to evaluate the model’s efficacy. The results show that our method attains a precision score of 0.910, exceeding the previous state-of-the-art approach by 3.3%. Full article
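The parallel dilated convolutional modules mentioned above can be sketched in a few lines of numpy. The kernel is a toy second-difference filter, not a learned one, and the branch dilations (1, 2, 4) are illustrative:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1D convolution with a dilated kernel (numpy sketch).
    Larger dilations widen the receptive field without extra parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of the dilated kernel
    out_len = len(x) - span + 1
    return np.array([sum(kernel[j] * x[i + j * dilation] for j in range(k))
                     for i in range(out_len)])

x = np.arange(20, dtype=float)       # stand-in for a learner-activity sequence
kernel = np.array([1.0, -2.0, 1.0])  # toy kernel; real kernels are learned
# Parallel branches with growing dilation see short- and long-range behavior
branches = [dilated_conv1d(x, kernel, d) for d in (1, 2, 4)]
assert [len(b) for b in branches] == [18, 16, 12]
# A linear ramp has zero second difference at every dilation
assert all(abs(b).max() < 1e-9 for b in branches)
```

In the paper's model these branch outputs would be fused and passed to the context-local attention mechanism; here they simply demonstrate how dilation trades output length for temporal reach.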
17 pages, 962 KB  
Article
Automated Hyperparameter Optimization for Cyberattack Detection Based on Machine Learning in IoT Systems
by Fray L. Becerra-Suarez, Lloy Pinedo, Madeleine J. Gavilán-Colca, Mónica Díaz and Manuel G. Forero
Informatics 2025, 12(4), 126; https://doi.org/10.3390/informatics12040126 - 20 Nov 2025
Viewed by 1160
Abstract
The growing sophistication of cyberattacks in Internet of Things (IoT) environments demands proactive and efficient solutions. We present an automated hyperparameter optimization (HPO) method for detecting cyberattacks in IoT that explicitly addresses class imbalance. The approach combines a Random Forest surrogate, a UCB acquisition function with controlled exploration, and an objective function that maximizes weighted F1 and MCC; it also integrates stratified validation and a compact selection of descriptors by metaheuristic consensus. Five models (RandomForest, AdaBoost, DecisionTree, XGBoost, and MLP) were evaluated on CICIoT2023 and CIC-DDoS2019. The results show systematic improvements over default configurations and competitiveness compared to Hyperopt and GridSearch. For RandomForest, marked increases were observed in CIC-DDoS2019 (F1-Score from 0.9469 to 0.9995; MCC from 0.9284 to 0.9986) and consistent improvements in CICIoT2023 (F1-Score from 0.9947 to 0.9954; MCC from 0.9885 to 0.9896), while maintaining low inference times. These results demonstrate that the proposed HPO offers a solid balance between performance, computational cost, and traceability, and constitutes a reproducible alternative for strengthening cybersecurity mechanisms in IoT environments with limited resources. Full article
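A minimal sketch of the surrogate-plus-UCB loop the abstract describes, assuming a single hyperparameter, scikit-learn's RandomForestRegressor as the surrogate (with per-tree spread as the uncertainty estimate), and a toy stand-in objective; the real objective combines weighted F1 and MCC, and all names here are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def ucb_hpo(objective, bounds, n_init=5, n_iter=15, kappa=1.5, seed=0):
    """Surrogate-based HPO sketch: a Random Forest models the objective;
    a UCB acquisition (mean + kappa * std, std taken across trees) picks
    the next candidate from a random pool. Not the authors' implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, 1))
    y = np.array([objective(x[0]) for x in X])
    for _ in range(n_iter):
        rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
        cand = rng.uniform(lo, hi, size=(256, 1))
        per_tree = np.stack([t.predict(cand) for t in rf.estimators_])
        # controlled exploration: bonus proportional to surrogate disagreement
        ucb = per_tree.mean(axis=0) + kappa * per_tree.std(axis=0)
        x_next = cand[np.argmax(ucb)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next[0]))
    best = np.argmax(y)
    return X[best, 0], y[best]

# Toy stand-in for the weighted F1/MCC objective, peaked at 0.7
x_best, f_best = ucb_hpo(lambda x: 1.0 - (x - 0.7) ** 2, (0.0, 1.0))
assert f_best > 0.95
```

The paper additionally uses stratified validation and metaheuristic-consensus feature selection around this core loop; those stages are omitted from the sketch.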
17 pages, 1209 KB  
Article
An Adaptive Protocol Selection Framework for Energy-Efficient IoT Communication: Dynamic Optimization Through Context-Aware Decision Making
by Dmitrij Żatuchin and Maksim Azarskov
Informatics 2025, 12(4), 125; https://doi.org/10.3390/informatics12040125 - 17 Nov 2025
Viewed by 1497
Abstract
The rapid growth of Internet of Things (IoT) deployments has created an urgent need for energy-efficient communication strategies that can adapt to dynamic operational conditions. This study presents a novel adaptive protocol selection framework that dynamically optimizes IoT communication energy consumption through context-aware decision making, achieving up to 34% energy reduction compared to static protocol selection. The framework is grounded in a comprehensive empirical evaluation of three widely used IoT communication protocols—MQTT, CoAP, and HTTP—using Intel’s Running Average Power Limit (RAPL) for precise energy measurement across varied network conditions including packet loss (0–20%) and latency variations (1–200 ms). Our key contribution is the design and validation of an adaptive selection mechanism that employs multi-criteria decision making with hysteresis control to prevent oscillation, dynamically switching between protocols based on six runtime metrics: message frequency, payload size, network conditions, packet loss rate, available energy budget, and QoS requirements. Results show MQTT consumes only 40% of HTTP’s energy per byte at high volumes (>10,000 messages), while HTTP remains practical for low-volume traffic (<10 msg/min). A novel finding reveals receiver nodes consistently consume 15–20% more energy than senders, requiring new design considerations for IoT gateways. The framework demonstrates robust performance across simulated real-world conditions, maintaining 92% of optimal performance while requiring 85% less computation than machine learning approaches. These findings offer actionable guidance for IoT architects and developers, positioning this work as a practical solution for energy-aware IoT communication in production environments. Full article
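The hysteresis control that prevents protocol oscillation can be sketched as follows; the utility scores and switching margin are hypothetical stand-ins, not the paper's calibrated multi-criteria values:

```python
def select_protocol(scores, current, margin=0.1):
    """Hysteresis-controlled switching sketch: a challenger protocol must
    beat the incumbent's score by `margin` before a switch happens, which
    prevents oscillation between near-equal protocols. `scores` maps
    protocol name -> utility derived from the runtime metrics (hypothetical)."""
    challenger = max(scores, key=scores.get)
    if challenger != current and scores[challenger] > scores[current] + margin:
        return challenger
    return current

# Near-equal scores flip-flopping around each other: hysteresis holds MQTT
current, switches = "MQTT", 0
for mqtt, coap in [(0.80, 0.82), (0.81, 0.79), (0.79, 0.83), (0.82, 0.80)]:
    nxt = select_protocol({"MQTT": mqtt, "CoAP": coap, "HTTP": 0.3}, current)
    switches += nxt != current
    current = nxt
assert switches == 0
# A decisive advantage does trigger a switch
assert select_protocol({"MQTT": 0.5, "CoAP": 0.75, "HTTP": 0.3}, "MQTT") == "CoAP"
```

In the framework the score would be computed from the six runtime metrics the abstract lists (message frequency, payload size, network conditions, packet loss, energy budget, QoS); here it is a single number for brevity.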
18 pages, 1475 KB  
Article
Leveraging the Graph-Based LLM to Support the Analysis of Supply Chain Information
by Peng Su, Rui Xu and Dejiu Chen
Informatics 2025, 12(4), 124; https://doi.org/10.3390/informatics12040124 - 13 Nov 2025
Viewed by 1401
Abstract
Modern companies often rely on integrating an extensive network of suppliers to organize and produce industrial artifacts. Within this process, it is critical to maintain sustainability and flexibility by analyzing and managing information from the supply chain. In particular, there is a continuous demand to automatically analyze and infer information from extensive datasets structured in various forms, such as natural language and domain-specific models. The advancement of Large Language Models (LLM) presents a promising solution to address this challenge. By leveraging prompts that contain the necessary information provided by humans, LLM can generate insightful responses through analysis and reasoning over the provided content. However, the quality of these responses is still affected by the inherent opaqueness of LLM, stemming from their complex architectures, thus weakening their trustworthiness and limiting their applicability across different fields. To address this issue, this work presents a framework to leverage the graph-based LLM to support the analysis of supply chain information by combining the LLM and domain knowledge. Specifically, this work proposes an integration of LLM and domain knowledge to support an analysis of the supply chain as follows: (1) constructing a graph-based knowledge base to describe and model the domain knowledge; (2) creating prompts to support the retrieval of the graph-based models and guide the generation of LLM; (3) generating responses via LLM to support the analysis and reason about information across the supply chain. We demonstrate the proposed framework in the tasks of entity classification, link prediction, and reasoning across entities. Compared to the average performance of the best methods in the comparative studies, the proposed framework achieves a significant improvement of 59%, increasing the ROUGE-1 F1 score from 0.42 to 0.67. Full article
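Step (2) of the framework, retrieving graph-based knowledge and shaping it into a prompt, might look like this minimal sketch; the dict-of-edges graph, entity names, and prompt template are illustrative, not the authors' Neo4j/LLM stack:

```python
def graph_to_prompt(graph, entity, question):
    """Retrieve an entity's neighborhood from a graph-based knowledge base
    and serialize it into an LLM prompt. Hypothetical helper: the graph is
    a dict mapping entity -> list of (relation, target) edges."""
    facts = [f"{entity} --{rel}--> {dst}" for rel, dst in graph.get(entity, [])]
    context = "\n".join(facts) if facts else "(no known relations)"
    return (f"Supply-chain knowledge:\n{context}\n\n"
            f"Question: {question}\nAnswer using only the facts above.")

# Toy supply-chain graph with illustrative entities
supply_graph = {
    "GearboxCo": [("supplies", "AssemblyPlant-1"), ("located_in", "Gothenburg")],
}
prompt = graph_to_prompt(supply_graph, "GearboxCo", "Who does GearboxCo supply?")
assert "GearboxCo --supplies--> AssemblyPlant-1" in prompt
assert "Question:" in prompt
```

Grounding the prompt in retrieved graph facts is what lets the LLM's answers be traced back to the knowledge base, addressing the opaqueness concern the abstract raises.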
26 pages, 3024 KB  
Article
GraderAssist: A Graph-Based Multi-LLM Framework for Transparent and Reproducible Automated Evaluation
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Adina Cocu, Marian Viorel Craciun, Paul Iacobescu, Antonio Stefan Balau and Constantin Adrian Andrei
Informatics 2025, 12(4), 123; https://doi.org/10.3390/informatics12040123 - 9 Nov 2025
Cited by 1 | Viewed by 1490
Abstract
Background and objectives: Automated evaluation of open-ended responses remains a persistent challenge, particularly when consistency, transparency, and reproducibility are required. While large language models (LLMs) have shown promise in rubric-based evaluation, their reliability across multiple evaluators is still uncertain. Variability in scoring, feedback, and rubric adherence raises concerns about interpretability and system robustness. This study introduces GraderAssist, a graph-based, rubric-guided, multi-LLM framework designed to ensure transparent and reproducible automated evaluation. Methods: GraderAssist evaluates a dataset of 220 responses to both technical and argumentative questions, collected from undergraduate computer science courses. Six open-source LLMs and GPT-4 (as expert reference) independently scored each response using two predefined rubrics. All outputs—including scores, feedback, and metadata—were parsed, validated, and stored in a Neo4j graph database, enabling structured querying, traceability, and longitudinal analysis. Results: Cross-model analysis revealed systematic differences in scoring behavior and feedback generation. Some models produced more generous evaluations, while others aligned closely with GPT-4. Semantic analysis using Sentence-BERT embeddings highlighted distinctive feedback styles and variable rubric adherence. Inter-model agreement was stronger for technical criteria but diverged substantially for argumentative tasks. Originality: GraderAssist integrates rubric-guided evaluation, multi-model comparison, and graph-based storage into a unified pipeline. By emphasizing reproducibility, transparency, and fine-grained analysis of evaluator behavior, it advances the design of interpretable automated evaluation systems with applications in education and beyond. Full article
33 pages, 6577 KB  
Article
Percolation–Stochastic Model for Traffic Management in Transport Networks
by Anton Aleshkin, Dmitry Zhukov and Vadim Zhmud
Informatics 2025, 12(4), 122; https://doi.org/10.3390/informatics12040122 - 6 Nov 2025
Viewed by 1060
Abstract
This article describes a model for optimizing traffic flow control and generating traffic signal phases based on the stochastic dynamics of traffic and the percolation properties of transport networks. As input data (in SUMO), we use lane-level vehicle flow rates, treating them as random processes with unknown distributions. It is shown that the percolation threshold of the transport network can serve as a reliability criterion in a stochastic model of lane blockage and can be used to determine the control interval. To calculate the durations of permissive control signals and their sequence for different directions, vehicle queues are considered and the time required for them to reach the network’s percolation threshold is estimated. Subsequently, the lane with the largest queue (i.e., the shortest time to reach blockage) is selected, and a phase is formed for its signal control, as well as for other lanes that can be opened simultaneously. Simulation results show that when dynamic traffic signal control is used and a percolation-dynamic model for balancing road traffic is applied, lane occupancy indicators such as “congestion” decrease by 19–51% compared to a model with statically specified traffic signal phase cycles. The characteristics of flow dynamics obtained in the simulation make it possible to construct an overall control quality function and to assess, from the standpoint of traffic network management organization, an acceptable density of traffic signals and unsignalized intersections. Full article
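The phase-selection rule described above, giving the green phase to the lane with the shortest estimated time to reach the blockage threshold, can be sketched directly; the queue lengths, arrival rates, and threshold below are illustrative, not calibrated SUMO values:

```python
def next_green_lane(queues, arrival_rates, block_threshold):
    """Estimate how long each lane's queue needs to reach the blockage
    (percolation) threshold under its arrival rate, and select the lane
    that would block soonest. Hypothetical helper; units are illustrative."""
    times = {}
    for lane, q in queues.items():
        growth = arrival_rates[lane]  # vehicles/s joining the queue on red
        times[lane] = (block_threshold - q) / growth if growth > 0 else float("inf")
    return min(times, key=times.get), times

lane, t = next_green_lane(
    queues={"N": 18, "E": 5, "S": 10},
    arrival_rates={"N": 0.5, "E": 0.2, "S": 0.4},
    block_threshold=20,
)
assert lane == "N"                 # shortest time to blockage: (20-18)/0.5 = 4 s
assert abs(t["N"] - 4.0) < 1e-9
```

In the full model, compatible lanes that can be opened simultaneously would share the selected lane's phase, and the threshold itself comes from the network's percolation properties.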
36 pages, 2229 KB  
Systematic Review
Digital Competencies for a FinTech-Driven Accounting Profession: A Systematic Literature Review
by Saiphit Satjawisate, Kanitsorn Suriyapaiboonwattana, Alisara Saramolee and Kate Hone
Informatics 2025, 12(4), 121; https://doi.org/10.3390/informatics12040121 - 6 Nov 2025
Viewed by 2899
Abstract
Financial Technology (FinTech) is fundamentally reshaping the accounting profession, accelerating the shift from routine transactional activities to more strategic, data-driven functions. This transformation demands advanced digital competencies, yet the scholarly understanding of these skills remains fragmented. To provide conceptual and analytical clarity, this study defines FinTech as an ecosystem of enabling technologies, including artificial intelligence, data analytics, and blockchain, that collectively drive this professional transition. Addressing the lack of systematic synthesis, the study employs a systematic literature review (SLR) guided by the PRISMA 2020 framework, complemented by bibliometric analysis, to map the intellectual landscape. The review focuses on peer-reviewed journal articles published between January 2020 and June 2025, thereby capturing the accelerated digital transformation of the post-pandemic era. The analysis identifies four dominant thematic clusters: (1) the professional context and digital transformation; (2) the educational response and curriculum development; (3) core competencies and their technological drivers; and (4) ethical judgement and professional responsibilities. Synthesising these themes reveals critical research gaps in faculty readiness, curriculum integration, ethical governance, and the empirical validation of institutional strategies. By offering a structured map of the field, this review contributes actionable insights for educators, professional bodies, and firms, and advances a forward-looking research agenda to align professional readiness with the realities of the FinTech era. Full article
17 pages, 306 KB  
Article
Negotiating Human–AI Complementarity in Geriatric and Palliative Care: A Qualitative Study of Healthcare Practitioners’ Perspectives in Northeast China
by Chenyang Guo, Chao Fang, Wenbo Zhang and John Troyer
Informatics 2025, 12(4), 120; https://doi.org/10.3390/informatics12040120 - 1 Nov 2025
Cited by 1 | Viewed by 1404
Abstract
Artificial intelligence (AI) is becoming increasingly significant in healthcare around the world, especially in China, where rapid population ageing coincides with rising expectations for quality of life and a shrinking care workforce. This study explores Chinese health practitioners’ perspectives on using AI assistants in integrated geriatric and palliative care. Drawing on Actor–Network Theory, care is viewed as a network of interconnected human and non-human actors, including practitioners, technologies, patients and policies. Based in Northeast China, a region with structurally marginalised healthcare infrastructure, this article analyses qualitative interviews with 14 practitioners. Our findings reveal three key themes: (1) tensions between AI’s rule-based logic and practitioners’ human-centred approach; (2) ethical discomfort with AI performing intimate or emotionally sensitive care, especially in end-of-life contexts; (3) structural inequalities, with weak policy and infrastructure limiting effective AI integration. The study highlights that AI offers clearer benefits for routine geriatric care, such as monitoring and basic symptom management, but its utility is far more limited in the complex, relational and ethically sensitive domain of palliative care. Proposing a model of human–AI complementarity, the article argues that technology should support rather than replace the emotional and relational aspects of care and identifies policy considerations for ethically grounded integration in resource-limited contexts. Full article
27 pages, 624 KB  
Article
Explainable AI for Clinical Decision Support Systems: Literature Review, Key Gaps, and Research Synthesis
by Mozhgan Salimparsa, Kamran Sedig, Daniel J. Lizotte, Sheikh S. Abdullah, Niaz Chalabianloo and Flory T. Muanda
Informatics 2025, 12(4), 119; https://doi.org/10.3390/informatics12040119 - 28 Oct 2025
Cited by 4 | Viewed by 6730
Abstract
While Artificial Intelligence (AI) promises significant enhancements for Clinical Decision Support Systems (CDSSs), the opacity of many AI models remains a major barrier to clinical adoption, primarily due to interpretability and trust challenges. Explainable AI (XAI) seeks to bridge this gap by making model reasoning understandable to clinicians, but technical XAI solutions have too often failed to address real-world clinician needs, workflow integration, and usability concerns. This study synthesizes persistent challenges in applying XAI to CDSS—including mismatched explanation methods, suboptimal interface designs, and insufficient evaluation practices—and proposes a structured, user-centered framework to guide more effective and trustworthy XAI-CDSS development. Drawing on a comprehensive literature review, we detail a three-phase framework encompassing user-centered XAI method selection, interface co-design, and iterative evaluation and refinement. We demonstrate its application through a retrospective case study analysis of a published XAI-CDSS for sepsis care. Our synthesis highlights the importance of aligning XAI with clinical workflows, supporting calibrated trust, and deploying robust evaluation methodologies that capture real-world clinician–AI interaction patterns, such as negotiation. The case analysis shows how the framework can systematically identify and address user-centric gaps, leading to better workflow integration, tailored explanations, and more usable interfaces. We conclude that achieving trustworthy and clinically useful XAI-CDSS requires a fundamentally user-centered approach; our framework offers actionable guidance for creating explainable, usable, and trusted AI systems in healthcare. Full article
(This article belongs to the Section Health Informatics)
17 pages, 4146 KB  
Article
Sentiment Analysis of Meme Images Using Deep Neural Network Based on Keypoint Representation
by Endah Asmawati, Ahmad Saikhu and Daniel O. Siahaan
Informatics 2025, 12(4), 118; https://doi.org/10.3390/informatics12040118 - 28 Oct 2025
Viewed by 1603
Abstract
Meme image sentiment analysis is a task of examining public opinion based on meme images posted on social media. In various fields, stakeholders often need to quickly and accurately determine the sentiment of memes from large amounts of available data. Therefore, innovation in image pre-processing is needed to improve performance metrics, especially accuracy, for meme image sentiment classification. This is because sentiment classification using human face datasets yields higher accuracy than using meme images. This research aims to develop a sentiment analysis model for meme images based on key points. The analyzed meme images contain human faces. The facial features extracted using key points are the eyebrows, eyes, and mouth. In the proposed method, key points of facial features are represented in the form of graphs, specifically directed graphs, weighted graphs, or weighted directed graphs. These graph representations of key points are then used to build a sentiment analysis model based on a Deep Neural Network (DNN) with three layers (hidden layer: i = 64, j = 64, k = 90). There are several contributions of this study, namely developing a human facial sentiment detection model using key points, representing key points as various graphs, and constructing a meme dataset with Indonesian text. The proposed model is evaluated using several metrics, namely accuracy, precision, recall, and F1 score. Furthermore, a comparative analysis is conducted to evaluate the performance of the proposed model against existing approaches. The experimental results show that the proposed model, which utilized the directed graph representation of key points, obtained the highest accuracy (83%) and F1 score (81%). Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
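The key-point-to-graph step can be sketched as turning each directed edge between facial landmarks into a small feature vector for the DNN; the landmark coordinates, indices, and edge list below are illustrative, not the paper's exact landmark scheme:

```python
import numpy as np

def keypoints_to_graph_features(keypoints, edges):
    """Each directed edge (i -> j) between facial key points becomes a
    (dx, dy, length) feature; the flattened edge features would feed the
    DNN classifier. Hypothetical representation for illustration."""
    feats = []
    for i, j in edges:  # directed: (i, j) is distinct from (j, i)
        dx, dy = keypoints[j] - keypoints[i]
        feats.extend([dx, dy, np.hypot(dx, dy)])
    return np.array(feats)

# Toy landmarks: left/right mouth corner and upper lip (hypothetical indices)
pts = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 1.5]])
edges = [(0, 1), (0, 2), (1, 2)]
v = keypoints_to_graph_features(pts, edges)
assert v.shape == (9,)            # 3 edges x 3 features per edge
assert abs(v[2] - 4.0) < 1e-9     # edge 0 -> 1 has length 4
```

A weighted-graph variant would carry the length as an edge weight; the directed variant, which the paper found most accurate, preserves edge orientation as above.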
34 pages, 8515 KB  
Article
Hybrid Approach Using Dynamic Mode Decomposition and Wavelet Scattering Transform for EEG-Based Seizure Classification
by Sreevidya C, Neethu Mohan, Sachin Kumar S and Aravind Harikumar
Informatics 2025, 12(4), 117; https://doi.org/10.3390/informatics12040117 - 28 Oct 2025
Viewed by 1368
Abstract
Epilepsy is a brain disorder that affects individuals; hence, preemptive diagnosis is required. Accurate classification of seizures is critical to optimize the treatment of epilepsy. Patients with epilepsy are unable to lead normal lives due to the unpredictable nature of seizures. Thus, developing new methods to help these patients can significantly improve their quality of life and result in huge financial savings for the healthcare industry. This paper presents a hybrid method integrating dynamic mode decomposition (DMD) and wavelet scattering transform (WST) for EEG-based seizure analysis. DMD allows for the breakdown of EEG signals into modes that capture the dynamical structures present in the EEG. Then, WST is applied as it is invariant to time-warping and computes robust hierarchical features at different timescales. The DMD-WST combination provides an in-depth multi-scale analysis of the temporal structures present within the EEG data. This process improves the representation quality for feature extraction, which can convey dynamic modes and multi-scale frequency information for improved classification performance. The proposed hybrid approach is validated with three datasets, namely the CHB-MIT PhysioNet dataset, the Bern Barcelona dataset, and the Khas dataset, which can accurately distinguish the seizure and non-seizure states. The proposed method performed classification using different machine learning and deep learning methods, including support vector machine, random forest, k-nearest neighbours, boosting, and bagging. These models were compared in terms of accuracy, precision, sensitivity, Cohen’s kappa, and Matthews correlation coefficient. The DMD-WST approach achieved a maximum accuracy of 99% and F1 score of 0.99 on the CHB-MIT dataset, and obtained 100% accuracy and F1 score of 1.00 on both the Bern Barcelona and Khas datasets, outperforming existing methods. Full article
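Exact DMD, the first stage of the hybrid pipeline, can be sketched with a plain SVD. The toy check below recovers the eigenvalues of a known rotation from its snapshot matrix rather than using real EEG; in practice the snapshot matrix would be built from channel-time windows of the EEG signal:

```python
import numpy as np

def dmd_eigs(X, r=None):
    """Exact-DMD sketch: from snapshot pairs x_k -> x_{k+1}, build the
    reduced operator A_tilde and return its eigenvalues (mode dynamics).
    Optional rank truncation r. Illustrative, not the paper's code."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    if r is not None:
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)

# Planar rotation: DMD eigenvalues should be exp(+/- i*theta)
theta = 0.3
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
snaps = [x]
for _ in range(30):
    x = A @ x
    snaps.append(x)
X = np.array(snaps).T  # snapshot matrix, shape (2, 31)
lam = np.sort_complex(dmd_eigs(X))
expected = np.sort_complex(np.array([np.exp(1j * theta), np.exp(-1j * theta)]))
assert np.allclose(lam, expected, atol=1e-8)
```

The recovered modes (columns of `U` combined with the eigenvectors of `A_tilde`) are what would then be passed through the wavelet scattering transform in the hybrid method.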
18 pages, 5614 KB  
Article
Computational Analysis of Zingiber officinale Identifies GABAergic Signaling as a Potential Therapeutic Mechanism in Colorectal Cancer
by Suthipong Chujan, Nutsira Vajeethaveesin and Jutamaad Satayavivad
Informatics 2025, 12(4), 116; https://doi.org/10.3390/informatics12040116 - 24 Oct 2025
Viewed by 1632
Abstract
Colorectal cancer cases are on the rise and have become a leading cause of cancer-related deaths. Ginger (Zingiber officinale) is widely used in traditional herbal medicine and has been proposed as a potential treatment for colorectal cancer. This study aimed to explore the network pharmacology and pharmacodynamics of ginger in colorectal cancer treatment. Colorectal cancer patient data from the GEO dataset were analyzed to identify differentially expressed genes (DEGs). Six key components of ginger were selected based on specific criteria, and their target proteins were predicted using the TCMSP database. By overlapping DEGs with predicted targets, 36 candidate drug targets were identified. These targets were analyzed for biological alterations, pathway enrichment, protein–protein interactions, and hub-gene selection, integrating network pharmacology. Molecular docking simulations were conducted to confirm the binding interactions between ginger components and target proteins. The findings showed that GABAergic signaling and apoptosis were the most enriched pathways, suggesting their potential role in colorectal cancer treatment. Docking simulations further revealed that ginger’s active compounds bind to COX2 and ESR1, indicating anti-inflammatory effects and modulation of estrogenic activity. This study provides insight into the systemic mechanisms of ginger in colorectal cancer treatment through an integrated “drug–gene–pathway–disease” network approach. Full article
24 pages, 5556 KB  
Article
Efficient Wearable Sensor-Based Activity Recognition for Human–Robot Collaboration in Agricultural Environments
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
Informatics 2025, 12(4), 115; https://doi.org/10.3390/informatics12040115 - 23 Oct 2025
Viewed by 1243
Abstract
This study focuses on human awareness, a critical component in human–robot interaction, particularly within agricultural environments where interactions are enriched by complex contextual information. The main objective is identifying human activities occurring during collaborative harvesting tasks involving humans and robots. To achieve this, we propose a novel and lightweight deep learning model, named 1D-ResNeXt, designed explicitly for recognizing activities in agriculture-related human–robot collaboration. The model is built as an end-to-end architecture incorporating feature fusion and a multi-kernel convolutional block strategy. It utilizes residual connections and a split–transform–merge mechanism to mitigate performance degradation and reduce model complexity by limiting the number of trainable parameters. Sensor data were collected from twenty individuals with five wearable devices placed on different body parts. Each sensor was embedded with tri-axial accelerometers, gyroscopes, and magnetometers. Under real field conditions, the participants performed several sub-tasks commonly associated with agricultural labor, such as lifting and carrying loads. Before classification, the raw sensor signals were pre-processed to eliminate noise. The cleaned time-series data were then input into the proposed deep learning network for sequential pattern recognition. Experimental results showed that the chest-mounted sensor achieved the highest F1-score of 99.86%, outperforming other sensor placements and combinations. An analysis of temporal window sizes (0.5, 1.0, 1.5, and 2.0 s) demonstrated that the 0.5 s window provided the best recognition performance, indicating that key activity features in agriculture can be captured over short intervals. Moreover, a comprehensive evaluation of sensor modalities revealed that multimodal fusion of accelerometer, gyroscope, and magnetometer data yielded the best accuracy at 99.92%. 
The combination of accelerometer and gyroscope data offered an optimal compromise, achieving 99.49% accuracy while maintaining lower system complexity. These findings highlight the importance of strategic sensor placement and data fusion in enhancing activity recognition performance while reducing the need for extensive data and computational resources. This work contributes to developing intelligent, efficient, and adaptive collaborative systems, offering promising applications in agriculture and beyond, with improved safety, cost-efficiency, and real-time operational capability. Full article
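The window-size analysis above rests on segmenting the continuous sensor streams into fixed-length windows before classification. The paper does not publish its segmentation code or sampling rate; the sketch below illustrates the general technique with a hypothetical 50 Hz rate, 50% overlap, and the 0.5 s window the study found best.

```python
import numpy as np

def segment_windows(signal, fs, window_s, overlap=0.5):
    """Split a (samples, channels) sensor stream into fixed-length windows.

    fs: sampling rate in Hz (hypothetical here; not stated in the abstract).
    window_s: window length in seconds (the study evaluates 0.5-2.0 s).
    overlap: fractional overlap between consecutive windows.
    """
    win = int(fs * window_s)
    step = max(1, int(win * (1 - overlap)))
    starts = range(0, signal.shape[0] - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

# Example: 10 s of 9-channel data (acc + gyro + mag) at a hypothetical 50 Hz
stream = np.random.randn(500, 9)
windows = segment_windows(stream, fs=50, window_s=0.5)
print(windows.shape)  # (40, 25, 9): 25-sample windows with 50% overlap
```

Each resulting window is one input sequence for the recognition network; shorter windows trade context for lower latency, which matters for real-time collaborative robots.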

18 pages, 1914 KB  
Article
Leveraging Transformer with Self-Attention for Multi-Label Emotion Classification in Crisis Tweets
by Patricia Anthony and Jing Zhou
Informatics 2025, 12(4), 114; https://doi.org/10.3390/informatics12040114 - 22 Oct 2025
Viewed by 1805
Abstract
Social media platforms have become a widely used medium for individuals to express complex and multifaceted emotions. Traditional single-label emotion classification methods fall short of accurately capturing the simultaneous presence of multiple emotions within these texts. To address this limitation, we propose a classification model that enhances the pre-trained Cardiff NLP transformer by integrating additional self-attention layers. Experimental results show that our approach achieves a micro-F1 score of 0.7208, a macro-F1 score of 0.6192, and an average Jaccard index of 0.6066, an overall improvement of approximately 3.00% over the baseline. We apply this model to a real-world dataset of tweets related to the 2011 Christchurch earthquakes as a case study to demonstrate its ability to capture multi-category emotional expressions and detect co-occurring emotions that single-label approaches would miss. Our analysis revealed distinct emotional patterns aligned with key seismic events, including overlapping positive and negative emotions, as well as the temporal dynamics of emotional response. This work contributes a robust method for fine-grained emotion analysis that can aid disaster response, mental health monitoring, and social research. Full article
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
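The three metrics reported above are standard for multi-label evaluation but are easy to conflate. A minimal sketch of how they are computed from 0/1 label matrices (not the authors' code; conventions for empty unions and zero denominators are assumptions noted in the comments):

```python
import numpy as np

def multilabel_scores(y_true, y_pred):
    """Micro-F1, macro-F1, and sample-averaged Jaccard for 0/1 matrices
    of shape (n_samples, n_labels)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    tp = (y_true * y_pred).sum(axis=0)          # per-label true positives
    fp = ((1 - y_true) * y_pred).sum(axis=0)
    fn = (y_true * (1 - y_pred)).sum(axis=0)

    # Micro-F1 pools counts across labels; macro-F1 averages per-label F1.
    micro_f1 = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())
    denom = 2 * tp + fp + fn
    per_label_f1 = np.where(denom > 0, 2 * tp / np.where(denom > 0, denom, 1), 0.0)
    macro_f1 = per_label_f1.mean()

    # Jaccard: |intersection| / |union| per sample, averaged.
    # Convention here: an empty union scores 1.0 (both sets empty agree).
    inter = (y_true * y_pred).sum(axis=1)
    union = np.maximum(y_true, y_pred).sum(axis=1)
    jaccard = np.where(union > 0, inter / np.where(union > 0, union, 1), 1.0).mean()
    return micro_f1, macro_f1, jaccard

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])
print(multilabel_scores(y_true, y_pred))  # (0.75, 0.555..., 0.666...)
```

Micro-F1 weights frequent emotions more heavily, while macro-F1 exposes weak performance on rare ones, which is why papers such as this report both alongside the Jaccard index.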

22 pages, 956 KB  
Systematic Review
Tailoring Treatment in the Age of AI: A Systematic Review of Large Language Models in Personalized Healthcare
by Giordano de Pinho Souza, Glaucia Melo and Daniel Schneider
Informatics 2025, 12(4), 113; https://doi.org/10.3390/informatics12040113 - 21 Oct 2025
Viewed by 1870
Abstract
Large Language Models (LLMs) are increasingly proposed to personalize healthcare delivery, yet their real-world readiness remains uncertain. We conducted a systematic literature review to assess how LLM-based systems are designed and used to enhance patient engagement and personalization, while identifying the open challenges these tools pose. Four digital libraries (Scopus, IEEE Xplore, ACM, and Nature) were searched, yielding 3787 studies; 16 met the inclusion criteria. Most of the included studies were published in 2024 and span a range of motivations, architectures, limitations, and privacy-preserving approaches. While LLMs show potential in automating patient data collection, recommendation and therapy generation, and continuous conversational support, their clinical reliability is limited. Most evaluations use synthetic or retrospective data, with only a few employing user studies or scalable simulation environments. This review highlights the tension between innovation and clinical applicability, emphasizing the need for robust evaluation protocols and human-in-the-loop systems to guide the safe and equitable deployment of LLMs in healthcare. Full article
(This article belongs to the Section Health Informatics)

19 pages, 4399 KB  
Article
Privacy-Preserving Synthetic Mammograms: A Generative Model Approach to Privacy-Preserving Breast Imaging Datasets
by Damir Shodiev, Egor Ushakov, Arsenii Litvinov and Yury Markin
Informatics 2025, 12(4), 112; https://doi.org/10.3390/informatics12040112 - 18 Oct 2025
Viewed by 1448
Abstract
Background: Significant progress has been made in the field of machine learning, enabling the development of methods for automatic interpretation of medical images that provide high-quality diagnostics. However, most of these methods require access to confidential data, making them difficult to apply under strict privacy requirements. Existing privacy-preserving approaches, such as federated learning and dataset distillation, have limitations related to data access and visual interpretability, among others. Methods: This study explores the use of generative models to create synthetic medical data that preserve the statistical properties of the original data while ensuring privacy. The research is carried out on the VinDr-Mammo dataset of digital mammography images. A conditional generative method using Latent Diffusion Models (LDMs) is proposed, with conditioning on diagnostic labels and lesion information. Diagnostic utility and privacy robustness are assessed via cancer classification tasks and re-identification tasks using Siamese neural networks and membership inference. Results: The generated synthetic data achieved a Fréchet Inception Distance (FID) of 5.8, preserving diagnostic features. A model trained solely on synthetic data achieved performance comparable to one trained on real data (ROC-AUC: 0.77 vs. 0.82). Visual assessments showed that synthetic images are indistinguishable from real ones. Privacy evaluations demonstrated a low re-identification risk (e.g., mAP@R = 0.0051 on the test set), confirming the effectiveness of the privacy-preserving approach. Conclusions: The study demonstrates that privacy-preserving generative models can produce synthetic medical images of sufficient quality for diagnostic tasks while significantly reducing the risk of patient re-identification. This approach enables secure data sharing and model training in privacy-sensitive domains such as medical imaging. Full article
(This article belongs to the Special Issue Health Data Management in the Age of AI)
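The FID of 5.8 reported above is the Fréchet distance between two Gaussians fitted to image features (in practice, Inception-network activations with full covariance matrices and a matrix square root). A dependency-free sketch of the underlying distance, simplified here to diagonal covariances, where the trace term reduces to a sum over per-dimension variances:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Frechet distance between Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum((sqrt(var1) - sqrt(var2))^2).

    The full FID uses complete covariance matrices and a matrix square
    root of their product; the diagonal case keeps this sketch simple."""
    mu1, var1 = np.asarray(mu1, float), np.asarray(var1, float)
    mu2, var2 = np.asarray(mu2, float), np.asarray(var2, float)
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

# Identical feature distributions score 0; a shifted mean adds its squared norm.
print(fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]))  # 0.0
print(fid_diagonal([0, 0], [1, 1], [3, 4], [1, 1]))  # 25.0
```

Lower is better: a small FID means the synthetic mammograms' feature statistics closely match the real data's, which is what lets the downstream classifier train on synthetic images alone.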
