Informatics, Volume 12, Issue 3 (September 2025) – 21 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF, click the "PDF Full-text" link and open it with the free Adobe Reader.
24 pages, 756 KiB  
Article
Designs and Interactions for Near-Field Augmented Reality: A Scoping Review
by Jacob Hobbs and Christopher Bull
Informatics 2025, 12(3), 77; https://doi.org/10.3390/informatics12030077 - 1 Aug 2025
Abstract
Augmented reality (AR), which overlays digital content within the user’s view, is gaining traction across domains such as education, healthcare, manufacturing, and entertainment. The hardware constraints of commercially available HMDs are well acknowledged, but little work addresses what design or interaction techniques developers can employ or build into experiences to work around these limitations. We conducted a scoping literature review with the aim of mapping the current landscape of design principles and interaction techniques employed in near-field AR environments. We searched for literature published between 2016 and 2025 across major databases, including the ACM Digital Library and IEEE Xplore. Studies were included if they explicitly employed design or interaction techniques with a commercially available HMD for near-field AR experiences. A total of 780 articles were returned by the search, but only 7 met the inclusion criteria. Our review identifies key themes around how existing techniques are employed, surfaces two competing goals of AR experiences, and highlights the importance of embodiment in interaction efficacy. We present directions for future research based on and justified by our review. The findings offer a comprehensive overview for researchers, designers, and developers aiming to create more intuitive, effective, and context-aware near-field AR experiences. This review also provides a foundation for future research by outlining underexplored areas and recommending research directions for near-field AR interaction design. Full article

23 pages, 1192 KiB  
Article
Multi-Model Dialectical Evaluation of LLM Reasoning Chains: A Structured Framework with Dual Scoring Agents
by Catalin Anghel, Andreea Alexandra Anghel, Emilia Pecheanu, Ioan Susnea, Adina Cocu and Adrian Istrate
Informatics 2025, 12(3), 76; https://doi.org/10.3390/informatics12030076 - 1 Aug 2025
Abstract
(1) Background and objectives: Large language models (LLMs) such as GPT, Mistral, and LLaMA exhibit strong capabilities in text generation, yet assessing the quality of their reasoning—particularly in open-ended and argumentative contexts—remains a persistent challenge. This study introduces Dialectical Agent, an internally developed modular framework designed to evaluate reasoning through a structured three-stage process: opinion, counterargument, and synthesis. The framework enables transparent and comparative analysis of how different LLMs handle dialectical reasoning. (2) Methods: Each stage is executed by a single model, and final syntheses are scored via two independent LLM evaluators (LLaMA 3.1 and GPT-4o) based on a rubric with four dimensions: clarity, coherence, originality, and dialecticality. In parallel, a rule-based semantic analyzer detects rhetorical anomalies and ethical values. All outputs and metadata are stored in a Neo4j graph database for structured exploration. (3) Results: The system was applied to four open-weight models (Gemma 7B, Mistral 7B, Dolphin-Mistral, Zephyr 7B) across ten open-ended prompts on ethical, political, and technological topics. The results show consistent stylistic and semantic variation across models, with moderate inter-rater agreement. Semantic diagnostics revealed differences in value expression and rhetorical flaws not captured by rubric scores. (4) Originality: The framework is, to our knowledge, the first to integrate multi-stage reasoning, rubric-based and semantic evaluation, and graph-based storage into a single system. It enables replicable, interpretable, and multidimensional assessment of generative reasoning—supporting researchers, developers, and educators working with LLMs in high-stakes contexts. Full article
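The abstract above describes two independent LLM evaluators scoring each synthesis on a four-dimension rubric. The paper's actual aggregation scheme is not given in the abstract, so the following is a hypothetical sketch — the function name, the averaging rule, and the score values are all invented for illustration:

```python
from statistics import mean

RUBRIC = ("clarity", "coherence", "originality", "dialecticality")

def fuse_scores(scorer_a: dict, scorer_b: dict) -> dict:
    """Average two independent evaluators' rubric scores per dimension,
    and report the largest per-dimension disagreement as a crude
    reliability signal (a stand-in for formal inter-rater agreement)."""
    fused = {dim: mean([scorer_a[dim], scorer_b[dim]]) for dim in RUBRIC}
    fused["max_disagreement"] = max(abs(scorer_a[d] - scorer_b[d]) for d in RUBRIC)
    return fused

# Hypothetical 1-5 rubric scores from the two LLM judges for one synthesis.
llama = {"clarity": 4, "coherence": 4, "originality": 3, "dialecticality": 5}
gpt4o = {"clarity": 5, "coherence": 4, "originality": 2, "dialecticality": 4}
print(fuse_scores(llama, gpt4o))
```

A large `max_disagreement` would flag syntheses where the two judges diverge, which is where the abstract's "moderate inter-rater agreement" matters most.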

26 pages, 5535 KiB  
Article
Research on Power Cable Intrusion Identification Using a GRT-Transformer-Based Distributed Acoustic Sensing (DAS) System
by Xiaoli Huang, Xingcheng Wang, Han Qin and Zhaoliang Zhou
Informatics 2025, 12(3), 75; https://doi.org/10.3390/informatics12030075 - 21 Jul 2025
Abstract
To address the high false alarm rate of intrusion detection systems based on distributed acoustic sensing (DAS) for power cables in complex underground environments, an innovative GRT-Transformer multimodal deep learning model is proposed. The core of this model lies in its distinctive three-branch parallel collaborative architecture: two branches employ Gramian Angular Summation Field (GASF) and Recursive Pattern (RP) algorithms to convert one-dimensional intrusion waveforms into two-dimensional images, thereby capturing rich spatial patterns and dynamic characteristics, while the third branch utilizes a Gated Recurrent Unit (GRU) algorithm to directly focus on the temporal evolution features of the waveform; additionally, a Transformer component is integrated to capture the overall trend and global dependencies of the signals. Ultimately, the terminal employs a Bidirectional Long Short-Term Memory (BiLSTM) network to perform a deep fusion of the multidimensional features extracted from the three branches, enabling a comprehensive understanding of the bidirectional temporal dependencies within the data. Experimental validation demonstrates that the GRT-Transformer achieves an average recognition accuracy of 97.3% across three typical intrusion events—illegal tapping, mechanical operations, and vehicle passage—significantly reducing false alarms, surpassing traditional methods, and exhibiting strong practical potential in complex real-world scenarios. Full article
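The GASF branch described above maps a one-dimensional waveform to a two-dimensional image via a polar-angle encoding. A minimal NumPy sketch of the standard GASF transform follows; the paper's exact preprocessing (windowing, sampling, scaling) is not specified in the abstract, so the stand-in sine waveform is illustrative only:

```python
import numpy as np

def gasf(x: np.ndarray) -> np.ndarray:
    """Gramian Angular Summation Field of a 1-D signal.

    Rescales the signal to [-1, 1], maps each sample to an angle
    phi = arccos(x), and forms the matrix cos(phi_i + phi_j), which
    equals x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2)."""
    x = np.asarray(x, dtype=float)
    # Min-max rescale to [-1, 1] so arccos is well defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    comp = np.sqrt(np.clip(1 - x**2, 0.0, 1.0))  # sin(arccos(x))
    return np.outer(x, x) - np.outer(comp, comp)

wave = np.sin(np.linspace(0, 4 * np.pi, 64))  # stand-in for a DAS waveform
img = gasf(wave)
print(img.shape)  # (64, 64) image fed to a 2-D CNN branch
```

The resulting symmetric matrix preserves temporal correlations between every pair of time steps, which is what lets an image model pick up "spatial patterns" in a 1-D signal.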

15 pages, 2948 KiB  
Review
A Comprehensive Review of ChatGPT in Teaching and Learning Within Higher Education
by Samkelisiwe Purity Phokoye, Siphokazi Dlamini, Peggy Pinky Mthalane, Mthokozisi Luthuli and Smangele Pretty Moyane
Informatics 2025, 12(3), 74; https://doi.org/10.3390/informatics12030074 - 21 Jul 2025
Abstract
Artificial intelligence (AI) has become an integral component of various sectors, including higher education. AI, particularly in the form of advanced chatbots like ChatGPT, is increasingly recognized as a valuable tool for engagement in higher education institutions (HEIs). This growing trend highlights the potential of AI to enhance student engagement and subsequently improve academic performance. Given this development, it is crucial for HEIs to delve deeper into the potential integration of AI-driven chatbots into educational practices. The aim of this study was to conduct a comprehensive review of the use of ChatGPT in teaching and learning within higher education. To offer a comprehensive viewpoint, it had two primary objectives: to identify the key factors influencing the adoption and acceptance of ChatGPT in higher education, and to investigate the roles of institutional policies and support systems in the acceptance of ChatGPT in higher education. A bibliometric analysis methodology was employed in this study, and a PRISMA diagram was used to explain the papers included in the analysis. The findings reveal the increasing adoption of ChatGPT within the higher education sector while also identifying the challenges faced during its implementation, ranging from technical issues to educational adaptations. Moreover, this review provides guidelines for various stakeholders to effectively integrate ChatGPT into higher education. Full article
Show Figures

Figure 1

26 pages, 2596 KiB  
Article
DFPoLD: A Hard Disk Failure Prediction on Low-Quality Datasets
by Shuting Wei, Xiaoyu Lu, Hongzhang Yang, Chenfeng Tu, Jiangpu Guo, Hailong Sun and Yu Feng
Informatics 2025, 12(3), 73; https://doi.org/10.3390/informatics12030073 - 16 Jul 2025
Abstract
Hard disk failure prediction is an important proactive maintenance method for storage systems. Recent years have seen significant progress in hard disk failure prediction using high-quality SMART datasets. However, in industrial applications, data loss often occurs during SMART data collection, transmission, and storage. Existing machine learning-based hard disk failure prediction models perform poorly on low-quality datasets. Therefore, this paper proposes a hard disk failure prediction technique based on low-quality datasets. Firstly, based on the original Backblaze dataset, we construct a low-quality dataset, Backblaze-, by simulating sector damage in actual scenarios and deleting 10% to 99% of the data. Time series features such as the Absolute Sum of First Difference (ASFD) are introduced to amplify the differences between positive and negative samples and reduce the sensitivity of the model to SMART data loss. Considering the impact of different quality datasets on time window selection, we propose a time window selection formula that selects different time windows based on the proportion of data loss. We find that the poorer the dataset quality, the longer the time window should be. The proposed model achieves a True Positive Rate (TPR) of 99.46%, an AUC of 0.9971, and an F1 score of 0.9871, with a False Positive Rate (FPR) under 0.04%, even with 80% data loss, maintaining performance close to that on the original dataset. Full article
(This article belongs to the Section Big Data Mining and Analytics)
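The ASFD feature mentioned in the abstract is commonly defined as the sum of absolute first differences of a time series. A small sketch, assuming that standard definition (the paper's exact formula may differ) and hypothetical SMART attribute values:

```python
import numpy as np

def asfd(series: np.ndarray) -> float:
    """Absolute Sum of First Differences: sum(|x[t+1] - x[t]|).

    A volatile SMART attribute (e.g. a reallocated-sector count climbing
    on a failing disk) yields a large ASFD, while a healthy disk's flat
    attribute yields a value near zero -- even when samples are missing."""
    series = np.asarray(series, dtype=float)
    series = series[~np.isnan(series)]  # drop lost samples before differencing
    return float(np.abs(np.diff(series)).sum())

healthy = np.array([5, 5, np.nan, 5, 5])    # flat attribute, one lost sample
failing = np.array([5, 9, np.nan, 20, 34])  # climbing attribute, one lost sample
print(asfd(healthy), asfd(failing))  # 0.0 29.0
```

Because the feature aggregates change over the whole window rather than depending on any single reading, it degrades gracefully when individual SMART samples are lost, which is the property the abstract relies on.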

24 pages, 1618 KiB  
Review
Design Requirements of Breast Cancer Symptom-Management Apps
by Xinyi Huang, Amjad Fayoumi, Emily Winter and Anas Najdawi
Informatics 2025, 12(3), 72; https://doi.org/10.3390/informatics12030072 - 15 Jul 2025
Abstract
Many breast cancer patients follow a self-managed treatment pathway, which may lead to gaps in the data available to healthcare professionals, such as information about patients’ everyday symptoms at home. Mobile apps have the potential to bridge this information gap, leading to more effective treatments and interventions, as well as helping breast cancer patients monitor and manage their symptoms. In this paper, we elicit design requirements for breast cancer symptom-management mobile apps using a systematic review following the PRISMA framework. We then evaluate existing cancer symptom-management apps found on the Apple store according to the extent to which they meet these requirements. We find that, whilst some requirements are well supported (such as functionality to record multiple symptoms and provision of information), others are currently not being met, particularly interoperability, functionality related to responses from healthcare professionals, and personalisation. Much work is needed for cancer patients and healthcare professionals to experience the benefits of digital health innovation. The article demonstrates a formal requirements model, in which requirements are categorised as functional and non-functional, and presents a proposal for conceptual design for future mobile apps. Full article
(This article belongs to the Section Health Informatics)

20 pages, 1550 KiB  
Article
Strategy for Precopy Live Migration and VM Placement in Data Centers Based on Hybrid Machine Learning
by Taufik Hidayat, Kalamullah Ramli and Ruki Harwahyu
Informatics 2025, 12(3), 71; https://doi.org/10.3390/informatics12030071 - 15 Jul 2025
Abstract
Data center virtualization has grown rapidly alongside the expansion of application-based services but continues to face significant challenges, such as downtime caused by suboptimal hardware selection, load balancing, power management, incident response, and resource allocation. To address these challenges, this study proposes a combined machine learning method that uses a Markov decision process (MDP) to choose which VMs to migrate, a random forest (RF) classifier to rank the VMs according to load, and NSGA-III to achieve multiple optimization objectives, such as reducing downtime, improving SLA compliance, and increasing energy efficiency. For this model, the GWA-Bitbrains dataset was used, on which it achieved a classification accuracy of 98.77%, a MAPE of 7.69% in predicting migration duration, and an energy efficiency improvement of 90.80%. The results of real-world experiments show that the hybrid machine learning strategy could significantly reduce the data center workload, reduce the total migration time, and decrease the downtime. These results affirm the effectiveness of integrating the MDP, the RF method, and NSGA-III to provide holistic solutions in VM placement strategies for large-scale data centers. Full article
(This article belongs to the Section Machine Learning)

40 pages, 759 KiB  
Systematic Review
Decoding Trust in Artificial Intelligence: A Systematic Review of Quantitative Measures and Related Variables
by Letizia Aquilino, Cinzia Di Dio, Federico Manzi, Davide Massaro, Piercosma Bisconti and Antonella Marchetti
Informatics 2025, 12(3), 70; https://doi.org/10.3390/informatics12030070 - 14 Jul 2025
Abstract
As artificial intelligence (AI) becomes ubiquitous across various fields, understanding people’s acceptance of and trust in AI systems becomes essential. This review aims to identify quantitative measures used to assess trust in AI and the elements studied alongside them. Following the PRISMA guidelines, three databases were consulted, selecting articles published before December 2023. Ultimately, 45 articles out of 1283 were selected. Articles were included if they were peer-reviewed journal publications in English reporting empirical studies measuring trust in AI systems with multi-item questionnaires. Studies were analyzed through the lenses of cognitive and affective trust. We investigated trust definitions, questionnaires employed, types of AI systems, and trust-related constructs. The results reveal diverse trust conceptualizations and measurements. In addition, the studies covered a wide range of AI system types, including virtual assistants, content detection tools, chatbots, medical AI, robots, and educational AI. Overall, the studies show compatibility of cognitive or affective trust focus across theorization, items, experimental stimuli, and the level of anthropomorphism of the systems. The review underlines the need to adapt the measurement of trust to the specific characteristics of human–AI interaction, accounting for both the cognitive and affective sides. Trust definitions and measurements could also be chosen depending on the level of anthropomorphism of the systems and the context of application. Full article

21 pages, 5069 KiB  
Article
A Patent-Based Technology Roadmap for AI-Powered Manipulators: An Evolutionary Analysis of the B25J Classification
by Yujia Zhai, Zehao Liu, Rui Zhao, Xin Zhang and Gengfeng Zheng
Informatics 2025, 12(3), 69; https://doi.org/10.3390/informatics12030069 - 11 Jul 2025
Abstract
Technology roadmapping is conducted by systematic mapping of technological evolution through patent analytics to inform innovation strategies. This study proposes an integrated framework combining hierarchical Latent Dirichlet Allocation (LDA) modeling with multiphase technology lifecycle theory, analyzing 113,449 Derwent patent abstracts (2008–2022) across three dimensions: technological novelty, functional applications, and competitive advantages. By segmenting innovation stages via logistic growth curve modeling and optimizing topic extraction through perplexity validation, we constructed dynamic technology roadmaps to decode latent evolutionary patterns in AI-powered programmable manipulators (B25J classification) within an innovation trajectory. Key findings revealed: (1) a progressive transition from electromechanical actuation to sensor-integrated architectures, evidenced by 58% compound annual growth in embedded sensing patents; (2) application expansion from industrial automation (72% early stage patents) to precision medical operations, with surgical robotics growing 34% annually since 2018; and (3) continuous advancements in adaptive control algorithms, showing 2.7× growth in reinforcement learning implementations. The methodology integrates quantitative topic modeling (via pyLDAvis visualization and cosine similarity analysis) with qualitative lifecycle theory, addressing the limitations of conventional technology analysis methods by reconciling semantic granularity with temporal dynamics. The results identify core innovation trajectories—precision control, intelligent detection, and medical robotics—while highlighting emerging opportunities in autonomous navigation and human–robot collaboration. This framework provides empirically grounded strategic intelligence for R&D prioritization, cross-industry investment, and policy formulation in Industry 4.0. Full article
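The abstract segments innovation stages "via logistic growth curve modeling". A sketch of that idea on synthetic cumulative patent counts follows; the data are invented, and the 10%/90%-of-saturation cut points used below are a common lifecycle convention, not necessarily the authors' exact rule:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: cumulative patents saturating at capacity K."""
    return K / (1 + np.exp(-r * (t - t0)))

# Synthetic cumulative patent counts, 2008-2022 (stand-in for Derwent data).
years = np.arange(2008, 2023)
counts = logistic(years, K=100_000, r=0.6, t0=2016)
counts = counts + np.random.default_rng(0).normal(0, 500, years.size)

(K, r, t0), _ = curve_fit(logistic, years, counts,
                          p0=[counts.max(), 0.5, years.mean()])

# Conventional lifecycle cut points: emergence ends at 10% of K, growth
# ends at 90% of K; both have the closed form t0 -/+ ln(9)/r.
t_growth_start = t0 - np.log(9) / r
t_maturity_start = t0 + np.log(9) / r
print(round(t_growth_start, 1), round(t_maturity_start, 1))
```

Fitting the S-curve once and reading off the crossing years is what lets a roadmap assign each patent topic to an emergence, growth, or maturity phase.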

32 pages, 4717 KiB  
Article
MOGAD: Integrated Multi-Omics and Graph Attention for the Discovery of Alzheimer’s Disease’s Biomarkers
by Zhizhong Zhang, Yuqi Chen, Changliang Wang, Maoni Guo, Lu Cai, Jian He, Yanchun Liang, Garry Wong and Liang Chen
Informatics 2025, 12(3), 68; https://doi.org/10.3390/informatics12030068 - 9 Jul 2025
Abstract
The selection of appropriate biomarkers in clinical practice aids in the early detection, treatment, and prevention of disease while also assisting in the development of targeted therapeutics. Recently, multi-omics data generated from advanced technology platforms has become available for disease studies. Therefore, the integration of this data with associated clinical data provides a unique opportunity to gain a deeper understanding of disease. However, the effective integration of large-scale multi-omics data remains a major challenge. To address this, we propose a novel deep learning model—the Multi-Omics Graph Attention biomarker Discovery network (MOGAD). MOGAD aims to efficiently classify diseases and discover biomarkers by integrating various omics data such as DNA methylation, gene expression, and miRNA expression. The model consists of three main modules: Multi-head GAT network (MGAT), Multi-Graph Attention Fusion (MGAF), and Attention Fusion (AF), which work together to dynamically model the complex relationships among different omics layers. We incorporate clinical data (e.g., APOE genotype), which enables a systematic investigation of the influence of non-omics factors on disease classification. The experimental results demonstrate that MOGAD achieves a superior performance compared to existing single-omics and multi-omics integration methods in classification tasks for Alzheimer’s disease (AD). In the comparative experiment on the ROSMAP dataset, our model achieved the highest ACC (0.773), F1-score (0.787), and MCC (0.551). The biomarkers identified by MOGAD show strong associations with the underlying pathogenesis of AD. We also apply a Hi-C dataset to validate the biological rationality of the identified biomarkers. Furthermore, the incorporation of clinical data enhances the model’s robustness and uncovers synergistic interactions between omics and non-omics features. Thus, our deep learning model is able to successfully integrate multi-omics data to efficiently classify disease and discover novel biomarkers. Full article
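MOGAD builds on graph attention. The following is a generic single-head graph-attention pass in NumPy — an illustrative sketch of the underlying GAT mechanism (score every edge, softmax over each node's neighbours, aggregate), not MOGAD's multi-head MGAT/MGAF modules; the toy graph, shapes, and weights are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

def gat_layer(H, A, W, a, slope=0.2):
    """One single-head graph-attention pass: project node features,
    score every (i, j) pair, mask non-edges, softmax per node,
    then aggregate neighbour features by attention weight."""
    Z = H @ W                                    # projected node features
    n = Z.shape[0]
    src = np.repeat(Z, n, axis=0).reshape(n, n, -1)   # [i, j] = z_i
    dst = np.tile(Z, (n, 1)).reshape(n, n, -1)        # [i, j] = z_j
    e = np.concatenate([src, dst], axis=-1) @ a       # raw attention logits
    e = np.where(e > 0, e, slope * e)                 # LeakyReLU
    e = np.where(A > 0, e, -np.inf)                   # mask non-edges
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)        # softmax over neighbours
    return att @ Z

# 4 nodes (e.g. genes) with 3 omics-derived features each;
# A encodes a similarity graph with self-loops.
H = rng.normal(size=(4, 3))
A = np.array([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
out = gat_layer(H, A, W=rng.normal(size=(3, 8)), a=rng.normal(size=16))
print(out.shape)  # (4, 8)
```

The learned attention weights are also what makes such models interpretable for biomarker discovery: highly attended nodes can be read off as candidate features.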

24 pages, 1314 KiB  
Article
Balancing Accuracy and Efficiency in Vehicular Network Firmware Vulnerability Detection: A Fuzzy Matching Framework with Standardized Data Serialization
by Xiyu Fang, Kexun He, Yue Wu, Rui Chen and Jing Zhao
Informatics 2025, 12(3), 67; https://doi.org/10.3390/informatics12030067 - 9 Jul 2025
Abstract
Firmware vulnerabilities in embedded devices have caused serious security incidents, necessitating similarity analysis of binary program instruction embeddings to identify vulnerabilities. However, existing instruction embedding methods neglect program execution semantics, resulting in accuracy limitations. Furthermore, current embedding approaches utilize independent computation across models, where the lack of standardized interaction information between models makes it difficult for embedding models to efficiently detect firmware vulnerabilities. To address these challenges, this paper proposes a firmware vulnerability detection scheme based on statistical inference and code similarity fuzzy matching analysis for resource-constrained vehicular network environments, helping to balance both accuracy and efficiency. First, through dynamic programming and neighborhood search techniques, binary code is systematically partitioned into normalized segment collections according to specific rules. The binary code is then analyzed in segments to construct semantic equivalence mappings, thereby extracting similarity metrics for function execution semantics. Subsequently, Google Protocol Buffers (ProtoBuf) is introduced as a serialization format for inter-model data transmission, serving as a “translation layer” and “bridging technology” within the firmware vulnerability detection framework. Additionally, a ProtoBuf-based certificate authentication scheme is proposed to enhance vehicular network communication reliability, improve data serialization efficiency, and increase the efficiency and accuracy of the detection model. Finally, a vehicular network simulation environment is established through secondary development on the NS-3 network simulator, and the functionality and performance of this architecture are thoroughly tested. Results demonstrate that the algorithm resists common security threats while minimizing performance impact. Experimental results show that FirmPB delivers superior accuracy with 0.044 s inference time and 0.932 AUC, outperforming the current SOTA in detection performance. Full article

24 pages, 2710 KiB  
Article
From Innovation to Regulation: Insights from a Bibliometric Analysis of Research Patterns in Medical Data Governance
by Iulian V. Nastasa, Andrada-Raluca Artamonov, Ștefan Sebastian Busnatu, Dana Galieta Mincă and Octavian Andronic
Informatics 2025, 12(3), 66; https://doi.org/10.3390/informatics12030066 - 8 Jul 2025
Abstract
This study presents a comprehensive bibliometric analysis of the evolving landscape of data protection in medicine, examining research trends, thematic developments, and scholarly contributions from the 1960s to 2024. By analyzing 2159 publications indexed in the Scopus database using the Bibliometrix R package (v.4.3.2), based on R (v.4.4.3), this paper maps key research areas, leading journals, and international collaboration patterns. Our findings reveal a significant shift in focus over time, from early concerns centered on data privacy and management to contemporary themes involving advanced technologies such as artificial intelligence, blockchain, and big data analytics. This transition reflects the increasing complexity of balancing data accessibility with security, ethical, and regulatory requirements in healthcare. This analysis also highlights persistent challenges, including fragmented research efforts, disparities in global contributions, and the ongoing need for interdisciplinary collaboration. These insights offer a valuable foundation for future investigations into medical data governance and emphasize the importance of ethical and responsible innovation in an increasingly digital healthcare environment. Full article

25 pages, 4911 KiB  
Article
DA OMS-CNN: Dual-Attention OMS-CNN with 3D Swin Transformer for Early-Stage Lung Cancer Detection
by Yadollah Zamanidoost, Matis Rivron, Tarek Ould-Bachir and Sylvain Martel
Informatics 2025, 12(3), 65; https://doi.org/10.3390/informatics12030065 - 7 Jul 2025
Abstract
Lung cancer is one of the most prevalent and deadly forms of cancer, accounting for a significant portion of cancer-related deaths worldwide. It typically originates in the lung tissues, particularly in the cells lining the airways, and early detection is crucial for improving patient survival rates. Computed tomography (CT) imaging has become a standard tool for lung cancer screening, providing detailed insights into lung structures and facilitating the early identification of cancerous nodules. In this study, an improved Faster R-CNN model is employed to detect early-stage lung cancer. To enhance the performance of Faster R-CNN, a novel dual-attention optimized multi-scale CNN (DA OMS-CNN) architecture is used to extract representative features of nodules at different sizes. Additionally, dual-attention RoIPooling (DA-RoIPooling) is applied in the classification stage to increase the model’s sensitivity. In the false-positive reduction stage, a combination of multiple 3D shifted window transformers (3D SwinT) is designed to reduce false-positive nodules. The proposed model was evaluated on the LUNA16 and PN9 datasets. The results demonstrate that integrating DA OMS-CNN, DA-RoIPooling, and 3D SwinT into the improved Faster R-CNN framework achieves a sensitivity of 96.93% and a CPM score of 0.911. Comprehensive experiments demonstrate that the proposed approach not only increases the sensitivity of lung cancer detection but also significantly reduces the number of false-positive nodules. Therefore, the proposed method can serve as a valuable reference for clinical applications. Full article

16 pages, 1535 KiB  
Article
Clinical Text Classification for Tuberculosis Diagnosis Using Natural Language Processing and Deep Learning Model with Statistical Feature Selection Technique
by Shaik Fayaz Ahamed, Sundarakumar Karuppasamy and Ponnuraja Chinnaiyan
Informatics 2025, 12(3), 64; https://doi.org/10.3390/informatics12030064 - 7 Jul 2025
Abstract
Background: In the medical field, various deep learning (DL) algorithms have been effectively used to extract valuable information from unstructured clinical text data, potentially leading to more effective outcomes. This study utilized clinical text data to classify clinical case reports into tuberculosis (TB) and non-tuberculosis (non-TB) groups using natural language processing (NLP) pre-processing techniques and DL models. Methods: This study used 1743 open-source respiratory disease clinical text records, labeled via fuzzy matching with ICD-10 codes to create a labeled dataset. Two tokenization methods were used to preprocess the clinical text data, and three models were evaluated: the existing Text-CNN, the proposed Text-CNN with t-test, and Bio_ClinicalBERT. Performance was assessed using multiple metrics and validated on 228 baseline screening clinical case records collected from ICMR–NIRT to demonstrate effective TB classification. Results: The proposed model achieved the best results on both the test and validation datasets. On the test dataset, it attained a precision of 88.19%, a recall of 90.71%, an F1-score of 89.44%, and an AUC of 0.91. Similarly, on the validation dataset, it achieved 100% precision, 98.85% recall, a 99.42% F1-score, and an AUC of 0.982, demonstrating its effectiveness in TB classification. Conclusions: This study highlights the effectiveness of DL models in classifying TB cases from clinical notes. The proposed model outperformed the other two models. The TF-IDF and t-test combination provided statistically grounded feature selection and enhanced model interpretability and efficiency, demonstrating the potential of NLP and DL in automating TB diagnosis in clinical decision settings. Full article
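The abstract pairs TF-IDF weighting with a t-test for statistical feature selection. A sketch of that combination on synthetic term counts — the vocabulary, the Poisson-generated counts, and the Bonferroni threshold are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
vocab = ["sputum", "smear", "cough", "pain", "visit", "note"]

# Synthetic term counts: 30 TB and 30 non-TB notes; "sputum"/"smear"
# occur far more often in TB notes, the remaining terms are uninformative.
tb = rng.poisson([3, 3, 1, 1, 1, 1], size=(30, 6))
non_tb = rng.poisson([0.2, 0.2, 1, 1, 1, 1], size=(30, 6))
tf = np.vstack([tb, non_tb]).astype(float)
labels = np.array([1] * 30 + [0] * 30)

df = np.maximum((tf > 0).sum(axis=0), 1)   # document frequency per term
tfidf = tf * np.log(len(tf) / df)          # plain (unsmoothed) tf-idf

# Two-sample t-test on each term's weights across the two classes;
# keep terms whose weights differ significantly (Bonferroni-corrected).
pvals = np.array([ttest_ind(tfidf[labels == 1, j], tfidf[labels == 0, j]).pvalue
                  for j in range(len(vocab))])
selected = [w for w, p in zip(vocab, pvals) if p < 0.05 / len(vocab)]
print(selected)
```

Selecting only the statistically discriminative columns before training is what shrinks the feature space and makes the downstream classifier easier to interpret.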
17 pages, 561 KiB  
Article
Web Accessibility in an Academic Management System in Brazil: Problems and Challenges for Attending People with Visual Impairments
by Mayra Correa, Maria Albeti Vitoriano and Carlos Humberto Llanos
Informatics 2025, 12(3), 63; https://doi.org/10.3390/informatics12030063 - 4 Jul 2025
Abstract
Accessibility in web systems is essential to ensure everyone can obtain information equally. Based on the Web Content Accessibility Guidelines (WCAG), the Electronic Government Accessibility Model (eMAG) was established in Brazil to guide the accessibility of federal government web systems. Against this background, this research sought to understand the reasons behind the persistent gaps in web accessibility in Brazil, even after 20 years of eMAG. To this end, the accessibility of the Integrated Academic Activities Management System (SIGAA), used by 39 higher education institutions in Brazil, was evaluated. The living lab methodology was used to carry out accessibility and usability tests based on the experiences of students with visual impairments while interacting with the system. Furthermore, IT professionals' knowledge of the eMAG/WCAG guidelines, their use of accessibility tools, and their beliefs about accessible systems were investigated through an online questionnaire. Additionally, the syllabuses of training courses for IT professionals at 20 universities were examined through document analysis. The research confirmed non-compliance with the guidelines in the software studied, gaps in IT professionals' knowledge of software accessibility practices, and the inadequacy of accessibility content within training courses. It is concluded, therefore, that universities should incorporate mandatory courses on software accessibility into training programs for IT professionals and that organizations should provide continuous training in software accessibility practices. Furthermore, the current accessibility legislation should be updated, and compliance should be required of all organizations, whether public or private. Full article
21 pages, 4241 KiB  
Article
Federated Learning-Driven Cybersecurity Framework for IoT Networks with Privacy Preserving and Real-Time Threat Detection Capabilities
by Milad Rahmati and Antonino Pagano
Informatics 2025, 12(3), 62; https://doi.org/10.3390/informatics12030062 - 4 Jul 2025
Abstract
The rapid expansion of the Internet of Things (IoT) ecosystem has transformed industries but also exposed significant cybersecurity vulnerabilities. Traditional centralized methods for securing IoT networks struggle to balance privacy preservation with real-time threat detection. This study presents a Federated Learning-Driven Cybersecurity Framework designed for IoT environments, enabling decentralized data processing through local model training on edge devices to ensure data privacy. Secure aggregation using homomorphic encryption supports collaborative learning without exposing sensitive information. The framework employs GRU-based recurrent neural networks (RNNs) for anomaly detection, optimized for resource-constrained IoT networks. Experimental results demonstrate over 98% accuracy in detecting threats such as distributed denial-of-service (DDoS) attacks, with a 20% reduction in energy consumption and a 30% reduction in communication overhead, showcasing the framework’s efficiency over traditional centralized approaches. This work addresses critical gaps in IoT cybersecurity by integrating federated learning with advanced threat detection techniques. It offers a scalable, privacy-preserving solution for diverse IoT applications, with future directions including blockchain integration for model aggregation traceability and quantum-resistant cryptography to enhance security. Full article
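The aggregation step at the heart of such a framework — combining locally trained models without moving raw data — can be sketched with plain federated averaging (FedAvg). This is an illustrative sketch only: the weight vectors below are random stand-ins for GRU parameters, and the paper's homomorphic encryption of updates is omitted.

```python
# Sketch: FedAvg aggregation of locally trained model weights.
# Weights are random stand-ins; encryption of updates is omitted.
import numpy as np

rng = np.random.default_rng(0)

# Each edge device reports (local weights, number of local samples).
local_updates = [
    (rng.normal(size=8), 120),  # device A
    (rng.normal(size=8), 300),  # device B
    (rng.normal(size=8), 80),   # device C
]

# Weighted average of parameters, proportional to local dataset size.
total = sum(n for _, n in local_updates)
global_weights = sum(w * (n / total) for w, n in local_updates)
print(global_weights.round(3))
```

In the described framework this average would be computed over encrypted updates, with blockchain smart contracts gating which models enter the aggregation.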
25 pages, 2618 KiB  
Review
International Trends and Influencing Factors in the Integration of Artificial Intelligence in Education with the Application of Qualitative Methods
by Juan Luis Cabanillas-García
Informatics 2025, 12(3), 61; https://doi.org/10.3390/informatics12030061 - 4 Jul 2025
Abstract
This study offers a comprehensive examination of the scientific output related to the integration of Artificial Intelligence (AI) in education using qualitative research methods, which is an emerging intersection that reflects growing interest in understanding the pedagogical, ethical, and methodological implications of AI in educational contexts. Grounded in a theoretical framework that emphasizes the potential of AI to support personalized learning, augment instructional design, and facilitate data-driven decision-making, this study conducts a Systematic Literature Review and bibliometric analysis of 630 publications indexed in Scopus between 2014 and 2024. The results show a significant increase in scholarly output, particularly since 2020, with notable contributions from authors and institutions in the United States, China, and the United Kingdom. High-impact research is found in top-tier journals, and dominant themes include health education, higher education, and the use of AI for feedback and assessment. The findings also highlight the role of semi-structured interviews, thematic analysis, and interdisciplinary approaches in capturing the nuanced impacts of AI integration. The study concludes that qualitative methods remain essential for critically evaluating AI’s role in education, reinforcing the need for ethically sound, human-centered, and context-sensitive applications of AI technologies in diverse learning environments. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)
35 pages, 1982 KiB  
Article
Predicting Mental Health Problems in Gay Men in Peru Using Machine Learning and Deep Learning Models
by Alejandro Aybar-Flores and Elizabeth Espinoza-Portilla
Informatics 2025, 12(3), 60; https://doi.org/10.3390/informatics12030060 - 2 Jul 2025
Abstract
Mental health disparities among those who self-identify as gay men in Peru remain a pressing public health concern, yet predictive models for early identification remain limited. This research aims to (1) develop machine learning and deep learning models to predict mental health issues in those who self-identify as gay men, and (2) evaluate the influence of demographic, economic, health-related, behavioral and social factors using interpretability techniques to enhance understanding of the factors shaping mental health outcomes. A dataset of 2186 gay men from the First Virtual Survey for LGBTIQ+ People in Peru (2017) was analyzed, considering demographic, economic, health-related, behavioral, and social factors. Several classification models were developed and compared, including Logistic Regression, Artificial Neural Networks, Random Forest, Gradient Boosting Machines, eXtreme Gradient Boosting, and a One-dimensional Convolutional Neural Network (1D-CNN). Additionally, the Shapley values and Layer-wise Relevance Propagation (LRP) heatmaps methods were used to evaluate the influence of the studied variables on the prediction of mental health issues. The results revealed that the 1D-CNN model demonstrated the strongest performance, achieving the highest classification accuracy and discrimination capability. Explainability analyses underlined prior infectious diseases diagnosis, access to medical assistance, experiences of discrimination, age, and sexual identity expression as key predictors of mental health outcomes. These findings suggest that advanced predictive techniques can provide valuable insights for identifying at-risk individuals, informing targeted interventions, and improving access to mental health care. Future research should refine these models to enhance predictive accuracy, broaden applicability, and support the integration of artificial intelligence into public health strategies aimed at addressing the mental health needs of this population. Full article
(This article belongs to the Section Health Informatics)
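The Shapley-value attribution used in the explainability analysis can be illustrated with an exact computation on a toy model. This is a minimal sketch under stated assumptions — a 3-feature linear model with made-up weights standing in for the trained 1D-CNN; a real analysis would use a library such as shap on the actual model.

```python
# Sketch: exact Shapley values for a toy 3-feature linear model, computed by
# enumerating all feature orderings. Weights, instance, and baseline are toys.
from itertools import permutations
from math import factorial
import numpy as np

w = np.array([0.5, -1.0, 2.0])   # toy linear "model" weights
x = np.array([1.0, 2.0, 0.5])    # instance to explain
baseline = np.zeros(3)           # reference input

def f(mask):
    # Model output with unmasked features taken from x, the rest from baseline.
    return float(w @ np.where(mask, x, baseline))

n = len(x)
phi = np.zeros(n)
for perm in permutations(range(n)):
    mask = np.zeros(n, dtype=bool)
    for i in perm:          # add features one by one in this ordering
        before = f(mask)
        mask[i] = True
        phi[i] += f(mask) - before
phi /= factorial(n)         # average marginal contribution over orderings
print(phi)                  # for a linear model this equals w * (x - baseline)
```

The exact enumeration is exponential in the number of features, which is why practical tools approximate it by sampling orderings or exploiting model structure.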
15 pages, 1079 KiB  
Article
Investigation of the Time Series Users’ Reactions on Instagram and Its Statistical Modeling
by Yasuhiro Sato and Yuhei Doka
Informatics 2025, 12(3), 59; https://doi.org/10.3390/informatics12030059 - 27 Jun 2025
Abstract
For the last decade, social networking services (SNS), such as X, Facebook, and Instagram, have become mainstream media for advertising and marketing. In SNS marketing, word-of-mouth among users can spread posted advertising information, which is known as viral marketing. In this study, we first analyzed the time series of user reactions to Instagram posts to clarify the characteristics of user behavior. Second, we modeled these variations using statistical distributions to predict the information diffusion of future posts and to provide some insights into the factors that affect users' reactions on Instagram based on the estimated parameters of the model. Our results demonstrate that user reactions peak immediately after posting and then decrease sharply and exponentially as time elapses. In addition, modeling with the Weibull distribution is the most suitable for user reactions, and the estimated parameters help identify key factors that influence user reactions. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)
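Fitting a Weibull distribution to reaction-arrival times, as the study does, can be sketched with SciPy. This is an illustrative sketch only: the synthetic timestamps below stand in for real Instagram reaction data, and the chosen shape/scale are hypothetical.

```python
# Sketch: fit a Weibull distribution to (synthetic) reaction arrival times.
# A shape parameter below 1 reproduces the observed sharp early peak
# followed by rapid decay.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(42)

# Synthetic "minutes after posting" for 5000 reactions (toy parameters).
true_shape, true_scale = 0.6, 30.0
reaction_times = weibull_min.rvs(true_shape, scale=true_scale,
                                 size=5000, random_state=rng)

# Estimate shape (k) and scale (lambda) with the location fixed at zero.
k, loc, lam = weibull_min.fit(reaction_times, floc=0)
print(f"estimated shape k={k:.2f}, scale lambda={lam:.1f}")
```

Comparing fitted parameters across posts is what lets the estimated shape and scale act as proxies for factors influencing user reactions.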
18 pages, 4253 KiB  
Article
The Emotional Landscape of Technological Innovation: A Data-Driven Case Study of ChatGPT’s Launch
by Lowri Williams and Pete Burnap
Informatics 2025, 12(3), 58; https://doi.org/10.3390/informatics12030058 - 22 Jun 2025
Abstract
The rapid development and deployment of artificial intelligence (AI) technologies have sparked intense public interest and debate. While these innovations promise to revolutionise various aspects of human life, it is crucial to understand the complex emotional responses they elicit from potential adopters and users. Such findings can offer crucial guidance for stakeholders involved in the development, implementation, and governance of AI technologies like OpenAI’s ChatGPT, a large language model (LLM) that garnered significant attention upon its release, enabling more informed decision-making regarding potential challenges and opportunities. While previous studies have employed data-driven approaches towards investigating public reactions to emerging technologies, they often relied on sentiment polarity analysis, which categorises responses as positive or negative. However, this binary approach fails to capture the nuanced emotional landscape surrounding technological adoption. This paper overcomes this limitation by presenting a comprehensive analysis for investigating the emotional landscape surrounding technology adoption by using the launch of ChatGPT as a case study. In particular, a large corpus of social media texts containing references to ChatGPT was compiled. Text mining techniques were applied to extract emotions, capturing a more nuanced and multifaceted representation of public reactions. This approach allows the identification of specific emotions such as excitement, fear, surprise, and frustration, providing deeper insights into user acceptance, integration, and potential adoption of the technology. By analysing this emotional landscape, we aim to provide a more comprehensive understanding of the factors influencing ChatGPT’s reception and potential long-term impact. Furthermore, we employ topic modelling to identify and extract the common themes discussed across the dataset. This additional layer of analysis allows us to understand the specific aspects of ChatGPT driving different emotional responses. By linking emotions to particular topics, we gain a more contextual understanding of public reaction, which can inform decision-making processes in the development, deployment, and regulation of AI technologies. Full article
(This article belongs to the Section Big Data Mining and Analytics)
18 pages, 2689 KiB  
Article
Blockchain-Enabled, Nature-Inspired Federated Learning for Cattle Health Monitoring
by Lakshmi Prabha Ganesan and Saravanan Krishnan
Informatics 2025, 12(3), 57; https://doi.org/10.3390/informatics12030057 - 20 Jun 2025
Abstract
Traditional cattle health monitoring systems rely on centralized data collection, posing significant challenges related to data privacy, network connectivity, model reliability, and trust. This study introduces a novel, nature-inspired federated learning (FL) framework for cattle health monitoring, integrating blockchain to ensure model validation, system resilience, and reputation management. Inspired by the fission–fusion dynamics of elephant herds, the framework adaptively forms and merges subgroups of edge nodes based on six key parameters: health metrics, activity levels, geographical proximity, resource availability, temporal activity, and network connectivity. Graph attention networks (GATs) enable dynamic fission, while Density-Based Spatial Clustering of Applications with Noise (DBSCAN) supports subgroup fusion based on model similarity. Blockchain smart contracts validate model contributions and ensure that only high-performing models participate in global aggregation. A reputation-driven mechanism promotes reliable nodes and discourages unstable participants. Experimental results show the proposed framework achieves 94.3% model accuracy, faster convergence, and improved resource efficiency. This adaptive and privacy-preserving approach transforms cattle health monitoring into a more trustworthy, efficient, and resilient process. Full article
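The DBSCAN-based fusion step — merging subgroups whose models have converged to similar parameters — can be sketched by clustering flattened weight vectors. This is an illustrative sketch only: the vectors and the `eps` value are hypothetical stand-ins for the framework's real model representations.

```python
# Sketch: DBSCAN over flattened model weight vectors; subgroups whose models
# land in the same cluster are candidates for fusion. Data are toy stand-ins.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)

# Flattened weight vectors from six subgroup models: two tight families.
models = np.vstack([
    rng.normal(loc=0.0, scale=0.05, size=(3, 16)),  # subgroup family 1
    rng.normal(loc=1.0, scale=0.05, size=(3, 16)),  # subgroup family 2
])

clusters = DBSCAN(eps=0.5, min_samples=2).fit_predict(models)
print(clusters)
```

In the described framework this similarity clustering is paired with GAT-driven fission, so groups both split and merge as herd conditions change.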