- The Emotional Landscape of Technological Innovation: A Data Driven Case Study of ChatGPT’s Launch
- Decoding Trust in Artificial Intelligence: A Systematic Review of Quantitative Measures and Related Variables
- Enhancing Cultural Heritage Accessibility Through 3D Artifact Visualization on Web-Based Open Frameworks
- State-of-the-Art Cross-Platform Mobile Application Development Frameworks: A Comparative Study of Market and Developer Trends
Journal Description
Informatics is an international, peer-reviewed, open access journal on information and communication technologies, human–computer interaction, and social informatics, and is published quarterly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, and other databases.
- Journal Rank: CiteScore - Q1 (Communication)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 34.9 days after submission; the time from acceptance to publication is 4.7 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.8 (2024); 5-Year Impact Factor: 3.1 (2024)
Latest Articles
CyberKG: Constructing a Cybersecurity Knowledge Graph Based on SecureBERT_Plus for CTI Reports
Informatics 2025, 12(3), 100; https://doi.org/10.3390/informatics12030100 - 22 Sep 2025
Abstract
Cyberattacks, especially Advanced Persistent Threats (APTs), have become more complex. These evolving threats challenge traditional defense systems, which struggle to counter long-lasting and covert attacks. Cybersecurity Knowledge Graphs (CKGs), enabled through the integration of multi-source Cyber Threat Intelligence (CTI), introduce novel approaches for proactive defense. However, building CKGs faces challenges such as unclear terminology, overlapping entity relationships in attack chains, and differences in CTI across sources. To tackle these challenges, we propose the CyberKG framework, which improves entity recognition and relation extraction using a SecureBERT_Plus-BiLSTM-Attention-CRF joint architecture. Semantic features are captured using a domain-adapted SecureBERT_Plus model, while temporal dependencies are modeled through BiLSTM. Attention mechanisms highlight key cross-sentence relationships, while CRF incorporates ATT&CK rule constraints. Hierarchical agglomerative clustering (HAC), based on contextual embeddings, facilitates dynamic entity disambiguation and semantic fusion. Experimental evaluations on the DNRTI and MalwareDB datasets demonstrate strong performance in extraction accuracy, entity normalization, and the resolution of overlapping relations. The constructed knowledge graph supports APT tracking, attack-chain provenance, and proactive defense prediction.
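A minimal sketch of the entity disambiguation step described above, assuming mention embeddings have already been produced by a SecureBERT-style encoder (random placeholder vectors stand in here) and using scikit-learn's hierarchical agglomerative clustering; the mention strings, threshold, and embedding size are illustrative assumptions, not the CyberKG implementation.

```python
# Hypothetical HAC-based fusion of entity mentions into canonical nodes.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AgglomerativeClustering  # requires scikit-learn >= 1.2 for `metric`

mentions = ["APT28", "Fancy Bear", "Sofacy Group", "Emotet", "Heodo"]  # example aliases
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(mentions), 768))   # placeholder contextual embeddings

# Cosine-distance matrix; average-linkage HAC with a distance threshold decides
# how many merged entities remain (no fixed cluster count).
dist = squareform(pdist(embeddings, metric="cosine"))
hac = AgglomerativeClustering(n_clusters=None, distance_threshold=0.6,
                              metric="precomputed", linkage="average")
labels = hac.fit_predict(dist)

# Mentions sharing a label would be fused into one knowledge-graph node.
for cluster_id in sorted(set(labels)):
    print(cluster_id, [m for m, lab in zip(mentions, labels) if lab == cluster_id])
```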
Open Access Article
Talking Tech, Teaching with Tech: How Primary Teachers Implement Digital Technologies in Practice
by Lyubka Aleksieva, Veronica Racheva and Roumiana Peytcheva-Forsyth
Informatics 2025, 12(3), 99; https://doi.org/10.3390/informatics12030099 - 22 Sep 2025
Abstract
This paper explores how primary school teachers integrate digital technologies into their classroom practice, with a particular focus on the extent to which their stated intentions align with what actually takes place during lessons. Drawing on data from the Bulgarian SUMMIT project on digital transformation in education, the study employed a mixed-methods design combining semi-structured interviews, structured lesson observations, and analysis of teaching materials. The sample included 44 teachers from 26 Bulgarian schools, representing a range of educational contexts. The analysis was guided by the Digital Technology Integration Framework (DTIF), which distinguishes between three modes of technology use—Support, Extend, and Transform—based on the depth of pedagogical change. The findings indicated a strong degree of consistency between teachers’ accounts and observed practices in areas such as the use of digital tools for content visualisation, lesson enrichment, and reinforcement of knowledge. At the same time, the study highlights important gaps between teachers’ aspirations and classroom realities. Although many spoke of wanting to promote independent exploration, creativity, collaboration, and digital citizenship, these ambitions were rarely realised in observed lessons. Pupil autonomy and opportunities for creative digital production were limited, with extended and transformative practices appearing only occasionally. No significant subject-specific differences were identified: teachers across disciplines tended to rely on the same set of familiar tools, while more advanced or innovative uses of technology remained rare. Rather than offering a definitive account of progress, the study raises critical questions about teachers’ digital pedagogical competencies, contextual constraints and the depth of technology integration in everyday classroom practice. While digital tools are increasingly present, their use often remains limited to supporting traditional instruction, with extended and transformative applications still aspirational rather than routine. The findings draw attention to context-specific challenges in the Bulgarian primary education system and the importance of aligning digital innovation with pedagogical intent. This highlights the need for sustained professional development focused on learner-centred digital pedagogies, along with stronger institutional support and equitable access to infrastructure.
Open Access Systematic Review
From E-Government to AI E-Government: A Systematic Review of Citizen Attitudes
by Ioanna Savveli, Maria Rigou and Stefanos Balaskas
Informatics 2025, 12(3), 98; https://doi.org/10.3390/informatics12030098 - 16 Sep 2025
Abstract
Governments increasingly integrate artificial intelligence (AI) into digital public services, and understanding how citizens perceive and respond to these technologies has become essential. This systematic review analyzes 30 empirical studies published from early January 2019 to mid-April 2025, following PRISMA guidelines, to map the current landscape of citizen attitudes toward AI-enabled e-government services. Guided by four research questions, the study examines: (1) the forms of AI implementation most commonly investigated, (2) the attitudinal variables used to assess user perception, (3) key factors influencing attitudes, and (4) concerns and challenges reported by users. The findings reveal that chatbots dominate current implementations, with behavioral intentions and satisfaction serving as the main outcome measures. Perceived usefulness, ease of use, trust, and perceived risk emerge as recurring determinants of positive attitudes. However, widespread concerns related to privacy and interface usability highlight persistent barriers. Overall, the review underscores the need for transparent, citizen-centered AI design and ethical safeguards to enhance acceptance and trust. It concludes that future research should address understudied applications, include vulnerable populations, and explore perceptions across diverse public sector domains.
Open Access Article
The Impact of the 2023 Wikipedia Redesign on User Experience
by Tyler Wilson, Prajjwal Gandharv and Karl Vachuska
Informatics 2025, 12(3), 97; https://doi.org/10.3390/informatics12030097 - 16 Sep 2025
Abstract
In January 2023, Wikipedia introduced its most significant user interface (UI) redesign in over a decade, aiming to improve readability, accessibility, and navigation across devices. Despite the scale of this change, little empirical work has assessed its actual impact on user behavior. This study employs a natural experiment framework, leveraging Wikipedia’s exogenous, site-wide redesign date and large-scale, publicly available data—including clickstream, pageview, and edit histories—to evaluate user experience before and after the change. Using a quasi-experimental design, we estimate an immediate jump of ~1.06 million monthly internal link clicks at launch, while average hourly pageviews in January rose 1.25% despite a one-time –1.79 million dip at rollout. These results highlight the potential of large-scale UI changes to reshape user interaction without broadly alienating users and demonstrate the value of quasi-experimental methods for Human–Computer Interaction (HCI) research. Our approach offers a replicable framework for evaluating real-world design interventions at scale.
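The immediate jump at rollout reported above is the kind of quantity a level-shift (interrupted time series) regression recovers. The sketch below runs such a regression on synthetic placeholder data with a hypothetical redesign date; it illustrates the quasi-experimental idea only, not the authors' clickstream pipeline.

```python
# Level-shift regression: the coefficient on `post` estimates the jump at rollout.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
days = pd.date_range("2022-11-01", "2023-03-31", freq="D")
post = (days >= "2023-01-18").astype(int)            # hypothetical redesign date
trend = np.arange(len(days))
clicks = 5_000_000 + 2_000 * trend + 35_000 * post + rng.normal(0, 20_000, len(days))

df = pd.DataFrame({"clicks": clicks, "post": post, "trend": trend})
fit = smf.ols("clicks ~ trend + post", data=df).fit()
print(fit.params["post"], fit.conf_int().loc["post"].tolist())
```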
Open Access Article
Digital Cultural Heritage in Southeast Asia: Knowledge Structures and Resources in GLAM Institutions
by Kanyarat Kwiecien, Wirapong Chansanam and Kulthida Tuamsuk
Informatics 2025, 12(3), 96; https://doi.org/10.3390/informatics12030096 - 15 Sep 2025
Abstract
This study explores the digital organization of cultural heritage knowledge across national GLAM institutions (galleries, libraries, archives, and museums) in the ten ASEAN countries. Employing a qualitative content analysis approach, it investigates the types, structures, and dissemination patterns of information resources available on 40 institutional websites. The findings reveal the diversity and richness of Southeast Asian cultural heritage, including national and local wisdom, history, significant figures, and material culture, collected and curated by these institutions. This study identifies key knowledge domains, content overlaps across GLAM sectors, and limitations in metadata and interoperability. Comparative analysis with international cultural knowledge infrastructures, such as the United Nations Educational, Scientific and Cultural Organization (UNESCO) framework, Europeana, and the World Digital Library, highlights both shared values and regional distinctions. While GLAMs in the ASEAN region have made significant strides in digital preservation and access, the lack of standardized metadata and cross-institutional integration impedes broader discoverability and reuse. This study contributes to the discourse on heritage informatics by providing an empirical foundation for enhancing digital cultural heritage systems in developing regions. The implications point toward the need for interoperable metadata standards, regional collaboration, and capacity building to support sustainable digital heritage ecosystems. The study offers practical insights for policymakers, digital curators, and information professionals seeking to improve cultural knowledge infrastructures in Southeast Asia and similar contexts.
Open Access Article
Deep Learning-Based Forecasting of Boarding Patient Counts to Address Emergency Department Overcrowding
by Orhun Vural, Bunyamin Ozaydin, James Booth, Brittany F. Lindsey and Abdulaziz Ahmed
Informatics 2025, 12(3), 95; https://doi.org/10.3390/informatics12030095 - 15 Sep 2025
Abstract
Emergency department (ED) overcrowding remains a major challenge for hospitals, resulting in worse outcomes, longer waits, elevated hospital operating costs, and greater strain on staff. Boarding count, the number of patients who have been admitted to an inpatient unit but are still in the ED waiting for transfer, is a key patient flow metric that affects overall ED operations. This study presents a deep learning-based approach to forecasting ED boarding counts using only operational and contextual features—derived from hourly ED tracking, inpatient census, weather, holiday, and local event data—without patient-level clinical information. Different deep learning algorithms were tested, including convolutional and transformer-based time-series models, and the best-performing model, Time Series Transformer Plus (TSTPlus), achieved strong performance at the 6-h prediction horizon, with a mean absolute error of 4.30 and an R2 score of 0.79. After identifying TSTPlus as the best-performing model, its performance was further evaluated at additional horizons of 8, 10, and 12 h. The model was also evaluated under extreme operational conditions, demonstrating robust and accurate forecasts. These findings highlight the potential of the proposed forecasting approach to support proactive operational planning and reduce ED overcrowding.
(This article belongs to the Section Big Data Mining and Analytics)
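A minimal sketch of the forecasting setup described above: hourly boarding counts are framed as sliding windows with a 6-hour horizon and scored with the MAE and R2 metrics the abstract reports. A generic gradient-boosting regressor and synthetic data stand in for the study's TSTPlus model and hospital feeds.

```python
# Sliding-window 6-hour-ahead forecast of a synthetic boarding-count series.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, r2_score

rng = np.random.default_rng(42)
hours = 2_000
boarding = np.clip(20 + 10 * np.sin(np.arange(hours) / 24 * 2 * np.pi)
                   + rng.normal(0, 3, hours), 0, None)

LOOKBACK, HORIZON = 24, 6                     # 24 h of history, predict 6 h ahead
X = np.stack([boarding[i:i + LOOKBACK] for i in range(hours - LOOKBACK - HORIZON)])
y = boarding[LOOKBACK + HORIZON - 1: hours - 1]

split = int(0.8 * len(X))                     # chronological train/test split
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAE:", mean_absolute_error(y[split:], pred), "R2:", r2_score(y[split:], pred))
```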
Open Access Article
Simulating the Effects of Sensor Failures on Autonomous Vehicles for Safety Evaluation
by Francisco Matos, João Durães and João Cunha
Informatics 2025, 12(3), 94; https://doi.org/10.3390/informatics12030094 - 15 Sep 2025
Abstract
Autonomous vehicles (AVs) are increasingly becoming a reality, enabled by advances in sensing technologies, intelligent control systems, and real-time data processing. For AVs to operate safely and effectively, they must maintain a reliable perception of their surroundings and internal state. However, sensor failures, whether due to noise, malfunction, or degradation, can compromise this perception and lead to incorrect localization or unsafe decisions by the autonomous control system. While modern AV systems often combine data from multiple sensors to mitigate such risks through sensor fusion techniques (e.g., Kalman filtering), the extent to which these systems remain resilient under faulty conditions remains an open question. This work presents a simulation-based fault injection framework to assess the impact of sensor failures on AVs’ behavior. The framework enables structured testing of autonomous driving software under controlled fault conditions, allowing researchers to observe how specific sensor failures affect system performance. To demonstrate its applicability, an experimental campaign was conducted using the CARLA simulator integrated with the Autoware autonomous driving stack. A multi-segment urban driving scenario was executed using a modified version of CARLA’s Scenario Runner to support Autoware-based evaluations. Faults were injected simulating LiDAR, GNSS, and IMU sensor failures in different route scenarios. The fault types considered in this study include silent sensor failures and severe noise. The results obtained by emulating sensor failures in our chosen system under test, Autoware, show that faults in LiDAR and IMU gyroscope have the most critical impact, often leading to erratic motion and collisions. In contrast, faults in GNSS and IMU accelerometers were well tolerated. This demonstrates the ability of the framework to investigate the fault-tolerance of AVs in the presence of critical sensor failures.
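A minimal sketch of the two fault types studied, silent failure and severe noise, applied to a generic numeric sensor stream; the function name, fault onset, and noise level are illustrative assumptions, not the paper's CARLA/Autoware fault-injection harness.

```python
# Emulate sensor faults on a stream of readings.
import random
from typing import Iterator, Optional, Sequence

def inject_faults(readings: Sequence[float], mode: str = "noise",
                  start: int = 100, noise_std: float = 5.0) -> Iterator[Optional[float]]:
    """Yield readings unchanged until `start`, then corrupt every sample."""
    for i, value in enumerate(readings):
        if i < start:
            yield value                                  # healthy phase
        elif mode == "silent":
            yield None                                   # sensor stops publishing
        else:
            yield value + random.gauss(0.0, noise_std)   # severe additive noise

# Example: corrupt a hypothetical GNSS latitude stream after sample 100.
clean = [10.0 + 0.001 * i for i in range(200)]
faulty = list(inject_faults(clean, mode="noise", start=100))
```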
Open Access Article
Federated Learning Spam Detection Based on FedProx and Multi-Level Multi-Feature Fusion
by Yunpeng Xiong, Junkuo Cao and Guolian Chen
Informatics 2025, 12(3), 93; https://doi.org/10.3390/informatics12030093 - 12 Sep 2025
Abstract
Traditional spam detection methodologies often neglect user privacy preservation, potentially incurring data leakage risks. Furthermore, current federated learning models for spam detection face several critical challenges: (1) data heterogeneity and instability during server-side parameter aggregation, (2) training instability in single neural network architectures leading to mode collapse, and (3) constrained expressive capability in multi-module frameworks due to excessive complexity. These issues represent fundamental research pain points in federated learning-based spam detection systems. To address this technical challenge, this study innovatively integrates federated learning frameworks with multi-feature fusion techniques to propose a novel spam detection model, FPW-BC. The FPW-BC model addresses data distribution imbalance through the FedProx aggregation algorithm and enhances stability during server-side parameter aggregation via a horse-racing selection strategy. The model effectively mitigates limitations inherent in both single and multi-module architectures through hierarchical multi-feature fusion. To validate FPW-BC’s performance, comprehensive experiments were conducted on six benchmark datasets with distinct distribution characteristics: CEAS, Enron, Ling, Phishing_email, Spam_email, and Fake_phishing, with comparative analysis against multiple baseline methods. Experimental results demonstrate that FPW-BC achieves exceptional generalization capability for various spam patterns while maintaining user privacy preservation. The model attained 99.40% accuracy on CEAS and 99.78% on Fake_phishing, representing significant dual improvements in both privacy protection and detection efficiency.
(This article belongs to the Topic Recent Advances in Artificial Intelligence for Security and Security for Artificial Intelligence)
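A minimal sketch of the FedProx aggregation idea the model builds on, assuming a plain logistic-regression client and synthetic non-IID data: each client adds a proximal term (mu/2)*||w - w_global||^2 to its local loss so heterogeneous clients do not drift far from the server model. This is not the FPW-BC code.

```python
# FedProx-style local update plus size-weighted server averaging.
import numpy as np

def fedprox_client_update(w_global, X, y, mu=0.1, lr=0.05, epochs=20):
    """Local logistic-regression training with a proximal penalty toward w_global."""
    w = w_global.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                     # predicted spam probability
        grad = X.T @ (p - y) / len(y) + mu * (w - w_global)  # loss gradient + proximal gradient
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation weighted by client dataset size."""
    return np.average(np.stack(client_weights), axis=0, weights=np.asarray(client_sizes, float))

rng = np.random.default_rng(0)
w_global = np.zeros(10)
clients = [(rng.normal(size=(50, 10)), rng.integers(0, 2, 50).astype(float)) for _ in range(2)]
updates = [fedprox_client_update(w_global, X, y) for X, y in clients]
w_global = server_aggregate(updates, [len(y) for _, y in clients])
```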
Open Access Article
A Data-Driven Informatics Framework for Regional Sustainability: Integrating Twin Mean-Variance Two-Stage DEA with Decision Analytics
by Pasura Aungkulanon, Roberto Montemanni, Atiwat Nanphang and Pongchanun Luangpaiboon
Informatics 2025, 12(3), 92; https://doi.org/10.3390/informatics12030092 - 11 Sep 2025
Abstract
This study introduces a novel informatics framework for assessing regional sustainability by integrating Twin Mean-Variance Two-Stage Data Envelopment Analysis (TMV-TSDEA) with a desirability-based decision analytics system. The model evaluates both the efficiency and stability of economic and environmental performance across regions, supporting evidence-based policymaking and strategic planning. Applied to 16 Thai provinces, the framework incorporates a wide range of indicators—such as investment, population, tourism, industrial output, electricity use, forest coverage, and air quality. The twin mean-variance approach captures not only average efficiency but also the consistency of performance over time or under varying scenarios. A two-stage DEA structure models the transformation from economic inputs to environmental outcomes. To ensure comparability, all variables are normalized using desirability functions based on standardized statistical coding. The TMV-TSDEA framework generates composite performance scores that reveal clear disparities among regions. Provinces like Bangkok and Ayutthaya demonstrate a consistent high performance, while others show underperformance or variability requiring targeted policy action. Designed for integration with smart governance platforms, the framework provides a scalable and reproducible tool for regional benchmarking, resource allocation, and sustainability monitoring. By combining informatics principles with advanced analytics, TMV-TSDEA enhances transparency, supports decision-making, and offers a holistic foundation for sustainable regional development.
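A minimal sketch of desirability-based normalization of heterogeneous indicators onto a common [0, 1] scale before they enter the two-stage DEA; the larger-is-better and smaller-is-better forms, bounds, and example values are standard illustrative assumptions, not the authors' exact coding scheme.

```python
# Desirability functions for regional indicators.
import numpy as np

def desirability_larger_is_better(x, lo, hi, shape=1.0):
    """Indicators where more is better (e.g., forest coverage)."""
    return np.clip((np.asarray(x, float) - lo) / (hi - lo), 0.0, 1.0) ** shape

def desirability_smaller_is_better(x, lo, hi, shape=1.0):
    """Indicators where less is better (e.g., PM2.5 concentration)."""
    return np.clip((hi - np.asarray(x, float)) / (hi - lo), 0.0, 1.0) ** shape

forest_pct = desirability_larger_is_better([31.6, 2.9], lo=0, hi=60)   # two hypothetical provinces
pm25 = desirability_smaller_is_better([24.0, 38.5], lo=10, hi=50)
composite = (forest_pct * pm25) ** 0.5                                  # geometric mean of desirabilities
print(composite)
```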
Open Access Article
Do Trusting Belief and Social Presence Matter? Service Satisfaction in Using AI Chatbots: Necessary Condition Analysis and Importance-Performance Map Analysis
by Tai Ming Wut, Stephanie Wing Lee, Jing (Bill) Xu and Man Lung Jonathan Kwok
Informatics 2025, 12(3), 91; https://doi.org/10.3390/informatics12030091 - 9 Sep 2025
Abstract
Research indicates that perceived trust affects both behavioral intention to use chatbots and satisfaction with the service provided by chatbots in customer service contexts. However, it remains unclear whether perceived propensity to trust impacts service satisfaction in this context. Thus, this research aims to explore how customers’ propensity to trust influences trusting beliefs and, subsequently, their satisfaction when using chatbots for customer service. Through purposive sampling, individuals in Hong Kong with prior experience using chatbots were selected to participate in a quantitative survey. The study employed Necessary Condition Analysis, Importance-Performance Map Analysis, and Partial Least Squares Structural Equation Modelling to examine factors influencing users’ trusting beliefs toward chatbots in customer service settings. Findings revealed that trust in chatbot interactions is significantly influenced by propensity to trust technology, social presence, perceived usefulness, and perceived ease of use. Consequently, these factors, along with trusting belief, also influence service satisfaction in this context. Thus, Social Presence, Perceived Ease of Use, Propensity to Trust, Perceived Usefulness, and Trusting Belief are found to be necessary conditions. Importance-Performance Map Analysis further identified priority areas for managerial action. This research extends the Technology Acceptance Model by incorporating social presence, propensity to trust technology, and trusting belief in the context of AI chatbot use for customer service.
Open Access Article
Digitizing the Higaonon Language: A Mobile Application for Indigenous Preservation in the Philippines
by Danilyn Abingosa, Paul Bokingkito, Jr., Sittie Noffaisah Pasandalan, Jay Rey Gosnell Alovera and Jed Otano
Informatics 2025, 12(3), 90; https://doi.org/10.3390/informatics12030090 - 8 Sep 2025
Abstract
This research addresses the critical need for language preservation among the Higaonon indigenous community in Mindanao, Philippines, through the development of a culturally responsive mobile dictionary application. The Higaonon language faces significant endangerment due to generational language shift, limited documentation, and a scarcity of educational materials. Employing user-centered design principles and participatory lexicography, this study involved collaboration with tribal elders, educators, and youth to document and digitize Higaonon vocabulary across ten culturally significant semantic domains. Each Higaonon lexeme was translated into English, Filipino, and Cebuano to enhance comprehension across linguistic groups. The resulting mobile application incorporates multilingual search capabilities, offline access, phonetic transcriptions, example sentences, and culturally relevant design elements. An evaluation conducted with 30 participants (15 Higaonon and 15 non-Higaonon speakers) revealed high satisfaction ratings across functionality (4.81/5.0), usability (4.63/5.0), and performance (4.73/5.0). Offline accessibility emerged as the most valued feature (4.93/5.0), while comparative analysis identified meaningful differences in user experience between native and non-native speakers, with Higaonon users providing more critical assessments, particularly regarding font readability and performance optimization. The application demonstrates how community-driven technological interventions can support indigenous language revitalization while respecting cultural integrity and intellectual property rights and addressing practical community needs. This research establishes a framework for ethical indigenous language documentation that prioritizes community self-determination and provides empirical evidence that culturally responsive digital technologies can effectively preserve endangered languages while serving as repositories for cultural knowledge embedded within linguistic systems.
Open Access Article
Tourist Flow Prediction Based on GA-ACO-BP Neural Network Model
by Xiang Yang, Yongliang Cheng, Minggang Dong and Xiaolan Xie
Informatics 2025, 12(3), 89; https://doi.org/10.3390/informatics12030089 - 3 Sep 2025
Abstract
Tourist flow prediction plays a crucial role in enhancing the efficiency of scenic area management, optimizing resource allocation, and promoting the sustainable development of the tourism industry. To improve the accuracy and real-time performance of tourist flow prediction, we propose a BP model based on a hybrid genetic algorithm (GA) and ant colony optimization algorithm (ACO), called the GA-ACO-BP model. First, we comprehensively considered multiple key factors related to tourist flow, including historical tourist flow data (such as tourist flow from yesterday, the previous day, and the same period last year), holiday types, climate comfort, and the search popularity index on online map platforms. Second, to address the tendency of the BP model to become trapped in local optima, we introduce the GA, which has excellent global search capabilities. Finally, to further improve local convergence speed, we introduce the ACO algorithm. The experimental results based on tourist flow data from the Elephant Trunk Hill Scenic Area in Guilin indicate that the GA-ACO-BP model achieves optimal values for key tourist flow prediction metrics such as MAPE, RMSE, MAE, and R2, compared to commonly used prediction models. These values are 4.09%, 426.34, 258.80, and 0.98795, respectively. Compared to the initial BP neural network, the improved GA-ACO-BP model reduced error metrics such as MAPE, RMSE, and MAE by 1.12%, 244.04, and 122.91, respectively, and increased the R2 metric by 1.85%.
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
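For reference, the four metrics quoted above (MAPE, RMSE, MAE, R2) can be computed as follows with scikit-learn on placeholder predictions; this snippet only illustrates the scoring, not the GA-ACO-BP model itself.

```python
# Evaluation metrics for a tourist-flow forecast (hypothetical values).
import numpy as np
from sklearn.metrics import (mean_absolute_error, mean_absolute_percentage_error,
                             mean_squared_error, r2_score)

y_true = np.array([5200, 6100, 4800, 7300, 6900], dtype=float)   # observed daily visitors
y_pred = np.array([5050, 6240, 4975, 7110, 7020], dtype=float)   # model forecasts

mape = mean_absolute_percentage_error(y_true, y_pred) * 100      # reported as a percentage
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)
print(f"MAPE={mape:.2f}%  RMSE={rmse:.2f}  MAE={mae:.2f}  R2={r2:.5f}")
```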
Open Access Article
Preliminary Design Guidelines for Evaluating Immersive Industrial Safety Training
by André Cordeiro, Regina Leite, Lucas Almeida, Cintia Neves, Tiago Silva, Alexandre Siqueira, Marcio Catapan and Ingrid Winkler
Informatics 2025, 12(3), 88; https://doi.org/10.3390/informatics12030088 - 1 Sep 2025
Abstract
This study presents preliminary design guidelines to support the evaluation of industrial safety training using immersive technologies, with a focus on high-risk work environments such as working at height. Although virtual reality has been widely adopted for training, few studies have explored its use for behavior-level evaluation, corresponding to Level 3 of the Kirkpatrick Model. Addressing this gap, the study adopts the Design Science Research (DSR) methodology, combining a systematic literature review with expert focus group analysis to develop a conceptual framework for training evaluation. The results identify the key elements necessary for immersive training evaluations, and the resulting guidelines are organized into six categories: scenario configuration, ethical procedures, recruitment, equipment selection, experimental design, and implementation strategies. These guidelines represent a DSR-based conceptual artifact to inform future empirical studies and support the structured assessment of immersive safety training interventions. The study also highlights the potential of integrating behavioral and physiological indicators to support immersive evaluations of behavioral change, offering an expert-informed and structured foundation for future empirical studies in high-risk industrial contexts.
(This article belongs to the Special Issue Real-World Applications and Prototyping of Information Systems for Extended Reality (VR, AR, and MR))
Open Access Article
Analysis and Forecasting of Cryptocurrency Markets Using Bayesian and LSTM-Based Deep Learning Models
by Bidesh Biswas Biki, Makoto Sakamoto, Amane Takei, Md. Jubirul Alam, Md. Riajuliislam and Showaibuzzaman Showaibuzzaman
Informatics 2025, 12(3), 87; https://doi.org/10.3390/informatics12030087 - 30 Aug 2025
Abstract
The rapid rise in cryptocurrency prices has intensified the need for robust forecasting models that can capture their irregular and volatile patterns. This study aims to forecast Bitcoin prices over a 15-day horizon by evaluating and comparing two distinct predictive modeling approaches: the Bayesian State-Space model and Long Short-Term Memory (LSTM) neural networks. Historical price data from January 2024 to April 2025 is used for model training and testing. The Bayesian model provided probabilistic insights, achieving a Mean Squared Error (MSE) of 0.0000 and a Mean Absolute Error (MAE) of 0.0026 on the training data; on the testing data, it achieved an MSE of 0.0013 and an MAE of 0.0307. The LSTM model, in turn, captured temporal dependencies and performed strongly, achieving an MSE of 0.0004, an MAE of 0.0160, an RMSE of 0.0212, and an R2 of 0.9924 on the training data, and an MSE of 0.0007 with an R2 of 0.3505 on the testing data. These results indicate that while the LSTM model excels in training performance, the Bayesian model provides better interpretability with lower error margins in testing, highlighting the trade-offs between model accuracy and probabilistic forecasting in cryptocurrency markets.
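A minimal sketch of an LSTM forecaster of the kind compared above: a single Keras LSTM layer trained on sliding windows of a synthetic, min-max-scaled price series. The window length, layer size, and data are illustrative assumptions; the paper's architecture and train/test protocol are not reproduced.

```python
# Small LSTM next-day forecaster on a placeholder price series (requires TensorFlow).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100
scaled = (prices - prices.min()) / (prices.max() - prices.min())

WINDOW = 30
X = np.stack([scaled[i:i + WINDOW] for i in range(len(scaled) - WINDOW)])[..., None]
y = scaled[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))       # [MSE, MAE] on the (in-sample) windows
```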
Open Access Review
The Temporal Evolution of Large Language Model Performance: A Comparative Analysis of Past and Current Outputs in Scientific and Medical Research
by Ishith Seth, Gianluca Marcaccini, Bryan Lim, Jennifer Novo, Stephen Bacchi, Roberto Cuomo, Richard J. Ross and Warren M. Rozen
Informatics 2025, 12(3), 86; https://doi.org/10.3390/informatics12030086 - 26 Aug 2025
Abstract
Background: Large language models (LLMs) such as ChatGPT have evolved rapidly, with notable improvements in coherence, factual accuracy, and contextual relevance. However, their academic and clinical applicability remains under scrutiny. This study evaluates the temporal performance evolution of LLMs by comparing earlier model outputs (GPT-3.5 and GPT-4.0) with ChatGPT-4.5 across three domains: aesthetic surgery counseling, an academic discussion of base-of-thumb arthritis, and a systematic literature review. Methods: We replicated the methodologies of three previously published studies using identical prompts in ChatGPT-4.5. Each output was assessed against its predecessor using a nine-domain Likert-based rubric measuring factual accuracy, completeness, reference quality, clarity, clinical insight, scientific reasoning, bias avoidance, utility, and interactivity. Expert reviewers in plastic and reconstructive surgery independently scored and compared model outputs across versions. Results: ChatGPT-4.5 outperformed earlier versions across all domains. Reference quality improved most significantly (a score increase of +4.5), followed by factual accuracy (+2.5), scientific reasoning (+2.5), and utility (+2.5). In aesthetic surgery counseling, GPT-3.5 produced generic responses lacking clinical detail, whereas ChatGPT-4.5 offered tailored, structured, and psychologically sensitive advice. In academic writing, ChatGPT-4.5 eliminated reference hallucination, correctly applied evidence hierarchies, and demonstrated advanced reasoning. In the literature review, recall remained suboptimal, but precision, citation accuracy, and contextual depth improved substantially. Conclusion: ChatGPT-4.5 represents a major step forward in LLM capability, particularly in generating trustworthy academic and clinical content. While not yet suitable as a standalone decision-making tool, its outputs now support research planning and early-stage manuscript preparation. Persistent limitations include information recall and interpretive flexibility. Continued validation is essential to ensure ethical, effective use in scientific workflows.
Open Access Article
Human-AI Symbiotic Theory (HAIST): Development, Multi-Framework Assessment, and AI-Assisted Validation in Academic Research
by Laura Thomsen Morello and John C. Chick
Informatics 2025, 12(3), 85; https://doi.org/10.3390/informatics12030085 - 25 Aug 2025
Abstract
This study introduces the Human-AI Symbiotic Theory (HAIST), designed to guide authentic collaboration between human researchers and artificial intelligence in academic contexts, while pioneering a novel AI-assisted approach to theory validation that transforms educational research methodology. Addressing critical gaps in educational theory and advancing validation practices, this research employed a sequential three-phase mixed-methods approach: (1) systematic theoretical synthesis integrating five paradigmatic perspectives across learning theory, cognition, information processing, ethics, and AI domains; (2) development of an innovative validation framework combining three established theory-building approaches with groundbreaking AI-assisted content assessment protocols; and (3) comprehensive theory validation through both traditional multi-framework evaluation and novel AI-based content analysis demonstrating unprecedented convergent validity. This research contributes both a theoretically grounded framework for human-AI research collaboration and a transformative methodological innovation demonstrating how AI tools can systematically augment traditional expert-driven theory validation. HAIST provides the first comprehensive theoretical foundation designed explicitly for human-AI partnerships in scholarly research with applicability across disciplines, while the AI-assisted validation methodology offers a scalable, reliable model for theory development. Future research directions include empirical testing of HAIST principles in live research settings and broader application of the AI-assisted validation methodology to accelerate theory development across educational research and related disciplines.
(This article belongs to the Special Issue Generative AI in Higher Education: Applications, Implications, and Future Directions)
Open Access Article
Marketing a Banned Remedy: A Topic Model Analysis of Health Misinformation in Thai E-Commerce
by Kanitsorn Suriyapaiboonwattana, Yuttana Jaroenruen, Saiphit Satjawisate, Kate Hone, Panupong Puttarak, Nattapong Kaewboonma, Puriwat Lertkrai and Siwanath Nantapichai
Informatics 2025, 12(3), 84; https://doi.org/10.3390/informatics12030084 - 18 Aug 2025
Abstract
Unregulated herbal products marketed via digital platforms present escalating risks to consumer safety and regulatory effectiveness worldwide. This study positions the case of Jindamanee herbal powder—a banned substance under Thai law—as a lens through which to examine broader challenges in digital health governance. Drawing on a dataset of 1546 product listings across major platforms (Facebook, TikTok, Shopee, and Lazada), we applied Latent Dirichlet Allocation (LDA) to identify prevailing promotional themes and compliance gaps. Despite explicit platform policies, 87.6% of listings appeared on Facebook. Medical claims, particularly for pain relief, featured in 77.6% of posts, while only 18.4% included any risk disclosure. These findings suggest a systematic exploitation of regulatory blind spots and consumer health anxieties, facilitated by templated cross-platform messaging. Anchored in Information Manipulation Theory and the Health Belief Model, the analysis offers theoretical insight into how misinformation is structured and sustained within digital commerce ecosystems. The Thai case highlights urgent implications for platform accountability, policy harmonization, and the design of algorithmic surveillance systems in global health product regulation.
(This article belongs to the Section Health Informatics)
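A minimal sketch of LDA topic extraction of the kind used above, fitted with scikit-learn to a few hypothetical English stand-ins for listing text; the study's Thai-language corpus, preprocessing, and topic count are not reproduced.

```python
# LDA over a toy document-term matrix, printing top words per topic.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

listings = [
    "instant pain relief joint powder traditional herbal remedy",
    "herbal powder relieves knee pain back pain fast shipping",
    "boost energy natural ingredients no side effects trusted seller",
    "natural energy booster herbal formula free delivery today",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(listings)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```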
Open Access Article
Predictive Language Processing in Humans and Large Language Models: A Comparative Study of Contextual Dependencies
by Yifan Zhang and Kuzma Strelnikov
Informatics 2025, 12(3), 83; https://doi.org/10.3390/informatics12030083 - 15 Aug 2025
Abstract
Human language comprehension relies on predictive processing; however, the computational mechanisms underlying this phenomenon remain unclear. This study investigates these mechanisms using large language models (LLMs), specifically GPT-3.5-turbo and GPT-4. We conducted a comparison of LLM and human performance on a phrase-completion task under varying levels of contextual cues (high, medium, and low) as defined using human performance, thereby enabling direct AI–human comparisons. Our findings indicate that LLMs significantly outperform humans, particularly in medium- and low-context conditions. While success in medium-context scenarios reflects the efficient utilization of contextual information, performance in low-context situations—where LLMs achieved approximately 25% accuracy compared to just 1% for humans—suggests that the models harness deep linguistic structures beyond mere surface context. This discovery implies that LLMs may elucidate previously unknown aspects of language architecture. The ability of LLMs to exploit deep structural regularities and statistical patterns in medium- and low-predictability contexts offers a novel perspective on the computational architecture of the human language system.
(This article belongs to the Section Human-Computer Interaction)
Open Access Article
Global Embeddings, Local Signals: Zero-Shot Sentiment Analysis of Transport Complaints
by Aliya Nugumanova, Daniyar Rakhimzhanov and Aiganym Mansurova
Informatics 2025, 12(3), 82; https://doi.org/10.3390/informatics12030082 - 14 Aug 2025
Abstract
Public transport agencies must triage thousands of multilingual complaints every day, yet the cost of training and serving fine-grained sentiment analysis models limits real-time deployment. The proposed “one encoder, any facet” framework therefore offers a reproducible, resource-efficient alternative to heavy fine-tuning for domain-specific sentiment analysis or opinion mining tasks on digital service data. To the best of our knowledge, we are the first to test this paradigm on operational multilingual complaints, where public transport agencies must prioritize thousands of Russian- and Kazakh-language messages each day. A human-labelled corpus of 2400 complaints is embedded with five open-source universal models. Obtained embeddings are matched to semantic “anchor” queries that describe three distinct facets: service aspect (eight classes), implicit frustration, and explicit customer request. In the strict zero-shot setting, the best encoder reaches 77% accuracy for aspect detection, 74% for frustration, and 80% for request; taken together, these signals reproduce human four-level priority in 60% of cases. Attaching a single-layer logistic probe on top of the frozen embeddings boosts performance to 89% for aspect, 83–87% for the binary facets, and 72% for end-to-end triage. Compared with recent fine-tuned sentiment analysis systems, our pipeline cuts memory demands by two orders of magnitude and eliminates task-specific training yet narrows the accuracy gap to under five percentage points. These findings indicate that a single frozen encoder, guided by handcrafted anchors and an ultra-light head, can deliver near-human triage quality across multiple pragmatic dimensions, opening the door to low-cost, language-agnostic monitoring of digital-service feedback.
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)
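A minimal sketch of the "one encoder, any facet" idea: a frozen multilingual sentence encoder scores complaints against handcrafted anchor queries for zero-shot facet detection, and a single-layer logistic probe on the same embeddings provides the supervised upgrade described above. The model name, anchors, example texts, and labels are illustrative assumptions, not the authors' configuration.

```python
# Zero-shot anchors plus a light logistic probe over frozen embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # any multilingual encoder

complaints = [
    "The bus on route 12 was 40 minutes late again this morning.",
    "The new trams are very clean and the driver was polite.",
]
anchors = {
    "delay": "The passenger complains about late or irregular service.",
    "staff": "The passenger comments on driver or staff behaviour.",
    "vehicle": "The passenger comments on vehicle condition or cleanliness.",
}

emb = encoder.encode(complaints, normalize_embeddings=True)
anchor_emb = encoder.encode(list(anchors.values()), normalize_embeddings=True)

labels = list(anchors)
zero_shot = [labels[int(np.argmax(e @ anchor_emb.T))] for e in emb]   # cosine match to anchors
print(zero_shot)

probe = LogisticRegression(max_iter=1000).fit(emb, ["delay", "vehicle"])  # toy supervised probe
print(probe.predict(emb))
```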
Open Access Article
A Flexible Profile-Based Recommender System for Discovering Cultural Activities in an Emerging Tourist Destination
by Isabel Arregocés-Julio, Andrés Solano-Barliza, Aida Valls, Antonio Moreno, Marysol Castillo-Palacio, Melisa Acosta-Coll and José Escorcia-Gutierrez
Informatics 2025, 12(3), 81; https://doi.org/10.3390/informatics12030081 - 14 Aug 2025
Abstract
Recommendation systems applied to tourism are widely recognized for improving the visitor’s experience at tourist destinations, thanks to their ability to personalize the trip. This paper presents a hybrid approach that combines Machine Learning techniques with the Ordered Weighted Averaging (OWA) aggregation operator to achieve greater accuracy in user segmentation and generate personalized recommendations. The data were collected through a questionnaire administered to tourists at different points of interest in the Special, Tourist and Cultural District of Riohacha. In the first stage, the K-means algorithm defines the segmentation of tourists based on their socio-demographic data and travel preferences. The second stage uses the OWA operator with a disjunctive policy to assign the most relevant cluster given the input data. This hybrid approach provides a recommendation mechanism for tourist destinations and their cultural heritage.
(This article belongs to the Topic The Applications of Artificial Intelligence in Tourism)
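A minimal sketch of the two-stage idea described above: K-means segments synthetic tourist profiles, and an OWA operator with a disjunctive (max-leaning) weight vector scores how strongly a new visitor matches each cluster centre. The features, weights, and similarity measure are illustrative assumptions, not the deployed recommender.

```python
# K-means segmentation followed by OWA-based cluster assignment.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(3)
profiles = rng.random((200, 4))            # e.g., age group, budget, culture interest, beach interest
X = MinMaxScaler().fit_transform(profiles)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to values sorted in descending order."""
    return float(np.sort(values)[::-1] @ weights)

owa_weights = np.array([0.6, 0.25, 0.1, 0.05])          # disjunctive: emphasis on best matches

new_visitor = np.array([0.8, 0.3, 0.9, 0.2])
similarities = 1.0 - np.abs(kmeans.cluster_centers_ - new_visitor)   # per-feature closeness
scores = [owa(sim, owa_weights) for sim in similarities]
print(int(np.argmax(scores)), scores)                    # most relevant cluster and its score
```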

Journal Menu
- Informatics Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in Applied Sciences, Electronics, Informatics, Information, Software
Software Engineering and Applications
Topic Editors: Sanjay Misra, Robertas Damaševičius, Bharti Suri
Deadline: 31 October 2025

Topic in Electronics, Healthcare, Informatics, MAKE, Sensors, Systems, IJGI
Theories and Applications of Human-Computer Interaction
Topic Editors: Da Tao, Tingru Zhang, Hailiang Wang
Deadline: 31 December 2025

Topic in Applied Sciences, Electronics, Informatics, JCP, Future Internet, Mathematics, Sensors, Remote Sensing
Recent Advances in Artificial Intelligence for Security and Security for Artificial Intelligence
Topic Editors: Tao Zhang, Xiangyun Tang, Jiacheng Wang, Chuan Zhang, Jiqiang Liu
Deadline: 28 February 2026

Topic in AI, Algorithms, BDCC, Computers, Data, Future Internet, Informatics, Information, MAKE, Publications, Smart Cities
Learning to Live with Gen-AI
Topic Editors: Antony Bryant, Paolo Bellavista, Kenji Suzuki, Horacio Saggion, Roberto Montemanni, Andreas Holzinger, Min Chen
Deadline: 31 August 2026

Special Issues
Special Issue in Informatics
Practical Applications of Sentiment Analysis
Guest Editors: Patricia Anthony, Jing Zhou
Deadline: 30 September 2025

Special Issue in Informatics
Generative AI in Higher Education: Applications, Implications, and Future Directions
Guest Editors: Amir Ghapanchi, Reza Ghanbarzadeh, Purarjomandlangrudi Afrooz
Deadline: 31 October 2025

Special Issue in Informatics
Real-World Applications and Prototyping of Information Systems for Extended Reality (VR, AR, and MR)
Guest Editors: Kitti Puritat, Kannikar Intawong, Wirapong Chansanam
Deadline: 31 March 2026

Special Issue in Informatics
Health Data Management in the Age of AI
Guest Editors: Brenda Scholtz, Hanlie Smuts
Deadline: 30 May 2026