Information, Volume 16, Issue 9 (September 2025) – 109 articles

Cover Story: This work presents a real-world predictive maintenance solution in a food-processing facility, combining IoT sensor data on motor temperature and vibration with ERP production records. Ensemble machine-learning models such as XGBoost forecast critical threshold breaches up to ten minutes ahead, enabling proactive interventions and preventing costly failures. Through careful feature engineering, rigorous preprocessing, and sliding-window validation, the approach delivers high accuracy for temperature forecasting and promising results for vibration prediction. The deployed dashboard and API integrate into daily operations, reducing downtime, lowering maintenance costs, and improving equipment reliability, providing a scalable blueprint for Industry 4.0 manufacturers.
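A minimal sketch of the forecasting setup the cover story describes, assuming synthetic one-sample-per-minute temperature data: windowed statistics feed an XGBoost classifier that predicts a threshold breach ten minutes ahead, evaluated with a chronological train/test split in the spirit of sliding-window validation. The features, threshold, and data are illustrative stand-ins, not the authors' pipeline.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic motor temperature, one sample per minute: slow oscillation plus noise.
temp = 60 + 5 * np.sin(np.linspace(0, 60, 5000)) + rng.normal(0, 0.5, 5000)
window, horizon, threshold = 30, 10, 64.0   # 30-min history, 10-min-ahead target

X, y = [], []
for t in range(window, len(temp) - horizon):
    w = temp[t - window:t]
    X.append([w.mean(), w.std(), w[-1], w[-1] - w[0]])  # simple window statistics
    y.append(int(temp[t + horizon] > threshold))        # label: breach 10 min later
X, y = np.array(X), np.array(y)

split = int(0.8 * len(X))                   # chronological split, no shuffling
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X[:split], y[:split])
print("F1:", f1_score(y[split:], model.predict(X[split:]), zero_division=0))
```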
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
23 pages, 881 KB  
Article
From Digital Services to Sustainable Ones: Novel Industry 5.0 Environments Enhanced by Observability
by Andrea Sabbioni, Antonio Corradi, Stefano Monti and Carlos Roberto De Rolt
Information 2025, 16(9), 821; https://doi.org/10.3390/info16090821 - 22 Sep 2025
Viewed by 217
Abstract
The rapid evolution of Information Technologies is deeply transforming manufacturing, demanding innovative and enhanced production paradigms. Industry 5.0 further advances that transformation by fostering a more resilient, sustainable, and human-centric industrial ecosystem, built on the seamless integration of all value chains. This shift requires the timely collection and intelligent analysis of vast, heterogeneous data sources, including IoT devices, digital services, crowdsourcing platforms, and, last but not least, human input, which is essential to drive innovation. With sustainability as a key priority, pervasive monitoring not only enables optimization to reduce greenhouse gas emissions but also plays a strategic role across the manufacturing economy. This work introduces the Observability platform for Industry 5.0 (ObsI5), a novel observability framework specifically designed to support Industry 5.0 environments. ObsI5 extends cloud-native observability tools, originally developed for IT service monitoring, into manufacturing infrastructures, enabling the collection, analysis, and control of data across both IT and OT domains. Our solution integrates human contributions as active data sources and leverages standard observability practices to extract actionable insights from all available resources. We validate ObsI5 through a dedicated test bed, demonstrating its ability to meet the strict requirements of Industry 5.0 in terms of timeliness, security, and modularity.
(This article belongs to the Section Information Processes)
26 pages, 2614 KB  
Article
A Comparative Analysis of Parkinson’s Disease Diagnosis Approaches Using Drawing-Based Datasets: Utilizing Large Language Models, Machine Learning, and Fuzzy Ontologies
by Adam Koletis, Pavlos Bitilis, Georgios Bouchouras and Konstantinos Kotis
Information 2025, 16(9), 820; https://doi.org/10.3390/info16090820 - 22 Sep 2025
Viewed by 237
Abstract
Parkinson’s disease (PD) is a progressive neurodegenerative disorder that impairs motor function, often causing tremors and difficulty with movement control. A promising diagnostic method involves analyzing hand-drawn patterns, such as spirals and waves, which show characteristic distortions in individuals with PD. This study compares three computational approaches for classifying individuals as Parkinsonian or healthy based on drawing-derived features: (1) Large Language Models (LLMs), (2) traditional machine learning (ML) algorithms, and (3) a fuzzy ontology-based method using fuzzy sets and Fuzzy-OWL2. Each method offers unique strengths: LLMs leverage pre-trained knowledge for subtle pattern detection, ML algorithms excel in feature extraction and predictive accuracy, and fuzzy ontologies provide interpretable, logic-based reasoning under uncertainty. Using three structured handwriting datasets of varying complexity, we assessed performance in terms of accuracy, interpretability, and generalization. Among the approaches, the fuzzy ontology-based method showed the strongest performance on complex tasks, achieving a high F1-score, while ML models demonstrated strong generalization and LLMs offered a reliable, interpretable baseline. These findings suggest that combining symbolic and statistical AI may improve drawing-based PD diagnosis.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
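As a rough illustration of the traditional-ML strand compared above, the sketch below trains a random-forest classifier on drawing-derived features; the three features and the synthetic data are hypothetical stand-ins for the paper's spiral and wave datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 200
# Hypothetical spiral-drawing features: tremor amplitude, drawing speed, pressure variance.
healthy = rng.normal([0.2, 1.0, 0.1], 0.05, (n // 2, 3))
parkinson = rng.normal([0.6, 0.6, 0.3], 0.05, (n // 2, 3))
X = np.vstack([healthy, parkinson])
y = np.array([0] * (n // 2) + [1] * (n // 2))   # 0 = healthy, 1 = Parkinsonian

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```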
24 pages, 1680 KB  
Review
Leveraging Artificial Intelligence for Sustainable Tutoring and Dropout Prevention in Higher Education: A Scoping Review on Digital Transformation
by Washington Raúl Fierro Saltos, Fabian Eduardo Fierro Saltos, Veloz Segura Elizabeth Alexandra and Edgar Fabián Rivera Guzmán
Information 2025, 16(9), 819; https://doi.org/10.3390/info16090819 - 22 Sep 2025
Viewed by 261
Abstract
The increasing integration of artificial intelligence into educational processes offers new opportunities to address critical issues in higher education, such as student dropout, academic underperformance, and the need for personalized tutoring. This scoping review aims to map the scientific literature on the use of AI techniques to predict academic performance, risk of dropout, and the need for academic advising, with an emphasis on e-learning or technology-mediated environments. The study follows the Joanna Briggs Institute PCC strategy, and the review is reported according to the PRISMA-ScR checklist. A total of 63 peer-reviewed empirical studies (2019–2025) were included after systematic filtering from the Scopus and Web of Science databases. The findings reveal that supervised machine learning models, such as decision trees, random forests, and neural networks, dominate the field, with an emerging interest in deep learning, transfer learning, and explainable AI. Academic, behavioral, emotional, and contextual variables are integrated into increasingly complex and interpretable models. Most studies focus on undergraduate students in digital and hybrid learning contexts, particularly in regions with high dropout rates. The review highlights the potential of AI to enable early intervention and improve the effectiveness of tutoring systems, while noting limitations such as lack of model generalization and ethical concerns. Recommendations are provided for future research and institutional integration.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
23 pages, 619 KB  
Article
TisLLM: Temporal Integration-Enhanced Fine-Tuning of Large Language Models for Sequential Recommendation
by Xiaosong Zhu, Wenzheng Li, Bingqiang Zhang and Liqing Geng
Information 2025, 16(9), 818; https://doi.org/10.3390/info16090818 - 21 Sep 2025
Viewed by 192
Abstract
In recent years, the remarkable versatility of large language models (LLMs) has spurred considerable interest in leveraging their capabilities for recommendation systems. Critically, we argue that the intrinsic aptitude of LLMs for modeling sequential patterns and temporal dynamics renders them uniquely suited for sequential recommendation tasks—a foundational premise explored in depth later in this work. This potential, however, is tempered by significant hurdles: a discernible gap exists between the general competencies of conventional LLMs and the specialized needs of recommendation tasks, and their capacity to uncover complex, latent data interrelationships often proves inadequate, potentially undermining recommendation efficacy. To bridge this gap, our approach centers on adapting LLMs through fine-tuning on dedicated recommendation datasets, enhancing task-specific alignment. Further, we present the Temporal Integration-Enhanced Fine-Tuning of Large Language Models for Sequential Recommendation (TisLLM) framework. TisLLM specifically targets the deeper excavation of implicit associations within recommendation data streams. Its core mechanism involves partitioning sequential user interaction data using temporally defined sliding windows. These chronologically segmented slices are then aggregated to form enriched contextual representations, which subsequently drive the LLM fine-tuning process. This methodology explicitly strengthens the model’s compatibility with the inherently sequential nature of recommendation scenarios. Rigorous evaluation on benchmark datasets provides robust empirical validation, confirming the effectiveness of the TisLLM framework.
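A minimal sketch of the windowing step the abstract describes: chronological user interactions are partitioned with overlapping temporal sliding windows, and each window is serialized into a prompt/completion pair for fine-tuning. The window size, stride, and prompt format are assumptions, not the paper's exact construction.

```python
from datetime import datetime

interactions = [  # (timestamp, item) — toy interaction history
    ("2025-01-01", "item_a"), ("2025-01-03", "item_b"),
    ("2025-01-07", "item_c"), ("2025-01-09", "item_d"), ("2025-01-12", "item_e"),
]

def sliding_windows(seq, size=3, stride=1):
    """Yield overlapping chronological slices of the interaction sequence."""
    for start in range(0, len(seq) - size + 1, stride):
        yield seq[start:start + size]

examples = []
ordered = sorted(interactions, key=lambda r: datetime.fromisoformat(r[0]))
for window in sliding_windows(ordered):
    history, (_, target) = window[:-1], window[-1]
    prompt = "User interacted with: " + ", ".join(item for _, item in history)
    examples.append({"prompt": prompt, "completion": target})  # last item = next-item label

for ex in examples:
    print(ex)
```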
17 pages, 1622 KB  
Article
On Measuring Large Language Models Performance with Inferential Statistics
by Jesús M. Fraile-Hernández and Anselmo Peñas
Information 2025, 16(9), 817; https://doi.org/10.3390/info16090817 - 20 Sep 2025
Viewed by 184
Abstract
Measuring the reliability of performance evaluations is particularly important when we evaluate non-deterministic models. This is the case of using large language models (LLMs) in classification tasks, where different runs generate different outputs. This fact raises the question about how reliable the evaluation of a solution is. Previous work relies on executing several runs and then taking some kind of average together with confidence intervals. However, confidence intervals themselves may not be reliable if the number of executions is not large enough. Therefore, more effective and robust methods are needed for their estimation. In this work, we propose a methodology that estimates model performance while capturing the intra-run variability by leveraging instance-level predictions across multiple runs, enabling the computation of more reliable confidence intervals when the gold standard is available. Our method also offers greater computational efficiency by reducing the number of full model executions required to estimate performance variability. Compared against existing state-of-the-art evaluation methods, our approach achieves full empirical coverage (100%) of plausible performance outcomes using as few as three runs, whereas traditional methods reach at most 63% coverage, even with eight runs.
(This article belongs to the Section Artificial Intelligence)
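One way to realize the idea of leveraging instance-level predictions across runs is a bootstrap over a per-instance correctness matrix, as sketched below; the resampling scheme is illustrative and not necessarily the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(42)
# correct[r, i] = 1 if run r classified instance i correctly (toy: 3 runs, 500 instances)
correct = (rng.random((3, 500)) < 0.82).astype(float)

def bootstrap_ci(correct, n_boot=10000, alpha=0.05):
    runs, n = correct.shape
    scores = []
    for _ in range(n_boot):
        inst = rng.integers(0, n, n)     # resample instances with replacement
        run = rng.integers(0, runs, n)   # and, per instance, which run produced it
        scores.append(correct[run, inst].mean())
    return tuple(np.quantile(scores, [alpha / 2, 1 - alpha / 2]))

print("95% CI for accuracy:", bootstrap_ci(correct))
```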
31 pages, 3855 KB  
Article
Discovering Operational Patterns Using Image-Based Convolutional Clustering and Composite Evaluation: A Case Study in Foundry Melting Processes
by Zhipeng Ma, Bo Nørregaard Jørgensen and Zheng Grace Ma
Information 2025, 16(9), 816; https://doi.org/10.3390/info16090816 - 20 Sep 2025
Viewed by 178
Abstract
Industrial process monitoring increasingly relies on sensor-generated time-series data, yet the lack of labels, high variability, and operational noise make it difficult to extract meaningful patterns using conventional methods. Existing clustering techniques either rely on fixed distance metrics or deep models designed for static data, limiting their ability to handle dynamic, unstructured industrial sequences. Addressing this gap, this paper proposes a novel framework for unsupervised discovery of operational modes in univariate time-series data using image-based convolutional clustering with composite internal evaluation. The proposed framework improves upon existing approaches in three ways: (1) raw time-series sequences are transformed into grayscale matrix representations via overlapping sliding windows, allowing effective feature extraction using a deep convolutional autoencoder; (2) the framework integrates both soft and hard clustering outputs and refines the selection through a two-stage strategy; and (3) clustering performance is objectively evaluated by a newly developed composite score, Seva, which combines normalized Silhouette, Calinski–Harabasz, and Davies–Bouldin indices. Applied to over 3900 furnace melting operations from a Nordic foundry, the method identifies seven explainable operational patterns, revealing significant differences in energy consumption, thermal dynamics, and production duration. Compared to classical and deep clustering baselines, the proposed approach achieves superior overall performance, greater robustness, and domain-aligned explainability. The framework addresses key challenges in unsupervised time-series analysis, such as sequence irregularity, overlapping modes, and metric inconsistency, and provides a generalizable solution for data-driven diagnostics and energy optimization in industrial systems.
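A rough sketch of a composite internal-validity score in the spirit of Seva: each of the three indices is min-max normalized across candidate clusterings and averaged, with Davies–Bouldin inverted since lower is better. The normalization and equal weighting are assumptions; the paper's exact formula is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

X, _ = make_blobs(n_samples=600, centers=5, random_state=0)
candidates = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
              for k in range(2, 9)}

sil = {k: silhouette_score(X, l) for k, l in candidates.items()}
ch = {k: calinski_harabasz_score(X, l) for k, l in candidates.items()}
db = {k: davies_bouldin_score(X, l) for k, l in candidates.items()}

def minmax(d):  # normalize each index to [0, 1] across the candidate clusterings
    v = np.array(list(d.values()))
    return {k: (x - v.min()) / (v.max() - v.min()) for k, x in d.items()}

sil_n, ch_n, db_n = minmax(sil), minmax(ch), minmax(db)
score = {k: (sil_n[k] + ch_n[k] + (1 - db_n[k])) / 3 for k in candidates}  # lower DB is better
print("best k by composite score:", max(score, key=score.get))
```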
28 pages, 12579 KB  
Article
A Novel Local Dimming Approach by Controlling LCD Backlight Modules via Deep Learning
by Tsorng-Lin Chia, Yi-Yang Syu and Ping-Sheng Huang
Information 2025, 16(9), 815; https://doi.org/10.3390/info16090815 - 19 Sep 2025
Viewed by 298
Abstract
The display contrast and power-consumption efficiency of LCDs (Liquid Crystal Displays) continue to attract attention from both industry and academia. Local dimming approaches for direct-type backlight modules (BLMs, also referred to as backlight units, BLUs) are regarded as a potential solution. The purpose of this study is to explore how to optimize the local dimming method of LCDs to achieve higher contrast and lower power consumption through deep learning techniques. In this paper, we propose a local dimming approach with dual modulation for LCD-LED displays based on VGG19 and UNet models. Experimental results have shown that this method not only reconstructs the input image into an HDR (High Dynamic Range) image but also automatically generates a control image for the backlight module and LCD panel. In addition, the proposed method can effectively improve the contrast and reduce the power consumption of the LCD in the absence of a public training dataset. Our method achieves the best performance in MSE and HDR-VDP-2 among eight different combinations of mask and pre-training. Using deep learning techniques, this study has successfully optimized the local dimming approach of LCDs and demonstrated its benefits in improving contrast and reducing power consumption.
(This article belongs to the Section Artificial Intelligence)
17 pages, 241 KB  
Article
Theoretical Foundations for Governing AI-Based Learning Outcome Assessment in High-Risk Educational Contexts
by Flavio Manganello, Alberto Nico and Giannangelo Boccuzzi
Information 2025, 16(9), 814; https://doi.org/10.3390/info16090814 - 19 Sep 2025
Viewed by 312
Abstract
The governance of artificial intelligence (AI) in education requires theoretical grounding that extends beyond system compliance toward outcome-focused accountability. The EU AI Act classifies AI-based learning outcome assessment (AIB-LOA) as a high-risk application (Annex III, point 3b), underscoring the importance of algorithmic decision-making in student evaluation. Current regulatory frameworks such as GDPR and ALTAI focus primarily on ex-ante and system-focused approaches. ALTAI applications in education concentrate on compliance and vulnerability analysis while often failing to integrate governance principles with established educational evaluation practices. While explainable AI research demonstrates methodological sophistication (e.g., LIME, SHAP), it often fails to deliver pedagogically meaningful transparency. This study develops the XAI-ED Consequential Assessment Framework (XAI-ED CAF) as a sector-specific, outcome-focused governance model for AIB-LOA. The framework reinterprets ALTAI’s seven requirements (human agency, robustness, privacy, transparency, fairness, societal well-being, and accountability) through three evaluation theories: Messick’s consequential validity, Kirkpatrick’s four-level model, and Stufflebeam’s CIPP framework. Through this theoretical integration, the study identifies indicators and potential evidence types for institutional self-assessment. The analysis indicates that trustworthy AI in education extends beyond technical transparency or legal compliance. Governance must address student autonomy, pedagogical validity, interpretability, fairness, institutional culture, and accountability. The XAI-ED CAF reconfigures ALTAI as a pedagogically grounded accountability model, establishing structured evaluative criteria that align with both regulatory and educational standards. The framework contributes to AI governance in education by connecting regulatory obligations with pedagogical evaluation theory. It supports policymakers, institutions, and researchers in developing outcome-focused self-assessment practices. Future research should test and refine the framework through Delphi studies and institutional applications across various contexts.
(This article belongs to the Special Issue Advances in Explainable Artificial Intelligence, 2nd Edition)
35 pages, 8370 KB  
Article
A Modified PESTEL- and FCM-Driven Decision Support System to Mitigate the Extinction of Marine Species in the Mediterranean Sea
by Konstantinos Kokkinos, Theodoros Pitropakis, Teodora Karagyaurova, Ia Mosashvili and Dimitris Klaoudatos
Information 2025, 16(9), 813; https://doi.org/10.3390/info16090813 - 18 Sep 2025
Viewed by 238
Abstract
The Mediterranean Sea, a biodiversity hotspot with over 8500 marine species, faces escalating threats from climate change, pollution, overfishing, and habitat degradation. This study introduces a novel Decision Support System (DSS) integrating a modified PESTEL framework (BESTEL: Biological, Economic, Social, Technological, Environmental, Legal) with Fuzzy Cognitive Mapping (FCM) to assess and mitigate risks to marine species. Leveraging expert knowledge from 34 specialists, we identified 30 key factors influencing extinction risk, analyzed through Principal Component Analysis (PCA) to reduce dimensionality. The resulting FCM model simulated four policy scenarios, evaluating the impacts of climate change and dam proliferation on biodiversity. Findings reveal that mitigating both drivers significantly reduces extinction risk (−0.14), while unchecked climate change offsets gains from dam removal. The DSS highlights the dominance of climate stressors, with pollution and temperature shifts (−0.45, −0.42) as critical variables. Biological traits like reproductive frequency and longevity respond strongly to environmental improvements. This integrative approach bridges qualitative expertise and quantitative modeling, offering actionable insights for conservation planning. The study underscores the need for holistic strategies combining climate mitigation and habitat restoration to safeguard Mediterranean marine ecosystems. The FCM-based DSS provides a scalable tool for policymakers to prioritize interventions and assess trade-offs in complex socio-ecological systems.
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)
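The core FCM mechanics behind such a DSS can be sketched in a few lines: concepts are nodes, signed weights encode causal influence, and concept activations are iterated under a sigmoid squashing function until they settle, so policy scenarios become different initial states. The four concepts and weights below are illustrative, not the paper's 30-factor model.

```python
import numpy as np

concepts = ["climate_change", "pollution", "dam_proliferation", "extinction_risk"]
W = np.array([  # W[i, j]: causal influence of concept i on concept j
    [0.0, 0.4, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.45],
    [0.0, 0.3, 0.0, 0.35],
    [0.0, 0.0, 0.0, 0.0],
])

def simulate(state, W, steps=50, lam=1.0):
    """Iterate the FCM update rule with a sigmoid squashing function."""
    for _ in range(steps):
        state = 1 / (1 + np.exp(-lam * (state + state @ W)))
    return state

baseline = simulate(np.array([0.8, 0.6, 0.7, 0.5]), W)
mitigated = simulate(np.array([0.2, 0.6, 0.1, 0.5]), W)  # scenario: curb climate + dams
print("extinction-risk change:", round(float(mitigated[-1] - baseline[-1]), 3))
```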
25 pages, 3685 KB  
Article
Story Generation from Visual Inputs: Techniques, Related Tasks, and Challenges
by Daniel A. P. Oliveira, Eugénio Ribeiro and David Martins de Matos
Information 2025, 16(9), 812; https://doi.org/10.3390/info16090812 - 18 Sep 2025
Viewed by 388
Abstract
Creating engaging narratives from visual data is crucial for automated digital media consumption, assistive technologies, and interactive entertainment. This survey covers methodologies used in the generation of these narratives, focusing on their principles, strengths, and limitations. The survey also covers tasks related to automatic story generation, such as image and video captioning, and Visual Question Answering. These tasks share common challenges with Visual Story Generation (VSG) and have served as inspiration for the techniques used in the field. We analyze the main datasets and evaluation metrics, providing a critical perspective on their limitations.
20 pages, 2745 KB  
Article
Improving Detectability of Advanced Persistent Threats (APT) by Use of APT Group Digital Fingerprints
by Laszlo Erdodi, Doney Abraham and Siv Hilde Houmb
Information 2025, 16(9), 811; https://doi.org/10.3390/info16090811 - 18 Sep 2025
Viewed by 245
Abstract
Over the last 15 years, cyberattacks have moved from attacking IT systems to targeted attacks on Operational Technology (OT) systems, also known as Cyber–Physical Systems (CPS). The first targeted OT cyberattack was Stuxnet in 2010, at which time the term Advanced Persistent Threat (APT) appeared. An APT often refers to a sophisticated two-stage cyberattack requiring an extensive reconnaissance period before executing the actual attack. Following Stuxnet, a sizable number of APTs have been discovered and documented. APTs are difficult to detect due to the many steps involved, the large number of attacker capabilities that are in use, and the timeline. Such attacks are carried out over an extended time period, sometimes spanning several years, which means that they cannot be recognized using signatures, anomalies, or similar patterns. APTs require detection capabilities beyond what current detection paradigms are capable of, such as behavior-based, signature-based, protocol-based, or other types of Intrusion Detection and Prevention Systems (IDS/IPS). This paper describes steps towards improving the detection of APTs by means of APT group digital fingerprints. An APT group fingerprint is a digital representation of the attacker’s capabilities, their relations and dependencies, and their technical implementation for an APT group. The fingerprint is represented as a directed graph, which models the relationships between the relevant capabilities. This paper describes part of the analysis behind establishing the APT group digital fingerprint for the Russian Cyberspace Operations Group, Sandworm.
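A toy version of such a fingerprint, assuming networkx and invented capability names (not the paper's Sandworm model): the directed graph encodes capability dependencies, and an observed incident is scored by how many fingerprint edges its capabilities cover.

```python
import networkx as nx

fingerprint = nx.DiGraph()
fingerprint.add_edges_from([          # edges: capability -> capability it enables
    ("spearphishing", "credential_theft"),
    ("credential_theft", "lateral_movement"),
    ("lateral_movement", "ot_network_access"),
    ("ot_network_access", "ics_payload_deployment"),
])

# Score an incident timeline by the fraction of fingerprint edges whose
# endpoints were both observed.
observed = {"spearphishing", "credential_theft", "lateral_movement"}
hits = sum(1 for u, v in fingerprint.edges if u in observed and v in observed)
print("fingerprint match:", hits / fingerprint.number_of_edges())
```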
14 pages, 1225 KB  
Article
Impacts of Artificial Intelligence Development on Humanity and Social Values
by Kelvin C. M. Chong, Yen-Kheng Tan and Xin Zhou
Information 2025, 16(9), 810; https://doi.org/10.3390/info16090810 - 18 Sep 2025
Viewed by 481
Abstract
Today, the impact of information technologies (IT) on humanity in this artificial intelligence (AI) era is vast, transformative, and remarkable, especially on human beliefs, practices, and truth discovery. Modern IT advancements like AI, particularly generative models, offer unprecedented opportunities to influence human thoughts and to challenge our entrenched worldviews. This paper seeks to study the evolving relationship between humans and non-human agents, such as AI systems, and to examine how generative technologies are reshaping the dynamics of knowledge, authority, and societal interaction, particularly in contexts where technology intersects with deeply held social values. In the study, the broader implications for societal practices and ethical questions are examined in depth, with moral value as the focus. The paper also reviews the various generative models developed to let AI reason and think logically, evaluating their potential impacts on humanity and social values. Two main research contributions, namely (1) Virtue Ethics-Based Character Modeling for an Artificial Moral Advisor (AMA) and (2) Direct Preference Optimization (DPO), are proposed to align contemporary large language models (LLMs) with moral values. The construction of the moral dataset focused on virtue ethics for training the proposed LLM is presented and discussed. The implementation of the AI moral character representation will be demonstrated in future research work.
(This article belongs to the Special Issue Information Technology in Society)
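For the DPO contribution mentioned above, a minimal sketch of the standard DPO objective is given below; the scalar log-probabilities are toy values (in practice they are summed token log-probs from the policy and a frozen reference model), and this is the generic loss, not the paper's full training setup.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Standard DPO loss: -log sigmoid(beta * (policy margin - reference margin))."""
    policy_margin = policy_chosen_logp - policy_rejected_logp
    ref_margin = ref_chosen_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy batch of two preference pairs (chosen = more virtuous completion).
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -9.0]),
                torch.tensor([-13.0, -9.2]), torch.tensor([-13.5, -9.1]))
print(loss.item())
```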
32 pages, 3609 KB  
Article
BPMN-Based Design of Multi-Agent Systems: Personalized Language Learning Workflow Automation with RAG-Enhanced Knowledge Access
by Hedi Tebourbi, Sana Nouzri, Yazan Mualla, Meryem El Fatimi, Amro Najjar, Abdeljalil Abbas-Turki and Mahjoub Dridi
Information 2025, 16(9), 809; https://doi.org/10.3390/info16090809 - 17 Sep 2025
Viewed by 350
Abstract
The intersection of Artificial Intelligence (AI) and education is revolutionizing learning and teaching in this digital era, with Generative AI and large language models (LLMs) providing even greater possibilities for the future. The digital transformation of language education demands innovative approaches that combine pedagogical rigor with explainable AI (XAI) principles, particularly for low-resource languages. This paper presents a novel methodology that integrates Business Process Model and Notation (BPMN) with Multi-Agent Systems (MAS) to create transparent, workflow-driven language tutors. Our approach uniquely embeds XAI through three mechanisms: (1) BPMN’s visual formalism that makes agent decision-making auditable, (2) Retrieval-Augmented Generation (RAG) with verifiable knowledge provenance from textbooks of the National Institute of Languages of Luxembourg, and (3) human-in-the-loop validation of both content and pedagogical sequencing. To ensure realism in learner interaction, we integrate speech-to-text and text-to-speech technologies, creating an immersive, human-like learning environment. The system simulates intelligent tutoring through agents’ collaboration and dynamic adaptation to learner progress. We demonstrate this framework through a Luxembourgish language learning platform where specialized agents (Conversational, Reading, Listening, QA, and Grammar) operate within BPMN-modeled workflows. The system achieves high response faithfulness (0.82) and relevance (0.85) according to RAGA metrics, while speech integration using Whisper STT and Coqui TTS enables immersive practice. Evaluation with learners showed 85.8% satisfaction with contextual responses and 71.4% engagement rates, confirming the effectiveness of our process-driven approach. This work advances AI-powered language education by showing how formal process modeling can create pedagogically coherent and explainable tutoring systems. The architecture’s modularity supports extension to other low-resource languages while maintaining the transparency critical for educational trust. Future work will expand curriculum coverage and develop teacher-facing dashboards to further improve explainability.
(This article belongs to the Section Information Applications)
15 pages, 27018 KB  
Article
Smartphone-Based Seamless Scene and Object Recognition for Visually Impaired Persons
by Fisilmi Azizah Rahman, Ferina Ayu Pusparani, Wen Liang Yeoh and Osamu Fukuda
Information 2025, 16(9), 808; https://doi.org/10.3390/info16090808 - 17 Sep 2025
Viewed by 345
Abstract
This study introduces a mobile application designed to assist visually impaired persons (VIPs) in navigating complex environments, such as supermarkets. Recent assistive tools often identify objects in isolation without providing contextual awareness. In contrast, our proposed system uses seamless scene and object recognition to help users efficiently locate target items and understand their surroundings. Employing a “human-in-the-loop” approach, users control their smartphone camera direction to explore the space. Experiments conducted in a simulated shopping environment show that the system enhances object-finding efficiency and improves user orientation. This approach not only increases independence, but also promotes inclusivity by enabling VIPs to perform everyday tasks with greater confidence and autonomy.
18 pages, 524 KB  
Article
A Multi-Angle Semantic Feature Fusion Method for Web User Behavior Anomaly Detection
by Li Wang, Mingshan Xia, Yakang Li, Jiahong Xu, Fengyao Hou and Fazhi Qi
Information 2025, 16(9), 807; https://doi.org/10.3390/info16090807 - 17 Sep 2025
Viewed by 252
Abstract
To address the increasing complexity of web user behavior anomaly detection and the issue of missing semantic information caused by relying solely on features like request semantics or request sequences, this study proposes a multi-angle semantic feature fusion approach for user behavior anomaly detection. The research is based on user sessions. Firstly, by analyzing the access sequence behavior within user sessions and utilizing an improved SimHash algorithm, sequence features are extracted to model browsing patterns. Secondly, combining the semantic content contained in user sessions, a multi-attention Transformer model is employed to extract semantic features, representing user visit semantics. Finally, an end-to-end model is constructed to fuse sequence and semantic features, enabling effective detection of user behavior anomalies. Experimental results demonstrate that the proposed model exhibits excellent performance and stability in detection accuracy, with significant effects in real-world anomaly user identification. As the proportion of anomalous sessions increases, precision, recall, and F1-score also improve, all reaching 99%. Even when anomalous sessions are scarce in the dataset, the model still achieves satisfactory detection results.
(This article belongs to the Section Information Security and Privacy)
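As a rough illustration of the sequence-feature step, the sketch below computes a plain SimHash over a session's request sequence, so near-duplicate browsing patterns land at small Hamming distance; the paper's improvements to SimHash are not reproduced here.

```python
import hashlib

def simhash(tokens, bits=64):
    """Weighted-majority fingerprint: similar sequences yield close hashes."""
    v = [0] * bits
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    return bin(a ^ b).count("1")

session_a = ["/login", "/search?q=gpu", "/item/42", "/cart", "/checkout"]
session_b = ["/login", "/search?q=gpu", "/item/43", "/cart", "/checkout"]
print("hamming distance:", hamming(simhash(session_a), simhash(session_b)))
```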
24 pages, 1088 KB  
Article
Multilingual Sentiment Analysis with Data Augmentation: A Cross-Language Evaluation in French, German, and Japanese
by Suboh Alkhushayni and Hyesu Lee
Information 2025, 16(9), 806; https://doi.org/10.3390/info16090806 - 17 Sep 2025
Viewed by 424
Abstract
Machine learning in natural language processing (NLP) analyzes datasets to make future predictions, but developing accurate models requires large, high-quality, and balanced datasets. However, collecting such datasets, especially for low-resource languages, is time-consuming and costly. As a solution, data augmentation can be used to increase the dataset size by generating synthetic samples from existing data. This study examines the effect of translation-based data augmentation on sentiment analysis using small datasets in three diverse languages: French, German, and Japanese. We use two neural machine translation (NMT) services—Google Translate and DeepL—to generate augmented datasets through intermediate language translation. Sentiment analysis models based on Support Vector Machine (SVM) are trained on both original and augmented datasets and evaluated using accuracy, precision, recall, and F1 score. Our results demonstrate that translation augmentation significantly enhances model performance in both French and Japanese. For example, using Google Translate, model accuracy improved from 62.50% to 83.55% in Japanese (+21.05%) and from 87.66% to 90.26% in French (+2.6%). In contrast, the German dataset showed a minor improvement or decline, depending on the translator used. Google-based augmentation generally outperformed DeepL, which yielded smaller or negative gains. To evaluate cross-lingual generalization, models trained on one language were tested on datasets in the other two. Notably, a model trained on augmented German data improved its accuracy on French test data from 81.17% to 85.71% and on Japanese test data from 71.71% to 79.61%. Similarly, a model trained on augmented Japanese data improved accuracy on German test data by up to 3.4%. These findings highlight that translation-based augmentation can enhance sentiment classification and cross-language adaptability, particularly in low-resource and multilingual NLP settings.
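The augmentation loop itself is simple to sketch: translate each sample into a pivot language and back, and keep round-trips that differ from the original as synthetic samples. The `translate` function below is a hypothetical placeholder for an NMT service client (the study used Google Translate and DeepL); it is an identity stub here so the sketch runs.

```python
def translate(text: str, src: str, dst: str) -> str:
    # Placeholder: in a real pipeline this calls an NMT service.
    return text  # identity stand-in so the sketch is runnable offline

def back_translate(samples, pivot="en", lang="fr"):
    """Round-trip each sample through a pivot language; keep new paraphrases."""
    augmented = []
    for text, label in samples:
        round_trip = translate(translate(text, lang, pivot), pivot, lang)
        if round_trip != text:            # keep only genuinely new variants
            augmented.append((round_trip, label))
    return samples + augmented

train = [("Ce film était excellent.", "pos"), ("Service très décevant.", "neg")]
print(len(back_translate(train)), "samples after augmentation")
```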
45 pages, 12590 KB  
Article
An End-to-End Data and Machine Learning Pipeline for Energy Forecasting: A Systematic Approach Integrating MLOps and Domain Expertise
by Xun Zhao, Zheng Grace Ma and Bo Nørregaard Jørgensen
Information 2025, 16(9), 805; https://doi.org/10.3390/info16090805 - 16 Sep 2025
Viewed by 423
Abstract
Energy forecasting is critical for modern power systems, enabling proactive grid control and efficient resource optimization. However, energy forecasting projects require systematic approaches that span project inception to model deployment while ensuring technical excellence, domain alignment, regulatory compliance, and reproducibility. Existing methodologies such as CRISP-DM provide a foundation but lack explicit mechanisms for iterative feedback, decision checkpoints, and continuous energy-domain-expert involvement. This paper proposes a modular end-to-end framework for energy forecasting that integrates formal decision gates in each phase, embeds domain-expert validation, and produces fully traceable artifacts. The framework supports controlled iteration, rollback, and automation within an MLOps-compatible structure. A comparative analysis demonstrates its advantages in functional coverage, workflow logic, and governance over existing approaches. A case study on short-term electricity forecasting for a 2560 m² office building validates the framework, achieving 24-h-ahead predictions with an RNN, reaching an RMSE of 1.04 kWh and an MAE of 0.78 kWh. The results confirm that the framework enhances forecast accuracy, reliability, and regulatory readiness in real-world energy applications.
16 pages, 2128 KB  
Article
Secure Multifaceted-RAG: Hybrid Knowledge Retrieval with Security Filtering
by Grace Byun, Shinsun Lee, Nayoung Choi and Jinho D. Choi
Information 2025, 16(9), 804; https://doi.org/10.3390/info16090804 - 16 Sep 2025
Viewed by 384
Abstract
Existing Retrieval-Augmented Generation (RAG) systems face challenges in enterprise settings due to limited retrieval scope and data security risks. When relevant internal documents are unavailable, the system struggles to generate accurate and complete responses. Additionally, using closed-source Large Language Models (LLMs) raises concerns about exposing proprietary information. To address these issues, we propose the Secure Multifaceted-RAG (SecMulti-RAG) framework, which retrieves not only from internal documents but also from two supplementary sources: pre-generated expert knowledge for anticipated queries and on-demand external LLM-generated knowledge. To mitigate security risks, we adopt a local open-source generator and selectively utilize external LLMs only when prompts are deemed safe by a filtering mechanism. This approach enhances completeness, prevents data leakage, and reduces costs. In our evaluation on a report generation task in the automotive industry, SecMulti-RAG significantly outperforms traditional RAG—achieving 79.3–91.9% win rates across correctness, richness, and helpfulness in LLM-based evaluation and 56.3–70.4% in human evaluation. This highlights SecMulti-RAG as a practical and secure solution for enterprise RAG.
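A minimal sketch of the routing idea: a safety filter gates which prompts may reach an external LLM, while everything else stays with the local open-source generator. The keyword filter and the generator stubs below are illustrative placeholders, not the paper's filtering mechanism or models.

```python
SENSITIVE_MARKERS = {"internal", "confidential", "proprietary", "unreleased"}

def is_safe_for_external(prompt: str) -> bool:
    """Toy filter: block prompts that mention sensitive material."""
    return not any(marker in prompt.lower() for marker in SENSITIVE_MARKERS)

def local_generate(prompt: str) -> str:
    return f"[local open-source model] answer to: {prompt}"

def external_generate(prompt: str) -> str:
    return f"[external LLM] answer to: {prompt}"

def answer(prompt: str) -> str:
    if is_safe_for_external(prompt):
        return external_generate(prompt)   # richer knowledge, only when deemed safe
    return local_generate(prompt)          # sensitive prompts never leave the enterprise

print(answer("Summarize public market trends for EVs."))
print(answer("Draft a report from our confidential crash-test data."))
```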
20 pages, 2404 KB  
Article
TFR-LRC: Rack-Optimized Locally Repairable Codes: Balancing Fault Tolerance, Repair Degree, and Topology Awareness in Distributed Storage Systems
by Yan Wang, Yanghuang Cao and Junhao Shi
Information 2025, 16(9), 803; https://doi.org/10.3390/info16090803 - 15 Sep 2025
Viewed by 292
Abstract
Locally Repairable Codes (LRCs) have become the dominant design in wide-stripe erasure coding storage systems due to their excellent locality and low repair bandwidth. In such systems, the repair degree—defined as the number of helper nodes contacted during data recovery—is a key performance metric. However, as stripe width increases, the probability of multiple simultaneous node failures grows, which significantly raises the repair degree in traditional LRCs. Addressing this challenge, we propose a new family of codes called TFR-LRCs (Locally Repairable Codes for balancing fault tolerance and repair efficiency). TFR-LRCs introduce flexible design choices that allow trade-offs between fault tolerance and repair degree: they can reduce the repair degree by slightly increasing storage overhead, or enhance fault tolerance by tolerating a slightly higher repair degree. We design a matrix-based construction to generate TFR-LRCs and evaluate their performance through extensive simulations. The results show that, under multiple failure scenarios, TFR-LRC reduces the repair degree by up to 35% compared with conventional LRCs, while preserving the original LRC structure. Moreover, under identical code parameters, TFR-LRC achieves improved fault tolerance, tolerating up to g+2 failures versus g+1 in conventional LRCs, with minimal additional cost. Notably, in maintenance mode, where entire racks may become temporarily unavailable, TFR-LRC demonstrates substantially better recovery efficiency compared to existing LRC schemes, making it a practical choice for real-world deployments.
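The locality property that such codes build on can be shown with a toy XOR parity group, assuming numpy: a single lost block is rebuilt from its local group only, so the repair degree stays small. This illustrates plain LRC locality, not the TFR-LRC matrix construction itself.

```python
import numpy as np

rng = np.random.default_rng(0)
group = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(4)]  # 4 data blocks
local_parity = group[0] ^ group[1] ^ group[2] ^ group[3]             # one XOR parity

lost_index = 2                                   # simulate losing one block
helpers = [b for i, b in enumerate(group) if i != lost_index]
repaired = helpers[0] ^ helpers[1] ^ helpers[2] ^ local_parity       # XOR of the rest
assert np.array_equal(repaired, group[lost_index])
print("repaired one block with repair degree", len(helpers) + 1)
```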
23 pages, 1898 KB  
Article
A Container-Native IAM Framework for Secure Green Mobility: A Case Study with Keycloak and Kubernetes
by Alexandre Sousa, Frederico Branco, Arsénio Reis and Manuel J. C. S. Reis
Information 2025, 16(9), 802; https://doi.org/10.3390/info16090802 - 15 Sep 2025
Viewed by 199
Abstract
The rapid adoption of green mobility solutions—such as electric-vehicle sharing and intelligent transportation systems—has accelerated the integration of Internet of Things (IoT) technologies, introducing complex security and performance challenges. While conceptual Identity and Access Management (IAM) frameworks exist, few are empirically validated for the scale, heterogeneity, and real-time demands of modern mobility ecosystems. This work presents a data-backed, container-native reference architecture for secure and resilient Authentication, Authorization, and Accounting (AAA) in green mobility environments. The framework integrates Keycloak within a Kubernetes-orchestrated infrastructure and applies Zero Trust and defense-in-depth principles. Effectiveness is demonstrated through rigorous benchmarking across latency, throughput, memory footprint, and automated fault recovery. Compared to a monolithic baseline, the proposed architecture achieves over 300% higher throughput, 90% faster startup times, and 75% lower idle memory usage while enabling full service restoration in under one minute. This work establishes a validated deployment blueprint for IAM in IoT-driven transportation systems, offering a practical foundation for a secure and scalable mobility infrastructure.
13 pages, 382 KB  
Article
The Blockchain Trust Paradox: Engineered Trust vs. Experienced Trust in Decentralized Systems
by Scott Keaney and Pierre Berthon
Information 2025, 16(9), 801; https://doi.org/10.3390/info16090801 - 15 Sep 2025
Viewed by 330
Abstract
Blockchain is described as a technology of trust. Its design relies on cryptography, decentralization, and immutability to ensure secure and transparent transactions. Yet users frequently report confusion, frustration, and skepticism when engaging with blockchain applications. This tension is the blockchain trust paradox: while trust is engineered into the technology, trust is not always experienced by its users. Our article examines the paradox through three theoretical perspectives. Socio-Technical Systems (STS) theory highlights how trust emerges from the interaction between technical features and social practices; Technology Acceptance models (TAM and UTAUT) emphasize how perceived usefulness and ease of use shape adoption. Ostrom’s commons governance theory explains how legitimacy and accountability affect trust in decentralized networks. Drawing on recent research in experience design, human–computer interaction, and decentralized governance, the article identifies the barriers that undermine user confidence. These include complex key management, unpredictable transaction costs, and unclear processes for decision-making and dispute resolution. The article offers an integrated framework that links engineered trust with experienced trust. Seven propositions are developed to guide future research and practice. The conclusion argues that blockchain technologies will gain traction if design and governance evolve alongside technical protocols to create systems that are both technically secure and trustworthy in experience.
(This article belongs to the Special Issue Information Technology in Society)
17 pages, 1659 KB  
Article
Enhancing Multi-Region Target Search Efficiency Through Integrated Peripheral Vision and Head-Mounted Display Systems
by Gang Wang, Hung-Hsiang Wang and Zhihuang Huang
Information 2025, 16(9), 800; https://doi.org/10.3390/info16090800 - 15 Sep 2025
Viewed by 258
Abstract
Effectively managing visual search tasks across multiple spatial regions during daily activities such as driving, cycling, and navigating complex environments often overwhelms visual processing capacity, increasing the risk of errors and missed critical information. This study investigates an integrated approach that combines an Ambient Display system utilizing peripheral vision cues with traditional Head-Mounted Displays (HMDs) to enhance spatial search efficiency while minimizing cognitive burden. We systematically evaluated this integrated HMD-Ambient Display system against standalone HMD configurations through comprehensive user studies involving target search scenarios across multiple spatial regions. Our findings demonstrate that the combined approach significantly improves user performance by establishing a complementary visual system where peripheral stimuli effectively capture initial attention while central HMD cues provide precise directional guidance. The integrated system showed substantial improvements in reaction time for rear visual region searches and higher user preference ratings compared with HMD-only conditions. This integrated approach represents an innovative solution that efficiently utilizes dual visual channels, reducing cognitive load while enhancing search efficiency across distributed spatial areas. Our contributions provide valuable design guidelines for developing assistive technologies that improve performance in multi-region visual search tasks by strategically leveraging the complementary strengths of peripheral and central visual processing mechanisms.
18 pages, 6253 KB  
Article
Exploring Sign Language Dataset Augmentation with Generative Artificial Intelligence Videos: A Case Study Using Adobe Firefly-Generated American Sign Language Data
by Valentin Bercaru and Nirvana Popescu
Information 2025, 16(9), 799; https://doi.org/10.3390/info16090799 - 15 Sep 2025
Viewed by 348
Abstract
Currently, high-quality datasets focused on Sign Language Recognition are either private, proprietary, or difficult to obtain due to costs. Therefore, we aim to mitigate this problem by augmenting a publicly available dataset with artificially generated data in order to enrich and obtain a more diverse dataset. The performance of Sign Language Recognition (SLR) systems is highly dependent on the quality and diversity of training datasets. However, acquiring large-scale and well-annotated sign language video data remains a significant challenge. This experiment explores the use of Generative Artificial Intelligence (GenAI), specifically Adobe Firefly, to create synthetic video data for American Sign Language (ASL) fingerspelling. Thirteen letters out of 26 were selected for generation, and short videos representing each sign were synthesized and processed into static frames. These synthetic frames replaced approximately 7.5% of the original dataset and were integrated into the training data of a publicly available Convolutional Neural Network (CNN) model. After retraining the model with the augmented dataset, the accuracy did not drop, and the validation accuracy was approximately the same. The resulting model achieved a maximum accuracy of 98.04%. While the performance gain was limited (less than 1%), the approach illustrates the feasibility of using GenAI tools to generate training data and supports further research into data augmentation for low-resource SLR tasks.
(This article belongs to the Section Artificial Intelligence)
25 pages, 3276 KB  
Article
CPB-YOLOv8: An Enhanced Multi-Scale Traffic Sign Detector for Complex Road Environment
by Wei Zhao, Lanlan Li and Xin Gong
Information 2025, 16(9), 798; https://doi.org/10.3390/info16090798 - 15 Sep 2025
Viewed by 447
Abstract
Traffic sign detection is critically important for intelligent transportation systems, yet persistent challenges like multi-scale variation and complex background interference severely degrade detection accuracy and real-time performance. To address these limitations, this study presents CPB-YOLOv8, an advanced multi-scale detection framework based on the YOLOv8 architecture. A Cross-Stage Partial-Partitioned Transformer Block (CSP-PTB) is incorporated into the feature extraction stage to preserve semantic information during downsampling while enhancing global feature representation. For feature fusion, a four-level bidirectional feature pyramid network (BiFPN) integrated with a P2 detection layer significantly improves small-target detection capability. Further enhancement is achieved via an optimized loss function that balances multi-scale objective localization. Comprehensive evaluations were conducted on the TT100K, the CCTSDB, and a custom multi-scenario road image dataset capturing urban and suburban environments at 1920 × 1080 resolution. Results demonstrate compelling performance: On TT100K, CPB-YOLOv8 achieved 90.73% mAP@0.5 with a 12.5 MB model size, exceeding the YOLOv8s baseline by 3.94 percentage points and achieving 6.43% higher small-target recall. On CCTSDB, it attained a near-saturation performance of 99.21% mAP@0.5. Crucially, the model demonstrated exceptional robustness across diverse environmental conditions. Rigorous analysis on partitioned CCTSDB subsets based on weather and illumination, alongside validation using a separate self-collected dataset reserved solely for inference, confirmed strong adaptability to real-world distribution shifts and low-visibility scenarios. Cross-dataset validation and visual comparisons further substantiated the model’s robustness and its effective suppression of background interference.
25 pages, 2377 KB  
Article
A FinTech-Aligned Optimization Framework for IoT-Enabled Smart Agriculture to Mitigate Greenhouse Gas Emissions
by Sofia Polymeni, Dimitrios N. Skoutas, Georgios Kormentzas and Charalabos Skianis
Information 2025, 16(9), 797; https://doi.org/10.3390/info16090797 - 14 Sep 2025
Viewed by 299
Abstract
With agriculture being the second biggest contributor to greenhouse gas (GHG) emissions through the excessive use of fertilizers, machinery, and inefficient farming practices, global efforts to reduce emissions have been intensified, opting for smarter, data-driven solutions. However, while machine learning (ML) offers powerful predictive capabilities, its black-box nature presents a challenge for trust and adoption, particularly when integrated with auditable financial technology (FinTech) principles. To address this gap, this work introduces a novel, explanation-focused GHG emission optimization framework for IoT-enabled smart agriculture that is both transparent and prescriptive, distinguishing itself from macro-level land-use solutions by focusing on optimizable management practices while aligning with core FinTech principles and pollutant stock market mechanisms. The framework employs a two-stage statistical methodology that first identifies distinct agricultural emission profiles from macro-level data, and then models these emissions by developing a cluster-oriented principal component regression (PCR) model, which outperforms simpler variants by approximately 35% on average across all clusters. This interpretable model then serves as the core of a FinTech-aligned optimization framework that combines cluster-oriented modeling knowledge with a sequential least squares quadratic programming (SLSQP) algorithm to minimize emission-related costs under a carbon pricing mechanism, showcasing forecasted cost reductions as high as 43.55%.
(This article belongs to the Special Issue Technoeconomics of the Internet of Things)
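As a rough illustration of that two-stage pipeline, the Python sketch below fits a principal component regression on synthetic data and then minimizes a carbon-priced emission cost with SciPy's SLSQP solver; all variable names, bounds, and constraints are assumptions, not the paper's model.

    # PCR feeding an SLSQP cost minimization -- synthetic, illustrative data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.random((200, 5))                  # management-practice intensities
    y = X @ np.array([3.0, 1.5, 0.5, 2.0, 1.0]) + rng.normal(0, 0.1, 200)  # GHG proxy

    pca = PCA(n_components=3).fit(X)          # stage 1: compress correlated drivers
    reg = LinearRegression().fit(pca.transform(X), y)  # stage 2: regress on components

    def emission_cost(x, carbon_price=50.0):
        """Predicted emissions priced under a hypothetical carbon market."""
        return carbon_price * reg.predict(pca.transform(x.reshape(1, -1)))[0]

    res = minimize(emission_cost, x0=X.mean(axis=0), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 5,   # keep practices in feasible ranges
                   constraints=[{"type": "ineq",  # retain a minimum activity level
                                 "fun": lambda x: x.sum() - 1.0}])
    print(res.x, emission_cost(res.x))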
22 pages, 785 KB  
Article
Detection of Fake News in Romanian: LLM-Based Approaches to COVID-19 Misinformation
by Alexandru Dima, Ecaterina Ilis, Diana Florea and Mihai Dascalu
Information 2025, 16(9), 796; https://doi.org/10.3390/info16090796 - 13 Sep 2025
Viewed by 365
Abstract
The spread of misinformation during the COVID-19 pandemic raised widespread concerns about public health communication and media reliability. In this study, we focus on these issues as they manifested in Romanian-language media and employ Large Language Models (LLMs) to classify misinformation, with a particular focus on super-narratives—broad thematic categories that capture recurring patterns and ideological framings commonly found in pandemic-related fake news, such as anti-vaccination discourse, conspiracy theories, or geopolitical blame. While some of the categories reflect global trends, others are shaped by the Romanian cultural and political context. We introduce a novel dataset of fake news centered on COVID-19 misinformation in the Romanian geopolitical context, comprising both annotated and unannotated articles. We experimented with multiple LLMs using zero-shot, few-shot, supervised, and semi-supervised learning strategies, achieving the best results with an LLaMA 3.1 8B model and semi-supervised learning, which yielded an F1-score of 78.81%. Experimental evaluations compared this approach to traditional Machine Learning classifiers augmented with morphosyntactic features. Results show that semi-supervised learning substantially improved classification results in both binary and multi-class settings. Our findings highlight the effectiveness of semi-supervised adaptation in low-resource, domain-specific contexts, as well as the necessity of enabling real-time misinformation tracking and enhancing transparency through claim-level explainability and fact-based counterarguments. Full article
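The semi-supervised strategy can be pictured as a pseudo-labeling loop: train on the annotated articles, adopt only confident predictions on the unannotated pool, and retrain. The sketch below substitutes a TF-IDF plus logistic regression classifier for the LLaMA model purely for illustration; the texts, labels, and 0.8 threshold are invented.

    # Self-training sketch with a stand-in classifier (not the paper's LLM).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    labeled = ["vaccines contain microchips", "authorities update mask guidance"]
    y = np.array([1, 0])                      # 1 = misinformation, 0 = reliable
    unlabeled = ["5g towers spread the virus", "hospital reports new icu cases"]

    vec = TfidfVectorizer().fit(labeled + unlabeled)
    clf = LogisticRegression(max_iter=1000)

    for _ in range(3):                        # a few self-training rounds
        clf.fit(vec.transform(labeled), y)
        proba = clf.predict_proba(vec.transform(unlabeled))
        keep = proba.max(axis=1) >= 0.8       # adopt only confident pseudo-labels
        if not keep.any():
            break
        labeled += [t for t, k in zip(unlabeled, keep) if k]
        y = np.concatenate([y, proba.argmax(axis=1)[keep]])
        unlabeled = [t for t, k in zip(unlabeled, keep) if not k]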
15 pages, 374 KB  
Article
Digital Governance, Democracy and Public Funding Efficiency in the EU-27: Comparative Insights with Emphasis on Greece
by Kyriaki Efthalitsidou, Konstantinos Spinthiropoulos, George Vittas and Nikolaos Sariannidis
Information 2025, 16(9), 795; https://doi.org/10.3390/info16090795 - 12 Sep 2025
Viewed by 433
Abstract
This study explores the relationship between digital governance, democratic quality, and public funding efficiency across the EU-27, with an emphasis on Greece. Using 2023 cross-sectional data from the Digital Economy and Society Index (DESI), the Worldwide Governance Indicators, and Eurostat, we apply OLS regression and simulated DEA to assess how digital maturity and democratic engagement affect fiscal performance. The sample includes all 27 EU member states, and the analysis is subject to limitations arising from the cross-sectional design and the use of simulated DEA scores. Results show that higher DESI and Voice and Accountability scores are positively associated with greater efficiency. Greece, while improving, remains below the EU average. The novelty of this paper lies in combining econometric regression with efficiency benchmarking, highlighting the interplay of digital and democratic dimensions in fiscal performance. The findings underscore the importance of integrating digital infrastructure with participatory governance to achieve sustainable public finance. Full article
(This article belongs to the Special Issue Information Technology in Society)
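For readers unfamiliar with the pairing of methods, the Python sketch below runs an OLS fit and a standard input-oriented CCR DEA solved by linear programming on synthetic scores for 27 units; the indicator values and the single input/output choice are illustrative assumptions, not the study's data.

    # OLS plus input-oriented CCR DEA on synthetic EU-27-sized data.
    import numpy as np
    import statsmodels.api as sm
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    desi = rng.uniform(40, 80, 27)            # hypothetical digital-maturity scores
    voice = rng.uniform(0.4, 1.6, 27)         # hypothetical Voice & Accountability
    eff = 0.01 * desi + 0.2 * voice + rng.normal(0, 0.05, 27)

    ols = sm.OLS(eff, sm.add_constant(np.column_stack([desi, voice]))).fit()
    print(ols.params)                         # constant, DESI, and VA coefficients

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR efficiency of unit o; X: (m, n) inputs, Y: (s, n) outputs."""
        m, n = X.shape
        s = Y.shape[0]
        c = np.r_[1.0, np.zeros(n)]           # minimize theta over [theta, lambdas]
        A_ub = np.block([[-X[:, [o]], X],     # sum_j lam_j * x_ij <= theta * x_io
                         [np.zeros((s, 1)), -Y]])  # sum_j lam_j * y_rj >= y_ro
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        bounds = [(None, None)] + [(0, None)] * n
        return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun

    X = rng.uniform(1, 5, 27).reshape(1, -1)  # one input: public spending proxy
    Y = eff.reshape(1, -1)                    # one output: service performance
    print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])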
44 pages, 1085 KB  
Article
EDTF: A User-Centered Approach to Digital Educational Games Design and Development
by Raluca Ionela Maxim and Joan Arnedo-Moreno
Information 2025, 16(9), 794; https://doi.org/10.3390/info16090794 - 12 Sep 2025
Viewed by 438
Abstract
The creation of digital educational games often lacks strong user-centered design despite available frameworks, which tend to focus on technical and instructional aspects. This paper presents the Empathic Design Thinking Framework (EDTF), a structured methodology tailored to digital educational game creation. Rooted in human–computer interaction (HCI) principles, the EDTF integrates continuous co-design and iterative user research from ideation to deployment, involving both learners and instructors throughout all phases; it positions empathic design (ED) principles as an important component of HCI, focusing not only on identifying user needs but also on understanding users’ lived experiences, motivations, and frustrations. Developed through design science research, the EDTF offers step-by-step guidance, comprising 10 steps, that reduces uncertainty for novice and experienced designers, developers, and HCI experts alike. The framework was validated in two phases. First, it was evaluated by 60 instructional game experts, including designers, developers, and HCI professionals, using an adapted questionnaire covering dimensions such as clarity, problem-solving, consistency, and innovation, as well as standardized scales such as UMUX-Lite for perceived ease of use and usefulness and SUS for perceived usability. This was followed by in-depth interviews with 18 experts to assess the feasibility of applying the EDTF in practice. The strong validation results highlight the framework’s potential to guide the design and development of educational games that respect HCI principles and are usable, efficient, and impactful. Full article
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)
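As a small aside on the instruments mentioned, SUS follows a fixed published scoring rule (Brooke, 1996): odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the raw sum is scaled by 2.5 to a 0-100 range. The snippet below implements that rule; the example responses are invented and unrelated to the EDTF study's data.

    # Standard SUS scoring from ten 1-5 Likert responses.
    def sus_score(responses):
        assert len(responses) == 10
        raw = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0, 2, ... are the odd items
                  for i, r in enumerate(responses))
        return raw * 2.5                           # scale 0-40 raw to 0-100

    print(sus_score([5, 1, 5, 2, 4, 1, 5, 1, 4, 2]))  # 90.0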
41 pages, 1953 KB  
Article
Balancing Business, IT, and Human Capital: RPA Integration and Governance Dynamics
by José Cascais Brás, Ruben Filipe Pereira, Marcella Melo, Isaias Scalabrin Bianchi and Rui Ribeiro
Information 2025, 16(9), 793; https://doi.org/10.3390/info16090793 - 12 Sep 2025
Viewed by 610
Abstract
In the era of rapid technological progress, Robotic Process Automation (RPA) has emerged as a pivotal tool across professional domains. Organizations pursue automation to boost efficiency and productivity, control costs, and reduce errors. RPA software automates repetitive, rules-based tasks previously performed by employees, and its effectiveness depends on integration across the business–IT–people interface. We conducted a mixed-methods study combining a PRISMA-guided multivocal review of peer-reviewed and gray literature with semi-structured practitioner interviews to capture firsthand insights and diverse perspectives. Triangulation of these phases examines RPA governance, auditing, and policy. The study clarifies the relationship between business processes and IT and offers guidance that supports procedural standardization, regulatory compliance, employee engagement, role clarity, and effective change management, thereby increasing the likelihood of successful RPA initiatives while prudently mitigating associated risks. Full article
16 pages, 1697 KB  
Article
Enhancing Ancient Ceramic Knowledge Services: A Question Answering System Using Fine-Tuned Models and GraphRAG
by Zhi Chen and Bingxiang Liu
Information 2025, 16(9), 792; https://doi.org/10.3390/info16090792 - 11 Sep 2025
Viewed by 274
Abstract
To address the challenges of extensive domain expertise and deficient semantic comprehension in the digital preservation of ancient ceramics, this paper proposes a knowledge question answering (QA) system integrating Low-Rank Adaptation (LoRA) fine-tuning and Graph Retrieval-Augmented Generation (GraphRAG). First, textual descriptions of ceramic images are generated using the GLM-4V-9B model. These texts are then enriched with domain literature to produce ancient ceramic QA pairs via ERNIE 4.0 Turbo, culminating in a high-quality dataset of 2143 curated question–answer groups after manual refinement. Second, LoRA fine-tuning is applied to the Qwen2.5-7B-Instruct foundation model, significantly enhancing its question-answering proficiency in the ancient ceramics domain. Finally, the GraphRAG framework is integrated, combining the fine-tuned large language model with knowledge graph path analysis to augment multi-hop reasoning capabilities for complex queries. Experimental results demonstrate improvements of 24.08% in ROUGE-1, 34.75% in ROUGE-2, 29.78% in ROUGE-L, and 4.52% in BERTScore_F1 over the baseline model. This evidence shows that the synergistic implementation of LoRA fine-tuning and GraphRAG delivers significant performance gains for ceramic knowledge systems, establishing a replicable technical framework for intelligent cultural heritage knowledge services. Full article
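A minimal sketch of what LoRA fine-tuning of the cited base model might look like with the Hugging Face PEFT library; the rank, target modules, and other hyperparameters below are assumptions, not the paper's configuration.

    # Hypothetical LoRA setup via PEFT (hyperparameters assumed, not the paper's).
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "Qwen/Qwen2.5-7B-Instruct"
    model = AutoModelForCausalLM.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    lora_cfg = LoraConfig(
        r=8,                                  # low-rank update dimension
        lora_alpha=16,                        # scaling factor for the update
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_cfg)
    model.print_trainable_parameters()        # only a small fraction of weights train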