Information, Volume 16, Issue 9 (September 2025) – 83 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, with PDF as the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
15 pages, 374 KB  
Article
Digital Governance, Democracy and Public Funding Efficiency in the EU-27: Comparative Insights with Emphasis on Greece
by Kyriaki Efthalitsidou, Konstantinos Spinthiropoulos, George Vittas and Nikolaos Sariannidis
Information 2025, 16(9), 795; https://doi.org/10.3390/info16090795 - 12 Sep 2025
Abstract
This study explores the relationship between digital governance, democratic quality, and public funding efficiency across the EU-27, with an emphasis on Greece. Using 2023 cross-sectional data from the DESI, the Worldwide Governance Indicators, and Eurostat, we apply OLS regression and simulated DEA to assess how digital maturity and democratic engagement affect fiscal performance. The sample includes all 27 EU member states, and the analysis is subject to limitations arising from the cross-sectional design and the use of simulated DEA scores. Results show that higher DESI and Voice and Accountability scores are positively associated with greater efficiency. Greece, while improving, remains below the EU average. The novelty of this paper lies in combining econometric regression with efficiency benchmarking, highlighting the interplay of digital and democratic dimensions in fiscal performance. The findings underscore the importance of integrating digital infrastructure with participatory governance to achieve sustainable public finance.
(This article belongs to the Special Issue Information Technology in Society)
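To make the efficiency-benchmarking side of such an analysis concrete, here is a minimal sketch of an input-oriented CCR DEA score computed by linear programming with SciPy. The three-unit input/output data is illustrative only and is not the paper's dataset or exact DEA specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o; X: (n, inputs), Y: (n, outputs)."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_in = np.c_[-X[o][:, None], X.T]               # sum_j lam_j x_j <= theta * x_o
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]  # sum_j lam_j y_j >= y_o
    b = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]                                 # theta in (0, 1]

X = np.array([[3.0, 5.0], [2.0, 4.0], [4.0, 6.0]])  # inputs per unit (illustrative)
Y = np.array([[4.0], [3.0], [5.0]])                 # outputs per unit (illustrative)
print([round(dea_efficiency(X, Y, o), 3) for o in range(3)])
```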

44 pages, 1085 KB  
Article
EDTF: A User-Centered Approach to Digital Educational Games Design and Development
by Raluca Ionela Maxim and Joan Arnedo-Moreno
Information 2025, 16(9), 794; https://doi.org/10.3390/info16090794 - 12 Sep 2025
Abstract
The creation of digital educational games often lacks strong user-centered design despite available frameworks, which tend to focus on technical and instructional aspects. This paper presents the Empathic Design Thinking Framework (EDTF), a structured methodology tailored to digital educational game creation. Rooted in human–computer interaction (HCI) principles, the EDTF integrates continuous co-design and iterative user research from ideation to deployment, involving both learners and instructors throughout all phases; it positions empathic design (ED) principles as an important component of HCI, focusing not only on identifying user needs but also on understanding users’ lived experiences, motivations, and frustrations. Developed through design science research, the EDTF offers step-by-step guidance, comprising 10 steps, that reduces uncertainty for novice and experienced designers, developers, and HCI experts alike. The framework was validated in two robust phases. First, it was evaluated by 60 instructional game experts, including designers, developers, and HCI professionals, using an adapted questionnaire covering dimensions such as clarity, problem-solving, consistency, and innovation, as well as standardized scales such as UMUX-Lite for perceived ease of use and usefulness and SUS for perceived usability. This was followed by in-depth interviews with 18 experts to explore the feasibility and applicability of the EDTF. The strong validation results highlight the framework’s potential to guide the design and development of educational games that take into account HCI principles and are usable, efficient, and impactful.
(This article belongs to the Special Issue Recent Advances and Perspectives in Human-Computer Interaction)

41 pages, 1953 KB  
Article
Balancing Business, IT, and Human Capital: RPA Integration and Governance Dynamics
by José Cascais Brás, Ruben Filipe Pereira, Marcella Melo, Isaias Scalabrin Bianchi and Rui Ribeiro
Information 2025, 16(9), 793; https://doi.org/10.3390/info16090793 - 12 Sep 2025
Abstract
In the era of rapid technological progress, Robotic Process Automation (RPA) has emerged as a pivotal tool across professional domains. Organizations pursue automation to boost efficiency and productivity, control costs, and reduce errors. RPA software automates repetitive, rules-based tasks previously performed by employees, and its effectiveness depends on integration across the business–IT–people interface. We adopted a mixed-methods study combining a PRISMA-guided multivocal review of peer-reviewed and gray sources with semi-structured practitioner interviews to capture firsthand insights and diverse perspectives. Triangulation of these phases examines RPA governance, auditing, and policy. The study clarifies the relationship between business processes and IT and offers guidance that supports procedural standardization, regulatory compliance, employee engagement, role clarity, and effective change management—thereby increasing the likelihood of successful RPA initiatives while prudently mitigating associated risks.

17 pages, 1697 KB  
Article
Enhancing Ancient Ceramic Knowledge Services: A Question Answering System Using Fine-Tuned Models and GraphRAG
by Zhi Chen and Bingxiang Liu
Information 2025, 16(9), 792; https://doi.org/10.3390/info16090792 - 11 Sep 2025
Abstract
To address the challenges of extensive domain expertise requirements and deficient semantic comprehension in the digital preservation of ancient ceramics, this paper proposes a knowledge question answering (QA) system integrating Low-Rank Adaptation (LoRA) fine-tuning and Graph Retrieval-Augmented Generation (GraphRAG). First, textual descriptions of ceramic images are generated using the GLM-4V-9B model. These texts are then enriched with domain literature to produce ancient ceramic QA pairs via ERNIE 4.0 Turbo, culminating in a high-quality dataset of 2143 curated question–answer groups after manual refinement. Second, LoRA fine-tuning is employed on the Qwen2.5-7B-Instruct foundation model, significantly enhancing its question-answering proficiency in the ancient ceramics domain. Finally, the GraphRAG framework is integrated, combining the fine-tuned large language model with knowledge graph path analysis to augment multi-hop reasoning capabilities for complex queries. Experimental results demonstrate performance improvements of 24.08% in ROUGE-1, 34.75% in ROUGE-2, 29.78% in ROUGE-L, and 4.52% in BERTScore_F1 over the baseline model. This evidence shows that the synergistic implementation of LoRA fine-tuning and GraphRAG delivers significant performance enhancements for ceramic knowledge systems, establishing a replicable technical framework for intelligent cultural heritage knowledge services.
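For readers unfamiliar with the fine-tuning step, the following is a minimal LoRA setup using the Hugging Face peft library against the base model named in the abstract. The rank, scaling factor, and target modules are assumed values for illustration, not the paper's reported configuration.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "Qwen/Qwen2.5-7B-Instruct"      # base model named in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_cfg = LoraConfig(
    r=8,                                     # adapter rank (assumed)
    lora_alpha=16,                           # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],     # attention projections (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()           # only the small adapters train
```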

40 pages, 471 KB  
Review
Theory and Metatheory in the Nature of Information: Review and Thematic Analysis
by Luke Tredinnick
Information 2025, 16(9), 791; https://doi.org/10.3390/info16090791 - 11 Sep 2025
Abstract
This paper addresses the nature of information through a thematic review of the literature. The nature of information describes its fundamental qualities, including structure, meaning, content and use. This paper reviews the critical and theoretical literature with the aim of defining the boundaries of a foundational theory of information. The paper is divided into three parts. The first part addresses metatheoretical aspects of the discourse, including the historicity of information, its conceptual ambiguity, the problem of definition, and the possibility of a foundational theory. The second part addresses key dimensions of the critical discourse, including the subjective, objective and intersubjective nature of information, its relationship to meaning, and its relationship to the material world. The final part summarises the main conclusions and outlines the scope of a foundational theory. This paper highlights important gaps in the critical tradition, including the historicity of information and its relationship to material reality, complexity and computation. This paper differs from prior reviews in its thematic focus and consideration of metatheoretical aspects of the critical and theoretical tradition.
(This article belongs to the Special Issue Advances in Information Studies)
18 pages, 1061 KB  
Article
HiPC-QR: Hierarchical Prompt Chaining for Query Reformulation
by Hua Yang, Hanyang Li and Teresa Gonçalves
Information 2025, 16(9), 790; https://doi.org/10.3390/info16090790 - 11 Sep 2025
Abstract
Query reformulation techniques optimize user queries to better align with documents, thus improving the performance of Information Retrieval (IR) systems. Previous methods have primarily focused on query expansion, using techniques such as synonym replacement to improve recall. With the rapid advancement of Large Language Models (LLMs), the knowledge embedded within these models has grown, and research in prompt engineering has introduced various methods, with prompt chaining proving particularly effective for complex tasks. Directly prompting LLMs to reformulate queries has thus become a viable approach. However, existing LLM-based prompt methods for query reformulation often introduce irrelevant content into reformulated queries, resulting in decreased retrieval precision and misalignment with user intent. We propose a novel approach called Hierarchical Prompt Chaining for Query Reformulation (HiPC-QR). HiPC-QR employs a two-step prompt chaining technique to extract keywords from the original query and refine its structure by filtering out non-essential keywords based on the user’s query intent. This process reduces the query’s restrictiveness while simultaneously expanding essential keywords to enhance retrieval effectiveness. We evaluated the effectiveness of HiPC-QR on two benchmark retrieval datasets, namely MS MARCO and TREC Deep Learning. The experimental results show that HiPC-QR outperforms existing query reformulation methods on large-scale datasets in terms of both recall@10 and MRR@10.
(This article belongs to the Special Issue Machine Learning and Data Mining: Innovations in Big Data Analytics)
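The two-step chaining idea can be sketched as follows; call_llm is a hypothetical helper standing in for any chat-completion API, and the prompt wording is illustrative rather than the paper's exact prompts.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to any chat-completion API."""
    raise NotImplementedError

def reformulate(query: str) -> str:
    # Step 1: extract candidate keywords from the original query.
    keywords = call_llm(
        f"List the keywords in this search query, one per line:\n{query}"
    ).splitlines()
    # Step 2: drop keywords not essential to the user's intent, then expand
    # the essential ones into a reformulated query.
    return call_llm(
        "Given the query and its keywords, remove non-essential keywords and "
        "expand the essential ones into an improved search query.\n"
        f"Query: {query}\nKeywords: {', '.join(keywords)}"
    )
```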

33 pages, 2139 KB  
Article
Dengue Fever Detection Using Swarm Intelligence and XGBoost Classifier: An Interpretable Approach with SHAP and DiCE
by Proshenjit Sarker, Jun-Jiat Tiang and Abdullah-Al Nahid
Information 2025, 16(9), 789; https://doi.org/10.3390/info16090789 - 10 Sep 2025
Abstract
Dengue fever is a mosquito-borne viral disease that annually affects 100–400 million people worldwide. Early detection of dengue enables straightforward treatment planning and helps reduce mortality rates. This study proposes three swarm-based metaheuristic algorithms, Golden Jackal Optimization, Fox Optimizer, and Sea Lion Optimization, for feature selection and hyperparameter tuning, and an Extreme Gradient Boosting (XGBoost) classifier to forecast dengue fever using the Predictive Clinical Dengue dataset. Several existing models have been proposed for dengue fever classification, with some achieving high predictive performance. However, most of these studies have overlooked the importance of feature reduction, which is crucial to building efficient and interpretable models. Furthermore, prior research has lacked in-depth analysis of model behavior, particularly regarding the underlying causes of misclassification. Addressing these limitations, this study achieved a 10-fold cross-validation mean accuracy of 99.89%, an F-score of 99.92%, a precision of 99.84%, and a perfect recall of 100% by using only two features: WBC Count and Platelet Count. Notably, FOX-XGBoost and SLO-XGBoost achieved the same performance while utilizing four and three features, respectively, demonstrating the effectiveness of feature reduction without compromising accuracy. Among these, GJO-XGBoost demonstrated the most efficient feature utilization while maintaining superior performance, emphasizing its potential for practical deployment in dengue fever diagnosis. SHAP analysis identified WBC Count as the most influential feature driving model predictions. Furthermore, DiCE explanations support this finding by showing that lower WBC Counts are associated with dengue-positive cases, whereas higher WBC Counts are indicative of dengue-negative individuals. SHAP interpreted the reasons behind misclassifications, while DiCE provided a correction mechanism by suggesting the minimal changes needed to convert incorrect predictions into correct ones.
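Below is a minimal sketch of the classification-plus-interpretation pipeline described above: an XGBoost classifier evaluated with 10-fold cross-validation and explained with SHAP attributions. The synthetic two-feature data stands in for the clinical dataset, and the swarm-based optimizers are omitted.

```python
import numpy as np
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))            # stand-ins for WBC Count, Platelet Count
y = (X[:, 0] < 0).astype(int)            # toy labels: low "WBC" ~ dengue-positive

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
print(cross_val_score(model, X, y, cv=10).mean())  # 10-fold CV accuracy

model.fit(X, y)
explainer = shap.TreeExplainer(model)    # per-prediction feature attributions
shap_values = explainer.shap_values(X)
```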

23 pages, 35493 KB  
Article
A Novel Point-Cloud-Based Alignment Method for Shelling Tool Pose Estimation in Aluminum Electrolysis Workshop
by Zhenggui Jiang, Yi Long, Yonghong Long, Weihua Fang and Xin Li
Information 2025, 16(9), 788; https://doi.org/10.3390/info16090788 - 10 Sep 2025
Abstract
In aluminum electrolysis workshops, real-time pose perception of shelling heads is crucial to process accuracy and equipment safety. However, due to high temperatures, smoke, dust, and metal obstructions, traditional pose estimation methods struggle to achieve high accuracy and robustness. At the same time, the continuous movement of the shelling head and the similar geometric structures around it make point-cloud matching difficult, which further complicates tracking of position and orientation. In response to these challenges, we propose a multi-stage optimization pose estimation algorithm based on point-cloud processing, designed for dynamic perception tasks of three-dimensional components in complex industrial scenarios. The first stage improves the accuracy of initial matching by combining weighted 3D Hough voting and an adaptive threshold mechanism with an improved FPFH feature matching strategy. In the second stage, by integrating FPFH and PCA feature information, a stable initial registration is achieved using the RANSAC-IA coarse registration framework. In the third stage, an improved ICP algorithm effectively increases the convergence of the registration process and the accuracy of the final pose estimation. The experimental results show that the proposed method has good robustness and adaptability in a real electrolysis workshop environment, and can achieve pose estimation of the shelling head in the presence of noise, occlusion, and complex background interference.
(This article belongs to the Special Issue Advances in Computer Graphics and Visual Computing)
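The coarse-to-fine registration pattern the pipeline builds on can be sketched with Open3D's stock FPFH + RANSAC + ICP tools; the paper's weighted Hough voting, PCA feature fusion, and improved ICP are not reproduced here, and the voxel sizes are assumed.

```python
import open3d as o3d

def register(source, target, voxel=0.01):
    """Coarse-to-fine alignment: FPFH + RANSAC, then point-to-plane ICP."""
    reg = o3d.pipelines.registration
    src = source.voxel_down_sample(voxel)
    tgt = target.voxel_down_sample(voxel)
    feats = []
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
        feats.append(reg.compute_fpfh_feature(
            pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100)))
    # Coarse stage: RANSAC over FPFH feature correspondences.
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, tgt, feats[0], feats[1], mutual_filter=True,
        max_correspondence_distance=1.5 * voxel,
        estimation_method=reg.TransformationEstimationPointToPoint(False),
        ransac_n=3, checkers=[],
        criteria=reg.RANSACConvergenceCriteria(100000, 0.999))
    # Fine stage: ICP refinement seeded with the coarse transform.
    fine = reg.registration_icp(
        src, tgt, 0.5 * voxel, coarse.transformation,
        reg.TransformationEstimationPointToPlane())
    return fine.transformation
```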

37 pages, 626 KB  
Systematic Review
Early Detection and Intervention of Developmental Dyscalculia Using Serious Game-Based Digital Tools: A Systematic Review
by Josep Hornos-Arias, Sergi Grau and Josep M. Serra-Grabulosa
Information 2025, 16(9), 787; https://doi.org/10.3390/info16090787 - 10 Sep 2025
Abstract
Developmental dyscalculia is a neurobiologically based learning disorder that impairs numerical processing and calculation abilities. Numerous studies underscore the critical importance of early detection to enable effective intervention, highlighting the need for individualized, structured, and adaptive approaches. Digital tools, particularly those based on serious games (SGs), appear to offer a promising level of personalization. This systematic review aims to evaluate the relevance of serious game-based digital solutions as tools for the detection and remediation of developmental dyscalculia in children aged 5 to 12 years. To provide readers with a comprehensive understanding of this field, the selected solutions were analyzed and classified according to the technologies employed (including emerging ones), their thematic focus, the mathematical abilities targeted, the configuration of experimental trials, and the outcomes reported. A systematic search was conducted across Scopus, Web of Knowledge, PubMed, ERIC, PsycInfo, and IEEE Xplore for studies published between 2000 and March 2025, yielding 7799 records. Additional studies were identified through reference screening. A total of 21 studies met the eligibility criteria. All procedures were registered in PROSPERO and conducted in accordance with PRISMA guidelines for systematic reviews. The methodological analysis of the included studies emphasized the importance of employing both control and experimental groups with adequate sample sizes to ensure robust evaluation. In terms of remediation, the findings highlight the value of pre- and post-intervention assessments and the implementation of individualized training sessions, ideally not exceeding 20 min in duration. The review revealed a greater prevalence of remediation-focused serious games compared to screening tools, with a growing trend toward the use of mobile technologies. However, the substantial methodological limitations observed across studies must be addressed to enable rigorous evaluation of the potential of SGs to detect and support the improvement of difficulties associated with developmental dyscalculia. Moreover, despite the recognized importance of personalization and adaptability in effective interventions, relatively few studies incorporated machine learning algorithms to enable the development of fully adaptive systems.

15 pages, 770 KB  
Article
Analysis of Large Language Models for Company Annual Reports Based on Retrieval-Augmented Generation
by Abhijit Mokashi, Bennet Puthuparambil, Chaissy Daniel and Thomas Hanne
Information 2025, 16(9), 786; https://doi.org/10.3390/info16090786 - 10 Sep 2025
Abstract
Large language models (LLMs) like ChatGPT-4 and Gemini 1.0 demonstrate significant text generation capabilities but often struggle with outdated knowledge, domain specificity, and hallucinations. Retrieval-Augmented Generation (RAG) offers a promising solution by integrating external knowledge sources to produce more accurate and informed responses. This research investigates RAG’s effectiveness in enhancing LLM performance for financial report analysis. We examine how RAG and the specific prompt design improve the provision of qualitative and quantitative financial information in terms of accuracy, relevance, and verifiability. Employing a design science research approach, we compare ChatGPT-4 responses before and after RAG integration, using annual reports from ten selected technology companies. Our findings demonstrate that RAG improves the relevance and verifiability of LLM outputs (by 0.66 and 0.71, respectively, on a scale from 1 to 5), while also reducing irrelevant or incorrect answers. Prompt specificity is shown to critically impact response quality. This study indicates RAG’s potential to mitigate LLM biases and inaccuracies, offering a practical solution for generating reliable and contextually rich financial insights.
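A minimal sketch of the retrieval-augmented answering loop the study evaluates is shown below; vector_store and call_llm are hypothetical stand-ins for a passage index over the annual reports and a chat-completion API, and the prompt wording is illustrative.

```python
def answer_from_report(question: str, vector_store, call_llm, k: int = 4) -> str:
    """Answer a question about an annual report, grounded in retrieved passages."""
    passages = vector_store.search(question, top_k=k)   # hypothetical index API
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using only the annual-report excerpts below, "
        "and cite the excerpt you used.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)                             # hypothetical LLM call
```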

20 pages, 2173 KB  
Article
Intelligent Assessment of Scientific Creativity by Integrating Data Augmentation and Pseudo-Labeling
by Weini Weng, Chang Liu, Guoli Zhao, Luwei Song and Xingli Zhang
Information 2025, 16(9), 785; https://doi.org/10.3390/info16090785 - 10 Sep 2025
Abstract
Scientific creativity is a crucial indicator of adolescents’ potential in science and technology, and its automated evaluation plays a vital role in the early identification of innovative talent. To address challenges such as limited sample sizes, high annotation costs, and modality heterogeneity, this study proposes a multimodal assessment method that integrates data augmentation and pseudo-labeling techniques. For the first time, a joint enhancement approach is introduced that combines textual and visual data with a pseudo-labeling strategy to accommodate the characteristics of text–image integration in elementary students’ cognitive expressions. Specifically, SMOTE is employed to expand questionnaire data, EDA is used to enhance hand-drawn text–image data, and text–image semantic alignment is applied to improve sample quality. Additionally, a confidence-driven pseudo-labeling mechanism is incorporated to optimize the use of unlabeled data. Finally, multiple machine learning models are integrated to predict scientific creativity. The results demonstrate the following: (1) data augmentation significantly increases sample diversity, and the highest accuracy of information alignment was achieved when text and images were matched; (2) the combination of data augmentation and pseudo-labeling mechanisms improves model robustness and generalization; (3) family environment, parental education, and curiosity are key factors influencing scientific creativity. This study offers a cost-effective and efficient approach for assessing scientific creativity in elementary students and provides practical guidance for fostering their innovative potential.
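A minimal sketch of the two ingredients named above, SMOTE oversampling plus confidence-driven pseudo-labeling, follows; the classifier choice and the 0.9 confidence threshold are assumptions, not the study's configuration.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def augment_and_pseudo_label(X_lab, y_lab, X_unlab, threshold=0.9):
    # Balance the labelled set with synthetic minority-class samples.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_lab, y_lab)
    clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
    # Adopt only unlabelled samples the model is confident about.
    proba = clf.predict_proba(X_unlab)
    keep = proba.max(axis=1) >= threshold
    return (np.vstack([X_bal, X_unlab[keep]]),
            np.concatenate([y_bal, proba[keep].argmax(axis=1)]))
```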

25 pages, 1380 KB  
Review
A Systematic Review and Experimental Evaluation of Classical and Transformer-Based Models for Urdu Abstractive Text Summarization
by Muhammad Azhar, Adeen Amjad, Deshinta Arrova Dewi and Shahreen Kasim
Information 2025, 16(9), 784; https://doi.org/10.3390/info16090784 - 9 Sep 2025
Abstract
The rapid growth of digital content in Urdu has created an urgent need for effective automatic text summarization (ATS) systems. While extractive methods have been widely studied, abstractive summarization for Urdu remains largely unexplored due to the language’s complex morphology and rich literary tradition. This paper systematically evaluates four transformer-based language models (BERT-Urdu, BART, mT5, and GPT-2) for Urdu abstractive summarization, comparing their performance against conventional machine learning and deep learning approaches. Using multiple Urdu datasets, including the Urdu Summarization Corpus, the Fake News Dataset, and Urdu-Instruct-News, we show that fine-tuned Transformer Language Models (TLMs) consistently outperform traditional methods, with the multilingual mT5 model achieving a 0.42 absolute improvement in F1-score over the best baseline. Our analysis reveals that mT5’s architecture is particularly effective at handling Urdu-specific challenges such as right-to-left script processing, diacritic interpretation, and complex verb–noun compounding. Furthermore, we present empirically validated hyperparameter configurations and training strategies for Urdu ATS, establishing transformer-based approaches as the new state of the art for Urdu summarization. Notably, mT5 outperforms Seq2Seq baselines by up to 20% in ROUGE-L, underscoring the efficacy of transformer-based models for low-resource languages. This work contributes both a systematic review of prior research and a novel empirical benchmark for advancing Urdu abstractive summarization.
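For readers reproducing the evaluation, ROUGE scores of the kind reported above can be computed with the rouge-score package; the strings here are English placeholders, and Urdu text would additionally need language-appropriate tokenization.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])
scores = scorer.score(
    "reference summary text",   # target: human-written summary
    "model summary text",       # prediction: system output
)
print(scores["rougeL"].fmeasure)
```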

38 pages, 3071 KB  
Article
A Hybrid Framework for the Sensitivity Analysis of Software-Defined Networking Performance Metrics Using Design of Experiments and Machine Learning Techniques
by Chekwube Ezechi, Mobayode O. Akinsolu, Wilson Sakpere, Abimbola O. Sangodoyin, Uyoata E. Uyoata, Isaac Owusu-Nyarko and Folahanmi T. Akinsolu
Information 2025, 16(9), 783; https://doi.org/10.3390/info16090783 - 9 Sep 2025
Abstract
Software-defined networking (SDN) is a transformative approach for managing modern network architectures, particularly in Internet-of-Things (IoT) applications. However, ensuring optimal SDN performance and security often requires a robust sensitivity analysis (SA). To complement existing SA methods, this study proposes a new SA framework that integrates design of experiments (DOE) and machine learning (ML) techniques. Although existing SA methods have been shown to be effective and scalable, most have yet to hybridize anomaly detection and classification (ADC) and data augmentation into a single, unified framework. To fill this gap, a targeted application of well-established techniques is proposed, hybridizing them to undertake a more robust SA of a typified SDN-reliant IoT network. The proposed hybrid framework combines Latin hypercube sampling (LHS)-based DOE and generative adversarial network (GAN)-driven data augmentation to improve SA and support ADC in SDN-reliant IoT networks; hence, it is called DOE-GAN-SA. In DOE-GAN-SA, LHS is used to ensure uniform parameter sampling, while the GAN generates synthetic data to augment data derived from typified real-world SDN-reliant IoT network scenarios. DOE-GAN-SA also employs a classification and regression tree (CART) to validate the GAN-generated synthetic dataset. Through the proposed framework, ADC is implemented, and an artificial neural network (ANN)-driven SA of an SDN-reliant IoT network is carried out. The performance of the SDN-reliant IoT network is analyzed under two conditions, namely a normal operating scenario and a distributed denial-of-service (DDoS) flooding attack scenario, using throughput, jitter, and response time as performance metrics. To statistically validate the experimental findings, hypothesis tests are conducted to confirm the significance of all inferences. The results demonstrate that integrating LHS and GAN significantly enhances SA, enabling the identification of critical SDN parameters affecting the modeled SDN-reliant IoT network’s performance. ADC is also better supported, achieving higher DDoS flooding attack detection accuracy through the incorporation of synthetic network observations that emulate real-time traffic. Overall, this work highlights the potential of hybridizing LHS-based DOE, GAN-driven data augmentation, and ANN-assisted SA for robust network behavioral analysis and characterization in a new hybrid framework.
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
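The LHS ingredient of the framework can be sketched with SciPy's quasi-Monte Carlo module; the three parameters and their bounds are illustrative, not the study's actual factors.

```python
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=0)   # three SDN parameters (assumed)
unit = sampler.random(n=50)                 # 50 design points in [0, 1)^3
# Scale to assumed ranges: link delay (ms), packet loss (%), flow rate (pps).
design = qmc.scale(unit, l_bounds=[1, 0.0, 100], u_bounds=[50, 5.0, 5000])
```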

25 pages, 9775 KB  
Article
Opinion Formation in Wikipedia Ising Networks
by Leonardo Ermann, Klaus M. Frahm and Dima L. Shepelyansky
Information 2025, 16(9), 782; https://doi.org/10.3390/info16090782 - 9 Sep 2025
Abstract
We study the properties of opinion formation on Wikipedia Ising Networks. Each Wikipedia article is represented as a node, and links are formed by citations of one article to another, generating a directed network of a given language edition with millions of nodes. Ising spins are placed at each node, and their orientation up or down is determined by a majority vote of connected neighbors. At the initial stage, there are only a few nodes from two groups with fixed competing opinions up and down, while other nodes are assumed to have no initial opinion and no effect on the vote. The competition of two opinions is modeled by an asynchronous Monte Carlo process converging to a spin-polarized steady-state phase. This phase remains stable with respect to small fluctuations induced by an effective temperature of the Monte Carlo process. The opinion polarization at the steady state provides opinion (spin) preferences for each node. In the framework of this Ising Network Opinion Formation model, we analyze the influence and competition between political leaders, world countries, and social concepts. The approach is also generalized to the competition between three groups of different opinions described by three colors, for example, Donald Trump, Vladimir Putin, and Xi Jinping, or the USA, Russia, and China, within the English, Russian, and Chinese editions of Wikipedia as of March 2025. We argue that this approach provides a generic description of opinion formation in various complex networks.
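A minimal sketch of asynchronous majority-vote dynamics on a directed network, in the spirit of the model described, is given below; the small random graph, seed nodes, update count, and the choice of in-neighbors as voters are illustrative assumptions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
g = nx.gnp_random_graph(200, 0.05, seed=1, directed=True)

spin = np.zeros(200, dtype=int)            # 0 = no opinion yet
spin[[0, 1, 2]] = 1                        # seed group with fixed opinion "up"
spin[[3, 4, 5]] = -1                       # seed group with fixed opinion "down"
frozen = {0, 1, 2, 3, 4, 5}                # seed nodes never change

for _ in range(20000):                     # asynchronous Monte Carlo updates
    i = int(rng.integers(200))
    if i in frozen:
        continue
    # Nodes that cite i vote with their current spins (in-neighbors, assumed).
    votes = sum(spin[j] for j in g.predecessors(i))
    if votes != 0:
        spin[i] = 1 if votes > 0 else -1
```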

18 pages, 614 KB  
Article
Digital Technology Deployment and Improved Corporate Performance: Evidence from the Manufacturing Sector in China
by Liwen Cheng, Rui Ma, Xihui Chen and Luca Esposito
Information 2025, 16(9), 781; https://doi.org/10.3390/info16090781 - 9 Sep 2025
Abstract
With global supply chains being reshaped and costs surging, China’s manufacturing sector faces mounting pressure to retain its position as the world’s largest manufacturing center. Meeting this challenge demands the full mobilization of digital factors, which has attracted increasing academic attention. However, limited research has examined how the effective integration of digital factors with traditional production factors can improve corporate performance. With data on Chinese manufacturing enterprises from the A-share market, this study employs a fixed effect model and a mediating effect model to analyze how the synergies between digital and traditional factors enhance corporate performance. Further, it illustrates the heterogeneous impacts across different types of enterprises. The results reveal three key findings. First, the synergies between digital and traditional factors significantly enhance corporate performance, with digital–capital synergy proving more effective than digital–labor synergy. Second, this synergy promotes performance improvement through three primary mechanisms: strengthening internal control quality, fostering business model innovation, and increasing product differentiation. Third, the performance effects of multi-factor synergies vary considerably across enterprise types, being more pronounced in non-state-owned enterprises, firms with strong digital attributes, and firms without political connections. Overall, this study offers valuable insights for manufacturing firms seeking a competitive edge in high-end and intelligent manufacturing within an increasingly globalized competitive landscape.

17 pages, 633 KB  
Article
Predicting Achievers in an Online Theatre Course Designed upon the Principles of Sustainable Education
by Stamatios Ntanos, Ioannis Georgakopoulos and Vassilis Zakopoulos
Information 2025, 16(9), 780; https://doi.org/10.3390/info16090780 - 8 Sep 2025
Abstract
The development of online courses aligned with sustainable education principles is crucial for equipping learners with 21st-century skills essential for a sustainable future. As online education expands, predicting achievers (in this research, students with a final grade of seven or higher) becomes essential for optimizing instructional strategies and improving retention rates. This study employs a Linear Discriminant Analysis (LDA) model to predict academic performance in an online theatre course rooted in sustainable education principles. Engagement metrics such as total logins and collaborative assignment completion emerged as decisive predictors, aligning with prior research emphasizing active learning and collaboration. The model demonstrated robust performance, achieving 90% accuracy, 80% specificity, and an 88% correct classification rate. These results underscore the potential of machine learning in identifying achievers while highlighting the significance of sustainable pedagogical components. Future research should explore emotional engagement indicators and multi-course validation to enhance predictive capabilities. By utilizing the e-learning system information, the presented methodology has the potential to assist institutional policymakers in enhancing learning outcomes, advancing sustainability goals, and supporting innovation across the educational and creative sectors.
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
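A minimal sketch of the modeling step with scikit-learn's LDA classifier follows; the engagement features and synthetic data are illustrative, not the course's actual records.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns: total logins, collaborative assignments completed (illustrative).
X = rng.poisson(lam=(30, 5), size=(300, 2)).astype(float)
y = (X @ np.array([0.05, 0.4]) + rng.normal(size=300) > 3.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
print(lda.score(X_te, y_te))   # held-out classification accuracy
```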

21 pages, 5562 KB  
Article
LSNet: Adaptive Latent Space Networks for Vulnerability Severity Assessment
by Yizhou Wang, Jin Zhang and Mingfeng Huang
Information 2025, 16(9), 779; https://doi.org/10.3390/info16090779 - 8 Sep 2025
Abstract
Due to the increasing harmfulness of software vulnerabilities, more efficient vulnerability assessment methods are urgently needed. However, existing methods mainly rely on manual updates and inefficient rule matching, and they struggle to capture potential correlations between vulnerabilities, resulting in strong subjectivity and low efficiency. To this end, a vulnerability severity assessment method named Latent Space Networks (LSNet) is proposed in this paper. Specifically, based on a clustering analysis of Common Vulnerability Scoring System (CVSS) metrics, we first exploit relations among CVSS metrics for metric prediction and propose an adaptive transformer to extract both global semantic and local latent space features of vulnerabilities. Then, we utilize bidirectional encoding and token masking techniques to enhance the model’s understanding of vulnerability–location relationships, and combine the transformer with convolution to significantly improve the model’s ability to identify vulnerable text. Finally, extensive experiments conducted on an open vulnerability dataset and the CCF OSC2024 dataset demonstrate that LSNet is capable of extracting potential correlation features. Compared with baseline methods, including SVM, Transformer, TextCNN, BERT, DeBERTa, ALBERT, and RoBERTa, it exhibits higher accuracy and efficiency.
(This article belongs to the Topic Addressing Security Issues Related to Modern Software)

33 pages, 1260 KB  
Review
Identity Management Systems: A Comprehensive Review
by Zhengze Feng, Ziyi Li, Hui Cui and Monica T. Whitty
Information 2025, 16(9), 778; https://doi.org/10.3390/info16090778 - 8 Sep 2025
Abstract
Blockchain technology has introduced new paradigms for identity management systems (IDMSs), enabling users to regain control over their identity data and reduce reliance on centralized authorities. In recent years, numerous blockchain-based IDMS solutions have emerged across both practical application domains and academic research. However, prior reviews often focus on single application areas, provide limited cross-domain comparison, and insufficiently address security challenges such as interoperability, revocation, and quantum resilience. This paper bridges these gaps by presenting the first comprehensive survey that examines IDMSs from three complementary perspectives: (i) historical evolution from centralized and federated models to blockchain-based decentralized architectures; (ii) a cross-domain taxonomy of blockchain-based IDMSs, encompassing both general-purpose designs and domain-specific implementations; and (iii) a security analysis of threats across the full identity lifecycle. Drawing on a systematic review of 47 studies published between 2019 and 2025 and conducted in accordance with the PRISMA methodology, the paper synthesizes academic research and real-world deployments to identify unresolved technical, economic, and social challenges, and to outline directions for future research. The findings aim to serve as a timely reference for both researchers and practitioners working toward secure, interoperable, and sustainable blockchain-based IDMSs.

19 pages, 2646 KB  
Article
A Comprehensive Study of MCS-TCL: Multi-Functional Sampling for Trustworthy Compressive Learning
by Fuma Kimishima, Jian Yang and Jinjia Zhou
Information 2025, 16(9), 777; https://doi.org/10.3390/info16090777 - 7 Sep 2025
Abstract
Compressive Learning (CL) is an emerging paradigm that allows machine learning models to perform inference directly from compressed measurements, significantly reducing sensing and computational costs. While existing CL approaches have achieved competitive accuracy compared to traditional image-domain methods, they typically rely on reconstruction to address information loss and often neglect uncertainty arising from ambiguous or insufficient data. In this work, we propose MCS-TCL, a novel and trustworthy CL framework based on Multi-functional Compressive Sensing Sampling. Our approach unifies sampling, compression, and feature extraction into a single operation by leveraging the compatibility between compressive sensing and convolutional feature learning. This joint design enables efficient signal acquisition while preserving discriminative information, leading to feature representations that remain robust across varying sampling ratios. To enhance the model’s reliability, we incorporate evidential deep learning (EDL) during training. EDL estimates the distribution of evidence over output classes, enabling the model to quantify predictive uncertainty and assign higher confidence to well-supported predictions. Extensive experiments on image classification tasks show that MCS-TCL outperforms existing CL methods, achieving state-of-the-art accuracy at a low sampling rate of 6%. Additionally, our framework reduces model size by 85.76% while providing meaningful uncertainty estimates, demonstrating its effectiveness in resource-constrained learning scenarios.
(This article belongs to the Special Issue AI-Based Image Processing and Computer Vision)
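The acquisition side of such a framework can be sketched as block-based compressive sensing with a random Gaussian measurement matrix at the 6% ratio mentioned above; the block size and matrix choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
block = rng.random((32, 32))                 # one image block (illustrative)
n = block.size
m = int(0.06 * n)                            # 6% sampling ratio, as in the abstract
phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
measurements = phi @ block.reshape(n)        # m measurements instead of n pixels
```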

15 pages, 3574 KB  
Article
Prior Knowledge Shapes Success When Large Language Models Are Fine-Tuned for Biomedical Term Normalization
by Daniel B. Hier, Steven K. Platt and Anh Nguyen
Information 2025, 16(9), 776; https://doi.org/10.3390/info16090776 - 7 Sep 2025
Abstract
Large language models (LLMs) often fail to correctly associate biomedical terms with their standardized ontology identifiers, posing challenges for downstream applications that rely on accurate, machine-readable codes. These linking failures can compromise the integrity of data used in precision medicine, clinical decision support, and population health. Fine-tuning can partially remedy these issues, but the degree of improvement varies across terms and terminologies. Focusing on the Human Phenotype Ontology (HPO), we show that a model’s prior knowledge of term–identifier pairs, acquired during pre-training, strongly predicts whether fine-tuning will enhance its linking accuracy. We evaluate prior knowledge in three complementary ways: (1) latent probabilistic knowledge, revealed through stochastic prompting, captures hidden associations not evident in deterministic output; (2) partial subtoken knowledge, reflected in incomplete but non-random generation of identifier components; and (3) term familiarity, inferred from annotation frequencies in the biomedical literature, which serve as a proxy for training exposure. We then assess how these forms of prior knowledge influence the accuracy of deterministic identifier linking. Fine-tuning performance varies most for terms in what we call the reactive middle zone of the ontology: terms with intermediate levels of prior knowledge that are neither absent nor fully consolidated. Fine-tuning was most successful when prior knowledge, as measured by partial subtoken knowledge, was ‘weak’ or ‘medium’, or when prior knowledge, as measured by latent probabilistic knowledge, was ‘unknown’ or ‘weak’ (p < 0.001). These terms from the reactive middle exhibited the largest gains or losses in accuracy during fine-tuning, suggesting that the success of knowledge injection critically depends on the level of term–identifier pair knowledge in the LLM before fine-tuning.

20 pages, 15996 KB  
Article
A Gramian Angular Field-Based Convolutional Neural Network Approach for Crack Detection in Low-Power Turbines from Vibration Signals
by Angel H. Rangel-Rodriguez, Juan P. Amezquita-Sanchez, David Granados-Lieberman, David Camarena-Martinez, Maximiliano Bueno-Lopez and Martin Valtierra-Rodriguez
Information 2025, 16(9), 775; https://doi.org/10.3390/info16090775 - 6 Sep 2025
Abstract
The detection of damage in wind turbine blades is critical for ensuring their operational efficiency and longevity. This study presents a novel method for wind turbine blade damage detection, utilizing Gramian Angular Field (GAF) transformations of vibration signals in combination with Convolutional Neural Networks (CNNs). The GAF method transforms vibration signals, captured using a triaxial accelerometer, into angular representations that preserve temporal dependencies and reveal distinctive texture patterns associated with structural damage. This transformation enables CNNs to identify complex features correlated with crack severity in wind turbine blades, thereby enhancing the precision and effectiveness of turbine fault diagnosis. The GAF-CNN model achieved a notable classification accuracy of over 99.9%, demonstrating its robustness and potential for automated damage detection. Unlike traditional methods, which rely on expert interpretation and are sensitive to noise, the proposed system offers a more efficient and precise tool for damage monitoring. The findings suggest that this method can significantly enhance wind turbine condition monitoring systems, reducing dependency on manual inspections and improving early detection capabilities.
(This article belongs to the Special Issue Signal Processing Based on Machine Learning Techniques)
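The GAF transform itself is compact enough to sketch directly in NumPy; the synthetic sine signal below stands in for accelerometer data, and the summation-form GAF shown is one common variant.

```python
import numpy as np

def gramian_angular_field(x):
    # Rescale the series to [-1, 1] so the angular encoding is well defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1, 1))            # polar-angle encoding
    return np.cos(phi[:, None] + phi[None, :])    # summation GAF matrix

signal = np.sin(np.linspace(0, 8 * np.pi, 128))   # stand-in vibration signal
image = gramian_angular_field(signal)             # 128 x 128 CNN input
```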

21 pages, 371 KB  
Article
A Generalized Method for Filtering Noise in Open-Source Project Selection
by Yi Ding, Qing Fang and Xiaoyan Liu
Information 2025, 16(9), 774; https://doi.org/10.3390/info16090774 - 6 Sep 2025
Abstract
GitHub hosts over 10 million repositories, providing researchers with vast opportunities to study diverse software engineering problems. However, as anyone can create a repository for any purpose at no cost, open-source platforms contain many non-cooperative or non-developmental noise projects (e.g., repositories of dotfiles). When selecting open-source projects for analysis, mixing collaborative coding projects (e.g., machine learning frameworks) with noisy projects may bias research findings. To solve this problem, we optimize the Semi-Automatic Decision Tree Method (SADTM), an existing Collaborative Coding Project (CCP) classification method, to improve its generality and accuracy. We evaluate our method on the GHTorrent dataset (2012–2020) and find that it effectively enhances CCP classification in two key ways: (1) it demonstrates greater stability than existing methods, yielding consistent results across different datasets; (2) it achieves high classification performance, with an F-measure ranging from 0.780 to 0.893. Our method outperforms existing techniques in filtering noise and selecting CCPs, enabling researchers to extract high-quality open-source projects from candidate samples with reliable accuracy.
(This article belongs to the Topic Software Engineering and Applications)

23 pages, 1292 KB  
Article
Hardware Validation for Semi-Coherent Transmission Security
by Michael Fletcher, Jason McGinthy and Alan J. Michaels
Information 2025, 16(9), 773; https://doi.org/10.3390/info16090773 - 5 Sep 2025
Abstract
The rapid growth of Internet-connected devices integrating into our everyday lives shows no end in sight. As more devices and sensor networks are manufactured, security tends to be a low priority. However, the security of these devices is critical, and much current research examines the composition of simpler techniques to increase overall security in low-power commercial devices. Transmission security (TRANSEC) methods are one option for physical-layer security and a critical area of research given the increasing reliance on the Internet of Things (IoT); most such devices use standard low-power time-division multiple access (TDMA) or frequency-division multiple access (FDMA) protocols susceptible to reverse engineering. This paper provides a hardware validation of previously proposed techniques for the intentional injection of noise into the phase mapping process of a spread spectrum signal used within a receiver-assigned code division multiple access (RA-CDMA) framework. This injection decreases an eavesdropper’s ability to directly observe the true phase and reverse engineer the associated PRNG output or key, and thus the spreading sequence, even at high SNRs. The technique trades a conscious reduction in signal correlation processing for enhanced obfuscation, with a slight hardware resource utilization increase of less than 2% of Adaptive Logic Modules (ALMs), solidifying this work as a low-power technique. This paper presents the candidate method, quantifies the expected performance impact, and incorporates a hardware-based validation on field-programmable gate array (FPGA) platforms using arbitrary-phase phase-shift keying (PSK)-based spread spectrum signals.
(This article belongs to the Special Issue Hardware Security and Trust, 2nd Edition)
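The phase-noise idea can be sketched in a few lines: bounded random dither added to a PSK phase mapping. The constellation size and dither bound here are assumptions, and the RA-CDMA spreading machinery is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.integers(0, 8, size=1000)        # 8-PSK symbol indices (assumed M=8)
phase = 2 * np.pi * symbols / 8                # ideal phase mapping
dither = rng.uniform(-np.pi / 8, np.pi / 8, size=phase.shape)  # injected noise
tx = np.exp(1j * (phase + dither))             # transmitted complex baseband
```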

29 pages, 1260 KB  
Article
Modelling Social Attachment and Mental States from Facebook Activity with Machine Learning
by Stavroula Kridera and Andreas Kanavos
Information 2025, 16(9), 772; https://doi.org/10.3390/info16090772 - 5 Sep 2025
Abstract
Social networks generate vast amounts of data that can reveal patterns of human behaviour, social attachment, and mental states. This paper explores advanced machine learning techniques to detect and model such patterns, focusing on community structures, influential users, and information diffusion pathways. To address the scale, noise, and heterogeneity of social data, we leverage recent advances in graph theory, natural language processing, and anomaly detection. Our framework combines clustering for community detection, sentiment analysis for emotional state inference, and centrality metrics for influence estimation, while integrating multimodal data, including textual and visual content, for richer behavioural insights. Experimental results demonstrate that the proposed approach effectively extracts actionable knowledge, supporting mental well-being and strengthening digital social ties. Furthermore, we emphasise the role of privacy-preserving methods, such as federated learning, to ensure ethical analysis. These findings lay the groundwork for responsible and effective applications of machine learning in social network analysis.
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)

21 pages, 288 KB  
Review
Generative AI and the Information Society: Ethical Reflections from Libraries
by Molefi Matsieli and Stephen Mutula
Information 2025, 16(9), 771; https://doi.org/10.3390/info16090771 - 5 Sep 2025
Abstract
The integration of generative artificial intelligence (generative AI) into library systems is transforming the global information society, offering new possibilities for improving information access, management, and dissemination. However, these advancements also raise significant ethical concerns, including algorithmic bias, epistemic injustice, intellectual property conflicts, data privacy breaches, job displacement, the spread of misinformation, and increasing digital inequality. This review critically examines these challenges through the lens of the World Summit on the Information Society (WSIS) Action Line C10, which emphasizes the ethical dimensions of the information society. It argues that while such concerns are global, they are particularly acute in the Global South, where structural barriers such as skills shortages, weak policy frameworks, and limited infrastructure undermine equitable access to AI benefits. The review calls for a more inclusive, transparent, and ethically responsible approach to AI adoption in libraries. It underscores the essential role of libraries as stewards of ethical information practices and advocates for collaborative strategies to ensure that generative AI serves as a tool for empowerment, rather than a driver of deepening inequality in the information society.
40 pages, 796 KB  
Article
Entropy-Based Assessment of AI Adoption Patterns in Micro and Small Enterprises: Insights into Strategic Decision-Making and Ecosystem Development in Emerging Economies
by Gelmar García-Vidal, Alexander Sánchez-Rodríguez, Laritza Guzmán-Vilar, Reyner Pérez-Campdesuñer and Rodobaldo Martínez-Vivar
Information 2025, 16(9), 770; https://doi.org/10.3390/info16090770 - 5 Sep 2025
Abstract
This study examines patterns of artificial intelligence (AI) adoption in Ecuadorian micro and small enterprises (MSEs), with an emphasis on functional diversity across value chain activities. Based on a cross-sectional dataset of 781 enterprises and an entropy-based model, it assesses internal variability in AI use and explores its relationship with strategic perception and dynamic capabilities. The findings reveal predominant partial adoption, alongside high functional entropy in sectors such as mining and services, suggesting an ongoing phase of technological experimentation. However, a significant gap emerges between perceived strategic use and actual functional configurations—especially among microenterprises—indicating a misalignment between intent and organizational capacity. Barriers to adoption include limited technical skills, high costs, infrastructure constraints, and cultural resistance, yet over 70% of non-adopters express future adoption intentions. Regional analysis identifies both the Andean Highlands and Coastal regions as “innovative,” although with distinct profiles of digital maturity. While microenterprises focus on accessible tools (e.g., chatbots), small enterprises engage in data analytics and automation. Correlation analyses reveal no significant relationship between functional diversity and strategic value or capability development, underscoring the importance of qualitative organizational factors. While primarily descriptive, the entropy-based approach provides a robust diagnostic baseline that can be complemented by multivariate or qualitative methods to uncover causal mechanisms and strengthen policy implications. The proposed framework offers a replicable and adaptable tool for characterizing AI integration and informing differentiated support policies, with relevance for Ecuador and other emerging economies facing fragmented digital transformation. Full article
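For orientation, a minimal Python sketch of functional entropy is given below, using the standard Shannon formulation in bits; the activity names and counts are hypothetical, and the paper's exact model may normalize or weight the measure differently.

import math

def functional_entropy(counts):
    # Shannon entropy (bits) of AI use across value chain activities;
    # higher values mean adoption is spread more evenly across functions.
    total = sum(counts.values())
    probs = (c / total for c in counts.values() if c > 0)
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical enterprise: number of AI tools per value chain activity.
firm = {"marketing": 3, "operations": 1, "logistics": 0, "customer_service": 2}
print(round(functional_entropy(firm), 3))  # ~1.459 bits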
33 pages, 725 KB  
Review
Mapping Blockchain Applications in FinTech: A Systematic Review of Eleven Key Domains
by Tipon Tanchangya, Tapan Sarker, Junaid Rahman, Md Shafiul Islam, Naimul Islam and Kazi Omar Siddiqi
Information 2025, 16(9), 769; https://doi.org/10.3390/info16090769 - 5 Sep 2025
Abstract
Blockchain technology has become a practical tool that FinTech organizations use to increase transparency, optimize operations, and seize new opportunities. This study systematically reviews blockchain applications within the FinTech sector across 164 peer-reviewed articles, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. The review identifies eleven application domains: smart contracts, financial inclusion, crowdfunding, digital identity, trade finance, regulatory compliance, insurance, asset management, investment, banking, and lending. A mixed-method strategy, combining quantitative and qualitative content analysis, was applied to examine the adoption and impact of blockchain across these subdomains. The review further discusses current challenges such as regulatory ambiguity, interoperability limitations, and cybersecurity threats. This paper provides a consolidated framework of blockchain's actual application in FinTech subdomains and identifies the main gaps in the existing literature. These results have practical implications for practitioners, researchers, and policymakers who seek to harness blockchain for financial innovation and inclusive growth. Full article
(This article belongs to the Special Issue Decision Models for Economics and Business Management)
17 pages, 1749 KB  
Article
Secure Communication and Dynamic Formation Control of Intelligent Drone Swarms Using Blockchain Technology
by Huayu Li, Peiyan Li, Jing Liu and Peiying Zhang
Information 2025, 16(9), 768; https://doi.org/10.3390/info16090768 - 4 Sep 2025
Abstract
With the increasing deployment of unmanned aerial vehicle (UAV) swarms in scenarios such as disaster response, environmental monitoring, and military reconnaissance, the need for secure and scalable formation control has become critical. Traditional centralized architectures suffer from limited scalability, communication bottlenecks, and single points of failure in large-scale swarm coordination. To address these issues, this paper proposes a blockchain-based decentralized formation control framework that integrates smart contracts to manage UAV registration, identity authentication, formation assignment, and positional coordination. The system follows a leader–follower structure: the leader broadcasts formation tasks via on-chain events, and followers respond in real time through event-driven mechanisms. A parameterized control model based on dynamic angle and distance adjustments supports various formations, including V-shape, line, and circular configurations. The transformation from relative to geographic positions is achieved using Haversine and Euclidean methods. Experimental validation in a simulated environment demonstrates that the proposed method achieves lower communication latency and better responsiveness than polling-based schemes, while offering enhanced scalability and robustness. This work provides a feasible and secure decentralized control solution for future UAV swarm systems. Full article
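To make the position transformation concrete, the Python sketch below converts a follower's relative slot (distance and bearing from the leader) into geographic coordinates and checks the result with a Haversine distance. The parameter names, the alternating left/right slot rule, and the V geometry are our assumptions rather than the paper's exact control law.

import math

R = 6371000.0  # mean Earth radius in metres

def offset_to_geo(lat, lon, distance_m, bearing_deg):
    # Project a (distance, bearing) offset from the leader onto lat/lon
    # using the great-circle destination-point formula.
    lat1, lon1 = math.radians(lat), math.radians(lon)
    brg, d = math.radians(bearing_deg), distance_m / R
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def haversine(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    a = (math.sin((p2 - p1) / 2) ** 2
         + math.cos(p1) * math.cos(p2)
         * math.sin(math.radians(lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def v_formation(lat, lon, heading_deg, n_followers,
                spacing_m=20.0, half_angle_deg=30.0):
    # Assign followers alternately to the left/right arms of a V behind
    # the leader; angle and distance parameters shape the formation.
    slots = []
    for i in range(1, n_followers + 1):
        side = 1 if i % 2 else -1
        rank = (i + 1) // 2
        bearing = (heading_deg + 180 + side * half_angle_deg) % 360
        slots.append(offset_to_geo(lat, lon, rank * spacing_m, bearing))
    return slots

leader = (40.0, 116.0)
for slot in v_formation(*leader, heading_deg=90.0, n_followers=4):
    print(slot, round(haversine(*leader, *slot), 2), "m from leader")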
16 pages, 1471 KB  
Article
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLM-Generated Text
by Ayat A. Najjar, Huthaifa I. Ashqar, Omar Darwish and Eman Hammad
Information 2025, 16(9), 767; https://doi.org/10.3390/info16090767 - 4 Sep 2025
Abstract
The rapid development of generative AI Large Language Models (LLMs) has raised concerns about distinguishing content produced by generative AI from content written by humans. Issues arise, for instance, when students rely heavily on such tools in ways that can stunt the development of their writing or coding skills; plagiarism concerns also apply. This study aims to support efforts to detect and identify textual content generated using LLM tools. We hypothesize that LLM-generated text is detectable by machine learning (ML) and investigate ML models that can recognize and differentiate between texts generated by humans and by multiple LLM tools. We used a dataset of student-written text in comparison with LLM-written text. We leveraged several ML and Deep Learning (DL) algorithms, such as Random Forest (RF) and Recurrent Neural Networks (RNNs), and utilized Explainable Artificial Intelligence (XAI) to understand which features matter most in attribution. Our method is divided into (1) binary classification to differentiate between human-written and AI-generated text and (2) multi-classification to differentiate between human-written text and text generated by five different LLM tools (ChatGPT, LLaMA, Google Bard, Claude, and Perplexity). Results show high accuracy in both binary and multi-class settings: our model reached 98.5% accuracy, outperforming GPTZero (78.3%). Notably, GPTZero failed to classify about 4.2% of the observations, whereas our model classified the complete test dataset. XAI results showed that feature importance differs across classes, enabling detailed author/source profiles; these profiles aid attribution and support plagiarism detection by highlighting distinctive stylistic and structural elements, thereby strengthening verification of content originality. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
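As a simplified stand-in for the attribution pipeline, the sketch below trains a Random Forest on TF-IDF features and reads impurity-based feature importances as a basic explainability signal. The four-sentence corpus is fabricated for illustration, and the study's actual features, dataset, and XAI tooling are not reproduced here.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny fabricated corpus standing in for student-written vs. LLM-written text.
texts = [
    "we tried a few settings and honestly the second one worked best",
    "in conclusion, it is important to note the multifaceted implications",
    "my code broke twice before the loop finally ran",
    "furthermore, this comprehensive analysis underscores several key insights",
]
labels = ["human", "ai", "human", "ai"]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Impurity-based importances indicate which terms most separate the
# classes -- a crude but transparent attribution signal.
top = sorted(zip(vec.get_feature_names_out(), clf.feature_importances_),
             key=lambda t: -t[1])[:5]
print(top)

A fuller pipeline would add a held-out test split and per-class explanations on top of this baseline.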
18 pages, 1495 KB  
Article
Retrieval-Augmented Generation vs. Baseline LLMs: A Multi-Metric Evaluation for Knowledge-Intensive Content
by Aparna Vinayan Kozhipuram, Samar Shailendra and Rajan Kadel
Information 2025, 16(9), 766; https://doi.org/10.3390/info16090766 - 4 Sep 2025
Abstract
(1) Background: The development of Generative Artificial Intelligence (GenAI) is transforming knowledge-intensive domains such as education. However, Large Language Models (LLMs), the foundational components of GenAI tools, are trained on static datasets and often produce misleading, factually incorrect, or outdated responses. Our study quantifies the performance gains of Retrieval-Augmented LLMs over baseline LLMs and examines the trade-off between smaller-parameter LLMs augmented with user-specific data and larger-parameter LLMs. (2) Methods: We experimented with four LLMs of different parameter counts to generate outputs, then evaluated those outputs across seven lexical and semantic metrics to identify performance trends in Retrieval-Augmented Generation (RAG)-Augmented LLMs and to analyze the impact of parameter size on LLM performance. (3) Results and Discussion: We synthesized 968 different output combinations across four model sizes: TinyLlama 1.1B, Mistral 7B, Llama 3.1 8B, and Llama 1 13B. The analyses were grouped into two themes: percentage improvements of RAG-Augmented LLMs over baseline LLMs, and the trade-off potential of RAG-Augmented smaller-parameter LLMs against larger-parameter LLMs. Our experiments show that RAG-Augmented LLMs achieve high lexical and semantic scores relative to baseline LLMs, making RAG augmentation a compelling way to reduce parameter counts and lower overall resource demands. (4) Conclusions: The findings indicate that, with RAG augmentation, smaller-parameter LLMs can perform as well as or better than larger-parameter LLMs, with particularly strong lexical improvements. RAG augmentation reduces the risk of hallucination and keeps output more contextualized, making smaller augmented models a better choice for knowledge-intensive content in academic and research settings. Full article
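To illustrate the retrieval step that distinguishes a RAG-Augmented LLM from its baseline, the sketch below ranks a user-specific corpus by TF-IDF cosine similarity and grounds the prompt in the top matches. The documents and query are invented, and any generator model could consume the resulting prompt.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented user-specific corpus, standing in for course notes or local documents.
docs = [
    "The 2025 syllabus replaces module 3 with reinforcement learning.",
    "Assignment 2 is due in week 9 and covers transformer architectures.",
    "Office hours moved to Thursdays this semester.",
]

vec = TfidfVectorizer().fit(docs)
doc_vecs = vec.transform(docs)

def retrieve(query, k=2):
    # Rank documents by cosine similarity to the query; keep the top k.
    sims = cosine_similarity(vec.transform([query]), doc_vecs)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

query = "When is assignment 2 due?"
context = "\n".join(retrieve(query))
# The grounded prompt is then passed to the baseline LLM of any size.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)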