Information, Volume 16, Issue 8 (August 2025) – 90 articles

Cover Story: This paper proposes a multi-hop P2P streaming architecture for conversation-aware communication; it replaces the central media server with a decentralized WebRTC system using super node aggregation. Streams are dynamically routed through super nodes, enabling real-time topology adaptation to conversation changes. WebRTC data channels handle signaling and overlay updates for efficient dissemination. The system focuses on rapid resource reallocation after external triggers, maintaining alignment with interaction patterns. Evaluated via Docker simulation under dynamic networks, it shows strong suitability for adaptive naturalistic communication.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and open it with the free Adobe Reader.
20 pages, 1919 KB  
Article
Management of Virtualized Railway Applications
by Ivaylo Atanasov, Evelina Pencheva and Kamelia Nikolova
Information 2025, 16(8), 712; https://doi.org/10.3390/info16080712 - 21 Aug 2025
Abstract
Robust, reliable, and secure communications are essential for efficient railway operation and keeping employees and passengers safe. The Future Railway Mobile Communication System (FRMCS) is a global standard aimed at providing innovative, essential, and high-performance communication applications in railway transport. In comparison with the legacy communication system (GSM-R), it provides high data rates, ultra-high reliability, and low latency. The FRMCS architecture will also benefit from cloud computing, following the principles of the cloud-native 5G core network design based on Network Function Virtualization (NFV). In this paper, an approach to the management of virtualized FRMCS applications is presented. First, the key management functionality related to the virtualized FRMCS application is identified based on an analysis of the different use cases. Next, this functionality is synthesized as RESTful services. The communication between application management and the services is designed as Application Programming Interfaces (APIs). The APIs are formally verified by modeling the management states of an FRMCS application instance from different points of view, and it is mathematically proved that the management state models are synchronized in time. The latency introduced by the designed APIs, as a key performance indicator, is evaluated through emulation.
(This article belongs to the Section Information Applications)
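The paper's claim that differently scoped management-state models stay synchronized can be illustrated with a toy check (the state and event names below are invented for illustration, not taken from the paper): drive two views of an application instance with the same event trace and verify they always agree on whether the instance is live.

```python
# Hypothetical two-view state model; names are illustrative, not FRMCS spec.
NFV_VIEW = {
    ("null", "instantiate"): "instantiated",
    ("instantiated", "terminate"): "null",
}
APP_VIEW = {
    ("stopped", "instantiate"): "running",
    ("running", "terminate"): "stopped",
}

def run(view, state, events):
    """Replay an event trace through one state machine, recording every state."""
    trace = [state]
    for e in events:
        state = view[(state, e)]
        trace.append(state)
    return trace

def synchronized(events):
    nfv = run(NFV_VIEW, "null", events)
    app = run(APP_VIEW, "stopped", events)
    # Both views must agree at every step on "is the instance live?"
    return all((n == "instantiated") == (a == "running") for n, a in zip(nfv, app))

ok = synchronized(["instantiate", "terminate", "instantiate"])
```

A model checker over the product of the two machines generalizes this replay-based check; the sketch only verifies one trace.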

30 pages, 725 KB  
Article
Balancing Tradition and Digitalization: Enhancing Museum Experiences in the Post-Pandemic Era
by Vasile Gherheș, Claudiu Coman, Anna Bucs, Marian Dalban and Dragoș Bulz
Information 2025, 16(8), 711; https://doi.org/10.3390/info16080711 - 20 Aug 2025
Abstract
This study analyzes how museums in Brașov County integrated digital technologies into their activities during the COVID-19 pandemic, with a focus on online communication and audience interaction. This research is based on a mixed-methods approach, including content analysis, semi-structured interviews with museum representatives, and a questionnaire applied to the visiting public. The aim is to identify the digital strategies used, the challenges encountered, and visitors’ perceptions regarding the usefulness of these tools. The results indicate an accelerated but uneven adoption of digital technologies, influenced by available resources, internal competencies, and institutional support. Frequent online interaction is positively correlated with the perceived quality of digital content, and openness to virtual activities is higher among younger and more educated audiences. Identified limitations include the lack of specialized personnel, reduced budgets, and administrative difficulties. This study emphasizes the need for institutional reforms and investments in digitalization to ensure the sustainability of the digital transition, without losing the value of the physical museum experience.
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

10 pages, 466 KB  
Article
The Negative Concord Mystery: Insights from a Language Model
by William O’Grady, Haopeng Zhang and Miseon Lee
Information 2025, 16(8), 710; https://doi.org/10.3390/info16080710 - 20 Aug 2025
Abstract
An important recent development in the field of linguistics is the use of small language models to investigate language acquisition. Following this line of research, we investigate the mysterious appearance of ‘negative concord’ (e.g., I didn’t do nothing) in the speech of children whose environment offers no exposure to patterns of this sort. Drawing on a 10-million-word version of the BabyLM corpus, we show that the preference for negative concord over patterns involving a single negative (e.g., I did nothing) can be traced to a cognitive force known as biuniqueness, whose effects will be examined with the help of data from both natural speech and a language model.
26 pages, 2266 KB  
Article
A Phrase Fill-in-Blank Problem in a Client-Side Web Programming Assistant System
by Huiyu Qi, Zhikang Li, Nobuo Funabiki, Htoo Htoo Sandi Kyaw and Wen Chung Kao
Information 2025, 16(8), 709; https://doi.org/10.3390/info16080709 - 20 Aug 2025
Abstract
Mastering client-side Web programming is essential for the development of responsive and interactive Web applications. To support novice students’ self-study, in this paper, we propose a novel exercise format called the phrase fill-in-blank problem (PFP) in the Web Programming Learning Assistant System (WPLAS). A PFP instance presents source code with blanked phrases (sets of elements) together with the corresponding Web page screenshots. The user is asked to fill in the blanks, and the answers are automatically evaluated through string matching against predefined correct answers. By increasing the number of blanks, a PFP instance can approach writing code from scratch. To facilitate scalable and context-aware question creation, we implemented the PFP instance generation algorithm in Python using regular expressions. This approach targets meaningful code segments in HTML, CSS, and JavaScript that reflect the interactive behavior of front-end development. For evaluation, we generated 10 PFP instances for basic Web programming topics and 5 instances for video games and assigned them to students at Okayama University, Japan, and the State Polytechnic of Malang, Indonesia. The results show that most students could solve the instances correctly, indicating their effectiveness and accessibility. In addition, we investigated the ability of generative AI, specifically ChatGPT, to solve the PFP instances; it reached 86.7% accuracy on basic-topic instances. Although it cannot yet solve every instance, its progress should be monitored carefully. In future work, we will enhance PFP in WPLAS to handle non-unique answers by improving answer validation for flexible recognition of equivalent responses.
(This article belongs to the Special Issue Software Applications Programming and Data Security)
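The regex-based instance generation the abstract describes can be sketched roughly as follows; the function names and the `____N____` blank format are assumptions for illustration, not WPLAS code. Target phrases are blanked out of a code snippet and answers are graded by string matching.

```python
import re

def make_pfp(source, phrases):
    """Replace each target phrase with a numbered blank; return (problem, key)."""
    key = {}
    problem = source
    for i, phrase in enumerate(phrases, 1):
        blank = f"____{i}____"
        # re.escape so phrases containing (), ., etc. are matched literally
        problem = re.sub(re.escape(phrase), blank, problem, count=1)
        key[blank] = phrase
    return problem, key

def grade(answers, key):
    """String-match grading: answers maps blank id -> student input."""
    return {b: answers.get(b, "").strip() == correct for b, correct in key.items()}

src = 'document.getElementById("btn").addEventListener("click", handler);'
problem, key = make_pfp(src, ["getElementById", "addEventListener"])
```

Handling the non-unique answers mentioned as future work would require replacing the exact-match `grade` with an equivalence check.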

20 pages, 757 KB  
Article
Exploring Twitch Viewers’ Donation Intentions from a Dual Perspective: Uses and Gratifications Theory and the Practice of Freedom
by José Magano, Manuel Au-Yong-Oliveira and Antonio Sánchez-Bayón
Information 2025, 16(8), 708; https://doi.org/10.3390/info16080708 - 19 Aug 2025
Abstract
This study examines the factors that motivate viewers to financially support streamers on the Twitch digital platform. It proposes a conceptual framework that combines the uses and gratifications theory (UGT) with Michel Foucault’s concept of the practice of freedom (PF). Using a cross-sectional quantitative survey of 560 Portuguese Twitch users, the model investigates how three core constructs from UGT—entertainment, socialization, and informativeness—affect the intention to donate, with PF acting as a mediating variable. Structural equation modeling confirms that all three UGT-based motivations significantly influence donation intentions, with socialization exhibiting the strongest mediated effect through PF. The findings reveal that Twitch donations go beyond mere instrumental or playful actions; they serve as performative expressions of identity, autonomy, and ethical subjectivity. By framing PF as a link between interpersonal engagement and financial support, this study provides a contribution to media motivation research. The theoretical integration enhances our understanding of pro-social behavior in live streaming environments, challenging simplistic, transactional interpretations of viewer contributions vis-à-vis more political ones and the desire to freely dispose of what is ours to give. Additionally, this study may lay the groundwork for future inquiries into how ethical self-formation is intertwined with monetized online participation, offering useful insights for academics, platform designers, and content creators seeking to promote meaningful digital interactions.

31 pages, 5952 KB  
Article
Low-Cost Smart Cane for Visually Impaired People with Pathway Surface Detection and Distance Estimation Using Weighted Bounding Boxes and Depth Mapping
by Teepakorn Mungdee, Prakaidaw Ramsiri, Kanyarak Khabuankla, Pipat Khambun, Thanakrit Nupim and Ponlawat Chophuk
Information 2025, 16(8), 707; https://doi.org/10.3390/info16080707 - 19 Aug 2025
Abstract
Visually impaired individuals are at a high risk of accidents due to sudden changes in walking surfaces and surrounding obstacles. Existing smart cane systems lack the capability to detect pathway surface transition points with accurate distance estimation and danger-level assessment. This study proposes a low-cost smart cane that integrates a novel Pathway Surface Transition Point Detection (PSTPD) method with enhanced obstacle detection. The system employs dual RGB cameras, an ultrasonic sensor, and YOLO-based models to deliver real-time alerts based on object type, surface class, distance, and severity. It comprises three modules: (1) obstacle detection and classification into mild, moderate, or severe levels; (2) pathway surface detection across eight surface types with distance estimation using weighted bounding boxes and depth mapping; and (3) auditory notifications. Experimental results show a mean Average Precision (mAP@50) of 0.70 for obstacle detection and 0.92 for surface classification. The average distance estimation error was 0.3 cm for obstacles and 4.22 cm for pathway surface transition points. Additionally, the PSTPD method demonstrated efficient processing with an average runtime of 0.6 s per instance.
(This article belongs to the Special Issue AI and Data Analysis in Smart Cities)
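The "weighted bounding boxes and depth mapping" idea can be sketched as a center-weighted average of depth values inside a detection box; the linear-decay weighting below is an assumption for illustration, not the paper's exact formulation.

```python
def box_distance(depth, box):
    """Weighted mean depth inside a box.

    depth: 2D list of distances in metres; box: (x1, y1, x2, y2) inclusive pixels.
    Pixels nearer the box centre get larger weights, so the estimate favours
    the detected object over background at the box edges.
    """
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    num = den = 0.0
    for y in range(y1, y2 + 1):
        for x in range(x1, x2 + 1):
            # weight decays with Manhattan distance from the box centre
            w = 1.0 / (1.0 + abs(x - cx) + abs(y - cy))
            num += w * depth[y][x]
            den += w
    return num / den

depth = [[2.0, 2.0, 2.0],
         [2.0, 1.0, 2.0],   # the object (1 m away) sits at the box centre
         [2.0, 2.0, 2.0]]
d = box_distance(depth, (0, 0, 2, 2))
```

With a plain unweighted mean the background at 2 m would dominate; the weighting pulls the estimate toward the centred object.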

21 pages, 296 KB  
Article
Empirical Research on the Influencing Factors and Causal Relationships of Enterprise Positive Topic Heat on Online Social Platforms
by Li Fu, Kai Xu and Jiakun Wang
Information 2025, 16(8), 706; https://doi.org/10.3390/info16080706 - 19 Aug 2025
Abstract
Purpose: As online social platforms have increasingly become a key arena for enterprises to disseminate information, enhancing the heat of enterprises’ positive topics on such platforms plays a critical role in improving brand value and achieving high-quality development. Against this backdrop, it is critically important for enterprises to precisely identify the key factors influencing positive topic heat and to reveal the causal relationships between these factors and topic heat. Method: This paper focuses on the influencing factors and causal pathways of enterprise positive topic heat on online social platforms. To this end, it collects data on enterprise positive topic publicity from Sina Weibo, takes topic heat as the dependent variable, and uses regression analysis to examine its causal relationships with independent variables such as topic host attribute and activity level. Subsequently, a cost performance indicator was constructed and quantitatively evaluated to assess the effectiveness of different host attributes based on both performance and cost. Result: This research shows that topic hosts play an active role in the topic spreading process, but some exhibit an ability to increase topic heat that is asymmetric to their activity level. The relationship between topic heat contribution and a host’s activity level varies with the host attribute. In addition, hosts’ cost performance differs significantly across attributes. By empirically analyzing the influencing factors of positive topic heat and revealing their underlying mechanisms, enterprises can identify the key factors for enhancing positive topic heat on online social platforms and optimize decisions such as the selection of hosts for related topic promotion.
(This article belongs to the Section Information Applications)
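The regression analysis the abstract relies on reduces, in its simplest one-predictor form, to ordinary least squares of topic heat on a host-activity score; the sketch below uses invented data and a single predictor, whereas the study's model has several independent variables.

```python
def ols(x, y):
    """Simple ordinary least squares: return (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return intercept, slope

activity = [1, 2, 3, 4, 5]      # hypothetical host activity levels
heat = [3, 5, 7, 9, 11]         # hypothetical topic heat (exactly linear here)
a, b = ols(activity, heat)
```

A positive slope `b` would correspond to hosts' activity contributing to topic heat; the asymmetry the paper reports would show up as slopes that differ across host attributes.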
22 pages, 747 KB  
Article
Unpacking the Black Box: How AI Capability Enhances Human Resource Functions in China’s Healthcare Sector
by Xueru Chen, Maria Pilar Martínez-Ruiz, Elena Bulmer and Benito Yáñez-Araque
Information 2025, 16(8), 705; https://doi.org/10.3390/info16080705 - 19 Aug 2025
Abstract
Artificial intelligence (AI) is transforming organizational functions across sectors; however, its application to human resource management (HRM) within healthcare remains underexplored. This study aims to unpack the black-box nature of AI capability’s impact on HR functions within China’s healthcare sector, a domain undergoing rapid digital transformation, driven by national innovation policies. Grounded in resource-based theory, the study conceptualizes AI capability as a multidimensional construct encompassing tangible resources, human resources, and organizational intangibles. Using a structural equation modeling approach (PLS-SEM), the analysis draws on survey data from 331 professionals across five hospitals in three Chinese cities. The results demonstrate a strong, positive, and statistically significant relationship between AI capability and HR functions, accounting for 75.2% of the explained variance. These findings indicate that AI capability enhances HR performance through smarter recruitment, personalized training, and data-driven talent management. By empirically illuminating the mechanisms linking AI capability to HR outcomes, the study contributes to theoretical development and offers actionable insights for healthcare administrators and policymakers. It positions AI not merely as a technological tool but as a strategic resource to address talent shortages and improve equity in workforce distribution. This work helps to clarify a previously opaque area of AI application in healthcare HRM.
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)

21 pages, 2544 KB  
Article
Towards Fair Graph Neural Networks via Counterfactual and Balance
by Zhiguo Xiao, Yangfan Zhou, Dongni Li and Ke Wang
Information 2025, 16(8), 704; https://doi.org/10.3390/info16080704 - 19 Aug 2025
Abstract
In recent years, graph neural networks (GNNs) have shown powerful performance in processing non-Euclidean data. However, similar to other machine-learning algorithms, GNNs can amplify data bias in high-risk decision-making systems, which can easily lead to unfairness in the final decision-making results. At present, a large number of studies focus on solving the fairness problem of GNNs, but the existing methods mostly rely on building complex model architectures or on techniques from outside the GNN field. To this end, this paper proposes FairCNCB (Fair Graph Neural Network based on Counterfactual and Category Balance) to address the problem of class imbalance in minority sensitive-attribute groups. First, we conduct a causal analysis of fair representation and employ an adversarial network to generate counterfactual node samples, effectively mitigating bias induced by sensitive attributes. Second, we calculate the weights for minority sensitive-attribute groups and reconstruct the loss function to achieve fairness across sensitive-attribute classes among different groups. The synergy between the two modules optimizes GNNs from multiple dimensions and significantly improves their fairness. The experimental results on three datasets show the effectiveness and fairness of FairCNCB: performance metrics (such as AUC, F1, and ACC) improve by approximately 2%, and fairness metrics (Δsp, Δeo) improve by approximately 5%.
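The reweighting half of the method, giving minority sensitive-attribute groups larger loss weights, can be sketched as follows. This is an illustrative inverse-frequency scheme applied to a weighted negative log-likelihood, not the paper's code; the FairCNCB weights may be derived differently.

```python
import math
from collections import Counter

def group_weights(groups):
    """Inverse-frequency weight per sensitive-attribute group, mean-normalised to 1."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return {g: n / (k * c) for g, c in counts.items()}

def weighted_nll(probs, groups, weights):
    """probs[i] = model probability assigned to sample i's true class."""
    total = sum(weights[g] * -math.log(p) for p, g in zip(probs, groups))
    return total / len(probs)

groups = ["a"] * 8 + ["b"] * 2          # "b" is the minority group
w = group_weights(groups)
loss = weighted_nll([0.9] * 10, groups, w)
```

Because `w["b"] > w["a"]`, misclassifying a minority-group node costs more, pushing the optimizer toward balanced performance across groups.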

26 pages, 2107 KB  
Article
TSRACE-AI: Traffic Sign Recognition Accelerated with Co-Designed Edge AI Based on Hybrid FPGA Architecture for ADAS
by Abderrahmane Smaali, Said Ben Alla and Abdellah Touhafi
Information 2025, 16(8), 703; https://doi.org/10.3390/info16080703 - 18 Aug 2025
Abstract
The need for efficient and real-time traffic sign recognition has become increasingly important as autonomous vehicles and Advanced Driver Assistance Systems (ADASs) continue to evolve. This study introduces TSRACE-AI, a system that accelerates traffic sign recognition by combining hardware and software in a hybrid architecture deployed on the PYNQ-Z2 FPGA platform. The design employs the Deep Learning Processing Unit (DPU) for hardware acceleration and incorporates 8-bit fixed-point quantization to enhance the performance of the CNN model. The proposed system achieves a 98.85% reduction in latency and a 200.28% increase in throughput compared to similar works, with a trade-off of a 90.35% decrease in power efficiency. Despite this trade-off, the system excels in latency-sensitive applications, demonstrating its suitability for real-time decision-making. By balancing speed and power efficiency, TSRACE-AI offers a compelling solution for integrating traffic sign recognition into ADAS, paving the way for enhanced autonomous driving capabilities.
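The 8-bit quantization step mentioned in the abstract can be sketched as symmetric int8 quantization of a weight vector; the max-abs scale-selection policy here is a common default assumed for illustration, not necessarily what the Vitis AI toolchain does for the DPU.

```python
def quantize_int8(values):
    """Map floats to int8 range [-127, 127] with one shared scale."""
    scale = max(abs(v) for v in values) / 127.0 or 1.0  # avoid zero scale
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from quantized integers."""
    return [x * scale for x in q]

weights = [0.6, -1.27, 0.3, 0.0]        # toy CNN weights
q, s = quantize_int8(weights)
restored = dequantize(q, s)
```

The reconstruction error per value is bounded by about half the scale, which is why well-conditioned CNNs lose little accuracy while weights shrink to a quarter of float32 storage.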

25 pages, 3070 KB  
Article
Feeding Urban Rail Transit: Hybrid Microtransit Network Design Based on Parsimonious Continuum Approach
by Qian Ye, Yunyu Zhang, Kunzheng Wang, Xinghua Liu and Chunfu Shao
Information 2025, 16(8), 702; https://doi.org/10.3390/info16080702 - 18 Aug 2025
Abstract
In recent years, the passenger flow volume of conventional transit in major cities has declined steadily. Ground public transit often suffers from congestion during rush hours caused by frequent stops (e.g., conventional fixed-route buses) or excessively high operating costs (e.g., demand-responsive transit). While rail transit offers reliable service with dedicated right-of-way, its high capital and operational costs pose challenges for integrated planning with other transit modes. The joint design of rail, conventional buses, and demand-responsive transit remains underexplored. To bridge this gap, this paper proposes and analyzes a new hybrid transit system that integrates conventional transit service with demand-adaptive transit (DAT) to feed urban rail transit (hence called a hybrid microtransit system). The main task is to optimally design the hybrid microtransit system so as to allocate resources efficiently across the different modes. Both the conventional transit and the DAT connect passengers from their origins/destinations to the rail transit stations. Travelers can choose one of these services to access urban rail transit, or walk directly. Accordingly, we divide the service area into three parts and compute the user costs of accessing rail transit by conventional transit and by DAT. The optimal design problem is then formulated as a mixed integer program that minimizes the total system cost, which includes both user and agency (operating) costs. Numerical experiment results demonstrate that the hybrid microtransit system outperforms a system fed only by conventional transit at all demand levels, achieving up to a 7% reduction in total system cost. These findings may help resolve the “first-mile” challenge of rail transit in megacities through better design of conventional transit and DAT.
(This article belongs to the Special Issue Big Data Analytics in Smart Cities)
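The three-part division of the service area follows from comparing generalized access costs per traveler; a minimal sketch, with invented placeholder cost functions rather than the paper's continuum formulas, picks the cheapest of walking, the conventional feeder, or DAT for a given distance to the station.

```python
def access_mode(dist_km, wait_bus=5.0, wait_dat=8.0,
                walk_speed=5.0, bus_speed=20.0, dat_speed=30.0):
    """Return (mode, generalised cost in minutes) for one traveller.

    Hypothetical costs: walking has no wait; bus and DAT add a waiting time
    (DAT waits longer but, being door-to-door, is assumed faster in motion).
    """
    costs = {
        "walk": 60 * dist_km / walk_speed,
        "bus": wait_bus + 60 * dist_km / bus_speed,
        "dat": wait_dat + 60 * dist_km / dat_speed,
    }
    mode = min(costs, key=costs.get)
    return mode, costs[mode]

near, _ = access_mode(0.3)   # short access trips
far, _ = access_mode(5.0)    # long access trips
```

The distance thresholds where the cheapest mode switches are exactly the zone boundaries; the paper's mixed integer program then sizes each service so that total user plus agency cost is minimized.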

16 pages, 1396 KB  
Article
Multi-Dimensional Control Rules and Assessment Methods for Surface Engineering Data Quality in Oil and Gas Field
by Taiwu Xia, Feng Wang, Zhan Huang, Wei Zhang, Gangping Chen, Jun Zhou and Cui Liu
Information 2025, 16(8), 701; https://doi.org/10.3390/info16080701 - 18 Aug 2025
Abstract
The current digital delivery of surface engineering in oil and gas fields faces challenges such as difficulty in integrating multiple heterogeneous data sources, low efficiency in quality reviews, and a lack of unified evaluation standards, which seriously restrict the implementation of intelligent operation and maintenance. To address this, this study constructs multi-dimensional control rules for data quality covering the entire lifecycle. Based on the characteristics of structured, semi-structured, and unstructured data, five-dimensional review criteria and quantification methods covering normativity, integrity, consistency, accuracy, and timeliness were developed. At the same time, by integrating the analytic hierarchy process (AHP) and the entropy weight method (EWM), a combined subjective and objective weight evaluation model was established to achieve scientific quantitative calculation of quality indicators. Verification with a project by Southwest Oil and Gas Field shows that the system effectively achieves quantifiable diagnosis and traceability of engineering data quality, revealing the differentiation characteristics of different data types in the quality dimension. The research results provide core methodological support for the establishment of an integrated data governance paradigm of “collection—review—operation and maintenance” in oil and gas fields, facilitating the implementation of intelligent operation and maintenance.
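The objective half of the combined weighting, the entropy weight method, is standard enough to sketch: indicators whose values vary more across items carry more information and receive larger weights. The data below are invented; the paper combines these objective weights with subjective AHP weights.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: matrix[i][j] = value of indicator j for item i (positive)."""
    n, m = len(matrix), len(matrix[0])
    degrees = []
    for j in range(m):
        col = [row[j] for row in matrix]
        s = sum(col)
        p = [v / s for v in col]                      # normalise column to proportions
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        degrees.append(1 - e)                         # divergence degree of indicator j
    total = sum(degrees)
    return [d / total for d in degrees]

scores = [[0.9, 0.5],
          [0.1, 0.5],
          [0.5, 0.5]]   # indicator 0 varies across items; indicator 1 is constant
w = entropy_weights(scores)
```

A constant indicator has maximal entropy and thus zero divergence, so all the weight flows to the indicator that actually discriminates between items.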

26 pages, 1886 KB  
Article
Path Planning with Adaptive Autonomy Based on an Improved A* Algorithm and Dynamic Programming for Mobile Robots
by Muhammad Aatif, Muhammad Zeeshan Baig, Umar Adeel and Ammar Rashid
Information 2025, 16(8), 700; https://doi.org/10.3390/info16080700 - 17 Aug 2025
Abstract
Sustainable path-planning algorithms are essential for executing complex user-defined missions by mobile robots. Addressing various scenarios with a unified criterion during the design phase is often impractical due to the potential for unforeseen situations. Therefore, it is important to incorporate the concept of adaptive autonomy for path planning. This approach allows the system to autonomously select the best path-planning strategy. The technique utilizes dynamic programming with an adaptive memory size, leveraging a cellular decomposition technique to divide the map into convex cells. The path is divided into three segments: the first segment connects the starting point to the center of the starting cell, the second segment connects the center of the goal cell to the goal point, and the third segment connects the center of the starting cell to the center of the goal cell. Since each cell is convex, internal path planning simply requires a straight line between two points within a cell. Path planning uses an improved A* (I-A*) algorithm, which evaluates the feasibility of a direct path to the goal from the current position during execution. When a direct path is discovered, the algorithm promptly returns and saves it in memory. The memory size is proportional to the square of the total number of cells, and it stores paths between the centers of cells. By storing and reusing previously calculated paths, this method significantly reduces redundant computation and supports long-term sustainability in mobile robot deployments. The final phase of the path-planning process involves pruning, which eliminates unnecessary waypoints. This approach obviates the need for repetitive path planning across different scenarios thanks to its compact memory size. As a result, paths can be swiftly retrieved from memory when needed, enabling efficient and prompt navigation. Simulation results indicate that this algorithm consistently outperforms other algorithms in finding the shortest path quickly.
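The path memory keyed by cell pairs can be sketched as a cache over a search routine; for brevity the sketch uses BFS on an unweighted cell adjacency graph where the paper uses its improved A* (I-A*), and the names are illustrative.

```python
from collections import deque

_PATH_MEMO = {}   # (start_cell, goal_cell) -> list of cell ids, as in the paper's memory

def cell_path(adj, start, goal):
    """Shortest cell-centre route; BFS stands in for the I-A* search."""
    key = (start, goal)
    if key in _PATH_MEMO:            # previously computed path: O(1) lookup
        return _PATH_MEMO[key]
    prev = {start: None}
    q = deque([start])
    while q:
        u = q.popleft()
        if u == goal:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path, node = [], goal            # reconstruct route by walking predecessors
    while node is not None:
        path.append(node)
        node = prev[node]
    path.reverse()
    _PATH_MEMO[key] = path
    return path

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # toy convex-cell adjacency
first = cell_path(adj, 0, 3)
second = cell_path(adj, 0, 3)                  # served from memory
```

With C cells the memo holds at most C² entries, matching the abstract's statement that memory size grows with the square of the number of cells.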

17 pages, 1159 KB  
Article
Sports Analytics for Evaluating Injury Impact on NBA Performance
by Vangelis Sarlis, George Papageorgiou and Christos Tjortjis
Information 2025, 16(8), 699; https://doi.org/10.3390/info16080699 - 17 Aug 2025
Abstract
This study investigates the impact of injuries on National Basketball Association (NBA) player performance over 20 seasons, using large-scale performance data and a statistical evaluation. Injury events were matched with player–game performance metrics to assess how various injury types influence short-, medium-, and long-term performance outcomes, measured across 2-, 5-, and 10-game windows. Using paired sample t-tests and Cohen’s d, we quantified both the statistical significance and effect size of changes in key performance metrics before and after injury. Our results show that while most injury types are associated with measurable performance declines, especially in offensive and defensive ratings, certain categories, such as cardiovascular injuries, demonstrate counterintuitive improvements post-recovery. These patterns suggest that not all injuries have equivalent consequences and highlight the importance of individualized recovery protocols. This work contributes to the growing field of sports injury analytics by combining statistical modeling and sports analytics to deliver actionable insights for coaches, medical staff, and performance analysts in managing player rehabilitation and optimizing return-to-play decisions.
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)
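The before/after evaluation recipe, a paired-sample t statistic plus Cohen's d on the paired differences, can be sketched with the standard formulas; the numbers below are invented, not NBA data.

```python
import math
from statistics import mean, stdev

def paired_t_and_d(before, after):
    """Paired t statistic and Cohen's d for paired designs (d = mean diff / sd of diffs)."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    sd = stdev(diffs)                            # sample sd of the differences
    t = mean(diffs) / (sd / math.sqrt(n))        # paired-sample t statistic
    d = mean(diffs) / sd                         # effect size (Cohen's d_z)
    return t, d

before = [20.1, 18.4, 22.0, 25.3, 19.8]   # hypothetical points per game pre-injury
after = [17.9, 16.0, 20.5, 23.1, 18.2]    # hypothetical first games back
t, d = paired_t_and_d(before, after)
```

A strongly negative t (checked against the t distribution with n−1 degrees of freedom) flags a significant decline, while d grades its practical size; note that t = d·√n, so the two always agree in sign.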

16 pages, 281 KB  
Article
Modeling Concrete and Virtual Manipulatives for Mathematics Teacher Training: A Case Study in ICT-Enhanced Pedagogies
by Angela Ogbugwa Ochogboju and Javier Díez-Palomar
Information 2025, 16(8), 698; https://doi.org/10.3390/info16080698 - 17 Aug 2025
Abstract
This feature paper explores the comparative pedagogical roles of concrete and virtual manipulatives in preservice mathematics teacher education. Based on a design-based research (DBR) methodology, this study investigates the effects of tangible tools (e.g., base-ten blocks, fraction circles) and digital applications (e.g., GeoGebra Classic 6, Polypad) on preservice teachers’ problem solving, conceptual understanding, engagement, and instructional reasoning. Data were collected through surveys (n = 53), semi-structured interviews (n = 25), and classroom observations (n = 30) in a Spanish university’s teacher education program. Key findings show that both forms of manipulatives significantly enhance engagement and conceptual clarity, but are affected by logistical and digital access barriers. This paper further proposes a theoretically grounded model for simulating manipulatives through ICT-based environments, enabling scalable and adaptive mathematics teacher training. By linking constructivist learning theory, the Technologically Enhanced Learning Environment (TELE) framework, and simulation-based pedagogy, this model aims to replicate the cognitive, affective, and collaborative affordances of manipulatives in virtual contexts. Distinct from prior work, this study contributes an integrated theoretical and practical framework, contextualized through empirical classroom data, and presents a clear plan for real-world ICT-based implementation. The findings provide actionable insights for teacher educators, edtech developers, and policymakers seeking to expand equitable and engaging mathematics education through simulation and blended modalities.
(This article belongs to the Special Issue ICT-Based Modelling and Simulation for Education)
22 pages, 1330 KB  
Article
Internet Governance in the Context of Global Digital Contracts: Integrating SAR Data Processing and AI Techniques for Standards, Rules, and Practical Paths
by Xiaoying Fu, Wenyi Zhang and Zhi Li
Information 2025, 16(8), 697; https://doi.org/10.3390/info16080697 - 16 Aug 2025
Viewed by 363
Abstract
With the increasing frequency of digital economic activities on a global scale, internet governance has become a pressing issue. Traditional multilateral approaches to formulating internet governance rules have struggled to address critical challenges such as privacy leakage and low global internet defense capabilities. [...] Read more.
With the increasing frequency of digital economic activities on a global scale, internet governance has become a pressing issue. Traditional multilateral approaches to formulating internet governance rules have struggled to address critical challenges such as privacy leakage and low global internet defense capabilities. To tackle these issues, this study integrates SAR data processing and interpretation using AI techniques with the development of governance rules through international agreements and multi-stakeholder mechanisms. This approach aims to strengthen privacy protection and enhance the overall effectiveness of internet governance. This study incorporates differential privacy protection mechanisms and certificateless cryptography algorithms, combined with SAR data analysis powered by AI techniques, to address privacy protection and security challenges in internet governance. SAR data provides a unique layer of spatial and environmental context, which, when analyzed using advanced AI models, offers valuable insights into network patterns and potential vulnerabilities. By applying these techniques, internet governance can more effectively monitor and secure global data flows, ensuring a more robust defense against cyber threats. Experimental results demonstrate that the proposed approach significantly outperforms traditional methods. When processing 20 GB of data, encryption time was reduced by a factor of approximately 1.2 compared with other methods. Furthermore, satisfaction with the newly developed internet governance rules increased by 13.3%. By integrating SAR data processing and AI, the model enhances the precision and scalability of governance mechanisms, enabling real-time responses to privacy and security concerns. In the context of the Global Digital Compact, this research effectively improves the standards, rules, and practical pathways for internet governance. It not only enhances the security and privacy of global data networks but also promotes economic development, social progress, and national security. The integration of SAR data analysis and AI techniques provides a powerful toolset for addressing the complexities of internet governance in a digitally connected world. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)
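The differential privacy protection the abstract refers to can be illustrated with the classic Laplace mechanism: a query answer is released with noise scaled to sensitivity/ε. This is a generic sketch (the query, sensitivity, and ε values are invented for illustration), not the paper's algorithm:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value with Laplace(sensitivity/epsilon) noise added."""
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two independent Exp(1) draws.
    noise = scale * (rng.expovariate(1.0) - rng.expovariate(1.0))
    return true_value + noise

# Example: privately release a count of 128 monitored flows (epsilon = 0.5).
print(laplace_mechanism(128, 1.0, 0.5, random.Random(0)))
```

Smaller ε gives stronger privacy but noisier answers; counting queries have sensitivity 1 because one individual changes the count by at most one.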
22 pages, 3289 KB  
Article
Thematic Evolution of China’s Media Governance Policies: A Tri-Logic Synergistic Perspective
by Li Shao and Miao Ao
Information 2025, 16(8), 696; https://doi.org/10.3390/info16080696 - 16 Aug 2025
Viewed by 490
Abstract
China’s media governance policies play a crucial role in shaping media ecology and promoting the modernization of national governance capacity. This study employed the Latent Dirichlet Allocation (LDA) model and co-occurrence network analysis to systematically analyze the thematic content of national-level media governance [...] Read more.
China’s media governance policies play a crucial role in shaping media ecology and promoting the modernization of national governance capacity. This study employed the Latent Dirichlet Allocation (LDA) model and co-occurrence network analysis to systematically analyze the thematic content of national-level media governance policies issued in China between 1996 and 2024, and to examine the evolution of policy themes from a triple logical synergy perspective. Taking into account the socio-economic context and prevailing governance issues, this study categorizes the evolution of media governance policies into four distinct phases. The LDA model was used to extract high-frequency words, and a co-occurrence network was built to explore the structural relationships among them; a synergy framework was then applied to analyze the thematic evolution across periods. The findings indicate that China’s media governance policies over the past three decades have been the result of stage-by-stage adjustments under the synergistic influences of technological drivers, social demands, and governance philosophies. Media governance constitutes a pivotal component in the modernization of China’s national governance capacity. A comprehensive analysis of the evolution of policy themes reveals the internal pattern of media governance in China. Full article
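The co-occurrence step described above can be sketched in a few lines: count how often pairs of high-frequency terms appear in the same policy document. The mini-corpus below is invented for illustration (the study's pipeline additionally runs LDA for topic extraction):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(documents, top_k=5):
    """Edge-weighted co-occurrence network over the top_k most frequent terms.

    Terms are whitespace tokens; an edge (a, b) is weighted by the number
    of documents in which both terms appear.
    """
    tokenized = [set(doc.lower().split()) for doc in documents]
    freq = Counter(t for doc in tokenized for t in doc)
    vocab = {t for t, _ in freq.most_common(top_k)}
    edges = Counter()
    for doc in tokenized:
        for a, b in combinations(sorted(vocab & doc), 2):
            edges[(a, b)] += 1
    return edges

# Invented stand-ins for policy-document texts.
policies = [
    "internet content regulation",
    "internet platform governance regulation",
    "media convergence governance",
]
print(cooccurrence_network(policies, top_k=3))
```

The resulting weighted edge list can be fed to any graph library for centrality or community analysis across policy periods.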
27 pages, 18762 KB  
Article
From Data to Decision: A Semantic and Network-Centric Approach to Urban Green Space Planning
by Elisavet Parisi and Charalampos Bratsas
Information 2025, 16(8), 695; https://doi.org/10.3390/info16080695 - 16 Aug 2025
Viewed by 1052
Abstract
Urban sustainability poses a deeply interdisciplinary challenge, spanning technical fields like data science and environmental science, design-oriented disciplines like architecture and spatial planning, and domains such as economics, policy, and social studies. While numerous advanced tools are used in these domains, ranging from [...] Read more.
Urban sustainability poses a deeply interdisciplinary challenge, spanning technical fields like data science and environmental science, design-oriented disciplines like architecture and spatial planning, and domains such as economics, policy, and social studies. While numerous advanced tools are used in these domains, ranging from geospatial systems to AI and network analysis, they often remain fragmented, domain-specific, and difficult to integrate. This paper introduces a semantic framework that aims not to replace existing analytical methods, but to interlink their outputs and datasets within a unified, queryable knowledge graph. Leveraging semantic web technologies, the framework enables the integration of heterogeneous urban data, including spatial, network, and regulatory information, permitting advanced querying and pattern discovery across formats. Applying the methodology to two urban contexts—Thessaloniki (Greece) as a full implementation and Marine Parade GRC (Singapore) as a secondary test—we demonstrate its flexibility and potential to support more informed decision-making in diverse planning environments. The methodology reveals both opportunities and constraints shaped by accessibility, connectivity, and legal zoning, offering a reusable approach for urban interventions in other contexts. More broadly, the work illustrates how semantic technologies can foster interoperability among tools and disciplines, creating the conditions for truly data-driven, collaborative urban planning. Full article
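The queryable knowledge-graph idea can be illustrated with subject–predicate–object triples and a wildcard pattern matcher, in the spirit of a SPARQL basic graph pattern. The parcel names, predicates, and thresholds below are invented for illustration and are not the paper's ontology:

```python
def match(triples, s=None, p=None, o=None):
    """Return triples matching a (subject, predicate, object) pattern;
    None acts as a wildcard, like a variable in a SPARQL graph pattern."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Hypothetical integrated facts about candidate green-space parcels.
kg = [
    ("parcel:17", "zonedAs", "recreational"),
    ("parcel:17", "walkScore", 82),
    ("parcel:23", "zonedAs", "industrial"),
    ("parcel:23", "walkScore", 91),
]

# Parcels that are both legally available and well connected.
eligible = [s for s, _, _ in match(kg, p="zonedAs", o="recreational")
            if any(o >= 70 for _, _, o in match(kg, s=s, p="walkScore"))]
print(eligible)  # ['parcel:17']
```

A real implementation would store such triples in an RDF store and express the same query declaratively, but the cross-domain join (zoning plus connectivity) is the essential pattern.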
30 pages, 388 KB  
Article
Do Security and Privacy Attitudes and Concerns Affect Travellers’ Willingness to Use Mobility-as-a-Service (MaaS) Systems?
by Maria Sophia Heering, Haiyue Yuan and Shujun Li
Information 2025, 16(8), 694; https://doi.org/10.3390/info16080694 - 15 Aug 2025
Viewed by 340
Abstract
Mobility-as-a-Service (MaaS) represents a transformative shift in transportation, enabling users to plan, book, and pay for diverse mobility services via a unified digital platform. While previous research has explored factors influencing MaaS adoption, few studies have addressed users’ perspectives, particularly concerning data privacy [...] Read more.
Mobility-as-a-Service (MaaS) represents a transformative shift in transportation, enabling users to plan, book, and pay for diverse mobility services via a unified digital platform. While previous research has explored factors influencing MaaS adoption, few studies have addressed users’ perspectives, particularly concerning data privacy and cyber security. To address this gap, we conducted an online survey with 320 UK-based participants recruited via Prolific. This study examined psychological, demographic, and perceptual factors influencing individuals’ willingness to adopt MaaS, focusing on cyber security and privacy attitudes, as well as perceived benefits and costs. The results of a hierarchical linear regression model revealed that trust in how commercial websites manage personal data positively influenced willingness to use MaaS, highlighting the indirect role of privacy and security concerns. However, when additional predictors were included, this effect diminished, and perceptions of benefits and costs emerged as the primary drivers of MaaS adoption, with the model explaining 54.5% of variance. These findings suggest that privacy concerns are outweighed by users’ cost–benefit evaluations. The minimal role of trust and security concerns underscores the need for MaaS providers to proactively promote cyber security awareness, build user trust, and collaborate with researchers and policymakers to ensure ethical and secure MaaS deployment. Full article
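The hierarchical regression described above enters predictor blocks in stages and compares the explained variance at each stage. A pure-Python sketch with invented scores (trust, benefit, and cost are illustrative stand-ins for the survey constructs, not the study's data):

```python
def ols_r2(X, y):
    """R^2 of a least-squares fit of y on X (X includes the intercept column)."""
    n, k = len(X), len(X[0])
    # Normal equations A b = c with A = X'X, c = X'y.
    A = [[sum(X[i][r] * X[i][col] for i in range(n)) for col in range(k)]
         for r in range(k)]
    c = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], c[col], c[piv] = A[piv], A[col], c[piv], c[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
            c[r] -= f * c[col]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (c[r] - sum(A[r][j] * b[j] for j in range(r + 1, k))) / A[r][r]
    yhat = [sum(xi * bj for xi, bj in zip(row, b)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Invented data: willingness to use MaaS (y) vs. trust, benefit, and cost scores.
trust = [3, 4, 2, 5, 3, 4]
benefit = [2, 5, 1, 4, 3, 5]
cost = [4, 1, 5, 2, 4, 1]
y = [1, 9, -2, 7, 2, 9]

block1 = [[1, t] for t in trust]                         # step 1: trust only
block2 = [[1, t, b_, c_] for t, b_, c_ in zip(trust, benefit, cost)]  # step 2: add cost-benefit
r2_1, r2_2 = ols_r2(block1, y), ols_r2(block2, y)
print(r2_1, r2_2, r2_2 - r2_1)  # Delta R^2 = contribution of the cost-benefit block
```

The ΔR² between steps is what shows the cost–benefit block absorbing the explanatory power that trust alone appeared to have.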
25 pages, 394 KB  
Review
Quantum Computing Applications in Supply Chain Information and Optimization: Future Scenarios and Opportunities
by Mohammad Shamsuddoha, Mohammad Abul Kashem, Tasnuba Nasir, Ahamed Ismail Hossain and Md Foysal Ahmed
Information 2025, 16(8), 693; https://doi.org/10.3390/info16080693 - 15 Aug 2025
Viewed by 836
Abstract
Quantum computing is a groundbreaking innovation that can resolve complex supply chain problems that traditional computing techniques are unable to manage. Given a focus on information flow, optimization, and potential future applications, this study explores how supply chain management could utilize quantum computing. [...] Read more.
Quantum computing is a groundbreaking innovation that can resolve complex supply chain problems that traditional computing techniques are unable to manage. Given a focus on information flow, optimization, and potential future applications, this study explores how supply chain management could utilize quantum computing. The study used a mixed-methods approach, including scenario modeling, case studies of prominent companies, and literature reviews. The study intends to evaluate the function of quantum computing in dynamic route optimization, investigate how it can enhance supply chain resilience, and examine how it could optimize the flow of information for decision-making processes. Findings demonstrate that quantum computing offers unprecedented computational power for scenario analysis and decision-making and operates exceptionally well in activities like dynamic route optimization, parcel packaging, and reorganization during disruptions. For instance, companies like DHL and FedEx utilize quantum systems to improve efficiency substantially. However, constraints like high implementation costs, cybersecurity weaknesses, and technological infancy hinder widespread adoption. Further research should investigate hybrid solutions that integrate quantum and classical computing while addressing these obstacles. This paper concludes that although quantum computing has the potential to transform supply chains by improving information flow, resilience, and efficiency, its wider adoption will require overcoming current financial and technological challenges. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
18 pages, 862 KB  
Article
Integration of Multi-Criteria Decision-Making and Dimensional Entropy Minimization in Furniture Design
by Anna Jasińska and Maciej Sydor
Information 2025, 16(8), 692; https://doi.org/10.3390/info16080692 - 14 Aug 2025
Viewed by 312
Abstract
Multi-criteria decision analysis (MCDA) in furniture design is challenged by increasing product complexity and component proliferation. This study introduces a novel framework that integrates entropy reduction—achieved through dimensional standardization and modularity—as a core factor in the MCDA methodologies. The framework addresses both individual [...] Read more.
Multi-criteria decision analysis (MCDA) in furniture design is challenged by increasing product complexity and component proliferation. This study introduces a novel framework that integrates entropy reduction—achieved through dimensional standardization and modularity—as a core factor in the MCDA methodologies. The framework addresses both individual furniture evaluation and product family optimization through systematic complexity reduction. The research employed a two-phase methodology. First, a comparative analysis evaluated two furniture variants (laminated particleboard versus oak wood) using the Weighted Sum Model (WSM) and Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The divergent rankings produced by these methods revealed inherent evaluation ambiguities stemming from their distinct mathematical foundations, highlighting the need for additional decision criteria. Building on these findings, the study further examined ten furniture variants, identifying the potential to transform their individual components into universal components, applicable across various furniture variants (or configurations) in a furniture line. The proposed dimensional modifications enhance modularity and interoperability within product lines, simplifying design processes, production, warehousing logistics, product servicing, and liquidation at end of lifetime. The integration of entropy reduction as a quantifiable criterion within MCDA represents a significant methodological advancement. By prioritizing dimensional standardization and modularity, the framework reduces component variety while maintaining design flexibility. This approach offers furniture manufacturers a systematic method for balancing product diversity with operational efficiency, addressing a critical gap in current design evaluation practices. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
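The two methods compared in the study's first phase can be sketched side by side. The decision matrix below (durability as a benefit criterion; cost and weight as cost criteria) and the weights are invented for illustration and are not the study's data:

```python
import math

def wsm_scores(matrix, weights, benefit):
    """Weighted Sum Model over a min-max normalized decision matrix.
    benefit[j] is True for 'more is better' criteria, False for costs."""
    cols = list(zip(*matrix))
    norm = []
    for j, col in enumerate(cols):
        lo, hi = min(col), max(col)
        norm.append([(v - lo) / (hi - lo) if benefit[j] else (hi - v) / (hi - lo)
                     for v in col])
    return [sum(w * norm[j][i] for j, w in enumerate(weights))
            for i in range(len(matrix))]

def topsis_scores(matrix, weights, benefit):
    """TOPSIS closeness coefficients (higher is better)."""
    cols = list(zip(*matrix))
    # Vector normalization, then weighting.
    v = [[weights[j] * x / math.sqrt(sum(c * c for c in col)) for x in col]
         for j, col in enumerate(cols)]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(v)]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(v)]
    out = []
    for i in range(len(matrix)):
        dp = math.sqrt(sum((v[j][i] - ideal[j]) ** 2 for j in range(len(cols))))
        dn = math.sqrt(sum((v[j][i] - worst[j]) ** 2 for j in range(len(cols))))
        out.append(dn / (dp + dn))
    return out

# Hypothetical variants scored on durability, cost, and weight.
variants = [[6, 300, 25],   # laminated particleboard
            [9, 800, 40],   # oak wood
            [7, 450, 30]]   # an intermediate plywood variant
weights, benefit = [0.5, 0.3, 0.2], [True, False, False]
print(wsm_scores(variants, weights, benefit))
print(topsis_scores(variants, weights, benefit))
```

With these numbers, WSM ranks the third variant first while TOPSIS favors the first, reproducing in miniature the ranking divergence between the two methods that the study reports.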
23 pages, 348 KB  
Article
Exploring the Key Drivers of Financial Performance in the Context of Corporate and Public Governance: Empirical Evidence
by Georgeta Vintilă, Mihaela Onofrei, Alexandra Ioana Vintilă and Vasilica Izabela Fometescu
Information 2025, 16(8), 691; https://doi.org/10.3390/info16080691 - 14 Aug 2025
Viewed by 581
Abstract
This research focuses on analyzing the determinants of financial performance for the companies included in the Standard & Poor’s 500 index over the period from 2014 to 2023. To guide managerial decisions aimed at enhancing company performance, this study examines, as key drivers, [...] Read more.
This research focuses on analyzing the determinants of financial performance for the companies included in the Standard & Poor’s 500 index over the period from 2014 to 2023. To guide managerial decisions aimed at enhancing company performance, this study examines, as key drivers, the main financial indicators, core corporate governance characteristics, and U.S. public governance indicators. The investigation begins with a retrospective review of the specialized literature, highlighting the findings of previous studies in the field and providing the basis for selecting the variables used in the present empirical analysis. The research method employed is fixed-effects panel-data regression. The dependent variables are financial performance measures, such as the EBITDA margin, EBIT margin, net profit margin, and ROA. This study’s main results show that the price-to-book ratio, liquidity, sales growth, CEO duality, board gender diversity, ESG score, and U.S. regulatory quality exert a positive influence on financial performance. In contrast, the price-to-earnings ratio, net debt, capital intensity, R&D intensity, weighted average cost of capital, board independence, and the COVID-19 pandemic crisis have a negative impact on the financial performance of U.S. companies. The findings of this investigation could serve as benchmarks for supporting managerial decisions at the company level regarding the improvement of their financial performance. Full article
(This article belongs to the Special Issue Decision Models for Economics and Business Management)
11 pages, 3732 KB  
Article
Convolutional Autoencoders for Data Compression and Anomaly Detection in Small Satellite Technologies
by Dishanand Jayeprokash and Julia Gonski
Information 2025, 16(8), 690; https://doi.org/10.3390/info16080690 - 14 Aug 2025
Viewed by 387
Abstract
Small satellite technologies have enhanced the potential and feasibility of geodesic missions through the simplification of design and decreased costs allowing for more frequent launches. On-satellite data acquisition systems can benefit from the implementation of machine learning (ML) for better performance and greater [...] Read more.
Small satellite technologies have enhanced the potential and feasibility of geodetic missions through simplified designs and decreased costs, allowing for more frequent launches. On-satellite data acquisition systems can benefit from the implementation of machine learning (ML) for better performance and greater efficiency on tasks such as image processing or feature extraction. This work presents convolutional autoencoders for implementation on the payload of small satellites, designed to achieve the dual functionality of data compression for more efficient off-satellite transmission and at-source anomaly detection to inform satellite data-taking. This capability is demonstrated for the use case of disaster monitoring using aerial image datasets of the African continent, offering avenues for both the implementation of novel ML-based approaches in small satellite applications and the expansion of space technology and artificial intelligence in Africa. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
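The autoencoder's dual role, compression plus reconstruction-error anomaly detection, can be illustrated with a linear one-component autoencoder (PCA via power iteration). The paper uses convolutional networks on images, so this pure-Python version with toy vectors is only a schematic analogue of the principle:

```python
import math

def top_component(data, iters=100):
    """Leading principal direction via power iteration: a linear stand-in
    for the encoder of a single-bottleneck autoencoder."""
    dim = len(data[0])
    mean = [sum(x[j] for x in data) / len(data) for j in range(dim)]
    centered = [[x[j] - mean[j] for j in range(dim)] for x in data]
    v = [1.0] * dim
    for _ in range(iters):
        proj = [sum(r[j] * v[j] for j in range(dim)) for r in centered]  # X v
        v = [sum(p * r[j] for p, r in zip(proj, centered))               # X'(X v)
             for j in range(dim)]
        nrm = math.sqrt(sum(c * c for c in v))
        v = [c / nrm for c in v]
    return mean, v

def reconstruction_error(x, mean, v):
    """Encode x to one latent scalar, decode, and return the L2 error.
    Large errors flag anomalous inputs worth transmitting in full."""
    code = sum((x[j] - mean[j]) * v[j] for j in range(len(x)))
    recon = [mean[j] + code * v[j] for j in range(len(x))]
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, recon)))

# Toy "pixel feature" vectors that vary along one direction (nominal scenes).
train = [[t, 2 * t, 0.0] for t in (-2, -1, 0, 1, 2)]
mean, v = top_component(train)
print(reconstruction_error([1.0, 2.0, 0.0], mean, v))  # near 0: nominal input
print(reconstruction_error([0.0, 0.0, 5.0], mean, v))  # large: anomalous input
```

The latent `code` is the compressed representation sent off-satellite; the reconstruction error is the anomaly score computed at the source.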
34 pages, 6293 KB  
Article
A Novel Approach to State-to-State Transformation in Quantum Computing
by Artyom M. Grigoryan, Alexis A. Gomez and Sos S. Agaian
Information 2025, 16(8), 689; https://doi.org/10.3390/info16080689 - 13 Aug 2025
Viewed by 260
Abstract
This article presents a new approach to the problem of transforming one quantum state into another. It is shown that an r-qubit superposition |x⟩ can be obtained from another r-qubit superposition |y⟩ by using only [...] Read more.
This article presents a new approach to the problem of transforming one quantum state into another. It is shown that an r-qubit superposition |x⟩ can be obtained from another r-qubit superposition |y⟩ by using only (2^r − 1) rotations, each presented by one controlled rotation gate. Quantum superpositions with real amplitudes are considered. The traditional two-stage approach U_y^{-1}U_x: |x⟩ → |0⟩^{⊗r} → |y⟩ requires twice as many rotations. Here, both transformations to the conventional basis state, U_x: |x⟩ → |0⟩^{⊗r} and U_y: |y⟩ → |0⟩^{⊗r}, use (2^r − 1) rotations each on two binary planes, and many of these rotations require additional sets of CNOTs to be represented as 1- or 2-qubit-controlled gates. The proposed method is based on the concept of the discrete signal-induced heap transform (DsiHT), which is unitary and generated by a vector and a set of angular equations with given parameters. The quantum analog of this transform is described. The main characteristic of the DsiHT is the path along which the data are processed. It is shown that there exist fast paths that allow for efficient computation of the DsiHT, which leads to simple quantum circuits for state preparation and transformation. Examples of such paths are given, and quantum circuits for the preparation and transformation of 2-, 3-, and 4-qubit states are described in detail. No CNOT gates are used, only controlled gates of elementary rotations around the y-axis. It is shown that only rotation gates with control qubits are required for the transformation and, in particular, for the initialization of 2-, 3-, and 4-qubit states. The quantum circuits are simple and have a recursive form, which makes them easy to implement for an arbitrary r-qubit superposition with r ≥ 2. This approach significantly reduces the complexity of quantum state transformations, paving the way for more efficient quantum algorithms and practical implementations on near-term quantum devices. Full article
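The chain-of-rotations idea can be mimicked classically: a real unit vector of 2^r amplitudes is reduced to the basis vector e0 by 2^r − 1 plane rotations, and composing one chain forward with another chain inverted maps one state to another. A sketch of this reduction (a classical analogue of the rotation count, not the paper's DsiHT circuit or path construction):

```python
import math

def reduction_angles(amplitudes):
    """Plane-rotation angles that send a real unit vector to the basis vector e0.
    For 2^r amplitudes this uses exactly 2^r - 1 rotations."""
    x = list(amplitudes)
    angles = []
    for i in range(len(x) - 1, 0, -1):
        theta = math.atan2(x[i], x[i - 1])   # chosen to zero out component i
        c, s = math.cos(theta), math.sin(theta)
        x[i - 1], x[i] = c * x[i - 1] + s * x[i], 0.0
        angles.append((i - 1, i, theta))
    return angles

def apply_forward(angles, vec):
    """Apply the reduction rotations in order (|x> -> e0 for x's own angles)."""
    y = list(vec)
    for i, j, theta in angles:
        c, s = math.cos(theta), math.sin(theta)
        y[i], y[j] = c * y[i] + s * y[j], -s * y[i] + c * y[j]
    return y

def apply_inverse(angles, vec):
    """Undo a reduction (e0 -> |y> for y's own angles)."""
    y = list(vec)
    for i, j, theta in reversed(angles):
        c, s = math.cos(theta), math.sin(theta)
        y[i], y[j] = c * y[i] - s * y[j], s * y[i] + c * y[j]
    return y

# Two 2-qubit states with real amplitudes: |x> -> e0 -> |y> via 3 + 3 rotations.
x = [0.5, 0.5, 0.5, 0.5]
yv = [0.6, 0.0, 0.8, 0.0]
z = apply_inverse(reduction_angles(yv), apply_forward(reduction_angles(x), x))
print([round(a, 6) for a in z])  # recovers yv
```

The article's contribution is choosing processing paths so that each such rotation becomes a single controlled-Ry gate, halving the naive two-stage count.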
23 pages, 823 KB  
Review
Ensemble Large Language Models: A Survey
by Ibomoiye Domor Mienye and Theo G. Swart
Information 2025, 16(8), 688; https://doi.org/10.3390/info16080688 - 13 Aug 2025
Viewed by 1214
Abstract
Large language models (LLMs) have transformed the field of natural language processing (NLP), achieving state-of-the-art performance in tasks such as translation, summarization, and reasoning. Despite their impressive capabilities, challenges persist, including biases, limited interpretability, and resource-intensive training. Ensemble learning, a technique that combines [...] Read more.
Large language models (LLMs) have transformed the field of natural language processing (NLP), achieving state-of-the-art performance in tasks such as translation, summarization, and reasoning. Despite their impressive capabilities, challenges persist, including biases, limited interpretability, and resource-intensive training. Ensemble learning, a technique that combines multiple models to improve performance, presents a promising avenue for addressing these limitations in LLMs. This review explores the emerging field of ensemble LLMs, providing a comprehensive analysis of current methodologies, applications across diverse domains, and existing challenges. By reviewing ensemble strategies and evaluating their effectiveness, this paper highlights the potential of ensemble LLMs to enhance robustness and generalizability while proposing future research directions to advance the field. Full article
17 pages, 2534 KB  
Article
Modeling Recommender Systems Using Disease Spread Techniques
by Peixiong He, Libo Sun, Xian Gao, Yi Zhou and Xiao Qin
Information 2025, 16(8), 687; https://doi.org/10.3390/info16080687 - 13 Aug 2025
Viewed by 321
Abstract
Recommender systems on digital platforms profoundly influence user behavior through content dissemination, and their diffusion process is similar to the spreading mechanism of infectious diseases to some extent. In this paper, we use a network-based susceptibility-infection (SI) model to model the propagation dynamics [...] Read more.
Recommender systems on digital platforms profoundly influence user behavior through content dissemination, and their diffusion process resembles, to some extent, the spreading mechanism of infectious diseases. In this paper, we use a network-based susceptible–infected (SI) model to describe the propagation dynamics of recommended content, and systematically compare the propagation efficiency of three recommendation strategies based on popularity, collaborative filtering, and content. We constructed scale-free user networks from real-world clickstream data and dynamically adapted the SI model to reflect the realistic scenario of user engagement decaying over time. To aid understanding of the recommendation process, we further visualize the propagation process to show how content spreads among users. The experimental results show that collaborative filtering performs best in the initial dissemination, but its effect decays rapidly over time, eventually falling behind the other two methods. This study provides new ideas for modeling and understanding recommender systems from an epidemiological perspective. Full article
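A minimal version of the network SI simulation, with an invented toy graph standing in for the clickstream-derived scale-free network and a simple 1/(1+t) engagement decay (the paper's decay law may differ):

```python
import random

def simulate_si(adj, seeds, beta, steps, rng):
    """Susceptible-infected spread of a recommended item over a user graph.

    adj: dict user -> list of neighbours; beta: per-contact adoption
    probability. Returns the number of reached ('infected') users per step.
    """
    infected = set(seeds)
    history = [len(infected)]
    for t in range(steps):
        decay = beta / (1 + t)  # engagement (adoption probability) decays over time
        new = set()
        for u in infected:
            for v in adj[u]:
                if v not in infected and rng.random() < decay:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

# Small illustrative user graph (a hub plus short chains), not real clickstream data.
adj = {0: [1, 2, 3, 4], 1: [0, 5], 2: [0], 3: [0], 4: [0, 6], 5: [1], 6: [4]}
print(simulate_si(adj, seeds=[0], beta=0.6, steps=5, rng=random.Random(7)))
```

Seeding from a popularity-based, collaborative-filtering, or content-based pick simply changes the `seeds` set; comparing the resulting `history` curves is the experiment in miniature.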
26 pages, 423 KB  
Article
Enhancing Privacy-Preserving Network Trace Synthesis Through Latent Diffusion Models
by Jin-Xi Yu, Yi-Han Xu, Min Hua, Gang Yu and Wen Zhou
Information 2025, 16(8), 686; https://doi.org/10.3390/info16080686 - 12 Aug 2025
Viewed by 291
Abstract
Network trace is a comprehensive record of data packets traversing a computer network, serving as a critical resource for analyzing network behavior. However, in practice, the limited availability of high-quality network traces, coupled with the presence of sensitive information such as IP addresses [...] Read more.
Network trace is a comprehensive record of data packets traversing a computer network, serving as a critical resource for analyzing network behavior. However, in practice, the limited availability of high-quality network traces, coupled with the presence of sensitive information such as IP addresses and MAC addresses, poses significant challenges to advancing network trace analysis. To address these issues, this paper focuses on network trace synthesis in two practical scenarios: (1) data expansion, where users create synthetic traces internally to diversify and enhance existing network trace utility; (2) data release, where synthesized network traces are shared externally. Inspired by the powerful generative capabilities of latent diffusion models (LDMs), this paper introduces NetSynDM, which leverages LDM to address the challenges of network trace synthesis in data expansion scenarios. To address the challenges in the data release scenario, we integrate differential privacy (DP) mechanisms into NetSynDM, introducing DPNetSynDM, which leverages DP Stochastic Gradient Descent (DP-SGD) to update NetSynDM, incorporating privacy-preserving noise throughout the training process. Experiments on five widely used network trace datasets show that our methods outperform prior works. NetSynDM achieves, on average, 166.1% higher fidelity than the baselines. DPNetSynDM strikes an improved balance between privacy and fidelity, surpassing the fidelity score of the previous state-of-the-art network trace synthesis method by 18.4% on UGR16 while reducing privacy risk scores by approximately 9.79%. Full article
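The DP-SGD update that DPNetSynDM relies on can be shown in isolation: clip each per-example gradient, average, and add calibrated Gaussian noise. This is a generic sketch of the DP-SGD step with invented gradients, not the model's actual training loop:

```python
import math, random

def dp_sgd_step(params, per_example_grads, clip_norm, noise_mult, lr, rng):
    """One DP-SGD update: clip each per-example gradient to clip_norm,
    average, and add Gaussian noise with std noise_mult * clip_norm."""
    n, dim = len(per_example_grads), len(params)
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(v * v for v in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        clipped.append([v * scale for v in g])
    sigma = noise_mult * clip_norm
    noisy = [sum(g[j] for g in clipped) / n + rng.gauss(0.0, sigma) / n
             for j in range(dim)]
    return [p - lr * gj for p, gj in zip(params, noisy)]

# One step on two per-example gradients; the first is clipped down to norm 1.
step = dp_sgd_step([0.0, 0.0], [[10.0, 0.0], [0.0, 1.0]],
                   clip_norm=1.0, noise_mult=1.1, lr=0.1,
                   rng=random.Random(0))
print(step)
```

Clipping bounds any single record's influence; the noise multiplier, together with the sampling rate and step count, determines the (ε, δ) privacy guarantee via a privacy accountant.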
10 pages, 724 KB  
Article
Real-Time Speech-to-Text on Edge: A Prototype System for Ultra-Low Latency Communication with AI-Powered NLP
by Stefano Di Leo, Luca De Cicco and Saverio Mascolo
Information 2025, 16(8), 685; https://doi.org/10.3390/info16080685 - 11 Aug 2025
Viewed by 1067
Abstract
This paper presents a real-time speech-to-text (STT) system designed for edge computing environments requiring ultra-low latency and local processing. Differently from cloud-based STT services, the proposed solution runs entirely on a local infrastructure which allows the enforcement of user privacy and provides high [...] Read more.
This paper presents a real-time speech-to-text (STT) system designed for edge computing environments requiring ultra-low latency and local processing. Unlike cloud-based STT services, the proposed solution runs entirely on a local infrastructure, which preserves user privacy and provides high performance in bandwidth-limited or offline scenarios. The system is based on browser-native audio capture through WebRTC, real-time streaming over WebSocket, and offline automatic speech recognition (ASR) using the Vosk engine. A natural language processing (NLP) component, implemented as a microservice, improves transcription results in terms of spelling accuracy and clarity. Our prototype achieves sub-second end-to-end latency and strong transcription quality under realistic conditions. Furthermore, the modular architecture allows extensibility, integration of advanced AI models, and domain-specific adaptations. Full article
(This article belongs to the Section Information Applications)
41 pages, 2180 KB  
Systematic Review
On the Application of Artificial Intelligence and Cloud-Native Computing to Clinical Research Information Systems: A Systematic Literature Review
by Isabel Bejerano-Blázquez and Miguel Familiar-Cabero
Information 2025, 16(8), 684; https://doi.org/10.3390/info16080684 - 10 Aug 2025
Viewed by 1090
Abstract
The pharmaceutical and biotechnology sector is an intricate and rapidly evolving industry encompassing the full lifecycle of drugs, medicines, and clinical devices. Its growth is driven by factors such as the aging population, the rise in chronic diseases, and the increasing focus on [...] Read more.
The pharmaceutical and biotechnology sector is an intricate and rapidly evolving industry encompassing the full lifecycle of drugs, medicines, and clinical devices. Its growth is driven by factors such as the aging population, the rise in chronic diseases, and the increasing focus on personalized medicine. Nevertheless, it also faces significant challenges due to rising costs, increased complexity, and regulatory hurdles. Through a systematic literature review (SLR) as a research method combined with a comprehensive market analysis, this paper explores how several leading early-adopter healthcare companies are increasing their investments in computer-based clinical research information systems (CRISs) to sustain productivity, particularly through the adoption of artificial intelligence (AI) and cloud-native computing. As an extension of this research, a novel 360-degree reference blueprint is proposed for the domain analysis of medical features within AI-powered CRIS applications. This theoretical framework specifically targets clinical trial management systems (CRIS-CTMSs). Additionally, a detailed review is presented of the leading commercial solutions, assessing their portfolios and business maturity, while highlighting major open innovation collaborations with prominent pharmaceutical and biotechnology companies. Full article
(This article belongs to the Special Issue Information Systems in Healthcare)
21 pages, 1902 KB  
Article
Mobile Platform for Continuous Screening of Clear Water Quality Using Colorimetric Plasmonic Sensing
by Rima Mansour, Caterina Serafinelli, Rui Jesus and Alessandro Fantoni
Information 2025, 16(8), 683; https://doi.org/10.3390/info16080683 - 10 Aug 2025
Viewed by 355
Abstract
Effective water quality monitoring is essential for detecting pollution and protecting public health. However, traditional methods are slow, relying on costly equipment, central laboratories, and expert staffing, which delays real-time measurements. At the same time, significant advancements in plasmonic sensing technologies have made them well suited to environmental monitoring, although their reliance on large, expensive spectrometers limits accessibility. This work bridges the gap between advanced plasmonic sensing and practical water monitoring needs by integrating plasmonic sensors with mobile technology. We present BioColor, a mobile platform consisting of a plasmonic sensor setup, a mobile application, and cloud services. The platform processes captured colorimetric sensor images in real time using optimized image processing algorithms, including region-of-interest segmentation, color extraction (mean and dominant), and comparison via the CIEDE2000 metric. The results are visualized within the mobile app, providing instant, automated access to the sensing outcome. In our validation experiments, the system consistently measured color differences in sensor images captured under media with different refractive indices. A user experience test with 12 participants demonstrated excellent usability, yielding a System Usability Scale (SUS) score of 93. BioColor shifts advanced sensing capabilities from dedicated hardware into software, making environmental monitoring more accessible, efficient, and continuous. Full article
(This article belongs to the Special Issue Optimization Algorithms and Their Applications)
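The colorimetric pipeline the abstract describes — extract a color from a region of interest, then compare it to a reference via CIEDE2000 — can be sketched in plain Python. This is a minimal illustration of the standard CIEDE2000 formula (kL = kC = kH = 1) plus a mean-color helper, not the BioColor implementation; the function names are hypothetical, and the RGB-to-CIELAB conversion the app would need before comparison is omitted here.

```python
import math

def mean_rgb(pixels):
    """Mean color of a segmented ROI given as a list of (R, G, B) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def ciede2000(lab1, lab2):
    """CIEDE2000 color difference between two CIELAB colors (kL = kC = kH = 1)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2
    # Chroma-dependent rescaling of the a* axis.
    G = 0.5 * (1 - math.sqrt(Cbar**7 / (Cbar**7 + 25**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360
    h2p = math.degrees(math.atan2(b2, a2p)) % 360
    dLp, dCp = L2 - L1, C2p - C1p
    # Hue difference, wrapped into (-180, 180].
    if C1p * C2p == 0:
        dhp = 0.0
    else:
        dhp = h2p - h1p
        if dhp > 180:
            dhp -= 360
        elif dhp < -180:
            dhp += 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)
    Lbp, Cbp = (L1 + L2) / 2, (C1p + C2p) / 2
    # Mean hue, handling the wrap-around at 360 degrees.
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = (h1p + h2p) / 2
    elif h1p + h2p < 360:
        hbp = (h1p + h2p + 360) / 2
    else:
        hbp = (h1p + h2p - 360) / 2
    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    dtheta = 30 * math.exp(-(((hbp - 275) / 25) ** 2))
    RC = 2 * math.sqrt(Cbp**7 / (Cbp**7 + 25**7))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    RT = -math.sin(math.radians(2 * dtheta)) * RC  # blue-region rotation term
    return math.sqrt((dLp / SL) ** 2 + (dCp / SC) ** 2 + (dHp / SH) ** 2
                     + RT * (dCp / SC) * (dHp / SH))
```

In a setup like the one described, the two Lab values would come from converting the mean (or dominant) ROI color of a reference image and of a freshly captured sensor image, with the resulting ΔE00 reported to the app as the sensing outcome.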