Information, Volume 16, Issue 10 (October 2025) – 101 articles

Cover Story: Many individuals across diverse disciplines record spoken data yet lack accessible ways to transform it into structured, analyzable form. CLARE bridges this gap by uniting a time-synchronized transcript editor with automated, context-aware knowledge graph generation and interactive refinement. Users can correct transcripts, generate predicate-typed graphs, and edit nodes and edges. The system accommodates both local and cloud deployment options, enabling users to tailor model selection to their specific privacy constraints and performance needs. On the MINE benchmark, CLARE captured more factual content than prior systems and achieved further gains through brief human review, demonstrating a practical, transparent workflow from conversation to knowledge graph.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open it with the free Adobe Reader.
21 pages, 298 KB  
Article
Video Games in Schools: Putting Flow State in Context
by Marcello Sarini, Francesco Bocci, Giulia Centini, Anna Maso and Luca Pianigiani
Information 2025, 16(10), 922; https://doi.org/10.3390/info16100922 - 21 Oct 2025
Viewed by 386
Abstract
There is growing interest in exploring the positive effects of commercial video games for educational purposes. These investigations focus not only on enhancing learning outcomes but also on improving emotional and physical well-being, which may lead to greater motivation to learn and more positive learning experiences. However, few studies focus on the role of flow state in these phenomena. We therefore present two studies, the first conducted in a primary school in Northern Italy and the second in two secondary schools in Central Italy. Both aim to investigate the hidden effects of gaming at school in connection with the flow state. The first study, involving students from four classes, aimed to reveal collective dynamics unique to each class. The second study involved students from three classes of two different schools and investigated how the effects of gaming vary with the genres of games played in class. We observed how flow state in context revealed meaningful differences in some of the measured variables. These variations seem to reflect distinctive traits of how each class plays, both in general and in its specifics. These findings suggest that educators could consider the unique characteristics that gaming makes evident when designing educational strategies, potentially tailoring the learning process to better align with the specific dynamics of the context. Full article
Show Figures

Figure 1

22 pages, 5641 KB  
Article
A Globally Optimal Alternative to MLP
by Zheng Li, Jerry Cheng and Huanying Helen Gu
Information 2025, 16(10), 921; https://doi.org/10.3390/info16100921 - 21 Oct 2025
Viewed by 252
Abstract
In deep learning, achieving the global minimum poses a significant challenge, even for relatively simple architectures such as Multi-Layer Perceptrons (MLPs). To address this challenge, we visualized model states at both local and global optima, thereby identifying the factors that impede the transition of models from local to global minima under conventional training methodologies. Based on these insights, we propose the Lagrange Regressor (LReg), a framework that is mathematically equivalent to MLPs. Rather than updating parameters via optimization techniques, LReg employs a Mesh-Refinement–Coarsening (discrete) process to ensure the convergence of the model’s loss function to the global minimum. LReg achieves faster convergence and overcomes the inherent limitations of neural networks in fitting multi-frequency functions. Experiments conducted on large-scale benchmarks including ImageNet-1K (image classification), GLUE (natural language understanding), and WikiText (language modeling) show that LReg consistently enhances the performance of pre-trained models, significantly lowers test loss, and scales effectively to big data scenarios. These results underscore LReg’s potential as a scalable, optimization-free alternative for deep learning on large and complex datasets, aligning closely with the goals of innovative big data analytics. Full article
Show Figures

Figure 1

24 pages, 468 KB  
Article
Mining User Perspectives: Multi Case Study Analysis of Data Quality Characteristics
by Minnu Malieckal and Anjula Gurtoo
Information 2025, 16(10), 920; https://doi.org/10.3390/info16100920 - 21 Oct 2025
Viewed by 312
Abstract
With the growth of digital economies, data quality forms a key factor in enabling use and delivering value. Existing research defines quality through technical benchmarks or provider-led frameworks; our study shifts the focus to actual users. The thirty-seven distinct data quality dimensions identified through a comprehensive review of the literature provide limited applicability for practitioners seeking actionable guidance. To address the gap, in-depth interviews of senior professionals from 25 organizations were conducted, representing sectors such as computer science and technology, finance, environmental, social, and governance, and urban infrastructure. Data were analysed using content analysis methodology, with two-level coding, supported by NVivo R1 software. Several newer perspectives emerged. Firstly, data quality is not simply about accuracy or completeness; rather, it depends on suitability for real-world tasks. Secondly, trust grows with data transparency: knowing where the data come from and how they were processed matters as much as the data per se. Thirdly, users are open to paying for data, provided the data are clean, reliable, and ready to use. These and other results suggest data users focus on a narrower, more practical set of priorities considered essential in actual workflows. Rethinking quality from a consumer’s perspective offers a practical path to building credible and accessible data ecosystems. This study is particularly useful for data platform designers, policymakers, and organisations aiming to strengthen data quality and trust in data exchange ecosystems. Full article
Show Figures

Graphical abstract

17 pages, 1824 KB  
Article
Towards Accurate Thickness Recognition from Pulse Eddy Current Data Using the MRDC-BiLSE Network
by Wenhui Chen, Hong Zhang, Yiran Peng, Benhuang Liu, Shunwu Xu, Hao Yan, Jian Zhang and Zhaowen Chen
Information 2025, 16(10), 919; https://doi.org/10.3390/info16100919 - 20 Oct 2025
Viewed by 308
Abstract
Accurate thickness recognition plays a vital role in safeguarding the structural reliability of critical assets. Pulse eddy current testing (PECT), a non-destructive method that is both non-contact and insensitive to surface coatings, provides an efficient pathway for this purpose. Nevertheless, the complex, nonstationary, and nonlinear characteristics of PECT signals make it difficult for conventional models to jointly capture localized high-frequency patterns and long-range temporal dependencies, constraining their prediction performance. To overcome these issues, we introduce a novel deep learning framework for PECT time-series analysis: multi-scale residual dilated convolution combined with bidirectional long short-term memory and a squeeze-and-excitation mechanism (MRDC-BiLSE). The architecture integrates a multi-scale residual dilated convolution block: by combining dilated convolutions with residual connections at different scales, this block captures structural patterns across multiple temporal resolutions, leading to more comprehensive and discriminative feature extraction. Furthermore, to better exploit temporal dependencies, the BiLSTM-SE module combines bidirectional modeling with a squeeze-and-excitation mechanism, yielding more discriminative feature representations. Experiments on PECT datasets confirm that MRDC-BiLSE surpasses existing methods, demonstrating its applicability to real-world thickness recognition. Full article
(This article belongs to the Special Issue Signal Processing and Machine Learning, 2nd Edition)
Show Figures

Figure 1
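The multi-scale dilated-convolution idea at the heart of the MRDC block can be illustrated with a short sketch: each branch applies the same kernel at a different dilation rate, widening the receptive field without adding weights, and the branch outputs are kept as multi-scale features. This is a toy 1-D illustration of the general technique, not the authors' implementation; the signal, kernel, and function names are invented.

```python
def dilated_conv1d(signal, kernel, dilation):
    """1-D dilated convolution (valid padding): each kernel tap skips
    `dilation - 1` samples, widening the receptive field."""
    span = (len(kernel) - 1) * dilation
    return [
        sum(kernel[k] * signal[i + k * dilation] for k in range(len(kernel)))
        for i in range(len(signal) - span)
    ]

def multi_scale_features(signal, kernel, dilations=(1, 2, 4)):
    """Collect responses at several dilation rates, mimicking the
    multi-scale branches of an MRDC-style block."""
    return [dilated_conv1d(signal, kernel, d) for d in dilations]

signal = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
kernel = [0.5, -0.5]  # simple difference filter
branches = multi_scale_features(signal, kernel)
```

Each branch sees the same filter at a different temporal resolution; in the real block these branches would pass through residual connections and be fused.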

30 pages, 2980 KB  
Article
Game On: Exploring the Potential for Soft Skill Development Through Video Games
by Juan Bartolomé, Idoya del Río, Aritz Martínez, Andoni Aranguren, Ibai Laña and Sergio Alloza
Information 2025, 16(10), 918; https://doi.org/10.3390/info16100918 - 20 Oct 2025
Viewed by 511
Abstract
Soft skills remain fundamental for employability and sustainable human development in an increasingly technology-driven society. These interpersonal and cognitive competencies—such as communication, adaptability, and critical thinking—represent uniquely human capabilities that current Artificial Intelligence (AI) systems cannot replicate. However, assessing and developing these skills consistently remains a challenge due to the lack of standardized evaluation frameworks. This study explores the potential of commercial video games as engaging environments for soft skills enhancement and introduces an AI-based assessment methodology to quantify such improvement. Using player data collected from the Steam platform, we designed and validated an AI model based on Gradient Boosting Regressor (GBR) to estimate participants’ soft skill progression. The model achieved high predictive performance (R2 ≈ 0.9; MAE/RMSE ≈ 1), demonstrating strong alignment between gameplay behavior and soft skill improvement. The results highlight that video game-based data analysis can provide a reliable, non-intrusive alternative to traditional testing methods, reducing test-related anxiety while maintaining assessment validity. This approach supports the integration of video games into educational and professional training frameworks as a scalable and data-driven tool for soft skills development. Full article
(This article belongs to the Special Issue Artificial Intelligence and Games Science in Education)
Show Figures

Figure 1
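Gradient boosting regression of the kind used in the study above can be sketched from first principles: fit a weak learner to the current residuals, shrink its contribution by a learning rate, and repeat. Below is a toy pure-Python version with decision stumps; the gameplay features and soft-skill scores are invented, and the paper itself uses Steam player data with a full GBR implementation.

```python
def fit_stump(X, residuals):
    """Find the single-feature threshold split minimizing squared error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [r for row, r in zip(X, residuals) if row[j] <= t]
            right = [r for row, r in zip(X, residuals) if row[j] > t]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lm) ** 2 for r in left)
                   + sum((r - rm) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lm, rm)
    return best[1:]  # (feature, threshold, left_value, right_value)

def predict_stump(stump, row):
    j, t, lm, rm = stump
    return lm if row[j] <= t else rm

def boost(X, y, rounds=20, lr=0.3):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    pred = [sum(y) / len(y)] * len(y)
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        s = fit_stump(X, residuals)
        pred = [p + lr * predict_stump(s, row) for p, row in zip(pred, X)]
    return pred

# invented gameplay features: [hours_played, coop_sessions]
X = [[1, 0], [2, 1], [3, 1], [4, 2], [5, 3]]
y = [1.0, 2.0, 3.0, 4.0, 5.0]   # invented soft-skill scores
fitted = boost(X, y)
```

After a handful of rounds, the additive model tracks the targets closely; scaling this idea up (more trees, deeper trees, validation) gives the R² ≈ 0.9 regime the abstract reports.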

32 pages, 2787 KB  
Review
Deep Learning for Regular Raster Spatio-Temporal Prediction: An Overview
by Vincenzo Capone, Angelo Casolaro and Francesco Camastra
Information 2025, 16(10), 917; https://doi.org/10.3390/info16100917 - 19 Oct 2025
Viewed by 656
Abstract
The raster is the most common type of spatio-temporal data, and it can be either regularly or irregularly spaced. Spatio-temporal prediction on regular raster data is crucial for modelling and understanding dynamics in disparate realms, such as environment, traffic, astronomy, remote sensing, gaming and video processing, to name a few. Historically, statistical and classical machine learning methods have been used to model spatio-temporal data, and, in recent years, deep learning has shown outstanding results in regular raster spatio-temporal prediction. This work provides a self-contained review about effective deep learning methods for the prediction of regular raster spatio-temporal data. Each deep learning technique is described in detail, underlining its advantages and drawbacks. Finally, a discussion of relevant aspects and further developments in deep learning for regular raster spatio-temporal prediction is presented. Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting, 2nd Edition)
Show Figures

Figure 1

25 pages, 2522 KB  
Article
Reference-Less Evaluation of Machine Translation: Navigating Through the Resource-Scarce Scenarios
by Archchana Sindhujan, Diptesh Kanojia and Constantin Orăsan
Information 2025, 16(10), 916; https://doi.org/10.3390/info16100916 - 18 Oct 2025
Viewed by 345
Abstract
Reference-less evaluation of machine translation, or Quality Estimation (QE), is vital for low-resource language pairs where high-quality references are often unavailable. In this study, we investigate segment-level QE methods, comparing encoder-based models such as MonoTransQuest, CometKiwi, and xCOMET with various decoder-based methods (Tower+, ALOPE, and other instruction-fine-tuned language models). Our work primarily focuses on eight low-resource language pairs, involving English on either the source or the target side of the translation. Results indicate that while fine-tuned encoder-based models remain strong performers across most low-resource language pairs, decoder-based Large Language Models (LLMs) show clear improvements when adapted through instruction tuning. Importantly, the ALOPE framework further enhances LLM performance beyond standard fine-tuning, demonstrating its effectiveness in narrowing the gap with encoder-based approaches and highlighting its potential as a viable strategy for low-resource QE. In addition, our experiments demonstrate that with adaptation techniques such as LoRA (Low-Rank Adapters) and quantization, decoder-based QE models can be trained with competitive GPU memory efficiency, though they generally require substantially more disk space than encoder-based models. Our findings highlight the effectiveness of encoder-based models for low-resource QE and suggest that advances in cross-lingual modeling will be key to improving LLM-based QE in the future. Full article
Show Figures

Figure 1
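The LoRA adaptation mentioned in the abstract keeps the base weight matrix frozen and trains only a low-rank update, so the effective weight is W + (α/r)·A·B; this is why decoder-based QE models can be fine-tuned with modest GPU memory. A minimal sketch with toy matrices, illustrating the general technique rather than the paper's setup:

```python
def matmul(A, B):
    """Plain nested-list matrix multiplication."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_effective_weight(W, A, B, alpha=4.0):
    """Frozen base weight W plus a scaled low-rank update A @ B.
    Only A and B (r * (d_in + d_out) parameters) would be trained."""
    r = len(B)            # rank = number of rows of B
    scale = alpha / r
    delta = matmul(A, B)
    return [
        [w + scale * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(W, delta)
    ]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
A = [[1.0], [0.0]]             # 2x1 adapter, rank r = 1
B = [[0.0, 0.5]]               # 1x2 adapter
W_eff = lora_effective_weight(W, A, B, alpha=1.0)
```

For a d×d layer, the trainable parameter count drops from d² to 2rd, which is the source of the memory savings; the extra disk footprint the abstract notes comes from storing the large frozen base model alongside the adapters.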

36 pages, 496 KB  
Review
Foundations for a Generic Ontology for Visualization: A Comprehensive Survey
by Suzana Loshkovska and Panče Panov
Information 2025, 16(10), 915; https://doi.org/10.3390/info16100915 - 18 Oct 2025
Viewed by 331
Abstract
This paper surveys existing ontologies for visualization, which formally define and organize knowledge about visualization concepts, techniques, and tools. Although visualization is a mature field, the rapid growth of data complexity makes semantically rich frameworks increasingly essential for building intelligent and automated visualization systems. Current ontologies remain fragmented, heterogeneous, and inconsistent in terminology and modeling strategies, limiting their coverage and adoption. We present a systematic analysis of representative ontologies, highlighting shared themes and, most importantly, the gaps that hinder unification. These gaps provide the foundations for developing a comprehensive, generic ontology of visualization, aimed at unifying core concepts and supporting reuse across research and practice. Full article
(This article belongs to the Special Issue Knowledge Representation and Ontology-Based Data Management)
Show Figures

Figure 1

59 pages, 13469 KB  
Review
Convolutional Neural Network Acceleration Techniques Based on FPGA Platforms: Principles, Methods, and Challenges
by Li Gao, Zhongqiang Luo and Lin Wang
Information 2025, 16(10), 914; https://doi.org/10.3390/info16100914 - 18 Oct 2025
Viewed by 580
Abstract
As the complexity of convolutional neural networks (CNN) continues to increase, efficient deployment on computationally constrained hardware platforms has become a significant challenge. Against this backdrop, field-programmable gate arrays (FPGA) emerge as an up-and-coming CNN acceleration platform due to their inherent energy efficiency, reconfigurability, and parallel processing capabilities. This paper establishes a systematic analytical framework to explore CNN optimization strategies on FPGA from both algorithmic and hardware perspectives. It emphasizes co-design methodologies between algorithms and hardware, extending these concepts to other embedded system applications. Furthermore, the paper summarizes current performance evaluation frameworks to assess the effectiveness of acceleration schemes comprehensively. Finally, building upon existing work, it identifies key challenges in this field and outlines future research directions. Full article
Show Figures

Graphical abstract

27 pages, 2111 KB  
Article
When Technology Signals Trust: Blockchain vs. Traditional Cues in Cross-Border Cosmetic E-Commerce
by Xiaoling Liu and Ahmad Yahya Dawod
Information 2025, 16(10), 913; https://doi.org/10.3390/info16100913 - 18 Oct 2025
Viewed by 313
Abstract
Using platform self-operation, customer reviews, and compensation commitments as traditional benchmarks, this study foregrounds blockchain traceability as a technology-enabled authenticity signal in cross-border cosmetic e-commerce (CBEC). Using an 8-scenario orthogonal experiment, we test a model in which perceived risk mediates the effects of authenticity signals on purchase intention. We probe blockchain boundary conditions by examining their interactions with traditional signals. Our results show that blockchain is the only signal with a significant direct effect on purchase intention and that it also exerts an indirect effect by reducing perceived risk. While customer reviews show no consistent effect, self-operation and compensation influence purchase intention indirectly via risk reduction. Moderation tests indicate that blockchain is most effective in low-trust settings—i.e., when self-operation, reviews, or compensation safeguards are absent or weak—while this marginal impact declines when such safeguards are strong. These findings refine signaling theory by distinguishing a technology-backed signal from institutional and social signals and by positioning perceived risk as the central mechanism in CBEC cosmetics. Managerially speaking, blockchain should serve as the anchor signal in high-risk contexts and as a reinforcing signal where traditional assurances already exist. Future work should extend to field/transactional data and additional signals (e.g., brand reputation, third-party certifications). Full article
Show Figures

Figure 1

24 pages, 661 KB  
Article
Brain Network Analysis and Recognition Algorithm for MDD Based on Class-Specific Correlation Feature Selection
by Zhengnan Zhang, Yating Hu, Jiangwen Lu and Yunyuan Gao
Information 2025, 16(10), 912; https://doi.org/10.3390/info16100912 - 17 Oct 2025
Viewed by 295
Abstract
Major Depressive Disorder (MDD) is a high-risk mental illness that severely affects individuals across all age groups. However, existing research lacks comprehensive analysis and utilization of brain topological features, making it challenging to reduce redundant connectivity while preserving depression-related biomarkers. This study proposes a brain network analysis and recognition algorithm based on class-specific correlation feature selection. Leveraging electroencephalogram monitoring as a more objective MDD detection tool, this study employs tensor sparse representation to reduce the dimensionality of functional brain network time-series data, extracting the most representative functional connectivity matrices. To mitigate the impact of redundant connections, a feature selection algorithm combining topologically aware maximum class-specific dynamic correlation and minimum redundancy is integrated, identifying an optimal feature subset that best distinguishes MDD patients from healthy controls. The selected features are then ranked by relevance and fed into a hybrid CNN-BiLSTM classifier. Experimental results demonstrate classification accuracies of 95.96% and 94.90% on the MODMA and PRED + CT datasets, respectively, significantly outperforming conventional methods. This study not only improves the accuracy of MDD identification but also enhances the clinical interpretability of feature selection results, offering novel perspectives for pathological MDD research and clinical diagnosis. Full article
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1
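The maximum-relevance / minimum-redundancy selection described above can be approximated in a few lines: greedily pick the feature most correlated with the class labels, penalized by its mean correlation with features already chosen. This toy sketch uses plain Pearson correlation; the paper's class-specific dynamic criterion and EEG-derived features are richer, and all data here are invented.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def select_features(features, labels, k):
    """Greedy max-relevance / min-redundancy: score each candidate by its
    label correlation minus its mean correlation with chosen features."""
    chosen, remaining = [], list(range(len(features)))
    while remaining and len(chosen) < k:
        def score(j):
            relevance = abs(pearson(features[j], labels))
            redundancy = (sum(abs(pearson(features[j], features[c]))
                              for c in chosen) / len(chosen)) if chosen else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

labels = [0, 0, 0, 1, 1, 1]          # toy class labels
features = [
    [1, 2, 1, 5, 6, 5],   # informative
    [1, 2, 1, 5, 6, 6],   # near-duplicate of feature 0 (redundant)
    [3, 1, 2, 2, 1, 3],   # noise
]
picked = select_features(features, labels, k=2)
```

In the study, the surviving connectivity features are then ranked by relevance and passed to the CNN-BiLSTM classifier.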

17 pages, 414 KB  
Article
DQMAF—Data Quality Modeling and Assessment Framework
by Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Viewed by 505
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. 
The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
Show Figures

Figure 1
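The profiling step of a DQMAF-style pipeline can be sketched as: score each record on several quality dimensions, aggregate into one score, and bucket it into high/medium/low for downstream filtering. The schema, dimensions, and thresholds below are illustrative assumptions, not the framework's actual definitions.

```python
def profile_record(record, schema):
    """Score one record on three illustrative dimensions (each in [0, 1]):
    completeness (non-missing fields), conformity (values of the expected
    type), and consistency (a domain rule holds). Returns their mean."""
    fields = list(schema)
    completeness = sum(record.get(f) is not None for f in fields) / len(fields)
    conformity = sum(
        isinstance(record.get(f), t)
        for f, t in schema.items() if record.get(f) is not None
    ) / len(fields)
    # example domain rule: a listing price must be non-negative
    rules_ok = record.get("price") is None or record["price"] >= 0
    consistency = 1.0 if rules_ok else 0.0
    return (completeness + conformity + consistency) / 3

def quality_class(score):
    """Bucket an aggregated score into three tiers (thresholds assumed)."""
    if score >= 0.8:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"

schema = {"name": str, "price": float, "beds": int}   # invented listing schema
good = {"name": "Loft", "price": 80.0, "beds": 2}
bad = {"name": None, "price": -5.0, "beds": "two"}
```

In the full framework these profile scores become labeled training data for the supervised classifiers (Decision Trees, Random Forest, XGBoost, and others) rather than being used directly.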

25 pages, 3111 KB  
Article
Intrusion Detection in Industrial Control Systems Using Transfer Learning Guided by Reinforcement Learning
by Jokha Ali, Saqib Ali, Taiseera Al Balushi and Zia Nadir
Information 2025, 16(10), 910; https://doi.org/10.3390/info16100910 - 17 Oct 2025
Viewed by 565
Abstract
Securing Industrial Control Systems (ICSs) is critical, but it is made challenging by the constant evolution of cyber threats and the scarcity of labeled attack data in these specialized environments. Standard intrusion detection systems (IDSs) often fail to adapt when transferred to new networks with limited data. To address this, this paper introduces an adaptive intrusion detection framework that combines a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model with a novel transfer learning strategy. We employ a Reinforcement Learning (RL) agent to intelligently guide the fine-tuning process, which allows the IDS to dynamically adjust its parameters such as layer freezing and learning rates in real-time based on performance feedback. We evaluated our system in a realistic data-scarce scenario using only 50 labeled training samples. Our RL-Guided model achieved a final F1-score of 0.9825, significantly outperforming a standard neural fine-tuning model (0.861) and a target baseline model (0.759). Analysis of the RL agent’s behavior confirmed that it learned a balanced and effective policy for adapting the model to the target domain. We conclude that the proposed RL-guided approach creates a highly accurate and adaptive IDS that overcomes the limitations of static transfer learning methods. This dynamic fine-tuning strategy is a powerful and promising direction for building resilient cybersecurity defenses for critical infrastructure. Full article
(This article belongs to the Section Information Systems)
Show Figures

Figure 1
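The RL-guided fine-tuning loop described above — an agent choosing layer-freezing and learning-rate settings from performance feedback — can be caricatured as a bandit problem. Below is a toy epsilon-greedy sketch with an invented action space and a noisy stand-in for validation F1; the paper's agent, state, and reward design are richer than this.

```python
import random

def rl_guided_tuning(actions, evaluate, episodes=200, eps=0.2, seed=0):
    """Epsilon-greedy bandit over fine-tuning configurations: each action is
    a (frozen_layers, learning_rate) pair, and the reward is the validation
    score that configuration achieves. Q-values track running mean reward."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(actions)            # explore
        else:
            a = max(actions, key=lambda x: q[x])  # exploit
        r = evaluate(a, rng)
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]         # incremental mean update
    return max(actions, key=lambda x: q[x])

# invented action space: (layers to freeze, learning rate)
actions = [(0, 1e-3), (2, 1e-4), (4, 1e-5)]

def evaluate(action, rng):
    """Noisy stand-in for training + validation F1 under a configuration."""
    base = {(0, 1e-3): 0.75, (2, 1e-4): 0.95, (4, 1e-5): 0.80}[action]
    return base + rng.uniform(-0.02, 0.02)

best = rl_guided_tuning(actions, evaluate)
```

With a clear reward gap, the agent settles on the strongest configuration; in the real system each "episode" is a (much more expensive) fine-tuning evaluation, which is why guiding it intelligently matters.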

19 pages, 286 KB  
Article
Designing Co-Creative Systems: Five Paradoxes in Human–AI Collaboration
by Zainab Salma, Raquel Hijón-Neira and Celeste Pizarro
Information 2025, 16(10), 909; https://doi.org/10.3390/info16100909 - 17 Oct 2025
Viewed by 754
Abstract
The rapid integration of generative artificial intelligence (AI) into creative workflows is transforming design from a human-driven activity into a synergistic process between humans and AI systems. Yet, most current tools still operate as linear “executors” of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity. Addressing this gap, this article introduces a conceptual framework of five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—as core design tensions that shape human–AI co-creative systems. Rather than treating these tensions as problems to solve, we argue they should be understood as design drivers that can guide the creation of next-generation co-creative environments. Through a critical synthesis of existing literature, we show how current executor-based AI tools (e.g., Microsoft 365 Copilot, Midjourney) fail to support non-linear exploration, refinement, and human creative agency. This study contributes a novel theoretical lens for critically analyzing existing systems and a generative framework for designing human–AI collaboration environments that augment, rather than replace, human creative agency. Full article
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
22 pages, 8968 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Viewed by 273
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation with conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming and skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, existing comparisons between our proposal and conventional tools are not sufficient to show its superiority, particularly in terms of interaction, authoring performance, and cognitive workload; our tool uses 6DoF device movement for spatial input, while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35, with different levels of LAR development experience, to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster creation with 60% lower cognitive load, whereas the desktop tool required greater mental effort for manual data input and object verification. Full article
(This article belongs to the Section Information Applications)
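The study above measures cognitive workload with NASA-TLX. As a hedged illustration of how such a score is computed (the standard weighted procedure, not the authors' code; all ratings and tallies below are made up), each of the six subscales is rated 0–100 and weighted by how often it was chosen in the 15 pairwise comparisons:

```python
# Hypothetical NASA-TLX scoring sketch. Standard procedure: six subscales
# rated 0-100, each weighted by its tally from 15 pairwise comparisons;
# overall workload = sum(rating * tally) / 15.
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx(ratings, tallies):
    """ratings: subscale -> 0..100 score; tallies: subscale -> times chosen (sum to 15)."""
    assert sum(tallies.values()) == 15, "expected tallies from 15 pairwise comparisons"
    return sum(ratings[s] * tallies[s] for s in SUBSCALES) / 15.0

# Illustrative values for one participant:
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
tallies = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
overall = nasa_tlx(ratings, tallies)
```

A per-condition comparison (mobile vs. desktop tool) would then compare these overall scores across participants.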

14 pages, 1149 KB  
Article
Modality Information Aggregation Graph Attention Network with Adversarial Training for Multi-Modal Knowledge Graph Completion
by Hankiz Yilahun, Elyar Aili, Seyyare Imam and Askar Hamdulla
Information 2025, 16(10), 907; https://doi.org/10.3390/info16100907 - 16 Oct 2025
Viewed by 260
Abstract
Multi-modal knowledge graph completion (MMKGC) aims to complete knowledge graphs by integrating structural information with multi-modal (e.g., visual, textual, and numerical) features and leveraging cross-modal reasoning within a unified semantic space to infer and supplement missing factual knowledge. Current MMKGC methods have advanced in integrating multi-modal information but overlook the imbalance in modality importance for target entities. Treating all modalities equally dilutes critical semantics and amplifies irrelevant information, which in turn limits the semantic understanding and predictive performance of the model. To address these limitations, we propose a modality information aggregation graph attention network with adversarial training for multi-modal knowledge graph completion (MIAGAT-AT). MIAGAT-AT focuses on hierarchically modeling complex cross-modal interactions. By combining a multi-head attention mechanism with modality-specific projections, it precisely captures global semantic dependencies and dynamically adjusts the weight of each modality embedding according to its importance, thereby improving cross-modal information fusion. Moreover, using random noise and multi-layer residual blocks, adversarial training generates high-quality multi-modal feature representations, effectively enhancing information from under-represented modalities. Experimental results demonstrate that our approach significantly outperforms 18 existing baselines and sets a strong benchmark across three distinct datasets. Full article
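The core idea of importance-weighted modality fusion can be sketched as follows. This is our reading of the general technique, not the MIAGAT-AT implementation: each modality embedding is scored against the entity's structural embedding, and a softmax over those scores re-weights the fusion so that informative modalities dominate.

```python
# Illustrative modality-importance weighting (not the paper's code).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_modalities(entity, modalities):
    """entity: (d,) structural embedding; modalities: dict name -> (d,) embedding."""
    names = list(modalities)
    scores = np.array([entity @ modalities[n] for n in names])  # relevance of each modality
    weights = softmax(scores)                                   # learned-importance stand-in
    fused = sum(w * modalities[n] for w, n in zip(weights, names))
    return fused, dict(zip(names, weights))

rng = np.random.default_rng(0)
entity = rng.normal(size=8)
mods = {"visual": rng.normal(size=8),
        "textual": rng.normal(size=8),
        "numeric": rng.normal(size=8)}
fused, weights = fuse_modalities(entity, mods)
```

In the paper this weighting is done per attention head with modality-specific projection matrices; the sketch collapses that to a single dot-product score for clarity.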

39 pages, 1709 KB  
Article
Harnessing Machine Learning to Analyze Renewable Energy Research in Latin America and the Caribbean
by Javier De La Hoz-M, Edwan A. Ariza-Echeverri, John A. Taborda, Diego Vergara and Izabel F. Machado
Information 2025, 16(10), 906; https://doi.org/10.3390/info16100906 - 16 Oct 2025
Viewed by 459
Abstract
The transition to renewable energy is essential for mitigating climate change and promoting sustainable development, particularly in Latin America and the Caribbean (LAC). Despite its vast potential, the region faces structural and economic challenges that hinder a sustainable energy transition. Understanding scientific production in this field is key to shaping policy, investment, and technological progress. The primary objective of this study is to conduct a large-scale, data-driven analysis of renewable energy research in LAC, mapping its thematic evolution, collaboration networks, and key research trends over the past three decades. To achieve this, machine learning-based topic modeling and network analysis were applied to examine research trends in renewable energy in LAC. A dataset of 18,780 publications (1994–2024) from Scopus and Web of Science was analyzed using Latent Dirichlet Allocation (LDA) to uncover thematic structures. Network analysis assessed collaboration patterns and regional integration in research. Findings indicate a growing focus on solar, wind, and bioenergy advancements, alongside increasing attention to climate change policies, energy storage, and microgrid optimization. Artificial intelligence (AI) applications in energy management are emerging, mirroring global trends. However, research disparities persist, with Brazil, Mexico, and Chile leading output while smaller nations remain underrepresented. International collaborations, especially with North America and Europe, play a crucial role in research development. Renewable energy research supports Sustainable Development Goals (SDGs) 7 (Affordable and Clean Energy) and 13 (Climate Action). Despite progress, challenges remain in translating research into policy and addressing governance, financing, and socio-environmental factors. AI-driven analytics offer opportunities for improved energy planning. Strengthening regional collaboration, increasing research investment, and integrating AI into policy frameworks will be crucial for advancing the energy transition in LAC. This study provides evidence-based insights for policymakers, researchers, and industry leaders. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
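The kind of LDA topic modeling the study applies can be sketched on a toy corpus (five invented titles, not the authors' 18,780-document dataset), assuming scikit-learn's implementation:

```python
# Minimal LDA topic-modeling sketch with scikit-learn (illustrative corpus).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "solar photovoltaic panel efficiency grid",
    "wind turbine offshore grid integration",
    "bioenergy biomass sugarcane ethanol brazil",
    "solar irradiance forecasting machine learning",
    "wind farm layout optimization",
]

X = CountVectorizer().fit_transform(docs)          # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)                      # per-document topic mixtures, rows sum to 1
```

At the study's scale, `n_components` would be chosen by coherence or perplexity, and the resulting topic mixtures feed the trend and network analyses.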

23 pages, 506 KB  
Review
Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education
by Promethi Das Deep, William D. Edgington, Nitu Ghosh and Md. Shiblur Rahaman
Information 2025, 16(10), 905; https://doi.org/10.3390/info16100905 - 16 Oct 2025
Viewed by 1663
Abstract
The rapid rise of generative AI tools such as ChatGPT has prompted significant shifts in how higher education institutions approach academic integrity. Many universities have implemented AI detection tools like Turnitin AI, GPTZero, Copyleaks, and ZeroGPT to identify AI-generated content in student work. This qualitative evidence synthesis draws on peer-reviewed journal articles published between 2021 and 2024 to evaluate the effectiveness, limitations, and ethical implications of AI detection tools in academic settings. While AI detectors offer scalable solutions, they frequently produce false positives and lack transparency, especially for multilingual or non-native English speakers. Ethical concerns surrounding surveillance, consent, and fairness are central to the discussion. The review also highlights gaps in institutional policies, inconsistent enforcement, and limited faculty training. It calls for a shift away from punitive approaches toward AI-integrated pedagogies that emphasize ethical use, student support, and inclusive assessment design. Emerging innovations such as watermarking and hybrid detection systems are discussed, though implementation challenges persist. Overall, the findings suggest that while AI detection tools play a role in preserving academic standards, institutions must adopt balanced, transparent, and student-centered strategies that align with evolving digital realities and uphold academic integrity without compromising rights or equity. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

18 pages, 2307 KB  
Article
Can We Trust AI Content Detection Tools for Critical Decision-Making?
by Tadesse G. Wakjira, Ibrahim A. Tijani, M. Shahria Alam, Mustafa Mashal and Mohammad Khalad Hasan
Information 2025, 16(10), 904; https://doi.org/10.3390/info16100904 - 16 Oct 2025
Viewed by 1132
Abstract
The rapid integration of artificial intelligence (AI) in content generation has encouraged the development of AI detection tools aimed at distinguishing between human- and AI-authored texts. These tools are increasingly adopted not only in academia but also in sensitive decision-making contexts, including candidate screening by hiring agencies in government and private sectors. This extensive reliance raises serious questions about their reliability, fairness, and appropriateness for high-stakes applications. This study evaluates the performance of six widely used AI content detection tools, namely Undetectable AI, Zerogpt.com, Zerogpt.net, Brandwell.ai, Gowinston.ai, and Crossplag, referred to as Tools A through F in this study. The assessment focused on the ability of the tools to identify human versus AI-generated content across multiple domains. Verified human-authored texts were gathered from reputable sources, including university websites, pre-ChatGPT publications in Nature and Science, government portals, and media outlets (e.g., BBC, US News). Complementary datasets of AI-generated texts were produced using ChatGPT-4o, encompassing coherent essays, nonsensical passages, and hybrid texts with grammatical errors, to test tool robustness. The results reveal significant performance limitations. The accuracy ranged from 14.3% (Tool B) to 71.4% (Tool D), with the precision and recall metrics showing inconsistent detection capabilities. The tools were also highly sensitive to minor textual modifications, where slight changes in phrasing could flip classifications between “AI-generated” and “human-authored.” Overall, the current AI detection tools lack the robustness and reliability needed for enforcing academic integrity or making employment-related decisions. The findings highlight an urgent need for more transparent, accurate, and context-aware frameworks before these tools can be responsibly incorporated into critical institutional or societal processes. Full article
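The accuracy, precision, and recall figures reported above are standard binary-classification metrics. As a self-contained sketch (the labels below are invented, not the study's data), treating "AI-generated" as the positive class:

```python
# Standard detection metrics for a binary AI-vs-human task (toy labels).
def detection_metrics(y_true, y_pred, positive="ai"):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # of flagged texts, how many were AI
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # of AI texts, how many were caught
    return accuracy, precision, recall

y_true = ["ai", "ai", "human", "human", "ai", "human", "human"]
y_pred = ["ai", "human", "human", "ai", "ai", "human", "human"]
acc, prec, rec = detection_metrics(y_true, y_pred)
```

Note that for this task a false positive (human work flagged as AI) carries the ethical cost the abstract emphasizes, so precision matters as much as raw accuracy.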

15 pages, 2861 KB  
Article
Sustainable Real-Time NLP with Serverless Parallel Processing on AWS
by Chaitanya Kumar Mankala and Ricardo J. Silva
Information 2025, 16(10), 903; https://doi.org/10.3390/info16100903 - 15 Oct 2025
Viewed by 607
Abstract
This paper proposes a scalable serverless architecture for real-time natural language processing (NLP) on large datasets using Amazon Web Services (AWS). The framework integrates AWS Lambda, Step Functions, and S3 to enable fully parallel sentiment analysis with Transformer-based models such as DistilBERT, RoBERTa, and ClinicalBERT. By containerizing inference workloads and orchestrating parallel execution, the system eliminates the need for dedicated servers while dynamically scaling to workload demand. Experimental evaluation on the IMDb Reviews dataset demonstrates substantial efficiency gains: parallel execution achieved a 6.07× reduction in wall-clock duration, an 81.2% reduction in total computing time and energy consumption, and a 79.1% reduction in variable costs compared to sequential processing. These improvements directly translate into a smaller carbon footprint, highlighting the sustainability benefits of serverless architectures for AI workloads. The findings show that the proposed framework is model-independent and provides consistent advantages across diverse Transformer variants. This work illustrates how cloud-native, event-driven infrastructures can democratize access to large-scale NLP by reducing cost, processing time, and environmental impact while offering a reproducible pathway for real-world research and industrial applications. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
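The reported gains (6.07× wall-clock speedup, 81.2% compute reduction) can be related to a simple sharding model. The toy calculation below, with invented numbers, shows the basic trade-off: fanning a job out over N Lambda shards divides wall-clock time by roughly N but adds per-shard startup overhead to the billed compute. The paper's measured compute and cost reductions additionally reflect model-loading and billing effects not captured here.

```python
# Back-of-the-envelope model of parallel serverless execution (illustrative
# numbers, not the paper's measurements).
def parallel_gains(t_seq, n_shards, overhead_per_shard):
    """t_seq: sequential runtime (s); returns (wall-clock speedup, billed compute-seconds)."""
    t_wall = t_seq / n_shards + overhead_per_shard      # ideal split plus startup overhead
    billed = t_seq + n_shards * overhead_per_shard      # sum of per-shard billed durations
    return t_seq / t_wall, billed

speedup, billed = parallel_gains(t_seq=600.0, n_shards=8, overhead_per_shard=5.0)
```

With 600 s of sequential work, 8 shards, and 5 s overhead each, the model gives a 7.5× speedup for 640 billed compute-seconds, illustrating why speedup and billed compute move independently.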

28 pages, 2245 KB  
Article
GCHS: A Custodian-Aware Graph-Based Deep Learning Model for Intangible Cultural Heritage Recommendation
by Wei Xiao, Bowen Yu and Hanyue Zhang
Information 2025, 16(10), 902; https://doi.org/10.3390/info16100902 - 15 Oct 2025
Viewed by 388
Abstract
Digital platforms for intangible cultural heritage (ICH) function as vibrant electronic marketplaces, yet they grapple with content overload, high search costs, and under-leveraged social networks of heritage custodians. To address these electronic-commerce challenges, we present GCHS, a custodian-aware, graph-based deep learning model that enhances ICH recommendation by uniting three critical signals: custodians’ social relationships, user interest profiles, and content metadata. Leveraging an attention mechanism, GCHS dynamically prioritizes influential custodians and resharing behaviors to streamline user discovery and engagement. We first characterize ICH-specific propagation patterns, e.g., custodians’ social influence, heterogeneous user interests, and content co-consumption, and then encode these factors within a collaborative graph framework. Evaluation on a real-world ICH dataset demonstrates that GCHS delivers improvements in Top-N recommendation accuracy over leading benchmarks and significantly outperforms them in next-N sequence prediction. By integrating social, cultural, and transactional dimensions, our approach not only drives more effective digital commerce interactions around heritage content but also supports sustainable cultural dissemination and stakeholder participation. This work advances electronic-commerce research by illustrating how graph-based deep learning can optimize content discovery, personalize user experience, and reinforce community networks in digital heritage ecosystems. Full article
(This article belongs to the Section Artificial Intelligence)

24 pages, 527 KB  
Article
Estimating Weather Effects on Well-Being and Mobility with Multi-Source Longitudinal Data
by Davide Marzorati, Francesca Dalia Faraci and Tiziano Gerosa
Information 2025, 16(10), 901; https://doi.org/10.3390/info16100901 - 15 Oct 2025
Viewed by 309
Abstract
Understanding the influence of weather on human well-being and mobility is essential to promoting healthier lifestyles. In this study we employ data collected from 151 participants over a continuous 30-day period in Switzerland to examine the effects of weather on well-being and mobility. Physiological data were retrieved through wearable devices, while mobility was automatically tracked through Google Location History, enabling detailed analysis of participants’ mobility behaviors. Mixed effects linear models were used to estimate the effects of temperature, precipitation, and sunshine duration on well-being and mobility while controlling for potential socio-demographic confounders. In this work, we demonstrate the feasibility of combining multi-source physiological and location data for environmental health research. Our results show small but significant effects of weather on several well-being outcomes (activity, sleep, and stress), while mobility was mostly affected by the level of precipitation. In line with previous research, our findings confirm that normal weather fluctuations exert significant but moderate effects on health-related behavior, highlighting the need to shift research focus toward extreme weather variations that lie beyond typical seasonal ranges. Given the potentially severe consequences of such extremes for public health and health-care systems, this shift will help identify more consistent effects, thereby informing targeted interventions and policy planning. Full article
(This article belongs to the Section Biomedical Information and Health)
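The mixed-effects specification described above (fixed weather effects with participant-level random intercepts) can be sketched with statsmodels on synthetic data. Everything below — variable names, effect sizes, and the `steps` outcome — is invented for illustration; it is not the study's dataset or exact model.

```python
# Sketch of a mixed-effects model: fixed effects for weather, random
# intercept per participant (synthetic data, assumed effect sizes).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj, n_days = 20, 30
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_days),
    "temp": rng.normal(18, 6, n_subj * n_days),        # daily temperature (C)
    "precip": rng.exponential(2.0, n_subj * n_days),   # daily precipitation (mm)
})
subj_effect = rng.normal(0, 1.0, n_subj)[df["subject"]]  # per-person baseline
df["steps"] = (8000 + 40 * df["temp"] - 120 * df["precip"]
               + 500 * subj_effect + rng.normal(0, 300, len(df)))

model = smf.mixedlm("steps ~ temp + precip", df, groups=df["subject"]).fit()
```

The fitted fixed-effect coefficients recover the planted effects (about +40 steps per degree, about -120 per mm of rain), which is the form of estimate the study reports for each weather variable and outcome.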

16 pages, 3157 KB  
Article
ADR: Attention Head Detection and Reweighting Enhance RAG Performance in a Positional-Encoding-Free Paradigm
by Mingwei Wang, Xiaobo Li, Qian Zeng, Xingbang Liu, Minghao Yang and Zhichen Jia
Information 2025, 16(10), 900; https://doi.org/10.3390/info16100900 - 15 Oct 2025
Viewed by 364
Abstract
Retrieval-augmented generation (RAG) has established a new search paradigm, in which large language models integrate external resources to compensate for their inherent knowledge limitations. However, limited context awareness reduces the performance of large language models in RAG tasks. Existing solutions incur additional time and memory overhead and depend on specific positional encodings. In this paper, we propose Attention Head Detection and Reweighting (ADR), a lightweight and general framework. Specifically, we employ a recognition task to identify RAG-suppressing heads that limit the model’s context awareness. We then reweight their outputs with learned coefficients to mitigate the influence of these RAG-suppressing heads. After training, the weights are fixed during inference, introducing no additional time overhead and remaining agnostic to the choice of positional embedding. Experiments on PetroAI further demonstrate that ADR enhances the context awareness of fine-tuned models. Full article

16 pages, 3660 KB  
Article
A Network Scanning Organization Discovery Method Based on Graph Convolutional Neural Network
by Pengfei Xue, Luhan Dong, Chenyang Wang, Cheng Huang and Jie Wang
Information 2025, 16(10), 899; https://doi.org/10.3390/info16100899 - 15 Oct 2025
Viewed by 259
Abstract
With the rapid development of network technology, the number of active IoT devices is growing quickly. Numerous network scanning organizations have emerged to scan and probe network assets around the clock. This greatly facilitates illegal cyberattacks and adversely affects cybersecurity. It is therefore important to discover and identify network scanning organizations on the Internet. Motivated by this, we propose a network scanning organization discovery method based on a graph convolutional neural network, which can effectively cluster network scanning organizations. First, we construct a network scanning attribute graph to represent the topological relationship between network scanning behaviors and targets. Then, we extract the deep feature relationships in the attribute graph via a graph convolutional neural network and perform clustering to obtain network scanning organizations. Finally, the effectiveness of the proposed method is experimentally verified, achieving an accuracy of 83.41% in identifying network scanning organizations. Full article
(This article belongs to the Special Issue Cyber Security in IoT)
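The GCN feature extraction step rests on the standard propagation rule H' = σ(D̂^(-1/2) Â D̂^(-1/2) H W) with Â = A + I. A minimal numerical sketch (toy graph and weights; the paper's actual architecture and features are not specified here):

```python
# One standard GCN layer on a toy 3-node graph (illustrative, not the
# paper's model): add self-loops, symmetrically normalize, propagate, ReLU.
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])               # adjacency with self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))       # D_hat^(-1/2)
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)  # ReLU activation

A = np.array([[0, 1, 0],                         # e.g., scanner-target links
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)                                    # one-hot initial node features
W = np.full((3, 2), 0.5)                         # toy weight matrix
H1 = gcn_layer(A, H, W)
```

Stacking such layers yields node embeddings in which nodes with similar scanning behavior land close together, which is what the subsequent clustering step exploits.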

19 pages, 4546 KB  
Article
LiDAR Dreamer: Efficient World Model for Autonomous Racing with Cartesian-Polar Encoding and Lightweight State-Space Cells
by Myeongjun Kim, Jong-Chan Park, Sang-Min Choi and Gun-Woo Kim
Information 2025, 16(10), 898; https://doi.org/10.3390/info16100898 - 14 Oct 2025
Viewed by 595
Abstract
Autonomous racing serves as a challenging testbed that exposes the limitations of perception-decision-control algorithms in extreme high-speed environments, revealing safety gaps not addressed in existing autonomous driving research. However, traditional control techniques (e.g., FGM and MPC) and reinforcement learning-based approaches (including model-free and Dreamer variants) struggle to simultaneously satisfy sample efficiency, prediction reliability, and real-time control performance, making them difficult to apply in actual high-speed racing environments. To address these challenges, we propose LiDAR Dreamer, a novel world model specialized for LiDAR sensor data. LiDAR Dreamer introduces three core techniques: (1) efficient point cloud preprocessing and encoding via Cartesian Polar Bar Charts, (2) Light Structured State-Space Cells (LS3C) that reduce RSSM parameters by 14.2% while preserving key dynamic information, and (3) a Displacement Covariance Distance divergence function, which enhances both learning stability and expressiveness. Experiments in PyBullet F1TENTH simulation environments demonstrate that LiDAR Dreamer achieves competitive performance across different track complexities. On the Austria track with complex corners, it reaches 90% of DreamerV3’s performance (1.14 vs. 1.27 progress) while using 81.7% fewer parameters. On the simpler Columbia track, while model-free methods achieve higher absolute performance, LiDAR Dreamer shows improved sample efficiency compared to baseline Dreamer models, converging faster to stable performance. The Treitlstrasse environment results demonstrate comparable performance to baseline methods. Furthermore, beyond the 14.2% RSSM parameter reduction, reward loss converged more stably without spikes, improving overall training efficiency and stability. Full article
(This article belongs to the Section Artificial Intelligence)

30 pages, 7599 KB  
Article
Strategic Launch Pad Positioning: Optimizing Drone Path Planning Through Genetic Algorithms
by Gregory Gasteratos and Ioannis Karydis
Information 2025, 16(10), 897; https://doi.org/10.3390/info16100897 - 14 Oct 2025
Viewed by 389
Abstract
Multi-drone operations face significant efficiency challenges when launch pad locations are predetermined without optimization, leading to suboptimal route configurations and increased travel distances. This research addresses launch pad positioning as a continuous planar location-routing problem (PLRP), developing a genetic algorithm framework integrated with multiple Traveling Salesman Problem (mTSP) solvers to optimize launch pad coordinates within operational areas. The methodology was evaluated through extensive experimentation involving over 17 million test executions across varying problem complexities and compared against brute-force optimization, Particle Swarm Optimization (PSO), and simulated annealing (SA) approaches. The results demonstrate that the genetic algorithm achieves 97–100% solution accuracy relative to exhaustive search methods while reducing computational requirements by four orders of magnitude, requiring an average of 527 iterations compared to 30,000 for PSO and 1000 for SA. Smart initialization strategies and adaptive termination criteria provide additional performance enhancements, reducing computational effort by 94% while maintaining 98.8% solution quality. Statistical validation confirms systematic improvements across all tested scenarios. This research establishes a validated methodological framework for continuous launch pad optimization in UAV operations, providing practical insights for real-world applications where both solution quality and computational efficiency are critical operational factors while acknowledging the simplified energy model limitations that warrant future research into more complex operational dynamics. Full article

26 pages, 6270 KB  
Article
Autonomous Navigation Approach for Complex Scenarios Based on Layered Terrain Analysis and Nonlinear Model
by Wenhe Chen, Leer Hua, Shuonan Shen, Yue Wang, Qi Pu and Xundiao Ma
Information 2025, 16(10), 896; https://doi.org/10.3390/info16100896 - 14 Oct 2025
Viewed by 367
Abstract
In complex scenarios, such as industrial parks and underground parking lots, efficient and safe autonomous navigation is essential for driverless operation and automatic parking. However, conventional modular navigation methods, especially the A* algorithm, suffer from excessive node traversal and short paths that bring vehicles dangerously close to obstacles. To address these issues, we propose an autonomous navigation approach based on a layered terrain cost map and a nonlinear predictive control model, which ensures real-time performance, safety, and reduced computational cost. The global planner applies a two-stage A* strategy guided by the hierarchical terrain cost map, improving efficiency and obstacle avoidance, while the local planner combines linear interpolation with nonlinear model predictive control to adaptively adjust the vehicle speed under varying terrain conditions. Experiments conducted in simulated and real underground parking scenarios demonstrate that the proposed method significantly improves the computational efficiency and navigation safety, outperforming the traditional A* algorithm and other baseline approaches in overall performance. Full article

19 pages, 3592 KB  
Article
Comparing AI-Assisted and Traditional Teaching in College English: Pedagogical Benefits and Learning Behaviors
by Changyi Li and Jiang Long
Information 2025, 16(10), 895; https://doi.org/10.3390/info16100895 - 14 Oct 2025
Viewed by 725
Abstract
In the era of artificial intelligence, higher education is embracing new opportunities for pedagogical innovation. This study investigates the impact of integrating AI into college English teaching, focusing on its role in enhancing students’ critical thinking and academic engagement. A controlled experiment compared AI-assisted instruction with traditional teaching, revealing that AI-supported learning improved overall English proficiency, especially in writing skills and among lower- and intermediate-level learners. Behavioral analysis showed that the quality of AI interaction—such as meaningful feedback adoption and autonomous revision—was more influential than mere usage frequency. Student feedback further suggested that AI-enhanced teaching stimulated motivation and self-efficacy while also raising concerns about potential overreliance and shallow engagement. These findings highlight both the promise and the limitations of AI in language education, underscoring the importance of teacher facilitation and thoughtful design of human–AI interaction to support deep and sustainable learning. Full article

24 pages, 2328 KB  
Review
Large Language Model Agents for Biomedicine: A Comprehensive Review of Methods, Evaluations, Challenges, and Future Directions
by Xiaoran Xu and Ravi Sankar
Information 2025, 16(10), 894; https://doi.org/10.3390/info16100894 - 14 Oct 2025
Viewed by 1263
Abstract
Large language model (LLM)-based agents are rapidly emerging as transformative tools across biomedical research and clinical applications. By integrating reasoning, planning, memory, and tool use capabilities, these agents go beyond static language models to operate autonomously or collaboratively within complex healthcare settings. This review provides a comprehensive survey of biomedical LLM agents, spanning their core system architectures, enabling methodologies, and real-world use cases such as clinical decision making, biomedical research automation, and patient simulation. We further examine emerging benchmarks designed to evaluate agent performance under dynamic, interactive, and multimodal conditions. In addition, we systematically analyze key challenges, including hallucinations, interpretability, tool reliability, data bias, and regulatory gaps, and discuss corresponding mitigation strategies. Finally, we outline future directions in areas such as continual learning, federated adaptation, robust multi-agent coordination, and human–AI collaboration. This review aims to establish a foundational understanding of biomedical LLM agents and provide a forward-looking roadmap for building trustworthy, reliable, and clinically deployable intelligent systems. Full article

20 pages, 425 KB  
Article
Data-Driven Event-Triggering Control of Discrete Time-Delay Systems
by Yifan Gong, Zhicheng Li and Yang Wang
Information 2025, 16(10), 893; https://doi.org/10.3390/info16100893 - 14 Oct 2025
Viewed by 292
Abstract
This paper investigates the data-driven event-triggering control of discrete time-delay systems. When enough data are available, the system parameters can be determined by identification methods and a model-based controller can be designed; with limited data, however, such methods do not yield an accurate model. The data-driven control method is introduced to address this issue. This paper classifies discrete-time systems with time delays into those with known delays and those with unknown delays. Controllers for both cases are designed from limited data, and stability is ensured by constructing improved Lyapunov functions. Two analyses are presented. For the known-delay case, a lifting method raises the system order and converts the time-delay system into a delay-free one, and the corresponding stabilization criterion is derived. For the unknown-delay case, a data-driven stabilization criterion is derived from the basic data-driven assumption. In addition, a dynamic event-triggering scheme is introduced, and the discussion of how its parameters can be chosen shows that it saves further computational resources. For each method, a Lyapunov function is constructed and the controller is obtained via linear matrix inequalities. Finally, a discrete time-delay system is used as an example to demonstrate the effectiveness of both methods, and the proposed dynamic event-triggering scheme is compared with those in other works to show that the proposed parameter selection method achieves better performance. Full article
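The lifting idea for a known delay can be made concrete. For x(k+1) = A x(k) + A_d x(k-d) + B u(k), stacking the augmented state z(k) = [x(k); x(k-1); …; x(k-d)] gives a delay-free system z(k+1) = F z(k) + G u(k). The sketch below constructs F and G for illustrative matrices; it shows the structure only, not the paper's controller synthesis.

```python
# Lift a discrete time-delay system x(k+1) = A x(k) + Ad x(k-d) + B u(k)
# into a delay-free augmented system z(k+1) = F z(k) + G u(k),
# where z(k) stacks x(k), x(k-1), ..., x(k-d).
import numpy as np

def lift(A, Ad, B, d):
    n, m = A.shape[0], B.shape[1]
    F = np.zeros((n * (d + 1), n * (d + 1)))
    F[:n, :n] = A                        # current-state dynamics
    F[:n, n * d:] = Ad                   # contribution of x(k-d)
    for i in range(d):                   # shift register: x(k-i) moves to the x(k-i-1) slot
        F[n * (i + 1):n * (i + 2), n * i:n * (i + 1)] = np.eye(n)
    G = np.zeros((n * (d + 1), m))
    G[:n, :] = B
    return F, G

# Illustrative system matrices (not from the paper):
A = np.array([[0.9, 0.1], [0.0, 0.8]])
Ad = np.array([[0.05, 0.0], [0.02, 0.05]])
B = np.array([[0.0], [1.0]])
F, G = lift(A, Ad, B, d=2)
```

Once lifted, standard delay-free stability and stabilization machinery applies to (F, G), at the cost of a state dimension that grows linearly with the delay d.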
