Journal Description
Information
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.6 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.9 (2024)
5-Year Impact Factor: 3.0 (2024)
Latest Articles
Deep Learning for Regular Raster Spatio-Temporal Prediction: An Overview
Information 2025, 16(10), 917; https://doi.org/10.3390/info16100917 - 19 Oct 2025
Abstract
The raster is the most common type of spatio-temporal data, and it can be either regularly or irregularly spaced. Spatio-temporal prediction on regular raster data is crucial for modelling and understanding dynamics in disparate realms, such as environment, traffic, astronomy, remote sensing, gaming and video processing, to name a few. Historically, statistical and classical machine learning methods have been used to model spatio-temporal data, and, in recent years, deep learning has shown outstanding results in regular raster spatio-temporal prediction. This work provides a self-contained review of effective deep learning methods for the prediction of regular raster spatio-temporal data. Each deep learning technique is described in detail, underlining its advantages and drawbacks. Finally, a discussion of relevant aspects and further developments in deep learning for regular raster spatio-temporal prediction is presented.
Full article
(This article belongs to the Special Issue New Deep Learning Approach for Time Series Forecasting, 2nd Edition)
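For readers new to the topic, the following is a minimal, hedged sketch (not from the survey) of the simplest setup it covers: a convolutional next-frame predictor for regularly spaced raster sequences. The layer sizes and synthetic tensors are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NextFrameCNN(nn.Module):
    """Toy model: predict frame t+1 from the T previous raster frames,
    stacked along the channel dimension."""
    def __init__(self, history: int = 4, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(history, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 1, kernel_size=3, padding=1),  # one output frame
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, history, H, W) -> (batch, 1, H, W)
        return self.net(frames)

model = NextFrameCNN()
seq = torch.randn(8, 4, 64, 64)       # synthetic raster history
target = torch.randn(8, 1, 64, 64)    # synthetic next frame
loss = nn.functional.mse_loss(model(seq), target)
loss.backward()
print(loss.item())
```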
Open Access Article
Reference-Less Evaluation of Machine Translation: Navigating Through the Resource-Scarce Scenarios
by
Archchana Sindhujan, Diptesh Kanojia and Constantin Orăsan
Information 2025, 16(10), 916; https://doi.org/10.3390/info16100916 - 18 Oct 2025
Abstract
Reference-less evaluation of machine translation, or Quality Estimation (QE), is vital for low-resource language pairs where high-quality references are often unavailable. In this study, we investigate segment-level QE methods, comparing encoder-based models such as MonoTransQuest, CometKiwi, and xCOMET with various decoder-based methods (Tower+, ALOPE, and other instruction-fine-tuned language models). Our work focuses on eight low-resource language pairs, with English on either the source or the target side of the translation. Results indicate that while fine-tuned encoder-based models remain strong performers across most low-resource language pairs, decoder-based Large Language Models (LLMs) show clear improvements when adapted through instruction tuning. Importantly, the ALOPE framework further enhances LLM performance beyond standard fine-tuning, demonstrating its effectiveness in narrowing the gap with encoder-based approaches and highlighting its potential as a viable strategy for low-resource QE. In addition, our experiments demonstrate that with adaptation techniques such as LoRA (Low-Rank Adapters) and quantization, decoder-based QE models can be trained with competitive GPU memory efficiency, though they generally require substantially more disk space than encoder-based models. Our findings highlight the effectiveness of encoder-based models for low-resource QE and suggest that advances in cross-lingual modeling will be key to improving LLM-based QE in the future.
Full article
(This article belongs to the Special Issue Machine Translation Quality Estimation: Advances and Emerging Challenges)
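As context for the adaptation techniques named in the abstract, here is a minimal sketch of LoRA-based fine-tuning for segment-level QE framed as score regression; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: LoRA adapters on a multilingual encoder for QE regression.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "xlm-roberta-base"  # assumption: any multilingual backbone works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(
    base, num_labels=1, problem_type="regression"  # one quality score
)

lora = LoraConfig(task_type="SEQ_CLS", r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["query", "value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only adapter (+head) weights train

# Source and MT hypothesis packed into one input; the separator is arbitrary.
batch = tokenizer(["source sentence ||| machine translation"],
                  return_tensors="pt")
print(model(**batch).logits)  # predicted segment-level quality score
```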
Open Access Review
Foundations for a Generic Ontology for Visualization: A Comprehensive Survey
by
Suzana Loshkovska and Panče Panov
Information 2025, 16(10), 915; https://doi.org/10.3390/info16100915 - 18 Oct 2025
Abstract
This paper surveys existing ontologies for visualization, which formally define and organize knowledge about visualization concepts, techniques, and tools. Although visualization is a mature field, the rapid growth of data complexity makes semantically rich frameworks increasingly essential for building intelligent and automated visualization systems. Current ontologies remain fragmented, heterogeneous, and inconsistent in terminology and modeling strategies, limiting their coverage and adoption. We present a systematic analysis of representative ontologies, highlighting shared themes and, most importantly, the gaps that hinder unification. These gaps provide the foundations for developing a comprehensive, generic ontology of visualization, aimed at unifying core concepts and supporting reuse across research and practice.
Full article
(This article belongs to the Special Issue Knowledge Representation and Ontology-Based Data Management)
Open Access Review
Convolutional Neural Network Acceleration Techniques Based on FPGA Platforms: Principles, Methods, and Challenges
by
Li Gao, Zhongqiang Luo and Lin Wang
Information 2025, 16(10), 914; https://doi.org/10.3390/info16100914 - 18 Oct 2025
Abstract
As the complexity of convolutional neural networks (CNNs) continues to increase, efficient deployment on computationally constrained hardware platforms has become a significant challenge. Against this backdrop, field-programmable gate arrays (FPGAs) have emerged as a promising CNN acceleration platform due to their inherent energy efficiency, reconfigurability, and parallel processing capabilities. This paper establishes a systematic analytical framework to explore CNN optimization strategies on FPGAs from both algorithmic and hardware perspectives. It emphasizes co-design methodologies between algorithms and hardware, extending these concepts to other embedded system applications. Furthermore, the paper summarizes current performance evaluation frameworks to assess the effectiveness of acceleration schemes comprehensively. Finally, building upon existing work, it identifies key challenges in this field and outlines future research directions.
Full article
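One algorithm-hardware co-design technique commonly covered in such surveys is fixed-point weight quantization, which shrinks multipliers and on-chip memory on FPGAs. Below is a minimal NumPy sketch of symmetric uniform quantization; the bit width and toy weights are assumptions, not the paper's scheme.

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8):
    """Uniform symmetric quantization: float weights -> signed integers
    plus one per-tensor scale (assumes bits <= 8 for int8 storage)."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 127 for 8-bit
    scale = np.abs(w).max() / qmax          # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.random.randn(64, 64).astype(np.float32)   # toy conv weights
q, scale = quantize_symmetric(w)
error = np.abs(w - q.astype(np.float32) * scale).mean()
print(f"mean absolute quantization error: {error:.5f}")
```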
Open Access Article
When Technology Signals Trust: Blockchain vs. Traditional Cues in Cross-Border Cosmetic E-Commerce
by
Xiaoling Liu and Ahmad Yahya Dawod
Information 2025, 16(10), 913; https://doi.org/10.3390/info16100913 - 18 Oct 2025
Abstract
Using platform self-operation, customer reviews, and compensation commitments as traditional benchmarks, this study foregrounds blockchain traceability as a technology-enabled authenticity signal in cross-border cosmetic e-commerce (CBEC). Using an 8-scenario orthogonal experiment, we test a model in which perceived risk mediates the effects of authenticity signals on purchase intention. We probe blockchain's boundary conditions by examining its interactions with traditional signals. Our results show that blockchain is the only signal with a significant direct effect on purchase intention and that it also exerts an indirect effect by reducing perceived risk. While customer reviews show no consistent effect, self-operation and compensation influence purchase intention indirectly via risk reduction. Moderation tests indicate that blockchain is most effective in low-trust settings, i.e., when self-operation, reviews, or compensation safeguards are absent or weak, while this marginal impact declines when such safeguards are strong. These findings refine signaling theory by distinguishing a technology-backed signal from institutional and social signals and by positioning perceived risk as the central mechanism in CBEC cosmetics. Managerially, blockchain should serve as the anchor signal in high-risk contexts and as a reinforcing signal where traditional assurances already exist. Future work should extend to field/transactional data and additional signals (e.g., brand reputation, third-party certifications).
Full article
Open Access Article
Brain Network Analysis and Recognition Algorithm for MDD Based on Class-Specific Correlation Feature Selection
by
Zhengnan Zhang, Yating Hu, Jiangwen Lu and Yunyuan Gao
Information 2025, 16(10), 912; https://doi.org/10.3390/info16100912 - 17 Oct 2025
Abstract
Major Depressive Disorder (MDD) is a high-risk mental illness that severely affects individuals across all age groups. However, existing research lacks comprehensive analysis and utilization of brain topological features, making it challenging to reduce redundant connectivity while preserving depression-related biomarkers. This study proposes a brain network analysis and recognition algorithm based on class-specific correlation feature selection. Leveraging electroencephalogram monitoring as a more objective MDD detection tool, this study employs tensor sparse representation to reduce the dimensionality of functional brain network time-series data, extracting the most representative functional connectivity matrices. To mitigate the impact of redundant connections, a feature selection algorithm combining topologically aware maximum class-specific dynamic correlation and minimum redundancy is integrated, identifying an optimal feature subset that best distinguishes MDD patients from healthy controls. The selected features are then ranked by relevance and fed into a hybrid CNN-BiLSTM classifier. Experimental results demonstrate classification accuracies of 95.96% and 94.90% on the MODMA and PRED + CT datasets, respectively, significantly outperforming conventional methods. This study not only improves the accuracy of MDD identification but also enhances the clinical interpretability of feature selection results, offering novel perspectives for pathological MDD research and clinical diagnosis.
Full article
(This article belongs to the Section Artificial Intelligence)
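To illustrate the classifier family named in the abstract, here is a minimal sketch of a hybrid CNN-BiLSTM over a ranked feature sequence; the dimensions and the binary head are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Toy hybrid classifier: 1-D convolutions over a ranked feature
    sequence, then a bidirectional LSTM and a binary output head."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)  # MDD vs. healthy control

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x.unsqueeze(1))        # (batch, 16, seq_len / 2)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, seq_len / 2, 2*hidden)
        return self.head(h[:, -1])           # logits from the last step

model = CNNBiLSTM()
features = torch.randn(4, 128)   # 4 subjects, 128 selected features each
print(model(features).shape)     # torch.Size([4, 2])
```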
Open Access Article
DQMAF—Data Quality Modeling and Assessment Framework
by
Razan Al-Toq and Abdulaziz Almaslukh
Information 2025, 16(10), 911; https://doi.org/10.3390/info16100911 - 17 Oct 2025
Abstract
In today’s digital ecosystem, where millions of users interact with diverse online services and generate vast amounts of textual, transactional, and behavioral data, ensuring the trustworthiness of this information has become a critical challenge. Low-quality data—manifesting as incompleteness, inconsistency, duplication, or noise—not only undermines analytics and machine learning models but also exposes unsuspecting users to unreliable services, compromised authentication mechanisms, and biased decision-making processes. Traditional data quality assessment methods, largely based on manual inspection or rigid rule-based validation, cannot cope with the scale, heterogeneity, and velocity of modern data streams. To address this gap, we propose DQMAF (Data Quality Modeling and Assessment Framework), a generalized machine learning–driven approach that systematically profiles, evaluates, and classifies data quality to protect end-users and enhance the reliability of Internet services. DQMAF introduces an automated profiling mechanism that measures multiple dimensions of data quality—completeness, consistency, accuracy, and structural conformity—and aggregates them into interpretable quality scores. Records are then categorized into high, medium, and low quality, enabling downstream systems to filter or adapt their behavior accordingly. A distinctive strength of DQMAF lies in integrating profiling with supervised machine learning models, producing scalable and reusable quality assessments applicable across domains such as social media, healthcare, IoT, and e-commerce. The framework incorporates modular preprocessing, feature engineering, and classification components using Decision Trees, Random Forest, XGBoost, AdaBoost, and CatBoost to balance performance and interpretability. We validate DQMAF on a publicly available Airbnb dataset, showing its effectiveness in detecting and classifying data issues with high accuracy. The results highlight its scalability and adaptability for real-world big data pipelines, supporting user protection, document and text-based classification, and proactive data governance while improving trust in analytics and AI-driven applications.
Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
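A hedged sketch of the profiling idea described above: score each record on a few quality dimensions, aggregate into an interpretable score, and bucket into high/medium/low. The dimensions, weights, and thresholds here are illustrative assumptions, not DQMAF's actual definitions.

```python
import pandas as pd

def profile_quality(df: pd.DataFrame) -> pd.Series:
    """Toy per-record quality score from two illustrative dimensions:
    completeness (share of non-null fields) and conformity (a simple
    range check on one column). Weights are made up."""
    completeness = df.notna().mean(axis=1)
    conformity = df["price"].between(0, 10_000).astype(float)
    return 0.5 * completeness + 0.5 * conformity

records = pd.DataFrame({
    "name":  ["Loft A", None, "Studio C"],
    "price": [120.0, 95.0, -5.0],   # negative price violates conformity
})
score = profile_quality(records)
label = pd.cut(score, bins=[0, 0.5, 0.8, 1.0],
               labels=["low", "medium", "high"], include_lowest=True)
print(pd.DataFrame({"score": score, "label": label}))
```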
Open Access Article
Intrusion Detection in Industrial Control Systems Using Transfer Learning Guided by Reinforcement Learning
by
Jokha Ali, Saqib Ali, Taiseera Al Balushi and Zia Nadir
Information 2025, 16(10), 910; https://doi.org/10.3390/info16100910 - 17 Oct 2025
Abstract
Securing Industrial Control Systems (ICSs) is critical, but it is made challenging by the constant evolution of cyber threats and the scarcity of labeled attack data in these specialized environments. Standard intrusion detection systems (IDSs) often fail to adapt when transferred to new networks with limited data. To address this, this paper introduces an adaptive intrusion detection framework that combines a hybrid Convolutional Neural Network and Long Short-Term Memory (CNN-LSTM) model with a novel transfer learning strategy. We employ a Reinforcement Learning (RL) agent to intelligently guide the fine-tuning process, which allows the IDS to dynamically adjust settings such as layer freezing and learning rates in real time based on performance feedback. We evaluated our system in a realistic data-scarce scenario using only 50 labeled training samples. Our RL-guided model achieved a final F1-score of 0.9825, significantly outperforming a standard neural fine-tuning model (0.861) and a target baseline model (0.759). Analysis of the RL agent's behavior confirmed that it learned a balanced and effective policy for adapting the model to the target domain. We conclude that the proposed RL-guided approach creates a highly accurate and adaptive IDS that overcomes the limitations of static transfer learning methods. This dynamic fine-tuning strategy is a powerful and promising direction for building resilient cybersecurity defenses for critical infrastructure.
Full article
(This article belongs to the Section Information Systems)
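The RL-guided fine-tuning loop can be pictured with a minimal bandit-style sketch: an epsilon-greedy agent picks a (layers-to-freeze, learning-rate) configuration and is rewarded with validation F1. The action set and update rule are illustrative assumptions, not the paper's agent.

```python
import random

# Action space: (number of frozen layers, learning rate) pairs -- made up.
ACTIONS = [(k, lr) for k in (0, 2, 4) for lr in (1e-3, 1e-4)]

def run_episode(action):
    """Placeholder: in the real system this would fine-tune the CNN-LSTM
    with this configuration and return validation F1."""
    frozen, lr = action
    return random.uniform(0.7, 0.99)  # stand-in reward

q = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}
epsilon = 0.2

for step in range(50):
    a = random.choice(ACTIONS) if random.random() < epsilon \
        else max(q, key=q.get)
    f1 = run_episode(a)                 # reward = validation F1
    counts[a] += 1
    q[a] += (f1 - q[a]) / counts[a]     # incremental mean update

best = max(q, key=q.get)
print(f"best config: freeze {best[0]} layers, lr={best[1]}")
```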
Open Access Article
Designing Co-Creative Systems: Five Paradoxes in Human–AI Collaboration
by
Zainab Salma, Raquel Hijón-Neira and Celeste Pizarro
Information 2025, 16(10), 909; https://doi.org/10.3390/info16100909 - 17 Oct 2025
Abstract
The rapid integration of generative artificial intelligence (AI) into creative workflows is transforming design from a human-driven activity into a synergistic process between humans and AI systems. Yet, most current tools still operate as linear “executors” of user commands, which fundamentally clashes with the non-linear, iterative, and ambiguous nature of human creativity. Addressing this gap, this article introduces a conceptual framework of five irreducible paradoxes—ambiguity vs. precision, control vs. serendipity, speed vs. reflection, individual vs. collective, and originality vs. remix—as core design tensions that shape human–AI co-creative systems. Rather than treating these tensions as problems to solve, we argue they should be understood as design drivers that can guide the creation of next-generation co-creative environments. Through a critical synthesis of existing literature, we show how current executor-based AI tools (e.g., Microsoft 365 Copilot, Midjourney) fail to support non-linear exploration, refinement, and human creative agency. This study contributes a novel theoretical lens for critically analyzing existing systems and a generative framework for designing human–AI collaboration environments that augment, rather than replace, human creative agency.
Full article
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
Open Access Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by
Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation using conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming and skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, existing comparisons between our proposal and conventional tools are not sufficient to demonstrate its superiority, particularly in terms of interaction, authoring performance, and cognitive workload, where our tool uses 6DoF device movement for spatial input while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35 with different levels of LAR development experience to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster creation with 60% lower cognitive load, whereas the desktop tool required higher mental effort for manual data input and object verification.
Full article
(This article belongs to the Section Information Applications)
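For reference, cognitive workload in such studies is typically scored with the standard NASA-TLX weighted formula: six subscale ratings (0-100) combined with pairwise-comparison weights that sum to 15. The sketch below uses made-up ratings, not the study's data.

```python
# Standard NASA-TLX weighted workload; all values here are illustrative.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 30, "effort": 60, "frustration": 40}
weights = {"mental": 4, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 2}

assert sum(weights.values()) == 15       # pairwise comparisons yield 15 picks
tlx = sum(ratings[d] * weights[d] for d in ratings) / 15
print(f"overall workload: {tlx:.1f} / 100")
```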
Open Access Article
Modality Information Aggregation Graph Attention Network with Adversarial Training for Multi-Modal Knowledge Graph Completion
by
Hankiz Yilahun, Elyar Aili, Seyyare Imam and Askar Hamdulla
Information 2025, 16(10), 907; https://doi.org/10.3390/info16100907 - 16 Oct 2025
Abstract
Multi-modal knowledge graph completion (MMKGC) aims to complete knowledge graphs by integrating structural information with multi-modal (e.g., visual, textual, and numerical) features and leveraging cross-modal reasoning within a unified semantic space to infer and supplement missing factual knowledge. Current MMKGC methods have advanced in terms of integrating multi-modal information but have overlooked the imbalance in modality importance for target entities. Treating all modalities equally dilutes critical semantics and amplifies irrelevant information, which in turn limits the semantic understanding and predictive performance of the model. To address these limitations, we propose a modality information aggregation graph attention network with adversarial training for multi-modal knowledge graph completion (MIAGAT-AT). MIAGAT-AT focuses on hierarchically modeling complex cross-modal interactions. By combining the multi-head attention mechanism with modality-specific projection methods, it precisely captures global semantic dependencies and dynamically adjusts the weight of modality embeddings according to the importance of each modality, thereby optimizing cross-modal information fusion capabilities. Moreover, through the use of random noise and multi-layer residual blocks, the adversarial training generates high-quality multi-modal feature representations, thereby effectively enhancing information from imbalanced modalities. Experimental results demonstrate that our approach significantly outperforms 18 existing baselines and establishes a strong performance baseline across three distinct datasets.
Full article
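The modality-weighting idea can be sketched minimally: score each modality embedding against a learned query and fuse with softmax weights, so informative modalities dominate the entity representation. This toy module is an illustration under stated assumptions, not MIAGAT-AT itself.

```python
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    """Toy fusion: one shared projection scores each modality embedding
    against a learned query; softmax weights combine the modalities.
    (The paper uses modality-specific projections and multi-head attention.)"""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.query = nn.Parameter(torch.randn(dim))

    def forward(self, modalities: torch.Tensor):
        # modalities: (batch, n_modalities, dim)
        h = torch.tanh(self.proj(modalities))
        weights = torch.softmax(h @ self.query, dim=-1)      # (batch, n)
        fused = (weights.unsqueeze(-1) * modalities).sum(dim=1)
        return fused, weights

fusion = ModalityAttentionFusion()
emb = torch.randn(2, 4, 64)    # e.g. structural/visual/textual/numerical
fused, w = fusion(emb)
print(fused.shape, w[0])       # torch.Size([2, 64]) and per-modality weights
```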
Open Access Article
Harnessing Machine Learning to Analyze Renewable Energy Research in Latin America and the Caribbean
by
Javier De La Hoz-M, Edwan A. Ariza-Echeverri, John A. Taborda, Diego Vergara and Izabel F. Machado
Information 2025, 16(10), 906; https://doi.org/10.3390/info16100906 - 16 Oct 2025
Abstract
The transition to renewable energy is essential for mitigating climate change and promoting sustainable development, particularly in Latin America and the Caribbean (LAC). Despite its vast potential, the region faces structural and economic challenges that hinder a sustainable energy transition. Understanding scientific production in this field is key to shaping policy, investment, and technological progress. The primary objective of this study is to conduct a large-scale, data-driven analysis of renewable energy research in LAC, mapping its thematic evolution, collaboration networks, and key research trends over the past three decades. To achieve this, machine learning-based topic modeling and network analysis were applied to examine research trends in renewable energy in LAC. A dataset of 18,780 publications (1994–2024) from Scopus and Web of Science was analyzed using Latent Dirichlet Allocation (LDA) to uncover thematic structures. Network analysis assessed collaboration patterns and regional integration in research. Findings indicate a growing focus on solar, wind, and bioenergy advancements, alongside increasing attention to climate change policies, energy storage, and microgrid optimization. Artificial intelligence (AI) applications in energy management are emerging, mirroring global trends. However, research disparities persist, with Brazil, Mexico, and Chile leading output while smaller nations remain underrepresented. International collaborations, especially with North America and Europe, play a crucial role in research development. Renewable energy research supports Sustainable Development Goals (SDGs) 7 (Affordable and Clean Energy) and 13 (Climate Action). Despite progress, challenges remain in translating research into policy and addressing governance, financing, and socio-environmental factors. AI-driven analytics offer opportunities for improved energy planning. Strengthening regional collaboration, increasing research investment, and integrating AI into policy frameworks will be crucial for advancing the energy transition in LAC. This study provides evidence-based insights for policymakers, researchers, and industry leaders.
Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
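As a pointer to the method, here is a minimal scikit-learn LDA sketch on four made-up abstracts; the actual study fits 18,780 publications and tunes the topic count, which this illustration does not attempt.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [  # toy stand-ins for publication abstracts
    "solar photovoltaic generation forecasting for rural microgrids",
    "wind turbine siting and grid integration in coastal regions",
    "bioenergy from sugarcane residues and biofuel policy",
    "solar irradiance estimation with machine learning models",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)                     # document-term matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```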
Open Access Review
Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education
by
Promethi Das Deep, William D. Edgington, Nitu Ghosh and Md. Shiblur Rahaman
Information 2025, 16(10), 905; https://doi.org/10.3390/info16100905 - 16 Oct 2025
Abstract
The rapid rise of generative AI tools such as ChatGPT has prompted significant shifts in how higher education institutions approach academic integrity. Many universities have implemented AI detection tools like Turnitin AI, GPTZero, Copyleaks, and ZeroGPT to identify AI-generated content in student work. This qualitative evidence synthesis draws on peer-reviewed journal articles published between 2021 and 2024 to evaluate the effectiveness, limitations, and ethical implications of AI detection tools in academic settings. While AI detectors offer scalable solutions, they frequently produce false positives and lack transparency, especially for multilingual or non-native English speakers. Ethical concerns surrounding surveillance, consent, and fairness are central to the discussion. The review also highlights gaps in institutional policies, inconsistent enforcement, and limited faculty training. It calls for a shift away from punitive approaches toward AI-integrated pedagogies that emphasize ethical use, student support, and inclusive assessment design. Emerging innovations such as watermarking and hybrid detection systems are discussed, though implementation challenges persist. Overall, the findings suggest that while AI detection tools play a role in preserving academic standards, institutions must adopt balanced, transparent, and student-centered strategies that align with evolving digital realities and uphold academic integrity without compromising rights or equity.
Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
Open Access Article
Can We Trust AI Content Detection Tools for Critical Decision-Making?
by
Tadesse G. Wakjira, Ibrahim A. Tijani, M. Shahria Alam, Mustafa Mashal and Mohammad Khalad Hasan
Information 2025, 16(10), 904; https://doi.org/10.3390/info16100904 - 16 Oct 2025
Abstract
The rapid integration of artificial intelligence (AI) in content generation has encouraged the development of AI detection tools aimed at distinguishing between human- and AI-authored texts. These tools are increasingly adopted not only in academia but also in sensitive decision-making contexts, including candidate screening by hiring agencies in government and private sectors. This extensive reliance raises serious questions about their reliability, fairness, and appropriateness for high-stakes applications. This study evaluates the performance of six widely used AI content detection tools, namely Undetectable AI, Zerogpt.com, Zerogpt.net, Brandwell.ai, Gowinston.ai, and Crossplag, referred to as Tools A through F in this study. The assessment focused on the ability of the tools to identify human versus AI-generated content across multiple domains. Verified human-authored texts were gathered from reputable sources, including university websites, pre-ChatGPT publications in Nature and Science, government portals, and media outlets (e.g., BBC, US News). Complementary datasets of AI-generated texts were produced using ChatGPT-4o, encompassing coherent essays, nonsensical passages, and hybrid texts with grammatical errors, to test tool robustness. The results reveal significant performance limitations. The accuracy ranged from 14.3% (Tool B) to 71.4% (Tool D), with the precision and recall metrics showing inconsistent detection capabilities. The tools were also highly sensitive to minor textual modifications, where slight changes in phrasing could flip classifications between “AI-generated” and “human-authored.” Overall, the current AI detection tools lack the robustness and reliability needed for enforcing academic integrity or making employment-related decisions. The findings highlight an urgent need for more transparent, accurate, and context-aware frameworks before these tools can be responsibly incorporated into critical institutional or societal processes.
Full article
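The reported accuracy, precision, and recall can be reproduced on any labeled sample with standard metrics; the sketch below uses made-up detector outputs, not the study's data.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# 1 = "AI-generated", 0 = "human-authored"; illustrative labels only.
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
y_pred = [0, 1, 1, 0, 1, 1, 0, 1, 1, 1]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))  # of flagged, truly AI
print("recall   :", recall_score(y_true, y_pred))     # of AI texts, caught
```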
Open Access Article
Sustainable Real-Time NLP with Serverless Parallel Processing on AWS
by
Chaitanya Kumar Mankala and Ricardo J. Silva
Information 2025, 16(10), 903; https://doi.org/10.3390/info16100903 - 15 Oct 2025
Abstract
This paper proposes a scalable serverless architecture for real-time natural language processing (NLP) on large datasets using Amazon Web Services (AWS). The framework integrates AWS Lambda, Step Functions, and S3 to enable fully parallel sentiment analysis with Transformer-based models such as DistilBERT, RoBERTa, and ClinicalBERT. By containerizing inference workloads and orchestrating parallel execution, the system eliminates the need for dedicated servers while dynamically scaling to workload demand. Experimental evaluation on the IMDb Reviews dataset demonstrates substantial efficiency gains: parallel execution achieved a 6.07× reduction in wall-clock duration, an 81.2% reduction in total computing time and energy consumption, and a 79.1% reduction in variable costs compared to sequential processing. These improvements directly translate into a smaller carbon footprint, highlighting the sustainability benefits of serverless architectures for AI workloads. The findings show that the proposed framework is model-independent and provides consistent advantages across diverse Transformer variants. This work illustrates how cloud-native, event-driven infrastructures can democratize access to large-scale NLP by reducing cost, processing time, and environmental impact while offering a reproducible pathway for real-world research and industrial applications.
Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
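A hedged sketch of the fan-out pattern described above: a Lambda handler classifies one shard of texts, and a client invokes it asynchronously per shard. The function name, payload schema, and model choice are illustrative assumptions, not the authors' deployment.

```python
import json

# --- Lambda side (packaged as a container image with the model) ---------
from transformers import pipeline

classifier = pipeline("sentiment-analysis",
                      model="distilbert-base-uncased-finetuned-sst-2-english")

def handler(event, context):
    texts = event["texts"]                    # one shard of the dataset
    return {"results": classifier(texts)}

# --- Client side: fan out one async invocation per shard ----------------
import boto3

def fan_out(texts, shard_size=100, function_name="nlp-sentiment-shard"):
    client = boto3.client("lambda")
    for i in range(0, len(texts), shard_size):
        client.invoke(
            FunctionName=function_name,       # hypothetical function name
            InvocationType="Event",           # async, shards run in parallel
            Payload=json.dumps({"texts": texts[i:i + shard_size]}),
        )
```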
Open Access Article
GCHS: A Custodian-Aware Graph-Based Deep Learning Model for Intangible Cultural Heritage Recommendation
by
Wei Xiao, Bowen Yu and Hanyue Zhang
Information 2025, 16(10), 902; https://doi.org/10.3390/info16100902 - 15 Oct 2025
Abstract
Digital platforms for intangible cultural heritage (ICH) function as vibrant electronic marketplaces, yet they grapple with content overload, high search costs, and under-leveraged social networks of heritage custodians. To address these electronic-commerce challenges, we present GCHS, a custodian-aware, graph-based deep learning model that enhances ICH recommendation by uniting three critical signals: custodians' social relationships, user interest profiles, and content metadata. Leveraging an attention mechanism, GCHS dynamically prioritizes influential custodians and resharing behaviors to streamline user discovery and engagement. We first characterize ICH-specific propagation patterns, e.g., custodians' social influence, heterogeneous user interests, and content co-consumption, and then encode these factors within a collaborative graph framework. Evaluation on a real-world ICH dataset demonstrates that GCHS delivers improvements in Top-N recommendation accuracy over leading benchmarks and significantly outperforms them in next-N sequence prediction. By integrating social, cultural, and transactional dimensions, our approach not only drives more effective digital commerce interactions around heritage content but also supports sustainable cultural dissemination and stakeholder participation. This work advances electronic-commerce research by illustrating how graph-based deep learning can optimize content discovery, personalize user experience, and reinforce community networks in digital heritage ecosystems.
Full article
(This article belongs to the Section Artificial Intelligence)
Open Access Article
Estimating Weather Effects on Well-Being and Mobility with Multi-Source Longitudinal Data
by
Davide Marzorati, Francesca Dalia Faraci and Tiziano Gerosa
Information 2025, 16(10), 901; https://doi.org/10.3390/info16100901 - 15 Oct 2025
Abstract
Understanding the influence of weather on human well-being and mobility is essential to promoting healthier lifestyles. In this study we employ data collected from 151 participants over a continuous 30-day period in Switzerland to examine the effects of weather on well-being and mobility. Physiological data were retrieved through wearable devices, while mobility was automatically tracked through Google Location History, enabling detailed analysis of participants’ mobility behaviors. Mixed effects linear models were used to estimate the effects of temperature, precipitation, and sunshine duration on well-being and mobility while controlling for potential socio-demographic confounders. In this work, we demonstrate the feasibility of combining multi-source physiological and location data for environmental health research. Our results show small but significant effects of weather on several well-being outcomes (activity, sleep, and stress), while mobility was mostly affected by the level of precipitation. In line with previous research, our findings confirm that normal weather fluctuations exert significant but moderate effects on health-related behavior, highlighting the need to shift research focus toward extreme weather variations that lie beyond typical seasonal ranges. Given the potentially severe consequences of such extremes for public health and health-care systems, this shift will help identify more consistent effects, thereby informing targeted interventions and policy planning.
Full article
(This article belongs to the Section Biomedical Information and Health)
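The modeling approach can be sketched with statsmodels: a linear mixed effects model with a random intercept per participant and weather variables as fixed effects. The synthetic panel below is a stand-in, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in panel: 151 participants x 30 days, as in the study.
rng = np.random.default_rng(0)
n, days = 151, 30
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), days),
    "temperature": rng.normal(15, 6, n * days),     # daily mean, deg C
    "precip_mm":   rng.exponential(2, n * days),    # daily precipitation
})
df["steps"] = (8000 + 40 * df["temperature"] - 120 * df["precip_mm"]
               + rng.normal(0, 1500, len(df)))      # toy mobility outcome

# Random intercept per participant; weather terms as fixed effects.
model = smf.mixedlm("steps ~ temperature + precip_mm",
                    data=df, groups=df["participant"]).fit()
print(model.summary())
```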
Open Access Article
ADR: Attention Head Detection and Reweighting Enhance RAG Performance in a Positional-Encoding-Free Paradigm
by
Mingwei Wang, Xiaobo Li, Qian Zeng, Xingbang Liu, Minghao Yang and Zhichen Jia
Information 2025, 16(10), 900; https://doi.org/10.3390/info16100900 - 15 Oct 2025
Abstract
Retrieval-augmented generation (RAG) has established a new search paradigm, in which large language models integrate external resources to compensate for their inherent knowledge limitations. However, limited context awareness reduces the performance of large language models in RAG tasks. Existing solutions incur additional time and memory overhead and depend on specific positional encodings. In this paper, we propose Attention Head Detection and Reweighting (ADR), a lightweight and general framework. Specifically, we employ a recognition task to identify RAG-suppressing heads that limit the model’s context awareness. We then reweight their outputs with learned coefficients to mitigate the influence of these RAG-suppressing heads. After training, the weights are fixed during inference, introducing no additional time overhead and remaining agnostic to the choice of positional embedding. Experiments on PetroAI further demonstrate that ADR enhances the context awareness of fine-tuned models.
Full article
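The reweighting step can be pictured minimally: one learned scalar per attention head scales that head's output, and the coefficients are frozen after training. The toy shapes and the manually suppressed head below are illustrative assumptions, not ADR's trained values.

```python
import torch
import torch.nn as nn

class HeadReweighter(nn.Module):
    """Toy per-head output reweighting: one learned scalar per attention
    head scales that head's output before the usual concatenation."""
    def __init__(self, n_heads: int):
        super().__init__()
        self.coef = nn.Parameter(torch.ones(n_heads))  # learned, then frozen

    def forward(self, head_outputs: torch.Tensor) -> torch.Tensor:
        # head_outputs: (batch, n_heads, seq_len, head_dim)
        return head_outputs * self.coef.view(1, -1, 1, 1)

rw = HeadReweighter(n_heads=12)
with torch.no_grad():
    rw.coef[3] = 0.2          # suppress one (illustrative) RAG-suppressing head
out = rw(torch.randn(2, 12, 16, 64))
print(out.shape)              # torch.Size([2, 12, 16, 64])
```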
Open Access Article
A Network Scanning Organization Discovery Method Based on Graph Convolutional Neural Network
by
Pengfei Xue, Luhan Dong, Chenyang Wang, Cheng Huang and Jie Wang
Information 2025, 16(10), 899; https://doi.org/10.3390/info16100899 - 15 Oct 2025
Abstract
As network technology develops, the number of active IoT devices is growing rapidly. Numerous network scanning organizations have emerged to scan and probe network assets around the clock. This greatly facilitates illegal cyberattacks and adversely affects cybersecurity. Therefore, it is important to discover and identify network scanning organizations on the Internet. Motivated by this, we propose a network scanning organization discovery method based on a graph convolutional neural network, which can effectively cluster network scanning organizations. First, we construct a network scanning attribute graph to represent the topological relationship between network scanning behaviors and targets. Then, we extract the deep feature relationships in the attribute graph via a graph convolutional neural network and perform clustering to obtain network scanning organizations. Finally, the effectiveness of the proposed method is experimentally verified, with an accuracy of 83.41% for the identification of network scanning organizations.
Full article
(This article belongs to the Special Issue Cyber Security in IoT)
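A minimal sketch of the pipeline's core: one graph-convolution step over a symmetrically normalized adjacency, followed by clustering of the node embeddings. The toy scanner graph and untrained weights are assumptions for illustration only.

```python
import torch
from sklearn.cluster import KMeans

def gcn_layer(a_hat: torch.Tensor, x: torch.Tensor, w: torch.Tensor):
    """One graph-convolution step: propagate features over the normalized
    adjacency, then apply a linear map and a nonlinearity."""
    return torch.relu(a_hat @ x @ w)

# Toy scanner graph: 6 source nodes; an edge means "scanned the same target".
adj = torch.tensor([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=torch.float32)
a = adj + torch.eye(6)                         # add self-loops
d = torch.diag(a.sum(1).rsqrt())               # D^{-1/2}
a_hat = d @ a @ d                              # symmetric normalization

x = torch.randn(6, 8)                          # node attribute features
emb = gcn_layer(a_hat, x, torch.randn(8, 4))   # untrained, illustrative
labels = KMeans(n_clusters=2, n_init=10).fit_predict(emb.numpy())
print(labels)                                  # two scanning "organizations"
```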
Open Access Feature Paper Article
LiDAR Dreamer: Efficient World Model for Autonomous Racing with Cartesian-Polar Encoding and Lightweight State-Space Cells
by
Myeongjun Kim, Jong-Chan Park, Sang-Min Choi and Gun-Woo Kim
Information 2025, 16(10), 898; https://doi.org/10.3390/info16100898 - 14 Oct 2025
Abstract
Autonomous racing serves as a challenging testbed that exposes the limitations of perception-decision-control algorithms in extreme high-speed environments, revealing safety gaps not addressed in existing autonomous driving research. However, traditional control techniques (e.g., FGM and MPC) and reinforcement learning-based approaches (including model-free and Dreamer variants) struggle to simultaneously satisfy sample efficiency, prediction reliability, and real-time control performance, making them difficult to apply in actual high-speed racing environments. To address these challenges, we propose LiDAR Dreamer, a novel world model specialized for LiDAR sensor data. LiDAR Dreamer introduces three core techniques: (1) efficient point cloud preprocessing and encoding via Cartesian Polar Bar Charts, (2) Light Structured State-Space Cells (LS3C) that reduce RSSM parameters by 14.2% while preserving key dynamic information, and (3) a Displacement Covariance Distance divergence function, which enhances both learning stability and expressiveness. Experiments in PyBullet F1TENTH simulation environments demonstrate that LiDAR Dreamer achieves competitive performance across different track complexities. On the Austria track with complex corners, it reaches 90% of DreamerV3’s performance (1.14 vs. 1.27 progress) while using 81.7% fewer parameters. On the simpler Columbia track, while model-free methods achieve higher absolute performance, LiDAR Dreamer shows improved sample efficiency compared to baseline Dreamer models, converging faster to stable performance. The Treitlstrasse environment results demonstrate comparable performance to baseline methods. Furthermore, beyond the 14.2% RSSM parameter reduction, reward loss converged more stably without spikes, improving overall training efficiency and stability.
Full article
(This article belongs to the Section Artificial Intelligence)
Topics
Topic in
Electronics, Information, Mathematics, Sensors
Extended Reality: Models and Applications
Topic Editors: Moldoveanu Alin, Anca Morar, Robert Gabriel Lupu
Deadline: 31 October 2025
Topic in
Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in
Applied Sciences, Computers, Entropy, Information, MAKE, Systems
Opportunities and Challenges in Explainable Artificial Intelligence (XAI)
Topic Editors: Luca Longo, Mario Brcic, Sebastian Lapuschkin
Deadline: 31 January 2026
Topic in
Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Special Issues
Special Issue in
Information
New Applications in Multiple Criteria Decision Analysis, 3rd Edition
Guest Editor: Maria Carnero
Deadline: 30 October 2025
Special Issue in
Information
Exploring Traditional and AI-Driven Approaches on Knowledge Graphs and Semantic Web Technologies
Guest Editor: Charalampos Bratsas
Deadline: 30 October 2025
Special Issue in
Information
ICT, AI, and Assistive Technology for Accessible and Inclusive Education
Guest Editors: Silvio Pagliara, Katerina Mavrou
Deadline: 31 October 2025
Special Issue in
Information
Artificial Intelligence and Data Science for Smart Cities
Guest Editor: Bonny Banerjee
Deadline: 31 October 2025
Topical Collections
Topical Collection in
Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in
Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in
Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero