Journal Description
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.9 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Impact Factor: 2.9 (2024); 5-Year Impact Factor: 3.0 (2024)
Latest Articles
Continual Learning for Saudi-Dialect Offensive-Language Detection Under Temporal Linguistic Drift
Information 2026, 17(1), 99; https://doi.org/10.3390/info17010099 (registering DOI) - 18 Jan 2026
Abstract
Offensive-language detection systems that perform well at a given point in time often degrade as linguistic patterns evolve, particularly in dialectal Arabic social media, where new terms emerge and familiar expressions shift in meaning. This study investigates temporal linguistic drift in Saudi-dialect offensive-language detection through a systematic evaluation of continual-learning approaches. Building on the Saudi Offensive Dialect (SOD) dataset, we designed test scenarios incorporating newly introduced offensive terms, context-shifting expressions, and varying proportions of historical data to assess both adaptation and knowledge retention. Eight continual-learning configurations—Experience Replay (ER), Elastic Weight Consolidation (EWC), Low-Rank Adaptation (LoRA), and their combinations—were evaluated across five test scenarios. Results show that models without continual learning experience a 13.4-percentage-point decline in F1-macro on evolved patterns. In our experiments, Experience Replay achieved a relatively favorable balance, maintaining 0.812 F1-macro on historical data and 0.976 on contemporary patterns (KR = −0.035; AG = +0.264), though with increased memory and training time. EWC showed moderate retention (KR = −0.052) with comparable adaptation (AG = +0.255). On the SimuReal test set—designed with realistic class imbalance and only 5% drift terms—ER achieved 0.842 and EWC achieved 0.833, compared to the original model’s 0.817, representing modest improvements under realistic conditions. LoRA-based methods showed lower adaptation in our experiments, likely reflecting the specific LoRA configuration used in this study. Further investigation with alternative settings is warranted.
Full article
(This article belongs to the Special Issue Social Media Mining: Algorithms, Insights, and Applications)
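As context for the replay-based configurations evaluated above, the sketch below shows the core of Experience Replay: a small memory of past examples is mixed into each training batch so the model adapts to new patterns without forgetting old ones. This is a minimal PyTorch illustration; the buffer size, mixing ratio, and reservoir-sampling policy are assumptions, not the paper's configuration.

```python
# Minimal Experience Replay (ER) sketch for continual classification.
import random
import torch

class ReplayBuffer:
    """Reservoir-style memory of past (input, label) pairs."""
    def __init__(self, capacity=1000):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:  # reservoir sampling keeps a uniform sample of the stream
            i = random.randrange(self.seen)
            if i < self.capacity:
                self.data[i] = (x, y)

    def sample(self, k):
        return random.sample(self.data, min(k, len(self.data)))

def er_step(model, optimizer, loss_fn, new_batch, buffer, replay_k=16):
    """One update on new data mixed with replayed historical examples."""
    xs, ys = new_batch
    replay = buffer.sample(replay_k)
    if replay:
        xs = torch.cat([xs, torch.stack([x for x, _ in replay])])
        ys = torch.cat([ys, torch.stack([y for _, y in replay])])
    optimizer.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    optimizer.step()
    for x, y in zip(*new_batch):  # remember the new examples for later
        buffer.add(x, y)
    return loss.item()
```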
Open Access Article
Social Engineering Attacks Using Technical Job Interviews: Real-Life Case Analysis and AI-Assisted Mitigation Proposals
by Tomás de J. Mateo Sanguino
Information 2026, 17(1), 98; https://doi.org/10.3390/info17010098 (registering DOI) - 18 Jan 2026
Abstract
Technical job interviews have become a vulnerable environment for social engineering attacks, particularly when they involve direct interaction with malicious code. In this context, the present manuscript investigates an exploratory case study, aiming to provide an in-depth analysis of a single incident rather than seeking to generalize statistical evidence. The study examines a real-world covert attack conducted through a simulated interview, identifying the technical and psychological elements that contribute to its effectiveness, assessing the performance of artificial intelligence (AI) assistants in early detection and proposing mitigation strategies. To this end, a methodology was implemented that combines discursive reconstruction of the attack, code exploitation and forensic analysis. The experimental phase, primarily focused on evaluating 10 large language models (LLMs) against a fragment of obfuscated code, reveals that the malware initially evaded detection by 62 antivirus engines, while assistants such as GPT 5.1, Grok 4.1 and Claude Sonnet 4.5 successfully identified malicious patterns and suggested operational countermeasures. The discussion highlights how the apparent legitimacy of platforms like LinkedIn, Calendly and Bitbucket, along with time pressure and technical familiarity, act as catalysts for deception. Based on these findings, the study suggests that LLMs may play a role in the early detection of threats, offering a potentially valuable avenue to enhance security in technical recruitment processes by enabling the timely identification of malicious behavior. To the best of our knowledge, this represents the first academically documented case of its kind analyzed from an interdisciplinary perspective.
Full article
(This article belongs to the Special Issue Emerging Research in Artificial Intelligence for Code Analysis and Security)
Open Access Article
Fuzzy-Based MCDA Technique Applied in Multi-Risk Problems Involving Heatwave Risks in Pandemic Scenarios
by Rosa Cafaro, Barbara Cardone, Ferdinando Di Martino, Cristiano Mauriello and Vittorio Miraglia
Information 2026, 17(1), 97; https://doi.org/10.3390/info17010097 (registering DOI) - 18 Jan 2026
Abstract
Assessing the increased impacts/risks of urban heatwaves generated by stressors such as a pandemic period like the one experienced during COVID-19 is complicated by the lack of comprehensive information that would allow an analytical determination of the alteration produced on climate risks/impacts. At the same time, it is essential for decision makers to understand the complex interactions between climate risks and the environmental and socioeconomic conditions generated by pandemics in an urban context, where specific restrictions on citizens’ livability are in place to protect their health. This study aims to address this need by proposing a fuzzy multi-criteria decision-making framework in a GIS environment that intuitively allows experts to assess the increase in heatwave risk factors for the population generated by pandemics. This assessment is accomplished by varying the values in the pairwise comparison matrices of the criteria that contribute to the construction of physical and socioeconomic vulnerability, exposure, and the hazard scenario. The framework was tested to assess heatwave impacts/risks on the population in the study area, which includes the municipalities of the metropolitan city of Naples, Italy, an urban area with high residential density where numerous summer heatwaves have been recorded over the last decade. The findings indicate a rise in impacts/risks during pandemic times, particularly in municipalities with the greatest resident population density, situated close to Naples.
Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
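For readers unfamiliar with the pairwise-comparison step the abstract refers to, the sketch below derives criterion weights from a reciprocal comparison matrix using the AHP geometric-mean method and shows how a scenario that re-weights vulnerability shifts the priorities. The 3x3 matrix, the criterion ordering, and the perturbed values are illustrative assumptions, not values from the paper.

```python
# AHP-style criterion weighting from a pairwise comparison matrix.
import numpy as np

def ahp_weights(M):
    """Priority weights from a reciprocal pairwise comparison matrix."""
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])  # row geometric means
    return g / g.sum()

# Criteria: vulnerability, exposure, hazard (hypothetical ordering).
baseline = np.array([[1.0, 2.0, 3.0],
                     [1/2, 1.0, 2.0],
                     [1/3, 1/2, 1.0]])

# A pandemic scenario raises the relative importance of vulnerability.
pandemic = baseline.copy()
pandemic[0, 1], pandemic[1, 0] = 4.0, 1/4
pandemic[0, 2], pandemic[2, 0] = 5.0, 1/5

print("baseline weights:", ahp_weights(baseline).round(3))
print("pandemic weights:", ahp_weights(pandemic).round(3))
```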
Open Access Systematic Review
Dynamic Difficulty Adjustment in Serious Games: A Literature Review
by Lucia Víteková, Christian Eichhorn, Johanna Pirker and David A. Plecher
Information 2026, 17(1), 96; https://doi.org/10.3390/info17010096 (registering DOI) - 17 Jan 2026
Abstract
This systematic literature review analyzes the role of dynamic difficulty adaptation (DDA) in serious games (SGs) to provide an overview of current trends and identify research gaps. The purpose of the study is to contextualize how DDA is being employed in SGs to enhance their learning outcomes, effectiveness, and game enjoyment. The review included studies published over the past five years that implemented specific DDA methods within SGs. Publications were identified through Google Scholar (searched up to 10 November 2025) and screened for relevance, resulting in 75 relevant papers. No formal risk-of-bias assessment was conducted. These studies were analyzed by publication year, source, application domain, DDA type, and effectiveness. The results indicate a growing interest in adaptive SGs across domains, including rehabilitation and education, with DDA methods ranging from rule-based (e.g., fuzzy logic) and player modeling (using performance, physiological, or emotional metrics) to various machine learning techniques (reinforcement learning, genetic algorithms, neural networks). Newly emerging trends, such as the integration of generative artificial intelligence for DDA, were also identified. Evidence suggests that DDA can enhance learning outcomes and game experience, although study differences, limited evaluation metrics, and unexplored opportunities for adaptive SGs highlight the need for further research.
Full article
(This article belongs to the Special Issue Serious Games, Games for Learning and Gamified Apps)
Open Access Article
Machines Prefer Humans as Literary Authors: Evaluating Authorship Bias in Large Language Models
by Marco Rospocher, Massimo Salgaro and Simone Rebora
Information 2026, 17(1), 95; https://doi.org/10.3390/info17010095 - 16 Jan 2026
Abstract
Automata and artificial intelligence (AI) have long occupied a central place in cultural and artistic imagination, and the recent proliferation of AI-generated artworks has intensified debates about authorship, creativity, and human agency. Empirical studies show that audiences often perceive AI-generated works as less authentic or emotionally resonant than human creations, with authorship attribution strongly shaping esthetic judgments. Yet little attention has been paid to how AI systems themselves evaluate creative authorship. This study investigates how large language models (LLMs) evaluate literary quality under different framings of authorship—Human, AI, or Human+AI collaboration. Using a questionnaire-based experimental design, we prompted four instruction-tuned LLMs (ChatGPT 4, Gemini 2, Gemma 3, and LLaMA 3) to read and assess three short stories in Italian, originally generated by ChatGPT 4 in the narrative style of Roald Dahl. For each story × authorship condition × model combination, we collected 100 questionnaire completions, yielding 3600 responses in total. Across esthetic, literary, and inclusiveness dimensions, the stated authorship systematically conditioned model judgments: identical stories were consistently rated more favorably when framed as human-authored or human–AI co-authored than when labeled as AI-authored, revealing a robust negative bias toward AI authorship. Model-specific analyses further indicate distinctive evaluative profiles and inclusiveness thresholds across proprietary and open-source systems. Our findings extend research on attribution bias into the computational realm, showing that LLM-based evaluations reproduce human-like assumptions about creative agency and literary value. We publicly release all materials to facilitate transparency and future comparative work on AI-mediated literary evaluation.
Full article
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
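The questionnaire protocol can be pictured with a short sketch: the same story is rated repeatedly under each stated-authorship condition and the ratings are averaged. Here `query_model` is a stub standing in for a real LLM API call, and the conditions, prompt wording, and 1-7 scale are assumptions for illustration, not the paper's instrument.

```python
# Sketch of the authorship-framing experiment from the abstract.
import statistics

CONDITIONS = ["a human author", "an AI system", "a human-AI collaboration"]

def query_model(prompt: str) -> int:
    """Placeholder for an LLM API call; replace with a real client."""
    return 4  # canned neutral rating so the sketch runs end to end

def rate_story(story: str, condition: str, n_runs: int = 100) -> float:
    prompt = (f"The following story was written by {condition}.\n\n{story}\n\n"
              "Rate its literary quality from 1 (poor) to 7 (excellent). "
              "Answer with a single integer.")
    return statistics.mean(query_model(prompt) for _ in range(n_runs))

story = "..."  # one of the short stories under evaluation
for condition in CONDITIONS:
    print(condition, "->", rate_story(story, condition))
```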
Open Access Article
Information Modeling of Asymmetric Aesthetics Using DCGAN: A Data-Driven Approach to the Generation of Marbling Art
by Muhammed Fahri Unlersen and Hatice Unlersen
Information 2026, 17(1), 94; https://doi.org/10.3390/info17010094 - 15 Jan 2026
Abstract
Traditional Turkish marbling (Ebru) art is an intangible cultural heritage characterized by highly asymmetric, fluid, and non-reproducible patterns, making its long-term preservation and large-scale dissemination challenging. It is highly sensitive to environmental conditions, making it enormously difficult to mass produce while maintaining its original aesthetic qualities. A data-driven generative model is therefore required to create unlimited, high-fidelity digital surrogates that safeguard this UNESCO heritage against physical loss and enable large-scale cultural applications. This study introduces a deep generative modeling framework for the digital reconstruction of traditional Turkish marbling (Ebru) art using a Deep Convolutional Generative Adversarial Network (DCGAN). A dataset of 20,400 image patches, systematically derived from 17 original marbling works, was used to train the proposed model. The framework aims to mathematically capture the asymmetric, fluid, and stochastic nature of Ebru patterns, enabling the reproduction of their aesthetic structure in a digital medium. The generated images were evaluated using multiple quantitative and perceptual metrics, including Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Learned Perceptual Image Patch Similarity (LPIPS), and PRDC-based indicators (Precision, Recall, Density, Coverage). For experimental validation, the proposed DCGAN framework is additionally compared against a Vanilla GAN baseline trained under identical conditions, highlighting the advantages of convolutional architectures for modeling marbling textures. The results show that the DCGAN model achieved a high level of realism and diversity without mode collapse or overfitting, producing images that were perceptually close to authentic marbling works. In addition to the quantitative evaluation, expert qualitative assessment by a traditional Ebru artist confirmed that the model reproduced the organic textures, color dynamics, and compositional asymmetry characteristic of real marbling art. The proposed approach demonstrates the potential of deep generative models for the digital preservation, dissemination, and reinterpretation of intangible cultural heritage recognized by UNESCO.
Full article
(This article belongs to the Topic Advanced Development and Applications of AI-Generated Content (AIGC))
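For orientation, a minimal DCGAN generator of the kind used to synthesize image patches is sketched below in PyTorch. The latent size, channel widths, and 64x64 output are standard DCGAN defaults, not necessarily the paper's exact architecture.

```python
# Minimal DCGAN generator: upsamples a latent vector to a 64x64 patch.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, base=64, out_channels=3):
        super().__init__()
        self.net = nn.Sequential(
            # z: (N, z_dim, 1, 1) -> (N, base*8, 4, 4)
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),
            # -> (N, out_channels, 64, 64), pixel values in [-1, 1]
            nn.ConvTranspose2d(base, out_channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
fake = g(torch.randn(8, 100, 1, 1))  # 8 synthetic 64x64 patches
print(fake.shape)                    # torch.Size([8, 3, 64, 64])
```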
Open Access Article
Point Cloud Quality Assessment via Complexity-Driven Patch Sampling and Attention-Enhanced Swin-Transformer
by Xilei Shen, Qiqi Li, Renwei Tu, Yongqiang Bai, Di Ge and Zhongjie Zhu
Information 2026, 17(1), 93; https://doi.org/10.3390/info17010093 - 15 Jan 2026
Abstract
As an emerging immersive media format, point clouds (PC) inevitably suffer from distortions such as compression and noise, where even local degradations may severely impair perceived visual quality and user experience. It is therefore essential to accurately evaluate the perceived quality of PC. In this paper, a no-reference point cloud quality assessment (PCQA) method that uses complexity-driven patch sampling and an attention-enhanced Swin-Transformer is proposed to accurately assess the perceived quality of PC. Given that projected PC maps effectively capture distortions and that the quality-related information density varies significantly across local patches, a complexity-driven patch sampling strategy is proposed. By quantifying patch complexity, regions with higher information density are preferentially sampled to enhance subsequent quality-sensitive feature representation. Given that the indistinguishable response strengths between key and redundant channels during feature extraction may dilute effective features, an Attention-Enhanced Swin-Transformer is proposed to adaptively reweight critical channels, thereby improving feature extraction performance. Given that traditional regression heads typically use a single-layer linear mapping, which overlooks the heterogeneous importance of information across channels, a gated regression head is designed to enable adaptive fusion of global and statistical features via a statistics-guided gating mechanism. Experiments on the SJTU-PCQA dataset demonstrate that the proposed method consistently outperforms representative PCQA methods.
Full article
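The complexity-driven sampling idea can be illustrated compactly: score each patch of a projected map by a complexity measure and keep the highest-scoring ones. The sketch below uses gray-level entropy as the complexity proxy; the entropy measure, patch size, and number of kept patches are assumptions, not the paper's exact strategy.

```python
# Entropy-based patch sampling over a projected point-cloud map.
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of the patch's gray-level histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def sample_patches(img, patch=32, keep=8):
    """Return the `keep` non-overlapping patches with the highest entropy."""
    h, w = img.shape
    scored = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            scored.append((patch_entropy(img[i:i + patch, j:j + patch]), i, j))
    scored.sort(reverse=True)  # most complex (information-dense) first
    return [img[i:i + patch, j:j + patch] for _, i, j in scored[:keep]]

projection = np.random.rand(224, 224)  # stand-in for a projected PC map
patches = sample_patches(projection)
print(len(patches), patches[0].shape)
```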
Open Access Article
AI-Enhanced Modular Information Architecture for Cultural Heritage: Designing Cognitive-Efficient and User-Centered Experiences
by Fotios Pastrakis, Markos Konstantakis and George Caridakis
Information 2026, 17(1), 92; https://doi.org/10.3390/info17010092 - 15 Jan 2026
Abstract
Digital cultural heritage platforms face a dual challenge: preserving rich historical information while engaging an audience with declining attention spans. This paper addresses that challenge by proposing a modular information architecture designed to mitigate cognitive overload in cultural heritage tourism applications. We begin by examining evidence of diminishing sustained attention in digital user experience and its specific ramifications for cultural heritage sites, where dense content can overwhelm users. Grounded in cognitive load theory and principles of user-centered design, we outline a theoretical framework linking mental models, findability, and modular information architecture. We then present a user-centric modeling methodology that elicits visitor mental models and tasks (via card sorting, contextual inquiry, etc.), informing the specification of content components and semantic metadata (leveraging standards like Dublin Core and CIDOC-CRM). A visual framework is introduced that maps user tasks to content components, clusters these into UI components with progressive disclosure, and adapts them into screen instances suited to context, illustrated through a step-by-step walkthrough. Using this framework, we comparatively evaluate personalization and information structuring strategies in three platforms—TripAdvisor, Google Arts and Culture, and Airbnb Experiences—against criteria of cognitive load mitigation and user engagement. We also discuss how this modular architecture provides a structural foundation for human-centered, explainable AI–driven personalization and recommender services in cultural heritage contexts. The analysis reveals gaps in current designs (e.g., overwhelming content or passive user roles) and highlights best practices (such as tailored recommendations and progressive reveal of details). We conclude with implications for designing cultural heritage experiences that are cognitively accessible yet richly informative, summarizing contributions and suggesting future research in cultural UX, component-based design, and adaptive content delivery.
Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)
Open Access Article
Potential of Different Machine Learning Methods in Cost Estimation of High-Rise Construction in Croatia
by Ksenija Tijanić Štrok
Information 2026, 17(1), 91; https://doi.org/10.3390/info17010091 - 15 Jan 2026
Abstract
The fundamental goal of a construction project is to complete the construction phase within budget, but in practice, planned cost estimates are often exceeded. Overruns can result from insufficient project preparation and planning, changes during construction, the activation of risk events, etc. Moreover, construction costs are often calculated based on experience rather than scientifically grounded approaches. Given these challenges, this paper investigates the potential of several machine learning methods (linear regression, decision tree forest, support vector machine, and general regression neural network) for estimating construction costs. The methods were implemented on a database of recent high-rise construction projects in the Republic of Croatia. The results confirmed the potential of the selected methods; in particular, the support vector machine stands out in terms of accuracy metrics. The established machine learning models contribute to a deeper understanding of real construction costs, their optimization, and more effective cost management during the construction phase.
Full article
(This article belongs to the Section Artificial Intelligence)
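A minimal version of the support-vector-machine estimator that stood out in the comparison is sketched below with scikit-learn. The predictor names (gross floor area, storeys, duration) and the synthetic data are assumptions; the study fit its models on real Croatian high-rise project records.

```python
# SVR cost-estimation sketch on synthetic project features.
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Hypothetical predictors: gross floor area (m2), storeys, duration (months).
X = rng.uniform([500, 2, 6], [20000, 25, 48], size=(200, 3))
cost = 900 * X[:, 0] + 50000 * X[:, 1] + rng.normal(0, 2e5, 200)  # EUR

X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)
# Scale both features and the target so the RBF kernel behaves well.
model = TransformedTargetRegressor(
    regressor=make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
    transformer=StandardScaler(),
)
model.fit(X_tr, y_tr)
print("MAPE:", mean_absolute_percentage_error(y_te, model.predict(X_te)))
```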
Open Access Article
ARIA: An AI-Supported Adaptive Augmented Reality Framework for Cultural Heritage
by Markos Konstantakis and Eleftheria Iakovaki
Information 2026, 17(1), 90; https://doi.org/10.3390/info17010090 - 15 Jan 2026
Abstract
Artificial Intelligence (AI) is increasingly reshaping how cultural heritage institutions design and deliver digital visitor experiences, particularly through adaptive Augmented Reality (AR) applications. However, most existing AR deployments in museums and galleries remain static, rule-based, and insufficiently responsive to visitors’ contextual, behavioral, and emotional diversity. This paper presents ARIA (Augmented Reality for Interpreting Artefacts), a conceptual and architectural framework for AI-supported, adaptive AR experiences in cultural heritage settings. ARIA is designed to address current limitations in personalization, affect-awareness, and ethical governance by integrating multimodal context sensing, lightweight affect recognition, and AI-driven content personalization within a unified system architecture. The framework combines Retrieval-Augmented Generation (RAG) for controlled, knowledge-grounded narrative adaptation, continuous user modeling, and interoperable Digital Asset Management (DAM), while embedding Human-Centered Design (HCD) and Fairness, Accountability, Transparency, and Ethics (FATE) principles at its core. Emphasis is placed on accountable personalization, privacy-preserving data handling, and curatorial oversight of narrative variation. ARIA is positioned as a design-oriented contribution rather than a fully implemented system. Its architecture, data flows, and adaptive logic are articulated through representative museum use-case scenarios and a structured formative validation process including expert walkthrough evaluation and feasibility analysis, providing a foundation for future prototyping and empirical evaluation. The framework aims to support the development of scalable, ethically grounded, and emotionally responsive AR experiences for next-generation digital museology.
Full article
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)
Open Access Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation.
Full article
(This article belongs to the Section Artificial Intelligence)
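For background, the classic Retinex decomposition that the network generalizes can be written in a few lines: a smooth illumination estimate is divided out in the log domain, leaving reflectance. This single-scale sketch is context only, not the paper's quaternion-wavelet method; the Gaussian sigma and the naive illumination boost are assumptions.

```python
# Classic single-scale Retinex decomposition in the log domain.
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=30.0, eps=1e-6):
    """Split an image into illumination (smooth) and reflectance (detail)."""
    illumination = gaussian_filter(img, sigma=sigma) + eps
    reflectance = np.log(img + eps) - np.log(illumination)
    return illumination, reflectance

low_light = np.clip(np.random.rand(128, 128) * 0.2, 0, 1)  # stand-in image
L, R = retinex_decompose(low_light)
enhanced = np.exp(R) * np.clip(L * 4.0, 0, 1)  # naive illumination boost
print(float(enhanced.min()), float(enhanced.max()))
```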
Open Access Review
Flourishing Considerations for AI
by Tyler J. VanderWeele and Jonathan D. Teubner
Information 2026, 17(1), 88; https://doi.org/10.3390/info17010088 - 14 Jan 2026
Abstract
Artificial intelligence (AI) is transforming countless aspects of society, including possibly even who we are as persons. AI technologies may affect our flourishing for good or for ill. In this paper, we put forward principled considerations concerning flourishing and AI that are oriented towards ensuring AI technologies are conducive to human flourishing, rather than impeding it. The considerations are intended to help guide discussions around the development of, and engagement with, AI technologies so as to orient them towards the promotion of individual and societal flourishing. Five sets of considerations around flourishing and AI are discussed concerning: (i) the output provided by large language models; (ii) the specific AI product design; (iii) our engagement with those products; (iv) the effects this is having on human knowledge; and (v) the effects this is having on the self-realization of the human person. While not exhaustive, it is argued that each of these sets of considerations must be taken seriously if these technologies are to help promote, rather than impede, flourishing. We suggest that we should ultimately frame all of our thinking on AI technologies around flourishing.
Full article
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)
Open Access Article
The Art Nouveau Path: From Gameplay Logs to Learning Analytics in a Mobile Augmented Reality Game for Sustainability Education
by João Ferreira-Santos and Lúcia Pombo
Information 2026, 17(1), 87; https://doi.org/10.3390/info17010087 - 14 Jan 2026
Abstract
Mobile augmented reality games (MARGs) generate rich digital traces of how students engage with complex, place-based learning tasks. This study analyses gameplay logs from the Art Nouveau Path, a location-based MARG within the EduCITY Digital Teaching and Learning Ecosystem (DTLE), to develop a learning analytics workflow that uses detailed gameplay logs to inform sustainability-focused educational design. During the post-game segment of a repeated cross-sectional intervention, 439 students in 118 collaborative groups completed 36 quiz tasks at 8 Art Nouveau heritage Points of Interest (POI). Group-level logs (4248 group-item responses) capturing correctness, AR-specific scores, session duration and pacing were transformed into interpretable indicators, combined with error mapping and cluster analysis, and triangulated with post-game open-ended reflections. Results show high overall feasibility (mean accuracy 85.33%) and a small subset of six conceptually demanding items with lower accuracy (mean 68.36%, range 58.47% to 72.88%) concentrated in specific path segments and media types. Cluster analysis yields three collaborative gameplay profiles, labeled ‘fast but fragile’, ‘slow but moderate’ and ‘thorough and successful’, which differ systematically in accuracy, pacing and engagement with AR-mediated tasks. The study proposes a replicable event-based workflow that links mobile AR gameplay logs to design decisions for heritage-based education for sustainability.
Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
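The profile-extraction step can be illustrated with standard tooling: group-level indicators are standardized and clustered into three profiles with k-means. The feature names and synthetic values below are assumptions; the study derived its indicators from EduCITY gameplay logs.

```python
# K-means clustering of group-level gameplay indicators into three profiles.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Columns: accuracy (%), mean seconds per task, AR-task score (hypothetical).
groups = np.column_stack([
    rng.uniform(55, 100, 118),
    rng.uniform(20, 120, 118),
    rng.uniform(0, 10, 118),
])

X = StandardScaler().fit_transform(groups)
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
for k in range(3):
    print(f"profile {k}: n={np.sum(labels == k)}, "
          f"mean accuracy={groups[labels == k, 0].mean():.1f}%")
```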
Open Access Article
Optimization and Application of Generative AI Algorithm Based on Transformer Architecture in Adaptive Learning
by Xuan Liu and Zhi Li
Information 2026, 17(1), 86; https://doi.org/10.3390/info17010086 - 13 Jan 2026
Abstract
At present, generative AI suffers from insufficient content generation accuracy, weak personalized response, and low reasoning efficiency in adaptive learning scenarios, which limits its in-depth application in intelligent teaching. To solve this problem, this paper proposed a Transformer fine-tuning method based on low-rank adaptation technology, which realized efficient parameter updates of pre-trained models through low-rank matrix insertion and combined an instruction fine-tuning strategy to perform domain adaptation training on a constructed educational scenario dataset. At the same time, a dynamic prompt construction mechanism was introduced to enhance the model’s ability to perceive the context of individual learners’ behaviors, thereby achieving precise alignment and personalized control of generated content. This paper embeds the “wrong question guidance” and “knowledge graph embedding” mechanisms in the model, provides intelligent feedback based on student errors, and promotes in-depth understanding of subject knowledge through knowledge graphs. Experimental results showed that this method scored higher than 0.9 in BLEU and ROUGE-L, with a low average response delay, significantly better than the traditional fine-tuning method. The method showed good adaptability and practicality in the fusion of generative AI and adaptive learning and provided a generalizable optimization path and application solution for intelligent education systems.
Full article
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)
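The low-rank matrix insertion the abstract describes is the LoRA pattern: a frozen pretrained weight plus a trainable low-rank update. A minimal PyTorch sketch follows; the rank, scaling factor, and layer sizes are illustrative defaults, not the paper's settings.

```python
# Minimal LoRA layer: frozen base weight + trainable low-rank update B @ A.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts at 0
        self.scale = alpha / rank

    def forward(self, x):
        # Base projection plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # far fewer trainable params than 768*768
```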
Open Access Review
The Pervasiveness of Digital Identity: Surveying Themes, Trends, and Ontological Foundations
by Matthew Comb and Andrew Martin
Information 2026, 17(1), 85; https://doi.org/10.3390/info17010085 - 13 Jan 2026
Abstract
Digital identity operates as the connective infrastructure of the digital age, linking individuals, organisations, and devices into networks through which services, rights, and responsibilities are transacted. Despite this centrality, the field remains fragmented, with technical solutions, disciplinary perspectives, and regulatory approaches often developing in parallel without interoperability. This paper presents a systematic survey of digital identity research, drawing on a Scopus-indexed baseline corpus of 2551 publications spanning full years 2005–2024, complemented by a recent stratum of 1241 publications (2023–2025) used to surface contemporary thematic structure and inform the ontology-oriented synthesis. The survey contributes in three ways. First, it provides an integrated overview of the digital identity landscape, tracing influential and widely cited works, historical developments, and recent scholarship across technical, legal, organisational, and cultural domains. Second, it applies natural language processing and subject metadata to identify thematic patterns, disciplinary emphases, and influential authors, exposing trends and cross-field connections difficult to capture through manual review. Third, it consolidates recurring concepts and relationships into ontological fragments (illustrative concept maps and subgraphs) that surface candidate entities, processes, and contexts as signals for future formalisation and alignment of fragmented approaches. By clarifying how digital identity has been conceptualised and where gaps remain, the study provides a foundation for progress toward a universal digital identity that is coherent, interoperable, and socially inclusive.
Full article
(This article belongs to the Section Information and Communications Technology)
Open Access Article
A Model for a Serialized Set-Oriented NoSQL Database Management System
by Alexandru-George Șerban and Alexandru Boicea
Information 2026, 17(1), 84; https://doi.org/10.3390/info17010084 - 13 Jan 2026
Abstract
Recent advancements in data management highlight the increasing focus on large-scale integration and analytics, with the management of duplicate information becoming a more resource-intensive and costly task. Existing SQL and NoSQL systems inadequately address the semantic constraints of set-based data, either by compromising relational fidelity or through inefficient deduplication mechanisms. This paper presents a set-oriented centralized NoSQL database management system (DBMS) that enforces uniqueness by construction, thereby reducing downstream deduplication and enhancing result determinism. The system utilizes in-memory execution with binary serialized persistence, achieving O(1) average time complexity for exact-match CRUD operations while maintaining ACID-compliant transactional semantics through explicit commit operations. A comparative performance evaluation against Redis and MongoDB highlights the trade-offs between consistency guarantees and latency. The results reveal that enforced set uniqueness completely eliminates duplicates, incurring only moderate latency trade-offs compared to in-memory performance measures. The model can be extended for fuzzy queries and imprecise data by retrieving the membership function information. This work demonstrates that the set-oriented DBMS design represents a distinct architectural paradigm that addresses data integrity constraints inadequately handled by contemporary database systems.
Full article
(This article belongs to the Section Information Systems)
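A toy model of the core design, uniqueness by construction, constant-time exact-match operations via hashing, and binary serialized persistence with an explicit commit, is sketched below. It is a deliberately simplified illustration in plain Python, not the system described in the paper.

```python
# Toy set-oriented store: set semantics reject duplicates by construction,
# and durability happens only on an explicit commit().
import pickle

class SetStore:
    def __init__(self, path="store.bin"):
        self.path = path
        try:
            with open(path, "rb") as f:
                self.items = pickle.load(f)  # binary serialized persistence
        except FileNotFoundError:
            self.items = set()

    def insert(self, record: tuple) -> bool:
        """O(1) average insert; duplicates are rejected by construction."""
        if record in self.items:
            return False
        self.items.add(record)
        return True

    def contains(self, record: tuple) -> bool:
        """O(1) average exact-match read."""
        return record in self.items

    def delete(self, record: tuple) -> bool:
        try:
            self.items.remove(record)
            return True
        except KeyError:
            return False

    def commit(self):
        """Flush the in-memory set to disk in one serialized write."""
        with open(self.path, "wb") as f:
            pickle.dump(self.items, f)

db = SetStore()
print(db.insert(("user", 1)), db.insert(("user", 1)))  # True False
db.commit()
```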
Open Access Article
Fund Similarity: A Use of Bipartite Graphs
by Ren-Raw Chen, Liangbingyan Luo, Yihui Wang and Xiaohu Zhang
Information 2026, 17(1), 83; https://doi.org/10.3390/info17010083 - 13 Jan 2026
Abstract
Fund similarity is important for investors when constructing diversified portfolios. Because mutual funds do not always adhere closely to their stated investment policies, investors may unintentionally hold funds with overlapping exposures, leading to reduced diversification and instead causing “diworsification”, which is an investment term for when too much complexity leads to worse results. As a result, various quantitative methods have been proposed in the literature to investigate fund similarity, primarily using portfolio holdings. Recently, machine learning tools such as clustering and graph theory have been introduced to capture fund similarity. This paper builds on this literature by applying bipartite graphs and node2vec embeddings to a comprehensive dataset that covers 3874 funds over a nearly 6-year period. Our empirical evidence suggests that bipartiteness is not preserved for non-index (active) funds. Furthermore, while graph embeddings yield higher similarity scores than holding-based measures, they do not necessarily outperform holding-based similarity in explaining returns. These findings suggest that graph-based embeddings capture structural relationships among funds that are distinct from direct portfolio overlap but are not sufficient substitutes when similarity is evaluated solely through returns. As a result, we recommend a more comprehensive similarity measure that includes important risk metrics such as volatility risk, liquidity risk, and systemic risk.
Full article
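The bipartite construction is easy to sketch: funds on one side, holdings on the other, with similarity read off a weighted one-mode projection. The tickers and weights below are made up, and the paper goes further by embedding the graph with node2vec.

```python
# Fund-holding bipartite graph with a weighted one-mode projection.
import networkx as nx

holdings = {  # fund -> {stock: portfolio weight}, hypothetical
    "FUND_A": {"AAPL": 0.5, "MSFT": 0.3, "XOM": 0.2},
    "FUND_B": {"AAPL": 0.4, "MSFT": 0.4, "JNJ": 0.2},
    "FUND_C": {"XOM": 0.7, "JNJ": 0.3},
}

B = nx.Graph()
B.add_nodes_from(holdings, bipartite=0)
for fund, positions in holdings.items():
    for stock, w in positions.items():
        B.add_node(stock, bipartite=1)
        B.add_edge(fund, stock, weight=w)

# One-mode projection: funds are linked when they share holdings,
# with edge weight equal to the number of shared holdings.
proj = nx.bipartite.weighted_projected_graph(B, list(holdings))
for u, v, d in proj.edges(data=True):
    print(u, v, "shared holdings:", d["weight"])
```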
Open Access Article
Probabilistic Modeling and Pattern Discovery-Based Sindhi Information Retrieval System
by Dil Nawaz Hakro, Abdullah Abbasi, Anjum Zameer Bhat, Saleem Raza, Muhammad Babar and Osama Al Rahbi
Information 2026, 17(1), 82; https://doi.org/10.3390/info17010082 - 13 Jan 2026
Abstract
Natural language processing is the technology used to interact with computers in human languages. An overlapping technology is Information Retrieval (IR), in which a user searches for required documents among a number of documents that are already stored. The required documents are retrieved according to their relevance to the user’s query, and the results are presented in descending order. Many languages have their own IR systems, whereas a dedicated IR system for Sindhi still needs attention. Various approaches to effective information retrieval have been proposed. As Sindhi is an old language with a rich history and literature, it needs an IR system. For the development of Sindhi IR, a document database is required so that documents can be retrieved accordingly. Many Sindhi documents were identified and collected from various sources, such as books, journals, magazines, and newspapers, and assessed for their potential use in indexing and other forms of processing. Probabilistic modeling and pattern discovery were used to find patterns and to achieve effective retrieval and relevance. The results of the Sindhi Information Retrieval system are promising, showing more than 90% relevance. The elapsed time ranged from 0.2 to 4.8 s for a single word and reached 4.6 s for a Sindhi sentence, with the same starting time of 0.2 s. The IR system for Sindhi can be fine-tuned and utilized for other languages with similar characteristics that adopt the Arabic script.
Full article
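As a concrete instance of probabilistic relevance ranking of the kind such a system applies, the sketch below scores documents against a query with BM25. The two toy "documents" are placeholders; a real Sindhi IR system would tokenize Arabic-script text over a full collection.

```python
# Minimal BM25 relevance scoring over a toy two-document collection.
import math
from collections import Counter

docs = [["information", "retrieval", "sindhi", "language"],
        ["sindhi", "literature", "history", "sindhi"]]
avgdl = sum(len(d) for d in docs) / len(docs)
df = Counter(t for d in docs for t in set(d))  # document frequencies

def bm25(query, doc, k1=1.5, b=0.75):
    score, tf = 0.0, Counter(doc)
    for term in query:
        if term not in tf:
            continue
        idf = math.log(1 + (len(docs) - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf[term] * (k1 + 1) / (
            tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

query = ["sindhi", "retrieval"]
ranked = sorted(docs, key=lambda d: bm25(query, d), reverse=True)
print([round(bm25(query, d), 3) for d in docs], ranked[0])
```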
Open Access Article
Bayesian Neural Networks with Regularization for Sparse Zero-Inflated Data Modeling
by Sunghae Jun
Information 2026, 17(1), 81; https://doi.org/10.3390/info17010081 - 13 Jan 2026
Abstract
Zero inflation is pervasive across text mining, event log, and sensor analytics, and it often degrades the predictive performance of analytical models. Classical approaches, most notably the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models, address excess zeros but rely on rigid parametric assumptions and fixed model structures, which can limit flexibility in high-dimensional, sparse settings. We propose a Bayesian neural network (BNN) with regularization for sparse zero-inflated data modeling. The method separately parameterizes the zero inflation probability and the count intensity under ZIP/ZINB likelihoods, while employing Bayesian regularization to induce sparsity and control overfitting. Posterior inference is performed using variational inference. We evaluate the approach through controlled simulations with varying zero ratios and a real-world dataset, and we compare it against Poisson generalized linear models, ZIP, and ZINB baselines. The present study focuses on predictive performance measured by mean squared error (MSE). Across all settings, the proposed method achieves consistently lower prediction error and improved handling of uncertainty, with ablation studies confirming the contribution of the regularization components. These results demonstrate that a regularized BNN provides a flexible and robust framework for sparse zero-inflated data analysis in information-rich environments.
Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
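The zero-inflated Poisson likelihood that such a model parameterizes, with one head for the zero-inflation probability pi and one for the count intensity lambda, can be written directly. The numpy sketch below plugs in assumed toy values; the paper infers these quantities with variational Bayes rather than fixing them as constants.

```python
# Zero-inflated Poisson mixture log-likelihood.
import numpy as np
from scipy.special import gammaln

def zip_loglik(y, pi, lam):
    """log p(y | pi, lam) under the zero-inflated Poisson mixture."""
    y, pi, lam = map(np.asarray, (y, pi, lam))
    log_pois = y * np.log(lam) - lam - gammaln(y + 1)
    ll_zero = np.log(pi + (1 - pi) * np.exp(-lam))  # structural or Poisson zero
    ll_pos = np.log(1 - pi) + log_pois              # strictly positive counts
    return np.where(y == 0, ll_zero, ll_pos)

y = np.array([0, 0, 0, 2, 5])  # sparse, zero-inflated counts
print(zip_loglik(y, pi=0.6, lam=2.0).sum())
```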
Open Access Article
Efficient Detection of XSS and DDoS Attacks with Bent Functions
by Shahram Miri Kelaniki and Nikos Komninos
Information 2026, 17(1), 80; https://doi.org/10.3390/info17010080 - 13 Jan 2026
Abstract
In this paper, we investigate the use of Bent functions, particularly the Maiorana–McFarland (M–M) construction, as a nonlinear preprocessing method to enhance machine learning-based detection systems for Distributed Denial of Service (DDoS) and Cross-Site Scripting (XSS) attacks. Experimental results demonstrated consistent improvements in classification performance following the M–M Bent transformation. In labeled DDoS data, classification performance was maintained at 100% accuracy, with improved Kappa statistics and lower misclassification rates. In labeled XSS data, classification accuracy decreased from 100% to 87.19%, reducing overfitting; the transformed classifier further mitigated overfitting by increasing feature diversity. In DDoS and XSS unlabeled data, accuracy improved from 99.85% to 99.92% in unsupervised learning cases for DDoS, and from 98.94% to 100% for XSS, with improved cluster separation also being noted. In summary, the results suggest that Bent functions significantly improve DDoS and XSS detection by enhancing the separation of benign and malicious traffic. All of these aspects, along with increased dataset quality, increase our confidence in resilient detection within a cyber detection pipeline.
Full article
(This article belongs to the Special Issue Intrusion Detection Systems in IoT Networks)
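The Maiorana–McFarland construction the abstract names evaluates f(x, y) = ⟨x, pi(y)⟩ XOR g(y) on 2m input bits. The sketch below applies it to a binarized feature vector; the random permutation pi, the choice g = 0, and m = 4 are illustrative, not the paper's preprocessing parameters.

```python
# Maiorana-McFarland bent function as a nonlinear binary feature map.
import numpy as np

m = 4
rng = np.random.default_rng(7)
pi = rng.permutation(2 ** m)          # any permutation of F_2^m works
g = np.zeros(2 ** m, dtype=np.uint8)  # simplest choice: g == 0

def mm_bent(bits):
    """Evaluate f on a length-2m binary vector (x = first m bits, y = rest)."""
    x = int("".join(map(str, bits[:m])), 2)
    y = int("".join(map(str, bits[m:])), 2)
    inner = bin(x & int(pi[y])).count("1") % 2  # inner product <x, pi(y)> over F_2
    return inner ^ int(g[y])

sample = rng.integers(0, 2, size=2 * m)  # a binarized traffic-feature vector
print(sample, "->", mm_bent(sample))
```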
Topics
Topic in AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Topic in Applied Sciences, Information, Systems, Technologies, Electronics, AI
Challenges and Opportunities of Integrating Service Science with Data Science and Artificial Intelligence
Topic Editors: Dickson K. W. Chiu, Stuart So
Deadline: 30 April 2026
Topic in Electronics, Future Internet, Technologies, Telecom, Network, Microwave, Information, Signals
Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications
Topic Editors: Han Wang, Fangqing Wen, Xianpeng Wang
Deadline: 31 May 2026
Special Issues
Special Issue in Information
Machine Learning for the Blockchain
Guest Editors: Georgios Alexandridis, Thanasis Papaioannou, Georgios Siolas, Paraskevi Tzouveli
Deadline: 31 January 2026
Special Issue in Information
Emerging Applications of Machine Learning in Healthcare, Industry, and Beyond
Guest Editors: Francesco Isgrò, Huiyu Zhou, Daniele Ravi
Deadline: 31 January 2026
Special Issue in Information
Selected Papers of the 10th North American International Conference on Industrial Engineering and Operations Management
Guest Editors: Luis Rabelo, Shahram Taj
Deadline: 31 January 2026
Special Issue in Information
Transformative Technologies in Healthcare: Harnessing Machine Learning, Deep Learning and Large Language Models in Health Informatics
Guest Editors: Balu Bhasuran, Kalpana Raja
Deadline: 31 January 2026
Topical Collections
Topical Collection in Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero