Journal Description
Information
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed in Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed, with a first decision provided to authors approximately 20.9 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Impact Factor: 2.9 (2024); 5-Year Impact Factor: 3.0 (2024)
Latest Articles
A Modular Approach to Automated News Generation Using Large Language Models
Information 2026, 17(4), 319; https://doi.org/10.3390/info17040319 (registering DOI) - 25 Mar 2026
Abstract
Advances in Generative Artificial Intelligence have enabled the development of models capable of generating text, images, and audio that are similar to what humans can create. These models often have valuable general knowledge thanks to their training on large datasets. Through fine-tuning or prompt-based adaptation, this knowledge can be applied to specific tasks. In this work, we propose a modular approach to automated news generation using Large Language Models, composed of an information retrieval module and a text generation module. The proposed system leverages both publicly available (open-weight) and proprietary Large Language Models, enabling a comparative evaluation of their behavior within the proposed news generation pipeline. We describe the experiments carried out with a total of five representative Large Language Models spanning both categories, detailing their configurations and performance. The results demonstrate the feasibility of using Large Language Models to automate this task and identify systematic differences in behavior across model categories, as well as the problems that remain to be solved to enable fully autonomous news generation.
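The two-module shape this abstract describes (retrieval feeding generation) can be sketched as follows; the interfaces, the naive keyword retriever, and the stubbed LLM callable are illustrative assumptions, not the paper's actual components:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    title: str
    body: str

def retrieve(query: str, corpus: List[Document], k: int = 3) -> List[Document]:
    """Information-retrieval module: naive keyword overlap stands in for a real retriever."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.body.lower().split())))
    return ranked[:k]

def generate(query: str, evidence: List[Document], llm: Callable[[str], str]) -> str:
    """Text-generation module: builds a grounded prompt and delegates to any
    LLM callable, open-weight or proprietary alike."""
    context = "\n".join(f"- {d.title}: {d.body}" for d in evidence)
    return llm(f"Write a news item about '{query}' using only:\n{context}")

corpus = [Document("A", "city council approves new transit budget"),
          Document("B", "local team wins championship final")]
# Stub LLM that just echoes the first evidence line, so the sketch runs offline.
draft = generate("transit budget", retrieve("transit budget", corpus),
                 llm=lambda prompt: prompt.splitlines()[1])
print(draft)  # "- A: city council approves new transit budget"
```

Swapping the `llm` callable is the point of the modular design: the same pipeline can be run against different models for comparison.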
(This article belongs to the Special Issue Artificial Intelligence, Generative AI and Large Language Models: Transforming Technology and Society)
Open Access Article
The Calligraphic Spectrum: Quantifying the Quality of Arabic Children’s Handwritten Character Generation Using CWGAN-GP and Multimeric Evaluation
by Shafia Alshahrani and Hajar Alharbi
Information 2026, 17(4), 318; https://doi.org/10.3390/info17040318 (registering DOI) - 25 Mar 2026
Abstract
Due to high intraclass variability and subtle intercharacter differences, automatic Arabic handwriting recognition remains a challenging task, particularly for children’s handwriting. This study proposes a hybrid framework that combines class-conditional Wasserstein generative adversarial networks with gradient penalty (CWGAN-GP) for data augmentation and a convolutional neural network (CNN) enhanced with squeeze-and-excitation (SE) blocks for improved feature discrimination. Experiments were restricted to disconnected (isolated) characters from the Hijja dataset, which comprised 12,355 samples divided as follows: 80% for training (9884), 10% for validation (1236), and 10% for testing (1235). Training the CNN on real data alone yielded an accuracy of 93.47%, while incorporating CWGAN-GP-generated samples improved performance to 96.27%. Notably, the proposed SE-CNN trained with the CWGAN-GP-augmented data achieved the highest accuracy of 99.27%. This result demonstrates that the combination of advanced generative data augmentation and architectural refinement significantly enhances Arabic handwritten character recognition performance.
Open Access Article
Multimodal Fake News Detection via Evidence Retrieval and Visual Forensics with Large Vision-Language Models
by Liwei Dong, Yanli Chen, Wei Ke, Hanzhou Wu, Lunzhi Deng and Guixiang Liao
Information 2026, 17(4), 317; https://doi.org/10.3390/info17040317 (registering DOI) - 25 Mar 2026
Abstract
Fake news has caused significant harm and disruption across various sectors of society. With the rapid advancement of the Internet and social media platforms, both academic and industrial communities have shown growing interest in multimodal fake news detection. In this work, we propose MERF (Multimodal Evidence Retrieval and Forensics with LVLM), a unified framework for multimodal fake news detection that leverages the reasoning capabilities of Large Vision-Language Models (LVLMs). While LVLMs outperform traditional Large Language Models (LLMs) in processing multimodal content, our study reveals that their reasoning abilities remain limited in the absence of sufficient supporting evidence. MERF addresses this challenge by integrating web-based content retrieval, reverse image search, and image manipulation detection into a coherent pipeline, enabling the model to generate informed and explainable veracity judgments. Specifically, our approach performs cross-modal consistency checking, retrieves corroborative information for both textual and visual content, and applies forensic analysis to detect potential visual forgeries. The aggregated evidence is then fed into the LVLM, facilitating comprehensive reasoning and evidence-based decision-making. Experimental results on two public benchmark datasets—Weibo and Twitter—demonstrate that MERF consistently outperforms state-of-the-art baselines across all major evaluation metrics, achieving substantial improvements in accuracy, robustness, and interpretability.
(This article belongs to the Section Artificial Intelligence)
Open Access Article
Fine-Grained Vision-Language Method with Prompt Tuning for Blind Image Quality Assessment
by Kai Tan, Wang Luo, Yaqing Chen, Xin He, Yumei Zhang, Mengqiang Li and Haoyu Wang
Information 2026, 17(4), 316; https://doi.org/10.3390/info17040316 - 24 Mar 2026
Abstract
Blind image quality assessment (BIQA) without reference images remains significantly challenging because perceptual quality is largely determined by subtle, spatially localized distortions. However, existing Contrastive Language–Image Pre-training (CLIP)-based methods exhibit limited sensitivity to fine-grained degradations such as local blur, noise, compression artifacts, and exposure inconsistencies, since they are optimized for global semantic alignment. To overcome these limitations, we propose a fine-grained vision–language framework that enhances distortion-aware representation by considering both fine-grained visual and detailed textual domains. Specifically, our method employs a fine-grained CLIP architecture in conjunction with explicit textual descriptions to enable the effective identification of subtle regional degradations. Furthermore, a parameter-efficient prompt-tuning strategy is utilized to facilitate the learning of task-adaptive prompt representations tailored to image quality assessment (IQA). Extensive experiments on three widely used in-the-wild IQA benchmarks show that the proposed method achieves strong consistency with human subjective judgments: our training-free FGCLIP-IQA reaches a maximum SROCC of 0.732 on KonIQ-10k, outperforming the vanilla CLIP-IQA baseline, while the prompt-tuned FGCLIP-IQA+ further achieves a maximum SROCC of 0.909 on KonIQ-10k with only a small number of learnable parameters and exhibits robust cross-dataset generalization capabilities. These results demonstrate that fine-grained vision–language alignment shows great potential for future development and provides an efficient and accurate solution for the BIQA task.
(This article belongs to the Section Information Processes)
Open Access Article
WCGAN-GA-RF: Healthcare Fraud Detection via Generative Adversarial Networks and Evolutionary Feature Selection
by Junze Cai, Shuhui Wu, Yawen Zhang, Jiale Shao and Yuanhong Tao
Information 2026, 17(4), 315; https://doi.org/10.3390/info17040315 - 24 Mar 2026
Abstract
Healthcare fraud poses significant risks to insurance systems, undermining both financial sustainability and equitable access to care. Accurate detection of fraudulent claims is therefore critical to ensuring the integrity of healthcare insurance operations. However, the increasing sophistication of fraud techniques and limited data availability have undermined the performance of traditional detection approaches. To address these challenges, this paper proposes WCGAN-GA-RF, an integrated fraud detection framework that synergistically combines Wasserstein Conditional Generative Adversarial Network with gradient penalty (WCGAN-GP) for synthetic data generation, genetic algorithm-based feature selection (GA-RF) for dimensionality reduction, and Random Forest (RF) for classification. The proposed framework was empirically validated on a real-world dataset of 16,000 healthcare insurance claims from a Chinese healthcare technology firm, characterized by a 16:1 class imbalance ratio ( fraudulent samples) and 118 original features. Using a stratified train–test split with results averaged over five independent runs, the WCGAN-GA-RF framework achieved a precision of , a recall of , and an F1-score of . Notably, the GA-RF component achieved a feature reduction (from 80 to 28 features) while maintaining competitive detection accuracy. Comparative experiments demonstrate that the proposed approach outperforms conventional oversampling methods, including Random Oversampling (ROS), Synthetic Minority Oversampling Technique (SMOTE), and Adaptive Synthetic Sampling (ADASYN), particularly in handling high-dimensional, severely imbalanced healthcare fraud data.
(This article belongs to the Special Issue Advancements in Healthcare Data Science: Innovations, Challenges and Applications)
Open Access Article
Modeling the Spreading of Fake News Through the Interactions Between Human Heuristics and Recommender Systems
by Franco Bagnoli, Tijan Juraj Cvetković, Andrea Guazzini, Pietro Lió and Riccardo Romei
Information 2026, 17(4), 314; https://doi.org/10.3390/info17040314 - 24 Mar 2026
Abstract
In many cases, the pieces of information at our disposal (messages) come from a recommender source, which can be an official news system, a large language model, or simply a social network. Often, these messages are also built to promote their active spreading, which in turn has a positive effect on the sender's popularity. However, the content of a message can be false, giving rise to a phenomenon analogous to the spreading of a disease. In principle, there is always the possibility of checking the correctness of a message by "investing" some time, so this checking has a cost. We develop a simple model based on the mechanism of "risk perception" (the propensity to check the falseness of a message) and mutual trustability (affinity), based on the average number of fake messages received and checked. Conversely, the probability of emitting a fake message is inversely proportional to one's risk perception, and the affinity among agents is also exploited by the recommender system. We investigate this process with the goal of deriving methods for identifying the penetration level of fake news from the behavioral patterns of users. This model integrates cognitive psychology with computational agent-based modeling.
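The emission/checking mechanism sketched in this abstract can be rendered as a toy agent-based simulation; every update rule, rate, and parameter below is an illustrative guess, not the paper's actual model:

```python
import random

random.seed(0)

class Agent:
    """Toy agent: risk perception rises when a paid check catches a fake."""
    def __init__(self):
        self.risk = 0.1                      # propensity to check messages

    def emit_fake(self):
        # Emission probability inversely related to own risk perception
        # (illustrative functional form).
        return random.random() < 0.5 / (1.0 + 10.0 * self.risk)

    def receive(self, is_fake):
        checked = random.random() < self.risk       # checking has a cost
        if checked and is_fake:
            self.risk = min(1.0, self.risk + 0.05)  # caught a fake: warier

agents = [Agent() for _ in range(50)]
for _ in range(2000):
    sender, receiver = random.sample(agents, 2)
    receiver.receive(sender.emit_fake())

mean_risk = sum(a.risk for a in agents) / len(agents)
print(round(mean_risk, 3))  # rises above the initial 0.1 as fakes get caught
```

Tracking how `mean_risk` evolves against the rate of fake emissions is one way such a model links behavioral patterns to the penetration level of fake news.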
(This article belongs to the Special Issue 2nd Edition of Modern Recommender Systems: Approaches, Challenges and Applications)
Open Access Article
Experimental Validation of an SDR-Based Direction of Arrival Estimation Testbed
by Nikita Sheremet and Grigoriy Fokin
Information 2026, 17(4), 313; https://doi.org/10.3390/info17040313 - 24 Mar 2026
Abstract
Advanced mobile communication standards of the fifth and subsequent generations widely use beamforming technology. While many publications on this topic rely on simulation tools, some work has been dedicated to experimental testing using software-defined radio (SDR) platforms. These platforms are often expensive and require significant expertise to configure. This paper proposes a novel cost-effective method for combining a pair of dual-channel Universal Software Radio Peripheral (USRP) B210 boards into a four-element antenna array direction of arrival estimation testbed using Metronom synchronization devices. The hardware and software implementation is detailed, including the antenna layout and the software modules, based on the USRP Hardware Driver, that provide the frequency and time synchronization necessary for amplitude-phase processing. Experimental validation of the testbed using the MUltiple SIgnal Classification (MUSIC) algorithm demonstrates high stability of angle of arrival estimates, with a standard deviation not exceeding 0.4°. The algorithm achieved a resolution of 16.1° for two sources, which surpasses the half-power beamwidth of 25.6°. The theoretical significance of this work lies in the scientific validation of combining SDR devices with the precise synchronization required for beamforming. Its practical value is in enabling the experimental testing of beamforming without the need for costly multichannel SDR hardware.
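As a concrete reference for the MUSIC step, here is a minimal NumPy sketch on a simulated four-element uniform linear array; the geometry, signal model, and parameters are illustrative and do not reproduce the testbed:

```python
import numpy as np

def steering(theta_deg, n_elems=4, spacing=0.5):
    """ULA response; element spacing in wavelengths (half-wavelength here)."""
    k = np.arange(n_elems)
    return np.exp(-2j * np.pi * spacing * k * np.sin(np.deg2rad(theta_deg)))

def music(snapshots, n_sources, scan=np.arange(-90.0, 90.0, 0.5)):
    """MUSIC pseudospectrum from complex array snapshots (elements x time)."""
    n_elems = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, vecs = np.linalg.eigh(R)                 # eigenvalues in ascending order
    noise = vecs[:, : n_elems - n_sources]      # noise subspace
    spec = np.array([
        1.0 / np.real(steering(t, n_elems).conj() @ noise @ noise.conj().T
                      @ steering(t, n_elems))
        for t in scan
    ])
    return scan, spec

# Two uncorrelated sources at -20 and +25 degrees, mild additive noise.
rng = np.random.default_rng(0)
true_angles = (-20.0, 25.0)
s = rng.standard_normal((2, 500)) + 1j * rng.standard_normal((2, 500))
A = np.column_stack([steering(t) for t in true_angles])
x = A @ s + 0.1 * (rng.standard_normal((4, 500)) + 1j * rng.standard_normal((4, 500)))

scan, spec = music(x, n_sources=2)
peaks = [i for i in range(1, len(spec) - 1) if spec[i - 1] < spec[i] > spec[i + 1]]
est = sorted(scan[i] for i in sorted(peaks, key=lambda i: spec[i])[-2:])
print(est)  # two estimates close to -20 and +25 degrees
```

On real hardware the same math applies, but only after the per-channel frequency and time synchronization the paper describes; without it, the phase relationships that the sample covariance relies on are lost.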
(This article belongs to the Section Wireless Technologies)
Open Access Article
Cultural Knowledge Presentation of Salah Lanna Within the Context of Buddhist Art: Expressed Through Stone Buddha Statues via Virtual Reality
by Phichete Julrode and Piyapat Jarusawat
Information 2026, 17(4), 312; https://doi.org/10.3390/info17040312 - 24 Mar 2026
Abstract
The traditional craft of Buddha statue carving represents an important form of cultural heritage in many Asian societies, yet the transmission of this knowledge is increasingly threatened by modernization and the declining number of skilled artisans. This study explores the use of Virtual Reality (VR) as an innovative tool for preserving and teaching the cultural knowledge associated with Salah Lanna stone Buddha carving. A VR-based learning environment was developed to simulate traditional carving techniques, tools, and cultural narratives related to Lanna Buddhist art. The system was designed using Unity 3D and integrated hand-tracking interaction to enable immersive practice of carving procedures. The prototype was evaluated through expert review involving ten specialists in Buddha carving, art education, and VR technology. The evaluation assessed five dimensions: usability, authenticity, cultural relevance, immersion, and perceived learning potential. Experts rated the system as highly effective, with average scores of 4.6 for usability, 4.8 for authenticity, 4.7 for cultural relevance, 4.5 for immersion, and 4.9 for perceived learning potential on a five-point scale. The findings suggest that VR technology can provide a promising platform for preserving traditional craftsmanship and supporting immersive cultural learning. By integrating technical training with cultural narratives, the system demonstrates potential for enhancing access to traditional craft education while contributing to the digital preservation of Salah Lanna cultural heritage.
(This article belongs to the Special Issue Advances in Extended Reality Technologies for User Experience Design)
Open Access Article
Verifying SDG ESG Compliance in Manufacturing Industry Projects by Surveying Sponsors
by Kenneth David Strang and Narasimha Rao Vajjhala
Information 2026, 17(4), 311; https://doi.org/10.3390/info17040311 - 24 Mar 2026
Abstract
This study addresses a critical gap in the operationalization of sustainability frameworks at the project level by developing and validating an empirically grounded measurement instrument for assessing Environmental, Social, and Governance (ESG) compliance in manufacturing industry projects. While the United Nations Sustainable Development Goals (SDGs) articulate sustainability aspirations at the national and global level, and ESG frameworks capture organizational-level sustainability performance, no validated instrument exists for measuring ESG integration at the project level where sustainability commitments are ultimately operationalized. Drawing on the theoretical foundations of sustainable project management, stakeholder theory, and the ESG governance literature, the authors developed a 30-item survey instrument capturing six conceptual dimensions of ESG-aligned project performance. Data were collected from 2231 project sponsors and decision-makers in North American goods manufacturing firms classified under NAICS codes 31–33, which collectively encompass the entire manufacturing sector in North America. Through a sequential analytical approach employing principal component analysis (PCA) for initial item reduction, exploratory factor analysis (EFA) for dimensionality assessment, and structural equation modelling (SEM) for confirmatory validation, a parsimonious two-factor model emerged with excellent fit indices (CFI = 0.99, TLI = 0.98, RMSEA = 0.052, SRMR < 0.035). The first factor captures ESG planning activities undertaken during project initiation and planning phases, while the second factor represents ESG monitoring and controlling functions during project execution. The reduction from six theoretical dimensions to two empirical factors reflects lifecycle governance theory, where planning-phase governance and execution-phase control emerge as functionally distinct but correlated constructs. 
The validated instrument offers practical utility for project managers, organizational sustainability officers, and policy-makers seeking standardized benchmarks for ESG compliance at the operational project level. The validated instrument and complete survey are shared for replication and testing across different industries and countries.
(This article belongs to the Special Issue Artificial Intelligence in Sustainable Supply Chains: Innovations, Applications, and Future Directions)
Open Access Article
Intelligent Analysis of Data Flows for Real-Time Classification of Traffic Incidents
by Gary Reyes, Roberto Tolozano-Benites, Cristhina Ortega-Jaramillo, Christian Albia-Bazurto, Laura Lanzarini, Waldo Hasperué, Dayron Rumbaut and Julio Barzola-Monteses
Information 2026, 17(3), 310; https://doi.org/10.3390/info17030310 - 23 Mar 2026
Abstract
Social media platforms have been established as relevant sources of real-time information for urban traffic analysis. This study proposes an intelligent framework for the classification and spatiotemporal analysis of traffic incidents based on semi-synthetic data streams constructed from historical geolocated seeds for controlled validation, utilizing real reports from platforms such as X and Telegram. The approach integrates adaptive machine learning and incremental density-based clustering. An Adaptive Random Forest (ARF) incremental classifier is used to identify the type of incident, allowing for continuous updating of the model in response to changes in traffic flow and concept drift. The classified events are then processed using DenStream, a clustering algorithm that incorporates a temporal decay mechanism designed to identify dynamic spatial patterns and discard older information. The evaluation is performed in a controlled streaming simulation environment that replicates the dynamics of cities such as Panama and Guayaquil. The proposed framework demonstrated robust quantitative performance, achieving a prequential accuracy of up to 86.4% and a weighted F1-score of 0.864 in the Panama scenario, maintaining high stability against semantic noise. The results suggest that this hybrid architecture is a highly viable approach for urban traffic monitoring, providing useful information for Intelligent Transportation Systems (ITS) by processing authentic social signals.
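The temporal decay mechanism mentioned above can be illustrated with a DenStream-style fading weight, where each absorbed report contributes 2^(−λ·age) to its micro-cluster; the decay rate λ and the structure below are illustrative, not the paper's configuration:

```python
# Toy illustration of DenStream-style temporal decay: stale incident
# reports lose influence, so micro-clusters fade without fresh arrivals.
LAM = 0.1  # decay rate lambda (illustrative value)

class MicroCluster:
    def __init__(self):
        self.arrivals = []                 # arrival times of absorbed reports

    def absorb(self, t):
        self.arrivals.append(t)

    def weight(self, now):
        # Each report contributes 2^(-LAM * age); recent reports count more.
        return sum(2.0 ** (-LAM * (now - t)) for t in self.arrivals)

mc = MicroCluster()
for t in (0, 1, 2):                        # three reports of the same incident
    mc.absorb(t)

w_recent = mc.weight(now=3)                # shortly after the last report
w_later = mc.weight(now=30)                # much later, no new reports
print(w_recent > w_later)                  # True: the cluster fades over time
```

In DenStream proper, micro-clusters whose faded weight drops below a threshold are discarded, which is how older traffic incidents leave the spatial pattern.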
(This article belongs to the Section Artificial Intelligence)
Open Access Article
Multi-Scale Spatiotemporal Fusion and Steady-State Memory-Driven Load Forecasting for Integrated Energy Systems
by Yong Liang, Lin Bao, Xiaoyan Sun and Junping Tang
Information 2026, 17(3), 309; https://doi.org/10.3390/info17030309 - 23 Mar 2026
Abstract
Load forecasting for Integrated Energy Systems (IESs) is critical to enabling multi-energy coordinated optimization and low-carbon scheduling. Facing multi-load types and multi-site high-dimensional heterogeneous data, there remains a global learning challenge stemming from insufficient representation of spatiotemporal coupling features. In response to the multi-source heterogeneous characteristics of IES loads, this paper designs a Spatiotemporal Topology Encoder that maps load data into a tensorized multi-energy spatiotemporal topological representation via fuzzy classification and multi-scale ranking. In parallel, we construct a MultiScale Hybrid Convolver to extract multi-scale, multi-level global spatiotemporal features of multi-energy load representations. We further develop a Temporal Segmentation Transformer and a Steady-State Exponentially Gated Memory Unit, and design a jointly optimized forecasting model that enforces global dynamic correlations and local, steady-state preservation. Altogether, we propose a multi-scale spatiotemporal fusion and steady-state memory-driven load forecasting method for integrated energy systems (MSTF-SMDN). Extensive experiments on a public real-world dataset from Arizona State University demonstrate the superiority of the proposed approach: compared to the strongest baseline, MSTF-SMDN reduces cooling load RMSE by 16.09%, heating load RMSE by 12.97%, and electric load RMSE by 6.14%, while achieving R2 values of 0.99435, 0.98701, and 0.96722, respectively, confirming its feasibility, efficiency, and promising potential for multi-energy load forecasting in IES.
(This article belongs to the Special Issue Emerging Applications of Machine Learning in Healthcare, Industry, and Beyond)
Open Access Article
Leakage-Free Evaluation for Employee Attrition Prediction on Tabular Data
by Ana Maria Căvescu and Alina Nirvana Popescu
Information 2026, 17(3), 308; https://doi.org/10.3390/info17030308 - 23 Mar 2026
Abstract
In the context of employee attrition prediction using imbalanced tabular data, we propose a reproducible, leakage-aware evaluation protocol and validate it on the IBM HR Attrition dataset. We perform the train/test split prior to any rebalancing; SMOTE (Synthetic Minority Over-sampling Technique) is applied exclusively within the training portion of each fold in stratified 5-fold cross-validation, while the test set remains untouched. One-Hot Encoding is performed consistently using pd.get_dummies. We benchmark Logistic Regression, Random Forest, ExtraTrees, LightGBM, and XGBoost using imbalance-aware metrics: F1 for the minority class, PR-AUC reported as Average Precision (AP), and ROC-AUC reported both in cross-validation and on the held-out test set. XGBoost attains the best mean AP in cross-validation (0.556 ± 0.056). Logistic Regression achieves the highest mean F1 (0.439 ± 0.048), while LightGBM yields the best mean ROC-AUC (0.791 ± 0.026). On the test set, XGBoost achieves a precision value of 0.65 and a recall value of 0.45 at a fixed threshold of 0.5. Overall, the results highlight a trade-off between stable minority-class detection (Logistic Regression) and stronger risk ranking performance (boosting models) under class imbalance.
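The fold-wise rebalancing protocol described above can be sketched as follows; a synthetic dataset and simple random duplication of minority rows stand in for the IBM HR data and SMOTE, so the numbers are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import StratifiedKFold

# Synthetic imbalanced stand-in for the attrition data (~16% positives).
X, y = make_classification(n_samples=1000, weights=[0.84, 0.16], random_state=0)

rng = np.random.default_rng(0)
scores = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    X_tr, y_tr = X[tr], y[tr]
    # Rebalance INSIDE the training fold only; the test fold stays untouched,
    # which is what makes the evaluation leakage-free.
    minority = np.flatnonzero(y_tr == 1)
    need = int((y_tr == 0).sum() - minority.size)
    extra = rng.choice(minority, size=need, replace=True)
    X_bal = np.vstack([X_tr, X_tr[extra]])
    y_bal = np.concatenate([y_tr, y_tr[extra]])
    clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
    scores.append(f1_score(y[te], clf.predict(X[te])))  # minority-class F1
print(round(float(np.mean(scores)), 3))
```

The common mistake the protocol guards against is oversampling before the split: synthetic minority points then leak information about test rows into training, inflating every metric.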
(This article belongs to the Section Artificial Intelligence)
Open Access Article
The Information Efficiency Metric (IEM): An Info-Metric Approach for Quantifying AI Language Model Performance
by Ljerka Luić, Maja Barbić and Marijana Rončević
Information 2026, 17(3), 307; https://doi.org/10.3390/info17030307 - 22 Mar 2026
Abstract
The interaction between humans and artificial intelligence has become a critical channel for information exchange, yet no quantitative, theoretically grounded framework exists for measuring information efficiency in human–AI communication. This study empirically validated an info-metrics framework operationalizing information efficiency through three dimensions—information density (D), relevance (R), and redundancy (Q)—synthesized into an information efficiency metric (IEM). We analyzed 60 AI responses from ChatGPT 5.2 and Claude Opus 4.5 across factual, analytical, and creative question types using combined coding, automated structural measures, and human evaluation of informational units. The results showed that information density and relevance positively contributed to IEM, while redundancy had a negative contribution. Efficiency varied by task type, with factual prompts showing the highest variability across models and highest efficiency. Contrary to expectations, creative responses did not exhibit higher redundancy, suggesting that expressive diversity does not necessarily constitute informational noise. The framework offers a task-sensitive, theoretically grounded approach to evaluating human–AI information exchange beyond correctness or subjective quality judgment, supporting systems-oriented optimization of conversational AI protocols.
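As a toy illustration of how the three dimensions might be synthesized into a single score, consider the sketch below; the abstract does not give the paper's actual formulas, so every definition and weighting here is a hypothetical stand-in:

```python
def iem(units, tokens, relevant_units, repeated_units):
    """Hypothetical operationalization, NOT the paper's formula:
    D = informational units per token, R = share of relevant units,
    Q = share of repeated units; IEM rewards D and R and penalizes Q."""
    d = units / tokens
    r = relevant_units / units
    q = repeated_units / units
    return d * r * (1.0 - q)

# A denser, more relevant, less redundant response scores higher.
concise = iem(units=12, tokens=80, relevant_units=11, repeated_units=1)
padded = iem(units=12, tokens=120, relevant_units=8, repeated_units=4)
print(concise > padded)  # True
```

The directional behavior matches the study's findings: density and relevance push the score up, redundancy pulls it down.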
(This article belongs to the Special Issue Multimodal Human-Computer Interaction)
Open Access Article
Digitalisation, Digital Governance, and Eco-Innovation: Evidence from Cross-Country Data in 2022
by Keisuke Kokubun
Information 2026, 17(3), 306; https://doi.org/10.3390/info17030306 - 22 Mar 2026
Abstract
This study examines the relationship between digitalisation and eco-innovation across countries, with a particular focus on the role of digital government and digital standardisation. Using cross-country data for 2022, eco-innovation is proxied by environment-related patenting activity, while digitalisation is measured using the United Nations E-Government Development Index (EGDI). Employing a combination of ordinary least squares, population-weighted regressions, spline specifications, and quantile regressions, we document three main findings. First, digitalisation is positively and robustly associated with eco-innovation across countries. Second, the relationship is non-linear, with marginal effects that strengthen at higher levels of digital development, suggesting important complementarities between digital capabilities and national innovation systems. Third, the association between digitalisation and eco-innovation is heterogeneous across the distribution of eco-innovation, with particularly strong associations observed among countries with intermediate levels of innovative activity. Taken together, these findings suggest that digitalisation is systematically associated with eco-innovation across countries and indicate the potential relevance of digital governance and digital standardisation to sustainable technological change.
(This article belongs to the Special Issue Standards Digitisation and Digital Standardisation)
Open Access Article
Biometric Identification Under Different Emotions via EEG: A Deep Learning Approach
by Zhyar Abdalla Jamal and Azhin Tahir Sabir
Information 2026, 17(3), 305; https://doi.org/10.3390/info17030305 - 22 Mar 2026
Abstract
Electroencephalography (EEG) has attracted growing interest as a biometric modality because it reflects ongoing brain activity and is inherently difficult to counterfeit. At the same time, EEG signals are influenced by internal conditions such as emotions, which may affect identification stability, particularly when recordings are obtained using portable consumer-grade systems. This study examines how emotional states influence EEG-based biometric performance and evaluates deep learning architectures to determine an effective modeling approach for cross-emotion robustness. EEG data were collected from 65 participants using a 14-channel Emotiv EPOC X headset, with 54 subjects retained after self-reported emotional validation. Recordings were acquired under neutral, positive, and negative visual stimuli. To address variability associated with portable acquisition, preprocessing made use of the device’s internal signal quality metrics to select reliable segments, compensate for degraded regions, and reduce noise. Among the evaluated models, a Bidirectional Long Short-Term Memory (BiLSTM) network enhanced with a Convolutional Block Attention Module (CBAM) and Multi-Head Self-Attention (MHSA) achieved the highest performance in our experiments. The model was trained on neutral-state data and subsequently evaluated under emotional conditions. It reached 95.91% accuracy in the neutral condition and maintained high performance under positive (94.31%) and negative (92.99%) states. Despite a modest decline under negative stimuli, identification performance remained stable. These findings support the feasibility of robust EEG-based biometric authentication using consumer-grade devices in realistic settings.
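The quality-gated preprocessing described above can be sketched minimally: keep only EEG segments whose per-channel quality scores are high enough. The function name, thresholds, and data layout below are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of quality-based segment selection for a 14-channel
# consumer EEG headset. Quality scores are assumed to lie in [0, 1].

def select_reliable_segments(segments, min_quality=0.8, min_good_channels=12):
    """Keep segments where at least `min_good_channels` of the 14 channels
    report a quality score >= `min_quality`."""
    kept = []
    for seg in segments:
        good = sum(1 for q in seg["channel_quality"] if q >= min_quality)
        if good >= min_good_channels:
            kept.append(seg)
    return kept

segments = [
    {"id": 0, "channel_quality": [0.9] * 14},               # all channels clean
    {"id": 1, "channel_quality": [0.9] * 11 + [0.3] * 3},   # too many bad channels
    {"id": 2, "channel_quality": [0.85] * 13 + [0.5]},      # one bad channel, still kept
]
print([s["id"] for s in select_reliable_segments(segments)])  # [0, 2]
```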
(This article belongs to the Section Biomedical Information and Health)
Open Access Article
Deriving Empirically Grounded NFR Specifications from Practitioner Discourse: A Validated Methodology Applied to Trustworthy APIs in the AI Era
by Apitchaka Singjai
Information 2026, 17(3), 304; https://doi.org/10.3390/info17030304 - 22 Mar 2026
Abstract
Specifying non-functional requirements (NFRs) for rapidly evolving domains such as trustworthy APIs in the AI era is challenging as best practices emerge through practitioner discourse faster than traditional requirements engineering can capture them. We present a systematic methodology for deriving prioritized NFR specifications from multimedia practitioner discourse combining AI-assisted transcript analysis, grounded theory principles, and Theme Coverage Score (TCS) validation. Our five-task approach integrates purposive sampling, automated transcription with speaker diarization, grounded theory coding extracting stakeholder-specific themes with TCS quantification, MoSCoW prioritization using empirically derived thresholds (Must Have ≥85%, Should Have 65–84%, Could Have 45–64%, and Won’t Have <45%), and NFR specification consistent with ISO/IEC 25010:2023 principles of stakeholder perspective, measurable quality criteria, and explicit rationale. Applying this methodology to 22 expert presentations on trustworthy APIs yields a Weighted Coverage Score of 0.71 and 30 prioritized NFR specifications across five trustworthiness dimensions. MoSCoW classification produces 11 Must Have requirements (Robustness and Transparency), 9 Should Have, 6 Could Have, and 4 Won’t Have. The analysis reveals systematic disparities where Fairness contributes zero Must Have or Should Have requirements due to insufficient practitioner consensus. Each NFR emphasizes stakeholder perspective, measurable quality criteria, and explicit rationale, enabling systematic verification. The validated methodology, with a complete replication package, enables empirically grounded, prioritized NFR derivation from practitioner discourse in any rapidly evolving domain.
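The MoSCoW mapping quoted in the abstract reduces to binning a theme's coverage percentage against the four thresholds. A minimal sketch, where only the thresholds come from the abstract and the function name is ours:

```python
# Bin a theme coverage percentage into a MoSCoW category using the
# empirically derived thresholds reported in the abstract:
# Must Have >= 85, Should Have 65-84, Could Have 45-64, Won't Have < 45.

def moscow_category(coverage_pct: float) -> str:
    if coverage_pct >= 85:
        return "Must Have"
    if coverage_pct >= 65:
        return "Should Have"
    if coverage_pct >= 45:
        return "Could Have"
    return "Won't Have"

for pct in (92, 70, 50, 30):
    print(pct, moscow_category(pct))
# 92 Must Have / 70 Should Have / 50 Could Have / 30 Won't Have
```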
(This article belongs to the Special Issue Intelligent Software Engineering: Synergy Between AI and Software Engineering)
Open Access Article
ChatGPT-Assisted Learning Effectiveness and Academic Achievement: A Mechanism-Based Model in Higher Education
by Ahmed Mohamed Hasanein and Bassam Samir Al-Romeedy
Information 2026, 17(3), 303; https://doi.org/10.3390/info17030303 - 21 Mar 2026
Abstract
This study examines the impact of ChatGPT-assisted learning on the academic achievement of hospitality and tourism students in Egyptian public universities, with particular emphasis on the mediating roles of perceived usefulness (PU) and self-regulated learning (SRL). Drawing conceptually on the Technology Acceptance Model (TAM), the study adopts a contextualized framework that emphasizes perceived usefulness while incorporating ChatGPT-assisted learning effectiveness as a learning-oriented driver within generative AI-supported educational environments. A quantitative research design was employed using an online survey administered to students who actively used ChatGPT for academic purposes. A total of 689 valid responses were collected from nine public universities and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM) to test the proposed hypotheses. The findings indicate that ChatGPT-Assisted Learning Effectiveness (CALE) has a statistically significant and positive direct effect on academic achievement (AA; β = 0.386, T = 3.946, p < 0.001, 95% CI = 0.192–0.561) and strongly predicts perceived usefulness (β = 0.673, T = 9.274, p < 0.001, 95% CI = 0.581–0.742) and self-regulated learning (β = 0.707, T = 10.734, p < 0.001, 95% CI = 0.621–0.779). In turn, PU (β = 0.281, T = 3.854, p < 0.001, 95% CI = 0.142–0.417) and SRL (β = 0.220, T = 2.418, p = 0.016, 95% CI = 0.041–0.356) significantly enhance academic achievement. Mediation analyses further confirm that PU (β = 0.189, T = 2.366, p = 0.018, 95% CI = 0.031–0.284) and SRL (β = 0.156, T = 3.699, p < 0.001, 95% CI = 0.102–0.301) partially mediate the relationship between CALE and academic achievement. These findings offer important theoretical insights by contextualizing TAM’s performance-related logic within generative AI-driven learning environments and refining its application to academic outcome settings, while highlighting self-regulated learning as a critical explanatory mechanism.
From a practical perspective, the study provides valuable implications for educators and policymakers by emphasizing the need to promote students’ perceived usefulness of ChatGPT and foster learner autonomy, positioning generative AI as a powerful pedagogical support tool for enhancing academic success in hospitality and tourism education.
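The reported mediation estimates can be sanity-checked against the path coefficients: in a simple mediation model the indirect effect is the product of the two constituent paths (a × b). A quick arithmetic check using only the betas from the abstract:

```python
# Indirect effect = (predictor -> mediator path) * (mediator -> outcome path).
a_pu, b_pu = 0.673, 0.281    # CALE -> PU, PU -> AA
a_srl, b_srl = 0.707, 0.220  # CALE -> SRL, SRL -> AA

print(round(a_pu * b_pu, 3))    # 0.189, matching the reported PU indirect effect
print(round(a_srl * b_srl, 3))  # 0.156, matching the reported SRL indirect effect
```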
(This article belongs to the Special Issue Trends in Artificial Intelligence-Supported E-Learning)
Open Access Article
A Unified Clustering-Based Anonymization for Privacy-Preserving Data Publishing with Multidimensional Privacy Quantification
by Anselme Herman Eyeleko, Tao Feng and Yan Yan
Information 2026, 17(3), 302; https://doi.org/10.3390/info17030302 - 20 Mar 2026
Abstract
As widely adopted privacy models in privacy-preserving data publishing (PPDP), k-anonymity and ℓ-diversity have been extensively studied by researchers to enable the release of useful information while preserving data privacy. However, existing methods suffer from several limitations. They often rely on single-dimensional privacy models and lack unified metrics for accurately quantifying privacy leakages. Many approaches overlook the impact of semantic similarity and adversarial prior and posterior beliefs among sensitive attributes and frequently employ suboptimal similarity measures that fail to account for the heterogeneous nature of quasi-identifiers, thereby degrading both privacy protection and data utility. To address these challenges, this paper proposes CAMDP, a unified clustering-based anonymization method for privacy-preserving data publishing with multidimensional privacy quantification. CAMDP constructs equivalence classes that satisfy k-anonymity while simultaneously enhancing sensitive attribute diversity, reducing semantic similarity, and limiting divergence between prior and posterior adversarial beliefs. A unified multidimensional metric is introduced to jointly quantify privacy leakage and information loss, guiding the anonymization process. Additionally, a similarity-aware distance metric tailored to mixed-type quasi-identifiers is employed to reduce information loss. Experimental results on three benchmark datasets, Adult, Careplans, and Airline, demonstrate that CAMDP consistently outperforms state-of-the-art methods. Across all tested configurations, CAMDP achieves the lowest average privacy leakage (0.1235, 0.0795, and 0.1855, respectively), lower average information loss (0.626, 0.636, and 0.60, respectively), and the lowest average intra-cluster dissimilarity (0.586, 0.635, and 0.573, respectively), while maintaining competitive execution time across the three datasets.
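The two classic guarantees the abstract builds on can be stated concretely: every quasi-identifier equivalence class must contain at least k records (k-anonymity) and at least ℓ distinct sensitive values (distinct ℓ-diversity). The checker below is an illustrative sketch of those definitions, not the paper's CAMDP algorithm; the table values are invented.

```python
# Check k-anonymity and distinct l-diversity over a published table.
from collections import defaultdict

def satisfies_k_anonymity_l_diversity(rows, qi_cols, sens_col, k, l):
    classes = defaultdict(list)
    for row in rows:
        classes[tuple(row[c] for c in qi_cols)].append(row[sens_col])
    return all(len(v) >= k and len(set(v)) >= l for v in classes.values())

rows = [
    {"age": "30-39", "zip": "441**", "disease": "flu"},
    {"age": "30-39", "zip": "441**", "disease": "asthma"},
    {"age": "40-49", "zip": "532**", "disease": "flu"},
    {"age": "40-49", "zip": "532**", "disease": "flu"},
]
print(satisfies_k_anonymity_l_diversity(rows, ["age", "zip"], "disease", k=2, l=2))
# False: the second equivalence class has 2 records but only 1 distinct sensitive value
```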
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)
Open Access Article
A Hybrid Machine Learning Approach for Classifying Indonesian Cybercrime Discourse Using a Localized Threat Taxonomy
by Firman Arifman, Teddy Mantoro and Dini Oktarina Dwi Handayani
Information 2026, 17(3), 301; https://doi.org/10.3390/info17030301 - 20 Mar 2026
Abstract
Indonesia’s rapid digital growth has been accompanied by escalating cyber threats, with public discourse on social media emerging as a critical but underutilized source of threat intelligence. This discourse is characterized by informal language and local nuances that render existing international cybercrime taxonomies ineffective, creating a gap in scalable, locally relevant threat analytics. This study introduces the Indonesian Cybercrime Threat Taxonomy (ICTT), a novel five-dimensional framework tailored to Indonesian online environments. An end-to-end OSINT pipeline was developed to collect 2344 samples from X (formerly Twitter) and YouTube, employing weak supervision with 12 high-precision regex patterns to generate training labels. A state-of-the-art IndoBERT model was fine-tuned on this data, and its performance was compared against rule-based and hybrid classification models. On a manually annotated gold-standard dataset of 600 samples, both the IndoBERT and hybrid models achieved 96.8% accuracy, significantly outperforming the rule-based baseline (66.7%). The models demonstrated strong generalization across both social media platforms, and the hybrid approach provided an effective balance of high performance and interpretability. This research demonstrates that informal public discourse can be systematically transformed into structured threat intelligence. The ICTT and the accompanying hybrid classification system provide a scalable, interpretable, and locally relevant foundation for cyber threat analytics in Indonesia, establishing a methodological blueprint for other low-resource language contexts.
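The weak-supervision step above (high-precision regex patterns producing training labels, with unmatched posts left for the fine-tuned model) can be sketched as follows. The patterns and labels are invented English stand-ins, not the paper's 12 Indonesian patterns.

```python
# Toy weak-supervision labeler: first matching high-precision pattern wins;
# texts matching nothing are abstained on (left for the neural classifier).
import re

PATTERNS = [
    (re.compile(r"\bphish(ing)?\b", re.I), "phishing"),
    (re.compile(r"\bransomware\b", re.I), "ransomware"),
    (re.compile(r"\b(data\s+leak|breach)\b", re.I), "data_breach"),
]

def weak_label(text):
    for pattern, label in PATTERNS:
        if pattern.search(text):
            return label
    return None  # abstain

print(weak_label("Warning: new phishing site mimics a local bank"))  # phishing
print(weak_label("Massive data leak reported at a ministry"))        # data_breach
print(weak_label("General IT outage this morning"))                  # None
```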
(This article belongs to the Special Issue Information Extraction and Language Discourse Processing)
Open Access Article
Exploring Optical Flow Methods for Automated Fall Detection System
by Simeon Karpuzov, Stiliyan Kalitzin, Stefan Tabakov, Dobromir Tsolyov and Georgi Petkov
Information 2026, 17(3), 300; https://doi.org/10.3390/info17030300 - 20 Mar 2026
Abstract
Falls pose severe risks to vulnerable populations, particularly the elderly and individuals with adverse neurological conditions, necessitating reliable and non-obstructive detection systems. While previous multi-modal approaches utilizing video and audio have demonstrated strong performance, they face significant limitations regarding sensitivity to environmental noise. This paper presents a robust, video-only fall detection framework that eliminates reliance on acoustic data to enhance universality. We conduct a comprehensive comparative analysis of five optical flow (OF) algorithms—Horn–Schunck, Lucas–Kanade (LK), LK-Derivative of Gaussian, Farneback, and the spectral method SOFIA—to determine the range of applicability of each technique for capturing fall dynamics. Beyond detection accuracy, we investigate the computational efficiency of each configuration. This optimized, privacy-centric pipeline offers a scalable solution for continuous monitoring in home and clinical settings, addressing the critical need for immediate intervention following high-impact falls.
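The decision stage of such a pipeline can be sketched independently of the optical flow algorithm: real implementations derive a per-frame motion signal from dense flow fields (e.g. Farneback), but the fall-candidate logic reduces to detecting an upward spike in motion energy. The function, threshold, and numbers below are illustrative assumptions, not the paper's method.

```python
# Flag frames where the motion-energy signal crosses a threshold upward,
# as a simplified stand-in for optical-flow-based fall candidate detection.

def fall_candidates(motion_energy, threshold=5.0):
    """Return frame indices where motion energy crosses the threshold upward."""
    hits = []
    for i in range(1, len(motion_energy)):
        if motion_energy[i] >= threshold > motion_energy[i - 1]:
            hits.append(i)
    return hits

# Slow walking, then a sudden high-magnitude event, then stillness.
energy = [0.4, 0.5, 0.6, 7.8, 6.1, 0.2, 0.1]
print(fall_candidates(energy))  # [3]
```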
Topics
Topic in Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026

Topic in Applied Sciences, Information, Systems, Technologies, Electronics, AI
Challenges and Opportunities of Integrating Service Science with Data Science and Artificial Intelligence
Topic Editors: Dickson K. W. Chiu, Stuart So
Deadline: 30 April 2026

Topic in Electronics, Future Internet, Technologies, Telecom, Network, Microwave, Information, Signals
Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications
Topic Editors: Han Wang, Fangqing Wen, Xianpeng Wang
Deadline: 31 May 2026

Topic in AI, Applied Sciences, Education Sciences, Electronics, Information
Explainable AI in Education
Topic Editors: Guanfeng Liu, Karina Luzia, Luke Bozzetto, Tommy Yuan, Pengpeng Zhao
Deadline: 30 June 2026
Special Issues
Special Issue in Information
Trends in Artificial Intelligence-Supported E-Learning
Guest Editors: Todorka Glushkova, Lyubka Doukovska
Deadline: 31 March 2026

Special Issue in Information
Advancements in Healthcare Data Science: Innovations, Challenges and Applications
Guest Editor: Muneer Ahmad
Deadline: 31 March 2026

Special Issue in Information
Artificial Intelligence for Signal, Image and Video Processing
Guest Editors: Seyed Sahand Mohammadi Ziabari, Ali Mohammed Mansoor Alsahag
Deadline: 31 March 2026

Special Issue in Information
Multimodal Human-Computer Interaction
Guest Editors: Nuno Almeida, Samuel Silva, António Joaquim da Silva Teixeira
Deadline: 31 March 2026
Topical Collections
Topical Collection in Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo

Topical Collection in Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez

Topical Collection in Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero