Journal Description
Information
is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.6 days after submission; the time from acceptance to publication is 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Impact Factor: 2.9 (2024)
5-Year Impact Factor: 3.0 (2024)
Latest Articles
Community-Aware Two-Stage Diversification for Social Media User Recommendation with Graph Neural Networks
Information 2026, 17(1), 29; https://doi.org/10.3390/info17010029 - 31 Dec 2025
Abstract
The occurrence of filter bubbles and echo chambers in social media recommendation systems poses a significant threat to information diversity and democratic discourse. Although graph neural networks (GNNs) achieve leading accuracy in user recommendation, their optimization for engagement metrics inadvertently reinforces homophily, creating isolated information ecosystems. This research developed community-aware two-stage diversification with GNNs (CATD-GNN), a method that leverages the inherent community structure of social networks to promote diversity without sacrificing recommendation quality. CATD-GNN integrates community detection with GNN learning through a two-stage diversification process. The proposed method employs the Louvain method to identify community structures as pseudo-categories, then applies submodular neighbor selection and community-based loss reweighting during GNN training (Stage 1), followed by coverage and redundancy-aware reranking (Stage 2). Twitter data capturing Black Lives Matter discourse and Reddit political discussion networks were used to evaluate the method. CATD-GNN achieves improvements in diversity metrics while maintaining competitive accuracy. The two-stage architecture demonstrates a synergistic effect: the combination of diversity-aware training and coverage-based reranking produces greater improvements than either component alone. The proposed method successfully identifies and recommends users from different communities while preserving recommendation relevance.
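The Stage-2 coverage- and redundancy-aware reranking described above can be illustrated with a generic greedy sketch (not the authors' code; the candidate scores, community labels, and trade-off weight `lam` are hypothetical):

```python
def rerank(candidates, k, lam=0.5):
    """Greedy coverage-aware reranking.
    candidates: list of (user, relevance, community) tuples; lam trades
    relevance against covering communities not yet in the output."""
    selected, covered = [], set()
    pool = list(candidates)
    while pool and len(selected) < k:
        best = max(pool, key=lambda c: (1 - lam) * c[1]
                   + lam * (c[2] not in covered))
        pool.remove(best)
        selected.append(best[0])
        covered.add(best[2])
    return selected

cands = [("a", 0.9, 1), ("b", 0.85, 1), ("c", 0.6, 2), ("d", 0.5, 3)]
print(rerank(cands, 3))  # → ['a', 'c', 'd']  (b skipped: same community as a)
```

Note how user "b" is passed over despite high relevance because its community is already covered, which is the diversification effect the paper targets.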
Full article
(This article belongs to the Special Issue 2nd Edition of Modern Recommender Systems: Approaches, Challenges and Applications)
Open Access Article
Oversampling Algorithm Based on Improved K-Means and Gaussian Distribution
by
Wenhao Xie and Xiao Huang
Information 2026, 17(1), 28; https://doi.org/10.3390/info17010028 - 31 Dec 2025
Abstract
Oversampling is common and effective in resolving the classification problem of imbalanced data. Traditional oversampling methods are prone to generating overlapping or noisy samples. Clustering can effectively alleviate the above problems to a certain extent. However, the quality of clustering results has a significant impact on the final classification performance. To address this problem, an oversampling algorithm based on the Gaussian distribution oversampling algorithm and the K-means clustering algorithm combining compactness and separateness (CSKGO) is proposed in this paper. The algorithm first uses the K-means clustering algorithm, combining compactness and separateness to cluster the minority samples, constructs the cluster compactness index and inter-cluster separateness index to obtain the optimal number of clusters and the clustering results, and obtains the local distribution characteristics of the minority samples through clustering. Secondly, the sampling ratio for each cluster is assigned based on the compactness of the clustering results to determine the number of samples for each cluster in the minority class. Then, the mean vectors and covariance matrices of each cluster are calculated, and the Gaussian distribution oversampling algorithm is used to generate new samples that match the distribution of characteristics of the real minority samples, which are combined with the majority samples to form balanced data. To verify the effectiveness of the proposed algorithm, 24 datasets were selected from the University of California Irvine (UCI) Repository, and they were oversampled using the CSKGO algorithm proposed in this paper and other oversampling algorithms, respectively. Finally, these datasets were classified using Random Forest, Support Vector Machine, and K-Nearest Neighbor Classifiers. 
The results indicate that the algorithm proposed in this paper has higher accuracy, F-measure, G-mean, and AUC values, which can effectively improve the classification performance of the imbalanced datasets.
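The core generation step, clustering minority samples and then drawing synthetic points from a fitted Gaussian, can be sketched as follows (illustrative only; for brevity this uses per-dimension means and standard deviations rather than the full mean vectors and covariance matrices the paper describes):

```python
import random
import statistics

def gaussian_oversample(cluster, n_new, seed=0):
    """Draw n_new synthetic minority-class samples from a Gaussian
    fitted to one cluster (per-dimension mean/std; a diagonal
    simplification of the paper's full covariance matrices)."""
    rng = random.Random(seed)
    dims = list(zip(*cluster))                    # transpose to per-dimension
    means = [statistics.fmean(d) for d in dims]
    stds = [statistics.pstdev(d) for d in dims]
    return [[rng.gauss(m, s) for m, s in zip(means, stds)]
            for _ in range(n_new)]

cluster = [[1.0, 2.0], [1.2, 2.1], [0.9, 1.8]]   # one minority cluster
synthetic = gaussian_oversample(cluster, 5)       # 5 new 2-D samples
```

The synthetic samples follow the local distribution of the cluster rather than lying on line segments between neighbors, which is what distinguishes Gaussian oversampling from SMOTE-style interpolation.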
Full article
Open Access Article
Simulating Advanced Social Botnets: A Framework for Behavior Realism and Coordinated Stealth
by
Rui Jin and Yong Liao
Information 2026, 17(1), 27; https://doi.org/10.3390/info17010027 - 31 Dec 2025
Abstract
The increasing sophistication of social bots demands advanced simulation frameworks to model potential vulnerabilities in detection systems and probe their robustness. While existing studies have explored aspects of social bot simulation, they often fall short in capturing key adversarial behaviors. To address this gap, we propose a simulation framework that jointly incorporates both realistic behavioral mimicry and adaptive inter-bot coordination. Our approach introduces a human-like behavior module that reduces detectable divergence from genuine user activity patterns through distributional matching, combined with a coordination module that enables strategic cooperation while maintaining structural stealth. The effectiveness of the proposed framework is validated through adversarial simulations against both feature-based (Random Forest) and graph-based (BotRGCN) detectors on a real-world dataset. Experimental results demonstrate that our approach enables bots to achieve remarkable evasion capabilities, with the human-like behavior module reaching up to a 100% survival rate against RF-based detectors and 99.1% against the BotRGCN detector. This study yields two key findings: (1) the integration of human-like behavior and target-aware coordination establishes a new paradigm for simulating botnets that are resilient to both feature-based and graph-based detectors; (2) the proposed likelihood-based reward and group-state optimization mechanism effectively aligns botnet activities with the social context, achieving concealment through integration rather than mere avoidance. The framework provides valuable insights into the complex interplay between evasion strategies and detector effectiveness, offering a robust foundation for future research on social bot threats.
Full article
(This article belongs to the Special Issue Social Media Mining: Algorithms, Insights, and Applications)
Open Access Article
Mapping Fake News Research in Digital Media: A Bibliometric and Topic Modeling Analysis of Global Trends
by
Yuh-Shan Ho, Fatma Yardibi, Murat Ertan Dogan and Huseyin Kusetogullari
Information 2026, 17(1), 26; https://doi.org/10.3390/info17010026 - 31 Dec 2025
Abstract
This study aims to identify research trends in communication regarding the phenomenon of “fake news” in digital media. Fake news has become a rapidly growing and significant area of research in communication studies in recent years. Published studies were collected from the Science Citation Index Expanded database. The analysis included the annual distribution of publications, citation metrics, leading journals, countries, institutions, and authors. To explore the conceptual structure, topic modeling was conducted using text mining techniques along with DBSCAN and k-means clustering methods. The United States is a leader in the field, both as a producer country and in terms of technology implementation. Vraga, Bode, Tully, Hameleers, and Tandoc are among the most influential authors. The most cited studies specifically focus on misinformation in the health sector and political disinformation and manipulation during elections. Topic modeling analyses show that the literature mainly clusters around health disinformation, political communication, and verification technologies. The findings have important implications for communication policies, media literacy, and fact-checking technologies. Research that systematically examines fake news from a communication perspective, both in performance and conceptual structures, is scarce in the literature. The resulting thematic clusters provide valuable insights for future research.
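The k-means clustering used alongside DBSCAN in the topic-modeling step can be illustrated generically (a plain Lloyd's-algorithm sketch on 2-D points, not the study's pipeline; in practice the inputs would be document vectors):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assignment step: each point joins its nearest center
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                  + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        # update step: each center moves to its group's mean
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(sorted(kmeans(pts, 2)))  # → [(0.0, 0.5), (10.0, 10.5)]
```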
Full article
(This article belongs to the Section Information Processes)
Open Access Article
A Cross-Scale Feature Fusion Method for Effectively Enhancing Small Object Detection Performance
by
Yaoxing Kang, Yunzuo Zhang, Yaheng Ren and Yu Cheng
Information 2026, 17(1), 25; https://doi.org/10.3390/info17010025 - 31 Dec 2025
Abstract
Deep learning-based industrial product surface defect detection methods are replacing manual inspection, while the issue of small object detection remains a key challenge in the current field of surface defect detection. The feature pyramid structures demonstrate great potential in improving the performance of small object detection and are one of the important current research directions. Nevertheless, traditional feature pyramid networks still suffer from problems such as imprecise focus on key features, insufficient feature discrimination capabilities, and weak correlations between features. To address these issues, this paper proposes a plug-and-play guided focus feature pyramid network, named GF-FPN. Built on the foundation of FPN, this network is designed with a bottom-up guided aggregation network (GFN): through a lightweight pyramidal attention module (LPAM), star operation, and residual connections, it establishes correlations between objects and local contextual information, as well as between shallow-level details and deep-level semantic features. This enables the feature pyramid network to focus on key features, enhance the ability to distinguish between objects and backgrounds, and thereby improve the model’s small object detection performance. Experimental results on the self-built TinyIndus dataset and NEU-DET demonstrate that the detection model based on GF-FPN exhibits more competitive advantages in object detection compared to existing models.
Full article
(This article belongs to the Special Issue Machine Learning in Image Processing and Computer Vision)
Open Access Article
Exploring the Determinants of Continuous Participation in Virtual International Design Workshops Mediated by AI-Driven Digital Humans
by
Yufeng Fu, Chun Yang, Zhiyuan Wang and Juncheng Mu
Information 2026, 17(1), 24; https://doi.org/10.3390/info17010024 - 31 Dec 2025
Abstract
As artificial intelligence (AI) technologies and Virtual Exchange (VE) become increasingly embedded in higher education, AI-driven digital humans have begun to feature in design-oriented virtual international workshops, providing a novel context for examining learner behaviour. This study develops a structural model to examine the links between system support, interaction processes, self-efficacy, satisfaction, and international learning intention. Specifically, it investigates how perceived AI support, system ease of use, and interaction intensity influence students’ continuous participation in international learning through the mediating roles of learning self-efficacy, interaction quality, and satisfaction. Data were collected through an online questionnaire administered to undergraduate and postgraduate students who had participated in an AI-driven digital human–supported online international design workshop, yielding 611 valid responses. Reliability and validity analyses, as well as structural equation modelling, were conducted using SPSS 22 and AMOS v.22.0. The results show that perceived AI support, system ease of use, and interaction intensity each have a significant positive effect on learning self-efficacy and interaction quality. Both self-efficacy and interaction quality, in turn, significantly enhance learning satisfaction, which subsequently increases students’ intentions for sustained participation in international learning. Overall, the findings reveal a coherent causal chain: AI-driven digital human system characteristics → learning process experience → learning satisfaction → sustained participation intention. This study demonstrates that integrating AI-driven digital humans can meaningfully improve learners’ process experiences in virtual international design workshops. 
The results provide empirical guidance for curriculum design, pedagogical strategies, and platform optimization in AI-supported, design-oriented virtual international learning environments.
Full article
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)
Open Access Article
Research on Capacity Optimization Configuration of Wind/PV/Storage Power Supply System for Communication Base Station Group
by
Ximei Hu, Shuxia Yang and Zhiqiang He
Information 2026, 17(1), 23; https://doi.org/10.3390/info17010023 - 31 Dec 2025
Abstract
Under the “dual carbon” goals, enhancing the energy supply for communication base stations is crucial for energy conservation and emission reduction. An individual base station with a wind/photovoltaic (PV)/storage system exhibits limited scalability, resulting in poor economy and reliability. To address this, a collaborative power supply scheme for a communication base station group is proposed. This paper establishes a capacity optimization configuration model for such an integrated system and introduces a hybrid solution methodology combining random scenario analysis, Nondominated Sorting Genetic Algorithm II (NSGA-II), and the Generalized Power Mean (GPM). Typical scenarios are solved using NSGA-II to generate a candidate solution set, which is then refined under operational constraints. The GPM method is applied to determine the final configuration by accounting for attribute correlations. A case study on a Chinese base station group, considering uncertainties in renewable generation, demonstrates the feasibility and effectiveness of the proposed approach.
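The Generalized Power Mean used to aggregate attribute scores of candidate configurations can be sketched as follows (a standard definition; how the paper handles attribute correlations is not reproduced here):

```python
import math

def power_mean(values, p):
    """Generalized power mean M_p of positive values:
    p=1 arithmetic, p→0 geometric, p=-1 harmonic."""
    if p == 0:
        # limit case: geometric mean
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return (sum(v ** p for v in values) / len(values)) ** (1 / p)

# e.g. aggregating hypothetical normalized cost/reliability/emission
# scores of one candidate configuration:
score = power_mean([0.8, 0.6, 0.9], p=-1)  # harmonic: penalizes weak attributes
```

Lower values of p make the aggregate more sensitive to a candidate's weakest attribute, which is why power-mean families are used for multi-attribute ranking.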
Full article
Open Access Article
Structural–Semantic Term Weighting for Interpretable Topic Modeling with Higher Coherence and Lower Token Overlap
by
Dmitriy Rodionov, Evgenii Konnikov, Gleb Golikov and Polina Yakob
Information 2026, 17(1), 22; https://doi.org/10.3390/info17010022 - 31 Dec 2025
Abstract
Topic modeling of large news streams is widely used to reconstruct economic and political narratives, which requires coherent topics with low lexical overlap while remaining interpretable to domain experts. We propose TF-SYN-NER-Rel, a structural–semantic term weighting scheme that extends classical TF-IDF by integrating positional, syntactic, factual, and named-entity coefficients derived from morphosyntactic and dependency parses of Russian news texts. The method is embedded into a standard Latent Dirichlet Allocation (LDA) pipeline and evaluated on a large Russian-language news corpus from the online archive of Moskovsky Komsomolets (over 600,000 documents), with political, financial, and sports subsets obtained via dictionary-based expert labeling. For each subset, TF-SYN-NER-Rel is compared with standard TF-IDF under identical LDA settings, and topic quality is assessed using the C_v coherence metric. To assess robustness, we repeat model training across multiple random initializations and report aggregate coherence statistics. Quantitative results show that TF-SYN-NER-Rel improves coherence and yields smoother, more stable coherence curves across the number of topics. Qualitative analysis indicates reduced lexical overlap between topics and clearer separation of event-centered and institutional themes, especially in political and financial news. Overall, the proposed pipeline relies on CPU-based NLP tools and sparse linear algebra, providing a computationally lightweight and interpretable complement to embedding- and LLM-based topic modeling in large-scale news monitoring.
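The idea of extending TF-IDF with multiplicative structural coefficients can be sketched generically (the `coeffs` values standing in for the paper's positional, syntactic, factual, and named-entity coefficients are hypothetical):

```python
import math
from collections import Counter

def weighted_tfidf(docs, coeffs):
    """TF-IDF with per-term multiplicative structural coefficients.
    docs: list of token lists; coeffs: term -> coefficient (terms
    absent from coeffs default to 1.0, i.e. plain TF-IDF)."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    weights = []
    for d in docs:
        tf = Counter(d)
        weights.append({t: (tf[t] / len(d)) * math.log(n / df[t])
                           * coeffs.get(t, 1.0)
                        for t in tf})
    return weights

docs = [["bank", "loan"], ["bank", "river"]]
w = weighted_tfidf(docs, {"loan": 2.0})  # boost, e.g., a syntactic head term
```

Feeding these boosted weights into LDA instead of raw TF-IDF is the mechanism by which structurally salient terms gain influence over topic formation.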
Full article
(This article belongs to the Collection Natural Language Processing and Applications: Challenges and Perspectives)
Open Access Article
AVI-SHIELD: An Explainable TinyML Cross-Platform Threat Detection Framework for Aviation Mobile Security
by
Chaymae Majdoubi, Saida EL Mendili, Youssef Gahi and Khalil El-Khatib
Information 2026, 17(1), 21; https://doi.org/10.3390/info17010021 - 31 Dec 2025
Abstract
The integration of mobile devices into aviation, powering electronic flight bags, maintenance logs, and flight planning tools, has created a critical and expanding cyber-attack surface. Security for these systems must be not only effective but also transparent, resource-efficient, and certifiable to meet stringent aviation safety standards. This paper presents AVI-SHIELD, a novel framework for developing high-assurance, on-device threat detection. Our methodology, grounded in the MITRE ATT&CK® framework, models credible aviation-specific threats to generate the AviMal-TinyX dataset. We then design and optimize a set of compact, interpretable detection algorithms through quantization and pruning for deployment on resource-constrained hardware. Evaluation demonstrates that AVI-SHIELD achieves 97.2% detection accuracy on AviMal-TinyX while operating with strict resource efficiency (<1.5 MB model size, <35 ms inference time, and <0.1 joules per inference) on both Android and iOS. The framework provides crucial decision transparency through integrated, on-device analysis of detection results, adding a manageable overhead (~120 ms) only upon detection. Its successful deployment on both Android and iOS demonstrates that AVI-SHIELD can provide a uniform security posture across heterogeneous device fleets, a critical requirement for airline operations. This work provides a foundational approach for deploying certifiable, edge-based security that delivers the mandatory offline protection required for safety-critical mobile aviation applications.
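Magnitude-based pruning, one common way to realize the pruning step mentioned above, can be sketched as follows (the paper does not specify its exact procedure; the sparsity level is illustrative):

```python
def prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights
    (magnitude pruning); a list stands in for a flattened weight tensor."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)                  # how many weights to drop
    thresh = flat[k - 1] if k > 0 else -1.0        # magnitude cutoff
    return [0.0 if abs(w) <= thresh else w for w in weights]

print(prune([0.1, -0.5, 0.05, 2.0], 0.5))  # → [0.0, -0.5, 0.0, 2.0]
```

Zeroed weights can then be stored sparsely or skipped at inference time, which is what drives the sub-1.5 MB model sizes TinyML deployments target.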
Full article
(This article belongs to the Special Issue Theoretical Foundations and Algorithms for Scheduling in Parallel and Distributed Systems)
Open Access Article
Application of FDTD Method in the Calculation of Lightning Propagation Effects on Mixed Terrain of Land and Sea
by
Fang Xiao, Qiming Ma, Xiao Zhou, Jiajun Song, Jiaquan Wang and Linsen Jiang
Information 2026, 17(1), 20; https://doi.org/10.3390/info17010020 - 29 Dec 2025
Abstract
Based on the finite-difference time-domain (FDTD) method, this study investigates the propagation effects of lightning electromagnetic fields over mixed sea–land paths. A self-developed FDTD computational model is employed, which takes into account the influence of the Earth–ionosphere waveguide structure on the radiation field propagation. Through numerical simulations, the waveforms of the vertical electric field and azimuthal magnetic field of the lightning radiation during mixed-path propagation are obtained. The results demonstrate that under long-distance propagation conditions of 50 km, the discontinuity between land and sea media significantly distorts the electric field waveform, while the influence on the magnetic field waveform is negligible. This study provides a reliable numerical basis for analyzing the propagation characteristics of lightning radiation fields in complex terrain and offers valuable insights for lightning location and electromagnetic environment assessment.
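A minimal 1-D free-space FDTD update (Yee leapfrog scheme, normalized units) illustrates the method's core structure; the study's actual model additionally handles mixed land–sea media and the Earth–ionosphere waveguide, which are not reproduced here:

```python
import math

def fdtd_1d(steps, n=200, src=100):
    """Minimal 1-D free-space FDTD (Yee leapfrog) in normalized units."""
    ez = [0.0] * n  # electric field samples
    hy = [0.0] * n  # magnetic field samples (staggered half-cell)
    for t in range(steps):
        for k in range(1, n):          # E-field update from curl of H
            ez[k] += 0.5 * (hy[k - 1] - hy[k])
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # Gaussian pulse source
        for k in range(n - 1):         # H-field update from curl of E
            hy[k] += 0.5 * (ez[k] - ez[k + 1])
    return ez

field = fdtd_1d(60)  # snapshot after 60 time steps
```

Medium discontinuities of the kind the paper studies would enter through spatially varying update coefficients in place of the constant 0.5.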
Full article
Open Access Systematic Review
Data Management in Smart Manufacturing Supply Chains: A Systematic Review of Practices and Applications (2020–2025)
by
Nouhaila Smina, Youssef Gahi and Jihane Gharib
Information 2026, 17(1), 19; https://doi.org/10.3390/info17010019 - 27 Dec 2025
Abstract
Smart supply chains, enabled by Industry 4.0 technologies, are increasingly recognized as key drivers of competitiveness, leveraging data across the value chain to enhance visibility, responsiveness, and resilience, while supporting better planning, optimized resource utilization, and agile customer service. Effective data management has thus become a strategic capability, fostering operational performance, innovation, and long-term value creation. However, existing research and practice remain fragmented, often focusing on isolated functions such as production, logistics, or quality, the most data-intensive and critical domains in smart manufacturing, without comprehensively addressing data acquisition, storage, integration, analysis, and visualization across all supply chain phases. This article addresses these gaps through a systematic literature review of 55 peer-reviewed studies published between 2020 and 2025, conducted following PRISMA guidelines using Scopus and Web of Science. Contributions are categorized into reviews, frameworks/models, and empirical studies, and the analysis examines how data is collected, integrated, and leveraged across the entire supply chain. By adopting a holistic perspective, this study provides a comprehensive understanding of data management in smart manufacturing supply chains, highlights current practices and persistent challenges, and identifies key avenues for future research.
Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence for Industrial and Supply Chain Systems)
Open Access Article
Secure Streaming Data Encryption and Query Scheme with Electric Vehicle Key Management
by
Zhicheng Li, Jian Xu, Fan Wu, Cen Sun, Xiaomin Wu and Xiangliang Fang
Information 2026, 17(1), 18; https://doi.org/10.3390/info17010018 - 25 Dec 2025
Abstract
The rapid proliferation of Electric Vehicle (EV) infrastructures has led to the massive generation of high-frequency streaming data uploaded to cloud platforms for real-time analysis. While such data supports intelligent energy management and behavioral analytics, it also encapsulates sensitive user information, the disclosure or misuse of which can lead to significant privacy and security threats. This work addresses these challenges by developing a secure and scalable scheme for protecting and verifying streaming data during storage and collaborative analysis. The proposed scheme ensures end-to-end confidentiality, forward security, and integrity verification while supporting efficient encrypted aggregation and fine-grained, time-based authorization. It introduces a lightweight mechanism that hierarchically organizes cryptographic keys and ciphertexts over time, enabling privacy-preserving queries without decrypting individual data points. Building on this foundation, an electric vehicle key management and query system is further designed to integrate the proposed encryption and verification scheme into practical V2X environments. The system supports privacy-preserving data sharing, verifiable statistical analytics, and flexible access control across heterogeneous cloud and edge infrastructures. Analytical and experimental evidence shows that the designed system attains rigorous security guarantees alongside excellent efficiency and scalability, rendering it ideal for large-scale electric vehicle data protection and analysis tasks.
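Forward security of the kind described is commonly obtained with a one-way key-evolution chain; a generic hash-chain sketch (not the paper's exact construction; the root secret and domain label are placeholders) looks like this:

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One-way key update: the period-i key cannot be recovered from
    the period-(i+1) key, so compromising the current key does not
    expose previously encrypted stream segments."""
    return hashlib.sha256(b"evolve" + key).digest()

k0 = hashlib.sha256(b"root-secret").digest()  # placeholder root key
k1 = evolve(k0)   # key for the next time period
k2 = evolve(k1)   # and the one after; k0, k1 can now be erased
```

Each period's data is encrypted under that period's key and the old key is erased after evolving, which is what yields forward security for stored streams.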
Full article
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)
Open Access Article
TA-LJP: Term-Aware Legal Judgment Prediction
by
Yunkai Shen, Hua Wei and Xuan Tian
Information 2026, 17(1), 17; https://doi.org/10.3390/info17010017 - 24 Dec 2025
Abstract
Legal Judgment Prediction (LJP) is a crucial task in the field of legal artificial intelligence. It leverages the fact description of a case to automatically render a verdict, deriving judgment results (including legal articles, charges, and penalty terms). Current LJP methods are overly simplistic in integrating legal articles and charge definitions into case fact representations, neglecting key legal element information such as legal concepts and terminology and thus omitting key legal elements. Simultaneously, they overlook the sentencing range information contained in legal articles, often leading to judgment results that exceed the statutory penalty terms. In light of this, we propose a novel LJP method—TA-LJP (Term-Aware Legal Judgment Prediction). This method fuses legal articles (or charge definitions) with case fact representations step by step through an improved multi-level fusion module, enhancing the weights of key legal elements so that they are modeled prominently, and effectively extracting sentencing range information from legal articles to further strengthen case fact representations, thereby improving the overall performance of the LJP task. TA-LJP consists of three main stages. Firstly, to fully model key legal elements when integrating legal articles and charge definitions into fact representations, legal articles (or charge definitions) are incrementally integrated through an improved multi-level fusion module, finely increasing the weights of key legal elements to initially enhance case fact representations. Subsequently, sentencing range information from legal articles is extracted and effectively utilized to further strengthen case fact representations. Finally, the enhanced fact representations are used to predict the legal articles, charges, and penalty terms of the case.
Experimental results on LAIC2021 datasets demonstrate that TA-LJP exhibits distinct advantages in LJP, particularly in the penalty term prediction task, achieving a relative improvement of 3.02% compared to the best baseline method.
Full article
(This article belongs to the Special Issue Advancing Information Systems Through Artificial Intelligence: Innovative Approaches and Applications)
Open Access Article
AI/ML Based Anomaly Detection and Fault Diagnosis of Turbocharged Marine Diesel Engines: Experimental Study on Engine of an Operational Vessel
by
Deepesh Upadrashta and Tomi Wijaya
Information 2026, 17(1), 16; https://doi.org/10.3390/info17010016 - 24 Dec 2025
Abstract
Turbocharged diesel engines are widely used in marine applications, both for propulsion and as generators powering auxiliary systems. Many works have been published on developing diagnosis tools for these engines using data from simulation models or from experiments on sophisticated engine test benches. However, simulation data differs considerably from actual operational data, and the sensor data available on an operational vessel is far more limited than that from test benches. It is therefore necessary to develop anomaly prediction and fault diagnosis models from the limited data available from in-service engines. In this paper, an artificial intelligence (AI)-based anomaly detection model and a machine learning (ML)-based fault diagnosis model were developed using actual data acquired from a diesel engine of a cargo vessel. Unlike previous works, the study uses operational, thermodynamic, and vibration data for anomaly detection and fault diagnosis. The paper presents the overall architecture of the proposed predictive maintenance system, including details on the sensorization of assets, data acquisition, edge computation, the AI model for anomaly prediction, and the ML algorithm for fault diagnosis. Faults of varying severity were induced in subcomponents of the engine to validate the accuracy of the anomaly detection and fault diagnosis models. The unsupervised stacked autoencoder model predicts engine anomalies with 87.6% accuracy, and the supervised fault diagnosis model using the Support Vector Machine algorithm achieves a balanced accuracy of 99.7%. The proposed models are a step towards sustainable shipping and have the potential to be deployed across various applications.
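The core idea behind autoencoder anomaly detection, as described in the abstract, is to learn a compact reconstruction of healthy data and flag samples that reconstruct poorly. As a rough, self-contained illustration (a linear PCA stand-in on synthetic data, not the authors' stacked autoencoder or their vessel data):

```python
import numpy as np

def fit_linear_autoencoder(X, n_components):
    # Linear (PCA) stand-in for a stacked autoencoder: encode = project
    # onto the top principal directions, decode = project back.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def reconstruction_error(X, mu, W):
    Xc = X - mu
    return np.linalg.norm(Xc - Xc @ W.T @ W, axis=1)

rng = np.random.default_rng(0)
# "Healthy" sensor readings confined to a 3-dimensional subspace of 8 channels.
normal = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 8))
mu, W = fit_linear_autoencoder(normal, n_components=3)
threshold = np.percentile(reconstruction_error(normal, mu, W), 99)

# Full-rank readings mimic a fault signature and reconstruct poorly.
faulty = rng.normal(size=(10, 8))
flags = reconstruction_error(faulty, mu, W) > threshold
```

A deep stacked autoencoder replaces the linear projection with several nonlinear encoder/decoder layers, but the thresholding of reconstruction error works the same way.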
Full article
(This article belongs to the Special Issue Addressing Real-World Challenges in Recognition and Classification with Cutting-Edge AI Models and Methods)
Open Access Article
Integrating Target Domain Convex Hull with MMD for Cross-Dataset EEG Classification of Parkinson’s Disease
by Xueqi Wu, Weixiang Gao, Jiangwen Lu and Yunyuan Gao
Information 2026, 17(1), 15; https://doi.org/10.3390/info17010015 - 23 Dec 2025
Abstract
Parkinson’s disease causes great harm to human life and health, and its detection based on the electroencephalogram (EEG) provides a new way to prevent and treat the disease. However, EEG data samples are limited, and there are large differences among subjects, especially across datasets. In this study, a new method called Improved Convex Hull and Maximum Mean Discrepancy (ICMMD) is proposed for cross-dataset classification of Parkinson’s disease by combining convex hulls with transfer learning. The paper innovatively applies cross-dataset transfer learning to brain–computer interfaces for Parkinson’s disease, using Euclidean distance for data alignment and EEG channel selection, and combines the convex envelope with the MMD distance to form an effective source domain selection method. The Lowpd, San, and UNM datasets are used to verify the effectiveness of the proposed method through experiments on different brain regions and frequency bands. The results show that the method achieves good classification performance across brain regions and frequency bands, providing a new idea and approach for cross-dataset detection of Parkinson’s disease.
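The MMD distance used here for source-domain selection measures how far apart two feature distributions are in a kernel space. A minimal sketch of the standard biased RBF-kernel MMD estimator on invented toy features (not the paper's ICMMD pipeline or its EEG data):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.5):
    # Biased estimate of squared MMD with RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 4))  # toy target-domain features
near = rng.normal(0.2, 1.0, size=(200, 4))    # candidate source, similar distribution
far = rng.normal(3.0, 1.0, size=(200, 4))     # candidate source, shifted distribution
# The candidate with the smaller MMD to the target is the better-matched source domain.
```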
Full article

Open Access Article
PRA-Unet: Parallel Residual Attention U-Net for Real-Time Segmentation of Brain Tumors
by Ali Zakaria Lebani, Medjeded Merati and Saïd Mahmoudi
Information 2026, 17(1), 14; https://doi.org/10.3390/info17010014 - 23 Dec 2025
Abstract
With the increasing prevalence of brain tumors, it becomes crucial to ensure fast and reliable segmentation in MRI scans. Medical professionals struggle with manual tumor segmentation due to its exhausting and time-consuming nature. Automated segmentation speeds up decision-making and diagnosis; however, achieving an optimal balance between accuracy and computational cost remains a significant challenge. In many cases, current methods trade speed for accuracy, or vice versa, consuming substantial computing power and making them difficult to use on devices with limited resources. To address this issue, we present PRA-UNet, a lightweight deep learning model optimized for fast and accurate 2D brain tumor segmentation. Using a single 2D input, the architecture processes four types of MRI scans (FLAIR, T1, T1c, and T2). The encoder uses inverted residual blocks and bottleneck residual blocks to capture features at different scales effectively. The Convolutional Block Attention Module (CBAM) and the Spatial Attention Module (SAM) improve the bridge and skip connections by refining feature maps and making it easier to detect and localize brain tumors. The decoder uses depthwise separable convolutions, which significantly reduce computational costs without degrading accuracy. Experiments on the BraTS2020 dataset show that PRA-UNet achieves a Dice score of 95.71%, an accuracy of 99.61%, and a processing speed of 60 ms per image, enabling real-time analysis. PRA-UNet outperforms other models in segmentation while requiring less computing power, suggesting it could be suitable for deployment on lightweight edge devices in clinical settings. Its speed and reliability enable radiologists to diagnose tumors quickly and accurately, enhancing practical medical applications.
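The parameter savings the abstract attributes to depthwise separable convolutions can be checked with simple arithmetic: a standard convolution couples every input channel to every output channel with a full k×k kernel, while the separable version splits this into a per-channel depthwise k×k filter plus a 1×1 pointwise mix. The layer sizes below are illustrative, not PRA-UNet's actual configuration:

```python
def conv_params(c_in, c_out, k):
    # Standard convolution: one k*k kernel per (input, output) channel pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise k*k filter per input channel, then 1x1 pointwise channel mixing.
    return c_in * k * k + c_in * c_out

std = conv_params(64, 128, 3)                 # 73,728 weights
sep = depthwise_separable_params(64, 128, 3)  # 8,768 weights
reduction = std / sep                         # roughly 8.4x fewer parameters
```

The same ratio applies to multiply-accumulate operations per pixel, which is why the substitution cuts inference cost so sharply on edge devices.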
Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
Open Access Article
Integrating VIIRS Fire Detections and ERA5-Land Reanalysis for Modeling Wildfire Probability in Arid Mountain Systems of the Arabian Peninsula
by Rahmah Al-Qthanin and Zubairul Islam
Information 2026, 17(1), 13; https://doi.org/10.3390/info17010013 - 23 Dec 2025
Abstract
Wildfire occurrence in arid and semiarid landscapes is increasingly driven by shifts in climatic and biophysical conditions, yet its dynamics remain poorly understood in the mountainous environments of western Saudi Arabia. This study modeled wildfire probabilities across the Aseer, Al Baha, Makkah Al-Mukarramah, and Jazan regions via multisource Earth observation datasets from 2012–2025. Active fire detections from VIIRS were integrated with ERA5-Land reanalysis variables, vegetation indices, and Copernicus DEM GLO30 topography. A random forest classifier was trained and validated via stratified sampling and cross-validation to predict monthly burn probabilities. Calibration, reliability assessment, and independent temporal validation confirmed strong model performance (AUC-ROC = 0.96; Brier = 0.03). Climatic dryness (dew-point deficit), vegetation structure (LAI_lv), and surface soil moisture emerged as dominant predictors, underscoring the coupling between energy balance and fuel desiccation. Temporal trend analyses (Kendall’s τ and Sen’s slope) revealed the gradual intensification of fire probability during the dry-to-transition seasons (February–April and September–November), with Aseer showing the most persistent risk. These findings establish a scalable framework for wildfire early warning and landscape management in arid ecosystems under accelerating climatic stress.
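The trend statistics named in the abstract, Kendall's τ and Sen's slope, are both built from pairwise comparisons of a time series and can be sketched in a few lines. The monthly series below is invented toy data, not the study's fire probabilities:

```python
import numpy as np

def kendall_tau(y):
    # Kendall's tau against time: (concordant - discordant pairs) / total pairs.
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

def sens_slope(y):
    # Theil-Sen estimator: median of all pairwise slopes, robust to outliers.
    n = len(y)
    return float(np.median([(y[j] - y[i]) / (j - i)
                            for i in range(n) for j in range(i + 1, n)]))

monthly = np.array([0.10, 0.12, 0.11, 0.15, 0.16, 0.18, 0.17, 0.21])
tau = kendall_tau(monthly)    # strongly positive: probabilities trend upward
slope = sens_slope(monthly)   # median increase per month
```

In practice library implementations (e.g., SciPy's `kendalltau` and `theilslopes`) also handle ties and provide significance tests.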
Full article
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
Open Access Article
Addressing the Dark Side of Differentiation: Bias and Micro-Streaming in Artificial Intelligence Facilitated Lesson Planning
by Jason Zagami
Information 2026, 17(1), 12; https://doi.org/10.3390/info17010012 - 23 Dec 2025
Abstract
As artificial intelligence (AI) becomes increasingly woven into educational design and decision-making, its use within initial teacher education (ITE) exposes deep tensions between efficiency, equity, and professional agency. A critical action research study conducted across three iterations of a third-year ITE course investigated how pre-service teachers engaged with AI-supported lesson planning tools while learning to design for inclusion. Analysis of 123 lesson plans, reflective journals, and survey data revealed a striking pattern: despite instruction in inclusive pedagogy, most participants reproduced fixed-tiered differentiation and deficit-based assumptions about learners’ abilities, a process conceptualised as micro-streaming. AI-generated recommendations often shaped these outcomes, subtly reinforcing hierarchies of capability under the guise of personalisation. Yet, through iterative reflection, dialogue, and critical framing, participants began to recognise and resist these influences, reframing differentiation as design for diversity rather than classification. The findings highlight the paradoxical role of AI in teacher education as both an amplifier of inequity and a catalyst for critical consciousness, and argue for the urgent integration of critical digital pedagogy within ITE programmes. AI can advance inclusive teaching only when educators are empowered to interrogate its epistemologies, question its biases, and reclaim professional judgement as the foundation of ethical pedagogy.
Full article
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
Open Access Systematic Review
Computer Vision for Fashion: A Systematic Review of Design Generation, Simulation, and Personalized Recommendations
by Ilham Kachbal and Said El Abdellaoui
Information 2026, 17(1), 11; https://doi.org/10.3390/info17010011 - 23 Dec 2025
Abstract
The convergence of fashion and technology has created new opportunities for creativity, convenience, and sustainability through the integration of computer vision and artificial intelligence. This systematic review, following PRISMA guidelines, examines 200 studies published between 2017 and 2025 to analyze computational techniques for garment design, accessories, cosmetics, and outfit coordination across three key areas: generative design approaches, virtual simulation methods, and personalized recommendation systems. We comprehensively evaluate deep learning architectures, datasets, and performance metrics employed for fashion item synthesis, virtual try-on, cloth simulation, and outfit recommendation. Key findings reveal significant advances in generative adversarial network (GAN)-based and diffusion-based fashion generation, physics-based simulations achieving real-time performance on mobile and virtual reality (VR) devices, and context-aware recommendation systems integrating multimodal data sources. However, persistent challenges remain, including data scarcity, computational constraints, privacy concerns, and algorithmic bias. We propose actionable directions for responsible AI development in fashion and textile applications, emphasizing the need for inclusive datasets, transparent algorithms, and sustainable computational practices. This review provides researchers and industry practitioners with a comprehensive synthesis of current capabilities, limitations, and future opportunities at the intersection of computer vision and fashion design.
Full article

Open Access Article
Critique of Networked Election Systems: A Comprehensive Analysis of Vulnerabilities and Security Measures
by Jason M. Green, Abdolhossein Sarrafzadeh and Mohd Anwar
Information 2026, 17(1), 10; https://doi.org/10.3390/info17010010 - 22 Dec 2025
Abstract
The security and integrity of election systems represent fundamental pillars of democratic governance in the 21st century. As electoral processes increasingly rely on networked technologies and digital infrastructures, the vulnerability of these systems to cyber threats has become a paramount concern for election officials, cybersecurity experts, and policymakers worldwide. This paper presents the first comprehensive synthesis and systematic analysis of vulnerabilities across major U.S. election systems, integrating findings from government assessments, security research, and documented incidents into a unified analytical framework. We compile and categorize previously fragmented vulnerability data from multiple vendors, federal advisories (CISA, EAC), and security assessments to construct a holistic view of the election security landscape. Our novel contributions include (1) the first cross-vendor vulnerability taxonomy for election systems, (2) a quantitative risk assessment framework specifically designed for election infrastructure, (3) systematic mapping of threat actor capabilities against election system components, and (4) the first proposal for honeynet deployment in election security contexts. Through analysis of over 200 authoritative sources, we identify critical security gaps in federal guidelines, quantify risks in networked election components, and reveal systemic vulnerabilities that only emerge through comprehensive cross-system analysis. Our findings demonstrate that interconnected vulnerabilities create risk-amplification factors of 2–5× compared to isolated component analysis, highlighting the urgent need for comprehensive federal cybersecurity standards, improved network segmentation, and enhanced monitoring capabilities to protect democratic processes.
Full article
Topics
Topic in Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Topic in Applied Sciences, Information, Systems, Technologies, Electronics, AI
Challenges and Opportunities of Integrating Service Science with Data Science and Artificial Intelligence
Topic Editors: Dickson K. W. Chiu, Stuart So
Deadline: 30 April 2026
Special Issues
Special Issue in Information
Interactive Learning: Human in the Loop System Design for Active Human–Computer Interactions
Guest Editors: Fred Petry, Chris J. Michael, Derek T. Anderson
Deadline: 31 December 2025
Special Issue in Information
Generative AI Transformations in Industrial and Societal Applications
Guest Editors: Razi Iqbal, Shereen Ismail
Deadline: 31 December 2025
Special Issue in Information
Semantic Web and Language Models
Guest Editor: Nikolaos Papadakis
Deadline: 31 December 2025
Special Issue in Information
Artificial Intelligence and Data-Driven Strategies for Advancing Smart, Sustainable, and Resilient Infrastructures and Mobility Systems
Guest Editors: Sina Shaffiee Haghshenas, Giuseppe Guido, Vittorio Astarita
Deadline: 31 December 2025
Topical Collections
Topical Collection in Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero