Journal Description
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.6 days after submission; acceptance to publication is undertaken in 3.6 days (median values for papers published in this journal in the first half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Impact Factor: 2.9 (2024); 5-Year Impact Factor: 3.0 (2024)
Latest Articles
Data Management in Smart Manufacturing Supply Chains: A Systematic Review of Practices and Applications (2020–2025)
Information 2026, 17(1), 19; https://doi.org/10.3390/info17010019 (registering DOI) - 27 Dec 2025
Abstract
Smart supply chains, enabled by Industry 4.0 technologies, are increasingly recognized as key drivers of competitiveness, leveraging data across the value chain to enhance visibility, responsiveness, and resilience, while supporting better planning, optimized resource utilization, and agile customer service. Effective data management has thus become a strategic capability, fostering operational performance, innovation, and long-term value creation. However, existing research and practice remain fragmented, often focusing on isolated functions such as production, logistics, or quality, the most data-intensive and critical domains in smart manufacturing, without comprehensively addressing data acquisition, storage, integration, analysis, and visualization across all supply chain phases. This article addresses these gaps through a systematic literature review of 55 peer-reviewed studies published between 2020 and 2025, conducted following PRISMA guidelines using Scopus and Web of Science. Contributions are categorized into reviews, frameworks/models, and empirical studies, and the analysis examines how data is collected, integrated, and leveraged across the entire supply chain. By adopting a holistic perspective, this study provides a comprehensive understanding of data management in smart manufacturing supply chains, highlights current practices and persistent challenges, and identifies key avenues for future research.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Industrial and Supply Chain Systems)
Open Access Article
Secure Streaming Data Encryption and Query Scheme with Electric Vehicle Key Management
by
Zhicheng Li, Jian Xu, Fan Wu, Cen Sun, Xiaomin Wu and Xiangliang Fang
Information 2026, 17(1), 18; https://doi.org/10.3390/info17010018 - 25 Dec 2025
Abstract
The rapid proliferation of Electric Vehicle (EV) infrastructures has led to the massive generation of high-frequency streaming data uploaded to cloud platforms for real-time analysis. While such data supports intelligent energy management and behavioral analytics, it also encapsulates sensitive user information, the disclosure or misuse of which can lead to significant privacy and security threats. This work addresses these challenges by developing a secure and scalable scheme for protecting and verifying streaming data during storage and collaborative analysis. The proposed scheme ensures end-to-end confidentiality, forward security, and integrity verification while supporting efficient encrypted aggregation and fine-grained, time-based authorization. It introduces a lightweight mechanism that hierarchically organizes cryptographic keys and ciphertexts over time, enabling privacy-preserving queries without decrypting individual data points. Building on this foundation, an electric vehicle key management and query system is further designed to integrate the proposed encryption and verification scheme into practical V2X environments. The system supports privacy-preserving data sharing, verifiable statistical analytics, and flexible access control across heterogeneous cloud and edge infrastructures. Analytical and experimental evidence shows that the designed system attains rigorous security guarantees alongside excellent efficiency and scalability, rendering it ideal for large-scale electric vehicle data protection and analysis tasks.
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)
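The time-hierarchical key organization described in the abstract can be pictured with a generic forward-secure key-evolution pattern. The sketch below is a minimal illustration of that building block (hash-chained epoch keys with per-record derivation) and is not the authors' actual construction; the function names, epoch granularity, and derivation labels are assumptions made for the example.

```python
# Illustrative sketch: forward-secure per-epoch keys via a one-way hash chain.
# Compromising the key of epoch t reveals nothing about earlier epochs,
# because earlier keys cannot be recomputed from later ones.
import hashlib

def evolve_key(current_key: bytes) -> bytes:
    """Derive the next epoch key and discard the old one (one-way step)."""
    return hashlib.sha256(b"evolve" + current_key).digest()

def record_key(epoch_key: bytes, record_index: int) -> bytes:
    """Derive a per-record encryption key from the current epoch key."""
    return hashlib.sha256(epoch_key + record_index.to_bytes(8, "big")).digest()

# Usage: start from a root secret and advance once per time epoch (e.g., hourly).
key = hashlib.sha256(b"root secret").digest()
for epoch in range(3):
    k = record_key(key, record_index=0)
    print(f"epoch {epoch}: first record key = {k.hex()[:16]}...")
    key = evolve_key(key)  # the old epoch key is no longer derivable
```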
Open Access Article
TA-LJP: Term-Aware Legal Judgment Prediction
by
Yunkai Shen, Hua Wei and Xuan Tian
Information 2026, 17(1), 17; https://doi.org/10.3390/info17010017 - 24 Dec 2025
Abstract
Legal Judgment Prediction (LJP) is a crucial task in the field of legal artificial intelligence. It leverages the fact description of a case to automatically render a verdict, deriving judgment results (including legal articles, charges, and penalty terms). Current LJP methods are overly simplistic in integrating legal articles and charge definitions into case fact representations, neglecting information about key legal elements such as legal concepts and terminology and thereby omitting these elements. Simultaneously, they overlook the sentencing range information contained in legal articles, often leading to judgment results that exceed the statutory penalty terms. In light of this, we propose a novel LJP method, TA-LJP (Term-Aware Legal Judgment Prediction). This method fuses legal articles (or charge definitions) with case fact representations step by step through an improved multi-level fusion module, enhancing the weights of key legal elements so that they are modeled more prominently, and extracting sentencing range information from legal articles to further strengthen case fact representations, thereby improving the overall performance of the LJP task. TA-LJP consists of three main stages. First, to fully model key legal elements when integrating legal articles and charge definitions into fact representations, legal articles (or charge definitions) are incrementally integrated through the improved multi-level fusion module, finely increasing the weights of key legal elements to initially enhance case fact representations. Subsequently, sentencing range information from legal articles is extracted and used to further strengthen case fact representations. Finally, the enhanced fact representations are used to predict the legal articles, charges, and penalty terms of the case. Experimental results on the LAIC2021 dataset demonstrate that TA-LJP exhibits distinct advantages in LJP, particularly in the penalty term prediction task, achieving a relative improvement of 3.02% over the best baseline method.
(This article belongs to the Special Issue Advancing Information Systems Through Artificial Intelligence: Innovative Approaches and Applications)
Open Access Article
AI/ML Based Anomaly Detection and Fault Diagnosis of Turbocharged Marine Diesel Engines: Experimental Study on Engine of an Operational Vessel
by
Deepesh Upadrashta and Tomi Wijaya
Information 2026, 17(1), 16; https://doi.org/10.3390/info17010016 - 24 Dec 2025
Abstract
Turbocharged diesel engines are widely used for propulsion and as generators powering auxiliary systems in marine applications. Many works have been published on the development of diagnosis tools for these engines using data from simulation models or from experiments on sophisticated engine test benches. However, simulation data differs considerably from actual operational data, and the sensor data available on an actual vessel is far more limited than that from test benches. Therefore, it is necessary to develop anomaly prediction and fault diagnosis models from the limited data available from the engines. In this paper, an artificial intelligence (AI)-based anomaly detection model and a machine learning (ML)-based fault diagnosis model were developed using actual data acquired from a diesel engine of a cargo vessel. Unlike previous works, the study uses operational, thermodynamic, and vibration data for anomaly detection and fault diagnosis. The paper provides the overall architecture of the proposed predictive maintenance system, including details on the sensorization of assets, data acquisition, edge computation, the AI model for anomaly prediction, and the ML algorithm for fault diagnosis. Faults with varying severity levels were induced in subcomponents of the engine to validate the accuracy of the anomaly detection and fault diagnosis models. The unsupervised stacked autoencoder AI model predicts engine anomalies with 87.6% accuracy, and the balanced accuracy of the supervised fault diagnosis model using the Support Vector Machine algorithm is 99.7%. The proposed models are a vital step toward sustainable shipping and have the potential to be deployed across various applications.
(This article belongs to the Special Issue Addressing Real-World Challenges in Recognition and Classification with Cutting-Edge AI Models and Methods)
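As a rough illustration of the reconstruction-error idea behind unsupervised autoencoder anomaly detection, the sketch below trains a small autoencoder-shaped network on healthy data and flags test samples whose reconstruction error exceeds a percentile threshold. The layer sizes, threshold rule, and synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of reconstruction-error anomaly detection with an
# autoencoder-shaped MLP; all sizes and thresholds are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 12))                 # healthy-engine feature vectors
X_test = np.vstack([rng.normal(size=(50, 12)),
                    rng.normal(loc=3.0, size=(5, 12))])  # last 5 rows mimic faults

scaler = StandardScaler().fit(X_normal)
X_tr = scaler.transform(X_normal)

# Encoder-decoder shaped MLP trained to reproduce its own input.
ae = MLPRegressor(hidden_layer_sizes=(8, 4, 8), max_iter=2000, random_state=0)
ae.fit(X_tr, X_tr)

def reconstruction_error(X):
    Xs = scaler.transform(X)
    return np.mean((ae.predict(Xs) - Xs) ** 2, axis=1)

threshold = np.percentile(reconstruction_error(X_normal), 99)
anomalies = reconstruction_error(X_test) > threshold
print(f"{anomalies.sum()} of {len(X_test)} test windows flagged as anomalous")
```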
Open Access Article
Integrating Target Domain Convex Hull with MMD for Cross-Dataset EEG Classification of Parkinson’s Disease
by
Xueqi Wu, Weixiang Gao, Jiangwen Lu and Yunyuan Gao
Information 2026, 17(1), 15; https://doi.org/10.3390/info17010015 - 23 Dec 2025
Abstract
Parkinson’s disease causes great harm to human life and health. Detecting Parkinson’s disease from the electroencephalogram (EEG) provides a new way to prevent and treat the disease. However, because EEG data samples are limited, there are large differences among subjects, and especially among datasets. In this study, a new method called Improved Convex Hull and Maximum Mean Discrepancy (ICMMD) for cross-dataset classification of Parkinson’s disease is proposed by combining convex hulls with transfer learning. The paper innovatively implements cross-dataset transfer learning in the field of brain–computer interfaces for Parkinson’s disease, using Euclidean distance for data alignment and EEG channel selection, and combines the convex envelope with the MMD distance to form an effective source domain selection method. The Lowpd, San, and UNM datasets are used to verify the effectiveness of the proposed method through experiments on different brain regions and frequency bands. The results show that the method has good classification performance across different brain regions and frequency bands. This research provides a new idea and method for cross-dataset detection of Parkinson’s disease.
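For readers unfamiliar with the MMD distance used here for source-domain selection, the sketch below computes a (biased) Gaussian-kernel maximum mean discrepancy between two feature matrices; the fixed kernel bandwidth and the random data are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch: Gaussian-kernel squared MMD between source- and
# target-domain feature matrices.
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X_src, X_tgt, sigma=1.0):
    """Squared MMD: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    return (gaussian_kernel(X_src, X_src, sigma).mean()
            + gaussian_kernel(X_tgt, X_tgt, sigma).mean()
            - 2 * gaussian_kernel(X_src, X_tgt, sigma).mean())

rng = np.random.default_rng(1)
src = rng.normal(size=(100, 16))          # e.g., EEG features from one dataset
tgt = rng.normal(loc=0.5, size=(80, 16))  # e.g., EEG features from another dataset
print("MMD^2 =", mmd2(src, tgt))
```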
Open Access Article
PRA-Unet: Parallel Residual Attention U-Net for Real-Time Segmentation of Brain Tumors
by
Ali Zakaria Lebani, Medjeded Merati and Saïd Mahmoudi
Information 2026, 17(1), 14; https://doi.org/10.3390/info17010014 - 23 Dec 2025
Abstract
With the increasing prevalence of brain tumors, it becomes crucial to ensure fast and reliable segmentation in MRI scans. Medical professionals struggle with manual tumor segmentation due to its exhausting and time-consuming nature. Automated segmentation speeds up decision-making and diagnosis; however, achieving an optimal balance between accuracy and computational cost remains a significant challenge. In many cases, current methods trade speed for accuracy, or vice versa, consuming substantial computing power and making them difficult to use on devices with limited resources. To address this issue, we present PRA-UNet, a lightweight deep learning model optimized for fast and accurate 2D brain tumor segmentation. Using a single 2D input, the architecture processes four types of MRI scans (FLAIR, T1, T1c, and T2). The encoder uses inverted residual blocks and bottleneck residual blocks to capture features at different scales effectively. The Convolutional Block Attention Module (CBAM) and the Spatial Attention Module (SAM) improve the bridge and skip connections by refining feature maps and making it easier to detect and localize brain tumors. The decoder uses depthwise separable convolutions, which significantly reduce computational costs without degrading accuracy. Experiments on the BraTS2020 dataset show that PRA-UNet achieves a Dice score of 95.71%, an accuracy of 99.61%, and a processing speed of 60 ms per image, enabling real-time analysis. PRA-UNet outperforms other models in segmentation while requiring less computing power, suggesting it could be suitable for deployment on lightweight edge devices in clinical settings. Its speed and reliability enable radiologists to diagnose tumors quickly and accurately, enhancing practical medical applications.
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
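The decoder's cost saving comes from depthwise separable convolutions, i.e., a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution. A minimal Keras sketch of such a block is shown below; the channel counts, input size, and normalization choices are illustrative assumptions, not PRA-UNet's actual configuration.

```python
# Illustrative depthwise separable convolution block: a depthwise 3x3 step
# followed by a pointwise 1x1 step, far cheaper than a standard 3x3 conv.
import tensorflow as tf
from tensorflow.keras import layers

def depthwise_separable_block(x, filters):
    x = layers.DepthwiseConv2D(kernel_size=3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(filters, kernel_size=1, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

inputs = tf.keras.Input(shape=(128, 128, 4))   # 4 channels: FLAIR, T1, T1c, T2
outputs = depthwise_separable_block(inputs, filters=32)
tf.keras.Model(inputs, outputs).summary()
```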
Open Access Article
Integrating VIIRS Fire Detections and ERA5-Land Reanalysis for Modeling Wildfire Probability in Arid Mountain Systems of the Arabian Peninsula
by
Rahmah Al-Qthanin and Zubairul Islam
Information 2026, 17(1), 13; https://doi.org/10.3390/info17010013 - 23 Dec 2025
Abstract
Wildfire occurrence in arid and semiarid landscapes is increasingly driven by shifts in climatic and biophysical conditions, yet its dynamics remain poorly understood in the mountainous environments of western Saudi Arabia. This study modeled wildfire probabilities across the Aseer, Al Baha, Makkah Al-Mukarramah, and Jazan regions via multisource Earth observation datasets from 2012–2025. Active fire detections from VIIRS were integrated with ERA5-Land reanalysis variables, vegetation indices, and Copernicus DEM GLO30 topography. A random forest classifier was trained and validated via stratified sampling and cross-validation to predict monthly burn probabilities. Calibration, reliability assessment, and independent temporal validation confirmed strong model performance (AUC-ROC = 0.96; Brier = 0.03). Climatic dryness (dew-point deficit), vegetation structure (LAI_lv), and surface soil moisture emerged as dominant predictors, underscoring the coupling between energy balance and fuel desiccation. Temporal trend analyses (Kendall’s τ and Sen’s slope) revealed the gradual intensification of fire probability during the dry-to-transition seasons (February–April and September–November), with Aseer showing the most persistent risk. These findings establish a scalable framework for wildfire early warning and landscape management in arid ecosystems under accelerating climatic stress.
(This article belongs to the Special Issue Predictive Analytics and Data Science, 3rd Edition)
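A minimal sketch of the modeling setup described above (random forest classifier, stratified cross-validation, AUC-ROC and Brier scoring) is given below; the synthetic predictors and hyperparameters are placeholders, not the study's variables or tuning.

```python
# Minimal sketch: random forest burn-probability model evaluated with
# stratified cross-validation, AUC-ROC, and the Brier score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 6))   # stand-ins for dew-point deficit, LAI, soil moisture, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]

print("AUC-ROC:", round(roc_auc_score(y, proba), 3))
print("Brier  :", round(brier_score_loss(y, proba), 3))
```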
Open Access Article
Addressing the Dark Side of Differentiation: Bias and Micro-Streaming in Artificial Intelligence Facilitated Lesson Planning
by
Jason Zagami
Information 2026, 17(1), 12; https://doi.org/10.3390/info17010012 - 23 Dec 2025
Abstract
As artificial intelligence (AI) becomes increasingly woven into educational design and decision-making, its use within initial teacher education (ITE) exposes deep tensions between efficiency, equity, and professional agency. A critical action research study conducted across three iterations of a third-year ITE course investigated how pre-service teachers engaged with AI-supported lesson planning tools while learning to design for inclusion. Analysis of 123 lesson plans, reflective journals, and survey data revealed a striking pattern. Despite instruction in inclusive pedagogy, most participants reproduced fixed-tiered differentiation and deficit-based assumptions about learners’ abilities, a process conceptualised as micro-streaming. AI-generated recommendations often shaped these outcomes, subtly reinforcing hierarchies of capability under the guise of personalisation. Yet, through iterative reflection, dialogue, and critical framing, participants began to recognise and resist these influences, reframing differentiation as design for diversity rather than classification. The findings highlight the paradoxical role of AI in teacher education, as both an amplifier of inequity and a catalyst for critical consciousness, and argue for the urgent integration of critical digital pedagogy within ITE programmes. AI can advance inclusive teaching only when educators are empowered to interrogate its epistemologies, question its biases, and reclaim professional judgement as the foundation of ethical pedagogy.
(This article belongs to the Special Issue AI Technology-Enhanced Learning and Teaching)
Open Access Systematic Review
Computer Vision for Fashion: A Systematic Review of Design Generation, Simulation, and Personalized Recommendations
by
Ilham Kachbal and Said El Abdellaoui
Information 2026, 17(1), 11; https://doi.org/10.3390/info17010011 - 23 Dec 2025
Abstract
The convergence of fashion and technology has created new opportunities for creativity, convenience, and sustainability through the integration of computer vision and artificial intelligence. This systematic review, following PRISMA guidelines, examines 200 studies published between 2017 and 2025 to analyze computational techniques for garment design, accessories, cosmetics, and outfit coordination across three key areas: generative design approaches, virtual simulation methods, and personalized recommendation systems. We comprehensively evaluate deep learning architectures, datasets, and performance metrics employed for fashion item synthesis, virtual try-on, cloth simulation, and outfit recommendation. Key findings reveal significant advances in Generative adversarial network (GAN)-based and diffusion-based fashion generation, physics-based simulations achieving real-time performance on mobile and virtual reality (VR) devices, and context-aware recommendation systems integrating multimodal data sources. However, persistent challenges remain, including data scarcity, computational constraints, privacy concerns, and algorithmic bias. We propose actionable directions for responsible AI development in fashion and textile applications, emphasizing the need for inclusive datasets, transparent algorithms, and sustainable computational practices. This review provides researchers and industry practitioners with a comprehensive synthesis of current capabilities, limitations, and future opportunities at the intersection of computer vision and fashion design.
Open Access Article
Critique of Networked Election Systems: A Comprehensive Analysis of Vulnerabilities and Security Measures
by
Jason M. Green, Abdolhossein Sarrafzadeh and Mohd Anwar
Information 2026, 17(1), 10; https://doi.org/10.3390/info17010010 - 22 Dec 2025
Abstract
The security and integrity of election systems represent fundamental pillars of democratic governance in the 21st century. As electoral processes increasingly rely on networked technologies and digital infrastructures, the vulnerability of these systems to cyber threats has become a paramount concern for election officials, cybersecurity experts, and policymakers worldwide. This paper presents the first comprehensive synthesis and systematic analysis of vulnerabilities across major U.S. election systems, integrating findings from government assessments, security research, and documented incidents into a unified analytical framework. We compile and categorize previously fragmented vulnerability data from multiple vendors, federal advisories (CISA, EAC), and security assessments to construct a holistic view of the election security landscape. Our novel contribution includes (1) the first cross-vendor vulnerability taxonomy for election systems, (2) a quantitative risk assessment framework specifically designed for election infrastructure, (3) systematic mapping of threat actor capabilities against election system components, and (4) the first proposal for honeynet deployment in election security contexts. Through analysis of over 200 authoritative sources, we identify critical security gaps in federal guidelines, quantify risks in networked election components, and reveal systemic vulnerabilities that only emerge through comprehensive cross-system analysis. Our findings demonstrate that interconnected vulnerabilities create risk-amplification factors of 2-5x compared to isolated component analysis, highlighting the urgent need for comprehensive federal cybersecurity standards, improved network segmentation, and enhanced monitoring capabilities to protect democratic processes.
Open Access Article
Evaluating Model Resilience to Data Poisoning Attacks: A Comparative Study
by
Ifiok Udoidiok, Fuhao Li and Jielun Zhang
Information 2026, 17(1), 9; https://doi.org/10.3390/info17010009 - 22 Dec 2025
Abstract
Machine learning (ML) has become a cornerstone of critical applications, but its vulnerability to data poisoning attacks threatens system reliability and trustworthiness. Prior studies have begun to investigate the impact of data poisoning and proposed various defense or evaluation methods; however, most efforts remain limited to quantifying performance degradation, with little systematic comparison of internal behaviors across model architectures under attack and insufficient attention to interpretability for revealing model vulnerabilities. To tackle this issue, we build a reproducible evaluation pipeline and emphasize the importance of integrating robustness with interpretability in the design of secure and trustworthy ML systems. To be specific, we propose a unified poisoning evaluation framework that systematically compares traditional ML models, deep neural networks, and large language models under three representative attack strategies including label flipping, random corruption, and adversarial insertion, at escalating severity levels of 30%, 50%, and 75%, and integrate LIME-based explanations to trace the evolution of model reasoning. Experimental results demonstrate that traditional models collapse rapidly under label noise, whereas Bayesian LSTM hybrids and large language models maintain stronger resilience. Further interpretability analysis uncovers attribution failure patterns, such as over-reliance on neutral tokens or misinterpretation of adversarial cues, providing insights beyond accuracy metrics.
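To make the label-flipping attack concrete, the sketch below poisons a chosen fraction of training labels at the severity levels mentioned above and measures the effect on clean test accuracy. The toy dataset and the logistic-regression victim model are placeholders, not the paper's evaluation pipeline.

```python
# Minimal sketch of label-flipping data poisoning at escalating severity
# levels (fraction of training labels reassigned to a different class).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def flip_labels(y, severity, n_classes, rng):
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(severity * len(y)), replace=False)
    # Shift each selected label to a different class.
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, n_classes, size=len(idx))) % n_classes
    return y_poisoned

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for severity in (0.0, 0.3, 0.5, 0.75):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, flip_labels(y_tr, severity, 3, rng))
    print(f"severity {severity:.0%}: clean-test accuracy = {clf.score(X_te, y_te):.3f}")
```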
Open Access Article
Information-Driven Team Collaboration in RoboCup Rescue
by
Abhijot Bedi, Shelley Zhang and Eugene Chabot
Information 2026, 17(1), 8; https://doi.org/10.3390/info17010008 - 22 Dec 2025
Abstract
Efficient collaboration in multi-robot systems (MRSs) is essential for handling complex tasks in dynamic environments under physical constraints. This study employs the RoboCup Rescue Simulation (RCRS) platform, which supports programmable rescue agents in disaster response scenarios, to investigate collaborative strategies for MRS. The proposed approach integrates a task modeling framework into RCRS to enable systematic task decomposition and coordinated request handling among platoon agents. A dedicated communication protocol further allows agents to share and exploit information dynamically in changing conditions. Experiments demonstrate simulation performance improvements ranging from 12% to 48% over default agents across complex map configurations. Results highlight the effectiveness of structured multi-agent system (MAS) collaboration mechanisms when adapted to practical physical constraints, indicating strong potential for enhancing cooperative performance in real-world multi-robot applications.
Open Access Article
Bimodal Gender Classification Across Community Question-Answering Platforms
by
Alejandro Figueroa and Esteban Martínez
Information 2026, 17(1), 7; https://doi.org/10.3390/info17010007 - 22 Dec 2025
Abstract
Community Question-Answering (cQA) sites have an urgent need to be increasingly efficient at (a) offering contextualized/personalized content and (b) linking open questions to people willing to answer. Most recent ideas with respect to attaining this goal combine demographic factors (i.e., gender) with deep neural networks. In essence, recent studies have shown that high gender classification rates are perfectly viable by independently modeling profile images or textual interactions. This paper advances this body of knowledge by leveraging bimodal transformers that fuse gender signals from text and images. Qualitative results suggest that (a) profile avatars reinforce one of the genders manifested across textual inputs, (b) their positive contribution grows in tandem with the number of community fellows that provide this picture, and (c) their use might be detrimental if the goal is distinguishing throwaway/fake profiles. From a quantitative standpoint, ViLT proved to be a better alternative when coping with sparse datasets such as Stack Exchange, whereas CLIP and FLAVA excel with large-scale collections, namely Yahoo! Answers and Reddit.
(This article belongs to the Section Information Systems)
Open Access Article
BWO-Optimized CNN-BiGRU-Attention Model for Short-Term Load Forecasting
by
Ruihan Wu and Xin Wen
Information 2026, 17(1), 6; https://doi.org/10.3390/info17010006 - 22 Dec 2025
Abstract
Short-term load forecasting is essential for optimizing power system operations and supporting renewable energy integration. However, accurately capturing the complex nonlinear features in load data remains challenging. To improve forecasting accuracy, this paper proposes a hybrid CNN-BiGRU-Attention model optimized by the Beluga Whale Optimization (BWO) algorithm. The proposed method integrates deep learning with metaheuristic optimization in four steps: First, a Convolutional Neural Network (CNN) is used to extract spatial features from input data, including historical load and weather variables. Second, a Bidirectional Gated Recurrent Unit (BiGRU) network is employed to learn temporal dependencies from both forward and backward directions. Third, an Attention mechanism is introduced to focus on key features and reduce the influence of redundant information. Finally, the BWO algorithm is applied to automatically optimize the model’s hyperparameters, avoiding the problem of falling into local optima. Comparative experiments against five baseline models (BP, GRU, BiGRU, BiGRU-Attention, and CNN-BiGRU-Attention) demonstrate the effectiveness of the proposed model. The experimental results indicate that the optimized model achieves superior predictive performance with significantly reduced error rates in terms of Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE), along with a higher Coefficient of Determination (R²) compared to the benchmarks, confirming its high accuracy and reliability for power load forecasting.
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)
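A minimal Keras sketch of a CNN-BiGRU-Attention backbone of the kind described above is given below; the window length, layer widths, and the simple additive attention are illustrative assumptions, and the BWO hyperparameter search itself is not reproduced.

```python
# Illustrative CNN-BiGRU-Attention forecasting backbone (placeholder sizes).
# Input: a window of past load/weather features; output: the next load value.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(window=24, n_features=5):
    inp = tf.keras.Input(shape=(window, n_features))
    x = layers.Conv1D(32, kernel_size=3, padding="same", activation="relu")(inp)  # local feature extraction
    x = layers.Bidirectional(layers.GRU(32, return_sequences=True))(x)            # forward/backward temporal context
    # Simple additive attention: score each time step, softmax, weighted sum.
    scores = layers.Dense(1, activation="tanh")(x)
    weights = layers.Softmax(axis=1)(scores)
    context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([x, weights])
    out = layers.Dense(1)(context)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse", metrics=["mape"])
    return model

build_model().summary()
```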
Open Access Article
Transforming Credit Risk Analysis: A Time-Series-Driven ResE-BiLSTM Framework for Post-Loan Default Detection
by
Yue Yang, Yuxiang Lin, Ying Zhang, Zihan Su, Chang Chuan Goh, Tangtangfang Fang, Anthony Bellotti and Boon Giin Lee
Information 2026, 17(1), 5; https://doi.org/10.3390/info17010005 - 21 Dec 2025
Abstract
Credit risk refers to the possibility that a borrower fails to meet contractual repayment obligations, posing potential losses to lenders. This study aims to enhance post-loan default prediction in credit risk management by constructing a time-series modeling framework based on repayment behavior data, enabling the capture of repayment risks that emerge after loan issuance. To achieve this objective, a Residual Enhanced Encoder Bidirectional Long Short-Term Memory (ResE-BiLSTM) model is proposed, in which the attention mechanism is responsible for discovering long-range correlations, while the residual connections ensure the preservation of distant information. This design mitigates the tendency of conventional recurrent architectures to overemphasize recent inputs while underrepresenting distant temporal information in long-term dependency modeling. Using the real-world large-scale Freddie Mac Single-Family Loan-Level Dataset, the model is evaluated on 44 independent cohorts and compared with five baseline models, including Long Short-Term Memory (LSTM), Bidirectional LSTM (BiLSTM), Gated Recurrent Unit (GRU), Convolutional Neural Network (CNN), and Recurrent Neural Network (RNN) across multiple evaluation metrics. The experimental results demonstrate that ResE-BiLSTM achieves superior performance on key indicators such as F1 and AUC, with average values of 0.92 and 0.97, respectively, and demonstrates robust performance across different feature window lengths and resampling settings. Ablation experiments and SHapley Additive exPlanations (SHAP)-based interpretability analyses further reveal that the model captures non-monotonic temporal importance patterns across key financial features. This study advances time-series–based anomaly detection for credit risk prediction by integrating global and local temporal learning. The findings offer practical value for financial institutions and risk management practitioners, while also providing methodological insights and a transferable modeling paradigm for future research on credit risk assessment.
(This article belongs to the Special Issue AI and Machine Learning in the Big Data Era: Advanced Algorithms and Real-World Applications)
Open Access Article
A Pattern-Oriented Ontology and Workflow Modeling Approach for the Sui Move Programming Language
by
Antonios Giatzis and Christos K. Georgiadis
Information 2026, 17(1), 4; https://doi.org/10.3390/info17010004 - 19 Dec 2025
Abstract
Smart contracts are vulnerable to critical, design-level Business Logic Flaws (BLFs) that conventional analysis tools often fail to detect. To address this semantic gap, this study introduces a novel ontological framework that formally models the link between high-level architectural intent and low-level Sui Move code. The methodology employs a rigorous Linked Open Terms (LOT) approach to construct a comprehensive ontology, integrated with a library of secure design patterns and process-aware Object-Centric Dynamic Condition Response (OC-DCR) graphs. Qualitative validation was conducted on four canonical security patterns (Access Control, Circuit Breaker, Time Incentivization, Escapability) drawn from the official Sui Framework, confirming the framework’s representational adequacy and logical consistency. Ultimately, this work contributes the first machine-readable semantic layer for Sui Move, decoupling reasoning from raw code availability, and providing the essential semantic foundation for the future development of pattern-aware auditing tools.
(This article belongs to the Special Issue Recent Advances in Smart Contract and Blockchain Analysis)
Open Access Article
Interest as the Engine: Leveraging Diverse Hybrid Propagation for Influence Maximization in Interest-Based Social Networks
by
Jian Li, Wei Liu, Wenxin Jiang, Jinhao Yang and Ling Chen
Information 2026, 17(1), 3; https://doi.org/10.3390/info17010003 - 19 Dec 2025
Abstract
Influence maximization (IM) is a crucial research domain in social network analysis, playing a vital role in optimizing information dissemination and managing online public opinion. Traditional IM models focus on network topology, often overlooking user heterogeneity and server-driven propagation dynamics, which leads to limited model adaptability. To overcome these shortcomings, this study proposes the “Social–Interest Hybrid Influence Maximization” (SIHIM) problem, which explicitly models the joint influence of social topology and user interest in server-mediated propagation, aiming to enhance the effectiveness of information propagation by integrating users’ social relationships and interest preferences. To model this problem, we develop a Server-Based Independent Cascading (SB-IC) model that captures the dynamics of influence propagation. Based on this model, we further propose a novel hybrid centrality algorithm named Pascal Centrality (PaC), which integrates both topological and interest-based attributes to efficiently identify key seed nodes while minimizing influence overlap. Experimental evaluations on ten real-world social network datasets demonstrate that PaC improves influence spread by 5.22% under the standard IC model and by 7.04% under the SB-IC model, outperforming nine state-of-the-art algorithms. These findings underscore the effectiveness and adaptability of the proposed algorithm in complex scenarios.
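The SB-IC model extends the standard independent cascade (IC) process; the sketch below simulates plain IC spread from a degree-based seed set as a baseline illustration only. The graph, activation probability, and seed-selection rule are placeholders, and the server-based extension and Pascal Centrality are not reproduced.

```python
# Minimal sketch of the standard independent cascade process: each newly
# activated node gets one chance to activate each inactive neighbor with
# probability p.
import random
import networkx as nx

def independent_cascade(graph, seeds, rng, p=0.05):
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and rng.random() < p:
                    active.add(v)
                    new.append(v)
        frontier = new
    return active

G = nx.barabasi_albert_graph(1000, 3, seed=1)
seeds = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]]
spread = [len(independent_cascade(G, seeds, random.Random(i))) for i in range(100)]
print("expected spread over 100 runs:", sum(spread) / len(spread))
```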
Open Access Article
Super Encryption Standard (SES): A Key-Dependent Block Cipher for Image Encryption
by
Mohammed Abbas Fadhil Al-Husainy, Bassam Al-Shargabi and Omar Sabri
Information 2026, 17(1), 2; https://doi.org/10.3390/info17010002 - 19 Dec 2025
Abstract
Data encryption is a core mechanism in modern security services for protecting confidential data at rest and in transit. This work introduces the Super Encryption Standard (SES), a symmetric block cipher that follows the overall workflow of the Advanced Encryption Standard (AES) but adopts a key-dependent design to enlarge the effective key space and improve execution efficiency. The SES accepts a user-supplied key file and a selectable block dimension, from which it derives per-block round material and a dynamic substitution box generated using SHA-512. Each round relies only on XOR and a conditional half-byte swap driven by key-derived row and column vectors, enabling lightweight diffusion and confusion with low implementation cost. Experimental evaluation using multiple color images of different sizes shows that the proposed SES algorithm achieves faster encryption than the AES baseline and produces a ciphertext that behaves statistically like random noise. The encrypted images exhibit very low correlation between adjacent pixels, strong sensitivity to even minor changes in the plaintext and in the key, and resistance to standard statistical and differential attacks. Analysis of the SES substitution box also indicates favorable differential and linear properties that are comparable to those of the AES. The SES further supports a very wide key range, scaling well beyond typical fixed-length keys, which substantially increases brute-force difficulty. Therefore, the SES is a promising cipher for image encryption and related data-protection applications.
(This article belongs to the Special Issue Internet of Things and Cloud-Fog-Edge Computing, 2nd Edition)
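As an illustration of how a key-dependent substitution box can be derived from SHA-512 output, the sketch below shuffles the byte values 0-255 with a key-expanded keystream. This is a generic construction for illustration only, not the SES algorithm's exact key schedule or S-box generation.

```python
# Illustrative sketch (not the authors' construction): a key-dependent byte
# S-box built by shuffling 0..255 with a SHA-512-expanded keystream.
import hashlib

def key_dependent_sbox(key_material: bytes) -> list:
    # Expand the key into a keystream of SHA-512 blocks.
    stream = b"".join(hashlib.sha512(key_material + bytes([i])).digest() for i in range(16))
    sbox = list(range(256))
    # Fisher-Yates shuffle driven by the key-derived keystream.
    for i in range(255, 0, -1):
        j = stream[255 - i] % (i + 1)
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = key_dependent_sbox(b"contents of the user-supplied key file")
inverse = [0] * 256
for i, v in enumerate(sbox):
    inverse[v] = i          # the permutation is invertible, as decryption requires
print(sbox[:8], "...")
```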
Open Access Article
Integrating Model-Driven Engineering and Large Language Models for Test Scenario Generation for Smart Contracts
by
Issam Al-Azzoni, Saqib Iqbal, Taymour Al Ashkar and Zobia Erum
Information 2026, 17(1), 1; https://doi.org/10.3390/info17010001 - 19 Dec 2025
Abstract
Large Language Models (LLMs) have demonstrated significant potential in transforming software testing by automating tasks such as test case generation. In this work, we explore the integration of LLMs within a Model-Driven Engineering (MDE) approach to enhance the automation of test case generation for smart contracts. Our focus lies in the use of Role-Based Access Control (RBAC) models as formal specifications that guide the generation of test scenarios. By leveraging LLMs’ ability to interpret both natural language and model artifacts, we enable the derivation of model-based test cases that align with specified access control policies. These test cases are subsequently translated into executable code in Digital Asset Modeling Language (DAML) targeting blockchain-based smart contract platforms. Building on prior research that established a complete MDE pipeline for DAML smart contract development, we extend the framework with LLM-supported test automation capabilities and implement the necessary tooling to support this integration. Our evaluation demonstrates the feasibility of using LLMs in this context, highlighting their potential to improve testing coverage, reduce manual effort, and ensure conformance with access control specifications in smart contract systems.
(This article belongs to the Special Issue Using Generative Artificial Intelligence Within Software Engineering)
Open Access Article
GenIIoT: Generative Models Aided Proactive Fault Management in Industrial Internet of Things
by
Isra Zafat, Arshad Iqbal, Maqbool Khan, Naveed Ahmad and Mohammed Ali Alshara
Information 2025, 16(12), 1114; https://doi.org/10.3390/info16121114 - 18 Dec 2025
Abstract
Detecting active failures is important for the Industrial Internet of Things (IIoT). The IIoT aims to connect devices and machinery across industries. The devices connect via the Internet and provide large amounts of data which, when processed, can generate information and even make automated decisions on the administration of industries. However, traditional active fault management techniques face significant challenges, including highly imbalanced datasets, a limited availability of failure data, and poor generalization to real-world conditions. These issues hinder the effectiveness of prompt and accurate fault detection in real IIoT environments. To overcome these challenges, this work proposes a data augmentation mechanism which integrates generative adversarial networks (GANs) and the synthetic minority oversampling technique (SMOTE). The integrated GAN-SMOTE method increases minority class data by generating failure patterns that closely resemble industrial conditions, increasing model robustness and mitigating data imbalances. Consequently, the dataset is well balanced and suitable for the robust training and validation of learning models. Then, the data are used to train and evaluate a variety of models, including deep learning architectures, such as convolutional neural networks (CNNs) and long short-term memory networks (LSTMs), and conventional machine learning models, such as support vector machines (SVMs), K-nearest neighbors (KNN), and decision trees. The proposed mechanism provides an end-to-end framework that is validated on both generated and real-world industrial datasets. In particular, the evaluation is performed using the AI4I, Secom and APS datasets, which enable comprehensive testing in different fault scenarios. The proposed scheme improves the usability of the model and supports its deployment in a real IIoT environment. The improved detection performance of the integrated GAN-SMOTE framework effectively addresses fault classification challenges. This newly proposed mechanism enhances the classification accuracy up to 0.99. The proposed GAN-SMOTE framework effectively overcomes the major limitations of traditional fault detection approaches and proposes a robust, scalable and practical solution for intelligent maintenance systems in the IIoT environment.
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: “Information Processes”, 2nd Edition)
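The SMOTE half of the GAN-SMOTE augmentation can be illustrated with the imbalanced-learn library as below; the toy dataset and class imbalance ratio are placeholders, and the GAN component of the proposed framework is not shown.

```python
# Minimal sketch of SMOTE oversampling: synthetic minority (failure) samples
# are interpolated between real minority neighbors to balance the training set.
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Highly imbalanced toy dataset standing in for IIoT sensor features.
X, y = make_classification(n_samples=5000, weights=[0.97, 0.03],
                           n_features=10, random_state=0)
print("before:", Counter(y))

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print("after :", Counter(y_bal))
```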
Journal Menu
- Information Home
- Aims & Scope
- Editorial Board
- Reviewer Board
- Topical Advisory Panel
- Instructions for Authors
- Special Issues
- Topics
- Sections & Collections
- Article Processing Charge
- Indexing & Archiving
- Editor’s Choice Articles
- Most Cited & Viewed
- Journal Statistics
- Journal History
- Journal Awards
- Society Collaborations
- Conferences
- Editorial Office
Journal Browser
Highly Accessed Articles
Latest Books
E-Mail Alert
News
Topics
Topic in Education Sciences, Future Internet, Information, Sustainability
Advances in Online and Distance Learning
Topic Editors: Neil Gordon, Han Reichgelt
Deadline: 31 December 2025
Topic in AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Topic in Applied Sciences, Information, Systems, Technologies, Electronics, AI
Challenges and Opportunities of Integrating Service Science with Data Science and Artificial Intelligence
Topic Editors: Dickson K. W. Chiu, Stuart So
Deadline: 30 April 2026
Conferences
Special Issues
Special Issue in Information
Sensing and Wireless Communications
Guest Editor: Tien M. Nguyen
Deadline: 28 December 2025
Special Issue in Information
Interactive Learning: Human in the Loop System Design for Active Human–Computer Interactions
Guest Editors: Fred Petry, Chris J. Michael, Derek T. Anderson
Deadline: 31 December 2025
Special Issue in Information
Semantic Web and Language Models
Guest Editor: Nikolaos Papadakis
Deadline: 31 December 2025
Special Issue in Information
Decision Models for Economics and Business Management
Guest Editors: Dora Almeida, Andreia Dionísio, Paulo Ferreira, Dimitris Apostolou
Deadline: 31 December 2025
Topical Collections
Topical Collection in Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero