Information, Volume 16, Issue 5 (May 2025) – 89 articles

Cover Story: The primary contribution of this work is the development of schedules for electric buses that are resilient to extreme conditions while accounting for multiple depots, charging stations, and stochastic travel times. The proposed model is a mixed-integer linear program (MILP) with chance constraints. The decision variables assign electric buses to trips and schedule charging events so that daily operations are maintained under uncertainty. Numerical experiments using GTFS data from Krakow (ZTP) and a sensitivity analysis of travel time variations show that a schedule built without chance constraints incurs operational costs 49% to 75% higher once stochastic travel times are taken into account. This underlines the effectiveness of the proposed model and the broader need to incorporate uncertainty when scheduling electric bus operations.
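The chance-constrained idea in the cover story can be illustrated with a small sketch. Assuming, for illustration only (the paper's MILP is far richer), that a trip's travel time is normally distributed, the chance constraint P(travel time <= t) >= 0.95 reduces to a deterministic buffer on the mean travel time:

```python
from statistics import NormalDist

def buffered_travel_time(mean_min, std_min, reliability=0.95):
    """Deterministic equivalent of the chance constraint
    P(travel_time <= t) >= reliability, under an illustrative
    assumption of normally distributed travel times (minutes)."""
    z = NormalDist().inv_cdf(reliability)  # quantile, ~1.645 at 95%
    return mean_min + z * std_min

# A bus can only be assigned its next trip after the buffered time elapses.
slack = buffered_travel_time(42.0, 6.0, 0.95)
```

Raising the reliability level widens the buffer, which is exactly the cost/robustness trade-off the sensitivity analysis in the paper explores.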
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper as PDF, click its "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 1649 KiB  
Article
HS-SocialRec: A Study on Boosting Social Recommendations with Hard Negative Sampling in LightGCN
by Ziping Sheng and Lai Wei
Information 2025, 16(5), 422; https://doi.org/10.3390/info16050422 - 21 May 2025
Viewed by 204
Abstract
Most current graph neural network (GNN)-based social recommendation systems mainly extract negative samples from explicit feedback, and are unable to accurately learn the boundaries of similar positive and negative samples, which leads to misjudgment of user preferences. For this reason, we propose to introduce the hop-mixing technique to synthesize hard negative samples for users to fully explore their preferences. Firstly, positive sample information is injected into the original negative samples in each layer to generate augmented negative samples that are very similar to the positive samples. Then the super-enhanced negative samples with the highest inner product score with the positive samples are identified from each layer, and finally, the super-enhanced negative samples from each layer are aggregated and pooled to obtain the final hard negative samples. Subsequently, a graph fusion mechanism is used to aggregate user representations from the social graph and the user–item bipartite graph. Comparative experiments on two real datasets and ten baseline models are conducted, and the results show that the proposed method has certain performance advantages over other state-of-the-art recommendation models. Full article
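The hop-mixing step described above can be sketched in a few lines of NumPy. The mixing weight `alpha`, the layer count, and mean pooling are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def hop_mixing_hard_negatives(pos_layers, neg_layers, alpha=0.3):
    """Per layer: inject positive information into sampled negatives,
    keep the candidate with the highest inner product against the
    positive, then pool across layers (alpha is an assumed weight)."""
    hard_per_layer = []
    for pos, negs in zip(pos_layers, neg_layers):
        mixed = alpha * pos + (1.0 - alpha) * negs       # augmented negatives
        scores = mixed @ pos                             # similarity to positive
        hard_per_layer.append(mixed[np.argmax(scores)])  # hardest candidate
    return np.mean(hard_per_layer, axis=0)               # aggregate layers

d, n_cand, n_layers = 8, 5, 3
pos_layers = [rng.normal(size=d) for _ in range(n_layers)]
neg_layers = [rng.normal(size=(n_cand, d)) for _ in range(n_layers)]
hard_neg = hop_mixing_hard_negatives(pos_layers, neg_layers)
```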

22 pages, 2695 KiB  
Article
Comparing Classification Algorithms to Recognize Selected Gestures Based on Microsoft Azure Kinect Joint Data
by Marc Funken and Thomas Hanne
Information 2025, 16(5), 421; https://doi.org/10.3390/info16050421 - 21 May 2025
Viewed by 160
Abstract
This study aims to explore the potential of exergaming (which can be used along with prescribed medication for children with spinal muscular atrophy) and examine its effects on monitoring and diagnosis. The present study focuses on comparing models trained on joint data for gesture detection, which has not been extensively explored in previous studies. The study investigates three approaches to detect gestures based on 3D Microsoft Azure Kinect joint data. We discuss simple decision rules based on angles and distances to label gestures. In addition, we explore supervised learning methods to increase the accuracy of gesture recognition in gamification. The compared models performed well on the recorded sample data, with the recurrent neural networks outperforming feedforward neural networks and decision trees on the captured motions. The findings suggest that gesture recognition based on joint data can be a valuable tool for monitoring and diagnosing children with spinal muscular atrophy. This study contributes to the growing body of research on the potential of virtual solutions in rehabilitation. The results also highlight the importance of using joint data for gesture recognition and provide insights into the most effective models for this task. The findings of this study can inform the development of more accurate and effective monitoring and diagnostic tools for children with spinal muscular atrophy. Full article

26 pages, 2252 KiB  
Article
PROMPTHEUS: A Human-Centered Pipeline to Streamline Systematic Literature Reviews with Large Language Models
by Joao Torres, Catherine Mulligan, Joaquim Jorge and Catarina Moreira
Information 2025, 16(5), 420; https://doi.org/10.3390/info16050420 - 21 May 2025
Viewed by 284
Abstract
The growing volume of academic publications poses significant challenges for researchers conducting timely and accurate systematic literature reviews (SLRs), particularly in fast-evolving fields like artificial intelligence. This growth of academic literature also makes it increasingly difficult for lay people to access scientific knowledge effectively, meaning academic literature is often misrepresented in the popular press and, more broadly, in society. Traditional SLRs are labor-intensive and error-prone, and they struggle to keep up with the rapid pace of new research. To address these issues, we developed PROMPTHEUS: an AI-driven pipeline solution that automates the SLR process using large language models (LLMs). We aimed to enhance efficiency by reducing the manual workload while maintaining the precision and coherence required for comprehensive literature synthesis. PROMPTHEUS automates key SLR stages, including systematic searches, data extraction, topic modeling with BERTopic, and summarization with transformer models. Evaluations across five research domains demonstrated that PROMPTHEUS reduces review time, achieves high precision, and provides coherent topic organization, offering a scalable and effective solution for conducting literature reviews in an increasingly crowded research landscape. Full article
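The staged structure of such a pipeline (search, extraction, topic modeling, summarization) can be sketched as a minimal, library-free skeleton. The stage functions below are stand-in stubs; the real system would plug BERTopic and transformer summarizers into the corresponding steps:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class SLRPipeline:
    """Minimal staged pipeline in the spirit of PROMPTHEUS; each stage
    transforms the artifact produced by the previous one."""
    stages: List[Callable] = field(default_factory=list)

    def add_stage(self, fn):
        self.stages.append(fn)
        return self

    def run(self, query):
        artifact = query
        for fn in self.stages:
            artifact = fn(artifact)
        return artifact

# Stubs standing in for search, extraction, and summarization.
search = lambda q: [f"paper on {q} #{i}" for i in range(3)]
extract = lambda papers: [p.upper() for p in papers]
summarize = lambda notes: "; ".join(notes)

pipeline = SLRPipeline().add_stage(search).add_stage(extract).add_stage(summarize)
report = pipeline.run("learning analytics")
```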

22 pages, 609 KiB  
Systematic Review
Leveraging Learning Analytics to Improve the User Experience of Learning Management Systems in Higher Education Institutions
by Patrick Ngulube and Mthokozisi Masumbika Ncube
Information 2025, 16(5), 419; https://doi.org/10.3390/info16050419 - 20 May 2025
Viewed by 241
Abstract
This systematic review examines the application of learning analytics to enhance user experience within Learning Management Systems in higher education institutions. Addressing a salient knowledge gap regarding the optimal integration of learning analytics for diverse learner populations, this study identifies analytical approaches and delineates implementation challenges that contribute to data misinterpretation and underutilisation. Consequently, the absence of a systematic evaluation of analytical methodologies impedes the capacity of higher education institutes to tailor learning processes to individual student needs. Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, a search was conducted across five academic databases. Studies employing learning analytics within Learning Management Systems environments to improve user experience in higher education institutions were included, while purely theoretical or non-higher education institution studies were excluded, resulting in a final corpus of 41 studies. Methodological rigour was assessed using the Critical Appraisal Skills Programme Checklist. This study revealed diverse learning analytics methodologies and a dual research focus on specific platforms and broader impacts on Learning Management Systems. However, ethical, implementation, generalisability, interpretation, personalisation, and system quality challenges impede effective learning analytics integration for user experience improvement, demanding rigorous and contextually aware strategies. This study’s reliance on existing literature introduces potential selection and database biases. As such, future research should prioritise empirical validation and cross-institutional studies to address these limitations. Full article

19 pages, 641 KiB  
Article
Big Five Personality Trait Prediction Based on User Comments
by Kit-May Shum, Michal Ptaszynski and Fumito Masui
Information 2025, 16(5), 418; https://doi.org/10.3390/info16050418 - 20 May 2025
Viewed by 380
Abstract
The study of personalities is a major component of human psychology, and with an understanding of personality traits, practical applications can be used in various domains, such as mental health care, predicting job performance, and optimising marketing strategies. This study explores the prediction of Big Five personality trait scores from online comments using transformer-based language models, focusing on improving the model performance with a larger dataset and investigating the role of intercorrelations among traits. Using the PANDORA dataset from Reddit, the RoBERTa and BERT models, including both the base and large variants, were fine-tuned and evaluated to determine their effectiveness in personality trait prediction. Compared to previous work, our study utilises a significantly larger dataset to enhance the model’s generalisation and robustness. The results indicate that RoBERTa outperforms BERT across most metrics, with RoBERTa large achieving the best overall performance. In addition to evaluating the overall predictive accuracy, this study investigates the impact of intercorrelations among personality traits. A comparative analysis is conducted between a single-model approach, which predicts all five traits simultaneously, and a multiple-model approach, fine-tuning the models independently and each predicting a single trait. The findings reveal that the single-model approach achieves a lower RMSE and higher R2 values, highlighting the importance of incorporating trait intercorrelations in improving the prediction accuracy. Furthermore, RoBERTa large demonstrated a stronger ability to capture these intercorrelations compared to previous studies. These findings emphasise the potential of transformer-based models in personality computing and underscore the importance of leveraging both larger datasets and intercorrelations to enhance predictive performance. Full article
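The RMSE and R² metrics used to compare the single- and multiple-model setups are straightforward to compute over all five traits at once. The toy data below are synthetic, not PANDORA scores:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error over all traits and users."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination over all traits and users."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean(axis=0)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Synthetic scores for the five traits (O, C, E, A, N); rows are users.
rng = np.random.default_rng(1)
y_true = rng.uniform(0, 1, size=(100, 5))
y_pred = y_true + rng.normal(0, 0.05, size=(100, 5))  # near-perfect predictor

overall_rmse = rmse(y_true, y_pred)
overall_r2 = r2(y_true, y_pred)
```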

21 pages, 406 KiB  
Article
A Novel Method for Community Detection in Bipartite Networks
by Ali Khosrozadeh, Ali Movaghar, Mohammad Mehdi Gilanian Sadeghi and Hamidreza Mahyar
Information 2025, 16(5), 417; https://doi.org/10.3390/info16050417 - 20 May 2025
Viewed by 285
Abstract
The community structure is a major feature of bipartite networks, which serve as a typical model for empirical networks consisting of two kinds of nodes. In recent years, community detection has drawn a great deal of attention, and numerous methods have been put forth. Nevertheless, some of them require considerable time, which restricts their use in large networks. While several low-time-complexity algorithms exist, their practical value in real-world applications is limited since they are typically non-deterministic. Typically, in bipartite networks, a unipartite projection of one part of the network is created, and then communities are detected inside that projection using methods for unipartite networks. Such projections may yield incorrect or erroneous findings, as they inevitably involve a loss of information. In this paper, BiVoting, a two-mode and deterministic community detection method for bipartite networks, is proposed. The method builds on bipartite modularity, which quantifies the strength of a partition, and is inspired by how people vote in social elections. The proposed method's performance was evaluated, and a comparison with four common community detection methods for bipartite networks shows that, in terms of the modularity score achieved in large networks, BiVoting outperforms the best of them. Full article
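Bipartite modularity, the quantity such methods optimize, can be computed directly from the biadjacency matrix. The sketch below uses Barber's standard definition, not BiVoting's own optimization procedure:

```python
import numpy as np

def bipartite_modularity(B, g_rows, g_cols):
    """Barber's bipartite modularity for biadjacency matrix B:
    Q = (1/m) * sum_ij (B_ij - k_i * d_j / m) * [g_rows[i] == g_cols[j]],
    where m is the edge count and k, d are the two degree sequences."""
    m = B.sum()
    k = B.sum(axis=1)  # row-node degrees
    d = B.sum(axis=0)  # column-node degrees
    same = np.equal.outer(np.asarray(g_rows), np.asarray(g_cols))
    return float(((B - np.outer(k, d) / m) * same).sum() / m)

# Two clean 2x2 communities: rows {0,1} with cols {0,1}, rows {2,3} with cols {2,3}.
B = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
Q = bipartite_modularity(B, [0, 0, 1, 1], [0, 0, 1, 1])
```

For this perfectly separated two-block network Q = 0.5, while placing every node in a single community gives Q = 0, matching the intuition that modularity rewards denser-than-expected within-community connections.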

29 pages, 3690 KiB  
Article
Application of the Adaptive Mixed-Order Cubature Particle Filter Algorithm Based on Matrix Lie Group Representation for the Initial Alignment of SINS
by Ning Wang and Fanming Liu
Information 2025, 16(5), 416; https://doi.org/10.3390/info16050416 - 20 May 2025
Viewed by 246
Abstract
Under large azimuth misalignment conditions, the initial alignment of strapdown inertial navigation systems (SINS) is challenged by the nonlinear characteristics of the error model. Traditional particle filter (PF) algorithms suffer from the inappropriate selection of importance density functions and severe particle degeneration, which limit their applicability in high-precision navigation. To address these limitations, this paper proposes an adaptive mixed-order spherical simplex-radial cubature particle filter (MLG-AMSSRCPF) algorithm based on matrix Lie group representation. In this approach, attitude errors are represented on the matrix Lie group SO(3), while velocity errors and inertial sensor biases are retained in Euclidean space. Efficient bidirectional conversion between Euclidean and manifold spaces is achieved through exponential and logarithmic maps, enabling accurate attitude estimation without the need for Jacobian matrices. A hybrid-order cubature transformation is introduced to reduce model linearization errors, thereby enhancing the estimation accuracy. To improve the algorithm's adaptability in dynamic noise environments, an adaptive noise covariance update mechanism is integrated. Meanwhile, particle similarity is evaluated using the Euclidean distance, allowing the number of particles to be adjusted dynamically to balance filtering accuracy and computational load. Furthermore, a multivariate Huber loss function is employed to adaptively adjust particle weights, effectively suppressing the influence of outliers and significantly improving the robustness of the filter. Simulation and experimental results validate the superior performance of the proposed algorithm under moving-base alignment conditions. Compared with the conventional cubature particle filter (CPF), the heading accuracy of the MLG-AMSSRCPF algorithm was improved by 31.29% under measurement outlier interference and by 39.79% under system noise mutation scenarios. In comparison with the unscented Kalman filter (UKF), it yields improvements of 58.51% and 58.82%, respectively. These results demonstrate that the proposed method substantially enhances the filtering accuracy, robustness, and computational efficiency of SINS, confirming its practical value for initial alignment in high-noise, complex dynamic, and nonlinear navigation systems. Full article
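The exponential and logarithmic maps between rotation vectors and SO(3), central to the Lie-group representation above, can be written compactly with the Rodrigues formula. This is a generic implementation, not the paper's filter:

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues formula: rotation vector (axis * angle) -> SO(3) matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    a = phi / theta
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])  # skew-symmetric axis matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def so3_log(R):
    """Inverse map: SO(3) matrix -> rotation vector (angle < pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta / (2 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

phi = np.array([0.3, -0.2, 0.5])  # a large-ish attitude error, in radians
R = so3_exp(phi)
```

The round trip `so3_log(so3_exp(phi)) == phi` is what lets such filters move error states between the manifold and Euclidean space without Jacobians.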
(This article belongs to the Section Artificial Intelligence)

22 pages, 2114 KiB  
Review
Artificial Intelligence in SMEs: Enhancing Business Functions Through Technologies and Applications
by Thang Le Dinh, Manh-Chiên Vu and Giang T.C. Tran
Information 2025, 16(5), 415; https://doi.org/10.3390/info16050415 - 18 May 2025
Viewed by 1097
Abstract
Artificial intelligence (AI) has significant potential to transform small- and medium-sized enterprises (SMEs), yet its adoption is often hindered by challenges such as limited financial and human resources. This study addresses this issue by investigating the core AI technologies adopted by SMEs, their broad range of applications across business functions, and the strategies required for successful implementation. Through a systematic literature review of 50 studies published between 2016 and 2025, we identify prominent AI technologies, including machine learning, natural language processing, and generative AI, and their applications in enhancing efficiency, decision-making, and innovation across sales and marketing, operations and logistics, finance and other business functions. The findings emphasize the importance of workforce training, robust technological infrastructure, data-driven cultures, and strategic partnerships for SMEs. Furthermore, the review highlights methods for measuring and optimizing AI’s value, such as tracking key performance indicators and improving customer satisfaction. While acknowledging challenges like financial constraints and ethical considerations, this research provides practical guidance for SMEs to effectively leverage AI for sustainable growth and provides a foundation for future studies to explore customized AI strategies for diverse SME contexts. Full article

34 pages, 7452 KiB  
Systematic Review
Knowledge Management Strategies Supported by ICT for the Improvement of Teaching Practice: A Systematic Review
by Miguel-Angel Romero-Ochoa, Julio-Alejandro Romero-González, Alonso Perez-Soltero, Juan Terven, Teresa García-Ramírez, Diana-Margarita Córdova-Esparza and Francisco-Alan Espinoza-Zallas
Information 2025, 16(5), 414; https://doi.org/10.3390/info16050414 - 18 May 2025
Viewed by 312
Abstract
In the modern digital ecosystem, the effective management of knowledge and the integration of information and communication technologies are the keys to revolutionizing educational practices within higher education institutions. This study presents a systematic review of recent literature, examining how the incorporation of information and communication technologies facilitates the creation and transfer of knowledge, enables collaboration among educators, and supports continuous professional development. We explore the benefits of personalized learning and the application of technological tools to enhance collaboration, access to educational resources, and pedagogical reflection. The key findings emphasize the role of these tools in promoting teacher interaction and exchange of ideas, highlighting the critical importance of training in digital competency to maximize their impact. The study also identifies challenges, including the need to improve effective knowledge transfer and technological training. In conclusion, effective knowledge management, supported by information and communication technologies, fortifies digital competencies and cultivates a culture of collaboration and content creation in higher education institutions. Full article
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)

13 pages, 1968 KiB  
Article
Drunk Driver Detection Using Thermal Facial Images
by Chin-Heng Chai, Siti Fatimah Abdul Razak, Sumendra Yogarayan and Ramesh Shanmugam
Information 2025, 16(5), 413; https://doi.org/10.3390/info16050413 - 18 May 2025
Viewed by 284
Abstract
This study aims to investigate and propose a machine learning approach that can accurately detect alcohol consumption by analyzing the thermal patterns of facial features. Thermal images from the Tufts Face Database and self-collected images were utilized to train the models in identifying temperature variations in specific facial regions. Convolutional Neural Networks (CNNs) and YOLO (You Only Look Once) algorithms were employed to extract facial features, while classifiers such as Support Vector Machines (SVMs), Multi-Layer Perceptron (MLP), and K-Nearest Neighbors (KNN), as well as Random Forest and linear regression, classify individuals as sober or intoxicated based on their thermal images. The models’ effectiveness in analyzing thermal images to determine alcohol intoxication is expected to provide a foundation for the development of a realistic drunk driver detection system based on thermal images. In this study, MLP obtained 90% accuracy and outperformed the other models in classifying the thermal images, either as sober or showing signs of alcohol consumption. The trained models may be embedded in advanced drunk detection systems as part of an in-vehicle safety application. Full article
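The classification step can be illustrated with a minimal nearest-centroid baseline (in the same family as the KNN classifier mentioned above). The facial-temperature values below are made up for illustration; this is not the paper's 90%-accuracy MLP:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical features: [forehead_temp, nose_temp] in degrees Celsius,
# with alcohol assumed to raise facial skin temperature.
sober = rng.normal([34.0, 32.0], 0.3, size=(50, 2))
drunk = rng.normal([35.5, 34.0], 0.3, size=(50, 2))

centroids = {"sober": sober.mean(axis=0), "drunk": drunk.mean(axis=0)}

def classify(sample):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

label = classify(np.array([35.6, 34.1]))
```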

21 pages, 1780 KiB  
Article
Information Model for Pharmaceutical Smart Factory Equipment Design
by Roland Wölfle, Irina Saur-Amaral and Leonor Teixeira
Information 2025, 16(5), 412; https://doi.org/10.3390/info16050412 - 17 May 2025
Viewed by 199
Abstract
Pharmaceutical production typically focuses on individual drug types for each production line, which limits flexibility. However, the emergence of Industry 4.0 technologies presents new opportunities for more adaptable and customized manufacturing processes. Despite this promise, the development of innovative design techniques for pharmaceutical production equipment remains incomplete. Manufacturers encounter challenges due to rapid innovation cycles while adhering to stringent Good Manufacturing Practice (GMP) standards. Our research addresses this issue by introducing an information model that organizes the design, development, and testing of pharmaceutical manufacturing equipment. This model is based on an exploratory review of 176 articles concerning design principles in regulated industries and integrates concepts from Axiomatic Design, Quality by Design, Model-Based Systems Engineering, and the V-Model framework. Further refinement was achieved through insights from 10 industry experts. The resultant workflow-based information model can be implemented as software to enhance engineering and project management. This research offers a structured framework that enables pharmaceutical equipment manufacturers and users to collaboratively develop solutions in an iterative manner, effectively closing the gap between industry needs and systematic design methodologies. Full article

22 pages, 509 KiB  
Article
Aspect-Enhanced Prompting Method for Unsupervised Domain Adaptation in Aspect-Based Sentiment Analysis
by Binghan Lu, Kiyoaki Shirai and Natthawut Kertkeidkachorn
Information 2025, 16(5), 411; https://doi.org/10.3390/info16050411 - 16 May 2025
Viewed by 158
Abstract
This study proposes an Aspect-Enhanced Prompting (AEP) method for unsupervised Multi-Source Domain Adaptation in Aspect Sentiment Classification, where data from the target domain are completely unavailable for model training. The proposed AEP is based on two generative language models: one generates a prompt from a given review, while the other follows the prompt and classifies the sentiment of an aspect. The first model extracts Aspect-Related Features (ARFs), which are words closely related to the aspect, from the review and incorporates them into the prompt in a domain-agnostic manner, thereby directing the second model to identify the sentiment accurately. Our framework incorporates an innovative rescoring mechanism and a cluster-based prompt expansion strategy. Both are intended to enhance the robustness of the generation of the prompt and the adaptability of the model to diverse domains. The results of experiments conducted on five datasets (Restaurant, Laptop, Device, Service, and Location) demonstrate that our method outperforms the baselines, including a state-of-the-art unsupervised domain adaptation method. The effectiveness of both the rescoring mechanism and the cluster-based prompt expansion is also validated through an ablation study. Full article
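The prompt-construction and rescoring ideas can be sketched as follows. The prompt template, the ARF-coverage bonus, and its 0.5 weight are illustrative assumptions, not the paper's exact design:

```python
def build_aep_prompt(review, aspect, arfs):
    """Assemble a domain-agnostic prompt that surfaces Aspect-Related
    Features (ARFs) for the downstream sentiment classifier."""
    arf_clause = ", ".join(arfs) if arfs else "none found"
    return (f"Review: {review}\n"
            f"Aspect: {aspect}\n"
            f"Aspect-related words: {arf_clause}\n"
            f"Question: what is the sentiment toward '{aspect}'?")

def rescore(candidates):
    """Toy rescoring: combine the generator's score with an assumed
    ARF-coverage bonus, keeping the highest-scoring candidate prompt."""
    return max(candidates, key=lambda c: c["gen_score"] + 0.5 * c["coverage"])

prompt = build_aep_prompt("The battery lasts all day but the screen is dim.",
                          "battery", ["lasts", "all day"])
best = rescore([{"arfs": ["dim"], "gen_score": 0.6, "coverage": 0.1},
                {"arfs": ["lasts", "all day"], "gen_score": 0.5, "coverage": 0.9}])
```

Here the rescoring prefers the candidate whose extracted words actually relate to the target aspect, even though the generator scored it slightly lower.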

28 pages, 2499 KiB  
Article
Enhancing the Learning Experience with AI
by Adrian Runceanu, Adrian Balan, Laviniu Gavanescu, Marian-Madalin Neagu, Cosmin Cojocaru, Ilie Borcosi and Aniela Balacescu
Information 2025, 16(5), 410; https://doi.org/10.3390/info16050410 - 16 May 2025
Viewed by 299
Abstract
The exceptional progress in artificial intelligence is transforming the landscape of technical jobs and the educational requirements needed for them. This study's purpose is to present and evaluate an intuitive open-source framework that transforms existing courses into interactive, AI-enhanced learning environments. Our team performed a study on the proposed method's advantages in a pilot population of teachers and students, who assessed it as "involving, trustworthy and easy to use". Furthermore, we evaluated the AI components on standard large language model (LLM) benchmarks. This free, open-source, AI-enhanced educational platform can be used to improve the learning experience in all existing secondary and higher education institutions, with the potential of reaching the majority of the world's students. Full article
(This article belongs to the Section Artificial Intelligence)

19 pages, 780 KiB  
Article
Personalized Instructional Strategy Adaptation Using TOPSIS: A Multi-Criteria Decision-Making Approach for Adaptive Learning Systems
by Christos Troussas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou
Information 2025, 16(5), 409; https://doi.org/10.3390/info16050409 - 15 May 2025
Viewed by 248
Abstract
The growing number of educational technologies presents possibilities and challenges for personalized instruction. This paper presents a learner-centered decision support system for selecting adaptive instructional strategies that embeds the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) in a real-time learning environment. The system uses multi-dimensional learner performance data, such as error rate, time-on-task, mastery level, and motivation, to dynamically analyze and recommend the best pedagogical intervention from a pool of strategies, which includes hints, code examples, reflection prompts, and targeted scaffolding. In developing the system, we chose to employ it in a one-off postgraduate Java programming course, as this represents a defined cognitive load structure and samples a spectrum of learners. A robust evaluation was conducted with 100 students, comparing the adaptive system to a static, non-adaptive control condition. The adaptive system with TOPSIS yielded statistically higher learning outcomes (normalized gain g = 0.49), behavioral engagement (28.3% increase in tasks attempted), and learner satisfaction. A total of 85.3% of the expert evaluators agreed with the system's decisions when compared against the lecturer's preferred teaching response for the prescribed problems and behaviors. In comparison to a rule-based approach, the TOPSIS framework clearly provided a more granular and effective adaptation. The findings validate the use of multi-criteria decision-making for real-time instructional support and underscore the transparency, flexibility, and educational potential of the proposed system across broader learning domains. Full article
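A plain TOPSIS ranking over a hypothetical strategy pool looks like the sketch below. The criteria, weights, and scores are invented for illustration; only the TOPSIS procedure itself is standard:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Plain TOPSIS: vector-normalize, weight, measure distance to the
    ideal and anti-ideal alternatives, return closeness scores in [0, 1]."""
    M = np.asarray(matrix, dtype=float)
    V = (M / np.linalg.norm(M, axis=0)) * np.asarray(weights, dtype=float)
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical strategy pool scored on [mastery gain, motivation, time cost].
strategies = ["hint", "code example", "reflection prompt", "scaffolding"]
scores = [[0.6, 0.5, 0.2],
          [0.8, 0.7, 0.4],
          [0.4, 0.8, 0.3],
          [0.9, 0.6, 0.7]]
closeness = topsis(scores, [0.5, 0.3, 0.2], benefit=[True, True, False])
best = strategies[int(np.argmax(closeness))]
```

With these invented numbers, "scaffolding" is penalized by its time cost despite the highest mastery gain, and "code example" wins on closeness to the ideal solution.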
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)

19 pages, 17487 KiB  
Article
LiteMP-VTON: A Knowledge-Distilled Diffusion Model for Realistic and Efficient Virtual Try-On
by Shufang Zhang, Lei Wang and Wenxin Ding
Information 2025, 16(5), 408; https://doi.org/10.3390/info16050408 - 15 May 2025
Viewed by 269
Abstract
Diffusion-based approaches have recently emerged as powerful alternatives to GAN-based virtual try-on methods, offering improved detail preservation and visual realism. Despite their advantages, the substantial number of parameters and intensive computational requirements pose significant barriers to deployment on low-resource platforms. To tackle these limitations, we propose a diffusion-based virtual try-on framework optimized through feature-level knowledge compression. Our method introduces MP-VTON, an enhanced inpainting pipeline based on Stable Diffusion, which incorporates improved Masking techniques and Pose-conditioned enhancement to alleviate garment boundary artifacts. To reduce model size while maintaining performance, we adopt an attention-guided distillation strategy that transfers semantic and structural knowledge from MP-VTON to a lightweight model, LiteMP-VTON. Experiments demonstrate that LiteMP-VTON achieves nearly a 3× reduction in parameter count and close to 2× speedup in inference, making it well suited for deployment in resource-limited environments without significantly compromising generation quality. Full article
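The attention-guided distillation idea can be illustrated by deriving a spatial attention map from a feature block (summing squared activations over channels, attention-transfer style) and penalizing the student-teacher mismatch. This is a toy sketch under stated assumptions, not the exact MP-VTON/LiteMP-VTON loss:

```python
def attention_map(features):
    """Collapse a C x N feature block (list of C channels, each with N spatial
    positions) into an L1-normalized spatial attention map by summing squared
    activations over channels."""
    n = len(features[0])
    raw = [sum(ch[i] ** 2 for ch in features) for i in range(n)]
    total = sum(raw) or 1.0  # guard against an all-zero block
    return [v / total for v in raw]

def attention_distill_loss(student_feats, teacher_feats):
    """Mean squared difference between student and teacher attention maps."""
    s, t = attention_map(student_feats), attention_map(teacher_feats)
    return sum((a - b) ** 2 for a, b in zip(s, t)) / len(s)
```

In practice this term would be added to the student's task loss so the lightweight model mimics where the teacher attends.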
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
Show Figures

Figure 1

19 pages, 6616 KiB  
Article
YOLO-SRSA: An Improved YOLOv7 Network for the Abnormal Detection of Power Equipment
by Wan Zou, Yiping Jiang, Wenlong Liao, Songhai Fan, Yueping Yang, Jin Hou and Hao Tang
Information 2025, 16(5), 407; https://doi.org/10.3390/info16050407 - 15 May 2025
Viewed by 173
Abstract
Power equipment anomaly detection is essential for ensuring the stable operation of power systems. Existing models have high false and missed detection rates in complex weather and multi-scale equipment scenarios. This paper proposes a YOLO-SRSA-based anomaly detection algorithm. For data enhancement, geometric and color transformations and rain-fog simulations are applied to preprocess the dataset, improving the model’s robustness in outdoor complex weather. In the network structure improvements, first, the ACmix module is introduced to reconstruct the SPPCSPC network, effectively suppressing background noise and irrelevant feature interference to enhance feature extraction capability; second, the BiFormer module is integrated into the efficient aggregation network to strengthen focus on critical features and improve the flexible recognition of multi-scale feature images; finally, the original loss function is replaced with the MPDIoU function, optimizing detection accuracy through a comprehensive bounding box evaluation strategy. The experimental results show significant improvements over the baseline model: mAP@0.5 increases from 89.2% to 93.5%, precision rises from 95.9% to 97.1%, and recall improves from 95% to 97%. Additionally, the enhanced model demonstrates superior anti-interference performance under complex weather conditions compared to other models. Full article
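The MPDIoU term mentioned above augments plain IoU with normalized squared distances between matching corner points of the predicted and ground-truth boxes. A sketch following the published MPDIoU formulation (box coordinates `(x1, y1, x2, y2)` and image dimensions here are illustrative):

```python
def _area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    return inter / (_area(a) + _area(b) - inter)

def mpdiou(pred, gt, img_w, img_h):
    """IoU minus squared top-left and bottom-right corner distances,
    normalized by the squared image diagonal."""
    d_tl = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    d_br = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou(pred, gt) - d_tl / norm - d_br / norm
```

A loss of the form `1 - mpdiou(...)` then penalizes both poor overlap and corner misalignment.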
Show Figures

Figure 1

21 pages, 5859 KiB  
Article
Internet of Things-Based Anomaly Detection Hybrid Framework Simulation Integration of Deep Learning and Blockchain
by Ahmad M. Almasabi, Ahmad B. Alkhodre, Maher Khemakhem, Fathy Eassa, Adnan Ahmed Abi Sen and Ahmed Harbaoui
Information 2025, 16(5), 406; https://doi.org/10.3390/info16050406 - 15 May 2025
Viewed by 304
Abstract
IoT environments have introduced diverse logistic support services into our lives and communities, in areas such as education, medicine, transportation, and agriculture. However, with new technologies and services, the issue of privacy and data security has become more urgent. Moreover, the rapid changes in IoT and the growing capabilities of attacks have highlighted the need for an adaptive and reliable framework. In this study, we applied the proposed simulation to a hybrid framework that uses deep learning to continuously monitor IoT data and a blockchain to log, manage, and document all of the IoT sensors' data points. Five sensors were run in a SimPy simulation environment to examine the framework's capability in a real-time IoT setting; deep learning (an ANN) and blockchain were integrated to improve the detection of specific traffic classes (benign, part of a horizontal port scan, attack, C&C, Okiru, DDoS, and file download) and to continuously log all of the IoT sensor data, respectively. A comparison of different machine learning (ML) models showed that the deep learning model outperformed all of them, with the evaluation results reaching 97% accuracy and precision. Moreover, the proposed framework confirmed superior performance under varied conditions, such as diverse attack types and network sizes, compared to other approaches. It can improve its performance over time and can detect anomalies in real-time IoT environments. Full article
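The tamper-evident logging role the blockchain plays here can be illustrated with a minimal hash chain. This is a toy sketch, not the framework's implementation; the block field names and example sensor readings are assumptions:

```python
import hashlib
import json

def append_block(chain, sensor_reading):
    """Append a block whose hash commits to the reading and the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(sensor_reading, sort_keys=True)
    block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "data": sensor_reading, "hash": block_hash})
    return chain

def verify(chain):
    """Recompute every hash; any tampered reading breaks the chain."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(block["data"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

Because each block's hash depends on its predecessor, altering any logged data point invalidates every later block.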
(This article belongs to the Special Issue Machine Learning for the Blockchain)
Show Figures

Figure 1

35 pages, 467 KiB  
Article
SCH-Hunter: A Taint-Based Hybrid Fuzzing Framework for Smart Contract Honeypots
by Haoyu Zhang, Baotong Wang, Wenhao Fu and Leyi Shi
Information 2025, 16(5), 405; https://doi.org/10.3390/info16050405 - 14 May 2025
Viewed by 301
Abstract
Existing smart contract honeypot detection approaches exhibit high false negatives and positives due to (i) their inability to generate transaction sequences triggering order-dependent traps and (ii) their limited code coverage from traditional fuzzing’s random mutations. In this paper, we propose a hybrid fuzzing framework for smart contract honeypot detection based on taint analysis, SCH-Hunter. SCH-Hunter conducts source-code-level feature analysis of smart contracts and extracts data dependency relationships between variables from the generated Control Flow Graph to construct specific transaction sequences for fuzzing. A symbolic execution module is also introduced to resolve complex conditional branches that fuzzing alone fails to penetrate, enabling constraint solving. Furthermore, real-time dynamic taint propagation monitoring is implemented using taint analysis techniques, leveraging taint flow information to optimize seed mutation processes, thereby directing mutation resources toward high-value code regions. Finally, by integrating EVM (Ethereum Virtual Machine) code instrumentation with taint information flow analysis, the framework effectively identifies and detects security-sensitive operations, ultimately generating a comprehensive detection report. Empirical results are as follows. (i) For code coverage, SCH-Hunter performs better than the state-of-the-art tool, HoneyBadger, achieving higher average code coverage rates on both datasets, surpassing it by 4.79% and 17.41%, respectively. (ii) For detection capabilities, SCH-Hunter is not only roughly on par with HoneyBadger in terms of precision and recall rate but also capable of detecting a wider variety of smart contract honeypot techniques. (iii) For the evaluation of components, we conducted three ablation studies to demonstrate that the proposed modules in SCH-Hunter significantly improve the framework’s detection capability, code coverage, and detection efficiency, respectively. Full article
(This article belongs to the Topic Software Engineering and Applications)
Show Figures

Figure 1

21 pages, 4721 KiB  
Article
PMAKA-IoV: A Physical Unclonable Function (PUF)-Based Multi-Factor Authentication and Key Agreement Protocol for Internet of Vehicles
by Ming Yuan and Yuelei Xiao
Information 2025, 16(5), 404; https://doi.org/10.3390/info16050404 - 14 May 2025
Viewed by 290
Abstract
With the explosion of vehicle-to-infrastructure (V2I) communications in the internet of vehicles (IoV), it is still very important to ensure secure authentication and efficient key agreement because of vulnerabilities in existing protocols, such as physical capture attacks, privacy leakage, and low computational efficiency. This paper proposes a physical unclonable function (PUF)-based multi-factor authentication and key agreement protocol tailored for V2I environments, named PMAKA-IoV. The protocol integrates hardware-based PUFs with biometric features, utilizing fuzzy extractors to mitigate biometric template risks, while employing dynamic pseudonyms and lightweight cryptographic operations to enhance anonymity and reduce overhead. Security analysis demonstrates its resilience against physical capture attacks, replay attacks, man-in-the-middle attacks, and desynchronization attacks, and this is verified by formal verification using the strand space model and the automated Scyther tool. Performance analysis demonstrates that, compared to other related schemes, the PMAKA-IoV protocol maintains lower communication and storage overhead. Full article
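The fuzzy-extractor idea used above can be sketched with the classic code-offset construction: public helper data masks a codeword so that a noisy re-reading of the biometric still reproduces the same key. This is a toy illustration using a repetition code; the repetition factor, bit lengths, and function names are assumptions, not the paper's construction:

```python
import hashlib
import secrets

REP = 5  # repetition factor: majority decoding corrects up to 2 flips per key bit

def gen(bio_bits):
    """Enrolment: derive a key and public helper data from biometric bits."""
    key_bits = [secrets.randbelow(2) for _ in range(len(bio_bits) // REP)]
    codeword = [b for b in key_bits for _ in range(REP)]   # repetition encode
    helper = [b ^ c for b, c in zip(bio_bits, codeword)]   # code-offset sketch
    key = hashlib.sha256(bytes(key_bits)).hexdigest()
    return key, helper

def rep(noisy_bits, helper):
    """Reproduction: recover the same key from a noisy biometric re-reading."""
    codeword = [b ^ h for b, h in zip(noisy_bits, helper)]
    key_bits = [int(sum(codeword[i:i + REP]) > REP // 2)   # majority decode
                for i in range(0, len(codeword), REP)]
    return hashlib.sha256(bytes(key_bits)).hexdigest()
```

The helper data can be stored publicly because it reveals the key only when combined with a biometric reading close to the enrolled one.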
(This article belongs to the Special Issue Wireless Communication and Internet of Vehicles)
Show Figures

Figure 1

33 pages, 1317 KiB  
Article
Deglobalization Trends and Communication Variables: A Multifaceted Analysis from 2009 to 2023
by James A. Danowski and Han-Woo Park
Information 2025, 16(5), 403; https://doi.org/10.3390/info16050403 - 14 May 2025
Viewed by 480
Abstract
This paper examines the correlation between rising trade protectionism—an indicator of economic deglobalization—and key communication and social variables from 2009 to 2023. Drawing on data from Global Trade Alert, Nexis Uni, Google searches, and Facebook (via CrowdTangle), we investigate the prevalence of “deglobalization” discourse, language entropy, political polarization, protests, and digital authoritarianism. The analysis is framed by Optimal Information Theory, World Systems Theory, and other social science perspectives to explain how deglobalization may potentially reshape public communication. The results suggest that greater trade protectionism is associated with increased mentions of deglobalization, higher language entropy (i.e., less dominance of English), amplified political polarization, more frequent protest activity, and heightened digital authoritarian measures. Full article
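Language entropy as used above can be computed as the Shannon entropy of the distribution of languages in a corpus: zero when one language (e.g., English) fully dominates, and higher as usage spreads across languages. A minimal sketch with hypothetical counts:

```python
import math

def language_entropy(counts):
    """Shannon entropy (in bits) of a language distribution; higher entropy
    means less dominance by any single language."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

For two languages, entropy peaks at 1 bit when usage is split 50/50 and falls toward 0 as one language dominates.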
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
Show Figures

Figure 1

14 pages, 2383 KiB  
Article
Performance Variability in Public Clouds: An Empirical Assessment
by Sanjay Ahuja, Victor H. Lopez Chalacan and Hugo Resendez
Information 2025, 16(5), 402; https://doi.org/10.3390/info16050402 - 14 May 2025
Viewed by 212
Abstract
Cloud computing is now established as a viable alternative to on-premise infrastructure from both a system administration and cost perspective. This research provides insight into cluster computing performance and variability in cloud-provisioned infrastructure from two popular public cloud providers, Amazon Web Services (AWS) and Google Cloud Platform (GCP). To evaluate the performance variability between these two providers, synthetic benchmarks were used, including memory bandwidth (STREAM), Interleaved or Random (IOR) I/O performance, and computational CPU performance via the NAS Parallel Benchmarks-Embarrassingly Parallel (NPB-EP). A comparative examination of the two cloud platforms is provided in the context of our research methodology and design. We conclude with a discussion of the results of the experiment and an assessment of the suitability of public cloud platforms for certain types of computing workloads. Both AWS and GCP have their strong points, and this study provides recommendations depending on user needs for high throughput and/or performance predictability across CPU, memory, and Input/Output (I/O). In addition, the study discusses other factors to help users decide between cloud vendors, such as ease of use, documentation, and the types of instances offered. Full article
(This article belongs to the Special Issue Performance Engineering in Cloud Computing)
Show Figures

Figure 1

28 pages, 586 KiB  
Review
Review and Mapping of Search-Based Approaches for Program Synthesis
by Takfarinas Saber and Ning Tao
Information 2025, 16(5), 401; https://doi.org/10.3390/info16050401 - 14 May 2025
Viewed by 275
Abstract
Context: Program synthesis tools reduce software development costs by generating programs that perform tasks depicted by some specifications. Various methodologies have emerged for program synthesis, among which search-based algorithms have shown promising results. However, the proliferation of search-based program synthesis tools utilising diverse search algorithms and input types and targeting various programming tasks can overwhelm users seeking the most suitable tool. Objective: This paper contributes to the ongoing discourse by presenting a comprehensive review of search-based approaches employed for program synthesis. We aim to offer an understanding of the guiding principles of current methodologies by mapping them to the required type of user intent, the type of search algorithm, and the representation of the search space. Furthermore, we aim to map the diverse search algorithms to the type of code generation tasks in which they have shown success, which would serve as a guideline for applying search-based approaches for program synthesis. Method: We conducted a literature review of 67 academic papers on search-based program synthesis. Results: Through analysis, we identified and categorised the main techniques with their trends. We have also mapped and shed light on patterns connecting the problem, the representation and the search algorithm type. Conclusions: Our study summarises the field of search-based program synthesis and provides an entry point to the acumen and expertise of the search-based community on program synthesis. Full article
(This article belongs to the Section Information Applications)
Show Figures

Figure 1

17 pages, 1544 KiB  
Review
Transforming Auditing in the AI Era: A Comprehensive Review
by Nguyen Thi Thanh Binh
Information 2025, 16(5), 400; https://doi.org/10.3390/info16050400 - 14 May 2025
Viewed by 449
Abstract
This study explores how auditing is evolving in the context of Artificial Intelligence (AI) by analyzing a dataset of 465 peer-reviewed publications from 1982 to 2024, sourced from Scopus and Web of Science. Using Latent Dirichlet Allocation (LDA), an unsupervised machine learning method, the study identifies ten key thematic areas reflecting how AI increasingly intersects with auditing research. The analysis suggests that topics related to integrating AI and data-driven technologies are especially prominent. The theme “AI in Auditing” emerges as the most frequently occurring topic, comprising approximately 33.4% of the discussion. In comparison, “Data Security in Auditing” follows at 21.2%, indicating sustained scholarly concern with the integrity and protection of digital audit data. Other notable themes, such as “Auditing and Accounting Technologies” (12.7%) and “AI and Machine Learning in Auditing” (11.1%), suggest a continuing interest in the development and application of advanced technologies within auditing. The analysis also points to the presence of more specialized or emerging areas, including “Ethical AI in Audit Systems”, “Image Processing in Audit”, and “Political Influence in Auditing”, though these appear less frequently. Topics related to environmental ethics and racial and ethnic disparities in auditing were identified. However, their low representation (0.4% each) may indicate that such issues remain relatively peripheral in current academic discourse. The study provides a data-driven overview of how AI-related topics are being discussed in the auditing literature. It may help identify areas of growing interest and potential research gaps. The findings could have implications for researchers, practitioners, and policymakers by offering insights into the technological and ethical priorities shaping the field. Full article
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1

40 pages, 3397 KiB  
Systematic Review
Intelligent Supply Chain Management: A Systematic Literature Review on Artificial Intelligence Contributions
by António R. Teixeira, José Vasconcelos Ferreira and Ana Luísa Ramos
Information 2025, 16(5), 399; https://doi.org/10.3390/info16050399 - 13 May 2025
Viewed by 1247
Abstract
This systematic literature review investigates the recent applications of artificial intelligence (AI) in supply chain management (SCM), particularly in the domains of resilience, process optimization, sustainability, and implementation challenges. The study is motivated by gaps identified in previous reviews, which often exclude literature published after 2020 and lack an integrated analysis of AI’s contributions across multiple supply chain phases. The review aims to provide an updated synthesis of AI technologies—such as machine learning, deep learning, and generative AI—and their practical implementation between 2021 and 2024. Following the PRISMA framework, a rigorous methodology was applied using the Scopus database, complemented by bibliometric and content analyses. A total of 66 studies were selected based on predefined inclusion criteria and evaluated for methodological quality and thematic relevance. The findings reveal a diverse classification of AI applications across strategic and operational SCM phases and highlight emerging techniques like explainable AI, neurosymbolic systems, and federated learning. The review also identifies persistent barriers such as data governance, ethical concerns, and scalability. Future research should focus on hybrid AI–human collaboration, transparency through explainable models, and integration with technologies such as IoT and blockchain. This review contributes to the literature by offering a structured synthesis of AI’s transformative impact on SCM and by outlining key research directions to guide future investigations and managerial practice. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
Show Figures

Figure 1

37 pages, 1053 KiB  
Article
Innovating Cyber Defense with Tactical Simulators for Management-Level Incident Response
by Dalibor Gernhardt, Stjepan Groš and Gordan Gledec
Information 2025, 16(5), 398; https://doi.org/10.3390/info16050398 - 13 May 2025
Viewed by 290
Abstract
This study introduces a novel approach to cyber defense exercises, emphasizing the emulation of technical tasks to create realistic incident response scenarios. Unlike traditional cyber ranges or tabletop exercises, this method enables both management and technical leaders to engage in decision-making processes without requiring a full technical setup. The initial observations indicate that exercises based on the emulation of technical tasks require less preparation time compared to conventional methods, addressing the growing demand for efficient training solutions. This study aims to assist organizations in developing their own cyber defense exercises by providing practical insights into the benefits and challenges of this approach. The key advantages observed include improved procedural compliance, inter-team communication, and a better understanding of the chain of command as participants navigate realistic, organization-wide scenarios. However, new challenges have also emerged, such as managing the simulation tempo and balancing technical complexity—particularly in offense–defense team configurations. This study proposes a structured and scalable approach as a practical alternative to the traditional training methods, aligning better with the evolving demands of modern cyber defense. Full article
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
Show Figures

Figure 1

37 pages, 1496 KiB  
Article
Machine Learning for Chinese Corporate Fraud Prediction: Segmented Models Based on Optimal Training Windows
by Chang Chuan Goh, Yue Yang, Anthony Bellotti and Xiuping Hua
Information 2025, 16(5), 397; https://doi.org/10.3390/info16050397 - 12 May 2025
Viewed by 231
Abstract
We propose a comprehensive and practical framework for Chinese corporate fraud prediction which incorporates classifiers, class imbalance, population drift, segmented models, and model evaluation using machine learning algorithms. Based on a three-stage experiment, we first find that the random forest classifier has the best performance in predicting corporate fraud among 17 machine learning models. We then implement the sliding time window approach to handle population drift, and the optimal training window found demonstrates the existence of population drift in fraud detection and the need to address it for improved model performance. Using the best machine learning model and optimal training window, we build a general model and segmented models to compare fraud types and industries based on their respective predictive performance via four evaluation metrics and top features using SHAP. The results indicate that segmented models have better predictive performance than the general model for fraud types with low fraud rates and are as good as the general model for most industries when controlling for training set size. The dissimilarities between the top-feature sets of the general and segmented models suggest that segmented models are useful in providing a better understanding of fraud occurrence. Full article
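The sliding time window idea described above can be sketched as follows: train only on the most recent years of history so that older, drifted cases do not pollute the model. This is a minimal illustration; the record layout and years are hypothetical, and in practice the window length would be tuned on validation performance:

```python
def sliding_window_split(records, window_years, test_year):
    """Select the training window that immediately precedes the test year.

    records      : list of (year, features, label) tuples
    window_years : how many years of history to train on
    test_year    : year whose cases the model must predict
    """
    train = [r for r in records
             if test_year - window_years <= r[0] < test_year]
    test = [r for r in records if r[0] == test_year]
    return train, test
```

Sweeping `window_years` and comparing out-of-time performance is one simple way to locate the optimal training window under population drift.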
(This article belongs to the Section Artificial Intelligence)
Show Figures

Figure 1

24 pages, 5732 KiB  
Article
Performance Analysis of Reconfigurable Intelligent Surface-Assisted Millimeter Wave Massive MIMO System Under 3GPP 5G Channels
by Vishnu Vardhan Gudla, Vinoth Babu Kumaravelu, Agbotiname Lucky Imoize, Francisco R. Castillo Soria, Anjana Babu Sujatha, Helen Sheeba John Kennedy, Hindavi Kishor Jadhav, Arthi Murugadass and Samarendra Nath Sur
Information 2025, 16(5), 396; https://doi.org/10.3390/info16050396 - 12 May 2025
Viewed by 365
Abstract
Reconfigurable intelligent surfaces (RIS) and massive multiple input and multiple output (M-MIMO) are the two major enabling technologies for next-generation networks, capable of providing spectral efficiency (SE), energy efficiency (EE), array gain, spatial multiplexing, and reliability. This work introduces an RIS-assisted millimeter wave (mmWave) M-MIMO system to harvest the advantages of RIS and mmWave M-MIMO systems that are required for beyond fifth-generation (B5G) systems. The performance of the proposed system is evaluated under 3GPP TR 38.901 V16.1.0 5G channel models. Specifically, we considered indoor hotspot (InH)—indoor office and urban microcellular (UMi)—street canyon channel environments for 28 GHz and 73 GHz mmWave frequencies. Using the SimRIS channel simulator, the channel matrices were generated for the required number of realizations. Monte Carlo simulations were executed extensively to evaluate the proposed system’s average bit error rate (ABER) and sum rate performances, and it was observed that increasing the number of transmit antennas from 4 to 64 resulted in a better performance gain of ∼10 dB for both InH—indoor office and UMi—street canyon channel environments. The improvement of the number of RIS elements from 64 to 1024 resulted in ∼7 dB performance gain. It was also observed that ABER performance at 28 GHz was better compared to 73 GHz by at least ∼5 dB for the considered channels. The impact of finite resolution RIS on the considered 5G channel models was also evaluated. ABER performance degraded for 2-bit finite resolution RIS compared to ideal infinite resolution RIS by ∼6 dB. Full article
(This article belongs to the Special Issue Advances in Telecommunication Networks and Wireless Technology)
Show Figures

Figure 1

24 pages, 3421 KiB  
Article
Cloud-Based Medical Named Entity Recognition: A FIT4NER-Based Approach
by Philippe Tamla, Florian Freund and Matthias Hemmje
Information 2025, 16(5), 395; https://doi.org/10.3390/info16050395 - 12 May 2025
Viewed by 293
Abstract
This paper presents a cloud-based system that builds upon the FIT4NER framework to support medical experts in training machine learning models for named entity recognition (NER) using Microsoft Azure. The system is designed to simplify complex cloud configurations while providing an intuitive interface for managing and converting large-scale training and evaluation datasets across formats such as PDF, DOCX, TXT, BioC, spaCyJSON, and CoNLL-2003. It also enables the configuration of transformer-based spaCy pipelines and orchestrates Azure cloud services for scalable and efficient NER model training. Following the structured Nunamaker research methodology, the paper introduces the research context, surveys the state of the art, and highlights key challenges faced by medical professionals in cloud-based NER. It then details the modeling, implementation, and integration of the system. Evaluation results—both qualitative and quantitative—demonstrate enhanced usability, scalability, and accessibility for non-technical users in medical domains. The paper concludes with insights gained and outlines directions for future work. Full article
Show Figures

Figure 1

21 pages, 472 KiB  
Article
CDAS: A Secure Cross-Domain Data Sharing Scheme Based on Blockchain
by Jiahui Jiang, Tingrui Pei, Jiahao Chen and Zhiwen Hou
Information 2025, 16(5), 394; https://doi.org/10.3390/info16050394 - 12 May 2025
Viewed by 305
Abstract
In the current context of the wide application of Internet of Things (IoT) technology, cross-domain data sharing based on the industrial IoT (IIoT) has become the key to maximizing data value, but it also faces many challenges. In response to the security and privacy issues in cross-domain data sharing, we propose a cross-domain secure data sharing scheme (CDAS) based on multiple blockchains. The scheme first organizes the cross-domain blockchain into layers, with the blockchain layer close to the edge devices assisting them in completing on-chain data sharing. In addition, we combine smart contract design to implement attribute-based access control (ABAC) and anonymous identity registration. This method simplifies device resource access by minimizing middleware confirmation, double-checking device access rights, and preventing redundant requests caused by illegal access attempts. Finally, for data privacy and security, IPFS is used to store confidential data, and an improved searchable encryption (SE) scheme is applied to the overall data sharing process. Users can find the required data by searching the ciphertext links in the blockchain system, ensuring the secure transmission of private data. Compared with the traditional ABAC scheme, we have added modules for data privacy protection and anonymous authentication to further protect user data privacy. Compared with access control schemes based on attribute encryption, our scheme also has advantages in the time complexity of key algorithms such as policy matching and encryption. Moreover, with the assistance of the edge blockchain layer, it reduces the burden on devices with limited computing resources. Security and experimental analyses show that this scheme solves the security and efficiency problems of cross-domain data sharing in the industrial Internet of Things. Full article
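The searchable-encryption idea described above can be sketched with a keyword index: trapdoors are computed with an HMAC under a shared secret, so the on-chain index stores only opaque tokens mapped to ciphertext links. This is a toy sketch, not the paper's improved SE scheme; the secret, keywords, and IPFS links are hypothetical:

```python
import hashlib
import hmac

def trapdoor(secret, keyword):
    """Deterministic keyword token; the index never sees the plaintext keyword."""
    return hmac.new(secret, keyword.encode(), hashlib.sha256).hexdigest()

def build_index(secret, documents):
    """documents: {ciphertext_link: [keywords]} -> {token: [ciphertext_links]}"""
    index = {}
    for link, keywords in documents.items():
        for kw in keywords:
            index.setdefault(trapdoor(secret, kw), []).append(link)
    return index

def search(index, token):
    """Match an opaque trapdoor token against the index."""
    return index.get(token, [])
```

Only a holder of the shared secret can form valid trapdoors, so the party hosting the index learns nothing about which keywords the links describe.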
(This article belongs to the Special Issue Blockchain Technology and Its Application)
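The ABAC policy-matching step described in the abstract can be illustrated with a minimal sketch. This is not the paper's smart-contract code; the function name, the flat attribute dictionaries, and the conjunctive (all-attributes-must-match) policy format are assumptions made for illustration.

```python
# Hypothetical sketch of an attribute-based access control (ABAC) check:
# access is granted only if the requester holds every attribute value
# the resource's policy requires (a simple conjunctive policy).

def matches_policy(user_attrs: dict, policy: dict) -> bool:
    """Return True if user_attrs satisfies every key/value pair in policy."""
    return all(user_attrs.get(key) == value for key, value in policy.items())

# Illustrative policy and requesters (domain names and roles are made up).
policy = {"domain": "factory-A", "role": "engineer", "clearance": "L2"}
alice = {"domain": "factory-A", "role": "engineer", "clearance": "L2", "id": "anon-7f3"}
bob = {"domain": "factory-B", "role": "engineer", "clearance": "L2"}

print(matches_policy(alice, policy))  # True: all required attributes match
print(matches_policy(bob, policy))   # False: wrong domain, request rejected
```

In the scheme as described, such a check would run inside a smart contract before any data link is released, so illegal access attempts are rejected on-chain without reaching the device.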
21 pages, 3195 KiB  
Article
YOLO-LSM: A Lightweight UAV Target Detection Algorithm Based on Shallow and Multiscale Information Learning
by Chenxing Wu, Changlong Cai, Feng Xiao, Jiahao Wang, Yulin Guo and Longhui Ma
Information 2025, 16(5), 393; https://doi.org/10.3390/info16050393 - 9 May 2025
Viewed by 452
Abstract
To address challenges such as large-scale variations, the high density of small targets, and the large parameter counts of deep learning-based target detection models, which limit their deployment on UAV platforms with fixed performance and limited computational resources, a lightweight UAV target detection algorithm, YOLO-LSM, is proposed. First, to mitigate the loss of small-target information, an Efficient Small Target Detection Layer (ESTDL) is developed, alongside structural improvements to the baseline model that reduce its parameter count. Second, a Multiscale Lightweight Convolution (MLConv) is designed, and a lightweight feature extraction module, MLCSP, is constructed to enhance the extraction of detailed information; a Focaler inner-IoU loss is incorporated to improve bounding box matching and localization, thereby accelerating model convergence. Finally, a novel feature fusion network, DFSPP, is proposed to enhance accuracy by optimizing the selection and adjustment of target scale ranges. Validation on the VisDrone2019 and TinyPerson datasets shows that, compared with the benchmark network, YOLO-LSM improves mAP@0.5 by 6.9 and 3.5 percentage points, respectively, with a parameter count of 1.9 M, a reduction of approximately 72%. Unlike previous work on medical detection, this study tailors YOLO-LSM to UAV-based small object detection through targeted improvements to feature extraction, detection heads, and loss functions, achieving better adaptation to aerial scenarios. Full article
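The Focaler-style IoU reweighting mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: it shows plain box IoU plus the linear remapping used in the Focaler-IoU literature, and the band thresholds `d` and `u`, as well as how this would combine with the inner-IoU term, are assumptions.

```python
# Sketch: plain IoU plus a Focaler-style linear remap that concentrates
# the regression loss on samples whose IoU falls in a chosen band [d, u].

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def focaler_iou(iou_val, d=0.0, u=0.95):
    """Linearly remap IoU onto [0, 1] over the band [d, u]; values below d
    are clamped to 0 and values above u to 1, focusing training on the band."""
    return min(1.0, max(0.0, (iou_val - d) / (u - d)))

overlap = iou((0, 0, 2, 2), (1, 1, 3, 3))  # partially overlapping boxes
loss = 1.0 - focaler_iou(overlap)          # higher loss for poorer overlap
```

For small targets, typical of UAV imagery, even modest localization errors produce low IoU, so remapping the IoU scale in this way lets the loss emphasize the hard, low-overlap samples that matter most.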