Information, Volume 16, Issue 5 (May 2025) – 82 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; PDF is the official format. To view a paper in PDF form, click on the "PDF Full-text" link and open it with the free Adobe Reader.
22 pages, 2114 KiB  
Review
Artificial Intelligence in SMEs: Enhancing Business Functions Through Technologies and Applications
by Thang Le Dinh, Manh-Chiên Vu and Giang T.C. Tran
Information 2025, 16(5), 415; https://doi.org/10.3390/info16050415 - 18 May 2025
Abstract
Artificial intelligence (AI) has significant potential to transform small- and medium-sized enterprises (SMEs), yet its adoption is often hindered by challenges such as limited financial and human resources. This study addresses this issue by investigating the core AI technologies adopted by SMEs, their broad range of applications across business functions, and the strategies required for successful implementation. Through a systematic literature review of 50 studies published between 2016 and 2025, we identify prominent AI technologies, including machine learning, natural language processing, and generative AI, and their applications in enhancing efficiency, decision-making, and innovation across sales and marketing, operations and logistics, finance and other business functions. The findings emphasize the importance of workforce training, robust technological infrastructure, data-driven cultures, and strategic partnerships for SMEs. Furthermore, the review highlights methods for measuring and optimizing AI’s value, such as tracking key performance indicators and improving customer satisfaction. While acknowledging challenges like financial constraints and ethical considerations, this research provides practical guidance for SMEs to effectively leverage AI for sustainable growth and provides a foundation for future studies to explore customized AI strategies for diverse SME contexts. Full article
34 pages, 7445 KiB  
Systematic Review
Knowledge Management Strategies Supported by ICT for the Improvement of Teaching Practice: A Systematic Review
by Miguel-Angel Romero-Ochoa, Julio-Alejandro Romero-González, Alonso Perez-Soltero, Juan Terven, Teresa García-Ramírez, Diana-Margarita Córdova-Esparza and Francisco-Alan Espinoza-Zallas
Information 2025, 16(5), 414; https://doi.org/10.3390/info16050414 - 18 May 2025
Abstract
In the modern digital ecosystem, the effective management of knowledge and the integration of information and communication technologies are the keys to revolutionizing educational practices within higher education institutions. This study presents a systematic review of recent literature, examining how the incorporation of information and communication technologies facilitates the creation and transfer of knowledge, enables collaboration among educators, and supports continuous professional development. We explore the benefits of personalized learning and the application of technological tools to enhance collaboration, access to educational resources, and pedagogical reflection. The key findings emphasize the role of these tools in promoting teacher interaction and exchange of ideas, highlighting the critical importance of training in digital competency to maximize their impact. The study also identifies challenges, including the need to improve effective knowledge transfer and technological training. In conclusion, effective knowledge management, supported by information and communication technologies, fortifies digital competencies and cultivates a culture of collaboration and content creation in higher education institutions. Full article
(This article belongs to the Special Issue Emerging Research in Knowledge Management and Innovation)
13 pages, 1968 KiB  
Article
Drunk Driver Detection Using Thermal Facial Images
by Chin-Heng Chai, Siti Fatimah Abdul Razak, Sumendra Yogarayan and Ramesh Shanmugam
Information 2025, 16(5), 413; https://doi.org/10.3390/info16050413 - 18 May 2025
Abstract
This study aims to investigate and propose a machine learning approach that can accurately detect alcohol consumption by analyzing the thermal patterns of facial features. Thermal images from the Tufts Face Database and self-collected images were utilized to train the models in identifying temperature variations in specific facial regions. Convolutional Neural Networks (CNNs) and YOLO (You Only Look Once) algorithms were employed to extract facial features, while classifiers such as Support Vector Machines (SVMs), Multi-Layer Perceptron (MLP), and K-Nearest Neighbors (KNN), as well as Random Forest and linear regression, classify individuals as sober or intoxicated based on their thermal images. The models’ effectiveness in analyzing thermal images to determine alcohol intoxication is expected to provide a foundation for the development of a realistic drunk driver detection system based on thermal images. In this study, MLP obtained 90% accuracy and outperformed the other models in classifying the thermal images, either as sober or showing signs of alcohol consumption. The trained models may be embedded in advanced drunk detection systems as part of an in-vehicle safety application. Full article
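The classification stage the abstract describes can be illustrated with one of its simpler listed models, K-Nearest Neighbors. The sketch below is not the authors' pipeline; the facial-region features and temperature values are entirely hypothetical, and the real system extracts regions with CNN/YOLO first.

```python
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, query)), lbl)
        for row, lbl in zip(train, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical mean temperatures (°C) for [forehead, nose, periorbital]:
train = [
    [34.1, 33.0, 34.5],  # sober
    [34.0, 32.8, 34.4],  # sober
    [35.2, 34.1, 35.6],  # intoxicated (alcohol raises facial temperature)
    [35.4, 34.3, 35.8],  # intoxicated
]
labels = ["sober", "sober", "intoxicated", "intoxicated"]
print(knn_classify(train, labels, [35.3, 34.2, 35.7]))  # → intoxicated
```

In the paper's experiments an MLP (90% accuracy) outperformed KNN and the other classifiers, but the per-region temperature features feed all of them the same way.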
21 pages, 1780 KiB  
Article
Information Model for Pharmaceutical Smart Factory Equipment Design
by Roland Wölfle, Irina Saur-Amaral and Leonor Teixeira
Information 2025, 16(5), 412; https://doi.org/10.3390/info16050412 - 17 May 2025
Abstract
Pharmaceutical production typically focuses on individual drug types for each production line, which limits flexibility. However, the emergence of Industry 4.0 technologies presents new opportunities for more adaptable and customized manufacturing processes. Despite this promise, the development of innovative design techniques for pharmaceutical production equipment remains incomplete. Manufacturers encounter challenges due to rapid innovation cycles while adhering to stringent Good Manufacturing Practice (GMP) standards. Our research addresses this issue by introducing an information model that organizes the design, development, and testing of pharmaceutical manufacturing equipment. This model is based on an exploratory review of 176 articles concerning design principles in regulated industries and integrates concepts from Axiomatic Design, Quality by Design, Model-Based Systems Engineering, and the V-Model framework. Further refinement was achieved through insights from 10 industry experts. The resultant workflow-based information model can be implemented as software to enhance engineering and project management. This research offers a structured framework that enables pharmaceutical equipment manufacturers and users to collaboratively develop solutions in an iterative manner, effectively closing the gap between industry needs and systematic design methodologies. Full article
22 pages, 509 KiB  
Article
Aspect-Enhanced Prompting Method for Unsupervised Domain Adaptation in Aspect-Based Sentiment Analysis
by Binghan Lu, Kiyoaki Shirai and Natthawut Kertkeidkachorn
Information 2025, 16(5), 411; https://doi.org/10.3390/info16050411 - 16 May 2025
Abstract
This study proposes an Aspect-Enhanced Prompting (AEP) method for unsupervised Multi-Source Domain Adaptation in Aspect Sentiment Classification, where data from the target domain are completely unavailable for model training. The proposed AEP is based on two generative language models: one generates a prompt from a given review, while the other follows the prompt and classifies the sentiment of an aspect. The first model extracts Aspect-Related Features (ARFs), which are words closely related to the aspect, from the review and incorporates them into the prompt in a domain-agnostic manner, thereby directing the second model to identify the sentiment accurately. Our framework incorporates an innovative rescoring mechanism and a cluster-based prompt expansion strategy. Both are intended to enhance the robustness of the generation of the prompt and the adaptability of the model to diverse domains. The results of experiments conducted on five datasets (Restaurant, Laptop, Device, Service, and Location) demonstrate that our method outperforms the baselines, including a state-of-the-art unsupervised domain adaptation method. The effectiveness of both the rescoring mechanism and the cluster-based prompt expansion is also validated through an ablation study. Full article
28 pages, 2499 KiB  
Article
Enhancing the Learning Experience with AI
by Adrian Runceanu, Adrian Balan, Laviniu Gavanescu, Marian-Madalin Neagu, Cosmin Cojocaru, Ilie Borcosi and Aniela Balacescu
Information 2025, 16(5), 410; https://doi.org/10.3390/info16050410 - 16 May 2025
Abstract
The exceptional progress in artificial intelligence is transforming the landscape of technical jobs and the educational requirements needed for these. This study’s purpose is to present and evaluate an intuitive open-source framework that transforms existing courses into interactive, AI-enhanced learning environments. Our team performed a study on the proposed method’s advantages in a pilot population of teachers and students which assessed it as “involving, trustworthy and easy to use”. Furthermore, we evaluated the AI components on standard large language model (LLM) benchmarks. This free, open-source, AI-enhanced educational platform can be used to improve the learning experience in all existing secondary and higher education institutions, with the potential of reaching the majority of the world’s students. Full article
(This article belongs to the Section Artificial Intelligence)
19 pages, 780 KiB  
Article
Personalized Instructional Strategy Adaptation Using TOPSIS: A Multi-Criteria Decision-Making Approach for Adaptive Learning Systems
by Christos Troussas, Akrivi Krouska, Phivos Mylonas and Cleo Sgouropoulou
Information 2025, 16(5), 409; https://doi.org/10.3390/info16050409 - 15 May 2025
Abstract
The growing number of educational technologies presents possibilities and challenges for personalized instruction. This paper presents a learner-centered decision support system for selecting adaptive instructional strategies that embeds the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) in a real-time learning environment. The system uses multi-dimensional learner performance data, such as error rate, time-on-task, mastery level, and motivation, to dynamically analyze and recommend the best pedagogical intervention from a pool of strategies, which includes hints, code examples, reflection prompts, and targeted scaffolding. In developing the system, we chose to employ it in a one-off postgraduate Java programming course, as this represents a defined cognitive load structure and samples a spectrum of learners. A robust evaluation was conducted with 100 students, comparing the adaptive system to a static, non-adaptive control condition. The adaptive system with TOPSIS yielded statistically higher learning outcomes (normalized gain g = 0.49), behavioral engagement (28.3% increase in tasks attempted), and learner satisfaction. A total of 85.3% of the expert evaluators agreed with the system's decisions relative to the lecturer's preferred teaching response for the prescribed problems and behaviors. Compared to a rule-based approach, the TOPSIS framework provided a more granular and effective adaptation. The findings validate the use of multi-criteria decision-making for real-time instructional support and underscore the transparency, flexibility, and educational potential of the proposed system across broader learning domains. Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
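TOPSIS itself is a compact, well-defined procedure: vector-normalize each criterion, weight it, find the ideal and anti-ideal alternatives, and score each alternative by its relative closeness to the ideal. The sketch below illustrates that procedure only; the strategy names echo the abstract, but the score matrix and weights are hypothetical, not the paper's.

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS: vector-normalize each criterion,
    weight it, measure Euclidean distance to the ideal and anti-ideal
    solutions, and score by relative closeness to the ideal."""
    ncrit = len(weights)
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(ncrit)]
    v = [[w * row[j] / norms[j] for j, w in enumerate(weights)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = sum((x - i) ** 2 for x, i in zip(row, ideal)) ** 0.5
        d_neg = sum((x - a) ** 2 for x, a in zip(row, anti)) ** 0.5
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# Hypothetical strategies scored on [mastery gain, motivation, error rate];
# error rate is a cost criterion, hence benefit=False:
strategies = ["hint", "code example", "reflection prompt", "scaffolding"]
matrix = [[0.6, 0.7, 0.30],
          [0.8, 0.6, 0.20],
          [0.5, 0.8, 0.25],
          [0.9, 0.5, 0.15]]
scores = topsis(matrix, weights=[0.5, 0.2, 0.3], benefit=[True, True, False])
best = strategies[scores.index(max(scores))]
print(best)  # → scaffolding (under these hypothetical numbers)
```

In a live system the matrix rows would be refreshed from the learner model each time a recommendation is needed, which is what makes the method suitable for real-time adaptation.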
19 pages, 17487 KiB  
Article
LiteMP-VTON: A Knowledge-Distilled Diffusion Model for Realistic and Efficient Virtual Try-On
by Shufang Zhang, Lei Wang and Wenxin Ding
Information 2025, 16(5), 408; https://doi.org/10.3390/info16050408 - 15 May 2025
Abstract
Diffusion-based approaches have recently emerged as powerful alternatives to GAN-based virtual try-on methods, offering improved detail preservation and visual realism. Despite their advantages, the substantial number of parameters and intensive computational requirements pose significant barriers to deployment on low-resource platforms. To tackle these limitations, we propose a diffusion-based virtual try-on framework optimized through feature-level knowledge compression. Our method introduces MP-VTON, an enhanced inpainting pipeline based on Stable Diffusion, which incorporates improved Masking techniques and Pose-conditioned enhancement to alleviate garment boundary artifacts. To reduce model size while maintaining performance, we adopt an attention-guided distillation strategy that transfers semantic and structural knowledge from MP-VTON to a lightweight model, LiteMP-VTON. Experiments demonstrate that LiteMP-VTON achieves nearly a 3× reduction in parameter count and close to 2× speedup in inference, making it well suited for deployment in resource-limited environments without significantly compromising generation quality. Full article
(This article belongs to the Special Issue Intelligent Image Processing by Deep Learning, 2nd Edition)
19 pages, 6616 KiB  
Article
YOLO-SRSA: An Improved YOLOv7 Network for the Abnormal Detection of Power Equipment
by Wan Zou, Yiping Jiang, Wenlong Liao, Songhai Fan, Yueping Yang, Jin Hou and Hao Tang
Information 2025, 16(5), 407; https://doi.org/10.3390/info16050407 - 15 May 2025
Abstract
Power equipment anomaly detection is essential for ensuring the stable operation of power systems. Existing models have high false and missed detection rates in complex weather and multi-scale equipment scenarios. This paper proposes a YOLO-SRSA-based anomaly detection algorithm. For data enhancement, geometric and color transformations and rain-fog simulations are applied to preprocess the dataset, improving the model’s robustness in outdoor complex weather. In the network structure improvements, first, the ACmix module is introduced to reconstruct the SPPCSPC network, effectively suppressing background noise and irrelevant feature interference to enhance feature extraction capability; second, the BiFormer module is integrated into the efficient aggregation network to strengthen focus on critical features and improve the flexible recognition of multi-scale feature images; finally, the original loss function is replaced with the MPDIoU function, optimizing detection accuracy through a comprehensive bounding box evaluation strategy. The experimental results show significant improvements over the baseline model: mAP@0.5 increases from 89.2% to 93.5%, precision rises from 95.9% to 97.1%, and recall improves from 95% to 97%. Additionally, the enhanced model demonstrates superior anti-interference performance under complex weather conditions compared to other models. Full article
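The MPDIoU loss mentioned in the abstract augments plain IoU with the normalized squared distances between the two boxes' top-left and bottom-right corners. The sketch below follows the commonly published MPDIoU formulation; box coordinates and image size are illustrative, and this is a standalone metric, not the authors' training code.

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """MPDIoU: IoU minus the squared top-left and bottom-right corner
    distances, each normalized by the squared image diagonal.
    Boxes are (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union for plain IoU.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Corner-distance penalties, normalized by the squared image diagonal.
    diag2 = img_w ** 2 + img_h ** 2
    d_tl = ((ax1 - bx1) ** 2 + (ay1 - by1) ** 2) / diag2
    d_br = ((ax2 - bx2) ** 2 + (ay2 - by2) ** 2) / diag2
    return iou - d_tl - d_br

# Identical boxes give MPDIoU = 1; any corner offset lowers the score.
print(mpdiou((10, 10, 50, 50), (10, 10, 50, 50), 640, 480))  # → 1.0
print(mpdiou((10, 10, 50, 50), (14, 12, 54, 52), 640, 480))
```

Because both corners are penalized directly, the metric keeps discriminating even when the predicted box fully contains (or is contained in) the target, a case where plain IoU gradients are weak.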
21 pages, 5859 KiB  
Article
Internet of Things-Based Anomaly Detection Hybrid Framework Simulation Integration of Deep Learning and Blockchain
by Ahmad M. Almasabi, Ahmad B. Alkhodre, Maher Khemakhem, Fathy Eassa, Adnan Ahmed Abi Sen and Ahmed Harbaoui
Information 2025, 16(5), 406; https://doi.org/10.3390/info16050406 - 15 May 2025
Abstract
IoT environments have introduced diverse logistic support services into our lives and communities, in areas such as education, medicine, transportation, and agriculture. However, with new technologies and services, the issue of privacy and data security has become more urgent. Moreover, the rapid changes in IoT and the capabilities of attacks have highlighted the need for an adaptive and reliable framework. In this study, we applied the proposed simulation to the proposed hybrid framework, making use of deep learning to continuously monitor IoT data; we also used the blockchain component of the framework to log, tackle, manage, and document all of the IoT sensors' data points. Five sensors were run in a SimPy simulation environment to examine our framework's capability in a real-time IoT environment; deep learning (ANN) and the blockchain technique were integrated to enhance the efficiency of detecting certain attacks (benign, part of a horizontal port scan, attack, C&C, Okiru, DDoS, and file download) and to continuously log all of the IoT sensor data, respectively. The comparison of different machine learning (ML) models showed that the DL model outperformed all of them. Interestingly, the evaluation results showed a mature level of accuracy and precision, reaching 97%. Moreover, the proposed framework confirmed superior performance under varied conditions, such as diverse attack types and network sizes, compared to other approaches. It can improve its performance over time and can detect anomalies in real-time IoT environments. Full article
(This article belongs to the Special Issue Machine Learning for the Blockchain)
35 pages, 465 KiB  
Article
SCH-Hunter: A Taint-Based Hybrid Fuzzing Framework for Smart Contract Honeypots
by Haoyu Zhang, Baotong Wang, Wenhao Fu and Leyi Shi
Information 2025, 16(5), 405; https://doi.org/10.3390/info16050405 - 14 May 2025
Abstract
Existing smart contract honeypot detection approaches exhibit high false negatives and positives due to (i) their inability to generate transaction sequences triggering order-dependent traps and (ii) their limited code coverage from traditional fuzzing’s random mutations. In this paper, we propose a hybrid fuzzing framework for smart contract honeypot detection based on taint analysis, SCH-Hunter. SCH-Hunter conducts source-code-level feature analysis of smart contracts and extracts data dependency relationships between variables from the generated Control Flow Graph to construct specific transaction sequences for fuzzing. A symbolic execution module is also introduced to resolve complex conditional branches that fuzzing alone fails to penetrate, enabling constraint solving. Furthermore, real-time dynamic taint propagation monitoring is implemented using taint analysis techniques, leveraging taint flow information to optimize seed mutation processes, thereby directing mutation resources toward high-value code regions. Finally, by integrating EVM (Ethereum Virtual Machine) code instrumentation with taint information flow analysis, the framework effectively identifies and detects security-sensitive operations, ultimately generating a comprehensive detection report. Empirical results are as follows. (i) For code coverage, SCH-Hunter performs better than the state-of-the-art tool, HoneyBadger, achieving higher average code coverage rates on both datasets, surpassing it by 4.79% and 17.41%, respectively. (ii) For detection capabilities, SCH-Hunter is not only roughly on par with HoneyBadger in terms of precision and recall rate but also capable of detecting a wider variety of smart contract honeypot techniques. (iii) For the evaluation of components, we conducted three ablation studies to demonstrate that the proposed modules in SCH-Hunter significantly improve the framework’s detection capability, code coverage, and detection efficiency, respectively. Full article
(This article belongs to the Topic Software Engineering and Applications)
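The dynamic taint propagation the abstract relies on can be shown in miniature: values carry labels naming their sources, and operations propagate the union of labels, so whatever reaches a security-sensitive sink reveals which inputs control it. This is a generic illustration of the technique, not SCH-Hunter's EVM-level implementation; the `tx.*` labels are hypothetical.

```python
class Taint:
    """Minimal dynamic taint tracking: each value carries a set of source
    labels, and arithmetic propagates the union of the operands' labels."""
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)

    def _lift(self, other):
        return other if isinstance(other, Taint) else Taint(other)

    def __add__(self, other):
        other = self._lift(other)
        return Taint(self.value + other.value, self.labels | other.labels)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._lift(other)
        return Taint(self.value * other.value, self.labels | other.labels)

    __rmul__ = __mul__

# Two attacker-controlled transaction inputs and one untainted constant:
amount = Taint(100, {"tx.value"})
bonus = Taint(7, {"tx.data"})
fee = 3
payout = amount + bonus * fee      # taint flows through the expression
print(payout.value)                # → 121
print(sorted(payout.labels))       # → ['tx.data', 'tx.value']
```

A fuzzer observing that `payout` (the sensitive sink) is tainted by `tx.value` and `tx.data` can concentrate its mutation budget on exactly those transaction fields, which is the seed-scheduling idea the abstract describes.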
21 pages, 4721 KiB  
Article
PMAKA-IoV: A Physical Unclonable Function (PUF)-Based Multi-Factor Authentication and Key Agreement Protocol for Internet of Vehicles
by Ming Yuan and Yuelei Xiao
Information 2025, 16(5), 404; https://doi.org/10.3390/info16050404 - 14 May 2025
Abstract
With the explosion of vehicle-to-infrastructure (V2I) communications in the internet of vehicles (IoV), it is still very important to ensure secure authentication and efficient key agreement because of the vulnerabilities in existing protocols, such as physical capture attacks, privacy leakage, and low computational efficiency. This paper proposes a physical unclonable function (PUF)-based multi-factor authentication and key agreement protocol tailored for V2I environments, named PMAKA-IoV. The protocol integrates hardware-based PUFs with biometric features, utilizing fuzzy extractors to mitigate biometric template risks, while employing dynamic pseudonyms and lightweight cryptographic operations to enhance anonymity and reduce overhead. Security analysis demonstrates its resilience against physical capture attacks, replay attacks, man-in-the-middle attacks, and desynchronization attacks, verified formally using the strand space model and the automated Scyther tool. Performance analysis demonstrates that, compared to other related schemes, the PMAKA-IoV protocol maintains lower communication and storage overhead. Full article
(This article belongs to the Special Issue Wireless Communication and Internet of Vehicles)
33 pages, 1317 KiB  
Article
Deglobalization Trends and Communication Variables: A Multifaceted Analysis from 2009 to 2023
by James A. Danowski and Han-Woo Park
Information 2025, 16(5), 403; https://doi.org/10.3390/info16050403 - 14 May 2025
Abstract
This paper examines the correlation between rising trade protectionism—an indicator of economic deglobalization—and key communication and social variables from 2009 to 2023. Drawing on data from Global Trade Alert, Nexis Uni, Google searches, and Facebook (via CrowdTangle), we investigate the prevalence of “deglobalization” discourse, language entropy, political polarization, protests, and digital authoritarianism. The analysis is framed by Optimal Information Theory, World Systems Theory, and other social science perspectives to explain how deglobalization may potentially reshape public communication. The results suggest that greater trade protectionism is associated with increased mentions of deglobalization, higher language entropy (i.e., less dominance of English), amplified political polarization, more frequent protest activity, and heightened digital authoritarian measures. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
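The "language entropy" variable in this abstract is the Shannon entropy of a language-share distribution: the more evenly content is spread across languages (i.e., the less English dominates), the higher the value. A minimal sketch with hypothetical share numbers, purely to make the metric concrete:

```python
from math import log2

def language_entropy(shares):
    """Shannon entropy (bits) of a language-share distribution; higher
    values mean less dominance by any single language."""
    total = sum(shares.values())
    return -sum((p / total) * log2(p / total)
                for p in shares.values() if p > 0)

# Hypothetical shares of online content by language:
dominant = {"en": 80, "zh": 10, "es": 5, "other": 5}
diverse = {"en": 30, "zh": 25, "es": 25, "other": 20}
print(round(language_entropy(dominant), 3))
print(round(language_entropy(diverse), 3))  # higher: English less dominant
```

Tracking this quantity year by year is one way to quantify the "less dominance of English" trend the study associates with rising protectionism.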
14 pages, 2383 KiB  
Article
Performance Variability in Public Clouds: An Empirical Assessment
by Sanjay Ahuja, Victor H. Lopez Chalacan and Hugo Resendez
Information 2025, 16(5), 402; https://doi.org/10.3390/info16050402 - 14 May 2025
Abstract
Cloud computing is now established as a viable alternative to on-premise infrastructure from both a system administration and cost perspective. This research provides insight into cluster computing performance and variability in cloud-provisioned infrastructure from two popular public cloud providers, Amazon Web Services (AWS) and Google Cloud Platform (GCP). In order to evaluate the performance variability between these two providers, synthetic benchmarks including memory bandwidth (STREAM), Interleaved or Random (IOR) performance, and computational CPU performance by NAS Parallel Benchmarks-Embarrassingly Parallel (NPB-EP) were used. A comparative examination of the two cloud platforms is provided in the context of our research methodology and design. We conclude with a discussion of the results of the experiment and an assessment of the suitability of public cloud platforms for certain types of computing workloads. Both AWS and GCP have their strong points, and this study provides recommendations depending on user needs for high throughput and/or performance predictability across CPU, memory, and Input/Output (I/O). In addition, the study discusses other factors to help users decide between cloud vendors such as ease of use, documentation, and types of instances offered. Full article
(This article belongs to the Special Issue Performance Engineering in Cloud Computing)
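The STREAM benchmark cited here measures sustained memory bandwidth with simple kernels; its "triad" kernel is `a[i] = b[i] + scalar * c[i]`, and bandwidth is bytes moved divided by elapsed time. The sketch below only illustrates that arithmetic: real STREAM is compiled C with repeated timed passes, and CPython overhead makes these absolute numbers far lower than true memory bandwidth.

```python
import time

def triad_bandwidth_mb_s(n=2_000_000, scalar=3.0):
    """STREAM-style triad a[i] = b[i] + scalar * c[i]; returns an
    approximate rate in MB/s, counting three nominal 8-byte floats
    touched per element (read b, read c, write a)."""
    b = [1.0] * n
    c = [2.0] * n
    start = time.perf_counter()
    a = [bi + scalar * ci for bi, ci in zip(b, c)]
    elapsed = time.perf_counter() - start
    assert len(a) == n  # keep the result live so the work isn't skipped
    bytes_moved = 3 * n * 8
    return bytes_moved / elapsed / 1e6

print(f"{triad_bandwidth_mb_s():.0f} MB/s")  # varies by machine and run
```

Repeating such a measurement many times and reporting the spread (e.g., the coefficient of variation) is how run-to-run performance variability of a cloud instance, the paper's central concern, is quantified.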
28 pages, 586 KiB  
Review
Review and Mapping of Search-Based Approaches for Program Synthesis
by Takfarinas Saber and Ning Tao
Information 2025, 16(5), 401; https://doi.org/10.3390/info16050401 - 14 May 2025
Abstract
Context: Program synthesis tools reduce software development costs by generating programs that perform tasks depicted by some specifications. Various methodologies have emerged for program synthesis, among which search-based algorithms have shown promising results. However, the proliferation of search-based program synthesis tools utilising diverse search algorithms and input types and targeting various programming tasks can overwhelm users seeking the most suitable tool. Objective: This paper contributes to the ongoing discourse by presenting a comprehensive review of search-based approaches employed for program synthesis. We aim to offer an understanding of the guiding principles of current methodologies by mapping them to the required type of user intent, the type of search algorithm, and the representation of the search space. Furthermore, we aim to map the diverse search algorithms to the type of code generation tasks in which they have shown success, which would serve as a guideline for applying search-based approaches for program synthesis. Method: We conducted a literature review of 67 academic papers on search-based program synthesis. Results: Through analysis, we identified and categorised the main techniques with their trends. We have also mapped and shed light on patterns connecting the problem, the representation and the search algorithm type. Conclusions: Our study summarises the field of search-based program synthesis and provides an entry point to the acumen and expertise of the search-based community on program synthesis. Full article
(This article belongs to the Section Information Applications)
17 pages, 1544 KiB  
Review
Transforming Auditing in the AI Era: A Comprehensive Review
by Nguyen Thi Thanh Binh
Information 2025, 16(5), 400; https://doi.org/10.3390/info16050400 - 14 May 2025
Abstract
This study explores how auditing is evolving in the context of Artificial Intelligence (AI) by analyzing a dataset of 465 peer-reviewed publications from 1982 to 2024, sourced from Scopus and Web of Science. Using Latent Dirichlet Allocation (LDA), an unsupervised machine learning method, the study identifies ten key thematic areas reflecting how AI increasingly intersects with auditing research. The analysis suggests that topics related to integrating AI and data-driven technologies are especially prominent. The theme “AI in Auditing” emerges as the most frequently occurring topic, comprising approximately 33.4% of the discussion. In comparison, “Data Security in Auditing” follows at 21.2%, indicating sustained scholarly concern with the integrity and protection of digital audit data. Other notable themes, such as “Auditing and Accounting Technologies” (12.7%) and “AI and Machine Learning in Auditing” (11.1%), suggest a continuing interest in the development and application of advanced technologies within auditing. The analysis also points to the presence of more specialized or emerging areas, including “Ethical AI in Audit Systems”, “Image Processing in Audit”, and “Political Influence in Auditing”, though these appear less frequently. Topics related to environmental ethics and racial and ethnic disparities in auditing were identified. However, their low representation (0.4% each) may indicate that such issues remain relatively peripheral in current academic discourse. The study provides a data-driven overview of how AI-related topics are being discussed in the auditing literature. It may help identify areas of growing interest and potential research gaps. The findings could have implications for researchers, practitioners, and policymakers by offering insights into the technological and ethical priorities shaping the field. Full article
(This article belongs to the Section Artificial Intelligence)
40 pages, 3397 KiB  
Systematic Review
Intelligent Supply Chain Management: A Systematic Literature Review on Artificial Intelligence Contributions
by António R. Teixeira, José Vasconcelos Ferreira and Ana Luísa Ramos
Information 2025, 16(5), 399; https://doi.org/10.3390/info16050399 - 13 May 2025
Viewed by 342
Abstract
This systematic literature review investigates the recent applications of artificial intelligence (AI) in supply chain management (SCM), particularly in the domains of resilience, process optimization, sustainability, and implementation challenges. The study is motivated by gaps identified in previous reviews, which often exclude literature published after 2020 and lack an integrated analysis of AI’s contributions across multiple supply chain phases. The review aims to provide an updated synthesis of AI technologies—such as machine learning, deep learning, and generative AI—and their practical implementation between 2021 and 2024. Following the PRISMA framework, a rigorous methodology was applied using the Scopus database, complemented by bibliometric and content analyses. A total of 66 studies were selected based on predefined inclusion criteria and evaluated for methodological quality and thematic relevance. The findings reveal a diverse classification of AI applications across strategic and operational SCM phases and highlight emerging techniques like explainable AI, neurosymbolic systems, and federated learning. The review also identifies persistent barriers such as data governance, ethical concerns, and scalability. Future research should focus on hybrid AI–human collaboration, transparency through explainable models, and integration with technologies such as IoT and blockchain. This review contributes to the literature by offering a structured synthesis of AI’s transformative impact on SCM and by outlining key research directions to guide future investigations and managerial practice. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)
37 pages, 1053 KiB  
Article
Innovating Cyber Defense with Tactical Simulators for Management-Level Incident Response
by Dalibor Gernhardt, Stjepan Groš and Gordan Gledec
Information 2025, 16(5), 398; https://doi.org/10.3390/info16050398 - 13 May 2025
Viewed by 192
Abstract
This study introduces a novel approach to cyber defense exercises, emphasizing the emulation of technical tasks to create realistic incident response scenarios. Unlike traditional cyber ranges or tabletop exercises, this method enables both management and technical leaders to engage in decision-making processes without requiring a full technical setup. The initial observations indicate that exercises based on the emulation of technical tasks require less preparation time compared to conventional methods, addressing the growing demand for efficient training solutions. This study aims to assist organizations in developing their own cyber defense exercises by providing practical insights into the benefits and challenges of this approach. The key advantages observed include improved procedural compliance, inter-team communication, and a better understanding of the chain of command as participants navigate realistic, organization-wide scenarios. However, new challenges have also emerged, such as managing the simulation tempo and balancing technical complexity—particularly in offense–defense team configurations. This study proposes a structured and scalable approach as a practical alternative to the traditional training methods, aligning better with the evolving demands of modern cyber defense. Full article
(This article belongs to the Special Issue Data Privacy Protection in the Internet of Things)
37 pages, 1496 KiB  
Article
Machine Learning for Chinese Corporate Fraud Prediction: Segmented Models Based on Optimal Training Windows
by Chang Chuan Goh, Yue Yang, Anthony Bellotti and Xiuping Hua
Information 2025, 16(5), 397; https://doi.org/10.3390/info16050397 - 12 May 2025
Viewed by 129
Abstract
We propose a comprehensive and practical framework for Chinese corporate fraud prediction which incorporates classifiers, class imbalance, population drift, segmented models, and model evaluation using machine learning algorithms. Based on a three-stage experiment, we first find that the random forest classifier has the best performance in predicting corporate fraud among 17 machine learning models. We then implement the sliding time window approach to handle population drift, and the optimal training window found demonstrates the existence of population drift in fraud detection and the need to address it for improved model performance. Using the best machine learning model and optimal training window, we build a general model and segmented models to compare fraud types and industries based on their respective predictive performance via four evaluation metrics and top features using SHAP. The results indicate that segmented models have a better predictive performance than the general model for fraud types with low fraud rates and are as good as the general model for most industries when controlling for training set size. The dissimilarities between the top feature sets of the general and segmented models suggest that segmented models are useful in providing a better understanding of fraud occurrence. Full article
(This article belongs to the Section Artificial Intelligence)
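The sliding-training-window idea can be sketched independently of any particular classifier: for a fixed test period, try several window lengths of past years and keep the one with the best validation score. The data and scoring function below are toy stand-ins for the paper's trained models and evaluation metrics:

```python
# Sketch of sliding-window selection under population drift
# (all records and the score function are hypothetical).
def best_window(records, test_year, max_len, score_fn):
    """records: list of (year, features, label); returns (length, score)."""
    best = (None, float("-inf"))
    for length in range(1, max_len + 1):
        train = [r for r in records
                 if test_year - length <= r[0] < test_year]
        if not train:
            continue
        s = score_fn(train)
        if s > best[1]:
            best = (length, s)
    return best

# Toy data where drift makes only the most recent year informative.
data = [(2018, None, 0.2), (2019, None, 0.3),
        (2020, None, 0.8), (2021, None, 0.9)]
# Toy score: mean label over the window (a proxy for validation quality).
length, score = best_window(data, 2022, 4,
                            lambda tr: sum(r[2] for r in tr) / len(tr))
print(length, score)
```

With real data, `score_fn` would train the chosen classifier on the window and return a held-out metric such as AUC.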
24 pages, 5732 KiB  
Article
Performance Analysis of Reconfigurable Intelligent Surface-Assisted Millimeter Wave Massive MIMO System Under 3GPP 5G Channels
by Vishnu Vardhan Gudla, Vinoth Babu Kumaravelu, Agbotiname Lucky Imoize, Francisco R. Castillo Soria, Anjana Babu Sujatha, Helen Sheeba John Kennedy, Hindavi Kishor Jadhav, Arthi Murugadass and Samarendra Nath Sur
Information 2025, 16(5), 396; https://doi.org/10.3390/info16050396 - 12 May 2025
Viewed by 227
Abstract
Reconfigurable intelligent surfaces (RIS) and massive multiple-input multiple-output (M-MIMO) are two major enabling technologies for next-generation networks, capable of providing spectral efficiency (SE), energy efficiency (EE), array gain, spatial multiplexing, and reliability. This work introduces an RIS-assisted millimeter wave (mmWave) M-MIMO system to harvest the advantages of RIS and mmWave M-MIMO systems that are required for beyond fifth-generation (B5G) systems. The performance of the proposed system is evaluated under 3GPP TR 38.901 V16.1.0 5G channel models. Specifically, we considered the indoor hotspot (InH)—indoor office and urban microcellular (UMi)—street canyon channel environments at the 28 GHz and 73 GHz mmWave frequencies. Using the SimRIS channel simulator, the channel matrices were generated for the required number of realizations. Extensive Monte Carlo simulations were executed to evaluate the proposed system’s average bit error rate (ABER) and sum rate performance. Increasing the number of transmit antennas from 4 to 64 yielded a performance gain of ∼10 dB for both the InH—indoor office and UMi—street canyon channel environments, and increasing the number of RIS elements from 64 to 1024 yielded a further ∼7 dB gain. ABER performance at 28 GHz was better than at 73 GHz by at least ∼5 dB for the considered channels. The impact of finite-resolution RIS on the considered 5G channel models was also evaluated: ABER degraded by ∼6 dB for a 2-bit finite-resolution RIS compared to an ideal infinite-resolution RIS. Full article
(This article belongs to the Special Issue Advances in Telecommunication Networks and Wireless Technology)
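The ABER figures above come from Monte Carlo simulation. The minimal sketch below shows the shape of such an experiment for uncoded BPSK over an AWGN channel; the actual study uses SimRIS-generated mmWave channels and RIS phase configurations, which are not reproduced here:

```python
# Minimal Monte Carlo bit-error-rate experiment (BPSK over AWGN).
import numpy as np

rng = np.random.default_rng(0)
n_bits = 100_000
snr_db = 6.0
snr = 10 ** (snr_db / 10)                   # linear Eb/N0

bits = rng.integers(0, 2, n_bits)
symbols = 2 * bits - 1                      # BPSK mapping {0,1} -> {-1,+1}
noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_bits)
received = symbols + noise

decisions = (received > 0).astype(int)      # hard-decision detection
ber = float(np.mean(decisions != bits))
print(f"BER at {snr_db} dB: {ber:.4f}")
```

Sweeping `snr_db` and repeating per channel realization produces the ABER-versus-SNR curves reported in such papers.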
24 pages, 3421 KiB  
Article
Cloud-Based Medical Named Entity Recognition: A FIT4NER-Based Approach
by Philippe Tamla, Florian Freund and Matthias Hemmje
Information 2025, 16(5), 395; https://doi.org/10.3390/info16050395 - 12 May 2025
Viewed by 198
Abstract
This paper presents a cloud-based system that builds upon the FIT4NER framework to support medical experts in training machine learning models for named entity recognition (NER) using Microsoft Azure. The system is designed to simplify complex cloud configurations while providing an intuitive interface for managing and converting large-scale training and evaluation datasets across formats such as PDF, DOCX, TXT, BioC, spaCyJSON, and CoNLL-2003. It also enables the configuration of transformer-based spaCy pipelines and orchestrates Azure cloud services for scalable and efficient NER model training. Following the structured Nunamaker research methodology, the paper introduces the research context, surveys the state of the art, and highlights key challenges faced by medical professionals in cloud-based NER. It then details the modeling, implementation, and integration of the system. Evaluation results—both qualitative and quantitative—demonstrate enhanced usability, scalability, and accessibility for non-technical users in medical domains. The paper concludes with insights gained and outlines directions for future work. Full article
21 pages, 472 KiB  
Article
CDAS: A Secure Cross-Domain Data Sharing Scheme Based on Blockchain
by Jiahui Jiang, Tingrui Pei, Jiahao Chen and Zhiwen Hou
Information 2025, 16(5), 394; https://doi.org/10.3390/info16050394 - 12 May 2025
Viewed by 213
Abstract
In the current context of the wide application of Internet of Things (IoT) technology, cross-domain data sharing based on the industrial IoT (IIoT) has become key to maximizing data value, but it also faces many challenges. In response to the security and privacy issues in cross-domain data sharing, we propose CDAS, a cross-domain secure data sharing scheme based on multiple blockchains. The scheme first organizes the cross-domain blockchain in layers, with a blockchain layer close to the edge devices assisting them in completing on-chain data sharing. In addition, we combine smart contract design to implement attribute-based access control (ABAC) and anonymous identity registration. This method simplifies device resource access by minimizing middleware confirmation, double-checking device access rights, and preventing redundant requests caused by illegal access attempts. For data privacy, confidential data are stored in IPFS, and an improved searchable encryption (SE) scheme secures the overall sharing process: users locate the required data by searching ciphertext links recorded in the blockchain system, ensuring the secure transmission of private data. Compared with traditional ABAC schemes, we add modules for data privacy protection and anonymous authentication to further protect user data. Compared with access-control schemes based on attribute encryption, our scheme has advantages in the time complexity of key algorithms such as policy matching and encryption, and the edge blockchain layer reduces the burden on devices with limited computing resources. Security and experimental analyses show that the scheme addresses the security and efficiency problems of cross-domain data sharing in the IIoT. Full article
(This article belongs to the Special Issue Blockchain, Technology and Its Application)
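The ABAC matching step that such schemes encode in smart contracts can be illustrated with a small attribute check. The policy format and attribute names below are assumptions for illustration, not the paper's contract interface:

```python
# Illustrative ABAC policy matching: a request is granted only if the
# subject's attributes satisfy every clause of the policy.
def match_policy(attrs: dict, policy: dict) -> bool:
    """policy maps attribute name -> set of allowed values."""
    return all(attrs.get(name) in allowed
               for name, allowed in policy.items())

policy = {"domain": {"factory-a"}, "role": {"engineer", "auditor"}}

assert match_policy({"domain": "factory-a", "role": "engineer"}, policy)
assert not match_policy({"domain": "factory-b", "role": "engineer"}, policy)
```

In the paper's setting this check runs on-chain, so a device with non-matching attributes is rejected before any middleware confirmation or data retrieval takes place.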
21 pages, 3195 KiB  
Article
YOLO-LSM: A Lightweight UAV Target Detection Algorithm Based on Shallow and Multiscale Information Learning
by Chenxing Wu, Changlong Cai, Feng Xiao, Jiahao Wang, Yulin Guo and Longhui Ma
Information 2025, 16(5), 393; https://doi.org/10.3390/info16050393 - 9 May 2025
Viewed by 330
Abstract
To address challenges such as large-scale variations, the high density of small targets, and the large number of parameters in deep learning-based target detection models, which limit their deployment on UAV platforms with fixed performance and limited computational resources, a lightweight UAV target detection algorithm, YOLO-LSM, is proposed. First, to mitigate the loss of small target information, an Efficient Small Target Detection Layer (ESTDL) is developed, alongside structural improvements to the baseline model to reduce parameters. Second, a Multiscale Lightweight Convolution (MLConv) is designed, and a lightweight feature extraction module, MLCSP, is constructed to enhance the extraction of detailed information. Focaler inner IoU is incorporated to improve bounding box matching and localization, thereby accelerating model convergence. Finally, a novel feature fusion network, DFSPP, is proposed to enhance accuracy by optimizing the selection and adjustment of target scale ranges. Validation on the VisDrone2019 and Tiny Person datasets demonstrates that, compared to the baseline network, YOLO-LSM achieves mAP0.5 improvements of 6.9 and 3.5 percentage points, respectively, with a parameter count of 1.9 M, a reduction of approximately 72%. Unlike previous work on medical detection, this study tailors YOLO-LSM for UAV-based small object detection by introducing targeted improvements in feature extraction, detection heads, and loss functions, achieving better adaptation to aerial scenarios. Full article
19 pages, 3724 KiB  
Article
SYNCode: Synergistic Human–LLM Collaboration for Enhanced Data Annotation in Stack Overflow
by Meng Xia, Shradha Maharjan, Tammy Le, Will Taylor and Myoungkyu Song
Information 2025, 16(5), 392; https://doi.org/10.3390/info16050392 - 9 May 2025
Viewed by 304
Abstract
Large language models (LLMs) have rapidly advanced natural language processing, showcasing remarkable effectiveness as automated annotators across various applications. Despite their potential to significantly reduce annotation costs and expedite workflows, annotations produced solely by LLMs can suffer from inaccuracies and inherent biases, highlighting the necessity of maintaining human oversight. In this article, we present a synergistic human–LLM collaboration approach for data annotation enhancement (SYNCode). This framework is designed explicitly to facilitate collaboration between humans and LLMs for annotating complex, code-centric datasets such as Stack Overflow. The proposed approach involves an integrated pipeline that initially employs TF-IDF analysis for quick identification of relevant textual elements. Subsequently, we leverage advanced transformer-based models, specifically NLP Transformer and UniXcoder, to capture nuanced semantic contexts and code structures, generating more accurate preliminary annotations. Human annotators then engage in iterative refinement, validating and adjusting annotations to enhance accuracy and mitigate biases introduced during automated labeling. To operationalize this synergistic workflow, we developed the SYNCode prototype, featuring an interactive graphical interface that supports real-time collaborative annotation between humans and LLMs. This enables annotators to iteratively refine and validate automated suggestions effectively. Our integrated human–LLM collaborative methodology demonstrates considerable promise in achieving high-quality, reliable annotations, particularly for domain-specific and technically demanding datasets, thereby enhancing downstream tasks in software engineering and natural language processing. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Information Systems)
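The TF-IDF first pass described above can be sketched in plain Python; the toy posts below stand in for Stack Overflow content:

```python
# Minimal TF-IDF scorer for spotting salient terms in a document
# (illustrative mini-corpus, not the SYNCode pipeline itself).
import math
from collections import Counter

docs = [
    "how to parse json in python",
    "python list comprehension example",
    "parse xml with java",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)
# Document frequency: in how many documents each term appears.
df = Counter(t for doc in tokenized for t in set(doc))

def tfidf(doc_tokens):
    tf = Counter(doc_tokens)
    return {t: (tf[t] / len(doc_tokens)) * math.log(n_docs / df[t])
            for t in tf}

scores = tfidf(tokenized[0])
# "json" appears in only one document, so it outscores the shared "parse".
print(sorted(scores, key=scores.get, reverse=True)[:3])
```

In the full pipeline these quick scores only shortlist candidate elements; the transformer models (UniXcoder etc.) and human reviewers refine the actual annotations.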
22 pages, 5596 KiB  
Article
A Fully Decentralized Web Application Framework with Dynamic Multi-Point Publishing and Shortest Access Path
by Bin Yu, Yuhui Fan, Peng Zhao, Xiaoyan Li and Lei Chen
Information 2025, 16(5), 391; https://doi.org/10.3390/info16050391 - 8 May 2025
Viewed by 224
Abstract
Decentralized applications (DApps) have found extensive use across various industries. However, they still face several issues that need to be resolved. Currently, DApps are in a semi-decentralized stage, as only partial decentralization has been achieved. This paper presents FDW, a fully decentralized web application framework, which mainly includes the DWeb market, developer client, publisher client, and visitor client. The DWeb (Decentralized Web) market is established to manage all DWebs. In the DWeb market, developers can register, upload, and maintain DWebs; publishers can download, validate, and deploy DWebs; and visitors can browse DWebs and provide content. To guarantee the reliable operation of DWebs, multiple publisher nodes deploy a DWeb through dynamic multi-point publishing. By adopting the shortest access path, client nodes can efficiently access any DWeb from the closest publishing node. Additionally, the incentive and governance mechanisms encourage collaboration among all participants, ensuring the security of FDW. A prototype system of FDW has been developed, which consists of a DWeb container and an example DWeb. An analysis and evaluation of the decentralization, scalability, and security of FDW are provided. Compared with other related schemes, FDW shows certain advantages in these aspects. Full article
(This article belongs to the Section Information Systems)
17 pages, 3077 KiB  
Article
A Process Tree-Based Incomplete Event Log Repair Approach
by Qiushi Wang, Liye Zhang, Rui Cao, Na Guo, Haijun Zhang and Cong Liu
Information 2025, 16(5), 390; https://doi.org/10.3390/info16050390 - 8 May 2025
Viewed by 207
Abstract
The low quality of business process event logs—particularly the widespread occurrence of incomplete traces—poses significant challenges to the reliability, accuracy, and efficiency of process mining analysis. In real-world scenarios, these data imperfections severely undermine the practical value of process mining techniques. The primary research problem addressed in this study is the inefficiency and limited effectiveness of existing Petri-net-based incomplete trace repair approaches, which often struggle to accurately recover missing events in the presence of complex and nested loop structures. To tackle these limitations, we aim to develop a faster and more accurate approach for repairing incomplete event logs. Specifically, we propose a novel repair approach based on process trees as an alternative to traditional Petri nets, thus alleviating issues such as state space explosion. Our approach incorporates process tree model decomposition and innovative branch indexing techniques, enabling rapid localization of candidate branches for repair and a significant reduction in the solution space. Furthermore, by leveraging activity information within the traces, our approach achieves efficient and precise repair of loop nodes through a single traversal of the process tree. To comprehensively evaluate our approach, we conduct experiments on four real-life and five synthetic event logs, comparing performance against state-of-the-art techniques. The experimental results demonstrate that our approach consistently delivers repair accuracies exceeding 70%, with time efficiency improved by up to three orders of magnitude. These findings validate the superior accuracy, efficiency, and scalability of the proposed approach, highlighting its strong potential for practical applications in business process mining. Full article
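The branch-indexing idea (mapping each activity label to the path of the branch containing it, so repair candidates can be located without searching the whole tree) can be sketched as follows; the node representation is an assumption, not the paper's implementation:

```python
# Illustrative branch index over a process tree: leaves are activities,
# inner nodes carry operators (seq, xor, and, loop).
class Node:
    def __init__(self, op=None, label=None, children=()):
        self.op, self.label, self.children = op, label, list(children)

def build_index(node, path=(), index=None):
    """Map each activity label to the child-index path of its branch."""
    index = {} if index is None else index
    if node.label is not None:               # leaf = activity
        index[node.label] = path
    for i, child in enumerate(node.children):
        build_index(child, path + (i,), index)
    return index

# seq( a, xor( b, c ), d )
tree = Node(op="seq", children=[
    Node(label="a"),
    Node(op="xor", children=[Node(label="b"), Node(label="c")]),
    Node(label="d"),
])
index = build_index(tree)
print(index)   # e.g. {'a': (0,), 'b': (1, 0), 'c': (1, 1), 'd': (2,)}
```

Given an incomplete trace, a missing activity's index entry immediately narrows the repair search to one branch, which is the source of the reported reduction in solution space.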
35 pages, 1866 KiB  
Systematic Review
A Systematic Literature Review on Serious Games Methodologies for Training in the Mining Sector
by Claudia Gómez, Paola Vallejo and Jose Aguilar
Information 2025, 16(5), 389; https://doi.org/10.3390/info16050389 - 8 May 2025
Viewed by 266
Abstract
High-risk industries like mining must address occupational safety to reduce accidents and fatalities. Training through role-playing, simulations, and Serious Games (SGs) can reduce occupational risks. This study aims to conduct a systematic literature review (SLR) on SG methodologies for the mining sector. This review was based on a methodology inspired by the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three research questions were formulated to explore how SGs contribute to immediate feedback, brain stimulation, and training for high-risk scenarios. The review initially identified 1987 studies, which were reduced to 30 relevant publications following a three-phase process: (1) A search string based on the three research questions was defined and applied to databases. (2) Publications were filtered by title and abstract. (3) A full-text reading was conducted to select relevant publications. The SLR showed SG development methodologies with structured processes that are adaptable to any case study. Additionally, it was found that Virtual Reality, despite its implementation costs, is the most widely used technology for safety training, inspection, and operation of heavy machinery. The first conclusion of this SLR indicates the lack of methodologies for the development of SGs for training in the mining field and the relevance of carrying out specific methodological studies in this area. Additionally, the main findings obtained from this SLR are the following: (1) Modeling languages (e.g., GML and UML) and metamodeling are important in SG development. (2) SGs are a significant mechanism for cooperative and participative learning strategies. (3) Virtual Reality technology is widely used in safe virtual environments for mining training. (4) There is a need for methodologies that integrate the specification of cognitive functions with the affective part of the users for SGs suitable for learning environments. Finally, this review highlights critical gaps in current research and underscores the need for more integrative approaches to SG development. Full article
(This article belongs to the Section Review)
32 pages, 10773 KiB  
Article
E-Exam Cheating Detection System for Moodle LMS
by Ahmed S. Shatnawi, Fahed Awad, Dheya Mustafa, Abdel-Wahab Al-Falaky, Mohammed Shatarah and Mustafa Mohaidat
Information 2025, 16(5), 388; https://doi.org/10.3390/info16050388 - 7 May 2025
Viewed by 209
Abstract
The rapid growth of online education has raised significant concerns about identifying and addressing academic dishonesty in online exams. Although existing solutions aim to prevent and detect such misconduct, they often face limitations that make them impractical for many educational institutions. This paper introduces a novel online education integrity system utilizing well-established statistical methods to identify academic dishonesty. The system has been developed and integrated as an open-source Moodle plug-in. The evaluation involved utilizing an open-source Moodle quiz log database and creating synthetic benchmarks that represented diverse forms of academic dishonesty. The findings indicate that the system accurately identifies instances of academic dishonesty. The anticipated deployment includes institutions that rely on the Moodle Learning Management System (LMS) as their primary platform for administering online exams. Full article
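One statistical signal such a system can draw on is unusually high answer agreement between pairs of students. The sketch below flags pairs whose agreement lies more than two standard deviations above the cohort mean; the data, scoring, and threshold are illustrative assumptions, not the plug-in's actual method:

```python
# Flag suspiciously similar answer patterns in quiz logs (toy data).
from itertools import combinations
from statistics import mean, pstdev

answers = {                      # student -> chosen option per question
    "s1": "ABCDA", "s2": "ABCDA",   # identical pattern -> suspicious
    "s3": "BCADD", "s4": "CBDAC",
}

def agreement(a, b):
    """Fraction of questions answered identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

pairs = {p: agreement(answers[p[0]], answers[p[1]])
         for p in combinations(answers, 2)}
mu, sigma = mean(pairs.values()), pstdev(pairs.values())
flagged = [p for p, s in pairs.items() if s > mu + 2 * sigma]
print(flagged)
```

A production system would combine several such signals (timing, IP overlap, navigation order) before raising an alert, since agreement alone yields false positives on easy questions.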
19 pages, 254 KiB  
Article
Human-Centered AI and the Future of Translation Technologies: What Professionals Think About Control and Autonomy in the AI Era
by Miguel A. Jiménez-Crespo
Information 2025, 16(5), 387; https://doi.org/10.3390/info16050387 - 7 May 2025
Viewed by 343
Abstract
Two key pillars of human-centered AI (HCAI) approaches are “control” and “autonomy”. To date, little is known about professional translators’ attitudes towards these concepts in the AI era. This paper explores this issue through a survey study of US-based professional translators in mid-2024. Methodologically, this paper presents a qualitative analysis of open-ended questions through thematic coding to identify themes related to (1) present conceptualizations of control and autonomy over translation technologies, (2) future attitudes towards control and autonomy in the AI era, (3) main threats and challenges, and (4) recommendations to developers to enhance perceptions of control and autonomy. The results show that professionals perceive control and autonomy differently in both the present and the future. The main themes are usability, the ability to turn on and off technologies or reject jobs that require specific technologies, collaboration with developers, and differences in working with LSPs versus private clients. In terms of future attitudes, the most frequent ones are post-editing, quality, communicating or informing clients, LSPs or society at large, and creativity or rates. Overall, the study helps identify how professionals conceptualize control and autonomy and what specific issues could help foster the development of truly human-centered AI in the translation profession. Full article
(This article belongs to the Special Issue Human and Machine Translation: Recent Trends and Foundations)
21 pages, 571 KiB  
Article
DDA-MSLD: A Multi-Feature Speech Lie Detection Algorithm Based on a Dual-Stream Deep Architecture
by Pengfei Guo, Shucheng Huang and Mingxing Li
Information 2025, 16(5), 386; https://doi.org/10.3390/info16050386 - 6 May 2025
Viewed by 172
Abstract
Speech lie detection is a technique that analyzes speech signals in detail to determine whether a speaker is lying. It has significant application value and has attracted attention from various fields. However, existing speech lie detection algorithms still have certain limitations. These algorithms fail to fully explore manually extracted features based on prior knowledge and also neglect the dynamic characteristics of speech as well as the impact of temporal context, resulting in reduced detection accuracy and generalization. To address these issues, this paper proposes a multi-feature speech lie detection algorithm based on a dual-stream deep architecture (DDA-MSLD). This algorithm employs a dual-stream structure to learn different types of features simultaneously. Firstly, it combines a gated recurrent unit (GRU) network with the attention mechanism. This combination enables the network to more comprehensively capture the context of speech signals and focus on the parts that are more critical for lie detection. It can perform in-depth sequence pattern analysis on manually extracted static prosodic features and nonlinear dynamic features, obtaining high-order dynamic features related to lies. Secondly, the encoder part of the transformer is used to simultaneously capture the macroscopic structure and microscopic details of speech signals, specifically for high-precision feature extraction of Mel spectrogram features, obtaining deep features related to lies. This dual-stream structure processes various features of speech simultaneously, describing the state of speech signals from different perspectives and thereby improving detection accuracy and generalization. Experiments were conducted on the multi-person scenario lie detection dataset CSC, and the results show that this algorithm outperformed existing state-of-the-art algorithms in detection performance. Considering the significant differences in deceptive speech across lying scenarios, and to further evaluate the algorithm's generalization performance, a single-person-scenario Chinese lie speech dataset, Local, was constructed and used for additional experiments. The results indicate that the algorithm generalizes well across scenarios. Full article
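The attention step on top of the GRU stream amounts to scoring each timestep's hidden state, softmaxing the scores, and taking the weighted sum as the utterance representation. The NumPy sketch below uses assumed shapes and random weights; in the real model these are learned end to end:

```python
# Attention pooling over a sequence of GRU hidden states (toy shapes).
import numpy as np

rng = np.random.default_rng(0)
T, H = 6, 8                       # timesteps, hidden size (assumed)
h = rng.normal(size=(T, H))       # GRU outputs, one vector per frame
w = rng.normal(size=H)            # scoring vector (learned in practice)

scores = h @ w                    # (T,) relevance of each timestep
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()              # softmax attention weights
context = alpha @ h               # (H,) attention-pooled representation

assert np.isclose(alpha.sum(), 1.0) and context.shape == (H,)
```

The pooled `context` vector is what a downstream classifier would consume; the second (transformer) stream produces an analogous representation from the Mel spectrogram before the two are fused.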