Computers, Volume 14, Issue 6 (June 2025) – 38 articles

Cover Story: Today’s complex business problems demand robust decision support. In this paper, we systematically review how Multi-Criteria Decision Making (MCDM) methods, such as AHP, TOPSIS, fuzzy logic, and ANP, enhance Management Information Systems for both strategic and operational decision-making. By analyzing 40 peer-reviewed studies, we categorize MCDM applications, map method strengths to specific MIS tasks, and offer theoretical guidance for selecting suitable techniques. These insights will aid scholars and practitioners in harnessing MCDM to improve data-driven, transparent decisions in dynamic organizational environments.
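For readers unfamiliar with the MCDM methods named in the cover story, the sketch below illustrates the core TOPSIS ranking steps on a small, entirely hypothetical decision matrix; the alternatives, criteria, and weights are invented for illustration and are not drawn from the reviewed studies.

```python
import numpy as np

# Hypothetical decision matrix: 3 alternatives x 3 benefit criteria.
X = np.array([[250.0, 7.0, 0.8],
              [200.0, 9.0, 0.6],
              [300.0, 6.0, 0.9]])
weights = np.array([0.5, 0.3, 0.2])  # illustrative criterion weights

# 1. Vector-normalize each criterion column and apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2. Ideal and anti-ideal solutions (for benefit criteria, max is ideal).
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)

# 3. Distances to the ideal/anti-ideal points and closeness coefficient.
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti_ideal, axis=1)
closeness = d_neg / (d_pos + d_neg)

print("Ranking (best first):", np.argsort(-closeness))
```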
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
16 pages, 1456 KiB  
Article
Informing Disaster Recovery Through Predictive Relocation Modeling
by Chao He and Da Hu
Computers 2025, 14(6), 240; https://doi.org/10.3390/computers14060240 - 19 Jun 2025
Viewed by 299
Abstract
Housing recovery represents a critical component of disaster recovery, and accurately forecasting household relocation decisions is essential for guiding effective post-disaster reconstruction policies. This study explores the use of machine learning algorithms to improve the prediction of household relocation in the aftermath of disasters. Leveraging data from 1304 completed interviews conducted as part of the Displaced New Orleans Residents Survey (DNORS) following Hurricane Katrina, we evaluate the performance of Logistic Regression (LR), Random Forest (RF), and Weighted Support Vector Machine (WSVM) models. Results indicate that WSVM significantly outperforms LR and RF, particularly in identifying the minority class of relocated households, achieving the highest F1 score. Key predictors of relocation include homeownership, extent of housing damage, and race. By integrating variable importance rankings and partial dependence plots, the study also enhances interpretability of machine learning outputs. These findings underscore the value of advanced predictive models in disaster recovery planning, particularly in geographically vulnerable regions like New Orleans where accurate relocation forecasting can guide more effective policy interventions. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
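As a rough illustration of the class-weighting idea behind the WSVM model in the abstract above (not the authors' implementation, and using synthetic stand-in data rather than the DNORS survey), a scikit-learn SVM can penalize errors on the minority "relocated" class more heavily:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Synthetic stand-in features (e.g., damage extent, tenure) and an imbalanced relocation label.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=1000) > 1.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" up-weights the rare relocated class, as in a weighted SVM.
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print("Minority-class F1:", f1_score(y_te, clf.predict(X_te)))
```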

24 pages, 2410 KiB  
Article
UA-HSD-2025: Multi-Lingual Hate Speech Detection from Tweets Using Pre-Trained Transformers
by Muhammad Ahmad, Muhammad Waqas, Ameer Hamza, Sardar Usman, Ildar Batyrshin and Grigori Sidorov
Computers 2025, 14(6), 239; https://doi.org/10.3390/computers14060239 - 18 Jun 2025
Cited by 1 | Viewed by 523
Abstract
The rise of social media has improved communication but also amplified the spread of hate speech, creating serious societal risks. Automated detection remains difficult due to subjectivity, linguistic diversity, and implicit language. While prior research focuses on high-resource languages, this study addresses the underexplored multilingual challenges of Arabic and Urdu hate speech through a comprehensive approach. To achieve this objective, this study makes four key contributions. First, we created a unique multilingual, manually annotated binary and multi-class dataset (UA-HSD-2025) sourced from X, which covers the five most important multi-class categories of hate speech. Second, we created detailed annotation guidelines to produce a robust, high-quality hate speech dataset. Third, we explore two strategies to address the challenges of multilingual data: a joint multilingual approach and a translation-based approach. The translation-based approach involves converting all input text into a single target language before applying a classifier. In contrast, the joint multilingual approach employs a unified model trained to handle multiple languages simultaneously, enabling it to classify text across different languages without translation. Finally, we ran 54 experiments spanning traditional machine learning with TF-IDF features, deep learning with pre-trained word embeddings such as FastText and GloVe, and pre-trained language models with advanced contextual embeddings. Based on the analysis of the results, our language model (XLM-R) outperformed traditional supervised learning approaches, achieving 0.99 accuracy in binary classification for the Arabic, Urdu, and joint multilingual datasets, and 0.95, 0.94, and 0.94 accuracy in multi-class classification for the joint multilingual, Arabic, and Urdu datasets, respectively. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
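As a hedged sketch of the TF-IDF machine-learning baseline mentioned in the abstract (the transformer models such as XLM-R are not reproduced here, and the texts and labels below are placeholders rather than UA-HSD-2025 data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder tweets and binary hate/non-hate labels; the real dataset is multilingual (Arabic/Urdu).
texts = ["example tweet one", "another example tweet", "a third sample", "one more sample"]
labels = [0, 1, 0, 1]

# Character n-grams are a common choice for morphologically rich, code-mixed text.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)
print(model.predict(["yet another tweet"]))
```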

23 pages, 1176 KiB  
Article
Bridging the AI Gap in Medical Education: A Study of Competency, Readiness, and Ethical Perspectives in Developing Nations
by Mostafa Aboulnour Salem, Ossama M. Zakaria, Eman Abdulaziz Aldoughan, Zeyad Aly Khalil and Hazem Mohamed Zakaria
Computers 2025, 14(6), 238; https://doi.org/10.3390/computers14060238 - 17 Jun 2025
Cited by 2 | Viewed by 561
Abstract
Background: The rapid integration of artificial intelligence (AI) into medical education in developing nations necessitates that educators develop comprehensive AI competencies and readiness. This study explores AI competence and readiness among medical educators in higher education, focusing on the five key dimensions of the ADELE technique: (A) AI Awareness, (D) Development of AI Skills, (E) AI Efficacy, (L) Leanings Towards AI, and (E) AI Enforcement. Structured surveys were used to assess AI competencies and readiness among medical educators for the sustainable integration of AI in medical education. Methods: A cross-sectional study was conducted using a 40-item survey distributed to 253 educators from the Middle East (Saudi Arabia, Egypt, Jordan) and South Asia (India, Pakistan, Philippines). Statistical analyses examined variations in AI competency and readiness by gender and nationality and assessed their predictive impact on the adoption of sustainable AI in medical education. Results: The findings revealed that AI competency and readiness are the primary drivers of sustainable AI adoption, highlighting the need to bridge the gap between theoretical knowledge and practical application. No significant differences were observed based on gender or discipline, suggesting a balanced approach to AI education. However, ethical perspectives on AI integration varied between Middle East and South Asian educators, likely reflecting cultural influences. Conclusions: This study underscores the importance of advancing from foundational AI knowledge to hands-on applications while promoting responsible AI use. The ADELE technique provides a strategic approach to enhancing AI competency in medical education within developing nations, fostering both technological proficiency and ethical awareness among educators. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))

27 pages, 1417 KiB  
Article
A BERT-Based Multimodal Framework for Enhanced Fake News Detection Using Text and Image Data Fusion
by Mohammed Al-alshaqi, Danda B. Rawat and Chunmei Liu
Computers 2025, 14(6), 237; https://doi.org/10.3390/computers14060237 - 16 Jun 2025
Viewed by 1125
Abstract
Detecting fake news on social media is complicated by the fact that false information spreads extremely fast in both textual and visual formats. Traditional detection approaches focus mainly on either text or image features in isolation, thereby missing valuable information carried by the other modality. In response, we propose a multimodal fake news detection method based on BERT that augments the article text with text extracted from images through Optical Character Recognition (OCR). We use BERT_base_uncased to process the combined input and produce a confidence score indicating the probability that the news is authentic. We report extensive experimental results on the ISOT, WELFAKE, TRUTHSEEKER, and ISOT_WELFAKE_TRUTHSEEKER datasets. Our proposed model demonstrates better generalization on the TRUTHSEEKER dataset with an accuracy of 99.97%, achieving substantial improvements over existing methods with an F1-score of 0.98. Experimental results indicate a potential accuracy gain of +3.35% over the latest baselines. These results highlight the potential of our approach as a strong resource for automatic fake news detection that effectively integrates both textual and visual data streams. Findings suggest that using diverse datasets enhances the resilience of detection systems against misinformation strategies. Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
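The fusion step described above can be pictured with the following sketch, which is an assumption-laden simplification: pytesseract for OCR and a generic BERT tokenizer stand in for the authors' pipeline, and "news.png" is a hypothetical image path.

```python
import pytesseract
from PIL import Image
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def build_multimodal_input(article_text: str, image_path: str):
    # Extract any text embedded in the attached image via OCR.
    ocr_text = pytesseract.image_to_string(Image.open(image_path))
    # Concatenate article text and OCR text as a sentence pair for BERT.
    return tokenizer(article_text, ocr_text, truncation=True,
                     padding="max_length", max_length=256, return_tensors="pt")

# encoded = build_multimodal_input("Breaking: ...", "news.png")  # then feed to a BERT classifier head
```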

28 pages, 1509 KiB  
Article
Adaptive Congestion Detection and Traffic Control in Software-Defined Networks via Data-Driven Multi-Agent Reinforcement Learning
by Kaoutar Boussaoud, Abdeslam En-Nouaary and Meryeme Ayache
Computers 2025, 14(6), 236; https://doi.org/10.3390/computers14060236 - 16 Jun 2025
Viewed by 454
Abstract
Efficient congestion management in Software-Defined Networks (SDNs) remains a significant challenge due to dynamic traffic patterns and complex topologies. Conventional congestion control techniques based on static or heuristic rules often fail to adapt effectively to real-time network variations. This paper proposes a data-driven framework based on Multi-Agent Reinforcement Learning (MARL) to enable intelligent, adaptive congestion control in SDNs. The framework integrates two collaborative agents: a Congestion Classification Agent that identifies congestion levels using metrics such as delay and packet loss, and a Decision-Making Agent based on Deep Q-Learning (DQN or its variants), which selects the optimal actions for routing and bandwidth management. The agents are trained offline using both synthetic and real network traces (e.g., the MAWI dataset), and deployed in a simulated SDN testbed using Mininet and the Ryu controller. Extensive experiments demonstrate the superiority of the proposed system across key performance metrics. Compared to baseline controllers, including standalone DQN and static heuristics, the MARL system achieves up to 3.0% higher throughput, maintains end-to-end delay below 10 ms, and reduces packet loss by over 10% in real traffic scenarios. Furthermore, the architecture exhibits stable cumulative reward progression and balanced action selection, reflecting effective learning and policy convergence. These results validate the benefit of agent specialization and modular learning in scalable and intelligent SDN traffic engineering. Full article
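To make the decision-making agent concrete, here is a minimal PyTorch sketch of the DQN component only; the state and action dimensions are invented, and the congestion-classification agent, Mininet/Ryu integration, and reward shaping are omitted.

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_state: int = 6, n_actions: int = 4):
        super().__init__()
        # Small MLP mapping link-state metrics (delay, loss, ...) to action values.
        self.net = nn.Sequential(nn.Linear(n_state, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, state):
        return self.net(state)

def select_action(qnet: QNetwork, state: torch.Tensor, epsilon: float) -> int:
    # Epsilon-greedy exploration over routing/bandwidth actions.
    if random.random() < epsilon:
        return random.randrange(qnet.net[-1].out_features)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

qnet = QNetwork()
action = select_action(qnet, torch.zeros(6), epsilon=0.1)
print("chosen action index:", action)
```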

15 pages, 1461 KiB  
Article
Quantum Computing in Data Science and STEM Education: Mapping Academic Trends and Analyzing Practical Tools
by Eloy López-Meneses, Jesús Cáceres-Tello, José Javier Galán-Hernández and Luis López-Catalán
Computers 2025, 14(6), 235; https://doi.org/10.3390/computers14060235 - 16 Jun 2025
Viewed by 532
Abstract
Quantum computing is emerging as a key enabler of digital transformation in data science and STEM education. This study investigates how quantum computing can be meaningfully integrated into higher education by combining a dual approach: a structured assessment of the specialized literature and a practical evaluation of educational tools. First, a science mapping study based on 281 peer-reviewed publications indexed in Scopus (2015–2024) identifies growth trends, thematic clusters, and international collaboration networks at the intersection of quantum computing, data science, and education. Second, a comparative analysis of widely used educational platforms—such as Qiskit, Quantum Inspire, QuTiP, and Amazon Braket—is conducted using pedagogical criteria including accessibility, usability, and curriculum integration. The results highlight a growing convergence between quantum technologies, artificial intelligence, and data-driven learning. A strategic framework and roadmap are proposed to support the gradual and scalable adoption of quantum literacy in university-level STEM programs. Full article
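As a flavor of the entry-level exercises the evaluated platforms support, a Bell-state circuit in Qiskit (one of the tools compared above) can be written in a few lines; this is a generic textbook example, not material from the study.

```python
from qiskit import QuantumCircuit

# Prepare the Bell state (|00> + |11>)/sqrt(2), a standard first exercise in quantum literacy courses.
qc = QuantumCircuit(2, 2)
qc.h(0)       # put qubit 0 into superposition
qc.cx(0, 1)   # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])

print(qc.draw())
```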

22 pages, 7560 KiB  
Article
An Innovative Process Chain for Precision Agriculture Services
by Christos Karydas, Miltiadis Iatrou and Spiros Mourelatos
Computers 2025, 14(6), 234; https://doi.org/10.3390/computers14060234 - 13 Jun 2025
Viewed by 1075
Abstract
In this work, an innovative process chain is set up for the regular provision of fertilization consultation services to farmers for a variety of crops, within a precision agriculture framework. The central hub of this mechanism is a geographic information system (GIS), while a 5 × 5 m point grid is the information carrier. Potential data sources include soil samples, satellite imagery, meteorological parameters, yield maps, and agronomic information. Whenever big data are available per crop, decision-making is supported by machine learning systems (MLSs). All the map data are uploaded to a farm management information system (FMIS) for visualization and storage. The recipe maps are transmitted wirelessly to variable rate technologies (VRTs) for applications in the field. To a large degree, the process chain has been automated with programming at many levels. Currently, four different service modules based on the new process chain are available in the market. Full article

22 pages, 1803 KiB  
Article
Intelligent Fault Detection and Self-Healing Mechanisms in Wireless Sensor Networks Using Machine Learning and Flying Fox Optimization
by Almamoon Alauthman and Abeer Al-Hyari
Computers 2025, 14(6), 233; https://doi.org/10.3390/computers14060233 - 13 Jun 2025
Viewed by 553
Abstract
Wireless sensor networks (WSNs) play a critical role in many applications that require network reliability, such as environmental monitoring, healthcare, and industrial automation. Fault detection and self-healing are therefore two effective mechanisms for addressing the node failures, communication disruptions, and energy constraints faced by WSNs. This paper presents an intelligent framework that integrates a Light Gradient Boosting Machine (LGBM) for fault detection with a Flying Fox Optimization Algorithm (FFOA) for dynamic self-healing. The LGBM model provides accurate and scalable fault identification, whereas FFOA optimizes the recovery strategies to minimize downtime and maximize network resilience. An extensive performance evaluation of the developed system on a large dataset is presented and compared with state-of-the-art traditional heuristic-based methods and machine learning models. The results show that the proposed framework achieves 94.6% fault detection accuracy, a recovery time as low as 120 milliseconds, and network resilience of 98.5%. These results attest to the efficiency of the proposed approach in ensuring robust and adaptive WSN operation and enhanced reliability within dynamic and resource-constrained environments. Full article
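A minimal sketch of the LGBM fault-detection stage described above, using synthetic per-node features rather than the authors' dataset (the FFOA self-healing stage is not shown):

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Synthetic per-node features, e.g., residual energy, RSSI, packet-delivery ratio.
X = rng.normal(size=(5000, 8))
y = (X[:, 0] - 0.7 * X[:, 3] + rng.normal(scale=0.8, size=5000) < -1.0).astype(int)  # 1 = faulty node

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)
clf = LGBMClassifier(n_estimators=200, learning_rate=0.05).fit(X_tr, y_tr)
print("Fault detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```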

26 pages, 1627 KiB  
Article
RVR Blockchain Consensus: A Verifiable, Weighted-Random, Byzantine-Tolerant Framework for Smart Grid Energy Trading
by Huijian Wang, Xiao Liu and Jining Chen
Computers 2025, 14(6), 232; https://doi.org/10.3390/computers14060232 - 13 Jun 2025
Viewed by 492
Abstract
Blockchain technology empowers decentralized transactions in smart grids, but existing consensus algorithms face efficiency and security bottlenecks under Byzantine attacks. This article proposes the RVR consensus algorithm, which innovatively integrates dynamic reputation evaluation, verifiable random function (VRF), and a weight-driven probability election mechanism to achieve (1) behavior-aware dynamic adjustment of reputation weights and (2) manipulation-resistant random leader election via VRF. Experimental verification shows that under a silence attack, the maximum latency is reduced by 37.88% compared to HotStuff, and under a forking attack, the maximum throughput is increased by 50.66%, providing an efficient and secure new paradigm for distributed energy trading. Full article
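The weight-driven election idea can be sketched as follows; this toy version replaces the VRF with an ordinary hash of a shared seed purely for illustration, so it does not provide the manipulation resistance of the actual RVR design, and the node names and reputation values are invented.

```python
import hashlib

def elect_leader(reputations: dict[str, float], round_seed: bytes) -> str:
    """Pick a leader with probability proportional to its reputation weight."""
    total = sum(reputations.values())
    # Derive a deterministic pseudo-random value in [0, total) from the shared seed.
    digest = hashlib.sha256(round_seed).digest()
    r = int.from_bytes(digest[:8], "big") / 2**64 * total
    cumulative = 0.0
    for node, weight in sorted(reputations.items()):
        cumulative += weight
        if r < cumulative:
            return node
    return max(reputations, key=reputations.get)  # fallback for rounding edge cases

print(elect_leader({"n1": 0.9, "n2": 0.5, "n3": 0.1}, b"round-42"))
```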

20 pages, 472 KiB  
Review
Immersive, Secure, and Collaborative Air Quality Monitoring
by José Marinho and Nuno Cid Martins
Computers 2025, 14(6), 231; https://doi.org/10.3390/computers14060231 - 12 Jun 2025
Viewed by 539
Abstract
Air pollution poses a serious threat to both public health and the environment, contributing to millions of premature deaths worldwide each year. The integration of augmented reality (AR), blockchain, and the Internet of Things (IoT) technologies can provide a transformative approach to collaborative air quality monitoring (AQM), enabling real-time, transparent, and intuitive access to environmental data for community awareness, behavioural change, informed decision-making, and proactive responses to pollution challenges. This article presents a unified vision of the key elements and technologies to consider when designing such AQM systems, allowing dynamic and user-friendly immersive air quality data visualization interfaces, secure and trusted data storage, fine-grained data collection through crowdsourcing, and active community learning and participation. It serves as a conceptual basis for any design and implementation of such systems. Full article

36 pages, 1232 KiB  
Article
Exploring the Factors Influencing AI Adoption Intentions in Higher Education: An Integrated Model of DOI, TOE, and TAM
by Rawan N. Abulail, Omar N. Badran, Mohammad A. Shkoukani and Fandi Omeish
Computers 2025, 14(6), 230; https://doi.org/10.3390/computers14060230 - 11 Jun 2025
Viewed by 1648
Abstract
This study investigates the primary technological and socio-environmental factors influencing the adoption intentions of AI-powered technology at the corporate level within higher education institutions. A conceptual model based on a combined framework of the Diffusion of Innovation Theory (DOI), the Technology–Organization–Environment (TOE) framework, and the Technology Acceptance Model (TAM) was proposed and tested using data collected from 367 higher education students, faculty members, and employees. SPSS Amos 24 was used for CB-SEM to select the best-fitting model, which proved more efficient than traditional multiple regression analysis for examining the relationships among the proposed constructs while ensuring model fit and statistical robustness. The findings reveal that Compatibility “C”, Complexity “CX”, User Interface “UX”, Perceived Ease of Use “PEOU”, User Satisfaction “US”, Performance Expectation “PE”, Artificial intelligence “AI” introducing new tools “AINT”, AI Strategic Alignment “AIS”, Availability of Resources “AVR”, Technological Support “TS”, and Facilitating Conditions “FC” significantly impact AI adoption intentions. At the same time, Competitive Pressure “COP” and Government Regulations “GOR” do not. Demographic factors, including major and years of experience, moderated these associations, with large differences across educational backgrounds and experience levels. Full article

22 pages, 3451 KiB  
Article
LSTM-Based Music Generation Technologies
by Yi-Jen Mon
Computers 2025, 14(6), 229; https://doi.org/10.3390/computers14060229 - 11 Jun 2025
Viewed by 516
Abstract
In deep learning, Long Short-Term Memory (LSTM) is a well-established and widely used approach for music generation. Nevertheless, creating musical compositions that match the quality of those created by human composers remains a formidable challenge. The intricate nature of musical components, including pitch, intensity, rhythm, notes, chords, and more, necessitates the extraction of these elements from extensive datasets, making the preliminary work arduous. To address this, we employed various tools to deconstruct the musical structure, conduct step-by-step learning, and then reconstruct it. This article primarily presents the techniques for dissecting musical components in the preliminary phase. Subsequently, it introduces the use of LSTM to build a deep learning network architecture, enabling the learning of musical features and temporal coherence. Finally, through in-depth analysis and comparative studies, this paper validates the efficacy of the proposed research methodology, demonstrating its ability to capture musical coherence and generate compositions with similar styles. Full article
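A bare-bones Keras sketch of the kind of LSTM next-note model described above; the vocabulary size, sequence length, and layer widths are invented, and the feature-extraction and reconstruction stages discussed in the paper are omitted.

```python
import tensorflow as tf

VOCAB = 128    # assumed number of distinct pitch/chord tokens
SEQ_LEN = 64   # length of the input note sequence

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, 64),                # token embedding for notes/chords
    tf.keras.layers.LSTM(256),                           # captures temporal coherence
    tf.keras.layers.Dense(VOCAB, activation="softmax"),  # distribution over the next token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```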

33 pages, 518 KiB  
Article
Quantum Classification Outside the Promised Class
by Theodore Andronikos, Constantinos Bitsakos, Konstantinos Nikas, Georgios I. Goumas and Nectarios Koziris
Computers 2025, 14(6), 228; https://doi.org/10.3390/computers14060228 - 10 Jun 2025
Viewed by 250
Abstract
This paper studies the important problem of quantum classification of Boolean functions from an entirely novel perspective. Typically, quantum classification algorithms allow us to classify functions with a probability of 1.0, if we are promised that they meet specific unique properties. The primary objective of this study is to explore whether it is feasible to obtain any insights when the input function deviates from the promised class. For concreteness, we use a recently introduced quantum algorithm that is designed to classify a large class of imbalanced Boolean functions with probability 1.0 using just a single oracular query. First, we establish a completely new concept characterizing “nearness” between Boolean functions. Utilizing this concept, we show that, as long as the unknown function is close enough to the promised class, it is still possible to obtain useful information about its behavioral pattern from the classification algorithm. In this regard, the current study is among the first to provide evidence that shows how useful it is to apply quantum classification algorithms to functions outside the promised class in order to get a glimpse of important information. Full article

20 pages, 2898 KiB  
Article
Deploying a Mental Health Chatbot in Higher Education: The Development and Evaluation of Luna, an AI-Based Mental Health Support System
by Phillip Olla, Ashlee Barnes, Lauren Elliott, Mustafa Abumeeiz, Venus Olla and Joseph Tan
Computers 2025, 14(6), 227; https://doi.org/10.3390/computers14060227 - 10 Jun 2025
Viewed by 769
Abstract
Rising mental health challenges among postsecondary students have increased the demand for scalable, ethical solutions. This paper presents the design, development, and safety evaluation of Luna, a GPT-4-based mental health chatbot. Built using a modular PHP architecture, Luna integrates multi-layered prompt engineering, safety guardrails, and referral logic. The Institutional Review Board (IRB) at the University of Detroit Mercy (Protocol #23-24-38) reviewed the proposed study and deferred full human subject approval, requesting technical validation prior to deployment. In response, we conducted a pilot test with a variety of users—including clinicians and students who simulated at-risk student scenarios. Results indicated that 96% of expert interactions were deemed safe, and 90.4% of prompts were considered useful. This paper describes Luna’s architecture, prompt strategy, and expert feedback, concluding with recommendations for future human research trials. Full article

49 pages, 552 KiB  
Systematic Review
Ethereum Smart Contracts Under Scrutiny: A Survey of Security Verification Tools, Techniques, and Challenges
by Mounira Kezadri Hamiaz and Maha Driss
Computers 2025, 14(6), 226; https://doi.org/10.3390/computers14060226 - 9 Jun 2025
Viewed by 921
Abstract
Smart contracts are self-executing programs that facilitate trustless transactions between multiple parties, most commonly deployed on the Ethereum blockchain. They have become integral to decentralized applications in areas such as voting, digital agreements, and financial systems. However, the immutable and transparent nature of smart contracts makes security vulnerabilities especially critical, as deployed contracts cannot be modified. Security flaws have led to substantial financial losses, underscoring the need for robust verification before deployment. This survey presents a comprehensive review of the state of the art in smart contract security verification, with a focus on Ethereum. We analyze a wide range of verification methods, including static and dynamic analysis, formal verification, and machine learning, and evaluate 62 open-source tools across their detection accuracy, efficiency, and usability. In addition, we highlight emerging trends, challenges, and the need for cross-methodological integration and benchmarking. Our findings aim to guide researchers, developers, and security auditors in selecting and advancing effective verification approaches for building secure and reliable smart contracts. Full article

27 pages, 1178 KiB  
Article
Ensemble-Based Biometric Verification: Defending Against Multi-Strategy Deepfake Image Generation
by Hilary Zen, Rohan Wagh, Miguel Wanderley, Gustavo Bicalho, Rachel Park, Megan Sun, Rafael Palacios, Lucas Carvalho, Guilherme Rinaldo and Amar Gupta
Computers 2025, 14(6), 225; https://doi.org/10.3390/computers14060225 - 9 Jun 2025
Viewed by 738
Abstract
Deepfake images, synthetic images created using digital software, continue to present a serious threat to online platforms. This is especially relevant for biometric verification systems, as deepfakes that attempt to bypass such measures increase the risk of impersonation, identity theft and scams. Although research on deepfake image detection has provided many high-performing classifiers, many of these commonly used detection models lack generalizability across different methods of deepfake generation. For companies and governments fighting identity fraud, a lack of generalization is challenging, as malicious actors may use a variety of deepfake image-generation methods available through online wrappers. This work explores whether combining multiple classifiers into an ensemble model can improve generalization without losing performance across different generation methods. It also considers current methods of deepfake image generation, with a focus on publicly available and easily accessible methods. We compare our framework against its underlying models to show how companies can better respond to emerging deepfake generation methods. Full article
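The ensemble idea above can be pictured as simple probability averaging over heterogeneous detectors; the sketch below assumes each underlying model exposes a predict_proba-style score for the "fake" class and is not the authors' specific architecture.

```python
import numpy as np

def ensemble_fake_probability(image, detectors, weights=None):
    """Soft-vote over deepfake detectors trained on different generation methods."""
    scores = np.array([d.predict_proba(image) for d in detectors])  # each score in [0, 1]
    weights = np.ones(len(detectors)) / len(detectors) if weights is None else np.asarray(weights)
    return float(np.dot(weights, scores))

class DummyDetector:
    """Stand-in detector; a real ensemble member would be a CNN/ViT classifier."""
    def __init__(self, score): self.score = score
    def predict_proba(self, image): return self.score

detectors = [DummyDetector(0.2), DummyDetector(0.9), DummyDetector(0.7)]
print("Ensembled fake probability:", ensemble_fake_probability(None, detectors))
```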

31 pages, 9733 KiB  
Article
Gamifying Sociological Surveys Through Serious Games—A Data Analysis Approach Applied to Multiple-Choice Question Responses Datasets
by Alexandros Gazis and Eleftheria Katsiri
Computers 2025, 14(6), 224; https://doi.org/10.3390/computers14060224 - 7 Jun 2025
Viewed by 675
Abstract
E-polis is a serious digital game designed to gamify sociological surveys studying young people’s political opinions. In this platform game, players navigate a digital world, encountering quests posing sociological questions. Players’ answers shape the city-game world, altering building structures based on their choices. E-polis is a serious game, not a government simulation, aiming to understand players’ behaviors and opinions; thus, we do not train the players but rather understand them and help them visualize their choices in shaping a city’s future. There are also no correct or incorrect answers. Moreover, our game utilizes a novel middleware architecture for development, diverging from typical asset-prefab-scene and script segregation. This article presents the data layer of our game’s middleware, specifically focusing on data analysis based on respondents’ gameplay answers. E-polis represents an innovative approach to gamifying sociological research, providing a unique platform for gathering and analyzing data on political opinions among youth and contributing to the broader field of serious games. Full article

26 pages, 12177 KiB  
Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Viewed by 528
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on the usage of artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandible and maxilla teeth. In this research, a computerized system is developed to automate the tasks of orthodontic evaluation using 2D and Cone-Beam Computed Tomography (CBCT, or 3D) measurements. This work was tested on a dataset that contains images of males and females obtained from dental hospitals with patient-informed consent. The dataset consists of 2D lateral cephalometric, panoramic, and CBCT radiographs. Many scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, this study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system’s outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. This research evaluates and differentiates between the two-dimensional and the three-dimensional tracing analyses applied to measurements based on the practices of the European Board of Orthodontics. The results demonstrate the proposed methodology’s robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis usage provides a clear understanding of the significance of integrated deep-learning techniques in orthodontics. Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)

13 pages, 817 KiB  
Article
Evaluating the Predictive Power of Software Metrics for Fault Localization
by Issar Arab, Kenneth Magel and Mohammed Akour
Computers 2025, 14(6), 222; https://doi.org/10.3390/computers14060222 - 6 Jun 2025
Viewed by 439
Abstract
Fault localization remains a critical challenge in software engineering, directly impacting debugging efficiency and software quality. This study investigates the predictive power of various software metrics for fault localization by framing the task as a multi-class classification problem and evaluating it using the Defects4J dataset. We fitted thousands of models and benchmarked different algorithms—including deep learning, Random Forest, XGBoost, and LightGBM—to choose the best-performing model. To enhance model transparency, we applied explainable AI techniques to analyze feature importance. The results revealed that test suite metrics consistently outperform static and dynamic metrics, making them the most effective predictors for identifying faulty classes. These findings underscore the critical role of test quality and coverage in automated fault localization. By combining machine learning with transparent feature analysis, this work delivers practical insights to support more efficient debugging workflows. It lays the groundwork for an iterative process that integrates metric-based predictive models with large language models (LLMs), enabling future systems to automatically generate targeted test cases for the most fault-prone components, which further enhances the automation and precision of software testing. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
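A compressed sketch of the modeling setup described above, using synthetic metric values instead of Defects4J; gain-based importances stand in here for the SHAP analysis used in the paper, and the feature names are illustrative assumptions.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(7)
# Synthetic stand-ins for static, dynamic, and test-suite metrics per class file.
feature_names = ["loc", "cyclomatic", "coupling", "test_count", "line_coverage", "mutation_score"]
X = rng.normal(size=(2000, len(feature_names)))
y = rng.integers(0, 3, size=2000)  # e.g., 3 fault-proneness categories

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss").fit(X, y)

# Rank metrics by gain-based importance (a proxy for the SHAP-based analysis).
for name, imp in sorted(zip(feature_names, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```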

27 pages, 3100 KiB  
Article
Reducing Delivery Times by Utilising On-Site Wire Arc Additive Manufacturing with Digital-Twin Methods
by Stefanie Sell, Kevin Villani and Marc Stautner
Computers 2025, 14(6), 221; https://doi.org/10.3390/computers14060221 - 6 Jun 2025
Viewed by 386
Abstract
The increasing demand for smaller batch sizes and mass customisation in production poses considerable challenges to logistics and manufacturing efficiency. Conventional methodologies are unable to address the need for expeditious, cost-effective distribution of premium-quality products tailored to individual specifications. Additionally, the reliability and resilience of global logistics chains are increasingly under pressure. Additive manufacturing is regarded as a potentially viable solution to these problems, as it enables on-demand, on-site production with reduced resource usage. Nevertheless, there are still significant challenges to be addressed, including the assurance of product quality and the optimisation of production processes with respect to time and resource efficiency. This article examines the potential of integrating digital twin methodologies to establish a fully digital and efficient process chain for on-site additive manufacturing. This study focuses on wire arc additive manufacturing (WAAM), a technology that has been successfully implemented in the on-site production of naval ship propellers and excavator parts. The proposed approach aims to enhance process planning efficiency, reduce material and energy consumption, and minimise the expertise required for operational deployment by leveraging digital twin methodologies. The present paper details the current state of research in this domain and outlines a vision for a fully virtualised process chain, highlighting the transformative potential of digital twin technologies in advancing on-site additive manufacturing. In this context, various aspects and components of a digital twin framework for wire arc additive manufacturing are examined regarding their necessity and applicability. The overarching objective of this paper is to conduct a preliminary investigation for the implementation and further development of a comprehensive digital twin (DT) framework for WAAM. Utilising a real-world sample, currently available process steps are validated and the remaining technical gaps are identified. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)

16 pages, 452 KiB  
Article
GARMT: Grouping-Based Association Rule Mining to Predict Future Tables in Database Queries
by Peixiong He, Libo Sun, Xian Gao, Yi Zhou and Xiao Qin
Computers 2025, 14(6), 220; https://doi.org/10.3390/computers14060220 - 6 Jun 2025
Viewed by 331
Abstract
In modern data management systems, structured query language (SQL) databases, as a mature and stable technology, have become the standard for processing structured data. These databases ensure data integrity through strongly typed schema definitions and support complex transaction management and efficient query processing capabilities. However, data sparsity—where most fields in large table sets remain unused by most queries—leads to inefficiencies in access optimization. We propose a grouping-based approach (GARMT) that partitions SQL queries into fixed-size groups and applies a modified FP-Growth algorithm (GFP-Growth) to identify frequent table access patterns. Experiments on a real-world dataset show that grouping significantly reduces runtime—by up to 40%—compared to the ungrouped baseline while preserving rule relevance. These results highlight the practical value of query grouping for efficient pattern discovery in sparse database environments. Full article
(This article belongs to the Special Issue Cloud Computing and Big Data Mining)
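To give a feel for the grouping step, the toy sketch below partitions a query log into fixed-size groups and counts co-accessed table pairs with a plain counter; the real GARMT method uses a modified FP-Growth (GFP-Growth), which is not reproduced here, and the table names are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical log: the set of tables each SQL query touches.
query_tables = [
    {"orders", "customers"}, {"orders", "items"}, {"customers"},
    {"orders", "customers"}, {"items", "stock"}, {"orders", "items"},
]
GROUP_SIZE = 3

def frequent_pairs(queries, group_size, min_count=2):
    counts = Counter()
    # Partition queries into fixed-size groups, then count table pairs seen within each group.
    for start in range(0, len(queries), group_size):
        group_tables = set().union(*queries[start:start + group_size])
        counts.update(combinations(sorted(group_tables), 2))
    return {pair: c for pair, c in counts.items() if c >= min_count}

print(frequent_pairs(query_tables, GROUP_SIZE))
```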

18 pages, 1435 KiB  
Article
Threats to the Digital Ecosystem: Can Information Security Management Frameworks, Guided by Criminological Literature, Effectively Prevent Cybercrime and Protect Public Data?
by Shahrukh Mushtaq and Mahmood Shah
Computers 2025, 14(6), 219; https://doi.org/10.3390/computers14060219 - 4 Jun 2025
Viewed by 607
Abstract
As cyber threats escalate in scale and sophistication, the imperative to secure public data through theoretically grounded and practically viable frameworks becomes increasingly urgent. This review investigates whether and how criminology theories have effectively informed the development and implementation of information security management frameworks (ISMFs) to prevent cybercrime and fortify the digital ecosystem’s resilience. Anchored in a comprehensive bibliometric analysis of 617 peer-reviewed records extracted from Scopus and Web of Science, the study employs Multiple Correspondence Analysis (MCA), conceptual co-word mapping, and citation coupling to systematically chart the intellectual landscape bridging criminology and cybersecurity. The review reveals that foundational criminology theories—particularly routine activity theory, rational choice theory, and deterrence theory—have been progressively adapted to cyber contexts, offering novel insights into offender behaviour, target vulnerability, and systemic guardianship. In parallel, the study critically engages with global cybersecurity standards, such as those from the National Institute of Standards and Technology (NIST) and ISO, to evaluate how criminological principles are embedded in practice. Using data from the Global Cybersecurity Index (GCI), the paper introduces an innovative visual mapping of the divergence between cybersecurity preparedness and digital development across 170+ countries, revealing strategic gaps and overperformers. This paper ultimately argues for an interdisciplinary convergence between criminology and cybersecurity governance, proposing that the integration of criminological logic into cybersecurity frameworks can enhance risk anticipation, attacker deterrence, and the overall security posture of digital public infrastructures. Full article
(This article belongs to the Special Issue Using New Technologies in Cyber Security Solutions (2nd Edition))

23 pages, 1163 KiB  
Article
Exploring the Potential of the Bicameral Mind Theory in Reinforcement Learning Algorithms
by Munavvarkhon Mukhitdinova and Mariana Petrova
Computers 2025, 14(6), 218; https://doi.org/10.3390/computers14060218 - 3 Jun 2025
Viewed by 504
Abstract
This study explores the potential of Julian Jaynes’ bicameral mind theory in enhancing reinforcement learning (RL) algorithms and large language models (LLMs) for artificial intelligence (AI) systems. By drawing parallels between the dual-process structure of the bicameral mind, the observation–action cycle in RL, and the “thinking”/”writing” processes in LLMs, we hypothesize that incorporating principles from this theory could lead to more efficient and adaptive AI. Empirical evidence from OpenAI’s CoinRun and RainMazes models, together with analysis of Claude, Gemini, and ChatGPT functioning, supports our hypothesis, demonstrating the universality of the dual-component structure across different types of AI systems. We propose a conceptual model for integrating bicameral mind principles into AI architectures capable of guiding the development of systems that effectively generalize knowledge across various tasks and environments. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)

41 pages, 4206 KiB  
Systematic Review
A Systematic Literature Review on Load-Balancing Techniques in Fog Computing: Architectures, Strategies, and Emerging Trends
by Danah Aldossary, Ezaz Aldahasi, Taghreed Balharith and Tarek Helmy
Computers 2025, 14(6), 217; https://doi.org/10.3390/computers14060217 - 2 Jun 2025
Viewed by 594
Abstract
Fog computing has emerged as a promising paradigm to extend cloud services toward the edge of the network, enabling low-latency processing and real-time responsiveness for Internet of Things (IoT) applications. However, the distributed, heterogeneous, and resource-constrained nature of fog environments introduces significant challenges in balancing workloads efficiently. This study presents a systematic literature review (SLR) of 113 peer-reviewed articles published between 2020 and 2024, aiming to provide a comprehensive overview of load-balancing strategies in fog computing. This review categorizes fog computing architectures, load-balancing algorithms, scheduling and offloading techniques, fault-tolerance mechanisms, security models, and evaluation metrics. The analysis reveals that three-layer (IoT–Fog–Cloud) architectures remain predominant, with dynamic clustering and virtualization commonly employed to enhance adaptability. Heuristic and hybrid load-balancing approaches are most widely adopted due to their scalability and flexibility. Evaluation frequently centers on latency, energy consumption, and resource utilization, while simulation is primarily conducted using tools such as iFogSim and YAFS. Despite considerable progress, key challenges persist, including workload diversity, security enforcement, and real-time decision-making under dynamic conditions. Emerging trends highlight the growing use of artificial intelligence, software-defined networking, and blockchain to support intelligent, secure, and autonomous load balancing. This review synthesizes current research directions, identifies critical gaps, and offers recommendations for designing efficient and resilient fog-based load-balancing systems. Full article
(This article belongs to the Special Issue Edge and Fog Computing for Internet of Things Systems (2nd Edition))

29 pages, 1299 KiB  
Article
Towards Trustworthy Energy Efficient P2P Networks: A New Method for Validating Computing Results in Decentralized Networks
by Fernando Rodríguez-Sela and Borja Bordel
Computers 2025, 14(6), 216; https://doi.org/10.3390/computers14060216 - 2 Jun 2025
Viewed by 337
Abstract
Decentralized P2P networks have emerged as robust instruments for executing computing tasks with enhanced security and transparency. Solutions such as Blockchain have proved successful in a large catalog of critical applications such as cryptocurrency and intellectual property. However, although executions are transparent and P2P networks are resistant to common cyberattacks, they tend to be untrustworthy: P2P nodes typically do not offer any evidence about the quality of their resolution of the delegated computing tasks, so the trustworthiness of results is threatened. To mitigate this challenge, usual P2P networks delegate many replicas of the same computing task to different nodes, and the final result is the one most nodes reached. But this approach is very resource consuming, especially in terms of energy, as many unnecessary computing tasks are executed. Therefore, new solutions that achieve trustworthy P2P networks from an energy-efficiency perspective are needed. This study addresses this challenge. The purpose of the research is to evaluate the effectiveness of an audit-based approach in which a score is assigned to each node, instead of performing identical tasks redundantly on different nodes in the network. The proposed solution employs probabilistic methods to detect malicious nodes, taking into account parameters such as the number of executed tasks and the number of audited ones to score each node, together with game theory, which assumes that all nodes play by the same rules. Qualitative and quantitative experimental methods are used to evaluate its impact. The results reveal a significant reduction in network energy consumption of at least 50% compared to networks in which each task is delivered to a pair of different nodes, supporting the effectiveness of the proposed approach. Full article
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)

17 pages, 1481 KiB  
Article
Enhancing Injector Performance Through CFD Optimization: Focus on Cavitation Reduction
by Jose Villagomez-Moreno, Aurelio Dominguez-Gonzalez, Carlos Gustavo Manriquez-Padilla, Juan Jose Saucedo-Dorantes and Angel Perez-Cruz
Computers 2025, 14(6), 215; https://doi.org/10.3390/computers14060215 - 2 Jun 2025
Viewed by 600
Abstract
The use of computer-aided engineering (CAE) tools has become essential in modern design processes, significantly streamlining mechanical design tasks. The integration of optimization algorithms further enhances these processes by facilitating studies on mechanical behavior and accelerating iterative operations. A key focus lies in understanding and mitigating the detrimental effects of cavitation on injector surfaces, as it can reduce the injector lifespan and induce material degradation. By combining advanced numerical finite element tools with algorithmic optimization, these adverse effects can be effectively mitigated. The incorporation of computational tools enables efficient numerical analyses and rapid, automated modifications of injector designs, significantly enhancing the ability to explore and refine geometries. The primary goal remains the minimization of cavitation phenomena and the improvement in injector performance, while the collaborative use of specialized software environments ensures a more robust and streamlined design process. Specifically, using the simulated annealing algorithm (SA) helps identify the optimal configuration that minimizes cavitation-induced effects. The proposed approach provides a robust set of tools for engineers and researchers to enhance injector performance and effectively address cavitation-related challenges. The results derived from this integrated framework illustrate the effectiveness of the optimization methodology in facilitating the development of more efficient and reliable injector systems. Full article
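For context on the optimization loop, a generic simulated annealing sketch over a single geometric parameter is shown below; the objective function is a stand-in for the CFD-evaluated cavitation metric, which in the actual workflow would come from the coupled simulation, and the parameter name and bounds are invented.

```python
import math
import random

def cavitation_proxy(radius_mm: float) -> float:
    """Stand-in objective; in practice this value would come from a CFD run."""
    return (radius_mm - 1.8) ** 2 + 0.05 * math.sin(8 * radius_mm)

def simulated_annealing(x0, steps=500, t0=1.0, cooling=0.99):
    x, fx, temp = x0, cavitation_proxy(x0), t0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.1)   # perturb the geometry parameter
        fc = cavitation_proxy(candidate)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
        temp *= cooling                         # cool down the acceptance schedule
    return x, fx

best_x, best_f = simulated_annealing(1.0)
print(f"best radius ~ {best_x:.3f} mm, objective {best_f:.4f}")
```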

29 pages, 2066 KiB  
Article
Improved Big Data Security Using Quantum Chaotic Map of Key Sequence
by Archana Kotangale, Meesala Sudhir Kumar and Amol P. Bhagat
Computers 2025, 14(6), 214; https://doi.org/10.3390/computers14060214 - 1 Jun 2025
Viewed by 468
Abstract
In the era of ubiquitous big data, ensuring secure storage, transmission, and processing has become a paramount concern. Classical cryptographic methods face increasing vulnerabilities in the face of quantum computing advancements. This research proposes an enhanced big data security framework integrating a quantum chaotic map of key sequence (QCMKS), which synergizes the principles of quantum mechanics and chaos theory to generate highly unpredictable and non-repetitive key sequences. The system incorporates quantum random number generation (QRNG) for true entropy sources, quantum key distribution (QKD) for secure key exchange immune to eavesdropping, and quantum error correction (QEC) to maintain integrity against quantum noise. Additionally, quantum optical elements transformation (QOET) is employed to implement state transformations on photonic qubits, ensuring robustness during transmission across quantum networks. The integration of QCMKS with QRNG, QKD, QEC, and QOET significantly enhances the confidentiality, integrity, and availability of big data systems, laying the groundwork for a quantum-resilient data security paradigm. While the proposed framework demonstrates strong theoretical potential for improving big data security, its practical robustness and performance are subject to current quantum hardware limitations, noise sensitivity, and integration complexities. Full article
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
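The abstract does not disclose the QCMKS construction itself, so the sketch below uses a purely classical logistic chaotic map to illustrate how a chaotic key sequence can drive a simple XOR stream cipher. The quantum components (QRNG, QKD, QEC, QOET) appear only in comments, and everything here, including the seeding scheme, is an assumption made for illustration rather than the authors' scheme.

```python
import os

def logistic_map_keystream(length, x0, r=3.99):
    """Generate a byte keystream from a logistic chaotic map.

    A classical stand-in for the paper's quantum chaotic map; the actual
    QCMKS construction and the QRNG/QKD/QEC/QOET components are not
    reproduced here.
    """
    x = x0
    stream = bytearray()
    for _ in range(length):
        x = r * x * (1.0 - x)            # logistic map iteration, x stays in (0, 1)
        stream.append(int(x * 256) % 256)
    return bytes(stream)

def xor_encrypt(data: bytes, key_seed: float) -> bytes:
    keystream = logistic_map_keystream(len(data), key_seed)
    return bytes(b ^ k for b, k in zip(data, keystream))

# The seed would come from a true-entropy source (QRNG in the paper); here a
# float in (0, 1) is derived from the OS entropy pool purely for illustration.
seed = (int.from_bytes(os.urandom(4), "big") % 10_000_000 + 1) / 10_000_001
message = b"big data record"
ciphertext = xor_encrypt(message, seed)
assert xor_encrypt(ciphertext, seed) == message  # XOR stream cipher is symmetric
```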

36 pages, 2094 KiB  
Article
Generating Accessible Webpages from Models
by Karla Ordoñez-Briceño, José R. Hilera, Luis De-Marcos and Rodrigo Saraguro-Bravo
Computers 2025, 14(6), 213; https://doi.org/10.3390/computers14060213 - 31 May 2025
Cited by 1 | Viewed by 706
Abstract
Despite significant efforts to promote web accessibility through the adoption of various standards and tools, the web remains inaccessible to many users. One of the main barriers is the limited knowledge of accessibility issues among website designers. This gap in expertise results in the development of websites that fail to meet accessibility standards, hindering access for people with diverse abilities and needs. In response to this challenge, this paper presents the ACG WebAcc prototype, which enables the automatic generation of accessible HTML code using a model-driven development (MDD) approach. The tool takes as input a Unified Modeling Language (UML) model, with a specific profile, and incorporates predefined Object Constraint Language (OCL) rules to ensure compliance with accessibility guidelines. By automating this process, ACG WebAcc reduces the need for extensive knowledge of accessibility standards, making it easier for designers to create accessible websites. Full article
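ACG WebAcc itself consumes UML models annotated with a dedicated profile and validated with OCL rules; its internals are not reproduced in this listing. The sketch below is a deliberately simplified, hypothetical illustration of the model-driven idea: a small page model is checked against accessibility-style constraints (alt text, labelled inputs, a page language) and then rendered to HTML.

```python
# Hypothetical page model; the real tool works from UML models, not dictionaries.
page_model = {
    "lang": "en",
    "title": "Contact form",
    "elements": [
        {"type": "image", "src": "logo.png", "alt": "University logo"},
        {"type": "input", "id": "email", "label": "E-mail address"},
    ],
}

def render_element(el):
    if el["type"] == "image":
        # OCL-style constraint: every image must carry non-empty alt text.
        assert el.get("alt"), "accessibility rule violated: image without alt text"
        return f'<img src="{el["src"]}" alt="{el["alt"]}">'
    if el["type"] == "input":
        # OCL-style constraint: every input must be associated with a label.
        assert el.get("label"), "accessibility rule violated: input without label"
        return (f'<label for="{el["id"]}">{el["label"]}</label>\n'
                f'<input id="{el["id"]}" name="{el["id"]}">')
    raise ValueError(f"unknown element type: {el['type']}")

html = "\n".join(
    ['<!DOCTYPE html>', f'<html lang="{page_model["lang"]}">',
     f'<head><title>{page_model["title"]}</title></head>', "<body>"]
    + [render_element(el) for el in page_model["elements"]]
    + ["</body>", "</html>"]
)
print(html)
```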

30 pages, 1368 KiB  
Article
Pain Level Classification Using Eye-Tracking Metrics and Machine Learning Models
by Oussama El Othmani and Sami Naouali
Computers 2025, 14(6), 212; https://doi.org/10.3390/computers14060212 - 30 May 2025
Viewed by 495
Abstract
Pain estimation is a critical aspect of healthcare, particularly for patients who are unable to communicate discomfort effectively. Traditional methods, such as self-reporting or observational scales, are subjective and prone to bias. This study proposes a novel system for non-invasive pain estimation using eye-tracking technology and advanced machine learning models. The methodology begins with preprocessing steps, including resizing, normalization, and data augmentation, to prepare high-quality input facial images. DeepLabV3+ is employed for precise segmentation of the eye and face regions, achieving 95% accuracy. Feature extraction is performed using VGG16, capturing key metrics such as pupil size, blink rate, and saccade velocity. Multiple machine learning models, including Random Forest, SVM, MLP, XGBoost, and NGBoost, are trained on the extracted features. XGBoost achieves the highest classification accuracy of 99.5%, demonstrating its robustness for pain level classification on a scale from 0 to 5. Feature analysis using SHAP values reveals that pupil size and blink rate contribute most to the predictions, with SHAP contribution scores of 0.42 and 0.35, respectively. The loss curves for DeepLabV3+ confirm rapid convergence during training, ensuring reliable segmentation. This work highlights the transformative potential of combining eye-tracking data with machine learning for non-invasive pain estimation, with significant applications in healthcare, human–computer interaction, and assistive technologies. Full article
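The dataset and the segmentation/feature-extraction pipeline (DeepLabV3+, VGG16) are not reproduced here; the sketch below only illustrates the final step the abstract describes, training an XGBoost classifier on tabular eye-tracking features such as pupil size, blink rate, and saccade velocity. The synthetic data and the labeling rule are assumptions, not the authors' data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Synthetic stand-in for the extracted features.
# Columns: pupil size, blink rate, saccade velocity (standardized).
rng = np.random.default_rng(0)
n = 600
X = rng.normal(size=(n, 3))
# Hypothetical labeling rule: larger pupil size and blink rate map to higher pain levels (0-5).
y = np.clip(np.round(2.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.6, size=n)), 0, 5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# Multiclass XGBoost classifier over the 6 pain levels.
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
```

On real data, feature attributions such as the SHAP values reported in the abstract would be computed on this fitted model to confirm which eye-tracking metrics drive the predictions.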

31 pages, 1751 KiB  
Article
Enhancing User Experiences in Digital Marketing Through Machine Learning: Cases, Trends, and Challenges
by Alexios Kaponis, Manolis Maragoudakis and Konstantinos Chrysanthos Sofianos
Computers 2025, 14(6), 211; https://doi.org/10.3390/computers14060211 - 29 May 2025
Viewed by 1653
Abstract
Online marketing environments are rapidly being transformed by Artificial Intelligence (AI), and in particular by Machine Learning (ML), which offers significant potential for content personalization, enhanced usability, and hyper-targeted marketing, reconfiguring how businesses reach and serve customers. This study systematically examines machine learning in the Digital Marketing (DM) industry, focusing on its effect on human–computer interaction (HCI). It methodically elucidates how machine learning can automate user-engagement strategies that improve user experience (UX) and customer retention, and how recommendations can be optimized from consumer behavior. The objective is to critically analyze the functional and ethical considerations of ML integration in DM and to evaluate its implications for data-driven personalization. Through selected case studies, the investigation provides empirical evidence of the effects of ML applications on UX and customer loyalty, as well as the associated ethical aspects, including algorithmic bias, data privacy concerns, and the need for greater transparency in ML-based decision-making. The research also contributes actionable, data-driven strategies for marketing professionals and frameworks for dealing with the evolving responsibilities and tasks that accompany the introduction of ML technologies into DM. Full article
