Journal Description
Computers
Computers is an international, scientific, peer-reviewed, open access journal of computer science, with computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q1 (Computer Science (miscellaneous))
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 15.5 days after submission; acceptance to publication takes 3.8 days (median values for papers published in this journal in the second half of 2024).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 4.2 (2024); 5-Year Impact Factor: 3.5 (2024)
Latest Articles
Informing Disaster Recovery Through Predictive Relocation Modeling
Computers 2025, 14(6), 240; https://doi.org/10.3390/computers14060240 - 19 Jun 2025
Abstract
Housing recovery represents a critical component of disaster recovery, and accurately forecasting household relocation decisions is essential for guiding effective post-disaster reconstruction policies. This study explores the use of machine learning algorithms to improve the prediction of household relocation in the aftermath of disasters. Leveraging data from 1304 completed interviews conducted as part of the Displaced New Orleans Residents Survey (DNORS) following Hurricane Katrina, we evaluate the performance of Logistic Regression (LR), Random Forest (RF), and Weighted Support Vector Machine (WSVM) models. Results indicate that WSVM significantly outperforms LR and RF, particularly in identifying the minority class of relocated households, achieving the highest F1 score. Key predictors of relocation include homeownership, extent of housing damage, and race. By integrating variable importance rankings and partial dependence plots, the study also enhances interpretability of machine learning outputs. These findings underscore the value of advanced predictive models in disaster recovery planning, particularly in geographically vulnerable regions like New Orleans where accurate relocation forecasting can guide more effective policy interventions.
Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
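The Weighted SVM in the abstract above handles class imbalance by penalizing errors on the minority class (relocated households) more heavily. As a minimal sketch of one common weighting scheme, not the authors' implementation, inverse-frequency class weights of the kind passed to a weighted classifier can be computed as:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Class weights inversely proportional to class frequency,
    mirroring scikit-learn's 'balanced' heuristic: n / (k * count_c)."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}

# Toy labels: 1 = relocated household (minority), 0 = stayed
weights = inverse_frequency_weights([0] * 8 + [1] * 2)
print(weights)  # {0: 0.625, 1: 2.5} -> minority errors weighted 4x heavier
```

With such weights, a misclassified relocated household contributes four times as much to the training loss as a misclassified stayer, which is what lets the model recover minority-class F1.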
Open Access Article
UA-HSD-2025: Multi-Lingual Hate Speech Detection from Tweets Using Pre-Trained Transformers
by Muhammad Ahmad, Muhammad Waqas, Ameer Hamza, Sardar Usman, Ildar Batyrshin and Grigori Sidorov
Computers 2025, 14(6), 239; https://doi.org/10.3390/computers14060239 - 18 Jun 2025
Abstract
The rise of social media has improved communication but also amplified the spread of hate speech, creating serious societal risks. Automated detection remains difficult due to subjectivity, linguistic diversity, and implicit language. While prior research focuses on high-resource languages, this study addresses the underexplored multilingual challenges of Arabic and Urdu hate speech through a comprehensive approach. To achieve this objective, this study makes four key contributions. First, we created a unique multi-lingual, manually annotated binary and multi-class dataset (UA-HSD-2025) sourced from X, which covers the five most important multi-class categories of hate speech. Second, we created detailed annotation guidelines to ensure a robust and consistent hate speech dataset. Third, we explore two strategies to address the challenges of multilingual data: a joint multilingual approach and a translation-based approach. The translation-based approach involves converting all input text into a single target language before applying a classifier. In contrast, the joint multilingual approach employs a unified model trained to handle multiple languages simultaneously, enabling it to classify text across different languages without translation. Finally, we conducted 54 experiments spanning machine learning models with TF-IDF features, deep learning models with pre-trained word embeddings such as FastText and GloVe, and pre-trained language models with advanced contextual embeddings. Based on the analysis of the results, our language-based model (XLM-R) outperformed traditional supervised learning approaches, achieving 0.99 accuracy in binary classification for the Arabic, Urdu, and joint-multilingual datasets, and 0.95, 0.94, and 0.94 accuracy in multi-class classification for the joint-multilingual, Arabic, and Urdu datasets, respectively.
Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
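The machine learning baselines in the abstract above rest on TF-IDF features. A hedged sketch of that representation follows; a real pipeline would use a library vectorizer and language-aware tokenization for Arabic and Urdu, and the toy documents here are illustrative:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Minimal TF-IDF: term frequency scaled by inverse document frequency,
    log(n / df). Terms present in every document get weight zero."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc.split()))
    vectors = []
    for doc in docs:
        tf = Counter(doc.split())
        total = sum(tf.values())
        vectors.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return vectors

docs = ["hate speech online", "kind words online"]
vecs = tf_idf(docs)
# "online" appears in every document, so its IDF (and weight) is zero
```

This is why TF-IDF baselines emphasize class-discriminative vocabulary, and also why they struggle with implicit hate speech that shares surface vocabulary with benign text.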
Open Access Article
Bridging the AI Gap in Medical Education: A Study of Competency, Readiness, and Ethical Perspectives in Developing Nations
by Mostafa Aboulnour Salem, Ossama M. Zakaria, Eman Abdulaziz Aldoughan, Zeyad Aly Khalil and Hazem Mohamed Zakaria
Computers 2025, 14(6), 238; https://doi.org/10.3390/computers14060238 - 17 Jun 2025
Abstract
Background: The rapid integration of artificial intelligence (AI) into medical education in developing nations necessitates that educators develop comprehensive AI competencies and readiness. This study explores AI competence and readiness among medical educators in higher education, focusing on the five key dimensions of the ADELE technique: (A) AI Awareness, (D) Development of AI Skills, (E) AI Efficacy, (L) Leanings Towards AI, and (E) AI Enforcement. Structured surveys were used to assess AI competencies and readiness among medical educators for the sustainable integration of AI in medical education. Methods: A cross-sectional study was conducted using a 40-item survey distributed to 253 educators from the Middle East (Saudi Arabia, Egypt, Jordan) and South Asia (India, Pakistan, Philippines). Statistical analyses examined variations in AI competency and readiness by gender and nationality and assessed their predictive impact on the adoption of sustainable AI in medical education. Results: The findings revealed that AI competency and readiness are the primary drivers of sustainable AI adoption, highlighting the need to bridge the gap between theoretical knowledge and practical application. No significant differences were observed based on gender or discipline, suggesting a balanced approach to AI education. However, ethical perspectives on AI integration varied between Middle East and South Asian educators, likely reflecting cultural influences. Conclusions: This study underscores the importance of advancing from foundational AI knowledge to hands-on applications while promoting responsible AI use. The ADELE technique provides a strategic approach to enhancing AI competency in medical education within developing nations, fostering both technological proficiency and ethical awareness among educators.
Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
Open Access Article
A BERT-Based Multimodal Framework for Enhanced Fake News Detection Using Text and Image Data Fusion
by Mohammed Al-alshaqi, Danda B. Rawat and Chunmei Liu
Computers 2025, 14(6), 237; https://doi.org/10.3390/computers14060237 - 16 Jun 2025
Abstract
The spread of fake news on social media is complicated by the fact that fake information spreads extremely fast in both textual and visual formats. Traditional approaches to fake news detection focus mainly on either textual or visual features in isolation, thereby missing the textual information embedded within images. In response, we propose a multimodal fake news detection method based on BERT that combines the article text with text extracted from images through Optical Character Recognition (OCR). We use BERT_base_uncased to process the combined input and produce a confidence score indicating the probability that the news is authentic. We report extensive experimental results on the ISOT, WELFAKE, TRUTHSEEKER, and ISOT_WELFAKE_TRUTHSEEKER datasets. Our proposed model demonstrates better generalization on the TRUTHSEEKER dataset with an accuracy of 99.97%, achieving substantial improvements over existing methods with an F1-score of 0.98. Experimental results indicate a potential accuracy increment of +3.35% compared to the latest baselines. These results highlight the potential of our approach to serve as a strong resource for automatic fake news detection by effectively integrating both text and visual data streams. Findings suggest that using diverse datasets enhances the resilience of detection systems against misinformation strategies.
Full article
(This article belongs to the Special Issue Recent Advances in Social Networks and Social Media)
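The fusion step described above, combining article text with OCR output before feeding a BERT model, can be sketched as follows. The `[SEP]` marker and whitespace token budget are simplifications: a real pipeline would use the model's subword tokenizer, and the sample strings are invented:

```python
def fuse_text_and_ocr(article_text, ocr_text, max_tokens=512):
    """Concatenate article text with OCR-extracted image text, separated
    by a marker token and truncated to a BERT-style length budget.
    Whitespace tokens stand in for real subword tokenization."""
    tokens = article_text.split() + ["[SEP]"] + ocr_text.split()
    return " ".join(tokens[:max_tokens])

fused = fuse_text_and_ocr("Breaking news claims a cure", "miracle cure banner")
# -> "Breaking news claims a cure [SEP] miracle cure banner"
```

The fused string is then tokenized and classified as a single sequence, which is how text hidden inside images reaches the text-only classifier.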
Open Access Article
Adaptive Congestion Detection and Traffic Control in Software-Defined Networks via Data-Driven Multi-Agent Reinforcement Learning
by Kaoutar Boussaoud, Abdeslam En-Nouaary and Meryeme Ayache
Computers 2025, 14(6), 236; https://doi.org/10.3390/computers14060236 - 16 Jun 2025
Abstract
Efficient congestion management in Software-Defined Networks (SDNs) remains a significant challenge due to dynamic traffic patterns and complex topologies. Conventional congestion control techniques based on static or heuristic rules often fail to adapt effectively to real-time network variations. This paper proposes a data-driven framework based on Multi-Agent Reinforcement Learning (MARL) to enable intelligent, adaptive congestion control in SDNs. The framework integrates two collaborative agents: a Congestion Classification Agent that identifies congestion levels using metrics such as delay and packet loss, and a Decision-Making Agent based on Deep Q-Learning (DQN or its variants), which selects the optimal actions for routing and bandwidth management. The agents are trained offline using both synthetic and real network traces (e.g., the MAWI dataset), and deployed in a simulated SDN testbed using Mininet and the Ryu controller. Extensive experiments demonstrate the superiority of the proposed system across key performance metrics. Compared to baseline controllers, including standalone DQN and static heuristics, the MARL system achieves up to 3.0% higher throughput, maintains end-to-end delay below 10 ms, and reduces packet loss by over 10% in real traffic scenarios. Furthermore, the architecture exhibits stable cumulative reward progression and balanced action selection, reflecting effective learning and policy convergence. These results validate the benefit of agent specialization and modular learning in scalable and intelligent SDN traffic engineering.
Full article
(This article belongs to the Special Issue Emerging Trends and Challenges of Software-Defined Networking (SDN) Technologies—2nd Edition)
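The Decision-Making Agent described above learns action values via Q-learning. A tabular sketch of the underlying update rule follows; the paper's agent uses a DQN rather than a table, and the action and state names here are illustrative assumptions:

```python
ACTIONS = ["reroute", "throttle", "no_op"]  # illustrative SDN control actions

def q_update(q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: move Q(s, a) toward the Bellman target
    r + gamma * max_a' Q(s', a'). A DQN approximates the same target
    with a neural network instead of a lookup table."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
q_update(q, "congested", "reroute", reward=1.0, next_state="clear")
# q[("congested", "reroute")] is now 0.5
```

Repeated over traces such as the MAWI dataset, updates like this are what produce the stable cumulative reward progression the abstract reports.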
Open Access Article
Quantum Computing in Data Science and STEM Education: Mapping Academic Trends and Analyzing Practical Tools
by Eloy López-Meneses, Jesús Cáceres-Tello, José Javier Galán-Hernández and Luis López-Catalán
Computers 2025, 14(6), 235; https://doi.org/10.3390/computers14060235 - 16 Jun 2025
Abstract
Quantum computing is emerging as a key enabler of digital transformation in data science and STEM education. This study investigates how quantum computing can be meaningfully integrated into higher education by combining a dual approach: a structured assessment of the specialized literature and a practical evaluation of educational tools. First, a science mapping study based on 281 peer-reviewed publications indexed in Scopus (2015–2024) identifies growth trends, thematic clusters, and international collaboration networks at the intersection of quantum computing, data science, and education. Second, a comparative analysis of widely used educational platforms—such as Qiskit, Quantum Inspire, QuTiP, and Amazon Braket—is conducted using pedagogical criteria including accessibility, usability, and curriculum integration. The results highlight a growing convergence between quantum technologies, artificial intelligence, and data-driven learning. A strategic framework and roadmap are proposed to support the gradual and scalable adoption of quantum literacy in university-level STEM programs.
Full article
(This article belongs to the Special Issue Recent Advances in Data Mining: Methods, Trends, and Emerging Applications)
Open Access Article
An Innovative Process Chain for Precision Agriculture Services
by Christos Karydas, Miltiadis Iatrou and Spiros Mourelatos
Computers 2025, 14(6), 234; https://doi.org/10.3390/computers14060234 - 13 Jun 2025
Abstract
In this work, an innovative process chain is set up for the regular provision of fertilization consultation services to farmers for a variety of crops, within a precision agriculture framework. The central hub of this mechanism is a geographic information system (GIS), while a 5 × 5 m point grid is the information carrier. Potential data sources include soil samples, satellite imagery, meteorological parameters, yield maps, and agronomic information. Whenever big data are available per crop, decision-making is supported by machine learning systems (MLSs). All the map data are uploaded to a farm management information system (FMIS) for visualization and storage. The recipe maps are transmitted wirelessly to variable rate technologies (VRTs) for applications in the field. To a large degree, the process chain has been automated with programming at many levels. Currently, four different service modules based on the new process chain are available in the market.
Full article
Open Access Article
Intelligent Fault Detection and Self-Healing Mechanisms in Wireless Sensor Networks Using Machine Learning and Flying Fox Optimization
by Almamoon Alauthman and Abeer Al-Hyari
Computers 2025, 14(6), 233; https://doi.org/10.3390/computers14060233 - 13 Jun 2025
Abstract
Wireless sensor networks (WSNs) play a critical role in many applications that require network reliability, such as environmental monitoring, healthcare, and industrial automation. Fault detection and self-healing are thus two effective mechanisms for addressing the challenges of node failure, communication disruption, and energy constraints faced by WSNs. This paper presents an intelligent framework that integrates a Light Gradient Boosting Machine (LGBM) for fault detection with the Flying Fox Optimization Algorithm (FFOA) for dynamic self-healing. The LGBM model provides accurate and scalable fault identification, whereas FFOA optimizes the recovery strategies to minimize downtime and maximize network resilience. An extensive performance evaluation of the developed system on a large dataset is presented and compared with state-of-the-art heuristic-based traditional methods and machine learning models. The results showed that the proposed framework achieves 94.6% fault detection accuracy, with a minimum recovery time of 120 milliseconds and network resilience of 98.5%. These results attest to the efficiency of the proposed approach in ensuring robust, adaptive, and reliable WSN operation within dynamic and resource-constrained environments.
Full article
Open Access Article
RVR Blockchain Consensus: A Verifiable, Weighted-Random, Byzantine-Tolerant Framework for Smart Grid Energy Trading
by Huijian Wang, Xiao Liu and Jining Chen
Computers 2025, 14(6), 232; https://doi.org/10.3390/computers14060232 - 13 Jun 2025
Abstract
Blockchain technology empowers decentralized transactions in smart grids, but existing consensus algorithms face efficiency and security bottlenecks under Byzantine attacks. This article proposes the RVR consensus algorithm, which innovatively integrates dynamic reputation evaluation, verifiable random function (VRF), and a weight-driven probability election mechanism to achieve (1) behavior-aware dynamic adjustment of reputation weights and (2) manipulation-resistant random leader election via VRF. Experimental verification shows that under a silence attack, the maximum latency is reduced by 37.88% compared to HotStuff, and under a forking attack, the maximum throughput is increased by 50.66%, providing an efficient and secure new paradigm for distributed energy trading.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
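The weight-driven election described above samples a leader with probability proportional to reputation, using VRF output as the randomness. A sketch of the weighting logic follows; a real VRF involves each node's secret key and a verifiable proof, so the public SHA-256 hash and the node names here are stand-in assumptions:

```python
import hashlib

def elect_leader(nodes, reputation, seed):
    """Pick a leader with probability proportional to reputation weight.
    A public SHA-256 hash of the round seed stands in for the VRF output;
    every honest node can recompute it, making the draw verifiable."""
    total = sum(reputation[n] for n in nodes)
    draw = int(hashlib.sha256(seed.encode()).hexdigest(), 16) % total
    for n in nodes:  # walk the cumulative weight intervals
        draw -= reputation[n]
        if draw < 0:
            return n

leader = elect_leader(["a", "b", "c"], {"a": 5, "b": 3, "c": 2}, "round-7")
```

Because the draw is deterministic in the seed, every node reaches the same leader independently, while reputation weighting keeps Byzantine nodes with poor behavior histories from winning often.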
Open Access Review
Immersive, Secure, and Collaborative Air Quality Monitoring
by José Marinho and Nuno Cid Martins
Computers 2025, 14(6), 231; https://doi.org/10.3390/computers14060231 - 12 Jun 2025
Abstract
Air pollution poses a serious threat to both public health and the environment, contributing to millions of premature deaths worldwide each year. The integration of augmented reality (AR), blockchain, and the Internet of Things (IoT) technologies can provide a transformative approach to collaborative air quality monitoring (AQM), enabling real-time, transparent, and intuitive access to environmental data for community awareness, behavioural change, informed decision-making, and proactive responses to pollution challenges. This article presents a unified vision of the key elements and technologies to consider when designing such AQM systems, allowing dynamic and user-friendly immersive air quality data visualization interfaces, secure and trusted data storage, fine-grained data collection through crowdsourcing, and active community learning and participation. It serves as a conceptual basis for any design and implementation of such systems.
Full article
(This article belongs to the Special Issue Harnessing the Blockchain Technology in Unveiling Futuristic Applications)
Open Access Article
Exploring the Factors Influencing AI Adoption Intentions in Higher Education: An Integrated Model of DOI, TOE, and TAM
by Rawan N. Abulail, Omar N. Badran, Mohammad A. Shkoukani and Fandi Omeish
Computers 2025, 14(6), 230; https://doi.org/10.3390/computers14060230 - 11 Jun 2025
Abstract
This study investigates the primary technological and socio-environmental factors influencing the adoption intentions of AI-powered technology at the corporate level within higher education institutions. A conceptual model based on the combined framework of the Diffusion of Innovation Theory (DOI), the Technology–Organization–Environment (TOE) framework, and the Technology Acceptance Model (TAM) was proposed and tested using data collected from 367 higher education students, faculty members, and employees. SPSS Amos 24 was used for CB-SEM to choose the best-fitting model, which proved more efficient than traditional multiple regression analysis for examining the relationships among the proposed constructs, ensuring model fit and statistical robustness. The findings reveal that Compatibility "C", Complexity "CX", User Interface "UX", Perceived Ease of Use "PEOU", User Satisfaction "US", Performance Expectation "PE", Artificial Intelligence "AI" Introducing New Tools "AINT", AI Strategic Alignment "AIS", Availability of Resources "AVR", Technological Support "TS", and Facilitating Conditions "FC" significantly impact AI adoption intentions, while Competitive Pressure "COP" and Government Regulations "GOR" do not. Demographic factors, including major and years of experience, moderated these associations, with large differences across educational backgrounds and experience levels.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Article
LSTM-Based Music Generation Technologies
by Yi-Jen Mon
Computers 2025, 14(6), 229; https://doi.org/10.3390/computers14060229 - 11 Jun 2025
Abstract
In deep learning, Long Short-Term Memory (LSTM) is a well-established and widely used approach for music generation. Nevertheless, creating musical compositions that match the quality of those created by human composers remains a formidable challenge. The intricate nature of musical components, including pitch, intensity, rhythm, notes, chords, and more, necessitates the extraction of these elements from extensive datasets, making the preliminary work arduous. To address this, we employed various tools to deconstruct the musical structure, conduct step-by-step learning, and then reconstruct it. This article primarily presents the techniques for dissecting musical components in the preliminary phase. Subsequently, it introduces the use of LSTM to build a deep learning network architecture, enabling the learning of musical features and temporal coherence. Finally, through in-depth analysis and comparative studies, this paper validates the efficacy of the proposed research methodology, demonstrating its ability to capture musical coherence and generate compositions with similar styles.
Full article
(This article belongs to the Special Issue Machine Learning: Techniques, Industry Applications, Code Sharing, and Future Trends)
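The preliminary phase the abstract describes, deconstructing musical structure before LSTM training, typically reduces to slicing note sequences into sliding-window (input, next-note) pairs. A minimal sketch with invented note names (the window length and representation are illustrative assumptions, not the paper's exact setup):

```python
def make_training_pairs(notes, window=4):
    """Slice a note sequence into (input window, next note) pairs,
    the standard setup for next-note prediction with an LSTM."""
    return [(notes[i:i + window], notes[i + window])
            for i in range(len(notes) - window)]

pairs = make_training_pairs(["C4", "E4", "G4", "C5", "E5", "G5"])
# first pair: (["C4", "E4", "G4", "C5"], "E5")
```

Each window becomes one training example, so the network learns temporal coherence by repeatedly predicting the element that follows a short context.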
Open Access Article
Quantum Classification Outside the Promised Class
by Theodore Andronikos, Constantinos Bitsakos, Konstantinos Nikas, Georgios I. Goumas and Nectarios Koziris
Computers 2025, 14(6), 228; https://doi.org/10.3390/computers14060228 - 10 Jun 2025
Abstract
This paper studies the important problem of quantum classification of Boolean functions from an entirely novel perspective. Typically, quantum classification algorithms allow us to classify functions with a given probability if we are promised that they meet specific unique properties. The primary objective of this study is to explore whether it is feasible to obtain any insights when the input function deviates from the promised class. For concreteness, we use a recently introduced quantum algorithm that is designed to classify a large class of imbalanced Boolean functions using just a single oracular query. First, we establish a completely new concept characterizing "nearness" between Boolean functions. Utilizing this concept, we show that, as long as the unknown function is close enough to the promised class, it is still possible to obtain useful information about its behavioral pattern from the classification algorithm. In this regard, the current study is among the first to provide evidence that applying quantum classification algorithms to functions outside the promised class can still yield a glimpse of important information.
Full article
Open Access Article
Deploying a Mental Health Chatbot in Higher Education: The Development and Evaluation of Luna, an AI-Based Mental Health Support System
by Phillip Olla, Ashlee Barnes, Lauren Elliott, Mustafa Abumeeiz, Venus Olla and Joseph Tan
Computers 2025, 14(6), 227; https://doi.org/10.3390/computers14060227 - 10 Jun 2025
Abstract
Rising mental health challenges among postsecondary students have increased the demand for scalable, ethical solutions. This paper presents the design, development, and safety evaluation of Luna, a GPT-4-based mental health chatbot. Built using a modular PHP architecture, Luna integrates multi-layered prompt engineering, safety guardrails, and referral logic. The Institutional Review Board (IRB) at the University of Detroit Mercy (Protocol #23-24-38) reviewed the proposed study and deferred full human subject approval, requesting technical validation prior to deployment. In response, we conducted a pilot test with a variety of users—including clinicians and students who simulated at-risk student scenarios. Results indicated that 96% of expert interactions were deemed safe, and 90.4% of prompts were considered useful. This paper describes Luna’s architecture, prompt strategy, and expert feedback, concluding with recommendations for future human research trials.
Full article
(This article belongs to the Special Issue Transformative Approaches in Education: Harnessing AI, Augmented Reality, and Virtual Reality for Innovative Teaching and Learning)
Open Access Systematic Review
Ethereum Smart Contracts Under Scrutiny: A Survey of Security Verification Tools, Techniques, and Challenges
by Mounira Kezadri Hamiaz and Maha Driss
Computers 2025, 14(6), 226; https://doi.org/10.3390/computers14060226 - 9 Jun 2025
Abstract
Smart contracts are self-executing programs that facilitate trustless transactions between multiple parties, most commonly deployed on the Ethereum blockchain. They have become integral to decentralized applications in areas such as voting, digital agreements, and financial systems. However, the immutable and transparent nature of smart contracts makes security vulnerabilities especially critical, as deployed contracts cannot be modified. Security flaws have led to substantial financial losses, underscoring the need for robust verification before deployment. This survey presents a comprehensive review of the state of the art in smart contract security verification, with a focus on Ethereum. We analyze a wide range of verification methods, including static and dynamic analysis, formal verification, and machine learning, and evaluate 62 open-source tools across their detection accuracy, efficiency, and usability. In addition, we highlight emerging trends, challenges, and the need for cross-methodological integration and benchmarking. Our findings aim to guide researchers, developers, and security auditors in selecting and advancing effective verification approaches for building secure and reliable smart contracts.
Full article
(This article belongs to the Special Issue Blockchain Technology—a Breakthrough Innovation for Modern Industries (2nd Edition))
Open Access Article
Ensemble-Based Biometric Verification: Defending Against Multi-Strategy Deepfake Image Generation
by Hilary Zen, Rohan Wagh, Miguel Wanderley, Gustavo Bicalho, Rachel Park, Megan Sun, Rafael Palacios, Lucas Carvalho, Guilherme Rinaldo and Amar Gupta
Computers 2025, 14(6), 225; https://doi.org/10.3390/computers14060225 - 9 Jun 2025
Abstract
Deepfake images, synthetic images created with digital software, continue to pose a serious threat to online platforms. This is especially relevant for biometric verification systems, as deepfakes that attempt to bypass such measures increase the risk of impersonation, identity theft, and scams. Although research on deepfake image detection has produced many high-performing classifiers, these commonly used detection models often lack generalizability across different methods of deepfake generation. For companies and governments fighting identity fraud, this lack of generalization is challenging, as malicious actors may use a variety of deepfake image-generation methods available through online wrappers. This work explores whether combining multiple classifiers into an ensemble model can improve generalization without losing performance across different generation methods. It also surveys current methods of deepfake image generation, with a focus on publicly available and easily accessible methods. We compare our framework against its underlying models to show how companies can better respond to emerging deepfake generation methods.
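A minimal sketch of the soft-voting idea behind such an ensemble: average the per-model "fake" probabilities, then threshold. The model names and scores below are hypothetical stand-ins for real deepfake classifiers, not the paper's actual base models.

```python
# Soft-voting ensemble sketch: each base model reports a probability that
# the input image is a deepfake; the ensemble averages them.
def ensemble_fake_probability(model_scores: dict[str, float]) -> float:
    """Average the probability-of-fake reported by each base model."""
    return sum(model_scores.values()) / len(model_scores)

def is_fake(model_scores: dict[str, float], threshold: float = 0.5) -> bool:
    return ensemble_fake_probability(model_scores) >= threshold

# One model misses a particular generation method (0.2), but the
# other two catch it, so the ensemble still flags the image.
scores = {"model_a": 0.2, "model_b": 0.9, "model_c": 0.8}
print(is_fake(scores))
```

The appeal for generalization is exactly this: a generation method that evades one detector rarely evades all of them at once.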
Full article
Open Access Article
Gamifying Sociological Surveys Through Serious Games—A Data Analysis Approach Applied to Multiple-Choice Question Responses Datasets
by Alexandros Gazis and Eleftheria Katsiri
Computers 2025, 14(6), 224; https://doi.org/10.3390/computers14060224 - 7 Jun 2025
Abstract
E-polis is a serious digital game designed to gamify sociological surveys studying young people’s political opinions. In this platform game, players navigate a digital world, encountering quests that pose sociological questions. Players’ answers shape the city-game world, altering building structures based on their choices. E-polis is a serious game, not a government simulation: it aims to understand players’ behaviors and opinions, so we do not train players but rather seek to understand them and help them visualize how their choices shape a city’s future. Notably, there are no correct or incorrect answers. Moreover, our game utilizes a novel middleware architecture for development, diverging from the typical segregation of assets, prefabs, scenes, and scripts. This article presents the data layer of our game’s middleware, focusing on the analysis of respondents’ gameplay answers. E-polis represents an innovative approach to gamifying sociological research, providing a unique platform for gathering and analyzing data on political opinions among youth and contributing to the broader field of serious games.
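A hypothetical sketch of the kind of aggregation such a data layer performs over multiple-choice responses: tallying, per question, how often each option was chosen across gameplay sessions. The question keys and answers below are made up for illustration.

```python
from collections import Counter

# Each dict is one player's recorded multiple-choice answers.
responses = [
    {"q1": "A", "q2": "C"},
    {"q1": "B", "q2": "C"},
    {"q1": "A", "q2": "B"},
]

def answer_distribution(responses, question):
    """Count how often each option was chosen for one question,
    skipping players who did not reach that quest."""
    return Counter(r[question] for r in responses if question in r)

print(answer_distribution(responses, "q1"))
```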
Full article
Open Access Article
An Efficient Hybrid 3D Computer-Aided Cephalometric Analysis for Lateral Cephalometric and Cone-Beam Computed Tomography (CBCT) Systems
by Laurine A. Ashame, Sherin M. Youssef, Mazen Nabil Elagamy and Sahar M. El-Sheikh
Computers 2025, 14(6), 223; https://doi.org/10.3390/computers14060223 - 7 Jun 2025
Abstract
Lateral cephalometric analysis is commonly used in orthodontics for skeletal classification to ensure an accurate and reliable diagnosis for treatment planning. However, most current research depends on analyzing different types of radiographs, which requires more computational time than 3D analysis. Consequently, this study addresses fully automatic orthodontic tracing based on artificial intelligence (AI) applied to 2D and 3D images, by designing a cephalometric system that analyzes the significant landmarks and regions of interest (ROI) needed in orthodontic tracing, especially for the mandible and maxilla teeth. In this research, a computerized system is developed to automate orthodontic evaluation tasks for both 2D and Cone-Beam Computed Tomography (CBCT, or 3D) measurements. This work was tested on a dataset containing images of males and females obtained from dental hospitals with patient-informed consent. The dataset consists of 2D lateral cephalometric, panorama, and CBCT radiographs. Many scenarios were applied to test the proposed system in landmark prediction and detection. Moreover, this study integrates the Grad-CAM (Gradient-Weighted Class Activation Mapping) technique to generate heat maps, providing transparent visualization of the regions the model focuses on during its decision-making process. By enhancing the interpretability of deep learning predictions, Grad-CAM strengthens clinical confidence in the system’s outputs, ensuring that ROI detection aligns with orthodontic diagnostic standards. This explainability is crucial in medical AI applications, where understanding model behavior is as important as achieving high accuracy. The experimental results achieved an accuracy exceeding 98.9%. This research evaluates and differentiates between the two-dimensional and three-dimensional tracing analyses applied to measurements based on the practices of the European Board of Orthodontics. The results demonstrate the proposed methodology’s robustness when applied to cephalometric images. Furthermore, the evaluation of 3D analysis provides a clear understanding of the significance of integrating deep-learning techniques in orthodontics.
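For readers unfamiliar with Grad-CAM, a sketch of its final weighting step only: each channel's weight is the global average of the class gradient over that activation map, and the heat map is the ReLU of the weighted sum. Real Grad-CAM obtains these gradients by backpropagation through a CNN; the tiny 2x2 maps below are made-up stand-ins.

```python
def grad_cam(activations, gradients):
    """activations/gradients: parallel lists of channels, each a 2D list.
    Returns the ReLU of the gradient-weighted sum of activation maps."""
    heat = [[0.0] * len(activations[0][0]) for _ in activations[0]]
    for act, grad in zip(activations, gradients):
        cells = [g for row in grad for g in row]
        alpha = sum(cells) / len(cells)  # global-average-pooled gradient
        for i, row in enumerate(act):
            for j, a in enumerate(row):
                heat[i][j] += alpha * a
    return [[max(0.0, v) for v in row] for row in heat]  # ReLU

# Two hypothetical 2x2 channels: the second has negative gradients,
# so its contribution is suppressed by the ReLU.
acts  = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.5, 0.5], [0.5, 0.5]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(acts, grads))  # [[0.5, 0.0], [0.0, 1.0]]
```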
Full article
(This article belongs to the Special Issue Machine Learning Applications in Pattern Recognition)
Open Access Article
Evaluating the Predictive Power of Software Metrics for Fault Localization
by Issar Arab, Kenneth Magel and Mohammed Akour
Computers 2025, 14(6), 222; https://doi.org/10.3390/computers14060222 - 6 Jun 2025
Abstract
Fault localization remains a critical challenge in software engineering, directly impacting debugging efficiency and software quality. This study investigates the predictive power of various software metrics for fault localization by framing the task as a multi-class classification problem and evaluating it using the Defects4J dataset. We fitted thousands of models and benchmarked different algorithms—including deep learning, Random Forest, XGBoost, and LightGBM—to choose the best-performing model. To enhance model transparency, we applied explainable AI techniques to analyze feature importance. The results revealed that test suite metrics consistently outperform static and dynamic metrics, making them the most effective predictors for identifying faulty classes. These findings underscore the critical role of test quality and coverage in automated fault localization. By combining machine learning with transparent feature analysis, this work delivers practical insights to support more efficient debugging workflows. It lays the groundwork for an iterative process that integrates metric-based predictive models with large language models (LLMs), enabling future systems to automatically generate targeted test cases for the most fault-prone components, which further enhances the automation and precision of software testing.
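A toy sketch of the framing only: each class in the codebase becomes a row of software metrics with a fault label. The study's actual models (Random Forest, XGBoost, LightGBM, deep learning) and explainable-AI analysis are swapped here for a crude importance proxy, the normalized gap between metric means in faulty versus clean classes; all metric values below are made up.

```python
samples = [
    # (test_coverage, cyclomatic_complexity, loc), faulty?
    ((0.9, 3.0, 120.0), 0),
    ((0.2, 9.0, 400.0), 1),
    ((0.8, 4.0, 150.0), 0),
    ((0.3, 8.0, 380.0), 1),
]
names = ["test_coverage", "cyclomatic_complexity", "loc"]

def rank_metrics(samples, names):
    """Rank metrics by how far their means differ between faulty and
    clean classes, normalized by each metric's largest magnitude."""
    faulty = [f for f, y in samples if y == 1]
    clean = [f for f, y in samples if y == 0]
    scores = {}
    for i, name in enumerate(names):
        mean_faulty = sum(f[i] for f in faulty) / len(faulty)
        mean_clean = sum(f[i] for f in clean) / len(clean)
        spread = max(abs(f[i]) for f, _ in samples)
        scores[name] = abs(mean_faulty - mean_clean) / spread
    return sorted(scores, key=scores.get, reverse=True)

print(rank_metrics(samples, names))
```

On this fabricated data the test-suite metric comes out on top, mirroring (but in no way validating) the paper's finding that test suite metrics are the strongest predictors.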
Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
Open Access Article
Reducing Delivery Times by Utilising On-Site Wire Arc Additive Manufacturing with Digital-Twin Methods
by Stefanie Sell, Kevin Villani and Marc Stautner
Computers 2025, 14(6), 221; https://doi.org/10.3390/computers14060221 - 6 Jun 2025
Abstract
The increasing demand for smaller batch sizes and mass customisation in production poses considerable challenges to logistics and manufacturing efficiency. Conventional methodologies are unable to address the need for expeditious, cost-effective distribution of premium-quality products tailored to individual specifications. Additionally, the reliability and resilience of global logistics chains are increasingly under pressure. Additive manufacturing is regarded as a potentially viable solution to these problems, as it enables on-demand, on-site production, with reduced resource usage in production. Nevertheless, there are still significant challenges to be addressed, including the assurance of product quality and the optimisation of production processes with respect to time and resource efficiency. This article examines the potential of integrating digital twin methodologies to establish a fully digital and efficient process chain for on-site additive manufacturing. This study focuses on wire arc additive manufacturing (WAAM), a technology that has been successfully implemented in the on-site production of naval ship propellers and excavator parts. The proposed approach aims to enhance process planning efficiency, reduce material and energy consumption, and minimise the expertise required for operational deployment by leveraging digital twin methodologies. The present paper details the current state of research in this domain and outlines a vision for a fully virtualised process chain, highlighting the transformative potential of digital twin technologies in advancing on-site additive manufacturing. In this context, various aspects and components of a digital twin framework for wire arc additive manufacturing are examined regarding their necessity and applicability. The overarching objective of this paper is to conduct a preliminary investigation for the implementation and further development of a comprehensive DT framework for WAAM. 
Utilising a real-world sample, currently available process steps are validated and remaining technical gaps are identified.
Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
Topics
Topic in
Applied Sciences, Computers, Electronics, Sensors, Virtual Worlds, IJGI
Simulations and Applications of Augmented and Virtual Reality, 2nd Edition
Topic Editors: Radu Comes, Dorin-Mircea Popovici, Calin Gheorghe Dan Neamtu, Jing-Jing Fang
Deadline: 20 June 2025
Topic in
Applied Sciences, Automation, Computers, Electronics, Sensors, JCP, Mathematics
Intelligent Optimization, Decision-Making and Privacy Preservation in Cyber–Physical Systems
Topic Editors: Lijuan Zha, Jinliang Liu, Jian Liu
Deadline: 31 August 2025
Topic in
Animals, Computers, Information, J. Imaging, Veterinary Sciences
AI, Deep Learning, and Machine Learning in Veterinary Science Imaging
Topic Editors: Vitor Filipe, Lio Gonçalves, Mário Ginja
Deadline: 31 October 2025
Topic in
Applied Sciences, Computers, Electronics, JSAN, Technologies
Emerging AI+X Technologies and Applications
Topic Editors: Byung-Seo Kim, Hyunsik Ahn, Kyu-Tae Lee
Deadline: 31 December 2025
Special Issues
Special Issue in
Computers
Intelligent Edge: When AI Meets Edge Computing
Guest Editor: Riduan Abid
Deadline: 30 June 2025
Special Issue in
Computers
Future Systems Based on Healthcare 5.0 for Pandemic Preparedness 2024
Guest Editor: Xuhui Chen
Deadline: 30 June 2025
Special Issue in
Computers
Machine Learning Applications in Pattern Recognition
Guest Editor: Xiaochen Lu
Deadline: 30 June 2025
Special Issue in
Computers
When Natural Language Processing Meets Machine Learning—Opportunities, Challenges and Solutions
Guest Editors: Lu Bai, Huiru Zheng, Zhibao Wang
Deadline: 30 June 2025