Computers, Volume 14, Issue 3 (March 2025) – 36 articles

Cover Story: This paper presents a comprehensive overview of Intrusion Detection Systems (IDSs) for computer networking security, addressing the following topics: IDS architectures and types, key detection techniques, datasets, test environments, implementations in modern network environments, current challenges, limitations, and emerging trends. It also discusses the major components of computer networks, common models and their characteristics, and vulnerabilities. This paper differentiates itself from prior reviews by considering a broad scope of IDS technologies, including advanced technologies such as AI and blockchain, and by providing a comprehensive dataset analysis. Key challenges highlighted in the overview include reducing false positives, handling encrypted traffic, improving energy efficiency, and enhancing resilience against adversarial attacks to meet evolving cybersecurity demands.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Although papers are published in both HTML and PDF forms, PDF is the official format. To view a paper in PDF format, click its "PDF Full-text" link and open the file with the free Adobe Reader.
35 pages, 16100 KiB  
Article
Algorithmic Generation of Realistic 3D Graphics for Liquid Surfaces Within Arbitrary-Form Vessels in a Virtual Laboratory and Application in Flow Simulation
by Dimitrios S. Karpouzas, Vasilis Zafeiropoulos and Dimitris Kalles
Computers 2025, 14(3), 112; https://doi.org/10.3390/computers14030112 - 20 Mar 2025
Abstract
Hellenic Open University has developed Onlabs, a virtual biology laboratory designed to safely and effectively prepare its students for hands-on work in the university’s on-site labs. This platform simulates key experimental processes, such as 10X TBE solution preparation, agarose gel preparation and electrophoresis, which involve liquid transfers between bottles. However, accurately depicting liquid volumes and their flow within complex-shaped laboratory vessels, such as Erlenmeyer flasks and burettes, remains a challenge. This paper addresses this limitation by introducing a unified parametric framework for modeling circular cross-section pipes, including straight pipes with a constant diameter, curved pipes with a constant diameter, and straight conical pipes. Analytical expressions are developed to define the position and orientation of points along a pipe’s central axis, as well as the surface geometry of composite pipes formed by combining these elements in planar configurations. Moreover, the process of surface discretization with finite triangular elements is analyzed with the aim of optimizing their representation during the algorithmic implementation. The functions relating the filled length to the liquid volume are developed for each considered container shape. Finally, the methodology for handling and combining the analytical expressions during the filling of a composite pipe is explained, the filling of certain characteristic bottles is implemented, and the results of these implementations are presented. The primary goal is to enable the precise algorithmic generation of 3D graphics representing the surfaces of liquids within various laboratory vessels and, subsequently, the simulation of their flow. By leveraging these parametric models, liquid volumes can be accurately visualized in a way that reflects each vessel’s geometry, improving the realism of the simulations and allowing the filling of various vessels to be simulated realistically. Full article
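The fill functions described in the abstract map a filled length along the vessel's axis to a liquid volume; rendering the liquid surface at a given volume then amounts to inverting that map. A minimal sketch of the idea for a straight conical pipe, inverting the analytical volume function by bisection (this is an illustration under assumed geometry, not the paper's implementation; all names are ours):

```python
import math

def frustum_volume(h, r0, slope):
    """Liquid volume filled to height h in a straight conical pipe whose
    circular cross-section radius grows linearly: r(z) = r0 + slope*z.
    V = ∫_0^h π r(z)^2 dz = π (r0^2 h + r0*slope*h^2 + slope^2 h^3 / 3)."""
    return math.pi * (r0 ** 2 * h + r0 * slope * h ** 2 + slope ** 2 * h ** 3 / 3)

def height_for_volume(V, r0, slope, h_max, tol=1e-9):
    """Invert the fill function by bisection: find h with volume(h) ≈ V.
    Works because the volume is strictly increasing in h."""
    lo, hi = 0.0, h_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if frustum_volume(mid, r0, slope) < V:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The same inversion applies to any of the pipe elements, since each has a monotone length-to-volume function; composite pipes chain these functions segment by segment.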

23 pages, 5670 KiB  
Article
A Conceptual Study of Rapidly Reconfigurable and Scalable Optical Convolutional Neural Networks Based on Free-Space Optics Using a Smart Pixel Light Modulator
by Young-Gu Ju
Computers 2025, 14(3), 111; https://doi.org/10.3390/computers14030111 - 20 Mar 2025
Abstract
The smart-pixel-based optical convolutional neural network was proposed to improve kernel refresh rates in scalable optical convolutional neural networks (CNNs) by replacing the spatial light modulator with a smart pixel light modulator while preserving benefits such as an unlimited input node size, cascadability, and direct kernel representation. The smart pixel light modulator enhances weight update speed, enabling rapid reconfigurability. Its fast updating capability and memory expand the application scope of scalable optical CNNs, supporting operations like convolution with multiple kernel sets and difference mode. Simplifications using electrical fan-out reduce hardware complexity and costs. An evolution of this system, the smart-pixel-based bidirectional optical CNN, employs a bidirectional architecture and single lens-array optics, achieving a computational throughput of 8.3 × 10¹⁴ MAC/s with a smart pixel light modulator resolution of 3840 × 2160. Further advancements led to the two-mirror-like smart-pixel-based bidirectional optical CNN, which emulates 2n layers using only two physical layers, significantly reducing hardware requirements despite increased time delay. This architecture was demonstrated for solving partial differential equations by leveraging local interactions as a sequence of convolutions. These advancements position smart-pixel-based optical CNNs and their derivatives as promising solutions for future CNN applications. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)

40 pages, 3307 KiB  
Article
A Novel Approach to Efficiently Verify Sequential Consistency in Concurrent Programs
by Mohammed H. Abdulwahhab, Parosh Aziz Abdulla and Karwan Jacksi
Computers 2025, 14(3), 110; https://doi.org/10.3390/computers14030110 - 19 Mar 2025
Abstract
Verifying sequential consistency (SC) in concurrent programs is computationally challenging due to the exponential growth of possible interleavings among read and write operations. Many of these interleavings produce identical outcomes, rendering exhaustive verification approaches inefficient and computationally expensive, especially as thread counts increase. To mitigate this challenge, this study introduces a novel approach that efficiently verifies SC by identifying a minimal subset of valid event orderings. The proposed method iteratively focuses on ordering write events and evaluates their compatibility with SC conditions, including program order, read-from (rf) relations, and SC semantics, thereby significantly reducing redundant computations. Corresponding read events are subsequently integrated according to program order once the validity of the write events has been confirmed, enabling rapid identification of violations of the SC criteria. Three algorithmic variants of this approach were developed and empirically evaluated. The final variant exhibited superior performance, achieving substantial improvements in execution time, ranging from 31.919% to 99.992%, compared with the best existing practical SC verification algorithms. Additionally, comparative experiments demonstrated that the proposed approach consistently outperforms other state-of-the-art methods in both efficiency and scalability. Full article
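The SC conditions the abstract names (program order, read-from relations, and reads seeing the latest write) can be illustrated by a brute-force validity check on one candidate total order of events; the paper's contribution is precisely to avoid enumerating such orders wholesale by ordering write events first. A hedged sketch, with an event encoding invented for illustration:

```python
def is_sc_execution(order, po, rf, writes):
    """Check whether one total order of events is sequentially consistent.
    order:  list of event ids (a candidate interleaving)
    po:     list of (before, after) program-order pairs within each thread
    rf:     read id -> id of the write it reads from
    writes: write id -> variable written
    """
    pos = {e: i for i, e in enumerate(order)}
    # 1) program order must be preserved in the total order
    if any(pos[a] >= pos[b] for a, b in po):
        return False
    # 2) each read must return the most recent preceding write to its variable
    for r, w in rf.items():
        if pos[w] >= pos[r]:          # the write it reads must come before it
            return False
        var = writes[w]
        for w2, var2 in writes.items():
            # no other write to the same variable may intervene
            if var2 == var and pos[w] < pos[w2] < pos[r]:
                return False
    return True
```

Exhaustive verification would run this check over every permutation of events; ordering only the writes and attaching reads afterwards, as the paper does, prunes the vast majority of equivalent permutations.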

15 pages, 2030 KiB  
Article
Transformer-Based Student Engagement Recognition Using Few-Shot Learning
by Wejdan Alarefah, Salma Kammoun Jarraya and Nihal Abuzinadah
Computers 2025, 14(3), 109; https://doi.org/10.3390/computers14030109 - 18 Mar 2025
Abstract
Improving the recognition of online learning engagement is a critical issue in educational information technology, due to the complexities of student behavior and varying assessment standards. Additionally, the scarcity of publicly available datasets for engagement recognition exacerbates this challenge. The majority of existing methods for detecting student engagement require significant amounts of annotated data to capture variations in behaviors and interaction patterns. To address these limitations, we investigate few-shot learning (FSL) techniques to reduce the dependency on extensive training data. Transformer-based models have shown strong results on video-based facial recognition tasks, paving new ground for understanding complicated patterns. In this research, we propose an innovative FSL model that employs a prototypical network with a vision transformer (ViT) model pre-trained on a face recognition dataset (e.g., MS1MV2) for spatial feature extraction, followed by an LSTM layer for temporal feature extraction. This approach effectively addresses the challenge of limited labeled data in engagement recognition. Our proposed approach achieves state-of-the-art performance on the EngageNet dataset, demonstrating its efficacy and potential for advancing engagement recognition research. Full article
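A prototypical network of the kind described here classifies a query by its distance to per-class mean ("prototype") embeddings computed from the few labeled support examples. A minimal sketch of that final classification step (the ViT/LSTM feature extractor is omitted; embeddings and labels below are illustrative):

```python
import math

def prototypes(support):
    """support: label -> list of embedding vectors.
    Returns the mean (prototype) vector for each class."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return protos

def classify(query, protos):
    """Assign the query embedding to the nearest prototype (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda label: dist(query, protos[label]))
```

Because only a handful of support embeddings are needed per class, this is what lets the model sidestep the large annotated datasets that conventional engagement classifiers require.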
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)

22 pages, 2490 KiB  
Article
Developing a Crowdsourcing Digital Repository for Natural and Cultural Heritage Preservation and Promotion: A Report on the Experience in Zakynthos Island (Greece)
by Stergios Palamas, Yorghos Voutos, Katerina Kabassi and Phivos Mylonas
Computers 2025, 14(3), 108; https://doi.org/10.3390/computers14030108 - 17 Mar 2025
Abstract
The present study discusses the design and development of a digital repository for the preservation and dissemination of the cultural and natural heritage of Zakynthos Island (Greece). Following a crowdsourcing approach, the platform allows users to actively contribute to its content while aiming to integrate scattered information from other related initiatives. The platform is based on a popular Content Management System (CMS) that provides the core functionality, extended through the CMS’s API with additional, personalized functionality for end-users, such as organizing content into thematic routes. The system also features a web application, mainly targeting users visiting the island of Zakynthos, developed exclusively with open web technologies and JavaScript frameworks. The web application is an alternative, map-centered, mobile-optimized front-end for the platform’s content featured in the CMS. A RESTful API is also provided, allowing integration with third-party systems and web applications, thereby expanding the repository’s reach and capabilities. Content delivery is personalized based on users’ profiles, locations, and preferences, enhancing engagement and usability. By integrating these features, the repository effectively preserves the unique cultural and natural heritage of Zakynthos and makes it accessible to both local and global audiences. Full article

19 pages, 643 KiB  
Article
Hybrid Deep Neural Network Optimization with Particle Swarm and Grey Wolf Algorithms for Sunburst Attack Detection
by Mohammad Almseidin, Amjad Gawanmeh, Maen Alzubi, Jamil Al-Sawwa, Ashraf S. Mashaleh and Mouhammd Alkasassbeh
Computers 2025, 14(3), 107; https://doi.org/10.3390/computers14030107 - 17 Mar 2025
Abstract
Deep Neural Networks (DNNs) have been widely used to solve complex problems in natural language processing, image classification, and autonomous systems. The strength of DNNs is derived from their ability to model complex functions and to improve detection engines through deeper architectures. Despite the strengths of DNN engines, they present several crucial challenges, such as the number of hidden layers, the learning rate, and the neuron weights. These parameters play a crucial role in the ability of DNNs to detect anomalies, and optimizing them could improve the detection engine and expand the utilization of DNNs in various areas of application. Bio-inspired optimization algorithms, especially Particle Swarm Optimization (PSO) and the Grey Wolf Optimizer (GWO), have been widely used to optimize complex tasks because of their ability to explore the search space and their fast convergence. Despite the significant successes of PSO and GWO, there remains a gap in the literature regarding their hybridization and application in Intrusion Detection Systems (IDSs), such as Sunburst attack detection, especially using DNNs. Therefore, in this paper, we introduce a hybrid detection model that integrates PSO and GWO to improve the DNN architecture for detecting the Sunburst attack. The PSO algorithm was used to optimize the learning rate and the number of hidden layers, while the GWO algorithm was used to optimize the neuron weights. The hybrid model was tested and evaluated on an open-source Sunburst attack dataset. The results demonstrate the effectiveness and robustness of the suggested hybrid DNN model. Furthermore, an extensive analysis was conducted by evaluating the suggested hybrid PSO–GWO model alongside other hybrid optimization techniques, namely the Genetic Algorithm (GA), Differential Evolution (DE), and Ant Colony Optimization (ACO). The results demonstrate that the suggested hybrid model outperformed the other optimization techniques in terms of accuracy, precision, recall, and F1-score. Full article
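The PSO half of the hybrid can be sketched as a standard particle swarm loop over a box-bounded search space whose two dimensions stand for the learning rate and the number of hidden layers, with the DNN's validation loss as the objective. The sketch below substitutes a toy objective and invented parameter names; it is not the authors' code:

```python
import random

def pso(objective, bounds, n_particles=10, iters=50, w=0.6, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization) over a box-bounded space.
    bounds: list of (lo, hi) per dimension, e.g. [(1e-4, 1e-1), (1, 10)]
    for (learning rate, hidden-layer count)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                     # per-particle best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])       # clamp to the search box
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, `objective` would train or evaluate a candidate DNN configuration; the GWO stage then refines the neuron weights of the selected architecture.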

23 pages, 428 KiB  
Article
EnterpriseAI: A Transformer-Based Framework for Cost Optimization and Process Enhancement in Enterprise Systems
by Shinoy Vengaramkode Bhaskaran
Computers 2025, 14(3), 106; https://doi.org/10.3390/computers14030106 - 16 Mar 2025
Abstract
Coordination among multiple interdependent processes and stakeholders, together with the allocation of optimal resources, makes enterprise systems management a challenging process. Even experienced professionals commonly introduce inefficiencies and escalate operational costs. This paper introduces EnterpriseAI, a novel transformer-based framework designed to automate enterprise system management. The transformer model has been designed and customized to reduce manual effort, minimize errors, and enhance resource allocation. Moreover, it assists in decision making by incorporating all the interdependent and independent variables associated with a matter. Together, these capabilities lead to significant cost savings across organizational workflows. A unique dataset has been derived in this study from real-world enterprise scenarios. Using a transfer learning approach, the EnterpriseAI transformer has been trained to analyze complex operational dependencies and deliver context-aware solutions related to enterprise systems. The experimental results demonstrate EnterpriseAI’s effectiveness, achieving an accuracy of 92.1%, a precision of 92.5%, and a recall of 91.8%, with a perplexity score of 14. These results reflect EnterpriseAI’s ability to respond accurately to queries. Scalability and resource utilization tests show that the framework significantly reduces resource consumption while adapting to demand. Most importantly, it reduces operational costs while enhancing the operational flow of the business. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)

27 pages, 8269 KiB  
Article
Evaluating Optimal Deep Learning Models for Freshness Assessment of Silver Barb Through Technique for Order Preference by Similarity to Ideal Solution with Linear Programming
by Atchara Choompol, Sarayut Gonwirat, Narong Wichapa, Anucha Sriburum, Sarayut Thitapars, Thanakorn Yarnguy, Noppakun Thongmual, Waraporn Warorot, Kiatipong Charoenjit and Ronnachai Sangmuenmao
Computers 2025, 14(3), 105; https://doi.org/10.3390/computers14030105 - 16 Mar 2025
Abstract
Automating fish freshness assessment is crucial for ensuring quality control and operational efficiency in large-scale fish processing. This study evaluates deep learning models for classifying the freshness of Barbonymus gonionotus (Silver Barb) and optimizing their deployment in an automated fish quality sorting system. Three lightweight deep learning architectures, MobileNetV2, MobileNetV3, and EfficientNet Lite2, were analyzed across 18 different configurations, varying model size (Small, Medium, Large) and preprocessing methods (With and Without Preprocessing). A dataset comprising 1200 images, categorized into three freshness levels, was collected from the Lam Pao Dam in Thailand. To enhance classification performance, You Only Look Once version 8 (YOLOv8) was utilized for object detection and image preprocessing. The models were evaluated based on classification accuracy, inference speed, and computational efficiency, with Technique for Order Preference by Similarity to Ideal Solution with Linear Programming (TOPSIS-LP) applied as a multi-criteria decision-making approach. The results indicated that the MobileNetV3 model with a large parameter size and preprocessing (M2-PL-P) achieved the highest closeness coefficient (CC) score, with an accuracy of 98.33% and an inference speed of 6.95 frames per second (fps). This study establishes a structured framework for integrating AI-driven fish quality assessment into fishery-based community enterprises, improving productivity and reducing reliance on manual sorting processes. Full article
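TOPSIS, the decision method named in the abstract, ranks each model configuration by its closeness coefficient: the ratio of its distance from the anti-ideal solution to the sum of its distances from the ideal and anti-ideal solutions. A minimal sketch of that computation (the criteria, weights, and values below are illustrative, not the paper's data, and the LP weighting step is omitted):

```python
import math

def topsis(matrix, weights, benefit):
    """Closeness coefficients for TOPSIS ranking.
    matrix[i][j]: score of alternative i on criterion j
    weights[j]:   criterion weight
    benefit[j]:   True if larger is better (e.g. accuracy, fps)
    Returns one coefficient in [0, 1] per alternative; higher is better."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # ideal and anti-ideal points per criterion
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti  = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    cc = []
    for row in v:
        d_pos = math.sqrt(sum((x - y) ** 2 for x, y in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - y) ** 2 for x, y in zip(row, anti)))
        cc.append(d_neg / (d_pos + d_neg))
    return cc
```

In the study this is how the 18 configurations (architecture × size × preprocessing) are compared jointly on accuracy, speed, and computational cost, with M2-PL-P obtaining the highest coefficient.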

16 pages, 413 KiB  
Article
LLMPC: Large Language Model Predictive Control
by Gabriel Maher
Computers 2025, 14(3), 104; https://doi.org/10.3390/computers14030104 - 15 Mar 2025
Abstract
Recent advancements in planning prompting techniques for Large Language Models have improved their reasoning, planning, and action abilities. This paper develops a planning framework for Large Language Models using model predictive control that enables them to iteratively solve complex problems with long horizons. We show that in the model predictive control formulation, LLM planners act as approximate cost function optimizers and solve complex problems by breaking them down into smaller iterative steps. With our proposed planning framework, we demonstrate improved performance over few-shot prompting and improved efficiency over Monte Carlo Tree Search on several planning benchmarks. Full article
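The model-predictive-control formulation can be sketched as a receding-horizon loop: the LLM proposes candidate short plans, a cost function scores them, only the first action of the best plan is executed, and planning repeats from the new state. The sketch below stubs the LLM with a plain enumerating proposer; the function names are ours, not the paper's API:

```python
def mpc_plan(state, propose, cost, step, horizon=3, iters=5):
    """Receding-horizon planning loop in the style of model predictive control.
    propose(state, horizon) -> candidate action sequences (an LLM call in LLMPC)
    cost(state, plan)       -> score of a candidate plan from this state
    step(state, action)     -> next state after executing one action
    """
    trajectory = []
    for _ in range(iters):
        candidates = propose(state, horizon)
        best = min(candidates, key=lambda plan: cost(state, plan))
        action = best[0]              # execute only the first action, then replan
        state = step(state, action)
        trajectory.append(action)
    return state, trajectory
```

Replanning after every step is what lets an approximate (LLM-based) optimizer handle long horizons: errors in the tail of a proposed plan are corrected before they are ever executed.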
(This article belongs to the Special Issue Artificial Intelligence in Control)

25 pages, 1618 KiB  
Article
Optimizing Post-Quantum Digital Signatures with Verkle Trees and Quantum Seed-Based Pseudo-Random Generators
by Maksim Iavich and Nursulu Kapalova
Computers 2025, 14(3), 103; https://doi.org/10.3390/computers14030103 - 14 Mar 2025
Abstract
Nowadays, quantum computing is developing at an unprecedented speed. This will pose a serious threat to the security of widely used public-key cryptosystems in the near future. Scientists are actively looking for ways to protect against quantum attacks; however, existing solutions still face different limitations in terms of efficiency and practicality. This paper explores hash-based digital signature schemes, post-quantum vector commitments, and Verkle tree-based approaches for protecting against quantum attacks. The paper proposes an improved approach to generating digital signatures based on Verkle trees using lattice-based vector commitments. To further reduce the memory space, the paper offers a methodology for integrating a post-quantum secure pseudo-random number generator into the scheme. Finally, the paper proposes an efficient post-quantum digital signature scheme based on Verkle trees, which minimizes memory requirements and reduces the signature size. Our proposed framework has strong resistance to quantum attacks, as well as high speed and efficiency. This study is an important contribution to the development of post-quantum cryptosystems, laying the foundation for secure and practical digital signature systems in the face of emerging quantum threats. Full article
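Verkle trees generalize Merkle trees by replacing sibling hashes with vector commitments (here, lattice-based ones), which shortens authentication paths. The classic Merkle construction that hash-based signature schemes traditionally rely on, and whose memory cost the paper's scheme reduces, can be sketched as follows (a textbook illustration, not the paper's scheme):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over raw leaves (power-of-two count assumed)."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling hashes proving that leaf `index` belongs to the tree."""
    level = [h(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])          # sibling at the current level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    """Recompute the root from a leaf and its authentication path."""
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

In a Merkle tree the path length grows with log₂ of the leaf count and every level contributes a full hash; a Verkle tree's wider nodes and constant-size commitment openings are what yield the smaller signatures the paper targets.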

16 pages, 799 KiB  
Article
Advanced Identification of Prosodic Boundaries, Speakers, and Accents Through Multi-Task Audio Pre-Processing and Speech Language Models
by Francisco Javier Lima Florido and Gloria Corpas Pastor
Computers 2025, 14(3), 102; https://doi.org/10.3390/computers14030102 - 14 Mar 2025
Abstract
In recent years, advances in deep neural networks (DNNs) and large language models (LLMs) have led to major breakthroughs and new levels of performance in Natural Language Processing (NLP), including tasks related to speech processing. Building on these trends, models such as Whisper and Wav2Vec 2.0 achieve robust performance in speech processing tasks, even in speech-to-text translation and end-to-end speech translation, far exceeding all previous results. Although these models have shown excellent results in real-time speech processing, they still have accuracy issues for some tasks and high latency when working with large amounts of audio data. In addition, many of them need audio to be segmented and labelled for speech synthesis and annotation tasks. Speaker diarisation, background noise detection, prosodic boundary detection, and accent classification are some of the pre-processing tasks required in these cases. In this study, we fine-tune a small Wav2Vec 2.0 base model for multi-task classification and audio segmentation. A corpus of spoken American English is used for the experiments. We explore this new approach and, more specifically, the performance of the model with regard to prosodic boundary detection for audio segmentation and advanced accent identification. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)

23 pages, 2409 KiB  
Article
Generative AI in Higher Education Constituent Relationship Management (CRM): Opportunities, Challenges, and Implementation Strategies
by Carrie Marcinkevage and Akhil Kumar
Computers 2025, 14(3), 101; https://doi.org/10.3390/computers14030101 - 12 Mar 2025
Abstract
This research explores opportunities for generative artificial intelligence (GenAI) in higher education constituent (customer) relationship management (CRM) to address the industry’s need for digital transformation driven by demographic shifts, economic challenges, and technological advancements. Using a qualitative research approach grounded in the principles of grounded theory, we conducted semi-structured interviews and administered an open-ended qualitative data collection instrument with technology vendors, implementation consultants, and higher education institution (HEI) professionals who are actively exploring GenAI applications. Our findings highlight six primary types of GenAI: textual analysis and synthesis, data summarization, next-best-action recommendations, speech synthesis and translation, code development, and image and video creation. Each type has applications across student recruitment, advising, alumni engagement, and administrative processes. We propose an evaluative framework with eight readiness criteria to assess institutional preparedness for GenAI adoption. While GenAI offers potential benefits, such as increased efficiency, reduced costs, and improved student engagement, its success depends on data readiness, ethical safeguards, and institutional leadership. By integrating GenAI as a co-intelligence alongside human expertise, HEIs can enhance CRM ecosystems and better support their constituents. Full article
(This article belongs to the Special Issue Smart Learning Environments)

12 pages, 965 KiB  
Article
Multifaceted Assessment of Responsible Use and Bias in Language Models for Education
by Ishrat Ahmed, Wenxing Liu, Rod D. Roscoe, Elizabeth Reilley and Danielle S. McNamara
Computers 2025, 14(3), 100; https://doi.org/10.3390/computers14030100 - 12 Mar 2025
Abstract
Large language models (LLMs) are increasingly being utilized to develop tools and services in various domains, including education. However, due to the nature of the training data, these models are susceptible to inherent social or cognitive biases, which can influence their outputs. Furthermore, their handling of critical topics, such as privacy and sensitive questions, is essential for responsible deployment. This study proposes a framework for the automatic detection of biases and violations of responsible use using a synthetic question-based dataset mimicking student–chatbot interactions. We employ the LLM-as-a-judge method to evaluate multiple LLMs for biased responses. Our findings show that some models exhibit more bias than others, highlighting the need for careful consideration when selecting models for deployment in educational and other high-stakes applications. These results emphasize the importance of addressing bias in LLMs and implementing robust mechanisms to uphold responsible AI use in real-world services. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)

31 pages, 616 KiB  
Review
Fog Service Placement Optimization: A Survey of State-of-the-Art Strategies and Techniques
by Hemant Kumar Apat, Veena Goswami, Bibhudatta Sahoo, Rabindra K. Barik and Manob Jyoti Saikia
Computers 2025, 14(3), 99; https://doi.org/10.3390/computers14030099 - 11 Mar 2025
Abstract
The rapid development of Internet of Things (IoT) devices in various smart city-based applications such as healthcare, traffic management systems, environment sensing systems, and public safety systems produce large volumes of data. To process these data, it requires substantial computing and storage resources for smooth implementation and execution. While centralized cloud computing offers scalability, flexibility, and resource sharing, it faces significant limitations in IoT-based applications, especially in terms of latency, bandwidth, security, and cost. The fog computing paradigm complements the existing cloud computing services at the edge of the network to facilitate the various services without sending the data to a centralized cloud server. By processing the data in fog computing, it satisfies the delay requirement of various time-sensitive services of IoT applications. However, many resource-intensive IoT systems exist that require substantial computing resources for their processing. In such scenarios, finding the optimal computing node for processing and executing the service is a challenge. The optimal placement of various IoT applications services in heterogeneous fog computing environments is a well-known NP-complete problem. To solve this problem, various authors proposed different algorithms like the randomized algorithm, heuristic algorithm, meta heuristic algorithm, machine learning algorithm, and graph-based algorithm for finding the optimal placement. In the present survey, we first describe the fundamental and mathematical aspects of the three-layer IoT–fog–cloud computing model. Then, we classify the IoT application model based on different attributes that help to find the optimal computing node. Furthermore, we discuss the complexity analysis of the service placement problem in detail. Finally, we provide a comprehensive evaluation of both single-objective and multi-objective IoT service placement strategies in fog computing. 
Additionally, we highlight new challenges and identify promising directions for future research, specifically in the context of multi-objective IoT service optimization. Full article
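The placement problem this survey formalizes can be made concrete with a small greedy baseline, one of the heuristic families the authors catalogue. A minimal sketch, assuming illustrative node capacities, latencies, and service demands (none of these figures come from the survey):

```python
# Illustrative greedy service placement: assign each IoT service to the
# feasible node (fog or cloud) with the lowest latency that still has
# spare capacity. All node and service figures are invented for the sketch.

def place_services(services, nodes):
    """services: list of (name, cpu_demand); nodes: dict name -> {"cpu": capacity, "latency": ms}.
    Returns a mapping service name -> node name, or None if infeasible."""
    free = {n: spec["cpu"] for n, spec in nodes.items()}
    placement = {}
    # Place the largest services first so they are not crowded out.
    for name, demand in sorted(services, key=lambda s: -s[1]):
        candidates = [n for n in nodes if free[n] >= demand]
        if not candidates:
            return None
        best = min(candidates, key=lambda n: nodes[n]["latency"])
        placement[name] = best
        free[best] -= demand
    return placement

nodes = {
    "fog-1": {"cpu": 4, "latency": 5},
    "fog-2": {"cpu": 2, "latency": 8},
    "cloud": {"cpu": 100, "latency": 60},
}
services = [("traffic-cam", 3), ("health-monitor", 2), ("batch-analytics", 10)]
# batch-analytics only fits in the cloud; the latency-sensitive services land on fog nodes.
print(place_services(services, nodes))
```

A real placement algorithm would also weigh bandwidth, energy, and cost objectives, which is exactly why the multi-objective strategies surveyed here go beyond this single-criterion greedy rule.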
23 pages, 435 KiB  
Article
MultiGLICE: Combining Graph Neural Networks and Program Slicing for Multiclass Software Vulnerability Detection
by Wesley de Kraker, Harald Vranken and Arjen Hommersom
Computers 2025, 14(3), 98; https://doi.org/10.3390/computers14030098 - 8 Mar 2025
Viewed by 565
Abstract
This paper presents MultiGLICE (Multiclass Graph Neural Network with Program Slice), a model for static code analysis to detect security vulnerabilities. MultiGLICE extends our previous GLICE model with multiclass detection for a large number of vulnerabilities across multiple programming languages. It builds [...] Read more.
This paper presents MultiGLICE (Multiclass Graph Neural Network with Program Slice), a model for static code analysis to detect security vulnerabilities. MultiGLICE extends our previous GLICE model with multiclass detection for a large number of vulnerabilities across multiple programming languages. It builds upon the earlier SySeVR and FUNDED models and uniquely integrates inter-procedural program slicing with a graph neural network. Users can configure the depth of the inter-procedural analysis, which allows a trade-off between the detection performance and computational efficiency. Increasing the depth of the inter-procedural analysis improves the detection performance, at the cost of computational efficiency. We conduct experiments with MultiGLICE for the multiclass detection of 38 different CWE types in C/C++, C#, Java, and PHP code. We evaluate the trade-offs in the depth of the inter-procedural analysis and compare its vulnerability detection performance and resource usage with those of prior models. Our experimental results show that MultiGLICE improves the weighted F1-score by about 23% when compared to the FUNDED model adapted for multiclass classification. Furthermore, MultiGLICE offers a significant improvement in computational efficiency. The time required to train the MultiGLICE model is approximately 17 times less than that of FUNDED. Full article
(This article belongs to the Special Issue Best Practices, Challenges and Opportunities in Software Engineering)
21 pages, 2585 KiB  
Article
“Optimizing the Optimization”: A Hybrid Evolutionary-Based AI Scheme for Optimal Performance
by Agathoklis A. Krimpenis and Loukas Athanasakos
Computers 2025, 14(3), 97; https://doi.org/10.3390/computers14030097 - 8 Mar 2025
Viewed by 474
Abstract
Optimization algorithms for solving technological and scientific problems often face long convergence times and high computational costs due to numerous input/output parameters and complex calculations. This study focuses on proposing a method for minimizing response times for such algorithms across various scientific fields, [...] Read more.
Optimization algorithms for solving technological and scientific problems often face long convergence times and high computational costs due to numerous input/output parameters and complex calculations. This study focuses on proposing a method for minimizing response times for such algorithms across various scientific fields, including the design and manufacturing of high-performance, high-quality components. It introduces an innovative mixed-scheme optimization algorithm aimed at effective optimization with minimal objective function evaluations. Indicative key optimization algorithms—namely, the Genetic Algorithm, Firefly Algorithm, Harmony Search Algorithm, and Black Hole Algorithm—were analyzed as paradigms to standardize parameters for integration into the mixed scheme. The proposed scheme designates one algorithm as a “leader” to initiate optimization, guiding others in iterative evaluations and enforcing intermediate solution exchanges. This collaborative process seeks to achieve optimal solutions at reduced convergence costs. This mixed scheme was tested on challenging benchmark functions, demonstrating convergence speeds that were at least three times faster than the best-performing standalone algorithms while maintaining solution quality. These results highlight its potential as an efficient optimization approach for computationally intensive problems, regardless of the included algorithms and their standalone performance. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
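The leader-guided exchange the abstract describes can be illustrated with a toy two-strategy loop: a "leader" that samples globally and a follower that refines the leader's broadcast solution locally, swapping the best solution at fixed intervals. The benchmark function, budget, and perturbation scale are illustrative choices, not the paper's setup:

```python
import random

def sphere(x):
    # Classic benchmark: minimum 0 at the origin.
    return sum(v * v for v in x)

def hybrid_optimize(dim=3, budget=300, exchange_every=20, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    shared = list(best)  # solution the leader broadcasts to the follower
    for step in range(budget):
        if step % 2 == 0:  # leader: global random sampling
            cand = [rng.uniform(-5, 5) for _ in range(dim)]
        else:              # follower: local perturbation of the shared solution
            cand = [v + rng.gauss(0, 0.3) for v in shared]
        if sphere(cand) < sphere(best):
            best = cand
        if step % exchange_every == 0:  # intermediate solution exchange
            shared = list(best)
    return best

print(sphere(hybrid_optimize()))
```

With the same seed, a longer budget can only keep or improve the incumbent, which mirrors the paper's claim that the mixed scheme reduces convergence cost without degrading solution quality.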
17 pages, 6295 KiB  
Article
A Chatbot Student Support System in Open and Distance Learning Institutions
by Juliana Ngozi Ndunagu, Christiana Uchenna Ezeanya, Benjamin Osondu Onuorah, Jude Chukwuma Onyeakazi and Elochukwu Ukwandu
Computers 2025, 14(3), 96; https://doi.org/10.3390/computers14030096 - 7 Mar 2025
Viewed by 934
Abstract
The disruptive innovation of artificial intelligence (AI) chatbots is reshaping education, and higher educational institutions must take note. In Open and Distance Learning (ODL), effective and interactive communication between institutions and learners is imperative. Isolation, low motivation, insufficient [...] Read more.
The disruptive innovation of artificial intelligence (AI) chatbots is reshaping education, and higher educational institutions must take note. In Open and Distance Learning (ODL), effective and interactive communication between institutions and learners is imperative. Isolation, low motivation, insufficient time to study, and delayed feedback are some of the challenges ODL learners encounter. These have increased the student attrition rate, one of the key issues many authors have observed in ODL institutions. The National Open University of Nigeria (NOUN), one of the ODL institutions in Nigeria, relies on an existing e-ticketing support system that is manually operated. In a study of 2000 NOUN students using an online survey, 579 students (29%) responded to the questionnaire. Further findings revealed significant response delays and inadequate resolutions as major barriers affecting the NOUN’s e-ticketing system. Alongside the quantitative study, an artificial intelligence chatbot for automatic responses was developed using Python 3.8+, the ChatterBot framework (version 1.0.5), SQLite (ChatterBot’s default storage), NLTK, and Flask for the web interface. In testing the system, 370 of the 579 respondents (64%) reported that the chatbot was extremely helpful in resolving their issues and complaints. Adopting an AI chatbot as a support system in an ODL institution reduces the attrition rate, thereby revolutionising the potential of support services in Open and Distance Learning systems. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
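The retrieval idea behind such a support chatbot, matching a student's message to the closest known issue and returning a canned reply, can be sketched with the standard library alone. The NOUN system uses ChatterBot and Flask; the question/answer pairs and fallback text here are invented for illustration:

```python
import difflib

# Hypothetical FAQ pairs standing in for a trained support corpus.
FAQ = {
    "how do i reset my portal password": "Use the 'Forgot password' link on the portal login page.",
    "when is the exam timetable released": "Timetables are published on the student portal two weeks before exams.",
    "my course material will not download": "Clear your browser cache or try the mobile app's offline mode.",
}

def answer(message, cutoff=0.5):
    # Closest-match retrieval; below the cutoff, escalate like the e-ticket system.
    match = difflib.get_close_matches(message.lower(), FAQ, n=1, cutoff=cutoff)
    if match:
        return FAQ[match[0]]
    return "Your request has been forwarded to a support officer."

print(answer("How do I reset my portal password?"))
```

The escalation fallback matters: the abstract's survey found delayed manual responses to be the main complaint, so the bot should answer what it can and hand off the rest.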
21 pages, 738 KiB  
Article
Unpacking Sarcasm: A Contextual and Transformer-Based Approach for Improved Detection
by Parul Dubey, Pushkar Dubey and Pitshou N. Bokoro
Computers 2025, 14(3), 95; https://doi.org/10.3390/computers14030095 - 6 Mar 2025
Viewed by 808
Abstract
Sarcasm detection is a crucial task in natural language processing (NLP), particularly in sentiment analysis and opinion mining, where sarcasm can distort sentiment interpretation. Accurately identifying sarcasm remains challenging due to its context-dependent nature and linguistic complexity across informal text sources like social [...] Read more.
Sarcasm detection is a crucial task in natural language processing (NLP), particularly in sentiment analysis and opinion mining, where sarcasm can distort sentiment interpretation. Accurately identifying sarcasm remains challenging due to its context-dependent nature and linguistic complexity across informal text sources like social media and conversational dialogues. This study utilizes three benchmark datasets, namely, News Headlines, Mustard, and Reddit (SARC), which contain diverse sarcastic expressions from headlines, scripted dialogues, and online conversations. The proposed methodology leverages transformer-based models (RoBERTa and DistilBERT), integrating context summarization, metadata extraction, and conversational structure preservation to enhance sarcasm detection. The novelty of this research lies in combining contextual summarization with metadata-enhanced embeddings to improve model interpretability and efficiency. Performance evaluation is based on accuracy, F1 score, and the Jaccard coefficient, ensuring a comprehensive assessment. Experimental results demonstrate that RoBERTa achieves 98.5% accuracy with metadata, while DistilBERT offers a 1.74x speedup, highlighting the trade-off between accuracy and computational efficiency for real-world sarcasm detection applications. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)
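The metadata-enhanced representation the abstract describes amounts to concatenating a text vector with features such as source and conversational depth before classification. A minimal sketch, with a tiny deterministic bag-of-words standing in for a RoBERTa embedding; the feature names and dimensions are illustrative, not the paper's:

```python
def hashed_bow(text, dim=16):
    # Stable toy hash (sum of character codes) so the sketch is deterministic;
    # a real system would use a transformer embedding here.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[sum(ord(c) for c in token) % dim] += 1.0
    return vec

def featurize(text, metadata):
    # metadata: {"source": "headline" | "dialogue" | "reddit", "reply_depth": int}
    source_onehot = [float(metadata["source"] == s) for s in ("headline", "dialogue", "reddit")]
    return hashed_bow(text) + source_onehot + [float(metadata["reply_depth"])]

x = featurize("Oh great, another Monday", {"source": "reddit", "reply_depth": 2})
print(len(x))  # 16 text dims + 3 source dims + 1 depth dim = 20
```

Keeping the metadata dimensions separate from the text embedding is what lets a downstream classifier learn, for instance, that deep Reddit replies are more often sarcastic than headlines.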
25 pages, 2529 KiB  
Article
Beyond Snippet Assistance: A Workflow-Centric Framework for End-to-End AI-Driven Code Generation
by Vladimir Sonkin and Cătălin Tudose
Computers 2025, 14(3), 94; https://doi.org/10.3390/computers14030094 - 6 Mar 2025
Viewed by 824
Abstract
Recent AI-assisted coding tools, such as GitHub Copilot and Cursor, have enhanced developer productivity through real-time snippet suggestions. However, these tools primarily assist with isolated coding tasks and lack a structured approach to automating complex, multi-step software development workflows. This paper introduces a [...] Read more.
Recent AI-assisted coding tools, such as GitHub Copilot and Cursor, have enhanced developer productivity through real-time snippet suggestions. However, these tools primarily assist with isolated coding tasks and lack a structured approach to automating complex, multi-step software development workflows. This paper introduces a workflow-centric AI framework for end-to-end automation, from requirements gathering to code generation, validation, and integration, while maintaining developer oversight. Key innovations include automatic context discovery, which selects relevant codebase elements to improve LLM accuracy; a structured execution pipeline using Prompt Pipeline Language (PPL) for iterative code refinement; self-healing mechanisms that generate tests, detect errors, trigger rollbacks, and regenerate faulty code; and AI-assisted code merging, which preserves manual modifications while integrating AI-generated updates. These capabilities enable efficient automation of repetitive tasks, enforcement of coding standards, and streamlined development workflows. This approach lays the groundwork for AI-driven development that remains adaptable as LLM models advance, progressively reducing the need for human intervention while ensuring code reliability. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
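The self-healing mechanism the abstract outlines (generate, test, roll back, regenerate) reduces to a retry loop around a generator and a test runner. A minimal sketch with stand-in callables; a real pipeline would call an LLM and a test framework, and the toy generator below is invented for illustration:

```python
def self_healing_generate(generate, run_tests, baseline, max_attempts=3):
    """generate(attempt) -> candidate code; run_tests(code) -> bool.
    Returns (code, attempts_used); falls back to `baseline` on failure (rollback)."""
    for attempt in range(1, max_attempts + 1):
        candidate = generate(attempt)
        if run_tests(candidate):
            return candidate, attempt
    return baseline, max_attempts  # rollback: keep the last known-good code

def works(code):
    # Stand-in test runner: execute the candidate and check one behaviour.
    ns = {}
    exec(code, ns)
    return ns["add"](2, 3) == 5

# Toy generator that only produces working code on the second try.
flaky = lambda attempt: ("def add(a, b): return a + b" if attempt >= 2
                         else "def add(a, b): return a - b")

print(self_healing_generate(flaky, works, baseline="def add(a, b): ..."))
```

The rollback branch is the important design choice: when regeneration keeps failing, the pipeline preserves the last code that passed its tests rather than merging a broken candidate.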
27 pages, 1950 KiB  
Review
Machine Learning and Deep Learning Paradigms: From Techniques to Practical Applications and Research Frontiers
by Kamran Razzaq and Mahmood Shah
Computers 2025, 14(3), 93; https://doi.org/10.3390/computers14030093 - 6 Mar 2025
Cited by 3 | Viewed by 1617
Abstract
Machine learning (ML) and deep learning (DL), subsets of artificial intelligence (AI), are the core technologies driving significant transformation and innovation in various industries through AI-driven solutions. Understanding ML and DL is essential to logically analyse their applicability and [...] Read more.
Machine learning (ML) and deep learning (DL), subsets of artificial intelligence (AI), are the core technologies driving significant transformation and innovation in various industries through AI-driven solutions. Understanding ML and DL is essential to logically analyse their applicability and identify their effectiveness in different areas like healthcare, finance, agriculture, manufacturing, and transportation. ML consists of supervised, unsupervised, semi-supervised, and reinforcement learning techniques. On the other hand, DL, a subfield of ML comprising neural networks (NNs), can deal with complicated datasets in the health, autonomous systems, and finance industries. This study presents a holistic view of ML and DL technologies, analysing algorithms and their capacity to address real-world problems. The study investigates the real-world application areas in which ML and DL techniques are implemented. Moreover, the study highlights the latest trends and possible future avenues for research and development (R&D), which include developing hybrid models, generative AI, and incorporating ML and DL with the latest technologies. The study aims to provide a comprehensive view of ML and DL technologies, which can serve as a reference guide for researchers, industry professionals, practitioners, and policymakers. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
15 pages, 1097 KiB  
Article
ICT Teachers’ Vision and Experience in Developing Digital Skills of Primary School Students in Computer Science Lessons
by Aliya Katyetova and Symbat Issabayeva
Computers 2025, 14(3), 92; https://doi.org/10.3390/computers14030092 - 6 Mar 2025
Viewed by 617
Abstract
The rapid development of technology sets its own rules for adults and children. For younger schoolchildren, acquiring digital skills from primary school will give them the confidence to apply them correctly in school, at university, and in their lives. Schools should be interested [...] Read more.
The rapid development of technology sets its own rules for adults and children. For younger schoolchildren, acquiring digital skills from primary school will give them the confidence to apply them correctly in school, at university, and in their lives. Schools should be interested in providing the necessary conditions to develop children’s digital skills. By teaching digital literacy, teachers can equip children with the basic skills needed to live successfully in the digital age and to navigate the digital environment consciously. The development of digital literacy in primary school students and the role of information and communication technologies (ICT) teachers in this development are considered relevant and timely in the article. The study examines the vision and experiences of Kazakhstani primary school computer science teachers in developing students’ digital skills in informatics classes. The article discusses research methods such as questionnaires, interviews with ICT teachers, observation, and participation in computer science lessons to better understand the actual situation in primary schools in the Republic of Kazakhstan. The study’s results will be helpful for schools and suggest ways to improve computer science curricula. Full article
19 pages, 6430 KiB  
Article
Improving Road Safety with AI: Automated Detection of Signs and Surface Damage
by Davide Merolla, Vittorio Latorre, Antonio Salis and Gianluca Boanelli
Computers 2025, 14(3), 91; https://doi.org/10.3390/computers14030091 - 4 Mar 2025
Viewed by 914
Abstract
Public transportation plays a crucial role in our lives, and the road network is a vital component in the implementation of smart cities. Recent advancements in AI have enabled the development of advanced monitoring systems capable of detecting anomalies in road surfaces and [...] Read more.
Public transportation plays a crucial role in our lives, and the road network is a vital component in the implementation of smart cities. Recent advancements in AI have enabled the development of advanced monitoring systems capable of detecting anomalies in road surfaces and road signs, which can lead to serious accidents. This paper presents an innovative approach to enhance road safety through the detection and classification of traffic signs and road surface damage using advanced deep learning techniques (CNN), achieving over 90% precision and accuracy in both detection and classification of traffic signs and road surface damage. This integrated approach supports proactive maintenance strategies, improving road safety and resource allocation for the Molise region and the city of Campobasso. The resulting system, developed as part of the CTE Molise research project funded by the Italian Minister of Economic Growth (MIMIT), leverages cutting-edge technologies such as cloud computing and High-Performance Computing with GPU utilization. It serves as a valuable tool for municipalities, for the quick detection of anomalies and the prompt organization of maintenance operations. Full article
(This article belongs to the Special Issue AI in Its Ecosystem)
26 pages, 5269 KiB  
Article
Criteria for Evaluating Digital Technology Used to Support Computational Thinking via Inquiry Learning—The Case of Two Educational Software Applications for Mathematics and Physics
by Aikaterini Bounou, Nikolaos Tselios, George Kaliampos, Konstantinos Lavidas and Stamatios Papadakis
Computers 2025, 14(3), 90; https://doi.org/10.3390/computers14030090 - 4 Mar 2025
Viewed by 898
Abstract
There is an ongoing need to evaluate whether commonly used educational software effectively supports inquiry-based learning and computational thinking skills development, which are key objectives in secondary STEM curricula. This research establishes criteria for characterising digital technologies, such as modelling and simulation software, [...] Read more.
There is an ongoing need to evaluate whether commonly used educational software effectively supports inquiry-based learning and computational thinking skills development, which are key objectives in secondary STEM curricula. This research establishes criteria for characterising digital technologies, such as modelling and simulation software, virtual laboratories, and microcosms, to ensure their suitability in supporting students’ computational thinking through inquiry-based activities in STEM courses. The main criteria focus on six key areas: (a) production of meaning, (b) support in problem formulation, (c) ability to manage processes easily, (d) support in expressing solutions, (e) support in executing and evaluating solutions, and (f) ability to articulate and reflect on processes and solutions. Using this evaluation framework, two widely used software tools, Tracker 6.1.3 and GeoGebra 5, commonly employed in high school physics and mathematics, were assessed. The trial evaluation results are discussed, with recommendations for improving the software to support these educational objectives. Full article
24 pages, 836 KiB  
Article
Fuzzy Memory Networks and Contextual Schemas: Enhancing ChatGPT Responses in a Personalized Educational System
by Christos Troussas, Akrivi Krouska, Phivos Mylonas, Cleo Sgouropoulou and Ioannis Voyiatzis
Computers 2025, 14(3), 89; https://doi.org/10.3390/computers14030089 - 4 Mar 2025
Viewed by 772
Abstract
Educational AI systems often lack sufficiently sophisticated techniques to enhance learner interactions, organize contextual knowledge, or deliver personalized feedback. To address this gap, this paper seeks to reform the way ChatGPT supports learners by employing fuzzy memory retention and [...] Read more.
Educational AI systems often lack sufficiently sophisticated techniques to enhance learner interactions, organize contextual knowledge, or deliver personalized feedback. To address this gap, this paper seeks to reform the way ChatGPT supports learners by employing fuzzy memory retention and thematic clustering. To achieve this, three modules have been developed: (a) the Fuzzy Memory Module, which models human memory retention using time-decay fuzzy weights to assign relevance to user interactions; (b) the Schema Manager, which organizes these prioritized interactions into thematic clusters for structured contextual representation; and (c) the Response Generator, which uses the output of the other two modules to provide feedback to ChatGPT by synthesizing personalized responses. The synergy of these three modules is a novel approach to intelligent and AI tutoring that enhances ChatGPT’s output for a more personalized learning experience. The system was evaluated by 120 undergraduate students in a Java programming course, and the results are very promising, showing strong memory retrieval accuracy, schema relevance, and personalized response quality. The results also show the system outperforms traditional methods in delivering adaptive and contextually enriched educational feedback. Full article
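Time-decay relevance weighting of the kind the Fuzzy Memory Module describes can be sketched with an exponential decay curve. This assumes, as one reasonable reading of the abstract, that an interaction's weight halves after a fixed interval; the half-life and the sample history are illustrative choices, not the paper's parameters:

```python
import math

def retention_weight(age_hours, half_life_hours=24.0):
    # Weight 1.0 for a fresh interaction, 0.5 after one half-life, and so on.
    return math.exp(-math.log(2) * age_hours / half_life_hours)

def rank_interactions(interactions, now_hours):
    # interactions: list of (text, timestamp_hours, base_relevance in [0, 1]).
    scored = [(text, base * retention_weight(now_hours - ts))
              for text, ts, base in interactions]
    return sorted(scored, key=lambda p: -p[1])

history = [("asked about Java loops", 0.0, 0.9),
           ("asked about recursion", 40.0, 0.9),
           ("small talk", 47.0, 0.2)]
top = rank_interactions(history, now_hours=48.0)
print(top[0][0])  # recent, relevant interaction wins: "asked about recursion"
```

Note how the ranking balances both factors: the very recent "small talk" loses to the slightly older but far more relevant recursion question, which is the behaviour a schema manager would then cluster and feed to the response generator.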
21 pages, 17670 KiB  
Article
Advancing Traffic Sign Recognition: Explainable Deep CNN for Enhanced Robustness in Adverse Environments
by Ilyass Benfaress, Afaf Bouhoute and Ahmed Zinedine
Computers 2025, 14(3), 88; https://doi.org/10.3390/computers14030088 - 4 Mar 2025
Viewed by 772
Abstract
This paper presents a traffic sign recognition (TSR) system based on the deep convolutional neural network (CNN) architecture, which proves to be extremely accurate in recognizing traffic signs under challenging conditions such as bad weather, low-resolution images, and various environmental-impact factors. The proposed [...] Read more.
This paper presents a traffic sign recognition (TSR) system based on the deep convolutional neural network (CNN) architecture, which proves to be extremely accurate in recognizing traffic signs under challenging conditions such as bad weather, low-resolution images, and various environmental-impact factors. The proposed CNN is compared with other architectures, including GoogLeNet, AlexNet, DarkNet-53, ResNet-34, VGG-16, and MicronNet-BF. Experimental results confirm that the proposed CNN significantly improves recognition accuracy compared to existing models. To make our model interpretable, we utilize explainable AI (XAI) approaches, specifically Gradient-weighted Class Activation Mapping (Grad-CAM), which gives insight into how the system reaches its decisions. Evaluation on the Tsinghua-Tencent 100K (TT100K) traffic sign dataset showed that the proposed method significantly outperformed existing state-of-the-art methods. Additionally, we evaluated our model on the German Traffic Sign Recognition Benchmark (GTSRB) dataset to ensure generalization, demonstrating its ability to perform well in diverse traffic sign conditions. Perturbations such as noise, contrast changes, blurring, and zoom effects were added to assess performance in real applications. These verified results indicate both the strength and reliability of the proposed CNN architecture for TSR tasks and show that it is a good option for integration into intelligent transportation systems (ITSs). Full article
44 pages, 642 KiB  
Review
Overview on Intrusion Detection Systems for Computers Networking Security
by Lorenzo Diana, Pierpaolo Dini and Davide Paolini
Computers 2025, 14(3), 87; https://doi.org/10.3390/computers14030087 - 3 Mar 2025
Cited by 1 | Viewed by 3332
Abstract
The rapid growth of digital communications and extensive data exchange have made computer networks integral to organizational operations. However, this increased connectivity has also expanded the attack surface, introducing significant security risks. This paper provides a comprehensive review of Intrusion Detection System (IDS) [...] Read more.
The rapid growth of digital communications and extensive data exchange have made computer networks integral to organizational operations. However, this increased connectivity has also expanded the attack surface, introducing significant security risks. This paper provides a comprehensive review of Intrusion Detection System (IDS) technologies for network security, examining both traditional methods and recent advancements. The review covers IDS architectures and types, key detection techniques, datasets and test environments, and implementations in modern network environments such as cloud computing, virtualized networks, Internet of Things (IoT), and industrial control systems. It also addresses current challenges, including scalability, performance, and the reduction of false positives and negatives. Special attention is given to the integration of advanced technologies like Artificial Intelligence (AI) and Machine Learning (ML), and the potential of distributed technologies such as blockchain. By maintaining a broad-spectrum analysis, this review aims to offer a holistic view of the state-of-the-art in IDSs, support a diverse audience, and identify future research and development directions in this critical area of cybersecurity. Full article
24 pages, 627 KiB  
Article
An Empirical Evaluation of Neural Network Architectures for 3D Spheroid Segmentation
by Fadoua Oudouar, Ahmed Bir-Jmel, Hanane Grissette, Sidi Mohamed Douiri, Yassine Himeur, Sami Miniaoui, Shadi Atalla and Wathiq Mansoor
Computers 2025, 14(3), 86; https://doi.org/10.3390/computers14030086 - 28 Feb 2025
Viewed by 624
Abstract
The accurate segmentation of 3D spheroids is crucial in advancing biomedical research, particularly in understanding tumor development and testing therapeutic responses. As 3D spheroids emulate in vivo conditions more closely than traditional 2D cultures, efficient segmentation methods are essential for precise analysis. This [...] Read more.
The accurate segmentation of 3D spheroids is crucial in advancing biomedical research, particularly in understanding tumor development and testing therapeutic responses. As 3D spheroids emulate in vivo conditions more closely than traditional 2D cultures, efficient segmentation methods are essential for precise analysis. This study evaluates three prominent neural network architectures—U-Net, HRNet, and DeepLabV3+—for the segmentation of 3D spheroids, a critical challenge in biomedical image analysis. Through empirical analysis across a comprehensive Tumour Spheroid dataset, HRNet and DeepLabV3+ emerged as top performers, achieving high segmentation accuracy, with HRNet achieving 99.72% validation accuracy, a Dice coefficient of 96.70%, and a Jaccard coefficient of 93.62%. U-Net, although widely used in medical imaging, struggled to match the performance of the other models. The study also examines the impact of optimizers, with the Adam optimizer frequently causing overfitting, especially in U-Net models. Despite improvements with SGD and Adagrad, these optimizers did not surpass HRNet and DeepLabV3+. The study highlights the importance of selecting the right model–optimizer combination for optimal segmentation. Full article
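The Dice and Jaccard coefficients reported above both follow directly from the overlap between predicted and ground-truth masks, and they are related by J = D / (2 - D). A minimal implementation over binary masks as flat 0/1 lists:

```python
def dice(pred, truth):
    # Dice = 2|P ∩ T| / (|P| + |T|); assumes at least one mask is non-empty.
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def jaccard(pred, truth):
    # Jaccard = |P ∩ T| / |P ∪ T|
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union

pred, truth = [1, 1, 0, 1], [1, 0, 1, 1]
print(dice(pred, truth), jaccard(pred, truth))  # Dice 2/3, Jaccard 1/2
```

Because the two metrics are monotonically related, HRNet's 96.70% Dice and 93.62% Jaccard are consistent with each other, which is a quick sanity check worth applying to any reported segmentation results.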
21 pages, 1040 KiB  
Article
FungiLT: A Deep Learning Approach for Species-Level Taxonomic Classification of Fungal ITS Sequences
by Kai Liu, Hongyuan Zhao, Dongliang Ren, Dongna Ma, Shuangping Liu and Jian Mao
Computers 2025, 14(3), 85; https://doi.org/10.3390/computers14030085 - 28 Feb 2025
Viewed by 669
Abstract
With the explosive growth of sequencing data, rapidly and accurately classifying and identifying species has become a critical challenge in amplicon analysis research. The internal transcribed spacer (ITS) region is widely used for fungal species classification and identification. However, most existing ITS databases [...] Read more.
With the explosive growth of sequencing data, rapidly and accurately classifying and identifying species has become a critical challenge in amplicon analysis research. The internal transcribed spacer (ITS) region is widely used for fungal species classification and identification. However, most existing ITS databases cover limited fungal species diversity, and current classification methods struggle to efficiently handle such large-scale data. This study integrates multiple publicly available databases to construct an ITS sequence database encompassing 93,975 fungal species, making it a resource with broader species diversity for fungal taxonomy. In this study, a fungal classification model named FungiLT is proposed, integrating Transformer and BiLSTM architectures while incorporating a dual-channel feature fusion mechanism. On a dataset where each fungal species is represented by 100 ITS sequences, it achieves a species-level classification accuracy of 98.77%. Compared to BLAST, QIIME2, and the deep learning model CNN_FunBar, FungiLT demonstrates significant advantages in ITS species classification. This study provides a more efficient and accurate solution for large-scale fungal classification tasks and offers new technical support and insights for species annotation in amplicon analysis research. Full article
(This article belongs to the Special Issue Emerging Trends in Machine Learning and Artificial Intelligence)
19 pages, 1613 KiB  
Article
A Secure Cooperative Adaptive Cruise Control Design with Unknown Leader Dynamics Under False Data Injection Attacks
by Parisa Ansari Bonab and Arman Sargolzaei
Computers 2025, 14(3), 84; https://doi.org/10.3390/computers14030084 - 27 Feb 2025
Viewed by 569
Abstract
The combination of connectivity and automation allows connected and autonomous vehicles (CAVs) to operate autonomously using advanced on-board sensors while communicating with each other via vehicle-to-vehicle (V2V) technology to enhance safety, efficiency, and mobility. One of the most promising features of CAVs is [...] Read more.
The combination of connectivity and automation allows connected and autonomous vehicles (CAVs) to operate autonomously using advanced on-board sensors while communicating with each other via vehicle-to-vehicle (V2V) technology to enhance safety, efficiency, and mobility. One of the most promising features of CAVs is cooperative adaptive cruise control (CACC). This system extends the capabilities of conventional adaptive cruise control (ACC) by facilitating the exchange of critical parameters among vehicles to enhance safety, traffic flow, and efficiency. However, increased connectivity introduces new vulnerabilities, making CACC susceptible to cyber-attacks, including false data injection (FDI) attacks, which can compromise vehicle safety. To address this challenge, we propose a secure observer-based control design leveraging Lyapunov stability analysis, which is capable of mitigating the adverse impact of FDI attacks and ensuring system safety. This approach uniquely addresses system security without relying on a known lead vehicle model. The developed approach is validated through simulation results, demonstrating its effectiveness. Full article
(This article belongs to the Special Issue Cyber Security and Privacy in IoT Era)
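The intuition behind detecting falsified V2V messages can be illustrated with a residual check: the ego vehicle compares the leader's broadcast speed against its own on-board estimate and flags large discrepancies. This is only a toy threshold test with invented numbers; the paper's design is an observer with Lyapunov-based stability guarantees that also works without a known leader model, not this simple comparison:

```python
def flag_fdi(reported_speeds, estimated_speeds, threshold=2.0):
    """Return indices of V2V messages whose residual exceeds the threshold (m/s)."""
    return [i for i, (r, e) in enumerate(zip(reported_speeds, estimated_speeds))
            if abs(r - e) > threshold]

reported = [20.1, 20.3, 27.5, 20.2]   # third broadcast is falsified
estimated = [20.0, 20.2, 20.4, 20.1]  # e.g. from on-board radar range rate
print(flag_fdi(reported, estimated))  # → [2]
```

A fixed threshold trades missed attacks against false alarms, which is why observer-based designs that bound the estimation error analytically are preferred for safety-critical CACC.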
21 pages, 11251 KiB  
Article
Predicting Student Performance and Enhancing Learning Outcomes: A Data-Driven Approach Using Educational Data Mining Techniques
by Athanasios Angeioplastis, John Aliprantis, Markos Konstantakis and Alkiviadis Tsimpiris
Computers 2025, 14(3), 83; https://doi.org/10.3390/computers14030083 - 27 Feb 2025
Viewed by 1010
Abstract
This study investigates the use of educational data mining (EDM) techniques to predict student performance and enhance learning outcomes in higher education. Leveraging data from Moodle, a widely used learning management system (LMS), we analyzed 450 students’ academic records spanning nine semesters. Five [...] Read more.
This study investigates the use of educational data mining (EDM) techniques to predict student performance and enhance learning outcomes in higher education. Leveraging data from Moodle, a widely used learning management system (LMS), we analyzed 450 students’ academic records spanning nine semesters. Five machine learning algorithms—k-nearest neighbors, random forest, logistic regression, decision trees, and neural networks—were applied to identify correlations between courses and predict grades. The results indicated that courses with strong correlations (+0.3 and above) significantly enhanced predictive accuracy, particularly in binary classification tasks. kNN and neural networks emerged as the most robust models, achieving F1 scores exceeding 0.8. These findings underscore the potential of EDM to optimize instructional strategies and support personalized learning pathways. This study offers insights into the effective application of data-driven approaches to improve educational outcomes and foster student success. Full article
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))
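The kNN setup the study found most robust can be sketched in a few lines: predict pass/fail in a course from grades in correlated prerequisite courses by majority vote among the nearest training records. The grade records below are invented for the sketch, not drawn from the study's Moodle data:

```python
def knn_predict(train, query, k=3):
    """train: list of (grade_vector, passed 0/1); query: grade vector.
    Majority vote among the k nearest neighbours by squared distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda row: dist(row[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# Grades in two correlated prerequisite courses -> passed the follow-up course?
train = [([9, 8], 1), ([8, 9], 1), ([7, 7], 1),
         ([4, 5], 0), ([3, 4], 0), ([5, 3], 0)]
print(knn_predict(train, [8, 8]))  # → 1
print(knn_predict(train, [4, 4]))  # → 0
```

The study's finding that only course pairs with correlation above +0.3 improved accuracy corresponds here to choosing which prerequisite grades to include in the feature vector in the first place.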